tr
Posted: Wed Apr 30, 2014 10:14 pm    Post subject: Error handling based on Rows
Novice | Joined: 30 Mar 2014 | Posts: 11
I have to insert a file into a database based on Group_ID. All rows with the same ID should be entered separately into a detail table, and the sum of those rows should be entered as a single row into another table. I am able to do this using a FileInput --> Compute --> FileOutput flow. My file is always sorted by Group_ID. The problem is error handling: whenever there is a database exception/error, I have to insert that row and all rows with the same Group_ID into an error table in the database and then process the rest of the file. Basically I need to know how to handle database exceptions in a Compute node while still processing the rest of the file without those error rows.
Last edited by tr on Thu May 01, 2014 4:54 am; edited 2 times in total
smdavies99
Posted: Wed Apr 30, 2014 11:41 pm
Jedi Council | Joined: 10 Feb 2003 | Posts: 6076 | Location: Somewhere over the Rainbow this side of Never-never land.
Have you thought about sorting the records before starting the DB insert?
Search for Quicksort in this forum.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt, think and investigate before you ask silly questions.
Esa
Posted: Thu May 01, 2014 7:16 am    Post subject: Re: Error handling based on Rows
Grand Master | Joined: 22 May 2008 | Posts: 1387 | Location: Finland
tr wrote:
My file is always sorted by group Id.

The simplest solution is to have two flows. Flow 1 copies records to OutputRoot and propagates to an MQOutput node each time Group_Id changes. In other words, it just groups the input records into MQ messages, each message containing all rows associated with a certain Group_ID. You can even use a simplified DFDL schema or Message Set that only extracts the Group_ID, to avoid parsing errors.
Flow 2 processes the messages from the queue, inserting the rows into the database and creating the sum table entry, the same way you do now. From the MQInput node's catch terminal you wire a node that logs the exception and throws a new one, unless you have a more sophisticated mechanism for logging.
So if there is an exception in processing any row, the whole database operation is rolled back.
From the MQInput node's failure terminal you wire a Compute node that inserts all rows into the error table in a new transaction.
Wouldn't it be nice if you implemented this solution and then found out that you don't even get the database exceptions any longer?
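A minimal sketch of that failure-terminal Compute node might look like the following. The element names (Row, Group_ID, Data), the table name ERROR_TABLE, and the module name are all assumptions to be adapted to your own message set and schema; the node's Transaction property would be set to Commit so the inserts run in their own transaction, independent of the rolled-back main path.

```esql
-- Hypothetical sketch: Compute node wired to the MQInput failure terminal.
-- Set the node's Transaction property to Commit so these inserts run in a
-- new transaction, independent of the rolled-back main path.
CREATE COMPUTE MODULE ErrorPath_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- 'Row', 'Group_ID', 'Data' and ERROR_TABLE are assumed names
    DECLARE inRow REFERENCE TO InputRoot.MRM.Row[1];
    WHILE LASTMOVE(inRow) DO
      INSERT INTO Database.ERROR_TABLE (GROUP_ID, PAYLOAD)
        VALUES (inRow.Group_ID, inRow.Data);
      MOVE inRow NEXTSIBLING REPEAT TYPE NAME;
    END WHILE;
    RETURN FALSE;  -- nothing to propagate further
  END;
END MODULE;
```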
tr
Posted: Fri May 02, 2014 7:04 am
Novice | Joined: 30 Mar 2014 | Posts: 11
Thanks for the wonderful idea. Can you please tell me how to group the input records into MQ messages, each message containing all rows associated with the same Group_ID? I am not sure how to do this and propagate in ESQL.
Vitor
Posted: Fri May 02, 2014 7:47 am
Grand High Poobah | Joined: 11 Nov 2005 | Posts: 26093 | Location: Texas, USA
tr wrote:
Can you please tell how to group the input records into mq messages, each msg containing all rows associated with same group_id.

You don't want to group them, you want to Collect them (hint).
_________________
Honesty is the best policy.
Insanity is the best defence.
tr
Posted: Fri May 02, 2014 1:33 pm
Novice | Joined: 30 Mar 2014 | Posts: 11
Collector node???? My file is a pipe-delimited CSV message which is parsed with the MRM domain.
Esa
Posted: Sun May 04, 2014 2:40 am
Grand Master | Joined: 22 May 2008 | Posts: 1387 | Location: Finland
I'm sorry, "to group" is not the correct term in this context. The rows are already grouped in the input file. What you need to do is split the file into messages based on Group_Id. Splitting is the opposite of collecting.
fjb_saper
Posted: Sun May 04, 2014 7:41 am
Grand High Poobah | Joined: 18 Nov 2003 | Posts: 20756 | Location: LI,NY
Esa wrote:
I'm sorry, to group is not the correct term in this context. The rows are already grouped in the input file. What you need to do is to split the file in messages based on Group_Id. Splitting is the opposite of collecting.

Not quite clear. If you meant one message per group ID, this may not be practical, as the number of records per group ID is unknown... (1GB?)
_________________
MQ & Broker admin
Esa
Posted: Sun May 04, 2014 11:03 pm
Grand Master | Joined: 22 May 2008 | Posts: 1387 | Location: Finland
fjb_saper wrote:
Not quite clear. If you meant one message per group id, this may not be practical as the number of records per group id is unknown... (1GB?)

Yes, that's correct. But from the original post I somehow got a hunch that the size of MQ messages would suit well, as if the flow managed to process several groups before getting a database error, perhaps because of trying to issue too many operations within one transaction. This may turn out to be a false assumption.
But for now the main challenge seems to be how to write basic ESQL for splitting an input message. After that the solution can be refined by splitting into files, which requires a slightly more complex configuration of FileInput nodes, and by improving the performance of the splitting process, if needed.
@tr: you need to read at least these:
WHILE
LASTMOVE
CREATE LASTCHILD FROM
PROPAGATE
MOVE NEXTSIBLING
RETURN
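Putting those statements together, a minimal sketch of the splitting Compute node (Flow 1) might look like this. The element names Row and Group_ID, the module name, and the MRM message structure are assumptions; adjust them to your own message set.

```esql
-- Hypothetical sketch of Flow 1: split one file message into one MQ
-- message per Group_ID. Element names are assumptions.
CREATE COMPUTE MODULE SplitByGroup_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    DECLARE inRow REFERENCE TO InputRoot.MRM.Row[1];
    DECLARE currentGroup CHARACTER inRow.Group_ID;
    DECLARE outRow REFERENCE TO OutputRoot;
    SET OutputRoot.Properties = InputRoot.Properties;  -- keep domain/message set
    WHILE LASTMOVE(inRow) DO
      IF inRow.Group_ID <> currentGroup THEN
        -- Group_ID changed: send the rows gathered so far as one message
        PROPAGATE DELETE NONE;       -- DELETE NONE keeps OutputRoot intact
        SET OutputRoot.MRM = NULL;   -- clear the body, keep Properties
        SET currentGroup = inRow.Group_ID;
      END IF;
      CREATE LASTCHILD OF OutputRoot.MRM AS outRow NAME 'Row';
      SET outRow = inRow;            -- deep copy of the row subtree
      MOVE inRow NEXTSIBLING REPEAT TYPE NAME;
    END WHILE;
    PROPAGATE;                       -- send the last group
    RETURN FALSE;                    -- suppress the implicit final propagate
  END;
END MODULE;
```

Wiring the node's out terminal to an MQOutput node should complete Flow 1; since the file-sourced message carries no MQMD, the MQOutput node supplies a default one.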