cwazpitt3
Posted: Wed Feb 22, 2012 5:31 am    Post subject: Reading file record at a time and using Environment
Acolyte
Joined: 31 Aug 2011    Posts: 61
I am using the Record Detection feature of the FileInput node to read a file one record at a time and then building an output file (also one record at a time). What I noticed is that the Environment seems to be cleared after each record is read. I did not see this explained in any of the documentation I read on reading files, so my first question is: is there a way to keep the Environment alive across record reads?
If not, I need to cache a small amount of data (like a DB result set) somewhere in the flow to be used for each record. For instance, when I am building the output record, I might need to do a lookup in this result set to get a value (rather than call the DB on every pass). Is there any way I can accomplish something like this easily?
Thanks in advance
mqjeff
Posted: Wed Feb 22, 2012 5:46 am
Grand Master
Joined: 25 Jun 2008    Posts: 17447
Each record is propagated into a new instance of the message flow.
Use a SHARED ROW to solve your problem.
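As a sketch of what this suggestion looks like in ESQL: a SHARED ROW is declared at module (or schema) scope, so it survives across the per-record flow invocations, and can be populated lazily on the first record through. The module, variable, table, and column names below are illustrative assumptions, not from the thread:

```
-- Module-scope declaration: the contents persist across flow invocations.
DECLARE CacheRow SHARED ROW;

CREATE COMPUTE MODULE BuildOutputRecord_Compute
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- Populate the cache once; the ATOMIC block guards against a race
    -- if additional instances are configured on the flow.
    CacheInit : BEGIN ATOMIC
      IF NOT EXISTS(CacheRow.Lookup[]) THEN
        SET CacheRow.Lookup[] =
          SELECT D.CODE, D.DESCRIPTION
          FROM Database.MYSCHEMA.LOOKUP_TABLE AS D;
      END IF;
    END CacheInit;

    -- ... build the output record, consulting CacheRow.Lookup[] ...
    RETURN TRUE;
  END;
END MODULE;
```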
smdavies99
Posted: Wed Feb 22, 2012 5:57 am    Post subject: Re: Reading file record at a time and using Environment
Jedi Council
Joined: 10 Feb 2003    Posts: 6076    Location: Somewhere over the Rainbow this side of Never-never land.
cwazpitt3 wrote:
    For instance, when I am building the output record, I might need to do a lookup in this result set to get a value (rather than call the DB on every pass). Is there any way I can accomplish something like this easily?
    Thanks in advance

So what happens to this data when the Broker or the Execution Group is restarted?
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt, think and investigate before you ask silly questions.
cwazpitt3
Posted: Wed Feb 22, 2012 6:00 am    Post subject: Re: Reading file record at a time and using Environment
smdavies99 wrote:
    So what happens to this data when the Broker or the Execution Group is restarted?

Well, I only need the data for the life of the flow instance, so normally I would recreate it at the start of each flow (e.g. when recordCount = 1, run the query and store the results in Environment). But because of how the record-based file reading propagates (per mqjeff), I don't think I will be able to do that.
cwazpitt3
Posted: Wed Feb 22, 2012 6:04 am
|
mqjeff wrote: |
Use a SHARED ROW to solve your problem. |
Wouldn't this only store 1 row of information from the database though or can it hold multiple rows? |
mqjeff
Posted: Wed Feb 22, 2012 6:09 am
cwazpitt3 wrote:
    mqjeff wrote:
        Use a SHARED ROW to solve your problem.
    Wouldn't this only store one row of information from the database, though, or can it hold multiple rows?

Think of it like a Perl hash.
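In that spirit: a SHARED ROW can hold an arbitrary list of repeating child elements, and a keyed lookup against it reads much like a hash access. A minimal sketch, assuming a cache named CacheRow.Lookup[] with CODE/DESCRIPTION children was populated earlier; those names and the input field path are illustrative assumptions:

```
-- Keyed lookup against the cached rows; no database call per record.
DECLARE description CHARACTER;
SET description = THE(SELECT ITEM L.DESCRIPTION
                      FROM CacheRow.Lookup[] AS L
                      WHERE L.CODE = InputRoot.DFDL.Record.Code);
```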
cwazpitt3
Posted: Thu Feb 23, 2012 6:08 am
SHARED ROW worked like a charm. Thanks, all!
cwazpitt3
Posted: Thu Feb 23, 2012 11:13 am
mqjeff wrote:
    Each record is propagated into a new instance of the message flow.
    Use a SHARED ROW to solve your problem.

@mqjeff, since the behavior is that a new instance is created for each record, is there any way to "stop" when you hit an exception? I throw an exception, the Catch terminal of the FileInput node is hit, it logs my exception, and then the flow continues to the next record. Is there a way to stop it? My fallback is to store a SHARED variable HasError and check it at the End of Data terminal before writing the output file, but I was wondering if you know of any other way to stop processing in this situation.
mqjeff
Posted: Thu Feb 23, 2012 11:15 am
You've chosen to configure the node to process each record independently of the others.
You could save a flag into your shared variable that says "something went wrong" and then use that to redirect the rest of the records.
But you can't roll back the records you've already processed unless you go back to processing the file as a whole.
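A minimal sketch of that flag pattern, on the assumption that a SHARED scalar keeps its value across the per-record flow instances just as a SHARED ROW does (the variable name and node placement are illustrative, not from the thread):

```
-- Module-scope declaration with an initial value; shared across instances.
DECLARE HasError SHARED BOOLEAN FALSE;

-- In a Compute node wired to the FileInput node's Catch terminal:
SET HasError = TRUE;

-- In the main path, at the top of each record's processing:
IF HasError THEN
  RETURN FALSE;  -- swallow remaining records instead of propagating them
END IF;
```

Note that, as mqjeff says, this only redirects records that arrive after the failure; records already processed are not rolled back.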
cwazpitt3
Posted: Thu Feb 23, 2012 11:16 am
|
OK, thanks for the explanation. Having trouble finding this information in the docs, much appreciated. In this case, it's more important to do the record based approach, so I will handle the errors accordingly. Just trying to see if there was another approach.
Thanks! |