sameetbhogi
Posted: Tue Aug 23, 2011 9:26 pm    Post subject: Issue with Parsed Record Sequence in File Input Node
Newbie
Joined: 12 Jan 2011    Posts: 8
Hi,

I have a requirement to read a file (a copybook file of about 5 MB) which may contain thousands of records. I need to batch 100 records at a time from the input and place each batch on an output queue. For this I have used the Parsed Record Sequence record detection method, and in my message definition file I have set the Max Occurs field to 100, so that on each iteration 100 records are read from the input file. An exception is thrown on the last iteration, where only 40 records are left in the input file. As I have specified Max Occurs as 100, it should read the remaining 40 records without throwing any exception. The exception being thrown is "CPI Text Buffer Input Data Too Short". Kindly help me.

NOTE: I need to read the file using the Parsed Record Sequence record detection method.
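(As a quick check of what the FileInput node is actually handing over, a Compute node placed immediately after it can count the repeats in each propagation. This is only a minimal ESQL sketch: the element name Record and the MRM tree shape are assumptions about the poster's copybook model, not details given in the thread.)
Code:
CREATE COMPUTE MODULE CountBatch_Compute
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Pass the parsed batch straight through unchanged
        SET OutputRoot = InputRoot;

        -- Count the repeating record element; with Max Occurs = 100 this should be
        -- 100 for full batches and 40 on the final propagation.
        -- 'Record' is a placeholder for the repeating element in the model.
        DECLARE recCount INTEGER CARDINALITY(InputRoot.MRM.Record[]);

        -- Stash the count where a later node or a debugger can see it
        SET Environment.Variables.LastBatchSize = recCount;

        RETURN TRUE;
    END;
END MODULE;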
smdavies99 |
Posted: Tue Aug 23, 2011 10:46 pm
Jedi Council
Joined: 10 Feb 2003    Posts: 6076    Location: Somewhere over the Rainbow this side of Never-never land.
Is this related to this --> http://www.mqseries.net/phpBB2/viewtopic.php?t=58624&
Could you be working on the same problem by any chance?

If you used a Collector node to gather the elements, the last collection would time out and you could send it off.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt, think and investigate before you ask silly questions.
kotagiriaashish |
Posted: Wed Aug 24, 2011 12:39 am    Post subject: Re: Issue with Parsed Record Sequence in File Input Node
Disciple
Joined: 06 Aug 2011    Posts: 165
Since the only constraint is to read the input using
sameetbhogi wrote:
NOTE: I need to read the file using the Parsed Record Sequence record detection method.
why set the Max Occurs field to 100? You could just use an aggregate node and collect 100 messages; the last batch would time out and you would get it through the Expire terminal. Or you could simply write ESQL to place the data in the Environment manually. Hope this helps.
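Purely as an illustration of the "write ESQL to place the data in the Environment manually" option above, here is a rough sketch of a Compute node that accumulates records in the Environment tree and propagates a batch of 100, assuming the FileInput node is set to hand over one record per message. The names Environment.Variables.Batch and Record and the terminal name are illustrative, and if the Environment tree turns out not to be preserved across record propagations in your flow, a SHARED ROW variable would be the alternative.
Code:
CREATE COMPUTE MODULE BatchInEnvironment_Compute
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Make sure the holding folder exists, then append a copy of the
        -- incoming record (one record per propagation from the FileInput node)
        CREATE FIELD Environment.Variables.Batch;
        CREATE LASTCHILD OF Environment.Variables.Batch NAME 'Record';
        SET Environment.Variables.Batch.Record[<] = InputRoot.MRM;

        -- Once 100 records are held, emit them as a single output message.
        -- Rebuilding a serialisable MRM body (MessageSet, MessageType, wire
        -- format) depends entirely on your model, so only the skeleton is shown.
        IF CARDINALITY(Environment.Variables.Batch.Record[]) >= 100 THEN
            SET OutputRoot.Properties = InputRoot.Properties;
            SET OutputRoot.MRM = Environment.Variables.Batch;
            PROPAGATE TO TERMINAL 'out';
            SET Environment.Variables.Batch = NULL;    -- start the next batch
        END IF;

        RETURN FALSE;    -- nothing to send for records 1..99 of a batch
    END;
END MODULE;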
mqjeff |
Posted: Wed Aug 24, 2011 1:26 am
Grand Master
Joined: 25 Jun 2008    Posts: 17447
CWF doesn't support optional records. You need to use TDS.
kotagiriaashish |
Posted: Wed Aug 24, 2011 11:34 am
Disciple
Joined: 06 Aug 2011    Posts: 165
With the same setup, you will receive the leftover records at the Catch terminal.
sameetbhogi |
Posted: Thu Aug 25, 2011 2:28 am
Newbie
Joined: 12 Jan 2011    Posts: 8
Hi,

Thank you all for your valuable suggestions. I have tried using TDS as well and still get the same error. My question is: if we specify Min Occurs as 1 and Max Occurs as 100 in the mxsd, then it should accept any number of records in the range 1 to 100. This case fails when I use Parsed Record Sequence. Please let me know why I am facing the exception.

Thanks in advance.

PFB the exception:
Quote:
Text:CHARACTER:CPI Text Buffer Input Data Too Short
kimbert |
Posted: Thu Aug 25, 2011 4:42 am
Jedi Council
Joined: 29 Jul 2003    Posts: 5542    Location: Southampton
Your message model is not correct. It cannot handle fewer than 100 records.
If you really want the magic number 100 in your message model then you need to:
- construct a standalone test in which you put 40 records (or some other number fewer than 100) into your flow
- take a debug-level user trace
- read the trace to see why the TDS parser is failing
- adjust your model and re-test
Alternatively, tell the FileInput node to propagate the records one at a time, and do the batching within the message flow. That might make it a little easier to control the batch size. The Collector node would be one solution, as smdavies has pointed out.
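If the batching is done in ESQL as sketched earlier in the thread, the final partial batch (the leftover 40 records) still has to be flushed once the file ends. One way, assuming the FileInput node in your broker version has an End of Data terminal, is to wire that terminal to a small Compute node along these lines; the Properties values and field names are placeholders, not settings taken from the thread.
Code:
CREATE COMPUTE MODULE FlushLastBatch_Compute
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Driven from the FileInput node's End of Data terminal: emit whatever
        -- is still sitting in the batch (e.g. the final 40 records) once the
        -- whole file has been read.
        IF EXISTS(Environment.Variables.Batch.Record[]) THEN
            -- The End of Data propagation carries no body, so the output
            -- properties must be set explicitly for the MRM writer.
            SET OutputRoot.Properties.MessageSet    = 'YourMessageSetId';   -- placeholder
            SET OutputRoot.Properties.MessageType   = 'YourBatchType';      -- placeholder
            SET OutputRoot.Properties.MessageFormat = 'YourWireFormat';     -- placeholder
            SET OutputRoot.MRM = Environment.Variables.Batch;
            SET Environment.Variables.Batch = NULL;
            RETURN TRUE;     -- propagate the partial batch
        END IF;
        RETURN FALSE;        -- nothing left over
    END;
END MODULE;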