mb2011
Posted: Fri Jul 15, 2011 4:24 am Post subject: TDS message set
Newbie
Joined: 15 Jul 2011 Posts: 5
Hi All,
I am new to message set creation.
I need to read a file that contains multiple records separated by newlines. A single record contains fixed-length elements.
If an element is absent, spaces appear in its place.
Also, if an element is 10 characters long and only nine characters arrive in the input, an extra space pads out the tenth character.
I tried to create a TDS fixed-length message set for this scenario.
But when I send a message in which an element has fewer characters than its declared length, the message is not parsed by the message set, even though spaces make up the total length.
Could anybody please throw some light on message set creation? Please attach some samples along with the input message so that I can learn to create the message set.
Thanks in advance.
smdavies99
Posted: Fri Jul 15, 2011 4:30 am Post subject: Re: TDS message set
Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
mb2011 wrote:
A single record contains fixed-length elements.
I tried to create a TDS fixed-length message set for this scenario.

What makes you think that TDS is the solution here? You have not mentioned the delimiter (the 'D' in TDS).
Have you looked at the posts in this forum on this topic? Many actually give representations of their incoming messages. Why don't you do the same, or are you wanting the solution to an exam question? (Those are always vague.)
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt, think and investigate before you ask silly questions.
mb2011
Posted: Fri Jul 15, 2011 8:41 am Post subject:
Newbie
Joined: 15 Jul 2011 Posts: 5
Hi,
I thought of a TDS message set as the records are delimited by newlines.
My sample message structure is as follows:

Account no   Integer     12
TSCode       Character   4
Ts Type      Character   1
RS Code      Character   3
Filler       Character   5
BaNo         Character   4
Filler       Character   12
CodeCom      Character   1
Confno       Character   6
Filler       Character   23
Amount       Float       S9(07)V99 (sign is leading; length is 9)

A sample message in the input file is as follows. This sample file contains 8 records:

04000841004 000 3 000100000
01000255001 000 1 000121442
88168986019 000 1 000003714
88044656024 000 1 000013505
08004306002 000 1 000030603
87029157010 000 2 000033200
06008280002 000 1 000029131
88024280050 000 1 000020649

I need to read this file through MB.
As the records in the file are separated by newlines, I thought of using a TDS fixed-length message set.
Please suggest a solution for the same.
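As a sanity check on the layout above, here is a plain-Python sketch of how one 80-byte record would be sliced into fields. This is not broker code: the helper names (`parse_record`, `parse_amount`) and the test record are invented for illustration, the field names and lengths come from the post, and since the post leaves it ambiguous how the leading sign shares the 9-byte Amount field, `parse_amount` simply tolerates an optional sign character.

```python
# Fixed-length layout from the post; names/lengths as given, total 80 bytes.
LAYOUT = [
    ("AccountNo", 12), ("TSCode", 4), ("TsType", 1), ("RSCode", 3),
    ("Filler1", 5), ("BaNo", 4), ("Filler2", 12), ("CodeCom", 1),
    ("Confno", 6), ("Filler3", 23), ("Amount", 9),
]
RECORD_LEN = sum(length for _, length in LAYOUT)  # 80

def parse_record(line: str) -> dict:
    """Slice one fixed-length record into named fields.

    Short values are space-padded in the input, so the raw slices are
    kept and the caller strips/converts as needed.
    """
    if len(line) != RECORD_LEN:
        raise ValueError(f"expected {RECORD_LEN} chars, got {len(line)}")
    fields, pos = {}, 0
    for name, length in LAYOUT:
        fields[name] = line[pos:pos + length]
        pos += length
    return fields

def parse_amount(raw: str) -> float:
    """Decode an S9(07)V99-style value with an optional leading sign,
    e.g. '000123456' -> 1234.56 (implied decimal point before the last
    two digits)."""
    sign = -1 if raw[0] == "-" else 1
    digits = raw[1:] if raw[0] in "+-" else raw
    return sign * int(digits) / 100.0
```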
Vitor
Posted: Fri Jul 15, 2011 8:44 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
mb2011 wrote:
Please suggest a solution for the same.

How about setting the input node to treat the newline as an end-of-record character, so that you get the fixed-length records presented to you in the form you expect, and then modelling them as fixed-length records?
_________________
Honesty is the best policy.
Insanity is the best defence.
WGerstma
Posted: Mon Jul 18, 2011 11:18 am Post subject:
Acolyte
Joined: 18 Jul 2011 Posts: 55
What should also work:
Create a message definition with the message itself being a complex element containing lines, which are themselves complex elements. The "line" element, a complex type beneath the message element, repeats from 1 to -1 (unbounded).
In the properties of the message complex type, set Data Element Separation to "All Elements Delimited", with the delimiter set to <CR><LF> (or whatever you need).
Then, in the complex type of the line, choose Fixed Length as the Data Element Separation. Add all your elements to the line complex type and assign length values to each of them.
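To make the two-level nesting concrete, here is a plain-Python analogue of what a parser built from that model would do: an outer repeating "line" element delimited by <CR><LF>, with each line sliced at fixed offsets. The three-field layout and the `parse_message` name are illustrative inventions, not Message Broker APIs.

```python
# Plain-Python analogue of the nested model described above:
# outer level = repeating "line" elements delimited by <CR><LF>,
# inner level = fixed-length fields within each line.
from typing import Dict, Iterator, List, Tuple

# Illustrative three-field layout summing to an 80-char line.
LINE_LAYOUT: List[Tuple[str, int]] = [("AccountNo", 12), ("TSCode", 4), ("Rest", 64)]

def parse_message(data: str, delimiter: str = "\r\n") -> Iterator[Dict[str, str]]:
    for line in data.split(delimiter):
        if not line:                      # tolerate a trailing delimiter
            continue
        record: Dict[str, str] = {}
        pos = 0
        for name, length in LINE_LAYOUT:  # fixed-length inner level
            record[name] = line[pos:pos + length]
            pos += length
        yield record
```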
mb2011
Posted: Mon Jul 18, 2011 6:30 pm Post subject:
Newbie
Joined: 15 Jul 2011 Posts: 5
Hi,
I created a CWF message set, and I used the File Input node's Records and Elements properties with the delimiter set to DOS or UNIX line end.
With this message set, one record will be parsed at a time.
I think this would come at a cost in memory.
Is this the right approach?
WGerstma
Posted: Mon Jul 18, 2011 9:00 pm Post subject:
Acolyte
Joined: 18 Jul 2011 Posts: 55
As I stated: if your message set is set up properly, you can parse the entire file in a single read operation. No line-separator setting on the input node is necessary.
mb2011
Posted: Mon Jul 18, 2011 9:20 pm Post subject:
Newbie
Joined: 15 Jul 2011 Posts: 5
Hi,
Thanks for your prompt reply.
But I still have one doubt: if the file is very big, containing more than 1000 records, will the broker be able to parse them all at a stretch? Also, there is no identifier to mark off sets of records, i.e. whether 500 records have finished or not.
I now think it is better to read line by line only, as I also need to enter data into SAP (the destination) record by record.
Please guide.
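For what it's worth, the memory argument for line-by-line reading can be seen in miniature outside the broker. The sketch below is an invented illustration (`records` is not a broker API): iterating a file yields one record at a time, so memory use is bounded by the record length, not the file size.

```python
# Illustration of the line-by-line approach: iterating a file object
# yields one record at a time, so only one record is held in memory,
# whatever the file size. `records` is an invented helper name.
from typing import Iterator

def records(path: str) -> Iterator[str]:
    with open(path, encoding="ascii") as f:
        for line in f:
            rec = line.rstrip("\r\n")   # strip DOS or UNIX line end
            if rec:
                yield rec
```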
WGerstma
Posted: Mon Jul 18, 2011 9:31 pm Post subject:
Acolyte
Joined: 18 Jul 2011 Posts: 55
I do not think that size can become a problem, though I have never handled files above 30 MB.
If you intend to pass the content to SAP record by record, I see no drawback in your approach of reading line by line. But I have no practical experience of how the two approaches compare in time consumption, CPU usage and memory usage.
Just what I would do: read everything as one chunk, and pass the split records on via the PROPAGATE statement.
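A rough transliteration of that pattern into plain Python: read the whole file once, then hand each record downstream one at a time, which is what PROPAGATE does per record in ESQL. `propagate_records` and `handler` are invented names for illustration, not broker APIs.

```python
# Transliteration of the read-once-then-PROPAGATE pattern: the whole
# file is read in a single operation, then a handler is invoked once
# per record -- the analogue of PROPAGATE firing the rest of the flow
# for each record. `propagate_records` and `handler` are invented names.
from typing import Callable

def propagate_records(path: str, handler: Callable[[str], None]) -> int:
    with open(path, encoding="ascii") as f:
        data = f.read()               # one read: entire file now in memory
    count = 0
    for record in data.splitlines():  # newline-delimited records
        if record:
            handler(record)           # ~ PROPAGATE for this record
            count += 1
    return count
```

Note the trade-off this makes explicit: the single read holds the whole file in memory at once.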
Vitor
Posted: Tue Jul 19, 2011 4:08 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
mb2011 wrote:
I think this would come at a cost in memory.

Not really. WMB parses on demand and is good at memory management. Mostly.
mb2011 wrote:
Is this the right approach?

It's the one I'd be inclined to use. It simplifies the message set, thereby reducing admin, and most operating systems will do file buffering for you.
_________________
Honesty is the best policy.
Insanity is the best defence.
Vitor
Posted: Tue Jul 19, 2011 4:19 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
WGerstma wrote:
I do not think that size can become a problem, though I have never handled files above 30 MB.

If you read the file as a single chunk, size can easily become a problem. If you know that files are always 30MB or less then you're fine (though you still need to fit 30MB of data in memory to parse it). But even if you've been assured at design time that the file size will never increase, sooner or later it either grows, because the assurance was based on now-changed circumstances, or there's an "exceptional" condition and you're faced with a 150MB file to process.
WGerstma wrote:
I have no practical experience of how the two approaches compare in time consumption, CPU usage and memory usage.

As I said, if you read the entire file as a block, WMB has to store it all in memory to process it. Clearly you'd get less I/O with a single read, but with buffering, how much is that really saving?
WGerstma wrote:
Just what I would do: read everything as one chunk, and pass the split records on via the PROPAGATE statement.

I would also suggest that the FileInput node can find and propagate a record faster than your ESQL, given its position as optimised C code tuned by lots of clever IBM people!
There's also an element of wheel invention: why split the file into records when the node will do it for you?
Reading record by record also avoids any potential memory issues.
_________________
Honesty is the best policy.
Insanity is the best defence.
mb2011
Posted: Tue Jul 19, 2011 8:26 pm Post subject:
Newbie
Joined: 15 Jul 2011 Posts: 5
Hi,
Thank you all for your replies.
As I am really not sure about the file size, and I am the only Message Broker resource on the project, I really cannot take the risk. I am going with the option of using the File Input node to parse record by record.
Thanks again.