MQSeries.net Forum Index » WebSphere Message Broker (ACE) Support » Usage of Data pattern

Usage of Data pattern
kash3338
Posted: Tue Oct 04, 2011 5:43 pm | Post subject: Usage of Data pattern

Shaman (Joined: 08 Feb 2009, Posts: 709, Location: Chennai, India)

Hi,

I have a scenario where I receive a fixed-length TDS input message like the one below:

01 NAME 1234 EMPID
01 NAME 1234 EMPID
01 NAME 6789 EMPID
01 NAME 4567 EMPID
01 NAME 4567 EMPID

The above is a sample of the fixed-length message that comes in. I need to generate one XML message per run of records sharing the same value (1234, 6789, 4567): one XML for the first two lines (1234), one for the third line (6789), and one for the last two lines (4567), based on the value at that position.

If I encounter an error while generating a particular XML, I need to reject that record alone and process the rest.

Can I use a Data Pattern for this, or how else can this be done?
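[A minimal Python sketch of the grouping being asked for, not Broker ESQL. The field offsets, element names, and sample data are assumptions based on the sample records above; records sharing an ID are assumed to be adjacent, as in the sample.]

```python
from itertools import groupby
from xml.etree.ElementTree import Element, SubElement, tostring

# Sample fixed-length records from the post; the field layout is assumed:
# the grouping value is taken from a fixed offset (columns 8..11 here).
RECORDS = [
    "01 NAME 1234 EMPID",
    "01 NAME 1234 EMPID",
    "01 NAME 6789 EMPID",
    "01 NAME 4567 EMPID",
    "01 NAME 4567 EMPID",
]

def record_id(line):
    # Extract the value the output messages are grouped on.
    return line[8:12]

def to_xml(group_id, lines):
    # Build one XML document per group of records sharing an ID.
    root = Element("Batch", id=group_id)
    for line in lines:
        SubElement(root, "Record").text = line
    return tostring(root, encoding="unicode")

# groupby emits a new group each time the key changes, so adjacent
# records with the same ID land in the same XML document.
docs = [to_xml(k, list(g)) for k, g in groupby(RECORDS, key=record_id)]
```

With the sample input this yields three documents, one per run of identical IDs.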
kimbert
Posted: Wed Oct 05, 2011 1:37 am

Jedi Council (Joined: 29 Jul 2003, Posts: 5542, Location: Southampton)

I think you have a "tagged fixed-length" message format. No need for data patterns.

You can (and should) distinguish between parsing the message and processing it:
- Parsing can be done without data patterns, and produces a message tree.
- Processing of the message tree is done with nodes and ESQL/Java, and is a completely separate problem.
kash3338
Posted: Wed Oct 05, 2011 1:51 am

kimbert wrote:
I think you have a "tagged fixed-length" message format. No need for data patterns.

You can (and should) distinguish between parsing the message and processing it:
- Parsing can be done without data patterns, and produces a message tree.
- Processing of the message tree is done with nodes and ESQL/Java, and is a completely separate problem.


It's not a tagged fixed-length format; it's just a fixed-length message. I only posted a sample. What do you suggest for plain fixed-length?
mqjeff
Posted: Wed Oct 05, 2011 2:26 am

Grand Master (Joined: 25 Jun 2008, Posts: 17447)

kash3338 wrote:
kimbert wrote:
I think you have a "tagged fixed-length" message format. No need for data patterns.

You can (and should) distinguish between parsing the message and processing it:
- Parsing can be done without data patterns, and produces a message tree.
- Processing of the message tree is done with nodes and ESQL/Java, and is a completely separate problem.


It's not a tagged fixed-length format; it's just a fixed-length message. I only posted a sample. What do you suggest for plain fixed-length?


You can certainly choose to model it as a plain fixed-length message, but the sample you posted can also be modelled as a tagged/fixed-length message.

Using tagged/fixed-length may also give you a simpler and faster-running model.

Kimbert's point, however, is that you should not try to use the model to combine the first two records into one unit for processing.

You should use the message flow to determine which set of records is processed as a group.
kimbert
Posted: Wed Oct 05, 2011 2:37 am

I think kash3338's point is that the third field (the one on which he wants to group the output messages) is not a tag, so its value is unpredictable.

That just confirms my second point, though: kash3338 does not need data patterns *or* tags. He needs message flow logic that groups the records into batches.
kash3338
Posted: Wed Oct 05, 2011 2:40 am

mqjeff wrote:

Kimbert's point, however, is that you should not try to use the model to combine the first two records into one unit for processing.

You should use the message flow to determine which set of records is processed as a group.


I agree with that, but in that case I need to PROPAGATE my XML messages one by one, and I have another requirement: if an error occurs while processing one record, the rest should not be affected, and I should raise an error only for that record. Hence I thought of using "Parsed Record Sequence".
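[A sketch of the error-isolation requirement in Python, under the assumption that each group is transformed independently: a failing group is routed to an error list (the analogue of wiring a Catch/Failure terminal in the flow) while the remaining groups are still produced. `build_xml` and the bad-ID check are hypothetical stand-ins for the real transformation.]

```python
def build_xml(group_id, lines):
    # Hypothetical transform; raise to simulate a record that fails.
    if not group_id.isdigit():
        raise ValueError(f"bad id: {group_id!r}")
    return f'<Batch id="{group_id}"><Count>{len(lines)}</Count></Batch>'

def process_groups(groups):
    # Emit each group independently: one failure does not stop the rest.
    ok, failed = [], []
    for group_id, lines in groups:
        try:
            ok.append(build_xml(group_id, lines))
        except ValueError as err:
            failed.append((group_id, str(err)))
    return ok, failed

sample = [("1234", ["r1", "r2"]), ("67X9", ["r3"]), ("4567", ["r4", "r5"])]
ok, failed = process_groups(sample)
```

The middle group fails, yet both the first and last are still emitted.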
kimbert
Posted: Wed Oct 05, 2011 5:06 am

Quote:
Hence I thought of going for "Parsed Record Sequence".
Sounds like a perfectly reasonable approach. I was wondering whether to mention it.
kash3338
Posted: Wed Oct 05, 2011 6:52 am

kimbert wrote:
Sounds like a perfectly reasonable approach. I was wondering whether to mention it.


But how do I actually do it this way? That is where I am confused.
kimbert
Posted: Wed Oct 05, 2011 7:01 am

Getting one record at a time from the parser is easy - just use the Parsed Record Sequence setting on the FileInput node.

In the flow, some combination of nodes with error / catch terminals wired appropriately should do the trick. I'll leave the details to you, or to people who are better qualified to comment.
kash3338
Posted: Wed Oct 05, 2011 7:21 am

kimbert wrote:
Getting one record at a time from the parser is easy - just use the Parsed Record Sequence setting on the FileInput node.

In the flow, some combination of nodes with error / catch terminals wired appropriately should do the trick. I'll leave the details to you, or to people who are better qualified to comment.


Yes, I have used Parsed Record Sequence before, and it can parse one record at a time. But the problem is that I need to generate one XML for all similar records (those with the same ID), so I would need all of them as a single record. How do I achieve that?
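[One way to reconcile one-record-at-a-time parsing with per-ID output is to hold state between records: buffer each record and flush the buffer whenever the ID changes. A Python sketch of that idea, assuming records with the same ID arrive adjacently, as in the sample; in Broker this state would have to live somewhere explicit, e.g. a SHARED variable or a database.]

```python
class GroupBuffer:
    """Accumulates records arriving one at a time and flushes a
    completed group whenever the grouping ID changes."""

    def __init__(self):
        self.current_id = None
        self.pending = []

    def feed(self, record_id, record):
        # Returns a finished (id, records) group, or None.
        flushed = None
        if self.current_id is not None and record_id != self.current_id:
            flushed = (self.current_id, self.pending)
            self.pending = []
        self.current_id = record_id
        self.pending.append(record)
        return flushed

    def close(self):
        # Flush the final group at end of input.
        return (self.current_id, self.pending) if self.pending else None

buf = GroupBuffer()
groups = []
for rid, rec in [("1234", "a"), ("1234", "b"), ("6789", "c"), ("4567", "d")]:
    done = buf.feed(rid, rec)
    if done:
        groups.append(done)
tail = buf.close()
if tail:
    groups.append(tail)
```

Each completed group can then be transformed into one XML message as it is flushed, without ever holding the whole file.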
smdavies99
Posted: Wed Oct 05, 2011 8:42 am

Jedi Council (Joined: 10 Feb 2003, Posts: 6076, Location: Somewhere over the Rainbow this side of Never-never land.)

That is the quandary you have to solve.

If you get each record 'one at a time', then each record is a separate message in its own right, completely independent of the ones before and after it, UNLESS you take special measures to get around Broker's default stateless operation.

Therefore, you have a choice:

1) Continue getting one record at a time and find a way to hold state between messages.

2) Go back to getting the whole file at once and extract the groups of similar/identical records as you desire.

I'd probably think about storing the records in a DB table and manually handling the errors when trying to insert a duplicate primary key. Then, when all records have been processed, you can extract them and send them on their merry way, knowing that the sorting has already been done.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
kash3338
Posted: Wed Oct 05, 2011 6:43 pm

Is there any way this can be achieved through the message set rather than through code logic? I feel it is better to handle it in the message set than to complicate the code.
kimbert
Posted: Thu Oct 06, 2011 3:17 am

If there are only a very limited number of values for that third column, then you could probably devise a message set that produced the same element name for each unique identifying value.
But I think that's the wrong approach. This is not a parsing problem; it's a transformation problem. You might as well accept that and solve it with a standard solution such as the one suggested by smdavies99.

Alternatively, if you can afford to hold the entire input file in memory at once, you could use a SELECT statement to pick out groups of elements with the same identifier. But that would not scale to large files, so you would need to be sure that the input file size will never grow large.
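[The whole-file alternative, sketched in Python rather than ESQL: bucket every record of the parsed file by its identifying field. Unlike the streaming approaches above, this does not require same-ID records to be adjacent, but memory use grows with file size. The offsets and sample lines are assumptions.]

```python
from collections import defaultdict

# Hold the whole parsed file in memory and bucket records by the
# identifying field - the analogue of a SELECT over the message tree.
lines = [
    "01 NAME 1234 EMPID",
    "01 NAME 6789 EMPID",
    "01 NAME 1234 EMPID",   # same ID need not be adjacent here
]
buckets = defaultdict(list)
for line in lines:
    buckets[line[8:12]].append(line)
```

Each bucket then becomes one output XML; the trade-off is that the entire file must fit in memory at once.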
kash3338
Posted: Fri Oct 07, 2011 7:50 am

Finally, I managed to create a splitter flow that splits the message into separate BLOB messages based on the ID value; each message is then parsed by the MQInput node of a second flow.

The problem I now face is an odd one. I split the messages at 400 bytes, but when I read them from MQ each arrives as 401 bytes: there is an extra '.' at the end of each message. I send the message as a BLOB in my first flow. What might be the reason?
mqjeff
Posted: Fri Oct 07, 2011 7:51 am

It's not a ".".
Powered by phpBB © 2001, 2002 phpBB Group

Copyright © MQSeries.net. All rights reserved.