WMB Design - Splitting and Correlating the messages
satya2481
PostPosted: Tue Sep 10, 2013 9:57 am    Post subject: WMB Design - Splitting and Correlating the messages

Disciple

Joined: 26 Apr 2007
Posts: 170
Location: Bengaluru

Hello All,
I am back again with a new question about designing a flow to meet the requirement below: which pattern to use, which nodes are best suited, and which solution is best from a performance perspective.

Requirement
1. The source application sends N Customer Ids in a single message.
2. The back-end system restricts the number of Customer Ids per message; assume 10 Customer Ids per message.
3. The broker has to split the N Customer Ids into N/10 messages.
4. Send all the requests to the back-end system.
5. Collect all the responses back from the back-end application.
6. Create a common response message by aggregating all the responses and send it to the source application.

We are using WMB V7
Option 1
Use the Aggregate nodes.
Option 2
Use a single Compute node, PROPAGATE the messages one at a time, and store the responses temporarily in the Environment tree (see the sketch after this list).
Option 3
Use a Java Compute node and make use of Java's capabilities.
Option 4
Use a database to store the responses temporarily.
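
For Option 2, a minimal ESQL sketch of the split side, assuming an XMLNSC input with repeating CustomerId elements and a batch size of 10 (the message structure and terminal name are only illustrative):

Code:
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    DECLARE batchSize INTEGER 10;
    DECLARE total     INTEGER CARDINALITY(InputRoot.XMLNSC.Request.CustomerId[]);
    DECLARE idx       INTEGER 1;
    DECLARE j         INTEGER;

    WHILE idx <= total DO
        -- start a fresh output message, keeping the original headers
        SET OutputRoot = NULL;
        SET OutputRoot.Properties = InputRoot.Properties;
        SET OutputRoot.MQMD = InputRoot.MQMD;

        -- copy up to batchSize Customer Ids into this request
        SET j = 0;
        WHILE j < batchSize AND (idx + j) <= total DO
            SET OutputRoot.XMLNSC.Request.CustomerId[j + 1] =
                InputRoot.XMLNSC.Request.CustomerId[idx + j];
            SET j = j + 1;
        END WHILE;

        -- send this slice downstream and carry on with the next one
        PROPAGATE TO TERMINAL 'out' DELETE NONE;
        SET idx = idx + batchSize;
    END WHILE;

    RETURN FALSE; -- FALSE: nothing further is propagated automatically
END;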

Regards
Satya
lancelotlinc
PostPosted: Tue Sep 10, 2013 10:24 am

Jedi Knight

Joined: 22 Mar 2010
Posts: 4941
Location: Bloomington, IL USA

The options, as you have listed them, are already ranked in order of performance preference.

For Option 3, you can create your own equivalent to GlobalCache using a Singleton pattern.
_________________
http://leanpub.com/IIB_Tips_and_Tricks
Save $20: Coupon Code: MQSERIES_READER
fjb_saper
PostPosted: Tue Sep 10, 2013 12:53 pm

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20697
Location: LI,NY

Have you thought about using a collector node? Seems better suited than the aggregation...
_________________
MQ & Broker admin
mqjeff
PostPosted: Wed Sep 11, 2013 1:56 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

fjb_saper wrote:
Have you thought about using a collector node? Seems better suited than the aggregation...


It's a straightforward aggregation pattern: break a single input apart into a set of requests, then assemble all the responses into a single reply.
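
With the aggregation nodes, the fan-out side is essentially the split shown earlier, with each propagated request passing through an AggregateRequest node; the fan-in Compute node after the AggregateReply then sees every reply under one folder. A rough sketch of the fan-in, assuming each AggregateRequest was given the folder name 'backend' and the replies are XMLNSC (element names are only illustrative):

Code:
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    SET OutputRoot.Properties = InputRoot.Properties;
    -- MQMD / output headers omitted here; build them to suit the reply transport

    DECLARE outCount INTEGER 0;

    -- AggregateReply presents each collected reply under ComIbmAggregateReplyBody,
    -- in a folder named after the folder name set on the AggregateRequest node
    FOR reply AS InputRoot.ComIbmAggregateReplyBody.backend[] DO
        -- append every Customer element of this reply to the combined response
        FOR cust AS reply.XMLNSC.Response.Customer[] DO
            SET outCount = outCount + 1;
            SET OutputRoot.XMLNSC.CombinedResponse.Customer[outCount] = cust;
        END FOR;
    END FOR;

    RETURN TRUE;
END;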
satya2481
PostPosted: Fri Sep 13, 2013 9:11 am

Disciple

Joined: 26 Apr 2007
Posts: 170
Location: Bengaluru

We are planning to first build a POC flow for the different options and then decide which design is the most scalable and performs best.
We have yet to make progress on this; I will keep this discussion updated.
smdavies99
PostPosted: Fri Sep 13, 2013 9:21 am

Jedi Council

Joined: 10 Feb 2003
Posts: 6076
Location: Somewhere over the Rainbow this side of Never-never land.

satya2481 wrote:
We are planning to first build a POC flow for the different options and then decide which design is the most scalable and performs best.
We have yet to make progress on this; I will keep this discussion updated.


Well done. Keep up the good work.

Try a few things out and see what's best for your situation.

We see far too many posts from your part of the world where the OP won't even try a few things and learn from the experience.



_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
satya2481
PostPosted: Mon Dec 23, 2013 11:27 pm

Disciple

Joined: 26 Apr 2007
Posts: 170
Location: Bengaluru

I am back on this topic. We have finalised the two options described below and now need to understand which option is best and what the pros and cons of each are. I would like to hear your thoughts on these options.

Option 1
1. Flow-1 splits the incoming message and sends the pieces to the mapping flow, Flow-2. The mapping flow runs with multiple instances.
2. Flow-2 transforms each message and sends it to the back-end application.
3. Once Flow-2 receives the response back from the back-end application, it sends a trigger to the next flow, Flow-3 (a sketch of the status-recording step this relies on is given below).
4. Flow-3 checks whether all of Flow-2's transactions have completed and then makes the next back-end application call.

Option 2
All the steps in Option 1 remain the same, except that instead of Flow-2 triggering Flow-3, a timer node triggers Flow-3 at a specified interval.

We are still evaluating which option is best. In the meantime, any comments would be very helpful.
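
For either option the critical piece is how Flow-2 records that a slice has come back, so that Flow-3 can tell when the whole group is complete. A minimal ESQL sketch of that recording step, assuming a status table GROUP_STATUS keyed by a group id carried in the messages (the table, columns, and procedure name are purely illustrative):

Code:
-- Flow-2: after the back-end response arrives, record that this slice is done
CREATE PROCEDURE RecordSliceComplete(IN groupId CHARACTER, IN sliceNo INTEGER, IN replyData CHARACTER)
BEGIN
    INSERT INTO Database.GROUP_STATUS (GROUP_ID, SLICE_NO, STATUS, REPLY_DATA)
        VALUES (groupId, sliceNo, 'DONE', replyData);
END;

Flow-2 would call this and then put a small trigger message (carrying the group id) to Flow-3's input queue; Flow-3 counts the 'DONE' rows for that group to decide whether to proceed.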

Regards
Satya
satya2481
PostPosted: Wed Sep 17, 2014 2:10 pm

Disciple

Joined: 26 Apr 2007
Posts: 170
Location: Bengaluru

Hi All,
I am back on this thread as we are facing issues in NFT (non-functional) testing. Everything went fine and worked from a functionality perspective, but we are facing a big challenge in meeting the required NFRs. Per the requirements we need to achieve 200 TPS with this design.
We are facing issues with multithreading and the SQL queries, and cannot even reach 50 TPS with a response time of 500 milliseconds, even though all the back-end applications respond within 50 msec.

We are using a DB2 database to retain the intermediate status of the split messages. Once all the Customer Id processing is done, we fetch the details from the table, combine them, and produce the final response.

The issue appears when we increase the number of threads on all the flows: we do not have control over the transactions. Sometimes the final flow triggers multiple times and sometimes a record lock happens, so the behaviour is different each time.

Does anyone have suggestions for improving this design so that it meets the above requirements? Any clue would help a lot.
smdavies99
PostPosted: Wed Sep 17, 2014 10:44 pm

Jedi Council

Joined: 10 Feb 2003
Posts: 6076
Location: Somewhere over the Rainbow this side of Never-never land.

How about (for starters) ensuring that all lookup SELECTs that are not going to change the selected data are issued in a way that avoids row/page locking?
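
In DB2, for example, a lookup that only reads can be told to use uncommitted read so it takes no row or page locks at all. A sketch of how that might look from ESQL using PASSTHRU (the table, columns, and Environment fields are only examples):

Code:
-- read-only lookup: WITH UR means DB2 takes no row/page locks for this SELECT
DECLARE lookupStmt CHARACTER
    'SELECT SLICE_NO, REPLY_DATA FROM GROUP_STATUS WHERE GROUP_ID = ? WITH UR';

SET Environment.Variables.Slices.Row[] =
    PASSTHRU(lookupStmt, Environment.Variables.GroupId);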
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
chotabheem
PostPosted: Thu Sep 18, 2014 12:05 am

Newbie

Joined: 10 Sep 2014
Posts: 9

Hi,

Consider a design that uses shared memory and the Collector node instead of the database and timer nodes. This may help you.
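
If "shared memory" here means ESQL shared variables, the idea would be to keep the in-flight group state in a SHARED ROW owned by the flow rather than in DB2. A rough sketch of the idea only (the DECLARE sits at module level, and a real design still has to handle multiple execution groups, restarts, and eviction; names are illustrative):

Code:
-- module-level shared variable: one copy per flow, visible to all additional instances
DECLARE groupState SHARED ROW;

CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    DECLARE groupId CHARACTER InputRoot.XMLNSC.Response.GroupId;

    -- serialise updates to the shared variable
    GroupLock : BEGIN ATOMIC
        SET groupState.Groups.{groupId}.DoneCount =
            COALESCE(groupState.Groups.{groupId}.DoneCount, 0) + 1;
    END GroupLock;

    RETURN TRUE;
END;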
satya2481
PostPosted: Thu Sep 18, 2014 9:14 am

Disciple

Joined: 26 Apr 2007
Posts: 170
Location: Bengaluru

Thank you for the quick reply.
Quote:
How about (for starters) ensuring that all lookup SELECTs that are not going to change the selected data are issued in a way that avoids row/page locking

We are planning to implement retry logic for when such an error happens, so that at any given point in time only one thread has access to a specific record. A thread that gets a lock error will execute the query once again as part of the retry logic.
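A rough ESQL sketch of that retry, assuming the Compute node is configured not to throw an exception on database errors so that SQLCODE can be inspected afterwards (-911 is DB2's lock-timeout/deadlock rollback code; the table and Environment fields are illustrative):

Code:
DECLARE attempts INTEGER 0;
DECLARE done     BOOLEAN FALSE;

WHILE NOT done AND attempts < 3 DO
    SET attempts = attempts + 1;

    UPDATE Database.GROUP_STATUS AS G
        SET STATUS = 'DONE'
        WHERE G.GROUP_ID = Environment.Variables.GroupId
          AND G.SLICE_NO = Environment.Variables.SliceNo;

    IF SQLCODE = 0 THEN
        SET done = TRUE;
    ELSEIF SQLCODE = -911 THEN
        -- lock timeout or deadlock victim: another thread had the record, so try again
        SET done = FALSE;
    ELSE
        THROW USER EXCEPTION MESSAGE 2951 VALUES ('Unexpected SQLCODE on status update', SQLCODE);
    END IF;
END WHILE;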
Quote:
Consider a design that uses shared memory and the Collector node instead of the database and timer nodes

We are not using timer nodes. We use an SQL query to check whether all the transactions in the group have completed; once the number of Customer Ids sent in the request matches the number of completed transactions, we trigger the next flow.
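
A minimal sketch of that completion check in ESQL, assuming the expected slice count is known from the original request and, as suggested above, the read is done without taking locks (all names are illustrative):

Code:
DECLARE checkStmt CHARACTER
    'SELECT COUNT(SLICE_NO) AS DONE_COUNT FROM GROUP_STATUS ' ||
    'WHERE GROUP_ID = ? AND STATUS = ''DONE'' WITH UR';

SET Environment.Variables.Check.Row[] =
    PASSTHRU(checkStmt, Environment.Variables.GroupId);

IF Environment.Variables.Check.Row[1].DONE_COUNT = Environment.Variables.ExpectedSlices THEN
    -- every slice has responded: propagate the trigger that starts the final flow
    PROPAGATE TO TERMINAL 'out';
END IF;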
smdavies99
PostPosted: Thu Sep 18, 2014 9:29 am

Jedi Council

Joined: 10 Feb 2003
Posts: 6076
Location: Somewhere over the Rainbow this side of Never-never land.

satya2481 wrote:
We use an SQL query to check whether all the transactions in the group have completed; once the number of Customer Ids sent in the request matches the number of completed transactions, we trigger the next flow.


Could that be an ideal candidate for a non-locking select, by any chance?
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
satya2481
PostPosted: Sun Sep 28, 2014 1:48 pm

Disciple

Joined: 26 Apr 2007
Posts: 170
Location: Bengaluru

Updating this thread with some of the points we have worked on; the performance looks good now. Most of these points are database-specific, with only a few in the ESQL coding.

ESQL Changes
1. Most of the UPDATE statements in the message flow were changed to INSERTs, which sped up the database operations.
2. Changed all COUNT(*) calls to COUNT(ColumnName) in the ESQL.
3. Listed the required column names explicitly when selecting data from the tables, instead of using SELECT * (see the sketch below).
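
For points 2 and 3, the kind of change involved looks roughly like this (names are illustrative, and the "before" lines are shown only as comments):

Code:
-- before: drags every column across the ODBC connection
-- SET Environment.Variables.Slices.Row[] =
--     PASSTHRU('SELECT * FROM GROUP_STATUS WHERE GROUP_ID = ?', Environment.Variables.GroupId);

-- after: only the columns the flow actually needs, and COUNT on a named column
SET Environment.Variables.Slices.Row[] =
    PASSTHRU('SELECT SLICE_NO, STATUS FROM GROUP_STATUS WHERE GROUP_ID = ?', Environment.Variables.GroupId);

SET Environment.Variables.Check.Row[] =
    PASSTHRU('SELECT COUNT(SLICE_NO) AS DONE_COUNT FROM GROUP_STATUS WHERE GROUP_ID = ? AND STATUS = ''DONE''',
             Environment.Variables.GroupId);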

DB2 Related Changes
1. Increased the index memory size.
2. Enabled the inline LOB option for the CLOB and BLOB columns.
3. Configured a CLUSTERED index on some of the tables.
4. Added indexes on the columns used in the WHERE clauses of the SELECT and UPDATE SQL queries issued from the ESQL code.

Will keep you posted on the updates and the performance results.