Multiple brokers with split messages
3junior
Posted: Thu Jul 04, 2013 8:02 am Post subject: Multiple brokers with split messages
Novice
Joined: 28 May 2013 Posts: 16
Hi all,
I have a client that has 3 brokers in a cluster. There are two scenarios we are trying to figure out a solution for.
1. An MQ client is putting a message into two queues, Q1 and Q2. A flow needs to take the data and merge it together. How can this be accomplished in a cluster? In the old point-to-point setup there was only one gateway and all messages could be consumed locally and merged together.
2. An MQ client puts 4 messages into a queue: one control file and 3 data files. A trigger then calls a Java app that merges the data based on the control file. How can this be accomplished in a cluster?
Thanks
exerk
Posted: Thu Jul 04, 2013 8:34 am Post subject: Re: Multiple brokers with split messages
Jedi Council
Joined: 02 Nov 2006 Posts: 6339
3junior wrote:
1. An MQ client is putting a message into two queues, Q1 and Q2. A flow needs to take the data and merge it together. How can this be accomplished in a cluster? In the old point-to-point setup there was only one gateway and all messages could be consumed locally and merged together.
Unless your client application is connecting to more than one queue manager concurrently, the messages will go to Q1 and Q2 of the queue manager it's connected to. Ideally your client application is going to stay connected to the same queue manager for the duration of its run-time, or connect to another queue manager if it drops connection or the queue manager fails.
3junior wrote:
2. An MQ client puts 4 messages into a queue: one control file and 3 data files. A trigger then calls a Java app that merges the data based on the control file. How can this be accomplished in a cluster?
See above.
If the root of the question you're asking is "how do I load balance these two message sets across three Brokers?", then one answer is that your client application will need to disconnect from one queue manager and connect to the other, and rinse and repeat - not very efficient and certainly not recommended.
A more valid method would be to put a gateway queue manager in the cluster, to which your client application connects, and use BIND_ON_GROUP (if your level of WMQ supports it) for requirement 2. Requirement 1 is a little more problematic because you'd need to specify the queue manager name for each set of puts to ensure both messages go to the same queue manager, as using BIND_ON_OPEN will probably require you to close the queue and reopen it.
I'm sure someone will be along in a moment to suggest better ways of doing it... _________________ It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
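[Editor's note] A minimal sketch of the "specify the queue manager" approach described above for requirement 1, using the WebSphere MQ classes for Java. The connection details and the GATEWAY, BROKERQM1, Q1 and Q2 names are invented for the example; the point is simply that naming the same target queue manager on both opens keeps the pair of messages on one queue manager.
Code:
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PairedPut {
    public static void main(String[] args) throws MQException, java.io.IOException {
        // Hypothetical client connection details for the gateway queue manager.
        MQEnvironment.hostname = "gateway.example.com";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "APP.SVRCONN";
        MQQueueManager gateway = new MQQueueManager("GATEWAY");

        int openOptions = CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING;

        // Naming the same target queue manager on both opens pins both puts
        // to one instance of the clustered queues (BROKERQM1 is made up).
        MQQueue q1 = gateway.accessQueue("Q1", openOptions, "BROKERQM1", null, null);
        MQQueue q2 = gateway.accessQueue("Q2", openOptions, "BROKERQM1", null, null);

        MQMessage part1 = new MQMessage();
        part1.writeString("first part of the data");
        q1.put(part1, new MQPutMessageOptions());

        MQMessage part2 = new MQMessage();
        part2.writeString("second part of the data");
        q2.put(part2, new MQPutMessageOptions());

        q1.close();
        q2.close();
        gateway.disconnect();
    }
}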
PeterPotkay
Posted: Thu Jul 04, 2013 9:26 am Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
For #1, change the client to put to a new Q3 and have the flow read from Q3. Why is the client putting to two separate queues if the data just gets merged?
When the client opens Q3, it uses the Bind On Open option to ensure that all the messages that need to go to one instance of the Broker do.
If you won't combine the queues, then open Q1 with the Bind On Open option. After the MQOPEN you should be returned the value of the Resolved QM name. See here:
http://pic.dhe.ibm.com/infocenter/wmqv7/v7r5/topic/com.ibm.mq.doc/qc11040_.htm
Then just use that QM Name on the MQOPEN of Q2, again with the Bind On Open option.
Finish putting your set of messages and then close Q1 and Q2.
You apparently have message affinity in your application design. While it cannot always be avoided, it almost always causes issues when you do have it, as you are finding out. Spend your initial effort on solving the actual problem (getting rid of the message affinity) before spending too much effort on hacks to work around it.
#2 is easy, just use the Bind_On_Open option and put all 4 of those messages before closing the queue. But again, message affinity - bad. _________________ Peter Potkay
Keep Calm and MQ On
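[Editor's note] For #2, a sketch of the Bind On Open approach described above, again using the MQ classes for Java; the queue manager, channel and DATA.IN queue names are placeholders. Everything put between the open and the close resolves to the same instance of the clustered queue.
Code:
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BindOnOpenPut {
    public static void main(String[] args) throws MQException, java.io.IOException {
        // Hypothetical client connection details.
        MQEnvironment.hostname = "gateway.example.com";
        MQEnvironment.port = 1414;
        MQEnvironment.channel = "APP.SVRCONN";
        MQQueueManager qmgr = new MQQueueManager("GATEWAY");

        // MQOO_BIND_ON_OPEN resolves the clustered queue to one instance at open
        // time, so every message put before the close lands on the same queue manager.
        int openOptions = CMQC.MQOO_OUTPUT | CMQC.MQOO_BIND_ON_OPEN | CMQC.MQOO_FAIL_IF_QUIESCING;
        MQQueue queue = qmgr.accessQueue("DATA.IN", openOptions); // DATA.IN is a made-up name

        String[] payloads = { "control file", "data file 1", "data file 2", "data file 3" };
        for (String payload : payloads) {
            MQMessage message = new MQMessage();
            message.writeString(payload);
            queue.put(message, new MQPutMessageOptions());
        }

        queue.close();     // the binding is released when the queue is closed
        qmgr.disconnect();
    }
}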
3junior
Posted: Thu Jul 04, 2013 9:32 am Post subject:
Novice
Joined: 28 May 2013 Posts: 16
Hi All,
Thanks for the reply. The only issue is that the client applications cannot change their code, as the project I am working on is only able to use MB to merge data or to use an application to run batch jobs.
Thanks
mqjeff
Posted: Thu Jul 04, 2013 10:25 am Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
Designate a single queue on a single broker that acts as an input point to a Collector pattern.
Create multiple flows on all brokers that read the necessary messages, and route them to the collector pattern input.
Alternately, if you're at v8, you can look at using GlobalMap as a collection store. If you're at earlier levels, you can look at using a database as a collection store. In either case, after taking steps to store the current message, the same flow would determine if the collection was complete and then extract and merge it.
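[Editor's note] A rough sketch of the GlobalMap idea in a WMB v8 JavaCompute node. The map name, the key scheme and the extract* helpers are invented for illustration; real code would derive the key and part name from the message tree, handle duplicate keys, and remove the entries once the set has been merged.
Code:
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbGlobalMap;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class CollectParts extends MbJavaComputeNode {

    public void evaluate(MbMessageAssembly assembly) throws MbException {
        // "MERGE.PARTS" is an arbitrary map name chosen for this sketch.
        MbGlobalMap parts = MbGlobalMap.getGlobalMap("MERGE.PARTS");

        // Store this message's payload under a key built from a correlation id
        // and a part name; the helpers below are placeholders for that logic.
        String key = extractCorrelationKey(assembly);
        String part = extractPartName(assembly);
        parts.put(key + "." + part, extractPayload(assembly));

        // If every expected part has arrived, let the rest of the flow merge them.
        if (parts.containsKey(key + ".control")
                && parts.containsKey(key + ".data1")
                && parts.containsKey(key + ".data2")
                && parts.containsKey(key + ".data3")) {
            MbOutputTerminal out = getOutputTerminal("out");
            out.propagate(assembly);   // a real flow would build the merged message
                                       // here and then remove() the four entries
        }
    }

    // Placeholders only: real code would read these values from the incoming message.
    private String extractCorrelationKey(MbMessageAssembly assembly) { return "batch-001"; }
    private String extractPartName(MbMessageAssembly assembly)       { return "control"; }
    private String extractPayload(MbMessageAssembly assembly)        { return "payload"; }
}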
3junior
Posted: Thu Jul 04, 2013 10:49 am Post subject:
Novice
Joined: 28 May 2013 Posts: 16
Thanks mqjeff, I will take a look at GlobalMap
3junior
Posted: Thu Jul 04, 2013 11:23 am Post subject:
Novice
Joined: 28 May 2013 Posts: 16
mqjeff wrote:
Designate a single queue on a single broker that acts as an input point to a Collector pattern.
Create multiple flows on all brokers that read the necessary messages, and route them to the collector pattern input.
Alternately, if you're at v8, you can look at using GlobalMap as a collection store. If you're at earlier levels, you can look at using a database as a collection store. In either case, after taking steps to store the current message, the same flow would determine if the collection was complete and then extract and merge it.
Hi MQJEFF,
Does the global cache get cleared during a reboot?
Thanks
3junior
Posted: Thu Jul 04, 2013 11:48 am Post subject:
Novice
Joined: 28 May 2013 Posts: 16
3junior wrote:
mqjeff wrote:
Designate a single queue on a single broker that acts as an input point to a Collector pattern.
Create multiple flows on all brokers that read the necessary messages, and route them to the collector pattern input.
Alternately, if you're at v8, you can look at using GlobalMap as a collection store. If you're at earlier levels, you can look at using a database as a collection store. In either case, after taking steps to store the current message, the same flow would determine if the collection was complete and then extract and merge it.
Hi MQJEFF,
Does the global cache get cleared during a reboot?
Thanks
Found http://www.ibm.com/developerworks/websphere/library/techarticles/1212_hart/1212_hart.html
Quote:
Q: Does the global cache work for multi-instance brokers?
A: If an active broker fails, and a standby broker starts up to take its place on a different machine, the catalogs and containers in that broker will fail to start (as they are binding to the wrong hostname). But all execution groups will be able to make client connections to the global cache, assuming that a catalog is still running in another broker. The original active broker will need to be restored in order for the cache components (catalogs and containers) within that broker to restart and rejoin the cache.
Q: Is there a way to persist data to the file system or a database?
A: Currently there is no automatic way to do this with the global cache.
I will try thinking of another solution.
Thanks for your help