Empeterson
Posted: Thu Jul 24, 2003 5:15 am Post subject: Can you workload balance message groups?
Centurion
Joined: 14 Apr 2003 Posts: 125 Location: Foxboro, MA

If we have multiple instances of the same queue, can we guarantee that all messages in the same group will go to the same instance of the queue? We very much want to use workload balancing, but I am afraid that messages in the same group will end up on different instances of the queue. _________________ IBM Certified Specialist: MQSeries
IBM Certified Specialist: WebSphere MQ Integrator

EddieA
Posted: Thu Jul 24, 2003 6:09 am Post subject:
Jedi
Joined: 28 Jun 2001 Posts: 2453 Location: Los Angeles

On the MQOPEN, specify MQOO_BIND_ON_OPEN as one of the options. That forces all messages written through that handle to go to the same Queue Manager.
You could also use the default of MQOO_BIND_AS_Q_DEF and have the queue definition set to DEFBIND(OPEN), which is also the default. But then be wary that if anyone changes the queue definition ...
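For example, something like this (an untested sketch; the queue name and the open_bound wrapper are just placeholders for illustration):

Code:

/* Open a clustered queue with MQOO_BIND_ON_OPEN: the target instance is
   chosen once, at MQOPEN time, so every message put through this handle
   goes to the same queue manager.                                       */
#include <string.h>
#include <cmqc.h>

void open_bound(MQHCONN hConn, MQHOBJ *pHObj,
                MQLONG *pCompCode, MQLONG *pReason)
{
    MQOD od = {MQOD_DEFAULT};              /* object descriptor          */

    strncpy(od.ObjectName, "APP.CLUSTER.QUEUE", MQ_Q_NAME_LENGTH);

    MQOPEN(hConn, &od,
           MQOO_OUTPUT | MQOO_BIND_ON_OPEN | MQOO_FAIL_IF_QUIESCING,
           pHObj, pCompCode, pReason);

    /* The alternative is MQOO_BIND_AS_Q_DEF here, with the queue itself
       defined as DEFBIND(OPEN), e.g. in MQSC:
           ALTER QLOCAL(APP.CLUSTER.QUEUE) DEFBIND(OPEN)                 */
}
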
Cheers, _________________ Eddie Atherton
IBM Certified Solution Developer - WebSphere Message Broker V6.1
IBM Certified Solution Developer - WebSphere Message Broker V7.0

bduncan
Posted: Fri Jul 25, 2003 3:10 pm Post subject:
Padawan
Joined: 11 Apr 2001 Posts: 1554 Location: Silicon Valley

Correct me if I'm wrong, but I think that even if you use BIND_NOT_FIXED, when you put the first message of the group the queue manager will resolve a particular instance of the clustered queue to send it to, and all subsequent MQPUTs (because they carry the same GroupId) will continue to go to that same instance. Only once the entire group has been put, and the first message of the next group is put, will the queue manager re-resolve the clustered queue. _________________ Brandon Duncan
IBM Certified MQSeries Specialist
MQSeries.net forum moderator
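
To make the grouping side of this concrete, here is a rough, untested sketch of how an application typically puts a message group with MQPMO_LOGICAL_ORDER (so the queue manager assigns the GroupId and sequence numbers itself). hConn and hObj are assumed to come from a normal MQCONN/MQOPEN, and the payloads are placeholders:

Code:

#include <string.h>
#include <cmqc.h>

/* Put a three-message group; MQPMO_LOGICAL_ORDER makes the queue manager
   maintain GroupId and MsgSeqNumber across the successive MQPUT calls.  */
void put_group(MQHCONN hConn, MQHOBJ hObj)
{
    MQMD   md  = {MQMD_DEFAULT};
    MQPMO  pmo = {MQPMO_DEFAULT};
    MQLONG compCode, reason;
    char  *msgs[] = {"part 1", "part 2", "part 3"};
    int    i, last = 2;

    md.Version  = MQMD_VERSION_2;    /* group fields need a version-2 MQMD */
    pmo.Options = MQPMO_LOGICAL_ORDER | MQPMO_FAIL_IF_QUIESCING;

    for (i = 0; i <= last; i++)
    {
        /* Flag every message as part of the group; the final one is also
           flagged as the last message in the group.                      */
        md.MsgFlags = (i == last)
                          ? (MQMF_MSG_IN_GROUP | MQMF_LAST_MSG_IN_GROUP)
                          : MQMF_MSG_IN_GROUP;

        MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(msgs[i]),
              msgs[i], &compCode, &reason);
    }
}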

mqonnet
Posted: Fri Jul 25, 2003 4:29 pm Post subject:
Grand Master
Joined: 18 Feb 2002 Posts: 1114 Location: Boston, MA, USA

Brandon, clustering has only one algorithm, and that is round robin. If it finds BIND_NOT_FIXED, it puts the messages in round-robin fashion across all the instances of the clustered queue. It does not distinguish between a standalone physical message and a physical message that is part of a logical message, as in grouping or segmentation. That is the very reason the clustering manual suggests that your application should not use grouping or segmentation with BIND_NOT_FIXED.
With BIND_ON_OPEN, all the messages land on only one instance, which is chosen at open time by the same clustering algorithm.
Cheers
Kumar

kirank
Posted: Mon Jul 28, 2003 7:51 am Post subject:
Centurion
Joined: 10 Oct 2002 Posts: 136 Location: California

I had a similar requirement to send messages in a particular sequence. Clustering did not guarantee that messages would arrive in the same sequence they came in, so I decided to use the cluster workload exit from SupportPac MC76. This exit sends messages to only one queue manager in the cluster (the queue manager with the highest network priority). It still gives failover capability, meaning that if that queue manager is not available it sends messages to another queue manager that is. This way you might be able to send a group of messages to one queue manager and ensure sequencing.
Cheers
Kiran Kanetkar

guruprasannas
Posted: Wed Jun 09, 2004 11:46 am Post subject: Clustering
Newbie
Joined: 09 Jun 2004 Posts: 1

Can you please tell me: if you are using this SupportPac (MC76), let's say you have two queue managers, QM1 and QM2, clustered on two different machines. If QM1 goes down, although failover happens to QM2, what happens to any messages stuck on the queue on QM1 that the application reads? Did you face this scenario?

mdurman
Posted: Wed Sep 22, 2004 5:42 pm Post subject:
Newbie
Joined: 28 Jun 2001 Posts: 3 Location: Whittier, California

As long as you used BIND_NOT_FIXED, the MC76 exit should reroute those messages to the surviving Queue Manager.
However, that doesn't really get you out of trouble. If some messages in the group have already been sent to QM1, do you really want the rest to go to QM2? Probably not. Better that they all go to QM1 when it comes back up.
The trouble with using BIND_ON_OPEN is that once you've opened the queue, ALL messages will go to the same Queue Manager because selection happens at MQOPEN time. If you have a long-running app, that negates workload balancing completely.
To try and get some benefit from clustering for grouped messages you could do the following...
Close and reopen the queue for each new group. At least that allows the cluster workload exit to select a new target Queue Manager for each group. Obviously you have the overhead of multiple closes and opens. How much will depend on how big your message groups are. _________________ Mark Durman
IBM WebSphere MQ Certified
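
A rough, untested sketch of that close-and-reopen-per-group approach (the queue name and the put_group helper are placeholders; put_group would put one complete group, as in the earlier sketch):

Code:

#include <string.h>
#include <cmqc.h>

void put_group(MQHCONN hConn, MQHOBJ hObj);   /* puts one complete group */

/* Open the clustered queue with MQOO_BIND_ON_OPEN for each group, put the
   whole group through that handle, then close. The group stays together,
   and the workload algorithm gets a chance to pick a different instance
   for the next group.                                                    */
void put_groups_balanced(MQHCONN hConn, int groupCount)
{
    MQHOBJ hObj;
    MQLONG compCode, reason;
    int    g;

    for (g = 0; g < groupCount; g++)
    {
        MQOD od = {MQOD_DEFAULT};
        strncpy(od.ObjectName, "APP.CLUSTER.QUEUE", MQ_Q_NAME_LENGTH);

        MQOPEN(hConn, &od,
               MQOO_OUTPUT | MQOO_BIND_ON_OPEN | MQOO_FAIL_IF_QUIESCING,
               &hObj, &compCode, &reason);
        if (compCode == MQCC_FAILED)
            break;

        put_group(hConn, hObj);

        /* Closing and reopening is the cost of re-balancing per group.  */
        MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    }
}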

PeterPotkay
Posted: Wed Sep 22, 2004 6:56 pm Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722

Mark, if you read contact admin's question again, he is asking about messages on a queue on QM1, and QM1 is down. Nothing, not even the MC76 SupportPac, will get those messages to QM2 while QM1 is down.
(I think you understand this, but someone else reading contact admin's question and your reply might conclude that the MC76 SupportPac would somehow move those stranded messages.) _________________ Peter Potkay
Keep Calm and MQ On

mdurman
Posted: Wed Sep 22, 2004 7:52 pm Post subject:
Newbie
Joined: 28 Jun 2001 Posts: 3 Location: Whittier, California

Peter, you are absolutely correct. I did not read contact admin's post closely enough... Nothing is going to get those messages off the target Queue if that Queue Manager is down.
What I was really trying to say is that messages sitting in the transmit queue on the sending Queue Manager would get redirected by the MC76 cluster workload exit to the surviving Queue Manager, but that doesn't really help the issue at all. The messages that made it to QM1 are stuck there and not retrievable until it comes back up.
The moral of the story? Message groups don't fit well with cluster workload balancing. _________________ Mark Durman
IBM WebSphere MQ Certified