Really understanding load balancing in a cluster environment
educos
Posted: Tue Jul 22, 2003 9:14 am    Post subject: Really understanding load balancing in a cluster environment

Apprentice
Joined: 18 Jul 2001    Posts: 34    Location: Salt Lake City, UT
First, here's the simple setup: I have 3 queue managers in the same cluster (QM1, QM2, QM3). QM2 and QM3 each have a local queue definition "Q1" that is shared in the cluster. QM1 has no local definition of Q1 but of course sees the two Q1 cluster definitions. QM1 and QM3 hold the full repositories (I don't know if that's relevant). Also, I don't use any custom workload-balancing exit.
Now, if I put 4 messages to Q1 on QM1, one right after the other, through the amqsput utility (meaning I don't exit amqsput until all 4 messages are put), how is it that all 4 messages end up on the same queue manager (say QM2)? I would have expected 2 to land on QM2 and 2 on QM3.
If I then run amqsput again and put another 4 messages to Q1 on QM1, all 4 end up on the other queue manager, in this case QM3. Can someone explain this behavior?
Also, I deliberately stopped QM2 and put another 4 messages to Q1 on QM1. This time it looks like the workload-balancing algorithm earmarked the messages for QM2 (it was QM2's turn if we follow the round-robin logic, but QM2 was down) and the messages got stuck on QM1's transmission queue. I would have thought/hoped that MQ would figure out to send the messages to a queue manager that was up and running, but it didn't. Is there any way to make the load balancing act a bit smarter?
_________________
Eric Ducos
EmeriCon, LLC.
Phone: (801) 789-4348
e-Mail: Eric.Ducos@EmeriCon.com
Website: www.EmeriCon.com
EddieA
Posted: Tue Jul 22, 2003 9:21 am    Post subject:

Jedi
Joined: 28 Jun 2001    Posts: 2453    Location: Los Angeles
The default for an MQOPEN is MQOO_BIND_AS_Q_DEF, and the default when you define a queue is DEFBIND(OPEN). Together these mean the destination is fixed once, at open time: the workload algorithm picks an instance when amqsput opens the queue, and every put on that handle goes to that instance. Since amqsput opens the queue once per run, all messages put within one execution will always go to the same queue manager, and the next run may be bound to the other one.
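A minimal sketch of the application-side alternative, assuming the MQ C libraries and the queue/queue-manager names from this thread, with error handling trimmed: opening Q1 with MQOO_BIND_NOT_FIXED instead of the default overrides the queue's DEFBIND, so the workload algorithm can pick a destination for each MQPUT.

#include <string.h>
#include <cmqc.h>                               /* MQI definitions */

int main(void)
{
    MQHCONN hConn;
    MQHOBJ  hObj;
    MQOD    od  = {MQOD_DEFAULT};
    MQMD    md  = {MQMD_DEFAULT};
    MQPMO   pmo = {MQPMO_DEFAULT};
    MQLONG  compCode, reason;
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM1";
    char    msg[] = "test message";
    int     i;

    MQCONN(qmName, &hConn, &compCode, &reason);

    strncpy(od.ObjectName, "Q1", MQ_Q_NAME_LENGTH);
    /* MQOO_BIND_NOT_FIXED instead of the default MQOO_BIND_AS_Q_DEF:
     * the destination is no longer fixed at MQOPEN time. */
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_BIND_NOT_FIXED,
           &hObj, &compCode, &reason);

    pmo.Options = MQPMO_NO_SYNCPOINT | MQPMO_NEW_MSG_ID;
    for (i = 0; i < 4; i++)                     /* these 4 puts are now    */
        MQPUT(hConn, hObj, &md, &pmo,           /* eligible to be spread   */
              (MQLONG)strlen(msg), msg,         /* across QM2 and QM3      */
              &compCode, &reason);

    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    MQDISC(&hConn, &compCode, &reason);
    return 0;
}

Plain amqsput can't do this by itself, because its MQOPEN uses the MQOO_BIND_AS_Q_DEF default and so inherits whatever DEFBIND says on the queue.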
Cheers,
_________________
Eddie Atherton
IBM Certified Solution Developer - WebSphere Message Broker V6.1
IBM Certified Solution Developer - WebSphere Message Broker V7.0
mqonnet
Posted: Tue Jul 22, 2003 9:50 am    Post subject:

Grand Master
Joined: 18 Feb 2002    Posts: 1114    Location: Boston, MA, USA
The easiest way to load balance without making any coding changes is to change the DEFBIND attribute of all instances of the clustered queue: make it DEFBIND(NOTFIXED) on QM2 and QM3 (for example, ALTER QLOCAL(Q1) DEFBIND(NOTFIXED) in runmqsc), then run amqsput again and you should see messages being balanced between the two queue managers.
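A minimal sketch, under the same assumptions as the sketch above (MQ C libraries, names from this thread, error handling trimmed), that uses MQINQ to read the queue's default binding back from one of the hosting queue managers, so you can confirm the ALTER took effect:

#include <stdio.h>
#include <string.h>
#include <cmqc.h>                               /* MQI definitions */

int main(void)
{
    MQHCONN hConn;
    MQHOBJ  hObj;
    MQOD    od = {MQOD_DEFAULT};
    MQLONG  compCode, reason;
    MQLONG  selector = MQIA_DEF_BIND;           /* DEFBIND as an MQINQ selector */
    MQLONG  defBind  = 0;
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM2";   /* a QM hosting Q1 */

    MQCONN(qmName, &hConn, &compCode, &reason);

    strncpy(od.ObjectName, "Q1", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_INQUIRE, &hObj, &compCode, &reason);

    /* one integer selector in, one integer attribute out, no char attributes */
    MQINQ(hConn, hObj, 1, &selector, 1, &defBind, 0, NULL, &compCode, &reason);

    printf("DEFBIND(%s)\n",
           defBind == MQBND_BIND_NOT_FIXED ? "NOTFIXED" : "OPEN");

    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    MQDISC(&hConn, &compCode, &reason);
    return 0;
}

MQBND_BIND_ON_OPEN corresponds to DEFBIND(OPEN) and MQBND_BIND_NOT_FIXED to DEFBIND(NOTFIXED).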
You have to remember that on most platforms the listener doesn't stop when you stop the queue manager; Windows is a perfect example. So when you put messages on the clustered queue after stopping QM2, there is a slight possibility that, since you did not specify BIND_NOT_FIXED, the binding was fixed at open time and QM2 happened to be chosen. It may have been chosen because the HBINT of the cluster-sender channel had not yet expired, so the channel still showed as running, and the messages were destined for QM2. As soon as the first message arrived, the cluster-sender channel went into retry state, and all 4 messages ended up on the cluster transmission queue of QM1.
I would bet you wouldn't see this happen all the time. And of course you won't see messages piled up for QM2 if you run your put application again after this.
Hope this helps.
Cheers,
Kumar