SMQS
Posted: Fri May 24, 2013 2:28 am    Post subject: MQ get messages not getting from all Cluster queues - Help
Newbie
Joined: 23 May 2013    Posts: 5
I am able to put messages to a cluster queue; the messages are distributed across multiple instances of the cluster queue and load balancing is achieved. Is there any way to get all the messages from one cluster queue?
QM1 and QM2 are two full repository queue managers in the cluster.
Cluster queues are created on both queue managers with the same name:
Qclus(QL1) - QM1
Qclus(QL1) - QM2
I am putting 10 messages to QL1 via QM1 (amqsputc QL1 QM1):
- the first five messages (1-5) land on QL1 at QM1
- the second five messages (6-10) land on QL1 at QM2
How can I get all the messages from the same cluster queue on QM1? Please help me.
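For reference, a minimal MQSC sketch of the setup described above (the cluster name DEMOCLUS is assumed; the thread does not name one):

    * runmqsc QM1  (full repository)
    DEFINE QLOCAL(QL1) CLUSTER(DEMOCLUS)

    * runmqsc QM2  (full repository)
    DEFINE QLOCAL(QL1) CLUSTER(DEMOCLUS)

With both instances advertised in the cluster, amqsputc QL1 QM1 round-robins the puts across them, which is the 50/50 split seen above.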
exerk
Posted: Fri May 24, 2013 2:36 am    Post subject:
Jedi Council
Joined: 02 Nov 2006    Posts: 6339
Take a look at BIND_ON_OPEN in the Info Centre...
_________________
It's puzzling, I don't think I've ever seen anything quite like this before... and it's hard to soar like an eagle when you're surrounded by turkeys.
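In MQSC terms, that suggestion maps to the queue's DEFBIND attribute (a sketch; queue name taken from the thread, run against each queue manager hosting an instance):

    * make 'bind on open' the default for QL1
    ALTER QLOCAL(QL1) DEFBIND(OPEN)

With DEFBIND(OPEN), an application that opens QL1 with default options is bound to one instance at MQOPEN time, so all its messages go to that one instance rather than being sprayed across the cluster.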
SMQS
Posted: Fri May 24, 2013 2:47 am    Post subject:
Newbie
Joined: 23 May 2013    Posts: 5
Thanks for your immediate reply. I had already gone through it, but it does not meet my requirement. I have two queue managers and the load is balanced between them using BIND_ON_OPEN or BIND_NOT_FIXED.
My application's requirement is to get all the messages on one queue manager after they have been load balanced. Is there any way? Please share it with me.
McueMart
Posted: Fri May 24, 2013 3:02 am    Post subject:
Chevalier
Joined: 29 Nov 2011    Posts: 490    Location: UK...somewhere
I think you are asking whether there is a way for one of the following two things to happen:
- Your application connects to a single queue manager and needs to be able to pull messages from all queues in the cluster with the same name.
- MQ automatically moves messages which are distributed across multiple cluster queues (all named the same) to a single instance of the queue.
I don't think either is possible. I believe you will have to connect to each queue manager in the cluster manually and move the messages onto the queue manager you need them on, as sketched below.
exerk
Posted: Fri May 24, 2013 3:21 am    Post subject:
Jedi Council
Joined: 02 Nov 2006    Posts: 6339
SMQS wrote:
...My Application requirement is to get all the messages in one QM after the messages are load balanced...

You really, really might want to revisit the logic of that statement.
_________________
It's puzzling, I don't think I've ever seen anything quite like this before... and it's hard to soar like an eagle when you're surrounded by turkeys.
mqjeff
Posted: Fri May 24, 2013 4:09 am    Post subject:
Grand Master
Joined: 25 Jun 2008    Posts: 17447
So what load are you trying to balance in the first place?
In general, MQ clustering lets you balance the load of an application across multiple OS instances. It does this by spreading messages across queues on different queue managers, where they can be read by independent instances of the application running on independent OS instances. Because the messages are spread across multiple queue managers, each queue holds fewer messages and can deliver them to its application instance faster.
If you want to process all messages from a single queue, then share only a single queue in the cluster, as sketched below.
But that doesn't let you load balance anything at all.
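A sketch of that single-instance approach (cluster name DEMOCLUS again assumed): define QL1 on QM1 only, and do not define it on QM2. The queue is still visible cluster-wide for puts, but every message lands on QM1, where one getter can read them all:

    * runmqsc QM1 -- the only instance of QL1 in the cluster
    DEFINE QLOCAL(QL1) CLUSTER(DEMOCLUS)

    # the getting application connects to QM1 and reads everything, e.g.
    amqsgetc QL1 QM1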
bruce2359
Posted: Fri May 24, 2013 5:00 am    Post subject:
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
With MQ clusters, workload balancing takes place on MQPUTs only, not on MQGETs. BIND_ON_OPEN and BIND_NOT_FIXED are open options used by applications that are going to put messages, not get messages.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
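For completeness, a sketch of how to check which default bind a putting application will pick up for each advertised instance of the queue (run in runmqsc on the queue manager the putter connects to):

    DISPLAY QCLUSTER(QL1) DEFBIND CLUSQMGR

DEFBIND(OPEN) pins the putter to one instance at MQOPEN; DEFBIND(NOTFIXED) lets each MQPUT be workload balanced.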