NotMe (Apprentice, Joined: 25 Nov 2009, Posts: 26)
Posted: Wed Nov 25, 2009 7:46 am
Post subject: Pros and Cons of adding a Multi-Instance QM to a Cluster
What are the pros and cons of adding a Multi-Instance Queue Manager to an MQ cluster, and is this even possible?
PeterPotkay (Poobah, Joined: 15 May 2001, Posts: 7722)
Posted: Wed Nov 25, 2009 7:54 am
It makes no difference whether a Multi-Instance QM participates in an MQ cluster or not. The two concepts and technologies are completely independent, although they can be used concurrently.
_________________
Peter Potkay
Keep Calm and MQ On
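Since the two technologies are orthogonal, a multi-instance queue manager is created and started in the usual V7.0.1 way regardless of whether it later joins a cluster. A minimal sketch of the control commands, with hypothetical names throughout (QM1, the /MQHA paths):

```shell
# On node A: create the qmgr with data and logs on shared (e.g. NFSv4)
# storage, then start it as the active instance (paths are hypothetical)
crtmqm -md /MQHA/data -ld /MQHA/logs QM1
strmqm -x QM1

# On node B: register the same shared data, then start a standby instance
addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 \
         -v Prefix=/var/mqm -v DataPath=/MQHA/data/QM1
strmqm -x QM1
```

Joining the cluster afterwards is the ordinary CLUSRCVR/CLUSSDR channel definition work, exactly as for a single-instance queue manager.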
NotMe (Apprentice, Joined: 25 Nov 2009, Posts: 26)
Posted: Wed Nov 25, 2009 8:06 am
exerk (Jedi Council, Joined: 02 Nov 2006, Posts: 6339)
Posted: Wed Nov 25, 2009 11:37 am
PeterPotkay wrote:
    It makes no difference whether a Multi-Instance QM participates in an MQ cluster or not. The two concepts and technologies are completely independent, although they can be used concurrently.
Agreed, provided that all other queue managers are at V7.0.1, or it's all going to fall over rather spectacularly...
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
mvic (Jedi, Joined: 09 Mar 2004, Posts: 2080)
Posted: Wed Nov 25, 2009 2:17 pm
exerk wrote:
    Agreed, provided that all other queue managers are at V7.0.1, or it's all going to fall over rather spectacularly...
Why so?
exerk (Jedi Council, Joined: 02 Nov 2006, Posts: 6339)
Posted: Wed Nov 25, 2009 3:11 pm
mvic wrote:
    exerk wrote:
        Agreed, provided that all other queue managers are at V7.0.1, or it's all going to fall over rather spectacularly...
    Why so?
Has the functionality of chaining CONNAMEs been added to versions of WMQ below 7.0.1?
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
mvic (Jedi, Joined: 09 Mar 2004, Posts: 2080)
Posted: Wed Nov 25, 2009 3:18 pm
exerk wrote:
    Has the functionality of chaining CONNAMEs been added to versions of WMQ below 7.0.1?
Well no, but I heard from the developers of 7.0.1 that a comma-delimited CONNAME would flow to a pre-7.0.1 qmgr in a cluster and do no harm. I haven't tried it myself, though, I must admit. If anyone does find a problem doing this, I think IBM would service the PMR.
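The comma-delimited CONNAME mvic mentions is the V7.0.1 way of advertising both hosts of a multi-instance queue manager on a single cluster-receiver channel. A sketch in MQSC, with hypothetical hostnames, ports, and cluster name:

```mqsc
* V7.0.1 syntax: both instances' addresses on one CLUSRCVR.
* Connecting partners try hosta first, then hostb (names hypothetical).
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('hosta(1414),hostb(1414)') CLUSTER(MYCLUS) REPLACE
```

As discussed below, only V7.0.1 partners know to try the second address; older queue managers simply use the first entry.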
fjb_saper (Grand High Poobah, Joined: 18 Nov 2003, Posts: 20756, Location: LI, NY)
Posted: Wed Nov 25, 2009 4:06 pm
mvic wrote:
    exerk wrote:
        Has the functionality of chaining CONNAMEs been added to versions of WMQ below 7.0.1?
    Well no, but I heard from the developers of 7.0.1 that a comma-delimited CONNAME would flow to a pre-7.0.1 qmgr in a cluster and do no harm. I haven't tried it myself, though, I must admit. If anyone does find a problem doing this, I think IBM would service the PMR.
When you say pre-7.0.1, did you actually mean V6.x?
If so, what happens when the 6.x qmgr tries to send to the 7.0.1 qmgr that has now failed over?
I would expect that in a mixed cluster you might have to front your 7.0.1 qmgr with a network device that will find the qmgr irrespective of host/port...
_________________
MQ & Broker admin
mvic (Jedi, Joined: 09 Mar 2004, Posts: 2080)
Posted: Wed Nov 25, 2009 4:29 pm
fjb_saper wrote:
    mvic wrote:
        exerk wrote:
            Has the functionality of chaining CONNAMEs been added to versions of WMQ below 7.0.1?
        Well no, but I heard from the developers of 7.0.1 that a comma-delimited CONNAME would flow to a pre-7.0.1 qmgr in a cluster and do no harm. I haven't tried it myself, though, I must admit. If anyone does find a problem doing this, I think IBM would service the PMR.
    When you say pre-7.0.1, did you actually mean V6.x?
Yes. I stress again this is not something I have tried. I'm also not aware of specific information in the manuals dealing with it.
Quote:
    If so, what happens when the 6.x qmgr tries to send to the 7.0.1 qmgr that has now failed over?
The auto-defined CLUSSDR channel will go into retry, I expect, unable to connect to the listener where the qmgr failed or switched over. Messages put via bind-on-open etc. will not get reallocated to another channel, even if the capacity exists in the cluster. Hmm.
One more point (not that it solves the above): you can have a blank CONNAME on a CLUSRCVR, though I think this then means you have to use the implied port of 1414 for such CLUSRCVRs.
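The blank-CONNAME point can be sketched as follows, again with hypothetical names, and with the thread's own caveat that partners then assume the default port:

```mqsc
* CONNAME omitted: the queue manager generates one from its local
* IP address, and the implied port is the default, 1414.
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CLUSTER(MYCLUS)
```

This does not help with failover, since the generated address still names only the host the active instance happens to be running on.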
fjb_saper (Grand High Poobah, Joined: 18 Nov 2003, Posts: 20756, Location: LI, NY)
Posted: Wed Nov 25, 2009 5:03 pm
Apart from the network device, I guess you could also have two defined cluster receivers, one for the primary instance and one for the failover instance...
The cluster algorithm would then choose the running one...
_________________
MQ & Broker admin
mvic (Jedi, Joined: 09 Mar 2004, Posts: 2080)
Posted: Wed Nov 25, 2009 5:39 pm
fjb_saper wrote:
    Apart from the network device, I guess you could also have two defined cluster receivers, one for the primary instance and one for the failover instance...
    The cluster algorithm would then choose the running one...
No, the new instance is the same queue manager as the old (failed) instance.
FR = full repository, PR = partial repository.
Assume it's a PR for the moment. The PR has one qmgr name, one QMID, and one set of CLUSRCVRs naming the same CONNAME, regardless of where it runs. When it restarts in a different location, all of those properties will be the same, even though it runs on a different piece of hardware with a different IP interface.
I'm beginning to think IP switchover isn't such a bad idea. But then what about the NFS connection that is established with the disk... wouldn't that break? Perhaps not, if it was attached via a separate interface to the one whose IP was changing.
I'll see what I can find out.
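mvic's one-qmgr-one-QMID observation can be checked directly after a switchover; a sketch, with a hypothetical queue manager name:

```shell
# List the active and standby instances of the multi-instance qmgr
dspmq -x -m QM1

# From a runmqsc session on any cluster member: after failover the
# qmgr still shows the same QMID and channel, whichever host it is on
DISPLAY CLUSQMGR(QM1) QMID CONNAME CHANNEL
```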
PeterPotkay (Poobah, Joined: 15 May 2001, Posts: 7722)
Posted: Wed Nov 25, 2009 6:05 pm
mvic wrote:
    fjb_saper wrote:
        Apart from the network device, I guess you could also have two defined cluster receivers, one for the primary instance and one for the failover instance...
        The cluster algorithm would then choose the running one...
    No, the new instance is the same queue manager as the old (failed) instance.
    FR = full repository, PR = partial repository.
    Assume it's a PR for the moment. The PR has one qmgr name, one QMID, and one set of CLUSRCVRs naming the same CONNAME, regardless of where it runs. When it restarts in a different location, all of those properties will be the same, even though it runs on a different piece of hardware with a different IP interface.
    I'm beginning to think IP switchover isn't such a bad idea. But then what about the NFS connection that is established with the disk... wouldn't that break? Perhaps not, if it was attached via a separate interface to the one whose IP was changing.
    I'll see what I can find out.
I suspect FJ knows that it's the same QM in this case, but was proposing that the one Multi-Instance QM has two cluster-receiver channels:
TO.QM1.CLUSTERNAME.1 (with the IP address of the primary server)
TO.QM1.CLUSTERNAME.2 (with the IP address of the secondary server)
The whole cluster knows two ways into this QM, and one of the channels will always be valid. And the other would always be retrying - ack!
I think this is what FJ was thinking.
I don't remember if any of the presentations on the new Multi-Instance QMs got into the nitty-gritty of MQ clustering, but I suspect that for it to work cleanly, every other QM in the cluster that could possibly talk to the MI QM, including all FRs, would also have to be at MQ 7.0.1, so that they could deal with an auto-defined CLUSRCVR with multiple addresses in the CONNAME.
There are two "magical" things that make traditional H.A. solutions viable for automatic and reliable failover: the shared storage and the virtual IP. A Multi-Instance QM only tackles one of them directly, the shared storage. It's the clients and partner QMs that tackle the VIP aspect with the multi-host CONNAME parameter, but only if they are at MQ 7.0.1 as well.
_________________
Peter Potkay
Keep Calm and MQ On
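The two-channel layout Peter describes would look something like this in MQSC. Channel names, hostnames, and the cluster name are hypothetical (the 20-character channel-name limit is just met here):

```mqsc
* One CLUSRCVR per physical host of the multi-instance qmgr QM1
DEFINE CHANNEL(TO.QM1.CLUSTERNAME.1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('hosta(1414)') CLUSTER(CLUSTERNAME)
DEFINE CHANNEL(TO.QM1.CLUSTERNAME.2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('hostb(1414)') CLUSTER(CLUSTERNAME)
```

As the thread notes, whichever instance is standing by, the channel pointing at it sits in retry: the cluster advertises two routes to a single QMID, with no guarantee senders pick the live one.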
fjb_saper (Grand High Poobah, Joined: 18 Nov 2003, Posts: 20756, Location: LI, NY)
Posted: Wed Nov 25, 2009 9:47 pm
Well said, Peter, that was exactly my thought.
Now if IBM could bring out a new mqipt with knowledge of the failover instance, that too might be a solution for a mixed cluster...
_________________
MQ & Broker admin
mvic (Jedi, Joined: 09 Mar 2004, Posts: 2080)
Posted: Thu Nov 26, 2009 2:46 am
PeterPotkay wrote:
    The whole cluster knows two ways into this QM, and one of the channels will always be valid. And the other would always be retrying - ack!
You'd have to hope that all command messages from the FRs, and application messages from all queue managers, would use only the good channel, and after switchover use only the other (now good) channel. I am not sure there is any guarantee of that.
mvic (Jedi, Joined: 09 Mar 2004, Posts: 2080)
Posted: Thu Nov 26, 2009 9:01 am
mvic wrote:
    I'll see what I can find out.
After making more enquiries, I think the discussion in this thread is a fair summary of all the considerations that are relevant.
The idea of a virtual IP and switching it over is not explicitly dealt with by MQ guidance information. As I speculated earlier, I think that to make use of the 7.0.1 qmgr switchover capability while also switching the IP address, you would have to make sure that NFS connectivity was not lost while this was going on. I imagine (???) this implies that the NFS connection would have to be via a separate interface whose address was not changing.
In fact, how would you deal with the race condition between the two tasks: 1. queue manager switchover (in principle this goes through as soon as NFS releases the file locks to the standby instance); 2. IP address switchover? If this race can't be handled somehow (I can't see how one would handle it), then you may not be able to do error-free automated switchover of both the qmgr and the IP address.