Disaster Recovery in clusters.
mq_abcd |
Posted: Thu Aug 30, 2007 12:54 pm Post subject: Disaster Recovery in clusters. |
 Acolyte
Joined: 13 Jun 2004 Posts: 69
Hello All,
We are creating a DR strategy for our cluster, and there are a couple of thoughts I would like to run by the experts to get your views.
DR for the FULL REPOSITORIES
OPTION 1
We are planning to have 3 full repositories: FR1, FR2, FR3.
FR1 and FR2 will be active (the partial repositories will be pointing to these).
FR3 will be passive and reside on a DR box, but will always be RUNNING as well so that it has the latest cluster information.
OPTION 2
Or is it better to have all the partial repositories pointing to only one FR, FR1, and make FR2 and FR3 passive but RUNNING repositories?
DR for the PARTIAL REPOSITORIES
OPTION 1
Name the DR queue manager the same as the PROD queue manager, but keep it down at all times.
We intend to join the DR queue manager to the prod cluster only when needed for a DR scenario.
But as the production queue manager is already part of the cluster, I would have to forcefully remove the prod queue manager from the cluster by QMID.
OPTION 2
Give the DR queue manager a different name from the prod queue manager.
But then all the applications need to point to a new queue manager, and I am not really comfortable having the applications change anything during a DR scenario.
Please give your valuable inputs.
Thanks
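For what it's worth, the forced removal by QMID in Option 1 is done with the RESET CLUSTER MQSC command, issued on a full repository queue manager. A sketch, with placeholder names (MYCLUSTER, PRODQM, and the QMID string are assumptions; take the real QMID from DISPLAY CLUSQMGR):

```mqsc
* Run in runmqsc on a FULL repository queue manager.
* First find the QMID of the stale instance:
DISPLAY CLUSQMGR(PRODQM) QMID
* Then force it out of the cluster by QMID:
RESET CLUSTER(MYCLUSTER) QMID('PRODQM_2007-08-30_12.54.00') +
      ACTION(FORCEREMOVE) QUEUES(YES)
```

QUEUES(YES) also removes the cluster queues that the removed queue manager had advertised.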
jefflowrey |
Posted: Thu Aug 30, 2007 12:57 pm Post subject: |
Grand Poobah
Joined: 16 Oct 2002 Posts: 19981
Never have two queue managers with the same name running at the same time.
ESPECIALLY with MQ clustering.
You should look at the 'backup' qmgr options in v6.
And you should be running v6.
_________________
I am *not* the model of the modern major general.
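For reference, the v6 backup queue manager support mentioned above relies on linear logging: you periodically copy the primary's log files to the backup machine and replay them with strmqm -r, then activate the backup with strmqm -a only when disaster strikes. A rough sketch of the lifecycle, with a placeholder queue manager name:

```
# Both primary and backup must use linear logging:
crtmqm -ll QM1                 # on both machines
# Periodically copy the primary's log files to the backup, then replay:
strmqm -r QM1                  # marks QM1 as a backup and replays the logs
# In a disaster, activate the backup so it can be started normally:
strmqm -a QM1
strmqm QM1
```

Until strmqm -a is issued, the backup cannot be started, which is what keeps the duplicate name off the wire.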
fjb_saper |
Posted: Thu Aug 30, 2007 1:49 pm Post subject: |
 Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
If you need to use the "recovery" qmgr, you could create a queue manager alias on it so that it accepts messages intended for QMA:
Code: |
def qr(qma) cluster(mycluster) rqmname(recoveryqmname) |
(A remote queue definition with a blank RNAME acts as a queue manager alias.) And keep this definition only while QMA is out of commission...
Enjoy
_________________
MQ & Broker admin
PeterPotkay |
Posted: Fri Aug 31, 2007 12:58 pm Post subject: |
 Poobah
Joined: 15 May 2001 Posts: 7722
None of your options consider marooned messages. You're talking about bringing up new QMs, but what about all the messages on the QM in the datacenter that Godzilla just stomped all over?
We are just starting to play with this ourselves. Our initial design, yet to be fully tested, is a 3-node hardware cluster: Node 1 and Node 2 in the primary data center, Node 3 in the DR center. The QM can fail over automatically on its own between Node 1 and Node 2. This provides true high availability. Data is asynchronously sent to the SAN in the alternate datacenter. If a disaster is declared, the QM is brought up on Node 3 in an automated but not automatic fashion. You may be a few seconds behind as far as data is concerned, so apps have to deal with missing messages or duplicate messages. But if the bandwidth between the sites is beefy, that should be a very small number. Some data loss is a fact of life in true disasters.
Whether the QM is in an MQ cluster or not makes no difference in this type of setup. If the QM is in an MQ cluster, follow all the regular MQ cluster rules. If you have the option, put 1 FR in each data center.
If marooned messages are not an issue and you want to solve this with standalone QMs, mirror the sites. Have 1 FR in Datacenter A (DCA) and one in DCB, both running all the time. If you have 5 PRs in DCA, have 5 in DCB. These 5 in DCB should be offline until a disaster strikes. When you bring them online, it's up to the apps to make sure they reconnect over to the alternate PRs, unless you are lucky enough to have your apps reading and writing to the FRs in both DCA AND DCB concurrently. In that case a disaster is rather easy for you; you don't have to do anything. But any messages on those QMs in the lost DC are gone. If the other DC truly is gone, I would promote one of the remaining PRs to a full repository.
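Promoting a surviving partial repository to a full repository, as described above, is a one-line MQSC change on that queue manager (the cluster name here is a placeholder):

```mqsc
* On the surviving partial repository, in runmqsc:
ALTER QMGR REPOS(MYCLUSTER)
* Verify it is now holding the full repository role:
DISPLAY QMGR REPOS
```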
Backup QMs in v6. Eh.
I doubt it handles MQ authorities. Or does it, since they are held in a queue? You have to use linear logging. Will that handle all the MQ object definition adds / changes / deletions? What about all the stuff held in the qm.ini file or the Windows Registry? You would have to keep that in sync manually. And how do you do that for the DR MQ clustered QM if it can't be brought online? (You can't have 2 QMs with the same name on the wire in a cluster.) Bottom line with a backup QM: how close to a 100% match is it to your primary QM at any point in time? I don't know. New stuff.
Assuming it works, I think a hardware clustering solution spanning the 2 DCs is going to be better. But that takes $$$$$
_________________
Peter Potkay
Keep Calm and MQ On
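On the authorities question: OAM authority records can at least be dumped on the primary and re-applied by hand on the DR box to keep the two in step. A sketch, assuming the queue manager, profile, and group names (QM1, APP.QUEUE, appgroup) carry over unchanged:

```
# Dump all OAM authority records for the queue manager:
dmpmqaut -m QM1
# Re-apply a record on the DR box with setmqaut, e.g.:
setmqaut -m QM1 -n APP.QUEUE -t queue -g appgroup +put +get +browse
```

It's manual and easy to let drift, which is exactly the sync problem described above.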
jefflowrey |
Posted: Fri Aug 31, 2007 4:19 pm Post subject: |
Grand Poobah
Joined: 16 Oct 2002 Posts: 19981
Peter..
My best guess at what backup qmgrs in v6 do is that it's the equivalent of the z/OS ability to write duplicate copies of log files to two disk regions.
But I've only sort of... glanced... at the documentation. I suspect it's a bit more robust than that.
_________________
I am *not* the model of the modern major general.
PeterPotkay |
Posted: Fri Aug 31, 2007 4:28 pm Post subject: |
 Poobah
Joined: 15 May 2001 Posts: 7722
I'm gonna play with them. I likened it to SQL Server log shipping: it gets your data over there, but there is still some stuff that you have to handle manually to keep things 100% in sync. Unless the documentation is skipping out on a lot of the functionality.
_________________
Peter Potkay
Keep Calm and MQ On