Clustering, High(ish) Availability and Failover
damianharvey |
Posted: Mon Jul 11, 2005 10:08 pm Post subject: Clustering, High(ish) Availability and Failover
Acolyte
Joined: 05 Aug 2003 Posts: 59 Location: Sydney, Australia
Hi all,
I'm setting up a Message Broker environment comprising 2 Intel Linux servers with MQ and Message Broker. I want to use Clusters to improve throughput, but would like some opinions on the pitfalls, especially regarding getting reasonably high availability and handling failover.
As I see it, I should set up 2 Queue Managers (QM1 and QM2 - one for each server) configured with my Cluster Queue (Q1) as the Input Queue for Message Broker. I also have 2 further Queue Managers (QM3 and QM4 - again one for each server) that are part of the Cluster but do not have a local definition of Q1. Client apps will connect to QM3 and QM4 and attempt to PUT messages to Q1. As there are no local definitions, MQ will use the Cluster Queue Q1 and load-balance between QM1 and QM2 (round robin).
This takes care of my load-balancing and failover should QM1 or QM2 go down.
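For the record, here's roughly the MQSC I have in mind - a sketch only, with a made-up cluster name (BRKCLUS) and made-up hosts/ports:
Code:
* On QM1 (full repository; QM2 is the mirror image with TO.QM2 / server2):
ALTER QMGR REPOS(BRKCLUS)
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('server1(1414)') CLUSTER(BRKCLUS)
DEFINE CHANNEL(TO.QM2) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('server2(1414)') CLUSTER(BRKCLUS)
DEFINE QLOCAL(Q1) CLUSTER(BRKCLUS) DEFBIND(NOTFIXED)

* On QM3 (partial repository, no local Q1; QM4 is the mirror image on server2):
DEFINE CHANNEL(TO.QM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('server1(1415)') CLUSTER(BRKCLUS)
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('server1(1414)') CLUSTER(BRKCLUS)

* DEFBIND(NOTFIXED) on Q1 so each message can be workload-balanced between QM1 and QM2
* rather than sticking to whichever instance was chosen at MQOPEN time.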
Should QM3 or QM4 go down, then I have 3 options:
1. The Application logic could handle failing over between QM3 and QM4.
2. Implementing a heartbeat such as the one described here http://www-128.ibm.com/developerworks/linux/library/l-halinux2
3. Hardware clustering eg. RedHat Cluster Suite.
#1 and #2 are cheap and do-able right away. #3 may have to wait for further $$$.
We have some messages that are required to be delivered in sequence. If these messages are put on a Cluster Queue (Q2) load-balanced across QM1 and QM2, and QM1 goes down with messages on Q2, then I have an issue. For these messages I can define Q2 with DEFBIND(OPEN). This should mean that all messages are delivered to the one QM. If QM1 goes down while messages are being delivered, then it's OK, as the sequence is maintained once it comes back up.
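e.g. something like this (same made-up cluster name as above):
Code:
* On QM1 and QM2 - Q2 advertised to the cluster, but bound to one instance per MQOPEN:
DEFINE QLOCAL(Q2) CLUSTER(BRKCLUS) DEFBIND(OPEN)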
How does this sound? Short of Hardware clustering is this a good high-ish availability solution?
Would really appreciate your feedback as there's no one to bounce these things off here.
Cheers,
D.
PeterPotkay |
Posted: Tue Jul 12, 2005 1:51 pm Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
Quote:
Should QM3 or QM4 go down, then I have 3 options:
1. The Application logic could handle failing over between QM3 and QM4.
The easiest way to accomplish this is to use MQ Channel tables, that way you don't even have to code for it.
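Something like this on one queue manager builds the client channel table for you (channel names, hosts and ports are made up here; matching SVRCONN channels are needed on QM3 and QM4):
Code:
* Both CLNTCONN definitions on one queue manager, so they end up in its AMQCLCHL.TAB:
DEFINE CHANNEL(CLNT.QM3) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('server1(1416)') QMNAME(GATEWAY)
DEFINE CHANNEL(CLNT.QM4) CHLTYPE(CLNTCONN) TRPTYPE(TCP) CONNAME('server2(1416)') QMNAME(GATEWAY)
* Copy AMQCLCHL.TAB to the client box, point MQCHLLIB/MQCHLTAB at it, and have the app
* connect with queue manager name '*GATEWAY' - MQ then works through the channels in the
* table and connects to whichever of QM3/QM4 is available.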
I looked at that link quickly. It looks like a full hardware cluster solution, and would be preferred. Doesn't this solution cost $$$, like option #3?
As for making sure all the messages go to one instance of the Q, this is a bad design, but if the app insists on it, you have some holes, even if you use Bind On Open.
Consider an app that puts 100 messages. It connects to QM4, opens the q for Bind on Open, and happens to bind to QM1. Halfway thru, QM4 goes down. You reconnect to QM3, reopen the queue, and whoops! you bind to QM2 for the remainder!
Make sure you code around this...put all the messages in one unit of work, or choose QM1 or QM2 yourself on the MQOPEN, and keep putting to that one no matter what.
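If you don't want that logic in the app at all, one hypothetical, admin-side way to get the same pinning effect is a remote queue definition on QM3 and QM4 - object names below are made up:
Code:
* Resolves to Q2 at QM1 specifically; blank XMITQ so the cluster channels still carry it:
DEFINE QREMOTE(Q2.PINNED) RNAME(Q2) RQMNAME(QM1) XMITQ(' ')
* The app opens Q2.PINNED instead of Q2, so every message lands on the same queue manager
* regardless of which gateway it happened to connect to.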
Of course, if you are doing this, then why bother with the MQ cluster at all? You don't want to utilize the benefits of MQ clustering fully. Just connect directly to QM1 or QM2, put the coin-flip logic in your app, and eliminate QM3 and QM4.
And depending on how strong this Heartbeat from Linux thingie is, you may just as well eliminate QM2 too, if you can be sure that QM1 will always be up. Well, you and I know no hardware cluster solution is 100%, no matter how much it costs, so maybe QM1 and QM2 are a good idea.
But if all messages need to be processed in order by one instance at a time - toss the MQ cluster and QM3 and QM4. You don't need them.
_________________
Peter Potkay
Keep Calm and MQ On
damianharvey |
Posted: Thu Jul 14, 2005 8:13 pm Post subject:
Acolyte
Joined: 05 Aug 2003 Posts: 59 Location: Sydney, Australia
PeterPotkay wrote:
The easiest way to accomplish this is to use MQ Channel tables, that way you don't even have to code for it.
Cool. I'll check these out. Sounds promising.
PeterPotkay wrote:
Doesn't this solution cost $$$, like option #3?
It would if we didn't already have a SAN. I think the shared disk would be the biggest cost to set it up from scratch.
PeterPotkay wrote:
But if all messages need to be processed in order by one instance at a time - toss the MQ cluster and QM3 and QM4. You don't need them.
For the messages that have to be in sequence I won't use the Cluster; however, for other messages where sequence isn't so crucial I'd still use it to balance the load between the 2 active/active servers (hmm, will have to study the heartbeat thing and see if it requires active/passive rather than active/active).
Thanks for the reply Peter. I appreciate the thought put into it.
Cheers,
D.
ashoon |
Posted: Fri Jul 15, 2005 6:32 am Post subject: use local MQ connections vs. clients
Master
Joined: 26 Oct 2004 Posts: 235
Hello - a few comments based on my experience...
First question - what happens to messages on Q1 that haven't been processed by the broker yet? Without some kind of failover (heartbeat/Red Hat clustering) for that queue manager/broker, those messages are unavailable.
I've done the heartbeat package, but used a SAN for log/queue file storage, as the SAN does provide higher availability on the disk... I based it on the above developerWorks link, and I don't believe that it'll be very difficult to add the broker under heartbeat control either (however, I'll still lean towards the Red Hat clustering from a pure techie standpoint).
Finally, why use MQ clients to come into the broker cluster? While you can use the client channel tables to handle failure of a gateway queue manager over to another, MQ itself is meant to be distributed, i.e. queue managers at the server application endpoints allow the apps to be aware of the cluster etc... without having to add code, and allowing for greater throughput.
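e.g. bringing an application-side queue manager into the cluster is only a couple of definitions (illustrative names, reusing the made-up cluster name from earlier in the thread):
Code:
* On APPQM, the queue manager local to the putting application (partial repository):
DEFINE CHANNEL(TO.APPQM) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('apphost(1414)') CLUSTER(BRKCLUS)
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('server1(1414)') CLUSTER(BRKCLUS)
* The app binds locally to APPQM and just puts to Q1 - the cluster resolves and balances the
* destination across QM1/QM2 with no client channel or failover code in the application.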