mar
Posted: Thu Jul 23, 2009 4:04 am    Post subject: clustering question
Hi.
I would like to ask the following:
I have two machines (2 CPUs each) with shared disks in active/passive clustering mode.
This configuration gives me a failover solution for all the applications on each machine, including WebSphere MQ.
But I also need a load-balancing solution. If I implement a WebSphere MQ cluster with several queue managers on each machine, but only one machine is active at any time (active/passive model), does that provide any workload-balancing advantage?
The volume is 4000 messages plus 4000 response messages, but a workload-balancing solution has been requested.
Please help...
Thanks

bruce2359
Posted: Thu Jul 23, 2009 5:48 am
Workload balancing is exactly what WMQ Clusters do.

mar
Posted: Thu Jul 23, 2009 6:45 am
Thanks for the reply.
But is it worth having a cluster of queue managers all located on the same machine? Would the load balancing be significant?

bruce2359
Posted: Thu Jul 23, 2009 10:12 am
Yes. Load balancing means spreading the work across multiple qmgrs. If you only have one qmgr, then you only have one instance of the application queue. MQ clustering allows for multiple instances of the same application queue across multiple qmgrs, thus spreading (balancing) the workload - more concurrent applications processing messages.
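
As a rough sketch, the MQSC for putting an instance of the same queue on two clustered qmgrs looks something like this. All names here (cluster APPCLUS, qmgrs QM1/QM2, queue APP.REQUEST, hosts and ports) are invented for illustration, not taken from your setup:

Code:
* On QM1 (a full repository for the cluster), in runmqsc QM1:
ALTER QMGR REPOS(APPCLUS)
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('host1(1414)') CLUSTER(APPCLUS)
DEFINE QLOCAL(APP.REQUEST) CLUSTER(APPCLUS) DEFBIND(NOTFIXED)

* On QM2, in runmqsc QM2:
DEFINE CHANNEL(TO.QM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('host2(1414)') CLUSTER(APPCLUS)
DEFINE CHANNEL(TO.QM1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('host1(1414)') CLUSTER(APPCLUS)
DEFINE QLOCAL(APP.REQUEST) CLUSTER(APPCLUS) DEFBIND(NOTFIXED)

With DEFBIND(NOTFIXED), messages put to APP.REQUEST from elsewhere in the cluster are spread across both instances by the cluster workload algorithm.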

mar
Posted: Thu Jul 23, 2009 11:25 am
Thanks a lot, bruce2359.
Do you know where I can find a hardware-sizing algorithm based on my message sizes?

bruce2359
Posted: Thu Jul 23, 2009 11:32 am
You don't size hardware around a message.
The Quick Beginnings manual for your platform will give you hardware and software requirements for WMQ.
The System Admin manual will give you information on sizing logs, queues, and so on.

mar
Posted: Thu Jul 23, 2009 12:32 pm
Don't I need an algorithm for adding more RAM according to message-length requirements? Or a way to decide how many queue managers I should create in the cluster (all on the same machine) in order to achieve an acceptable workload-balancing solution?

PeterPotkay
Posted: Thu Jul 23, 2009 12:55 pm
You workload balance to send some work to this server and some work to that server, so that both can work in parallel. Having multiple QMs on one server and then "balancing" the work between them is illogical. One QM will almost always perform better than 2 or more QMs on the same server. Multiple QMs on the same server will be competing for the same CPU, the same memory, and the same disk I/O.

mar
Posted: Thu Jul 23, 2009 1:56 pm
Thanks for the reply.
So having a WMQ cluster on the same machine has no real benefit. Isn't there any way to get some workload balancing in this scenario (two identical machines in an active/passive model)?

bruce2359
Posted: Thu Jul 23, 2009 2:17 pm
Quote:
So having a WMQ cluster on the same machine has no real benefit.
Please allow me to humbly disagree with Mr. Potkay.
Assuming your server has multiple processors (who doesn't have multiple processors these days?) and multiple paths to disk, two qmgrs can process messages concurrently. If one qmgr goes down, the other can continue on.
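
To make that concrete, a minimal sketch (again with invented names and ports): two qmgrs on the same host just need distinct listener ports to coexist in the same cluster.

Code:
* In runmqsc QMA:
DEFINE LISTENER(QMA.LSTR) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
DEFINE CHANNEL(TO.QMA) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('thishost(1414)') CLUSTER(APPCLUS)

* In runmqsc QMB:
DEFINE LISTENER(QMB.LSTR) TRPTYPE(TCP) PORT(1415) CONTROL(QMGR)
DEFINE CHANNEL(TO.QMB) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('thishost(1415)') CLUSTER(APPCLUS)

Whether one box really has the CPU, memory, and I/O headroom for both is the separate question Mr. Potkay raises.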

PeterPotkay
Posted: Thu Jul 23, 2009 2:47 pm
A QM has a lot of processes, so the server would have to have a lot of cores, a lot of RAM, and a lot of I/O capability. Theoretically I think you are correct. But in practice I think the odds of 1 QM going down on a server while another keeps going are slim, and having enough resources so that 2 or more QMs can run at their full potential is also unlikely.
One of the other main reasons for workload balancing across multiple servers is so that a hardware failure doesn't impact you for new messages. I think a hardware failure is more likely than a QM failure, although the odds of both are slim.
This goes back to the same old discussion about multiple QMs on a server - yay or nay? I am squarely in the camp of one QM per server; use multiple queues and channels to make logical environments in that one QM if you must.

mar
Posted: Thu Jul 23, 2009 3:03 pm
For failover I have the two machines in an active/passive model with shared disks.
The machines have two Xeon processors each.