MQ on VPOD or VCE
vsathyan (Centurion - Joined: 10 Mar 2014, Posts: 121)
Posted: Sun Jan 22, 2017 11:37 pm - Post subject: MQ on VPOD or VCE
Hello,
Does anyone here have experience hosting IBM MQ 9.0 on a VPOD infrastructure or a VCE?
I'm checking the feasibility of hosting MQ in the cloud, offered as SaaS, able to scale up or down automatically depending on the load.
Any pointers, references or documents on the subject would be appreciated.
Thank you very much.
Regards,
vsathyan
_________________
Custom WebSphere MQ Tools Development - C# & Java
WebSphere MQ Solution Architect Since 2011
WebSphere MQ Admin Since 2004
mqjeff (Grand Master - Joined: 25 Jun 2008, Posts: 17447)
Posted: Mon Jan 23, 2017 8:05 am
What does "scale up and down" mean for your MQ network?
Do the servers increase or decrease capacity? Or are additional MQ servers created and deleted?
_________________
chmod -R ugo-wx /
vsathyan (Centurion)
Posted: Mon Jan 23, 2017 9:13 am
Hi Jeff,
I was looking at whether it's possible to increase or decrease the number of cluster queue instances and the related consuming applications dynamically: spin up new VMs, auto-deploy MQ queues, deploy the apps on separate VMs, connect as clients to the newly created queue managers, process the messages, and after a day release the cluster queue instances and spin the VMs back down.
Is there a scalable MQ infrastructure that adjusts to the load?
Thanks in advance.
Regards,
vsathyan
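For illustration, the "auto deploy MQ queues" step described above could be driven by an MQSC script run against each newly provisioned queue manager. A minimal sketch only - every name here (queue manager NEWQM01, cluster APP.CLUSTER, hosts, queue) is hypothetical:

```mqsc
* Join the cluster: a cluster-receiver advertising this new queue manager,
* plus a cluster-sender pointing at a full repository
DEFINE CHANNEL(APP.CLUSTER.NEWQM01) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('newqm01.example.com(1414)') CLUSTER('APP.CLUSTER') REPLACE
DEFINE CHANNEL(APP.CLUSTER.FR1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr1.example.com(1414)') CLUSTER('APP.CLUSTER') REPLACE

* Advertise another instance of the clustered application queue
DEFINE QLOCAL(APP.REQUEST) CLUSTER('APP.CLUSTER') DEFBIND(NOTFIXED) REPLACE
```

The consuming applications would then connect as clients to the new queue manager and service the additional queue instance.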
mqjeff (Grand Master)
Posted: Mon Jan 23, 2017 9:33 am
It's reasonable to do, except you have to be careful about how you deprovision the servers, to avoid cluttering up the cluster repositories and leaving messages waiting to go to queue managers that no longer exist.
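To make that concrete: an orderly removal might look like the following MQSC, run on the queue manager being decommissioned (names hypothetical; a sketch, not a complete runbook):

```mqsc
* Stop the cluster routing new work to this queue manager
SUSPEND QMGR CLUSTER(APP.CLUSTER)

* Take the queue instance out of the cluster, then let consumers drain it
ALTER QLOCAL(APP.REQUEST) CLUSTER(' ')
DISPLAY QLOCAL(APP.REQUEST) CURDEPTH

* Once local and transmit queues are empty, leave the cluster
ALTER CHANNEL(APP.CLUSTER.NEWQM01) CHLTYPE(CLUSRCVR) CLUSTER(' ')

* If a VM is simply destroyed instead, an administrator on a full
* repository can tidy up the leftover entry:
* RESET CLUSTER(APP.CLUSTER) QMNAME(NEWQM01) ACTION(FORCEREMOVE) QUEUES(YES)
```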
Vitor (Grand High Poobah - Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA)
Posted: Mon Jan 23, 2017 10:18 am
mqjeff wrote:
> It's reasonable to do, except you have to be careful about how you deprovision the servers - to avoid cluttering up the cluster repositories, and leaving messages waiting to go to queue managers that don't exist any more.
Another option would be to spin up queue managers as you describe, but stand-alone rather than as members of an MQ cluster, and run the client connections through a load balancer. That way the simple act of decommissioning the VM removes the queue manager as a client target, and any connected clients get a CC 2 advising them to reconnect.
This also removes the risk of someone decommissioning one or both of the VMs hosting the FRs.
You could also do this with a CCDT, but editing it and pushing it out will be a PITA. It depends how dynamic you need to be.
_________________
Honesty is the best policy.
Insanity is the best defence.
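A sketch of the server side of that pattern (all names hypothetical): each interchangeable queue manager behind the balancer carries identical definitions, so any of them can serve any client the VIP hands out.

```mqsc
* Identical definitions on every queue manager behind the load balancer
DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
DEFINE QLOCAL(APP.REQUEST) REPLACE

* Clients then need only one connection string, aimed at the VIP, e.g.
* MQSERVER='APP.SVRCONN/TCP/mq-vip.example.com(1414)'
```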
mqjeff (Grand Master)
Posted: Mon Jan 23, 2017 10:24 am
Vitor wrote:
> mqjeff wrote:
> > It's reasonable to do, except you have to be careful about how you deprovision the servers - to avoid cluttering up the cluster repositories, and leaving messages waiting to go to queue managers that don't exist any more.
> Another option would be to spin up queue managers as you describe but stand alone rather than members of an MQ cluster and run the client connections through a load balancer. In that way the simple act of decommissioning the VM removes the queue manager as a client target and any connected clients get a CC 2 to advise them to reconnect.
In any implementation, you need to consider how messages are going to be processed off a queue manager before it is decommissioned.
Remove the queues from the cluster? Use pub/sub to make sure several app instances can receive all messages? Something else?
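"Remove the queues from the cluster" is a one-line change per queue. Blanking the CLUSTER attribute stops the queue being advertised to the cluster while leaving it locally available, so consumers can drain the remaining messages (queue name hypothetical):

```mqsc
* Stop advertising the queue to the cluster; locally connected
* consumers can still get the messages already on it
ALTER QLOCAL(APP.REQUEST) CLUSTER(' ')
```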
Vitor (Grand High Poobah)
Posted: Mon Jan 23, 2017 10:27 am
mqjeff wrote:
> In any implementation, you need to consider how messages are going to be processed off a queue manager before it is decommissioned...
Quite so, but it's a more straightforward piece of coding to check that a local queue is empty than to check that the SCTQ contains no messages addressed to the soon-to-be-decommissioned queue manager and that the FR has acknowledged the removal of the queue manager from the cluster.
Not that doing all that cluster checking is impossible, just less straightforward.
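The two checks being compared might look like this in MQSC (queue name hypothetical). Note that the SCTQ depth alone doesn't say which destination the messages are for; the messages would have to be browsed to establish that.

```mqsc
* The straightforward check: is the local application queue empty?
DISPLAY QLOCAL(APP.REQUEST) CURDEPTH

* The harder check: anything still waiting on the cluster transmit queue?
* (With DEFCLXQ(CHANNEL) on MQ 8.0 or later, there may instead be one
* SYSTEM.CLUSTER.TRANSMIT.<channel-name> queue per cluster-sender.)
DISPLAY QLOCAL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) CURDEPTH
```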
PeterPotkay (Poobah - Joined: 15 May 2001, Posts: 7722)
Posted: Mon Jan 23, 2017 3:35 pm
I wonder which is more likely needed: additional instances of client applications joining the party racing for messages arriving on the queues, or actual new additional queue managers.
If you start off with several properly tuned queue managers in an MQ cluster, odds are they can handle any work you give them, and you just need to spin up more consuming applications to keep up with the increased load.
Not knowing your requirements, maybe you really do overwhelm your MQ servers and need more MQ servers in addition to more client apps.
If your apps run directly on the MQ servers, so that when you need more app power you drag along more MQ server power, maybe revisit that design and move to an MQ client model.
Again, we have zero insight into your environment.
_________________
Peter Potkay
Keep Calm and MQ On
zpat (Jedi Council - Joined: 19 May 2001, Posts: 5866, Location: UK)
Posted: Tue Jan 24, 2017 12:40 am
Use a (pair of) MQ Appliances and have the apps use MQ client...?
The MQ client fits the cloud model more easily than the MQ server does.
A decent-sized centralised MQ server can handle a huge range of workloads (assuming the apps run elsewhere), and you can always use dynamic provisioning of CPU resources to deal with peak loads.
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
vsathyan (Centurion)
Posted: Tue Jan 24, 2017 3:54 am
Thanks all for the responses!
We are building out a new MQ environment. However, the constraint we have is that the applications cannot maintain multiple MQ configurations and cannot use channel table files.
So we are discussing hosting MQ on VMs, in the cloud, etc., considering multiple options to achieve HA with zero downtime.
We are at a point where the architecture of the new environment should be a benchmark for building our future environments.
With that in mind, what is the best possible approach? Point-to-point, MQ clusters, HACMP, MIQM, etc.?
1. All our MQ applications will connect to the MQ queue manager in client mode only.
2. The applications are from diverse platforms - Java, JMS, .NET, BizTalk, BPEL, WebLogic, webMethods, including Oracle SOA OSB - which cannot maintain multiple configs or use channel table files (my knowledge is limited to MQ; the above are the inputs I got from my OSB architects).
Thanks all,
Regards,
vsathyan
Vitor (Grand High Poobah)
Posted: Tue Jan 24, 2017 5:33 am
vsathyan wrote:
> So, we are discussing about hosting MQ on VMs, Cloud, etc considering multiple options to achieve HA with zero downtime.
This is a fool's requirement. There will always be downtime, even if it's as short as the time it takes a client to reconnect.
Your actual requirement is not zero downtime but no loss of processing capability. So you need to consider not so much the HA as how you're going to distribute traffic over multiple queue managers, so that processing can continue while a failed queue manager is recovered.
This is why we use network-level distribution. It's a very fast way of removing a failed queue manager (or queue managers) from consideration as a processing target. We consider the loss of all the queue managers a DR issue rather than an HA one.
vsathyan wrote:
> 2. The applications are from diverse platforms - java, jms, dot net, Biztalk, BPEL, weblogic, web methods, including Oracle SOA OSB, which cannot maintain multiple configs, unable to use channel table files (my knowledge is limited to MQ and the above are the inputs i got from my OSB architects)
Another advantage of the network solution: any application which can't manage a CCDT or another multiple-config solution can just be given a single configuration which happens to contain the VIP address on the front of the load balancer.
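On the client side, that single configuration could be as small as one client-connection definition (or the equivalent MQSERVER string) naming the balancer's VIP rather than any individual queue manager. Hostnames and channel name here are hypothetical, and the blank QMNAME assumes the applications connect with a blank queue manager name so they accept whichever queue manager the balancer routes them to:

```mqsc
* Built into a CCDT with runmqsc -n, or mirrored by
* MQSERVER='APP.SVRCONN/TCP/mq-vip.example.com(1414)'
DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('mq-vip.example.com(1414)') QMNAME(' ') REPLACE
```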