Comparing MQ H.A. Technologies
PeterPotkay
Posted: Fri Feb 12, 2016 8:26 am    Post subject: Comparing MQ H.A. Technologies

Poobah

Joined: 15 May 2001
Posts: 7722

You may have seen a slide in IBM PowerPoint presentations that shows the various technologies used for H.A. and what each one gives you: access to existing messages waiting on a queue, and the ability to send or receive new messages generated after "the failure".

To this analysis I want to add a single queue manager running on a virtual server, supported by a hypervisor with lots of magic and lots of redundant hardware under the covers.

The first question is: how do you treat a single QM on a single virtual server in this analysis? Don't forget all the hardware redundancy, the reliance on SAN/NAS, and the automatic failover (of the O/S image, for certain scenarios) that a hypervisor brings to the table.

Let's see how this formats, since I'm not able to attach my version of this drawing.

Code:

                                          Access to existing messages    Ability to generate or
                                          waiting in a queue             receive new messages

A    Shared Queues (z/OS only)            Continuous                     Continuous
B    Single QM (single virtual server)    None (or Automatic?!)          None (or Automatic?!)
B+C                                       None (or Automatic?!)          Continuous
C    MQ Cluster of 2+ QMs                 None                           Continuous
C+D                                       Automatic                      Continuous
D    VCS, MCS, M.I. QM, MQ Appliance      Automatic                      Automatic
E    Single QM (single physical server)   None                           None
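
For reference, row D's "Automatic / Automatic" is what you get from, for example, a multi-instance queue manager. A minimal sketch of setting one up, assuming a shared filesystem (e.g. NFSv4) mounted at /MQHA on both nodes; the mount point and the name QM1 are just placeholders:

Code:

# Node A: create the queue manager with its data and logs on the shared mount
crtmqm -md /MQHA/qmgrs -ld /MQHA/logs QM1

# Node A: print the addmqinf command that defines QM1 on the second node
dspmqinf -o command QM1
# Node B: run the addmqinf command printed above

# Start the active instance on node A, then the standby on node B
strmqm -x QM1        # node A: becomes the active instance
strmqm -x QM1        # node B: becomes the standby instance

# Either node: show which instance is active and which is standby
dspmq -x -m QM1

If the node running the active instance dies, the standby takes over on its own, which is what "Automatic" means in row D.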



The second question is: exactly what failure scenario is being considered when we talk about MQ H.A. solutions?
Hardware failure?
O/S blue screen of death or kernel panic?
Queue Manager 'hangs' (whatever that means; see the probe sketch after this list)?
Scheduled outage for maintenance on that one operating system (O/S patches, MQ upgrade)?
Should there be a separate chart for each of these scenarios?
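
On the "queue manager hangs" scenario: whatever the definition, an H.A. cluster only fails over when its monitor decides the queue manager is unhealthy, so it is worth being explicit about what that check actually tests. A rough sketch of a probe that separates "process exists" from "queue manager responds" (QM1 is a placeholder name, and a real H.A. agent would wrap this in a timeout):

Code:

# Is the queue manager process up at all?
dspmq -m QM1

# Does it actually respond to commands? A truly hung queue manager
# will typically not answer, or runmqsc itself will sit there.
echo "PING QMGR" | runmqsc QM1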


A single QM on a single virtual server should pass 100% for hardware failure: a clear "Automatic" in my mind.
A single QM on a single virtual server should fail for a planned maintenance outage: when the QM or O/S is brought down, it is down until the maintenance is over. But it could be argued that the maintenance will be simpler, faster and less frequent than with the other solutions.
I guess it's the in-between unplanned failures that we really have to question, understanding that some of those failures sometimes fail to trigger the H.A. cluster technology at all, or, if they do, the exact same problem hits you on the second node anyway.


Asked another way: If you can deal with the planned outage windows, is a single QM on a single virtual server “good enough” for MQ H.A.? Is the simplest solution likely to have the greatest uptime at the end of the year?
Don’t forget, MQ is not a database!
_________________
Peter Potkay
Keep Calm and MQ On
mqjeff
Posted: Fri Feb 12, 2016 8:41 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

Traditional HA and MI should be the only ones that let you handle maintenance without significant downtime (other than failover time).

You could maybe do something with a virtual image where you store the MQ data files on shared disk: make a copy of the VM, apply maintenance to that copy, then stop the old VM, swap the disk over, and start the new one. The downtime would likely be on the order of MI, but it's riskier.
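
For comparison, a rough sketch of how the MI route handles a maintenance window, assuming a multi-instance queue manager QM1 (placeholder name) already running with an active and a standby instance:

Code:

# On the node you want to patch (currently the active instance):
endmqm -s QM1        # end this instance, switching over to the standby

# ...apply the O/S or MQ maintenance on that node, then rejoin as standby:
strmqm -x QM1

# Repeat from the other node to get it patched as well.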

In any case, if the queue manager crashes/FDCs because of data file corruption, none of these solutions will let you recover very well...
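
The only MQ-level answer I know of for a damaged object is media recovery, which needs linear logging and recorded media images, and that is independent of whichever H.A. technology sits in front of it. A rough sketch (QM1 and APP.QUEUE are placeholder names):

Code:

# Record a media image of the queue from time to time (linear logging only)
rcdmqimg -m QM1 -t ql -n APP.QUEUE

# After a damaged-object FDC, recreate the queue and its persistent
# messages from the last media image plus the linear logs
rcrmqobj -m QM1 -t ql -n APP.QUEUE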

Anything that can be solved by a restart (qmgr or OS) is much more resilient.
_________________
chmod -R ugo-wx /
smdavies99
Posted: Fri Feb 12, 2016 11:13 am

Jedi Council

Joined: 10 Feb 2003
Posts: 6076
Location: Somewhere over the Rainbow this side of Never-never land.

What OS are you planning on using?

If you are going to use Windows (shudder), you can't take a VM, copy it, and make it into an MSCS environment. There is a System ID held internally that stops this. Why? It's Windows, so why do you expect it to play nicely?

Now if it is Unix/Linux then it should be possible.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995

Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.