Setting Up High-Availability (Reasonably) MQ Environment
srindies
Posted: Sun Apr 16, 2006 12:46 am    Post subject: Setting Up High-Availability (Reasonably) MQ Environment
Apprentice
Joined: 06 Jan 2006    Posts: 28
Hi All,
I have been assigned the task of setting up an HA environment for our production system. Our production setup will be as below.
Machine: z890
OS: zLinux (over z/VM)
Application Server: WBISF 5.1
MQ: MQ Server 6.0
We are planning to have 2 nodes of the app server in different zLinux images, each node with MQ Server installed locally. The environment looks like this:
Quote:
ZIMG1 -- APPSVR1 -- MQSVR1
ZIMG2 -- APPSVR2 -- MQSVR2
APPSVR1 and APPSVR2 run in ACTIVE-ACTIVE mode to enable load balancing, so MQSVR1 and MQSVR2 should always be active to process messages from their respective app servers.
I thought of going for MQ clustering for HA, but after setting up a POC environment and a little googling I understood that clustering is meant for workload balancing and simplified administration, not failover.
If MQSVR1 fails, I need to switch APPSVR1's load over to MQSVR2. That does not seem possible, since we connect through the app server's MQ JMS provider. So for an MQ server failure I need to switch app servers instead. For example:
Quote:
MQSVR1 Fails:
Identify that ZIMG1 is a point of failure
Redirect all HTTP requests to ZIMG2
Is there any way to identify an MQ server failure and trigger the switch-over? I was looking around and found a sample Java PCF program on this site that reports channel status. Please check it below.
Code:
import java.io.*;

import com.ibm.mq.*;
import com.ibm.mq.pcf.*;

public class wmsGetChStatus
{
    public static void main (String [] args)
    {
        PCFAgent agent;
        PCFParameter [] parameters =
        {
            new MQCFST (CMQCFC.MQCACH_CHANNEL_NAME, "*"),
            // This is the default, so it doesn't NEED to be specified:
            new MQCFIN (CMQCFC.MQIACH_CHANNEL_INSTANCE_TYPE, CMQC.MQOT_CURRENT_CHANNEL)
            // new MQCFIN (CMQCFC.MQIACH_CHANNEL_INSTANCE_TYPE, CMQC.MQOT_SAVED_CHANNEL)
        };
        MQMessage [] responses;
        MQCFH cfh;
        PCFParameter p;
        String name = null;
        Integer iType = null;
        Integer mStatus = null;

        try
        {
            // Connect a PCFAgent to the specified queue manager
            if (args.length == 1)
            {
                System.out.print ("Connecting to local queue manager " + args [0] + "... ");
                agent = new PCFAgent (args [0]);
            }
            else
            {
                System.out.print ("Connecting to queue manager at " + args [0] + ":" +
                                  args [1] + " over channel " + args [2] + "... ");
                agent = new PCFAgent (args [0], Integer.parseInt (args [1]), args [2]);
            }
            System.out.println ("Connected.");

            // Use the agent to send the request.  Inquire Channel Status is the
            // command that returns the instance type and MCA status attributes
            // extracted below.
            System.out.print ("Sending PCF request... ");
            responses = agent.send (CMQCFC.MQCMD_INQUIRE_CHANNEL_STATUS, parameters);

            for (int i = 0; i < responses.length; i++)
            {
                // Check the PCF header (MQCFH) in the response message
                cfh = new MQCFH (responses [i]);

                if (cfh.reason == 0)
                {
                    // Extract what we want from the returned attributes
                    for (int j = 0; j < cfh.parameterCount; j++)
                    {
                        p = PCFParameter.nextParameter (responses [i]);

                        switch (p.getParameter ())
                        {
                        case CMQCFC.MQCACH_CHANNEL_NAME:
                            name = (String) p.getValue ();
                            break;
                        case CMQCFC.MQIACH_CHANNEL_INSTANCE_TYPE:
                            iType = (Integer) p.getValue ();
                            break;
                        case CMQCFC.MQIACH_MCA_STATUS:
                            mStatus = (Integer) p.getValue ();
                            break;
                        default:
                            break;
                        }
                    }
                    System.out.println ("Channel " + name +
                                        " Instance Type " + iType +
                                        " MCA Status " + mStatus);
                }
                else
                {
                    System.out.println ("PCF error:\n" + cfh);

                    // Walk through the returned parameters describing the error
                    for (int j = 0; j < cfh.parameterCount; j++)
                    {
                        System.out.println (PCFParameter.nextParameter (responses [i]));
                    }
                }
            }

            agent.disconnect ();
        }
        catch (ArrayIndexOutOfBoundsException abe)
        {
            System.out.println ("Usage: \n" +
                                "\tjava wmsGetChStatus queue-manager\n" +
                                "\tjava wmsGetChStatus host port channel");
        }
        catch (NumberFormatException nfe)
        {
            System.out.println ("Invalid port: " + args [1]);
            System.out.println ("Usage: \n" +
                                "\tjava wmsGetChStatus queue-manager\n" +
                                "\tjava wmsGetChStatus host port channel");
        }
        catch (MQException mqe)
        {
            System.err.println (mqe);
        }
        catch (IOException ioe)
        {
            System.err.println (ioe);
        }
    }
}
If the SVRCONN channel is inactive or stopped, I will execute the steps to switch over to the other app server.
I would like someone to shed some light on the above setup and suggest the best possible way to achieve this. I have tried to give a clear picture of our production environment so as to get better suggestions.
Thanks & Regards,
Sridhar H
fjb_saper
Posted: Sun Apr 16, 2006 4:50 am    Post subject:
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI,NY
To look for failures, explore the ExceptionListener in JMS; a sketch follows below. In any case, failures in MQ should be quite rare if you set it up correctly.
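Something like this minimal sketch, where the queue manager name is illustrative and the failover action is only a stub:
Code:
import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

import com.ibm.mq.jms.JMSC;
import com.ibm.mq.jms.MQConnectionFactory;

// A sketch only: "MQSVR1" is illustrative, and the listener body is a stub.
public class QmgrFailureListener
{
    public static void main (String [] args) throws JMSException
    {
        MQConnectionFactory factory = new MQConnectionFactory ();
        factory.setQueueManager ("MQSVR1");
        factory.setTransportType (JMSC.MQJMS_TP_BINDINGS_MQ_QM); // bindings connection

        Connection connection = factory.createConnection ();

        // Fires when the connection to the queue manager breaks, e.g. on a
        // queue manager failure -- the natural hook for switch-over logic.
        connection.setExceptionListener (new ExceptionListener ()
        {
            public void onException (JMSException jmse)
            {
                System.err.println ("Lost connection to queue manager: " + jmse);
                // ...announce the failure / re-route requests here...
            }
        });

        connection.start ();
    }
}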
Depending on what you intend to do, you may require a bindings connection and not a client connection. Thus APPSVR1 would automatically connect to MQSVR1 and APPSVR2 to MQSVR2. In this scenario I would make the following assumptions:
a) The queues (names) are the same on both qmgrs.
b) For failover reasons, each qmgr contains a queue manager alias for the other (see the sketch after this list). This allows it to process messages originally meant for the other qmgr when the DNS name/IP fails over, i.e. messages can still be dispatched when MQ fails without a node failure.
c) The two active/active qmgrs, one on each node, are part of an MQ cluster.
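For (b), a queue manager alias is just a remote queue definition with a blank RNAME; a sketch in MQSC, with names taken from the setup above:
Code:
* On MQSVR1: let messages addressed to queue manager MQSVR2 resolve locally
DEFINE QREMOTE('MQSVR2') RNAME(' ') RQMNAME('MQSVR1') XMITQ(' ') REPLACE
* On MQSVR2: the mirror-image definition
DEFINE QREMOTE('MQSVR1') RNAME(' ') RQMNAME('MQSVR2') XMITQ(' ') REPLACE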
Enjoy
_________________
MQ & Broker admin
srindies
Posted: Sun Apr 16, 2006 5:52 am    Post subject:
Apprentice
Joined: 06 Jan 2006    Posts: 28
Thanks saper,
I read in one of the Redbooks that zLinux (Linux on S/390) only supports client connections, not bindings connections.
a) Both queue managers are identical.
b) We never want to connect to a remote queue manager, even under failure (because of the risk of transmitting data over the TCP network). So if the MQ server on any LPAR or image (say MQSVR1) fails, we treat it as an image failure and never forward a request to that server.
c) Given that, I don't see what we would gain by putting these 2 queue managers in a cluster.
Thanks & Regards,
Sridhar H
jefflowrey
Posted: Sun Apr 16, 2006 6:56 am    Post subject:
Grand Poobah
Joined: 16 Oct 2002    Posts: 19981
In this type of architecture, I would put the app server and the queue manager in the same unit of failover.
So if EITHER APPSVR1 or MQSVR1 fails, then BOTH fail over to the other node.
With only two nodes, the only advantage MQ clustering provides is workload balancing. If that is important to your application, then use it; if it's not, then don't. If all of the work on the MQ side is generated by the app servers, then presumably those are already workload-balanced on the web side, so it's not clear that you would gain anything from clustering.
_________________
I am *not* the model of the modern major general.
srindies
Posted: Sun Apr 16, 2006 10:48 pm    Post subject:
Apprentice
Joined: 06 Jan 2006    Posts: 28
Thanks jeff,
Now I would like to know how to identify a queue manager failure, declare it an image failure, and re-route all requests to the other node.
Would the failure of the queue manager's SVRCONN channel be a reasonable indicator? I am planning to use a standalone Java PCF daemon to watch the status of the channel. If the channel becomes inactive/stopped, I will announce the failure and re-route requests to the next image, roughly as sketched below.
Is there any better/simpler way to achieve this?
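The skeleton I have in mind is something like this (a sketch only: the queue manager name and polling interval are illustrative, and the failover action is a stub):
Code:
import com.ibm.mq.MQException;
import com.ibm.mq.pcf.PCFAgent;

// A sketch of the monitoring daemon: poll the queue manager over PCF and
// treat any connection failure as an image failure.  "MQSVR1" and the
// 30-second interval are illustrative; the failover hook is a stub.
public class QmgrMonitorDaemon
{
    public static void main (String [] args) throws InterruptedException
    {
        final String qmgr = "MQSVR1";

        while (true)
        {
            try
            {
                // A successful PCF connection means the queue manager is up
                PCFAgent agent = new PCFAgent (qmgr);
                agent.disconnect ();
            }
            catch (MQException mqe)
            {
                System.err.println ("Queue manager " + qmgr + " unreachable: " + mqe);
                // ...announce image failure and re-route requests here...
            }
            Thread.sleep (30 * 1000);
        }
    }
}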
Thanks & Regards,
Sridhar H
jefflowrey
Posted: Mon Apr 17, 2006 3:45 am    Post subject:
Grand Poobah
Joined: 16 Oct 2002    Posts: 19981
The HACMP SupportPac determines this by running the "PING QMGR" command through runmqsc; a rough Java equivalent is sketched below.
_________________
I am *not* the model of the modern major general.
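A sketch of that check (the queue manager name is illustrative, and runmqsc must be on the PATH):
Code:
import java.io.IOException;
import java.io.OutputStream;

// A sketch: pipe "PING QMGR" into runmqsc and treat a non-zero exit code
// as a failed queue manager.  "MQSVR1" is illustrative.
public class PingQmgr
{
    public static void main (String [] args) throws IOException, InterruptedException
    {
        Process p = new ProcessBuilder ("runmqsc", "MQSVR1")
                        .redirectErrorStream (true)
                        .start ();

        OutputStream stdin = p.getOutputStream ();
        stdin.write ("PING QMGR\n".getBytes ());
        stdin.close ();

        int rc = p.waitFor ();
        System.out.println (rc == 0 ? "Queue manager responded"
                                    : "Queue manager unreachable (rc=" + rc + ")");
    }
}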