
MQSeries.net Forum Index » Clustering » Stop queue instance when client has issues

RustyNail
PostPosted: Thu Jun 21, 2007 12:08 pm    Post subject: Stop queue instance when client has issues

Newbie

Joined: 13 Jun 2007
Posts: 4
Location: Wisconsin

I don't know if there is already an answer for this one, but here goes. I have a Weblogic JEE cluster with server instances on several AIX machines listening to a clustered MQ queue.

1) I assume that I bind each WL instance to the queue on the local queue manager, or can I bind them in at the cluster level?

2) I assume each WL will consume messages that end up in the queue on the local machine. Can I stop messages going to that instance if I detect a pending failure (or slowdown) in my WL server?

I understand that I will not get failover from MQ, but I do want high availability. High availability is achieved by a combination of MQ clustering matching up with WL server clustering. To achieve this, MQ must know the status of the WL server cluster, and to a lesser extent, WL must know the status of MQ.

Any words of wisdom?

Thanks
-Rusty
jefflowrey
PostPosted: Thu Jun 21, 2007 12:12 pm

Grand Poobah

Joined: 16 Oct 2002
Posts: 19981

You always connect to a specific queue manager, not to the cluster as a whole.

You can share or unshare queues in the cluster very easily. A queue that is not shared in the cluster shouldn't receive any new work from the cluster. UNLESS the applications sending to it have used BIND_ON_OPEN when opening the queue - this could cause them to address messages to a specific queue instance regardless of the current workload balancing.
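For illustration only, a rough MQSC sketch of both levers; the queue name APP.REQUEST and cluster name MYCLUSTER are placeholders:

```mqsc
* Take this queue instance out of the cluster so the workload
* algorithm stops sending it new messages:
ALTER QLOCAL(APP.REQUEST) CLUSTER(' ')

* Reduce the BIND_ON_OPEN exposure: with DEFBIND(NOTFIXED),
* applications that open with MQOO_BIND_AS_Q_DEF are re-balanced
* on every put (apps that explicitly ask for BIND_ON_OPEN are
* still pinned to one instance):
ALTER QLOCAL(APP.REQUEST) DEFBIND(NOTFIXED)

* Put the queue back into the cluster when the consumer recovers:
ALTER QLOCAL(APP.REQUEST) CLUSTER(MYCLUSTER)
```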
_________________
I am *not* the model of the modern major general.
fjb_saper
PostPosted: Thu Jun 21, 2007 7:36 pm

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20756
Location: LI,NY

Wouldn't forcing the resolution through a cluster alias take care of the defbind on open problem?
_________________
MQ & Broker admin
jefflowrey
PostPosted: Fri Jun 22, 2007 3:35 am


fjb_saper wrote:
Wouldn't forcing the resolution through a cluster alias take care of the defbind on open problem?


One would hope.
RustyNail
PostPosted: Fri Jun 22, 2007 5:11 am


Thanks for the help!

Can I use a cluster alias to read from the queue as well as write? That would solve many problems. If the listeners in my Weblogic instances all listened through the cluster alias, they would act as competing consumers. That way, if one of my listeners goes down, the remaining listeners would still consume the remaining messages.

I have this picture in my head that when I write to a clustered queue, the messages are sent to a queue manager in the cluster, and the only way to pull them out is to have a listener on that queue manager. My goal is to be able to have listeners connected to each queue manager of the cluster and be able to shut down any of those listeners without ending up with messages sitting in a queue where there is no listener to process them.

The scenario is request-reply. A consumer sends a message across MQ to a provider, who needs to turn around a correlated answer within 5 seconds. For high availability, if one instance of the provider goes down, the messages must automatically route to surviving provider instances for processing.

-Russ
jefflowrey
PostPosted: Fri Jun 22, 2007 5:27 am


A cluster alias is not what you think it is.

You always connect to only one queue manager, never to the cluster as a whole.

You can always only get messages from local queues on the queue manager you're connected to.
Vitor
PostPosted: Fri Jun 22, 2007 5:27 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

RustyNail wrote:
I have this picture in my head that when I write to a clustered queue the messages are sent to a queue manager in the cluster and the only way to pull them out is to have a listener on that queue manager. My goal is to be able to have listeners connected to each queue manager of the cluster and be able to shut down any of those listeners and not end up with messages sitting in a queue where there is not a listener to process it.


Your picture is a good one. Because applications connect to a single queue manager, not to a cluster, they can only get messages from queues local to their hosting queue managers.

Likewise, messages distributed through a cluster are still stored on a given queue manager which participates in the cluster, not in the cluster generally. Hence if a given queue manager goes down, any unprocessed messages delivered to its queues are inaccessible until the queue manager comes back; either the original or a failed-over copy, depending on your set-up.

The term is "orphaned messages", and if you search you'll see many discussions of this effect and various strategies to mitigate it. You'll also find many rants from me about how MQ clustering is not suitable as a high-availability solution, because what it provides is workload balancing, not failover. HA needs an HA solution from HA software.
_________________
Honesty is the best policy.
Insanity is the best defence.
RustyNail
PostPosted: Fri Jun 22, 2007 7:59 am


I see two aspects of HA.

1) If a single node of the system goes down, future requests are routed to remaining nodes.

2) In-flight requests are migrated to remaining nodes upon failure of another.

For sure #2 is not supplied by MQ clustering without some type of manual intervention to migrate the messages to a different queue manager. I am more interested in #1. I think it happens automatically if a queue manager goes down. I need to programmatically make it happen if my application goes down, by monitoring the application and put-disabling the queue if the application instance fails.

It would be even better if, when the application instance fails, another instance of the application (running on different hardware) could consume the messages from the queue manager running on the hardware where the application failed (assuming the queue manager is still alive). I assume this cannot be done since applications can "only get messages from queues local to their hosting queue managers". Can messages be automatically forwarded to a different queue manager in the cluster that hosts the same queue?

In my humble experience, applications fail 50x more often than MQ goes down. If I can make the system tolerate application failures, then our overall HA increases 50-fold!

Thanks
-Rusty
Vitor
PostPosted: Fri Jun 22, 2007 12:47 pm


RustyNail wrote:
I need to programatically make it happen if my application goes down by monitoring the application and put-disabling the queue if the application instance fails.


If you're staying within the MQ world, you could set a depth event to trigger some kind of PCF application, so that when the application goes down and more than an acceptable number of messages are stuck on the queue, the queue is put-disabled. Not nice, and it does have a few problems, but it could work.
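A minimal sketch of the MQSC that arms such a depth event (queue name and threshold are placeholders); the resulting event message arrives on SYSTEM.ADMIN.PERFM.EVENT.QUEUE, where the PCF application can pick it up:

```mqsc
* Enable performance events on the queue manager:
ALTER QMGR PERFMEV(ENABLED)

* Emit a Queue Depth High event once the queue reaches 80%
* of its MAXDEPTH:
ALTER QLOCAL(APP.REQUEST) QDEPTHHI(80) QDPHIEV(ENABLED)
```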

RustyNail wrote:
It would be even better if when the application instance fails that another instance of the application (running on different hardware) could consume the messages from the queue manager running on the hardware where the application failed (assuming the queue manager is still alive). I assume this cannot be done since applicaitons can "only get messages from queues local to thier hosting queue managers". Can messages be automatically forwarded to a different manager in the cluster that hosts the same queue?


Only by an application, perhaps one on a depth trigger as above. But then, if you're starting an application to put-disable the queue or move the messages, why not just start the application to process the messages again?
RustyNail
PostPosted: Fri Jun 22, 2007 1:18 pm


My application is running in a Weblogic server cluster. I plan on using JMX notifications to generate SNMP traps which can be monitored by NetCool. Then NetCool can trigger a script (or whatever) to put-disable the queue. This way I can disable the queue when the application is showing significant stress. If all goes well, I can use the same process to put-enable the queue if/when the server clears up. A depth trigger may also be an option to enable the queue (and as a safety net for disabling the queue).
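As a sketch, the MQSC such a script would issue (e.g. piped into runmqsc against the local queue manager; queue and queue manager names are placeholders):

```mqsc
* On application distress (e.g. echo "..." | runmqsc QM1):
ALTER QLOCAL(APP.REQUEST) PUT(DISABLED)

* When the WebLogic instance clears up:
ALTER QLOCAL(APP.REQUEST) PUT(ENABLED)
```

A put-inhibited instance should be ignored by the cluster workload algorithm, so new messages flow to the surviving instances while the sick one drains.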

Thanks to everyone for all the help. It validated much of what I already thought was needed. Now we just have to set the standard deployment model and build the common infrastructure to accomplish it!

-Rusty
fjb_saper
PostPosted: Sat Jun 23, 2007 5:20 am


RustyNail wrote:
My application is running in a Weblogic server cluster. I plan on using JMX notifications to generate SNMP traps which can be monitored by NetCool. Then NetCool can trigger a script (or whatever) to put-disable the queue. This way I can disable the queue when the application is showing significant stress. If all goes well, I can use the same process to put-enable the queue if/when the server clears up. A depth trigger may also be an option to enable the queue (and as a safety net for disabling the queue).

Thanks to everyone for all the help. It validated much of what I already thought was needed. Now we just have to set the standard deployment model and build the common infrastructure to accomplish it!

-Rusty


That works if you have only one instance of Weblogic per box/queue manager.
If you have multiple instances, there is no telling which instance is processing, as it is on a first-come, first-served basis.
_________________
MQ & Broker admin