best practice: programmatically remove queue from cluster

pjacob
PostPosted: Mon Jul 29, 2002 10:41 am    Post subject: best practice: programmatically remove queue from cluster

We are developing a monitoring tool that constantly checks certain system dependencies (applications, services and servers). In the event of a dependency failing (e.g. the database goes down), this tool needs to programmatically remove or disable a (local) shared queue from an MQSeries cluster so that the queue receives no new messages. So far we have not written a workload exit routine; the round-robin scheme works for us.

What is the best method to programmatically remove / alter / disable a queue from the cluster it belongs to in order to prevent it from receiving new messages?

Options we know of:
1. an MQSC script to alter the queue and remove its cluster membership
2. disabling put for the queue (what are the caveats of this approach? some of our developers report that it does not work consistently) -- see the sketch below

others?
3. MQAI to alter cluster membership
4. ADSI to alter cluster membership
5. stopping the queue manager (there are no other queues)
...

Limitations: our MQSeries apps run on NT / 2000, and our programming preference is the MQSeries COM interface.
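
For reference, a minimal sketch of option 2 done programmatically. It is only an illustration: it uses the modern pymqi Python bindings rather than the COM interface mentioned above, PCF commands rather than MQAI, and the queue manager name, channel and host are hypothetical placeholders.

Code:
import pymqi
from pymqi import CMQC

# Hypothetical connection details -- adjust for the real environment.
qmgr = pymqi.connect('QM1', 'SYSTEM.DEF.SVRCONN', 'mqhost(1414)')
try:
    pcf = pymqi.PCFExecute(qmgr)
    # PCF equivalent of the MQSC command: ALTER QLOCAL(THEQUEUE) PUT(DISABLED)
    pcf.MQCMD_CHANGE_Q({
        CMQC.MQCA_Q_NAME: b'THEQUEUE',
        CMQC.MQIA_Q_TYPE: CMQC.MQQT_LOCAL,
        CMQC.MQIA_INHIBIT_PUT: CMQC.MQQA_PUT_INHIBITED,
    })
    # To re-enable later, send CMQC.MQQA_PUT_ALLOWED instead.
    # Removing the queue from the cluster entirely (option 1) would instead
    # clear its CLUSTER attribute, e.g. ALTER QLOCAL(THEQUEUE) CLUSTER(' ').
finally:
    qmgr.disconnect()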

thanks.
-Pritish
_________________
Sr. Architect, PMI Group
nimconsult
PostPosted: Mon Jul 29, 2002 9:41 pm

I would say that the best solution is to put-disable the queue. The advantages are:
- easy to implement
- safe (I will come back to this later)
- propagated in the cluster, so the sender is aware of the status before the message reaches the queue manager that hosts the queue.

This is the most efficient solution if the applications connect to one of the queue managers of the cluster. The application will receive a reason code MQRC_CLUSTER_PUT_INHIBITED either at MQOPEN time, or at MQPUT time if they were already connected when the queue was disabled. It is up to the application developer to make sure that the error is handled smoothly.
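
A hedged sketch of that application-side handling, assuming the pymqi Python bindings and hypothetical connection details (the principle is the same in any of the MQ APIs):

Code:
import pymqi
from pymqi import CMQC

qmgr = pymqi.connect('QM3', 'SYSTEM.DEF.SVRCONN', 'mqhost(1414)')
try:
    # Opening for output and putting can both fail with
    # MQRC_CLUSTER_PUT_INHIBITED when every instance of the
    # cluster queue is put-disabled.
    queue = pymqi.Queue(qmgr, b'LQ.CLUSTER', CMQC.MQOO_OUTPUT)
    queue.put(b'payload')
    queue.close()
except pymqi.MQMIError as e:
    if e.reason in (CMQC.MQRC_CLUSTER_PUT_INHIBITED, CMQC.MQRC_PUT_INHIBITED):
        # Handle the condition smoothly: back off, alert, or route elsewhere.
        print('queue is put-inhibited, message not sent')
    else:
        raise
finally:
    qmgr.disconnect()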

If the applications connect to a queue manager outside of the cluster, the messages will pile up in the system dead-letter queue of the first queue manager in the cluster that receives them (with the same reason code MQRC_CLUSTER_PUT_INHIBITED). In this case you have different alternatives:
- if you cannot afford to lose the messages going to the dead-letter queue, write a dead-letter queue handler to re-post the messages when the cluster queue is enabled again;
- if you can afford to lose the messages, you have multiple options: put an expiry on the messages, set MQRO_DISCARD_MSG so they never reach the dead-letter queue at all (as in the sketch below), or clean up the dead-letter queue with a dead-letter queue handler.
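
A sketch of the "can afford to lose the messages" options (an expiry plus the MQRO_DISCARD_MSG report option), again assuming pymqi and hypothetical names:

Code:
import pymqi
from pymqi import CMQC

qmgr = pymqi.connect('QM3', 'SYSTEM.DEF.SVRCONN', 'mqhost(1414)')
try:
    md = pymqi.MD()
    md.Expiry = 300                     # expiry in tenths of a second (30 s)
    md.Report = CMQC.MQRO_DISCARD_MSG   # discard instead of dead-lettering
    queue = pymqi.Queue(qmgr, b'LQ.CLUSTER', CMQC.MQOO_OUTPUT)
    queue.put(b'payload', md)
    queue.close()
finally:
    qmgr.disconnect()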

Is the option safe?
- Yes, if you disable the local queue itself and not a queue alias; you can be sure that no further put is possible.
- However, if there is a pending logical unit of work, you may have the impression that messages "appear" on the queue after it has been disabled (example scenario: MQPUT, disable queue, MQCMIT -- sketched below).
- Do you know of other unsafe situations that your developers have reported?
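
To make the unit-of-work scenario concrete, a sketch (pymqi, hypothetical names) of the MQPUT / disable / MQCMIT sequence:

Code:
import pymqi
from pymqi import CMQC

qmgr = pymqi.connect('QM1', 'SYSTEM.DEF.SVRCONN', 'mqhost(1414)')
try:
    queue = pymqi.Queue(qmgr, b'THEQUEUE', CMQC.MQOO_OUTPUT)
    pmo = pymqi.PMO(Options=CMQC.MQPMO_SYNCPOINT)
    queue.put(b'payload', pymqi.MD(), pmo)  # 1. MQPUT inside a unit of work
    # 2. ... the queue is put-disabled by the monitoring tool here ...
    qmgr.commit()                           # 3. MQCMIT: the message still "appears"
    queue.close()
finally:
    qmgr.disconnect()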

I hope that this contribution will help,
_________________
Nicolas Maréchal
Senior Architect - Partner

NIMCONSULT Software Architecture Services (Belgium)
http://www.nimconsult.be
bduncan
PostPosted: Tue Jul 30, 2002 6:19 am

Nicolas is right... Put disabling is the way to go. For the reasons he mentioned... The workload balancing algorithm will automatically skip any queue instances that are put disabled.

This method is actually one we used in a production cluster environment for several months (before the company went under -- hopefully not because of our MQ setup!). If you can imagine, we had n application servers each creating an MQ message representing a database update that had to occur on m database machines. So we had an additional queue manager in the cluster that simply acted as a middleman: all database update messages would be sent there, and an application on that machine would create duplicates of the message and send one copy to the queue manager on each database machine.
Now, if a particular database fell behind or couldn't make the updates for some reason, it wasn't in sync with the other databases anymore, and we didn't want it servicing any lookup requests. To effectively take it offline, this middleman application would go in and put disable the request queue on that database, which meant that any application server attempting to do a database lookup (by sending an MQ message to a database in the cluster) would automatically skip that particular database because of the workload balancing algorithm. When the database caught back up, we could take it back online immediately by just put enabling the request queue.
This worked like a charm and we never had any issues with it...
_________________
Brandon Duncan
IBM Certified MQSeries Specialist
MQSeries.net forum moderator
nimconsult
PostPosted: Tue Jul 30, 2002 9:56 pm

Quote:
In fact, I'd already tried the course of action you suggested. But I still had the problem of message redirection when the messages encounter the put-disabled queue.
I need for those messages to seamlessly be redirected to the other available clustered queues in the cluster in a round-robin fashion, without piling-up in the dead letter queue or elsewhere. Could you give me some suggestions on how to do that?


Pritish,

If you have encountered problems where messages go to the dead-letter queue, can you please explain the circumstances?

My mention of messages going to the dead-letter queue referred exclusively to the case where *ALL* instances of the cluster queue are unavailable (put-disabled). If at least one instance of the cluster queue is available (put-enabled), the traffic is routed to that instance (or distributed round-robin if multiple instances are available). Even messages "in transit" to a put-disabled queue are re-dispatched in the cluster to an available instance.

I have performed the following test:

- create a cluster MYCLUSTER with QM1, QM2, QM3.
- create a cluster queue LQ.CLUSTER on QM1 and QM2. Set them to put-enabled.
- a client application connects to QM3 and starts posting messages (infinite loop) on cluster queue LQ.CLUSTER (a producer along these lines is sketched below).
- at that time I can see an even distribution of the messages between QM1 and QM2.
- put-disable LQ.CLUSTER on QM1. What happens then? New messages produced by the application are routed exclusively to QM2. The messages that were in transit to QM1 are re-dispatched to QM2. How can I prove it? Open MQSeries Explorer, select queue manager QM1, open the channel status of TO_QM2: the message counter increases by several units each time I put-disable the queue (one message broadcasts the new status of the queue; the other messages are re-routed application messages).
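
For completeness, a sketch of the client producer used in this test, assuming pymqi and hypothetical connection details; the queue is opened with MQOO_BIND_NOT_FIXED so that each message is eligible for workload balancing:

Code:
import time
import pymqi
from pymqi import CMQC

qmgr = pymqi.connect('QM3', 'SYSTEM.DEF.SVRCONN', 'qm3host(1414)')
queue = pymqi.Queue(qmgr, b'LQ.CLUSTER', CMQC.MQOO_OUTPUT | CMQC.MQOO_BIND_NOT_FIXED)
try:
    n = 0
    while True:                      # the "infinite loop" of the test
        queue.put(('message %d' % n).encode())
        n += 1
        time.sleep(0.1)              # pace the loop while watching the channels
finally:
    queue.close()
    qmgr.disconnect()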

Does this answer your question?

Kind Regards,

Nicolas
_________________
Nicolas Maréchal
Senior Architect - Partner

NIMCONSULT Software Architecture Services (Belgium)
http://www.nimconsult.be
ryddan
PostPosted: Tue Jul 30, 2002 11:24 pm

You could stop the cluster receiver channel. No cluster queue on that queue manager will receive messages after that.
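
A sketch of that alternative (equivalent MQSC: STOP CHANNEL(TO_QM1)), assuming pymqi and a hypothetical cluster-receiver channel name; see the caveats in the next reply:

Code:
import pymqi
from pymqi import CMQCFC

qmgr = pymqi.connect('QM1', 'SYSTEM.DEF.SVRCONN', 'qm1host(1414)')
try:
    pcf = pymqi.PCFExecute(qmgr)
    # PCF equivalent of the MQSC command: STOP CHANNEL(TO_QM1)
    pcf.MQCMD_STOP_CHANNEL({CMQCFC.MQCACH_CHANNEL_NAME: b'TO_QM1'})
finally:
    qmgr.disconnect()
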
pjacob
PostPosted: Wed Jul 31, 2002 7:51 am    Post subject: reasons for DLQ messages

Quote:
If you have encountered problems where messages go to the dead-letter queue, can you please explain the circumstances?


Basically, our test was flawed, but only slightly. QM1, QM2 and QM3 are in cluster X, with a shared queue THEQUEUE existing on QM1 and QM2 but not on QM3. Connect to QM3 as an MQSeries client (i.e. from a separate machine) and start sending messages to THEQUEUE, meeting all the standard requirements for even distribution. Notice even, round-robin distribution. Put-disable either queue instance, and see half of all subsequent messages go to the dead-letter queue.

To "fix": move the application to the server on which QM3 resides, and re-run as a direct connection to the MQSeries Server, QM3. Put disable the either Queue Manager (QM1/QM2) and watch all following messages get routed to the enabled Queue.

-Pritish
_________________
Sr. Architect, PMI Group
bduncan
PostPosted: Wed Jul 31, 2002 8:30 am

Ryddan,
The only problem with your approach is that you've effectively put-disabled all the clustered queues on the queue manager. If you only have one clustered queue on that queue manager, fine, but if you have more, you can't continue to send messages to them either. Another problem is that when you stop the cluster receiver channel, the queue manager can no longer receive updates from the full repositories. So if you add another queue manager to the cluster while this queue manager's cluster receiver channel is stopped, it won't be informed of the change. I would recommend not stopping the cluster receiver channel when you can achieve finer granularity and fewer side effects by put-disabling the one queue you are interested in.
_________________
Brandon Duncan
IBM Certified MQSeries Specialist
MQSeries.net forum moderator
nimconsult
PostPosted: Wed Jul 31, 2002 10:33 pm

Thanks for your mail.

Happy to learn that you now have it working.

This is a really great site!
_________________
Nicolas Maréchal
Senior Architect - Partner

NIMCONSULT Software Architecture Services (Belgium)
http://www.nimconsult.be