cloud9
Posted: Fri Aug 13, 2004 8:13 am Post subject: Cluster status info avail to trigger process versus CLWexit
Novice
Joined: 18 Jul 2003 Posts: 13 Location: Jacksonville, FL
Greetings MQ gods. I have a z/OS based qmgr (QM1) which needs to put request messages to a clustered queue hosted on two AIX based qmgrs (QM2 & QM3). The requirements are that all requests go to the "primary" QM2, and only go to the alternate QM3 if QM2 is unavailable or has its cluster queue PUT(disabled). I am not able to use a cluster workload exit because it would apply to the entire qmgr (QM1), and there are other applications using that qmgr which require the normal (default) cluster workload balancing algorithm. So I am thinking of using a triggered process to act as a cluster workload exit just for the one queue that this application uses. It could populate the destination qmgr field in the transmission queue header to always route to QM2 first if it is available, and to QM3 if not. My question is this: will I have access to the same information about remote qmgr and cluster queue availability that a cluster workload exit would have? I am concerned that the way a CLW exit is invoked (differently from the way a triggered process is invoked) makes certain information available to it that a triggered process cannot access. Any insight on this is greatly appreciated! Thanks!
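To show what I mean, the triggered process would do something along these lines. This is only a minimal sketch with made-up names (APP.REQUEST.QUEUE, route_request), and I have not tried it:

Code:
/* Illustrative only: put one request to the "primary" instance of a
 * clustered queue, falling back to the alternate if the put fails.
 * APP.REQUEST.QUEUE, QM2 and QM3 are example names.                */
#include <string.h>
#include <cmqc.h>

static MQLONG put_to(MQHCONN hConn, char *qmgr, MQMD *pMd,
                     PMQVOID buf, MQLONG buflen)
{
    MQOD   od  = {MQOD_DEFAULT};
    MQPMO  pmo = {MQPMO_DEFAULT};
    MQLONG cc, rc;

    strncpy(od.ObjectName, "APP.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
    strncpy(od.ObjectQMgrName, qmgr, MQ_Q_MGR_NAME_LENGTH); /* pin one instance */
    pmo.Options = MQPMO_FAIL_IF_QUIESCING;

    MQPUT1(hConn, &od, pMd, &pmo, buflen, buf, &cc, &rc);
    return (cc == MQCC_OK) ? MQRC_NONE : rc;
}

/* Called for each message the trigger monitor hands us. Which reason
 * codes actually surface here for a down or inhibited cluster
 * destination needs testing -- that is exactly my question above.  */
MQLONG route_request(MQHCONN hConn, MQMD *pMd, PMQVOID buf, MQLONG buflen)
{
    MQLONG rc = put_to(hConn, "QM2", pMd, buf, buflen);
    if (rc != MQRC_NONE)          /* primary unavailable or inhibited */
        rc = put_to(hConn, "QM3", pMd, buf, buflen);
    return rc;
}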
PeterPotkay
Posted: Fri Aug 13, 2004 5:22 pm Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
Quote:
The requirements are that all requests go to the "primary" QM2, and only go to the alternate QM3 if QM2 is unavailable or has its cluster queue PUT(disabled).
Use the NETPRTY attribute to specify a priority for the channel. This helps the workload management routines: if there is more than one possible route to a destination, the workload management routine selects the one with the highest priority.
Set the priority of the CLUSRCVR channel to QM2 higher than that of the CLUSRCVR channel to QM3. If both QMs are up, then QM2 will get all the work. If there are any problems with QM2 (including PUT_INHIBIT, since that factors into the algorithm), then QM3 gets the work.
The problem is this affects ALL the queues in that cluster. So make another cluster on top of the current one: just make separate channels for the new cluster, and cluster this problem queue only in the new cluster. It's not as bad as it sounds; I have had overlapping clusters like this in production for over a year. One cluster has channels with an NPMSPEED of NORMAL, and the other cluster has FAST channels.
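Off the top of my head, the definitions would look something like this -- the cluster name, channel names and connames are just examples, so adjust to taste:

Code:
* New overlapping cluster APPCLUS, just for the problem queue.
* On QM2, the preferred host: higher network priority.
DEFINE CHANNEL(APPCLUS.QM2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') CLUSTER(APPCLUS) NETPRTY(2)
* On QM3, the alternate: lower network priority.
DEFINE CHANNEL(APPCLUS.QM3) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('qm3host(1414)') CLUSTER(APPCLUS) NETPRTY(1)
* On both, cluster only the one problem queue in the new cluster.
DEFINE QLOCAL(APP.REQUEST.QUEUE) CLUSTER(APPCLUS)
* (Plus the usual CLUSSDR definitions pointing at whichever qmgrs
* hold the full repositories for APPCLUS.)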
I have never fiddled with NETPRTY, so please test that it works the way the manuals say it does. The one thing I am not sure of is whether PUT-inhibiting the queue on QM2 drops it low enough in the algorithm that the higher NETPRTY of QM2 is overridden. Dunno. Of course, with a separate cluster, that's no big deal: if someone can manually inhibit that queue, they can just as easily manually STOP that CLUSRCVR channel, which will definitely force the messages over to QM3. _________________ Peter Potkay
Keep Calm and MQ On
cloud9
Posted: Sun Aug 15, 2004 4:12 pm Post subject:
Novice
Joined: 18 Jul 2003 Posts: 13 Location: Jacksonville, FL
Thanks for the lengthy reply, Peter. I actually tried the NETPRTY setting on the receiver channels, but it only applies when there is more than one channel to the same destination; it has no impact on the actual selection of an eligible destination. The IBM documentation on the cluster workload algorithm is misleading about the NETPRTY feature, though it is explained more clearly in other places. You are right that the application should be able to stop the channel just as easily as PUT(disabling) a queue. The challenge is to have the primary channel used at all times unless the primary destination is unavailable, at which point the switch to the alternate destination should be automatic. It's too bad there isn't a tunable attribute in the cluster workload algorithm to accomplish this. I had high hopes for the NETPRTY attribute, but it didn't work; the load was still split 50/50 between the two destinations QM2/QM3.
MQGrimley
Posted: Mon Aug 16, 2004 12:56 am Post subject:
Novice
Joined: 29 Jun 2004 Posts: 10
Hi. SupportPac MC76 only applies to Windows, and was created in 2001, but it may still be worth checking out for its logic, supporting documentation, etc.
PeterPotkay
Posted: Mon Aug 16, 2004 4:36 am Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
Sorry cloud9. None of the manuals make the point that NETPRTY only applies when there are multiple channels to the same destination. Obviously that invalidates my suggestion.
The only place I have been able to find that very important little fact was on the MC76 web page.
I doubt there is any solution other than replacing the default exit. That SupportPac describes your problem and solution EXACTLY. I would think that if there were an easier way to accomplish this, the SupportPac would not exist.
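For what it's worth, the overall shape of such an exit is roughly the following. This is a bare, untested skeleton based on the documented MQWXP/MQWDR exit structures; the preferred qmgr name and the channel-state test are illustrative, the exit gets named to the queue manager via its cluster workload exit attribute, and MC76 is the place to look for a real, tested implementation:

Code:
/* Untested skeleton of a cluster workload exit that prefers one
 * destination. QM2 is the example "primary".                     */
#include <string.h>
#include <cmqc.h>
#include <cmqxc.h>   /* MQWXP, MQWDR, MQXR_*, MQXCC_* */
#include <cmqcfc.h>  /* MQCHS_* channel-state values  */

void MQENTRY ClusterWorkloadExit(PMQWXP pParms)
{
    MQLONG i;

    pParms->ExitResponse = MQXCC_OK;

    if (pParms->ExitReason != MQXR_CLWL_PUT)
        return;                        /* keep the default behaviour */

    /* Walk the candidate destinations; choose the preferred qmgr if
     * its channel is not stopped, else leave the default choice.   */
    for (i = 0; i < pParms->DestinationCount; i++) {
        PMQWDR pDest = pParms->DestinationArrayPtr[i];
        if (memcmp(pDest->QMgrName, "QM2 ", 4) == 0 &&
            pDest->ChannelState != MQCHS_STOPPED) {
            pParms->DestinationChosen = i + 1;   /* 1-based index */
            break;
        }
    }
}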
_________________ Peter Potkay
Keep Calm and MQ On
MQGrimley
Posted: Mon Aug 16, 2004 9:32 am Post subject:
Novice
Joined: 29 Jun 2004 Posts: 10
If cloud9 took up the suggestion of an overlapping cluster just for this queue, he would have cluster channels dedicated to it. If he had a triggered queue on the mainframe, this could start a mainframe process which would attempt to send to the preferred queue manager. If he specified a BATCHHB on the sender channel, the messages would only be sent if the receiver channel (and queue manager) were available. Otherwise the messages would be backed out, "and may be re-routed", whatever that means. Could he therefore send the messages to the non-preferred queue manager? This assumes the queue will never be PUT inhibited; to achieve that effect, he would stop the cluster-receiver instead. Not quite to specification, but it avoids changing the default workload exit for the mainframe.
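For example, on the dedicated channels from the overlapping-cluster idea -- the channel names and the 5-second value are just illustrations, and BATCHHB is set on the CLUSRCVR so the auto-defined cluster senders inherit it:

Code:
* Batch heartbeat: the sending end verifies the receiving end is
* still active just before committing a batch; if not, the batch
* is backed out instead of going in-doubt. Value in milliseconds.
ALTER CHANNEL(APPCLUS.QM2) CHLTYPE(CLUSRCVR) BATCHHB(5000)
ALTER CHANNEL(APPCLUS.QM3) CHLTYPE(CLUSRCVR) BATCHHB(5000)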
_________________ Vince Grimley
cloud9
Posted: Mon Aug 23, 2004 7:04 am Post subject:
Novice
Joined: 18 Jul 2003 Posts: 13 Location: Jacksonville, FL
Thanks for your suggestions, guys. I concluded that trying to force MQ to do something it wasn't designed for is like trying to push spaghetti uphill, so I took the easy way out and convinced the application developer to accept the default round-robin balancing that MQ does so well. They were able to change the requirements. I don't plan on researching this route any further unless IBM comes up with a configurable way to tune the MQ cluster workload algorithm. Thanks again for your help.