exerk
Posted: Thu Jul 08, 2010 9:07 am Post subject:
Jedi Council
Joined: 02 Nov 2006 Posts: 6339
Wrong Info Center, try THIS ONE. _________________ It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
sumit
Posted: Thu Jul 08, 2010 9:37 am Post subject:
Partisan
Joined: 19 Jan 2006 Posts: 398
exerk wrote:
Wrong Info Center, try THIS ONE.
That link covers the channel pause status as posted by the OP. _________________ Regards
Sumit
PeterPotkay
Posted: Thu Jul 08, 2010 9:39 am Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
MQ is working as designed. Review the channel properties MRRTY and MRTMR; yes, you can change this behavior. I doubt your whole QM stopped working, although if the one channel into the QM is going through its message retry logic, and the channel is in a paused state while that happens, it might appear that nothing is working. _________________ Peter Potkay
Keep Calm and MQ On |
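For reference, those two attributes can be inspected with MQSC on the receiving queue manager; the channel name below is invented for illustration:

```
DISPLAY CHANNEL('TO.ROUTEQM') MRRTY MRTMR
```

MRRTY is the number of times the receiving MCA retries a failed put before treating the message as undeliverable (sending it to the dead-letter queue), and MRTMR is the wait, in milliseconds, between those attempts. The channel sits in PAUSED state while it waits.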
vivica12
Posted: Thu Jul 08, 2010 10:19 am Post subject:
Acolyte
Joined: 13 Jul 2007 Posts: 58
Since all of our apps on Broker have to go through the route qmgr to get anywhere (either to another app, or back to the requesting app), it seemed like we were hung, when in reality the cluster receiver channel was paused, so nothing could be sent from Broker to the routing qmgr.
I think it is 'working as expected', but we would want to change that behavior, because one full queue for one app just can't be allowed to hinder the 100 other apps that are still trying to communicate through that routing qmgr over the cluster receiver.
Thanks all for the info. We will be researching how to change that behavior on the cluster channels so that we don't retry or pause the channel if the destination queue is full. _________________ Vivica - signing off
bruce2359
Posted: Thu Jul 08, 2010 10:36 am Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9472 Location: US: west coast, almost. Otherwise, enroute.
Quote:
I think we will be researching how to change that behavior on the cluster channels so that we don't do a retry or pause the channel if the destination Q is full.
No, no. These are normal channel states. Read in the WMQ Intercommunication manual what the RETRYING and PAUSED channel states indicate.
As mentioned earlier, the qmgr and its Message Channel Agents (MCAs) respond (behave) in a well-documented way to a q-full condition.
If the q-full condition is the underlying problem, then raise MAXDEPTH for the queue(s) in question. Or ensure that the consuming app is triggered when a message arrives at the destination queue. Or ensure that the dead-letter-queue handler is triggered when a message arrives on the DLQ at the destination qmgr.
You don't want to change the behavior of WMQ; rather, you want to anticipate problems in a distributed queuing environment, and have recovery actions in place to take appropriate action when these problems arise. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
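A sketch of the kind of MQSC changes being suggested here; the queue, process, and initiation-queue names are made up for illustration:

```
* Give the destination queue more headroom
ALTER QLOCAL('APP.DEST.QUEUE') MAXDEPTH(200000)

* Trigger the consuming app when the first message arrives
ALTER QLOCAL('APP.DEST.QUEUE') TRIGGER TRIGTYPE(FIRST) +
      INITQ('SYSTEM.DEFAULT.INITIATION.QUEUE') PROCESS('APP.CONSUMER.PROCESS')
```

The PROCESS object named here would have to be defined separately to point at the consuming application; the same TRIGGER setup on the DLQ can start a DLQ handler.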
vivica12
Posted: Thu Jul 08, 2010 10:48 am Post subject:
Acolyte
Joined: 13 Jul 2007 Posts: 58
OK, so if I don't change behavior, and I do my best to anticipate issues - well, it's not a perfect world.
I can still have an app in production that goes haywire and fills a queue, and in the time it takes operators to get an alert, and for me to get called, log in, and increase the depth or purge, I've lost 30 minutes of function on hundreds of apps. Because one app has a full queue, my Message Broker apps that are trying to use the route qmgr can't do anything during those 30 minutes; it's a constant retry state and a paused cluster receiver.
In this case the queue depth went from 50,000 to 100,000 messages immediately after increasing the depth, then to 200,000. So it's totally an app problem, and yes, we need to fix that. But allowing that to hinder hundreds of other apps doesn't seem right either.
Shouldn't I also anticipate that issue, and solve it?
And by the way, the DLQ was not full anywhere, including the destination qmgr. So this wasn't a case of us not handling our DLQ; the channel still paused.
I would much rather let the rest of my apps continue to function in this scenario. Wouldn't you? _________________ Vivica - signing off
PeterPotkay
Posted: Thu Jul 08, 2010 11:08 am Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
There is nothing wrong with altering the default values of Message Retry Count and Message Retry Interval if the values that are set (1 and 1000 respectively) do not make sense in your shop.
If an incoming channel on QM1 services hundreds of apps sending from QM2, it can be risky to have the channel pause for a full second for each message it can't put; one bad app can impact all the others. If it's appropriate for the receiving channel to dump such messages to the DLQ immediately and get on with other messages, then drop those values. Or maybe you want the channel to try more times and/or wait longer between retries, in which case bump the numbers up. _________________ Peter Potkay
Keep Calm and MQ On |
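In MQSC, either direction could look like this against a cluster-receiver channel (channel name invented; you would pick one approach, not both):

```
* Send undeliverable messages straight to the DLQ, with no per-message pause
ALTER CHANNEL('TO.ROUTEQM') CHLTYPE(CLUSRCVR) MRRTY(0)

* ...or retry harder: 5 attempts, 10 seconds apart
ALTER CHANNEL('TO.ROUTEQM') CHLTYPE(CLUSRCVR) MRRTY(5) MRTMR(10000)
```

With MRRTY(0) the channel never enters the per-message retry pause, so one app's full queue stops delaying everyone else, at the cost of that app's messages landing on the DLQ immediately.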
bruce2359
Posted: Thu Jul 08, 2010 11:46 am Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9472 Location: US: west coast, almost. Otherwise, enroute.
WMQ anticipates that it is not a perfect world.
If enabled, conditions like q-full and channel failures cause the qmgr software to create event messages and put them to event queues. Monitoring software can watch for these and other common events, and take appropriate action.
I worked with one of my clients to stretch and shrink the retry intervals and retry counts (both short- and long-) to accommodate sporadic network failures.
Queues can be monitored for low- and high-depth events, and MAXDEPTH can then be increased automatically to accommodate bursts in message workload.
Take a look at the WMQ Monitoring manual. There is a wide variety of things you can easily monitor. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
vivica12
Posted: Thu Jul 08, 2010 12:22 pm Post subject:
Acolyte
Joined: 13 Jul 2007 Posts: 58
We do monitor for queue depth, but reacting to it in an automated fashion is not a strong suit for us. Any examples of how you do this automatically? Suggestions on monitoring tools?
I like the idea of altering the retry interval, or just letting messages dump to the DLQ. That seems less risky for us, handling them with a DLQ handler rather than holding up traffic. _________________ Vivica - signing off
fatherjack
Posted: Thu Jul 08, 2010 12:29 pm Post subject:
Knight
Joined: 14 Apr 2010 Posts: 522 Location: Craggy Island
Maybe you need to use different, dedicated channels for more time-critical messages? _________________ Never let the facts get in the way of a good theory.
bruce2359
Posted: Thu Jul 08, 2010 12:34 pm Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9472 Location: US: west coast, almost. Otherwise, enroute.
Quote:
Any examples of how you automatically do this?
The WMQ Monitoring manual has much of what you need.
1) Enable low- and high-depth events on the local queue. Ensure that performance events are also enabled at the qmgr.
2) Set values for both thresholds; 20% for low and 80% for high might be a good start.
3) Write an app that does a get-with-wait on the SYSTEM.ADMIN.PERFM.EVENT queue.
4) Get an event message, and determine the cause of the event and the queue that caused it.
5) If it's a high event, use PCF (the MQSET MQI call can't change queue depth limits) to set MAXDEPTH to a higher value - perhaps increase it by 50%.
6) If it's a low event, use PCF to set MAXDEPTH back down.
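The MQSC side of steps 1, 2, and 5 might look like this (queue name and the new depth are illustrative; steps 3 and 4 need a small program doing an MQGET with wait on SYSTEM.ADMIN.PERFM.EVENT):

```
* Steps 1-2: enable performance events and set the depth thresholds
ALTER QMGR PERFMEV(ENABLED)
ALTER QLOCAL('APP.DEST.QUEUE') QDPHIEV(ENABLED) QDPLOEV(ENABLED) +
      QDEPTHHI(80) QDEPTHLO(20)

* Step 5: what the event-handling app would do, administratively, on a high event
ALTER QLOCAL('APP.DEST.QUEUE') MAXDEPTH(75000)
```

QDEPTHHI and QDEPTHLO are percentages of MAXDEPTH, so after the handler raises MAXDEPTH the thresholds move with it.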
Quote:
Suggestions on monitoring tools?
IBM Tivoli OMEGAMON, BMC Patrol, TMON, QPasa!, and others. A quick search for WMQ monitoring tools on Google will yield lots of 3rd-party offerings. I have clients with these and others. I grew up with OMEGAMON (from Candle). _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
vivica12
Posted: Thu Jul 08, 2010 1:08 pm Post subject:
Acolyte
Joined: 13 Jul 2007 Posts: 58
I can't really use a dedicated channel for a cluster receiver channel. There really is only one cluster receiver on my routing qmgr, and the broker needs to send data over cluster transmission to that receiver. If that receiver is paused, it hinders the rest of the actions on ANY system trying to talk to that routing qmgr in the cluster.
So a dedicated channel in this case is not really possible. _________________ Vivica - signing off
mqjeff
Posted: Thu Jul 08, 2010 1:27 pm Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
You can create multiple clusters on the same queue manager, which will then give you multiple channels into it.
But it gets very tricky very quickly going down that road. Take a *lot* of time up front to design, diagram, and map out each cluster and where they overlap.
bruce2359
Posted: Thu Jul 08, 2010 1:52 pm Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9472 Location: US: west coast, almost. Otherwise, enroute.
Thus far, you've identified a symptom.
BEFORE you start changing applications, creating new network designs, or overlapping clusters, you need to identify exactly what the underlying cause of the symptom(s) you see is - the actual problem(s).
Try the shortest, least complicated solution first. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
fjb_saper
Posted: Thu Jul 08, 2010 2:15 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
mqjeff wrote:
You can create multiple clusters on the same queue manager, that will then give multiple channels into it.
But it gets very tricky very quickly going down that road. Take a *lot* of time up front to design and diagram and map out each cluster and where they overlap.
Just like Jeff said, you can create a mirrored cluster. The only difference between the two clusters would be the urgency / channel speed / throughput requirements. Note that this requires a cluster receiver per cluster, and you cannot, in this case, use a cluster receiver with a namelist...
At the same time, if you have a destination queue with mixed requirements, you had better split it into two separate local queues; otherwise a single event (queue full) will affect both clusters.
Have fun _________________ MQ & Broker admin
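As a rough sketch, the second (mirror) cluster needs its own cluster-receiver channel on the routing qmgr, for example (all names and the conname are hypothetical):

```
* Existing cluster stays as-is; add a second receiver for the mirror cluster
DEFINE CHANNEL('FASTCLUS.ROUTEQM') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('routeqm.host(1415)') CLUSTER('FASTCLUS')
```

Time-critical traffic would then flow over this channel, so a paused receiver in the original cluster no longer blocks it.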