John89011 (Voyager; joined 15 Apr 2009; 94 posts)
Posted: Thu Aug 19, 2010 10:52 am    Post subject: System xmitq backing up
Would just like to see your thoughts on this...
In a cluster:
AppA sends message type1 and type2 to AppB.
AppB is not able to process the volume of type1 messages coming in fast enough, and the destination queue becomes full.
The system xmitq on AppA's queue manager then starts to back up, so type2 messages are also impacted.
Are there any other options available out there other than:
- Once the maxmessage is reached, additional messages go to the DLQ
- Creating a new QM to separate type1 messages from type2
- Using a non-cluster setup
Thanks!

mqjeff (Grand Master; joined 25 Jun 2008; 17,447 posts)
Posted: Thu Aug 19, 2010 11:09 am
Add another qmgr, an additional instance of the AppB queue, and an additional instance of AppB itself.
Et voilà, AppB can now process twice as many messages at a time.
This is the whole point of clustering, more or less.
If in fact type1 and type2 messages are going to SEPARATE queues on the AppB queue manager, then performance may also be improved by adding an additional CLUSRCVR for the same cluster on the AppB qmgr.
But the first step should be to increase the instances of the receiving app so it can handle more throughput.
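For illustration, a minimal MQSC sketch of the second-instance approach, assuming a new queue manager QMB2 is added alongside an existing QMB1 in a cluster called APPCLUS (all object names, hosts and ports here are hypothetical):

    * Run against QMB2: join the cluster with its own receiver channel,
    * plus a manually defined sender to a full repository (assumed here to be QMB1)
    DEFINE CHANNEL('TO.QMB2') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qmb2host(1414)') CLUSTER('APPCLUS')
    DEFINE CHANNEL('TO.QMB1') CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
           CONNAME('qmb1host(1414)') CLUSTER('APPCLUS')

    * A second instance of the AppB input queue, advertised to the cluster;
    * DEFBIND(NOTFIXED) lets cluster workload balancing spread messages across instances
    DEFINE QLOCAL('APPB.QUEUE') CLUSTER('APPCLUS') DEFBIND(NOTFIXED)

A second copy of AppB then consumes APPB.QUEUE on QMB2, and the workload algorithm splits traffic between the two instances.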

Vitor (Grand High Poobah; joined 11 Nov 2005; 26,093 posts; Texas, USA)
Posted: Thu Aug 19, 2010 11:31 am    Post subject: Re: System xmitq backing up
John89011 wrote:
    Are there any other options available out there other than:
    - Once the maxmessage is reached, additional messages go to the DLQ
    - Creating a new QM to separate type1 messages from type2
    - Using a non-cluster setup
Increase the maxdepth (rather than maxmessage, which is something else!) of the system queue so it doesn't fill. It's bad when messages destined for SYSTEM objects end up on the DLQ.
If you're sure it's the type1 messages that are the problem, and will always be the problem, you could increase the priority of type2.
You could use the suggestion of my worthy associate, which is a more sensible variation of your "create a queue manager for different message types" idea; that idea won't help in a cluster, because no matter how many queue managers you create, they're all hanging off one transmit queue.
You could also use a non-cluster setup and direct messages to specific channels & queues. This is a retrograde step and may cause future problems.
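For example, something along these lines in MQSC (a sketch; the type2 destination queue name is hypothetical, 999999999 is the product's MAXDEPTH ceiling, and DEFPRI only affects apps that put with MQPRI_PRIORITY_AS_Q_DEF rather than an explicit MQMD priority):

    ALTER QLOCAL('SYSTEM.CLUSTER.TRANSMIT.QUEUE') MAXDEPTH(999999999)
    ALTER QLOCAL('APPB.TYPE2.QUEUE') DEFPRI(9)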
FWIW you're not the first poster to wish WMQ supported multiple SCTQs; I believe something to that effect is in the wishlist in here. You may wish to add your voice to that, or better still mention it to your IBM rep. The more people say "I wish WMQ would....." the more chance a future release of WMQ will.....
_________________
Honesty is the best policy.
Insanity is the best defence.

John89011 (Voyager; joined 15 Apr 2009; 94 posts)
Posted: Thu Aug 19, 2010 11:54 am
Thanks for your input guys!
Vitor, I like your idea of increasing the priority of type2, because type2 messages are more important and I know that the type1s are the problem.
I've opened an enhancement request to support multiple SCTQs, and they said I'm not the only one.
Thanks again!

mqjeff (Grand Master; joined 25 Jun 2008; 17,447 posts)
Posted: Thu Aug 19, 2010 12:54 pm
A typical way to solve priority issues with MQ clustering is to create multiple clusters, one for each of a few qualities of service.
Using message priority - or even really using multiple clusters - isn't going to do a lot to improve the performance of an instance of a CLUSSDR trying to read messages from a deep queue.
A higher-priority message that is sitting at position 10,000, behind 9,999 low-priority messages, is going to take longer to retrieve than if it was at position 0.
The deeper the queue, the longer it can take to retrieve any message at all from it.
Adding an additional instance of the destination queue means more messages pulled off of the SCTQ.
Adding additional CLUSRCVRs on the destination qmgr means more messages pulled off of the SCTQ.
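If that last option fits, the extra channel is a short MQSC definition (a sketch; the name, port and cluster are hypothetical, and a second CONNAME port implies a second listener running on the AppB qmgr):

    DEFINE CHANNEL('TO.QMB1.B') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qmb1host(1415)') CLUSTER('APPCLUS')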

fjb_saper (Grand High Poobah; joined 18 Nov 2003; 20,756 posts; LI, NY)
Posted: Thu Aug 19, 2010 1:59 pm
I have seen this happen on my cluster.
The destination queue gets full and the messages fail over to the DLQ, but not before going through the defined number of retries. This slows the channel down considerably, affecting the performance of anything else being sent to that qmgr.
One possibility is to set the values of MRRTY and MRTMR to very low values, thus telling the message channel agent to put the messages to the DLQ faster. The default values are 10 and 1,000 respectively...
See http://publib.boulder.ibm.com/infocenter/wmqv6/v6r0/topic/com.ibm.mq.csqzaj.doc/sc11040_.htm
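In MQSC that would look something like this, applied to the receiving side's cluster-receiver channel (the channel name is an example; MRTMR is in milliseconds):

    ALTER CHANNEL('TO.QMB1') CHLTYPE(CLUSRCVR) MRRTY(1) MRTMR(100)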
Have fun
_________________
MQ & Broker admin

tlaner (Newbie; joined 20 Jun 2003; 8 posts; USA)
Posted: Wed Nov 10, 2010 5:15 am
I have a similar issue: one MQ manager (mainframe, where the putting app is) and 4 Unix MQ managers, all in a cluster. We had an issue where one of the destination queues filled up to max queue depth. This slowed the delivery of other messages to their destination queues on the same Unix queue managers. The cluster receiver couldn't deliver them fast enough, because the retry interval was set to the default (10 sec) before it would put the messages to the DLQ. We have changed that to zero to resolve the immediate issue, as well as increasing max depth on the queue that had the issues.
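For reference, that immediate fix would look something like this in MQSC (the channel name, queue name and new depth are hypothetical; setting the retry count itself to zero with MRRTY(0) would instead skip retries entirely and DLQ straight away):

    ALTER CHANNEL('TO.UNIXQM1') CHLTYPE(CLUSRCVR) MRTMR(0)
    ALTER QLOCAL('PROBLEM.APP.QUEUE') MAXDEPTH(500000)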
I am now dealing with my users' perception of the issue. I have three major categories of users in a common financial infrastructure. Two of the three are very critical to the firm, and one is not critical (the one that caused the problem). They are complaining that they want some sort of segregation so that they can be protected, since they were affected by appreciable slowness in delivery when we had the incident.
I believe I have 2 options:
- Add another cluster receiver channel for each user category (I didn't know I could do that)
- Add several more Unix queue managers to allow for the segregation of user categories. I can't add to the MF QM. I believe this will allow me to use the Unix QMs to segregate the user categories so that one category can't slow down the others.
I don't think priority will help me, because they are separate destination queues and there is no issue of performance there.
I would appreciate any comments, questions or answers!

bruce2359 (Poobah; joined 05 Jan 2008; 9,469 posts; US: west coast, almost. Otherwise, enroute.)
Posted: Wed Nov 10, 2010 7:05 am
One of the down-side possibilities of using a higher priority for more-important messages is this: if higher-priority messages continue to arrive on the queue, the lower-priority messages may never be processed.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.

PeterPotkay (Poobah; joined 15 May 2001; 7,722 posts)
Posted: Wed Nov 10, 2010 7:25 am
You can use "stacked" clusters, where all 5 QMs each participate in 2 or more clusters. Each cluster has its own channels, which would help mitigate your problem.
HOWEVER, all those cluster channels still share the same SYSTEM.CLUSTER.TRANSMIT.QUEUE and the same dead letter queue. If one of the clusters is dealing with a queue-full situation, it's possible the S.C.T.Q. can get very deep with the messages for that full queue. Other channels will still be able to get their messages from the transmit queue, but if that transmit queue gets super deep or, horrors, full, then all your clusters are still impacted.
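A sketch of what the stacked layout might look like in MQSC, run on each QM (all names here are hypothetical; full repository QMs would additionally use ALTER QMGR REPOSNL with a namelist covering both clusters):

    * One CLUSRCVR per cluster, so each quality of service gets its own channels
    DEFINE CHANNEL('CLUS_CRITICAL.QM1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qm1host(1414)') CLUSTER('CLUS_CRITICAL')
    DEFINE CHANNEL('CLUS_BULK.QM1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qm1host(1414)') CLUSTER('CLUS_BULK')

    * Advertise each destination queue only in the cluster for its user category
    DEFINE QLOCAL('CRITICAL.APP.QUEUE') CLUSTER('CLUS_CRITICAL')
    DEFINE QLOCAL('BULK.APP.QUEUE') CLUSTER('CLUS_BULK')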
Such is the price we pay for sharing infrastructure. The more you share, the more you have to oversize everything, and monitoring becomes super important. A queue can fill up very fast, but hopefully you got an email or page anyway saying that the destination queue was > x% full, that the XMITQ was backing up, that the DLQ had > 0 messages?
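Queue manager events can generate some of those alerts natively (a sketch; the threshold and queue name are examples, and something still has to consume SYSTEM.ADMIN.PERFM.EVENT and turn the event messages into emails or pages):

    ALTER QMGR PERFMEV(ENABLED)
    ALTER QLOCAL('BULK.APP.QUEUE') QDEPTHHI(80) QDPHIEV(ENABLED) QDPMAXEV(ENABLED)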
_________________
Peter Potkay
Keep Calm and MQ On

tlaner (Newbie; joined 20 Jun 2003; 8 posts; USA)
Posted: Thu Nov 11, 2010 4:07 am
Peter, thanks. I will look into the stacked-cluster approach.