dwitherspoon (Acolyte, Joined: 09 Dec 2003, Posts: 59)
Posted: Wed Sep 07, 2005 3:54 am    Post subject: What happens if dest queue is full?
Suppose I am putting a message to a cluster queue. The message goes on the transmit queue and then ultimately to the destination queue. I know that if the channel to that queue is down, messages will back up in the transmit queue. But what if the remote queue is full? Is the cluster sender sensitive to queue-full conditions on the queue it wants to send to?
If the messages are going to land in the dead letter queue on the remote side, can anybody suggest how I can avoid overrunning that queue? In my scenario, I may be suddenly dropping 50,000 messages out there.
Thanks.
_________________
Good...Fast...Cheap. Choose any two.
jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Wed Sep 07, 2005 3:59 am
I guess it depends.
If there is only one instance of the queue in the cluster and that instance is full, or if there is more than one instance and all instances are full, then the normal queue-full mechanisms take effect.
If there is more than one instance in the cluster, not all of them are full, and the app hasn't opened the queue with BIND_ON_OPEN, then I would expect messages to get load balanced across the other instances until they too are full.
You can use a DLQ handler to help alleviate full dead letter queues; a sample rules table is sketched below. You could also ensure that the DLQs in question are sized appropriately.
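A minimal runmqdlq rules table along those lines, assuming the queue manager is QM1 and that UNRESOLVED.DLQ.MSGS is a holding queue you define yourself (both names are made up):
Code:
    * Retry queue-full failures every 60 seconds, up to 5 times each.
    INPUTQ(SYSTEM.DEAD.LETTER.QUEUE) INPUTQM(QM1) RETRYINT(60)
    REASON(MQRC_Q_FULL) ACTION(RETRY) RETRY(5)
    * Park anything else on the holding queue, keeping the dead letter header.
    REASON(*) ACTION(FWD) FWDQ(UNRESOLVED.DLQ.MSGS) HEADER(YES)
Run it with something like runmqdlq SYSTEM.DEAD.LETTER.QUEUE QM1 < dlq.rul, so queue-full messages get retried back to their destination once the backlog drains.
_________________
I am *not* the model of the modern major general.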
Nigelg (Grand Master, Joined: 02 Aug 2004, Posts: 1046)
Posted: Wed Sep 07, 2005 4:05 am
If the msg is persistent, or NPMSPEED is NORMAL, the msg will be put to the remote DLQ.
If the msg is non-persistent and NPMSPEED is FAST, the msg is discarded.
If the put to the DLQ fails, or there is no DLQ, the channel will end, back out the current batch, and go into RETRYING. Since this is clustering, an attempt will be made to allocate the msgs for the channel on the cluster xmitq to alternative destinations in the cluster.
At some point (I am not sure exactly when), messages will be written to the sending and receiving error logs stating that the msg was written to the DLQ.
To avoid overrunning the queue...
Do not define a DLQ at all. The result will be that the channels will go into RETRYING more quickly.
Define the MAXDEPTH of the destination queues or DLQ to be large enough to hold msgs for the maximum length of time that the downstream apps, i.e. the apps draining the destination queue, can be expected to be offline, as defined in the system architecture. A worked sizing example follows.
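For example (the numbers and queue name here are made up): at 20 msgs per second, with downstream apps allowed to be offline for an hour, you need roughly 20 x 3600 = 72,000 slots, so something like:
Code:
    * Size the destination queue for a one-hour outage at ~20 msg/s,
    * rounded up for headroom.
    ALTER QLOCAL('APP.TARGET.QUEUE') MAXDEPTH(100000)
    * And make sure the qmgr actually has a DLQ defined.
    ALTER QMGR DEADQ('SYSTEM.DEAD.LETTER.QUEUE')
_________________
MQSeries.net helps those who help themselves..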
Mr Butcher (Padawan, Joined: 23 May 2005, Posts: 1716)
Posted: Wed Sep 07, 2005 4:26 am
Quote:
    If the msg is persistent, or NPMSPEED is NORMAL, the msg will be put to the remote DLQ.
    If the msg is non-persistent and NPMSPEED is FAST, the msg is discarded.
I think it works like this:
MQ tries to put the message to the DLQ in any case. Only if this DLQ put fails are non-persistent messages with NPMSPEED(FAST) discarded; otherwise the channel stops/retries.
_________________
Regards, Butcher
dwitherspoon (Acolyte, Joined: 09 Dec 2003, Posts: 59)
Posted: Wed Sep 07, 2005 8:51 am
Thanks for the info, everyone!
_________________
Good...Fast...Cheap. Choose any two.
KeeferG (Master, Joined: 15 Oct 2004, Posts: 215, Location: Basingstoke, UK)
Posted: Thu Sep 08, 2005 4:51 am
In our system we are constantly running into full cluster queues:
Every queue in the system is in the cluster.
Only ever 1 instance of a queue.
NPMSPEED(FAST).
Messages are non-persistent.
DLQ defined.
We notice that when the queue fills up, the messages are placed on the DLQ after going through the message retry loop. This causes a backlog on the XMIT queues, as each batch of 50 will take 500 seconds to process.
The sending end of the cluster is unaware of the problem and allows further messages to be put on the XMITQ for the target queue. In MQSC terms the setup looks something like the sketch below.
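A minimal sketch of that configuration (queue, channel, and cluster names are made up):
Code:
    * Single instance of each queue, advertised to the cluster, non-persistent traffic.
    DEFINE QLOCAL('APP.QUEUE') CLUSTER('APPCLUS') MAXDEPTH(1000) DEFPSIST(NO)
    * Cluster-receiver channel running NPMSPEED(FAST).
    DEFINE CHANNEL('TO.QM1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qm1host(1414)') CLUSTER('APPCLUS') NPMSPEED(FAST)
    * Dead letter queue defined on the qmgr.
    ALTER QMGR DEADQ('SYSTEM.DEAD.LETTER.QUEUE')
_________________
Keith Guttridge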
-----------------
Using MQ since 1995
jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Thu Sep 08, 2005 5:09 am
The sending end should be unaware of the problem.
But you know that...
You could increase the qdepth, to prevent overflow to the DLQ.
You could increase the batch size, to try and slow things down a bit and give the receiver more of a chance to process.
Or add more instances of the queue, and more instances of the program, to spread the load out. In MQSC terms, something like the sketch below.
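All three options, sketched with made-up object names:
Code:
    * Option 1: raise the queue depth.
    ALTER QLOCAL('APP.QUEUE') MAXDEPTH(50000)
    * Option 2: raise the batch size on the cluster receiver.
    ALTER CHANNEL('TO.QM1') CHLTYPE(CLUSRCVR) BATCHSZ(100)
    * Option 3: define another instance of the queue on a second qmgr.
    DEFINE QLOCAL('APP.QUEUE') CLUSTER('APPCLUS')
_________________
I am *not* the model of the modern major general.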
KeeferG (Master, Joined: 15 Oct 2004, Posts: 215, Location: Basingstoke, UK)
Posted: Thu Sep 08, 2005 5:16 am
The system was designed to fail the put to the cluster queue if the queue becomes full. I have asked the designers why they thought that was a good idea, but have yet to hear a valid response. Instead we have to put-disable queues that fill up, via exits, applications and monitoring tools. Once that PUT(DISABLED) propagates around the cluster, the application will re-route to an alternative server.
I am looking into using MQ V6 and creating prioritised cluster alias queues to mimic the application behaviour and let MQ do the balancing (see the sketch below). Fingers crossed.
May even get them to increase the max depth from 1000 too, though one thing at a time.
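Roughly what I have in mind, with made-up names (CLWLPRTY is the V6 cluster workload priority attribute, 0-9, highest preferred):
Code:
    * Today's workaround: put-disable a full instance so the put is failed.
    ALTER QLOCAL('APP.QUEUE') PUT(DISABLED)
    * V6 idea: a prioritised cluster alias over the real queue on each server.
    DEFINE QALIAS('APP.SERVICE') TARGQ('APP.QUEUE') CLUSTER('APPCLUS') CLWLPRTY(9)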
_________________
Keith Guttridge
-----------------
Using MQ since 1995
PeterPotkay (Poobah, Joined: 15 May 2001, Posts: 7722)
Posted: Thu Sep 08, 2005 3:22 pm
How big are these messages? A Max Depth of only 1000 is very tight IMHO.
The sending QM does not check the depth of the destination cluster Q, because it would kill performance if the Clustering Algorithm had to pause and wait for the q depth of every one of the possible destination queues on each MQPUT. But I suppose you could code that type of logic into a custom Cluster Workload Exit.
I don't know that any of the features in 6.0 will help you in this scenario.
Monitoring! You should know way before the q fills up that it is backing up. And you should know as soon as 1 message hits the DLQ.
"This causes a backlog on the XMIT queues as each batch of 50 will take 500 seconds to process."
Sounds like the Message Retry values on your cluster receivers are still the default. It has nothing to do with the batch size. Each message that can't be put because the destination queue is full will be retried Message Retry Count (MRRTY) times, Message Retry Interval (MRTMR) milliseconds apart. With the defaults of 10 retries at 1000 ms, that is 10 seconds per message, so a batch of 50 takes exactly the 500 seconds you are seeing. If you change these values lower, or even to zero, the messages will go to the DLQ faster, and other messages coming down this channel to other queues on this QM will get processed quicker. If the transmit q on the sending QM backs up, it has no effect on messages going to other QMs in the cluster, only to the QM with the backed-up channel. A sketch follows.
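Lowering them on the cluster receiver would look something like this (channel name made up):
Code:
    * Send queue-full messages straight to the DLQ instead of
    * retrying for 10 seconds each.
    ALTER CHANNEL('TO.QM1') CHLTYPE(CLUSRCVR) MRRTY(0) MRTMR(1000)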
-Peter
_________________
Peter Potkay
Keep Calm and MQ On
KeeferG (Master, Joined: 15 Oct 2004, Posts: 215, Location: Basingstoke, UK)
Posted: Fri Sep 09, 2005 1:25 am
I agree about the maximum depth being too small, especially as the system is supposed to handle 1500 messages per second, but the architects won't change the design.
The system is designed so that the cluster workload exit will not allow the message to be put to the cluster transmit queue if the target cluster queue is put-disabled or the channel is not running. A custom exit deals with this. If the put is stopped, the application then re-routes to its backup server, and so on.
I can use a qalias and cluster priority to create 4 instances of the target queue and have them prioritised to mimic the behaviour of the application. This will then allow MQ to do the routing and prevent any messages getting stuck on the XMITQ. Something like the sketch below.
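Roughly (all names made up), one alias per server with descending priority, so MQ avoids put-disabled instances and prefers the highest CLWLPRTY among what is left:
Code:
    * On QM1 (primary):
    DEFINE QALIAS('APP.SERVICE') TARGQ('APP.QUEUE') CLUSTER('APPCLUS') CLWLPRTY(9)
    * On QM2 (first backup):
    DEFINE QALIAS('APP.SERVICE') TARGQ('APP.QUEUE') CLUSTER('APPCLUS') CLWLPRTY(6)
    * On QM3:
    DEFINE QALIAS('APP.SERVICE') TARGQ('APP.QUEUE') CLUSTER('APPCLUS') CLWLPRTY(3)
    * On QM4 (last resort):
    DEFINE QALIAS('APP.SERVICE') TARGQ('APP.QUEUE') CLUSTER('APPCLUS') CLWLPRTY(0)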
Don't ya just love trying to support badly designed systems?
_________________
Keith Guttridge
-----------------
Using MQ since 1995
PeterPotkay (Poobah, Joined: 15 May 2001, Posts: 7722)
Posted: Fri Sep 09, 2005 8:47 pm
KeeferG wrote:
    I agree about the maximum depth being too small, especially as the system is supposed to handle 1500 messages per second, but the architects won't change the design.
Morons. Tell these architects they are morons. A queue that can only hold two-thirds of a second's worth of transactions. Good lord.
_________________
Peter Potkay
Keep Calm and MQ On
fjb_saper (Grand High Poobah, Joined: 18 Nov 2003, Posts: 20756, Location: LI,NY)
Posted: Sat Sep 10, 2005 5:45 am
PeterPotkay wrote:
    KeeferG wrote:
        I agree about the maximum depth being too small, especially as the system is supposed to handle 1500 messages per second, but the architects won't change the design.
    Morons. Tell these architects they are morons. A queue that can only hold two-thirds of a second's worth of transactions. Good lord.
As much as it pains me to label somebody so harshly, I have to agree with Peter 100%. Tell the architects that they should explain to you how they are going to get through the first 24 hours of processing....
If that does not open their eyes, dump them and get new ones...