srinivasACN
Posted: Tue Sep 20, 2005 9:30 am Post subject: Queue Depth above 15000 - Slow Performance
Apprentice
Joined: 08 Aug 2005 Posts: 43
All,
During some stress testing of an MDB-based application, I have observed that once the queue depth goes beyond 15,000 messages (each message approx 100K in size), dequeuing becomes very slow.
But once the depth drops below 15,000, messages are dequeued much faster. Is this a configuration setting within MQ?
Any/all opinions appreciated. Thanks in advance.
jefflowrey
Posted: Tue Sep 20, 2005 9:38 am Post subject:
Grand Poobah
Joined: 16 Oct 2002 Posts: 19981
Are your MDBs using JMS selectors of any kind?
_________________
I am *not* the model of the modern major general.
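For reference, a JMS selector is attached to an MDB declaratively in the deployment descriptor. A minimal EJB 2.1 ejb-jar.xml fragment might look like the sketch below; the bean name, class, and selector expression are hypothetical:

```xml
<message-driven>
  <ejb-name>OrderMDB</ejb-name>
  <ejb-class>com.example.OrderMDB</ejb-class>
  <messaging-type>javax.jms.MessageListener</messaging-type>
  <transaction-type>Container</transaction-type>
  <activation-config>
    <!-- Any selector forces the provider to match each message against
         this expression before delivery, which is the cost Jeff is
         asking about -->
    <activation-config-property>
      <activation-config-property-name>messageSelector</activation-config-property-name>
      <activation-config-property-value>MsgType = 'ORDER'</activation-config-property-value>
    </activation-config-property>
  </activation-config>
</message-driven>
```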
srinivasACN
Posted: Tue Sep 20, 2005 9:45 am Post subject:
Apprentice
Joined: 08 Aug 2005 Posts: 43
Hey Jeff,
We do not use any selectors, for precisely that reason: performance is a HUGE consideration in this application.
jefflowrey
Posted: Tue Sep 20, 2005 10:11 am Post subject:
Grand Poobah
Joined: 16 Oct 2002 Posts: 19981
Does the performance hit happen only when the MDBs are already running when the 15,000 messages appear on the queue?
Or if the app server is stopped, the queue filled up, and then the app server started, do you see the same performance issues?
_________________
I am *not* the model of the modern major general.
srinivasACN
Posted: Tue Sep 20, 2005 3:12 pm Post subject:
Apprentice
Joined: 08 Aug 2005 Posts: 43
The test we were performing when I noticed this went through the following steps:
1. Stop the Application Server.
2. Queue messages onto a clustered queue with 2 local instances.
3. Start the Application Server & Application.
Thanks.
jefflowrey
Posted: Tue Sep 20, 2005 3:56 pm Post subject:
Grand Poobah
Joined: 16 Oct 2002 Posts: 19981
2 local instances? Weird.
Okay. So it's not a slowdown-because-I'm-busy thing.
Are your MDBs using container-managed or bean-managed transactions? And are you actually committing each GET and not browsing (i.e. the "leave on queue" checkbox is not checked)?
_________________
I am *not* the model of the modern major general.
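For reference, the container- vs bean-managed distinction Jeff is asking about is declared per bean in the deployment descriptor. A minimal EJB 2.x fragment, with hypothetical bean name and class:

```xml
<message-driven>
  <ejb-name>OrderMDB</ejb-name>
  <ejb-class>com.example.OrderMDB</ejb-class>
  <!-- Container: the container demarcates the transaction around onMessage()
       Bean: the MDB drives the transaction itself via UserTransaction -->
  <transaction-type>Bean</transaction-type>
</message-driven>
```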
fjb_saper
Posted: Tue Sep 20, 2005 5:55 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI, NY
Jeff, he did not specify whether the qmgr was running on z/OS or Unix. I suspect that with that many messages the pageset/file system may just be full or nearly full. That would slow down any read/write process considerably.
Enjoy
Nigelg
Posted: Tue Sep 20, 2005 11:36 pm Post subject:
Grand Master
Joined: 02 Aug 2004 Posts: 1046
I think that MDBs use CorrelId to read messages. There is a performance hit with increasing queue depth when reading by CorrelId. There is no cure, except to keep the queue depth low.
_________________
MQSeries.net helps those who help themselves.
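As a toy illustration of Nigelg's point (plain Python, not MQ internals): without an index, a destructive get that matches on CorrelId has to walk the queue linearly, so the cost of each get grows with depth, while an in-memory index keeps the match a cheap hash lookup regardless of depth.

```python
# Toy model, NOT MQ internals: why get-by-CorrelId degrades as depth grows.
queue = [{"correl_id": "id-%d" % i, "body": "payload-%d" % i}
         for i in range(200_000)]
index = {m["correl_id"]: m for m in queue}  # maintained as messages are put

wanted = "id-199999"  # worst case: the matching message is at the tail

# Unindexed: walk the queue until the CorrelId matches.
scanned = 0
for m in queue:
    scanned += 1
    if m["correl_id"] == wanted:
        break
print("linear scan touched", scanned, "messages")  # touches every message

# Indexed: one dictionary lookup, independent of queue depth.
print("index lookup finds it:", index[wanted]["correl_id"] == wanted)
```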
PeterPotkay
Posted: Wed Sep 21, 2005 4:09 am Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
And if the QM is on z/OS, then index the queue by CorrelID.
_________________
Peter Potkay
Keep Calm and MQ On
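On z/OS, that index is set with the queue's INDXTYPE attribute in MQSC; the queue name below is hypothetical:

```
* z/OS only: maintain an index of correlation IDs for this queue so that
* an MQGET by CorrelId does not have to scan the whole queue
ALTER QLOCAL(APP.REQUEST.QUEUE) INDXTYPE(CORRELID)
```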
srinivasACN
Posted: Wed Sep 21, 2005 5:39 am Post subject:
Apprentice
Joined: 08 Aug 2005 Posts: 43
As far as the transaction goes, we are using bean-managed (as this was easier to integrate with Hibernate), and I do manage the transaction and perform a commit at the end.
We are using MQ 5.3 on Unix (Solaris 5.9). So from what I understand, there is a high possibility that the reads/writes are just slow when the queue depth gets this high?
Quote:
2 local instances? Weird.
What I meant here was that we have two queue managers that host the 2 local instances of the final destination queues, which are clustered queues.
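A layout like that is typically created by defining the same local queue on both queue managers and advertising it to the cluster; the queue, cluster, and queue manager names below are hypothetical:

```
* Run on QM1 and again on QM2, both members of cluster APPCLUS: each
* queue manager then hosts its own local instance of the clustered queue
DEFINE QLOCAL(APP.DEST.QUEUE) CLUSTER(APPCLUS) DEFBIND(NOTFIXED)
```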
srinivasACN
Posted: Wed Sep 21, 2005 5:45 am Post subject:
Apprentice
Joined: 08 Aug 2005 Posts: 43
This should be interesting. I am performing a test with 200,000 messages on the queue.
Anxious to see how long this will take.
KeeferG
Posted: Wed Sep 21, 2005 6:48 am Post subject:
Master
Joined: 15 Oct 2004 Posts: 215 Location: Basingstoke, UK
Just out of interest, what information are you trying to gain from these performance tests?
My main concern is why you are loading the queue so high. When MQ is correctly configured and the loading and processing applications are correctly balanced, messages hardly ever touch the queue. By pre-loading the queue you immediately lose performance compared with normal operation.
_________________
Keith Guttridge
-----------------
Using MQ since 1995
hopsala
Posted: Wed Sep 21, 2005 4:03 pm Post subject:
Guardian
Joined: 24 Sep 2004 Posts: 960
srinivasACN wrote:
We are using MQ 5.3 on Unix (Solaris 5.9). So from what I understand, there is a high possibility that the reads/writes are just slow when the queue depth gets this high?
This has nothing to do with Solaris MQ; it is relevant to any QM on any OS - the higher your queue depth, the slower the performance. The only difference is that on z/OS you can customize buffer pools and other parameters to handle high-throughput scenarios, and on other OSs you can't.
Moving on, a few questions and suggestions:
1. You say you do not commit every message - so how often do you commit? I suggest playing around with this (look at the MQ performance evaluation reports) - usually a commit every 10-15 messages gives the best performance/stability combination.
2. Have more than one instance working on each queue; this greatly increases MQ performance. (2 > 1+1)
3. Increase HD speed; there's a lot of paging going on in your scenario.
4. Move to z/OS.
As KeeferG inquired, why are you doing all this? In a normal production system the queue depth would only get this high if your application had been down for a long time, and in such a scenario it seems perfectly plausible that for at least a while there will be some performance degradation. Do what IBM did when designing MQ logging and recovery - turn your efforts towards a fast runtime, and concede a slow startup and recovery time.
Besides, there are some wonderful performance evaluations people wrote just so you wouldn't have to.
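Suggestion 1 can be sketched in plain Python (a toy model, not JMS or MQ code: each commit here stands in for a forced log write, which is the expensive part a syncpoint batch amortises):

```python
# Toy sketch of batched commits: get messages under syncpoint and commit
# every N, instead of committing once per message.
COMMIT_INTERVAL = 10  # within hopsala's suggested 10-15 range

def drain(messages, commit_interval):
    """Return how many commits it takes to drain the messages."""
    commits = 0
    uncommitted = 0
    for _ in messages:
        # ... process the message (e.g. DB insert via Hibernate) ...
        uncommitted += 1
        if uncommitted == commit_interval:
            commits += 1   # one syncpoint commit covers the whole batch
            uncommitted = 0
    if uncommitted:
        commits += 1       # commit the final partial batch
    return commits

msgs = range(15_000)
print("per-message commits:", drain(msgs, 1))                # 15000
print("batched commits:", drain(msgs, COMMIT_INTERVAL))      # 1500
```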
fjb_saper
Posted: Wed Sep 21, 2005 4:18 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI, NY
Nigelg wrote:
I think that MDBs use CorrelId to read messages. There is a performance hit with increasing queue depth when reading by CorrelId. There is no cure, except to keep the queue depth low.
Using an MDB to read messages with any kind of selector is an antipattern. Use dynamic reply-to queues in that case, or use some kind of application that redistributes messages to target queues according to the selection...
There is really no reason for this to happen this way, or for the slowdown.
We have an application that can dump up to 200,000+ messages on AIX or Solaris, and the dump is way faster than the MDB consuming the messages. We only run 1 instance, as the messages HAVE to be processed FIFO.
The processing can take up to 12 hours but is pretty constant. It is dependent on the DB and the speed of inserts vs. updates. As we have the updates bunched at the end, you see the speed (get rate) take off when we start hitting them. Hibernate is used as well for storing the messages in the DB.
Well, if you have a lot of paging going on in your scenario, I would suggest you take a hard look at what is causing the paging to happen.
This will be your best bet to boost performance, and it may not be MQ-related at all.
Enjoy