zpat
Posted: Thu Sep 27, 2012 9:56 am
Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
Vitor wrote:
Hey, they used to give me contracts when they were short handed. Once they even made me wear the suit.
I rest my case...
Vitor
Posted: Thu Sep 27, 2012 10:10 am
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
zpat wrote:
Vitor wrote:
Hey, they used to give me contracts when they were short handed. Once they even made me wear the suit.
I rest my case...
Exactly.
md7
Posted: Thu Sep 27, 2012 3:46 pm
Apprentice
Joined: 29 Feb 2012 Posts: 49 Location: Sydney.AU
Instead of deleting messages off the queue when it's full, why not have a trigger event that unloads the messages to a file when the queue reaches 95% of its maximum depth?
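For what it's worth, a minimal MQSC sketch of that idea, using an invented queue name: enable performance events on the queue manager and a queue-depth-high event at 95% of MAXDEPTH.
Code:
ALTER QMGR PERFMEV(ENABLED)
ALTER QLOCAL(APP.INPUT.QUEUE) QDEPTHHI(95) QDPHIEV(ENABLED)
The event message lands on SYSTEM.ADMIN.PERFM.EVENT, where a monitoring task could pick it up and offload the backlog to a file with something like the qload SupportPac (MO03); check its documentation for the exact flags.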
SAFraser
Posted: Mon Oct 01, 2012 2:22 pm
Shaman
Joined: 22 Oct 2003 Posts: 742 Location: Austin, Texas, USA
As a result of a recent SEV1 PMR, IBM told me that large numbers of messages (on Solaris) will cause a performance hit due to under-the-covers indexing of the queue.
Also learned, from this same PMR, that the entire contents of the queue are loaded into memory upon connection of a listener to the queue. You can see this, by the way, in the qmgr log (the loading & unloading of the queue).
In our case, I was inquiring about a queue manager fault that occurred when connecting multiple listeners to a queue with depth of 1.4 million messages. The fault was a locked system resource during the loading of the queue contents into memory.
So I was told.
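As an aside, a quick MQSC check of how deep the queue is and how many handles are attached before pointing more listeners at it (queue name invented):
Code:
DISPLAY QLOCAL(APP.INPUT.QUEUE) CURDEPTH MAXDEPTH
DISPLAY QSTATUS(APP.INPUT.QUEUE) TYPE(QUEUE) CURDEPTH IPPROCS OPPROCS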
PeterPotkay
Posted: Mon Oct 01, 2012 4:00 pm
Poobah
Joined: 15 May 2001 Posts: 7722
A performance hit for that one connection trying to access that deep queue? I see the same thing on Windows and Linux too. The QM's error log shows an entry for every 10,000 messages it has to load.
Or a performance hit for the whole QM when this occurs on Solaris?
SAFraser
Posted: Mon Oct 01, 2012 4:45 pm
Shaman
Joined: 22 Oct 2003 Posts: 742 Location: Austin, Texas, USA
Not a performance hit for the connection, no. Just caused that new connection (and all that followed it) to fail to connect. I'm not crazy about mutex
lock FDCs, though, even when they clear up on their own.
The performance hit, according to IBM, would be for indexing the messages on that huge queue. A few other posters have said that indexing is only relevant to zOS, but that's what IBM told me. Interesting, huh?
Whether it is accurate or not, it supported my contention to the developers that their design, which produces 160,000 messages per minute while consuming 1,500 per minute, might not be the brightest thing they ever thought up.
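To put numbers on that: the backlog grows by roughly 158,500 messages a minute, so a queue 1.4 million deep builds in under nine minutes, while draining 1.4 million messages at 1,500 a minute would take somewhere around 15 hours.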
Vitor
Posted: Mon Oct 01, 2012 5:25 pm
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
SAFraser wrote:
A few other posters have said that indexing is only relevant to zOS, but that's what IBM told me. Interesting, huh?
Hey, I talked about the internal indexing on distributed, and with a queue depth of that many messages I'd be surprised if anything moved for a while.
SAFraser wrote:
Whether it is accurate or not, it supported my contention to the developers that their design, which produces 160,000 messages per minute while consuming 1,500 per minute, might not be the brightest thing they ever thought up.
Unleash the of Teaching.
I'd also like to know their justification for the disparity in enqueue / dequeue rates. Just because I could use a laugh.
mqjeff
Posted: Tue Oct 02, 2012 5:49 am
Grand Master
Joined: 25 Jun 2008 Posts: 17447
So what I've heard from markt is that z/OS indexing allows you to add additional fields to be indexed on the queue, to make certain kinds of fetches more efficient than the basic indexing, which is present on all platforms.
Your PMR, Shirley, supports that. It also adds some perspective to http://www.mqseries.net/phpBB2/viewtopic.php?t=62321
bruce2359
Posted: Tue Oct 02, 2012 6:01 am
Poobah
Joined: 05 Jan 2008 Posts: 9475 Location: US: west coast, almost. Otherwise, enroute.
To expedite MQGETs, WMQ for z/OS offers a choice of queue index: CORRELID, GROUPID, MSGID, MSGTOKEN, NONE.
Briefly documented in the MQSC manual (and equivalent InfoCenter).
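For anyone following along, the attribute in question is INDXTYPE on the queue definition (z/OS only); a sketch with a made-up queue name:
Code:
* z/OS only: keep MQGET by correlation ID fast on a deep queue
DEFINE QLOCAL(APP.REPLY.QUEUE) INDXTYPE(CORRELID) REPLACE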
SAFraser
Posted: Tue Oct 02, 2012 10:13 am
Shaman
Joined: 22 Oct 2003 Posts: 742 Location: Austin, Texas, USA
Thanks, mqjeff, for the link to esa's very valuable post.
Vitor, the developers had a request to "make the batch run more efficient". So (without consulting us) they optimized the put operation without touching the get operation. We've advised many times that they should reduce the disparity in put-get rates by throttling the input, by improving performance of the consumer, or by using a multi-queue architecture (which we even offered to do with a message flow). The last time they ran this particular batch, we finally just reduced the maxdepth to 400K and told them to segment their job.
You just can't buy entertainment like this.
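For reference, the cap itself is a one-liner (queue name invented); once CURDEPTH reaches it, further puts fail with MQRC 2053 (MQRC_Q_FULL), which is what forces the batch to segment.
Code:
ALTER QLOCAL(BATCH.INPUT.QUEUE) MAXDEPTH(400000)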
Vitor
Posted: Tue Oct 02, 2012 10:29 am
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
SAFraser wrote:
You just can't buy entertainment like this.
At least it gives us a ready supply of war stories at the Integration Self Help Group Annual Drinking And Sitting In A Corner Rocking Backwards & Forwards Event.
PeterPotkay
Posted: Tue Oct 02, 2012 2:41 pm
Poobah
Joined: 15 May 2001 Posts: 7722
One could argue the main point of MQ is to handle situations where the consumer is not as fast as the producer, to the extreme that the consumer is not even up. Take the queuing out of MQ and all we have is M.
Although it's plain wrong for the connection to puke and new connections to fail simply because a queue got deep. Slower performance, OK. Failures, no.
mvic
Posted: Tue Oct 02, 2012 3:16 pm
Jedi
Joined: 09 Mar 2004 Posts: 2080
SAFraser wrote:
large numbers of messages (on Solaris) will cause a performance hit due to under-the-covers indexing of the queue.
Indexing is not really the right word, but essentially you are right. When the queue needs to be loaded, it takes time. The time is roughly proportional to the depth of the queue, and to the disk I/O seek+read times you are achieving on your system.
Quote:
Also learned, from this same PMR, that the entire contents of the queue are loaded into memory ...
Not quite true: only a small piece of data for each message is loaded, not the whole message.
Quote:
... upon connection of a listener to the queue.
Only if the queue had become "unloaded" for some reason, e.g. no apps using it for a long while. If your apps continue to use the queue, it does not get unloaded, and you do not need to suffer this long re-load time.
Even so, it is still preferable to keep queues shallow with MQ. Or maybe (if your current policy does not work out) give this application team their own queue manager, and allow them access to the rest of the estate only via sdr/rcvr channels.
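A rough MQSC sketch of that last suggestion, with invented names (APPQM for the application team's queue manager, HUBQM for the existing estate), leaving out the listener and security plumbing:
Code:
* On APPQM: transmission queue and sender channel towards the main estate
DEFINE QLOCAL(HUBQM) USAGE(XMITQ) REPLACE
DEFINE CHANNEL(APPQM.TO.HUBQM) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('hubhost(1414)') XMITQ(HUBQM) REPLACE
* On HUBQM: the matching receiver channel
DEFINE CHANNEL(APPQM.TO.HUBQM) CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE
That way the deep queues (and any long queue-load stalls) stay on APPQM, and the rest of the estate only sees whatever trickles across the channel.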