HenriqueS
Posted: Wed Apr 04, 2012 12:54 pm    Post subject: Performance questions
Master
Joined: 22 Sep 2006    Posts: 235
I know this is a performance MONITORING forum, but I would like to raise two questions specifically regarding MQ performance.
1) How do I calculate the optimal size for my log files, and how many primary/secondary logs should I have?
2) Is FOUR seconds for a BROWSE based on MQID on a queue with 100.000 depth a reasonable wait?
bruce2359
Posted: Wed Apr 04, 2012 1:37 pm    Post subject: Re: Performance questions
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
HenriqueS wrote:
    2) Is FOUR seconds for a BROWSE based on MQID on a queue with 100.000 depth a reasonable wait?

MQID?
Are you saying that you have an app that browses every message in a queue searching for MsgId? Or are you attempting to match on MsgId?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
gbaddeley
Posted: Wed Apr 04, 2012 4:06 pm    Post subject: Re: Performance questions
Jedi Knight
Joined: 25 Mar 2003    Posts: 2538    Location: Melbourne, Australia
HenriqueS wrote:
    I know this is a performance MONITORING forum, but I would like to raise two questions specifically regarding MQ performance.
    1) How do I calculate the optimal size for my log files, and how many primary/secondary logs should I have?
    2) Is FOUR seconds for a BROWSE based on MQID on a queue with 100.000 depth a reasonable wait?

Log file sizing does not affect the performance of browsing a queue. Sizing mainly depends on the volume and size of persistent messages, UOW lifetimes, and your requirements for recoverability.
Browsing a deep queue (say, over a few thousand messages) for a particular message (even if MQ does automatic indexing on MsgId, CorrelId, GroupId, MsgToken) is not a good design choice. Why do you need to do that?
_________________
Glenn
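As a back-of-the-envelope illustration of the sizing factors listed above (persistent message volume, UOW lifetimes), here is a rough calculator. The formula, parameter names, and defaults are illustrative assumptions, not an official IBM sizing method; the real procedure is in the product documentation.

```python
import math

PAGE_SIZE = 4096  # a distributed-MQ log page is 4 KB (LogFilePages counts these)

def primary_log_files_needed(peak_persistent_bytes_per_sec, longest_uow_secs,
                             log_file_pages=16384, headroom=2.0):
    # Assumption for illustration: the active log must be able to hold all
    # persistent data written during the longest-running unit of work,
    # with headroom for message headers and log-record overhead.
    file_bytes = log_file_pages * PAGE_SIZE
    needed_bytes = peak_persistent_bytes_per_sec * longest_uow_secs * headroom
    return max(3, math.ceil(needed_bytes / file_bytes))

# e.g. 5 MB/s of persistent data, a 30 s longest UOW, 64 MB log files
print(primary_log_files_needed(5 * 1024**2, 30))  # -> 5
```

The `max(3, ...)` floor is only a placeholder for a sensible minimum; real sizing should follow the queue manager's documented limits for primary and secondary log files.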
fjb_saper
Posted: Wed Apr 04, 2012 9:07 pm
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI,NY
Why use browse and not get from within a UOW?
_________________
MQ & Broker admin
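A toy sketch in plain Python (not the MQI) of why a destructive get inside a unit of work avoids the browse: the message leaves the queue immediately, but a backout restores it if processing fails, so there is no need to browse first and get later.

```python
# Toy model of get-under-syncpoint semantics; class and method names are
# illustrative, not part of any MQ API.
class ToyQueue:
    def __init__(self, messages):
        self.messages = list(messages)
        self.in_flight = []

    def get_under_syncpoint(self):
        msg = self.messages.pop(0)   # removed from the queue at once
        self.in_flight.append(msg)   # but held until commit/backout
        return msg

    def commit(self):
        self.in_flight.clear()       # removal becomes permanent

    def backout(self):
        # processing failed: messages reappear at the head of the queue
        self.messages = self.in_flight + self.messages
        self.in_flight = []

q = ToyQueue(["m1", "m2"])
assert q.get_under_syncpoint() == "m1"
q.backout()                          # simulate a processing failure
assert q.messages == ["m1", "m2"]    # m1 is back on the queue
```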
PeterPotkay
Posted: Thu Apr 05, 2012 4:01 am
Poobah
Joined: 15 May 2001    Posts: 7722
While all that is true, 4 seconds to get a message out of a queue with only 100 messages is not right.
The System Admin Guide has a section on sizing your log capacity.
_________________
Peter Potkay
Keep Calm and MQ On
HenriqueS
Posted: Thu Apr 05, 2012 4:43 am    Post subject: Re: Performance questions
Master
Joined: 22 Sep 2006    Posts: 235
I am seeing these two issues as separate ones, do not worry. My need is to calculate the proper size for the log files to conform to our daily throughput.
And yes, browsing through a 100k-depth queue is sick; we were offloading this queue only at night, but we are changing this scheme...
gbaddeley wrote:
    HenriqueS wrote:
        I know this is a performance MONITORING forum, but I would like to raise two questions specifically regarding MQ performance.
        1) How do I calculate the optimal size for my log files, and how many primary/secondary logs should I have?
        2) Is FOUR seconds for a BROWSE based on MQID on a queue with 100.000 depth a reasonable wait?

    Log file sizing does not affect the performance of browsing a queue. Sizing mainly depends on the volume and size of persistent messages, UOW lifetimes, and your requirements for recoverability.
    Browsing a deep queue (say, over a few thousand messages) for a particular message (even if MQ does automatic indexing on MsgId, CorrelId, GroupId, MsgToken) is not a good design choice. Why do you need to do that?
HenriqueS
Posted: Thu Apr 05, 2012 4:44 am
Master
Joined: 22 Sep 2006    Posts: 235
I meant 100k messages.
Thanks for the log file calculation pointer.
PeterPotkay wrote:
    While all that is true, 4 seconds to get a message out of a queue with only 100 messages is not right.
    The System Admin Guide has a section on sizing your log capacity.
PeterPotkay
Posted: Thu Apr 05, 2012 7:20 am
Poobah
Joined: 15 May 2001    Posts: 7722
My bad. You did write 100.000, which in Europe is the same as 100,000 here in the US.
4 seconds to browse a queue with 100K messages looking for a specific MsgId? That is not unrealistic. Check the QM logs the first time you open that queue with 100K messages in it. There's your answer as to what's taking so long.
_________________
Peter Potkay
Keep Calm and MQ On
bruce2359
Posted: Thu Apr 05, 2012 9:39 am
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
I repeat my question:
Are you saying that you have an app that browses each and every message in a queue searching for a specific MsgId?
Or are you attempting to match on MsgId using WMQ's built-in match option?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
HenriqueS
Posted: Thu Apr 05, 2012 9:57 am
Master
Joined: 22 Sep 2006    Posts: 235
I contacted the developer a few minutes ago, and he sent me the source code. He is using the matching feature for the get operation.
bruce2359 wrote:
    I repeat my question:
    Are you saying that you have an app that browses each and every message in a queue searching for a specific MsgId?
    Or are you attempting to match on MsgId using WMQ's built-in match option?
bruce2359
Posted: Thu Apr 05, 2012 10:07 am
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
Like any other MQGET, messages must be in the queue for the match option to proceed.
Keep in mind that queues are architected objects that exist in virtual storage (memory).
Gets and puts are actions on queues in virtual storage, not on disk. WMQ buffer management components take care of moving messages between buffers (virtual storage) and disk.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
mqjeff
Posted: Thu Apr 05, 2012 11:27 am
Grand Master
Joined: 25 Jun 2008    Posts: 17447
Every operation on a queue takes longer as the qdepth increases. This includes Get-with-Match-On-MsgId.
The Best Queue Depth is 0. If you have a queue with qdepth > 0, then you do not have enough copies of the receiving application running to handle the workload being produced.
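The point above can be seen with a simplified model: without an index, matching a specific MsgId amounts to a front-to-back scan, so the cost grows linearly with queue depth. This models the observable behavior only, not MQ's actual internals.

```python
# Simplified model: a queue as a list, match-on-MsgId as a linear scan.
def get_by_msgid(queue, msgid):
    """Scan front to back; remove and return the first matching message,
    along with how many messages had to be inspected."""
    for i, msg in enumerate(queue):
        if msg["msgid"] == msgid:
            return queue.pop(i), i + 1
    return None, len(queue)

shallow = [{"msgid": f"ID{n}", "body": "x"} for n in range(100)]
deep = [{"msgid": f"ID{n}", "body": "x"} for n in range(100_000)]

_, cost_shallow = get_by_msgid(shallow, "ID99")
_, cost_deep = get_by_msgid(deep, "ID99999")
print(cost_shallow, cost_deep)  # 100 vs 100000 messages inspected
```

The same MQGET call therefore pays a very different price at depth 100 than at depth 100,000 when the wanted message sits near the back.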
fjb_saper
Posted: Thu Apr 05, 2012 7:44 pm
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI,NY
HenriqueS wrote:
    I contacted the developer a few minutes ago, and he sent me the source code. He is using the matching feature for the get operation.

I believe this procedure needs to be challenged and rethought.
What specifically is the developer looking for, and why does he use the matching option in a batch-type operation?
Also, how does the developer go about his dequeues (FIFO or LIFO)?
This too can make a tremendous difference when using the matching option.
Have fun.
_________________
MQ & Broker admin
PeterPotkay
Posted: Fri Apr 06, 2012 6:09 am
Poobah
Joined: 15 May 2001    Posts: 7722
mqjeff wrote:
    If you have a queue with qdepth > 0, then you do not have enough copies of the receiving application running to handle the workload being produced.

I don't know if I would go that far. That statement is a bit too generic. On Page 1 of Day 1 of MQ 101, the example given is the sending app that is up all day and the processing/receiving app that only comes up for an hour at midnight. Queuing is expected, and it is what you bought MQ for.
But these types of receiving apps typically process the queue FIFO and don't care how deep it is, because they only care about the first/next message.
If the consumer and sender apps are both up all day long and the queue depth is rising, then yes, you need more consuming instances.
If you are browsing a deep queue looking for a specific message, either you are in a rare error situation where you stopped the receiver app to find and yank a bad message, in which case it can be argued that performance is not critical; or you are doing this all day, every day, in which case you are using MQ as a database, and the poor performance is your punishment/reminder that MQ is being misused.
_________________
Peter Potkay
Keep Calm and MQ On
Andyh
Posted: Wed Aug 29, 2012 6:32 am
Master
Joined: 29 Jul 2010    Posts: 239
MQ on the distributed platforms is optimized for MQGET by CorrelId when searching for a specific message.
If you're able to change from using MsgId to CorrelId, you are likely to see a very considerable improvement in the latency of the MQGET when selecting a message from a deep queue.
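The suggestion above, modeled in miniature: if selection is keyed on an indexed field, lookup cost stops depending on queue depth. The dict-based index here is an illustrative stand-in, not MQ's actual data structure.

```python
from collections import defaultdict

# Simplified model of an indexed queue: a dict keyed on CorrelId points
# straight at the matching messages, so selection needs no front-to-back
# scan of the whole queue.
class IndexedQueue:
    def __init__(self):
        self.by_correlid = defaultdict(list)

    def put(self, correlid, body):
        self.by_correlid[correlid].append(body)

    def get_by_correlid(self, correlid):
        msgs = self.by_correlid.get(correlid)
        return msgs.pop(0) if msgs else None  # one hash lookup, any depth

q = IndexedQueue()
for n in range(100_000):
    q.put(f"C{n}", f"body{n}")
print(q.get_by_correlid("C99999"))  # found without scanning 100k messages
```

Compare this with the linear-scan model earlier in the thread: the indexed lookup does the same amount of work at depth 100 and at depth 100,000.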