rtsujimoto
Posted: Wed Apr 14, 2010 6:12 am Post subject:
Centurion
Joined: 16 Jun 2004 Posts: 119 Location: Lake Success, NY

FWIW, I've been credited with quotes that I did not make. Looks like a cut-and-paste mistake.
jonow
Posted: Wed Apr 14, 2010 2:19 pm Post subject:
Newbie
Joined: 12 Apr 2010 Posts: 8

Quote:
I repeat: Did I miss a description of the physical channel between the sender and receiver?

Sorry, missed this one - the network is dual bonded 1Gb Ethernet.
We observed throughput (again via NMON) of over 120MB/s for other applications on the same network devices, so the capacity was there.
As this was a benchmark test and the system is now being torn down, we did not have a chance to improve the throughput. One of the issues we faced was that the directory for XMIT queue storage was on a slow system-attached disk, whereas the log directory was on an attached RAID 5 DS5300. Thus when writing to the XMITQ at 70-80MB/s, the local disk showed 100% utilization, whereas the disk containing the MQ log was writing the same amount of data at 10-15% utilization (all via NMON). Interestingly, we saw higher MQPUT rates to the XMITQ when message persistence was used! Not sure I understand why..
Anyway, I am just an application developer, and if there is one thing I learned (well, knew already.. but it isn't my call) it is that if you want to see amazing numbers, get an experienced MQ admin to set it up for you!
jonow
Posted: Wed Apr 14, 2010 2:23 pm Post subject:
Newbie
Joined: 12 Apr 2010 Posts: 8

Quote:
Did they provide a documentation link that explains it / identifies its existence?

Have asked for it but no response yet.
bruce2359
Posted: Wed Apr 14, 2010 3:10 pm Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.

Non-persistent messages are not logged as they are MQPUT to a queue. If your sender channel attribute is set to NPMSPEED(FAST), then non-persistent messages will be sent outside units of work, thus no logging.
Quote:
Interestingly, we saw higher MQPUTs to the XMITQ when msg persistence was used in the messages! Not sure I understand why..

I don't understand what "higher MQPUTs" means. Do you mean more of them? Or longer duration?
Message persistence is a message attribute (in the MQMD). When you say '...when message persistence was used in the message...', are you saying that your app created persistent messages?
When you write persistent messages, on the sending end they are logged at MQPUT, and logged again when the MCA MQGETs them from the xmit queue. On the receiving end the message is logged again just before the MCA MQPUTs it on the destination queue.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
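For anyone wanting to verify this on their own system: NPMSPEED is a sender/receiver channel attribute set in MQSC. A minimal sketch (the channel, queue manager, and host names here are made up for illustration; FAST is already the default):

```
* Sender channel sending non-persistent messages outside units of work
DEFINE CHANNEL(TO.QM2) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('host2(1414)') XMITQ(QM2) +
       NPMSPEED(FAST)

* Check the current setting on an existing channel
DISPLAY CHANNEL(TO.QM2) NPMSPEED
```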
gbaddeley
Posted: Wed Apr 14, 2010 4:33 pm Post subject:
Jedi Knight
Joined: 25 Mar 2003 Posts: 2538 Location: Melbourne, Australia

jonow wrote:
... One of the issues we faced was the directory for XMIT queue storage was on a slow system attached disk, whereas the log directory was on an attached raid 5 DS5300. Thus when writing to the XMITQ at 70-80MB/s, the local disk showed 100% utilization, whereas the disk containing the MQ log was writing the same amount of data at 10-15% utilization (all via NMON).

Anywhere you see 100% utilization is likely to indicate a bottleneck that needs to be investigated and relieved. With MQ, it's usually some system resource (disk throughput, CPU, memory, network bandwidth), badly behaved applications, or other s/w infrastructure that is at fault. Poor performance in native MQ is actually quite rare.
_________________
Glenn
rtsujimoto
Posted: Thu Apr 15, 2010 6:27 am Post subject:
Centurion
Joined: 16 Jun 2004 Posts: 119 Location: Lake Success, NY

Here's a thought. It involves your BATCHSZ value as well as the processing of NP messages. I believe the default BATCHSZ value is 50, whereas you have it set to 500. The effect is a buildup of a larger chunk of data before it gets transmitted. MQ tries to keep NP messages in memory, but if the buffer gets saturated, MQ will harden them (i.e. write them to disk). So you may be incurring more disk I/O than anticipated, which is bad for NP messages. Next, the buildup finally gets transmitted, causing a surge of data on the other side. This too may result in NP messages being hardened. You may want to lower the BATCHSZ value to force a more even flow of data and to avoid buffer overflows to disk.
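If you get a chance to re-run the benchmark, lowering BATCHSZ is a one-line MQSC change on the sender channel. A sketch, with an illustrative channel name (the new value takes effect the next time the channel starts):

```
* Drop the batch size from 500 back toward the default of 50
ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) BATCHSZ(50)

* Restart the channel so the new value is picked up
STOP CHANNEL(TO.QM2)
START CHANNEL(TO.QM2)
```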
gunter
Posted: Mon Apr 19, 2010 11:00 am Post subject:
Partisan
Joined: 21 Jan 2004 Posts: 307 Location: Germany, Frankfurt

Maybe the same problem exists with the commit interval.
500kB * 128 ~ 64 MB
DefaultQBufferSize 10485760 ~ 10 MB
The queue manager cannot hold the data in its memory.
DefaultPQBufferSize is not used, but may waste memory(???).
Max DefaultQBufferSize is 100 MB (MQ 6.0); maybe it's increased in 7.0.
Both parameters take effect only if set before creating the queue.
Is there a way to check the effective value for a queue?
_________________
Gunter Jeschawitz
IBM Certified System Administrator - Websphere MQ, 5.3
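The arithmetic above can be sketched out explicitly. This is only a back-of-envelope check with figures taken from this thread; the 128 in-flight messages and the kB = 1024 bytes convention are assumptions:

```python
# Can the default queue buffer hold the data in flight?
msg_size_bytes = 500 * 1024          # ~500 kB per message (from the thread)
msgs_in_flight = 128                 # assumed number of uncommitted messages
needed = msg_size_bytes * msgs_in_flight

default_q_buffer = 10 * 1024 * 1024  # DefaultQBufferSize ~ 10 MB

print(f"needed: {needed / 2**20:.1f} MB, buffer: {default_q_buffer / 2**20:.1f} MB")
print("spills past the buffer" if needed > default_q_buffer else "fits in the buffer")
```

By this rough count the in-flight data (~62.5 MB) overflows the 10 MB buffer several times over, so non-persistent messages would be hardened to disk. Note that DefaultQBufferSize is read from the TuningParameters stanza of qm.ini, and only for queues created after the change.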
bruce2359
Posted: Mon Apr 19, 2010 4:23 pm Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.

Quote:
We observed throughput (again via NMON) of over 120MB/s for other applications on the same network devices, so the capacity was there.

Oddly worded. Are you saying that this was for other, non-MQ applications?
Like FTP, for example?
JLRowe
Posted: Tue Apr 20, 2010 2:37 am Post subject:
Yatiri
Joined: 25 May 2002 Posts: 664 Location: South East London

- Turn on channel interleaving
- Increase the batch size as big as you dare
- Channel compression may help if you have lots of spare CPU
- Jumbo frames on your gigabit LAN
- Increase the priority of the MCAs
The first 2 points are likely to make the largest difference.
And that's all you can do, really - MQ is not really designed to saturate a gigabit link. Perhaps if you ran several channels you might get to the theoretical throughput, but not with a single channel.
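For the first point: channel interleaving (pipelining, i.e. an extra thread per MCA so message transfer and network I/O overlap) is enabled in the Channels stanza of qm.ini. A minimal sketch; channels must be restarted before it takes effect:

```
Channels:
   PipeLineLength=2
```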
bruce2359
Posted: Tue Apr 20, 2010 5:21 am Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.

Quote:
MQ is not really designed to saturate a gigabit link

WMQ makes no value judgements or technical adjustments based on bandwidth or other resource availability.
As with any other application, WMQ competes for, and will use, resources (CPU, RAM, network) as available.
jonow
Posted: Tue Apr 20, 2010 10:21 am Post subject:
Newbie
Joined: 12 Apr 2010 Posts: 8

rtsujimoto wrote:
Here's a thought. It involves your BATCHSZ value as well as the processing of NP messages. ... You may want to lower the BATCHSZ value to force a more even flow of data and to avoid buffer overflows to disk.

Thanks, that is a really useful suggestion. In future I will consider this.
Cheers,
Jono
jonow
Posted: Tue Apr 20, 2010 10:24 am Post subject:
Newbie
Joined: 12 Apr 2010 Posts: 8

bruce2359 wrote:
I don't understand what higher MQPUTs means. Do you mean more of them? Or longer duration?
Message persistence is a message attribute (in the MQMD). When you say '...when message persistence was used in the message...', are you saying that your app created Persistent messages?

higher MQPUTs - I meant more MQPUTs per second.
message persistence in message - yes, the app was changed to create persistent messages.
Man, I have to use the correct lingo..!
jonow
Posted: Tue Apr 20, 2010 10:27 am Post subject:
Newbie
Joined: 12 Apr 2010 Posts: 8

bruce2359 wrote:
Oddly worded. Are you saying that for other non-MQ applications ...?
Like FTP, for example?

DB2
Michael Dag
Posted: Tue Apr 20, 2010 10:44 am Post subject:
Jedi Knight
Joined: 13 Jun 2002 Posts: 2607 Location: The Netherlands (Amsterdam)

Just curious: what type of business application (the type, if the name is not allowed, of course...) is spitting out these numbers and sizes of messages, and what are they (or how would you describe these messages, if you cannot name them)?
Or have you reinvented file transfer over MQ?
_________________
Michael
MQSystems Facebook page