MQ channel performance scalability

sunny_30
PostPosted: Thu Jul 15, 2010 7:05 pm    Post subject: MQ channel performance scalability

Hi,

I have a question about the performance of MQ message channels (a sender-receiver pair) connecting two queue managers.

Does channel performance automatically scale to use all available network bandwidth?

Even though it is unrealistic, please assume:
1) On the sender side, messages are put at very high rates, or even that the remote queue is preloaded with thousands of large messages before the channel is started
2) No scarcity of resources: system resources (disk, logs, RAM, CPU, page space, etc.) are ample and automatically scalable
3) The delay/onus is always on the channel transmission
4) Plenty of available network bandwidth, no throttles, no network profiling in place

What I want to know is: does the channel pair's performance automatically scale up (taking as much bandwidth and resources as it needs) to drain the XMITQ as fast as it can, or is there a hard limit/throttle on a message channel's performance?

Please share your thoughts.

Thanks in advance

-Sunny
fjb_saper
PostPosted: Thu Jul 15, 2010 8:41 pm

It is completely unrealistic to gauge performance by looking at only one end of the channel.
You need to have both ends running to see what the throughput can be.
Then check how you would do with two channels instead of one; you may be surprised to find that you get more throughput.

To tweak throughput you may also adjust the channel's batch size.
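A minimal MQSC sketch of that batch-size tweak, assuming a sender channel named TO.QM2 (a hypothetical name); the new value is picked up when the channel restarts:

Code:
* on the sending queue manager
ALTER CHANNEL('TO.QM2') CHLTYPE(SDR) BATCHSZ(100)
STOP CHANNEL('TO.QM2')
START CHANNEL('TO.QM2')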

It looks like you have a mainframe on one side... It may all depend on the capabilities of the other side...

Have fun
_________________
MQ & Broker admin
sunny_30
PostPosted: Fri Jul 16, 2010 6:37 am

fjb_saper,
Thanks for your response.

Correct: I meant to assume that there is no shortage of resources on either the sender or receiver end of MQ.
I am trying to understand how MQ actually works, i.e. whether channel performance scales automatically with the available resources (both network and system).

Quote:
you may get the surprise that you get more throughput

You nailed it. Yes we did. That's exactly the behavior I'm noticing, contrary to expectations.

Let me explain:
We have an MQ server-to-server connection using sender-receiver message channels linking two remote queue managers over a WAN.
We have done a lot of throughput testing using the JMS perfharness tool.

All messages are non-persistent, with compression enabled over MQ.
Channels run using two threads (PipeLineLength=2).
High-performance queue managers are used (at least we think they are). Circular logging. MQ v7.
The OS is AIX; all soft ulimits are set to "unlimited".
Ample disk space and memory. Queue buffer sizes of 1 MB.
Assuming BATCHSZ has no effect on non-persistent messages, we used the default of 50. NPMSPEED is set to FAST. The batch interval (BATCHINT) is set to 0.
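For reference, a sketch of roughly how that configuration would be expressed; the channel name is hypothetical, PipeLineLength and the queue buffer size live in qm.ini, and the rest sits on the sender channel definition:

Code:
# qm.ini on the sending queue manager
Channels:
   PipeLineLength=2
TuningParameters:
   DefaultQBufferSize=1048576

* MQSC, sender side
ALTER CHANNEL('TO.QM2') CHLTYPE(SDR) +
      BATCHSZ(50) BATCHINT(0) NPMSPEED(FAST) COMPMSG(ZLIBFAST)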

What we noticed is that PipeLineLength didn't alter the throughput at all.
A lot of bandwidth was available at the network level, but the channel appeared to have a hard limit on how much of it it could use.
We looked at the network utilization graphs. It seems that no matter how much data is put on the sender side, per-channel throughput does not change at all. All we see is the XMITQ depth increasing on the sender side.
We are surprised by MQ's scalability behavior.

Increasing to 5 channel pairs gave us exactly a 5x improvement in throughput.
Using multiple listeners made no difference.
Even when tests are run during peak time, the per-channel throughput still remains the same.

What I am trying to understand is whether this is how MQ is designed to behave, or whether there is a cap at the network level that we are unaware of, or maybe some MQ setting that we need to alter.

We are working on post-analysis of our tests. At this point we are stuck on what to ask of the network team, because we don't have a clear understanding of where the bottleneck is.

Please help.
fjb_saper
PostPosted: Fri Jul 16, 2010 6:42 am

Look at the bottleneck on disk. Units of work are handled by the channel.
Check disk speed and storage capability at the log level and the queue level on both the sending and receiving sides. ... You might find some explanation there.

Also think about the maximum speed on the disk controller channel.
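A quick sketch of how one might look for that on AIX (sample intervals and counts are arbitrary):

Code:
# extended per-disk statistics, six 5-second samples
iostat -D 5 6
# memory, paging and I/O wait overview
vmstat 5 6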
_________________
MQ & Broker admin
PeterPotkay
PostPosted: Fri Jul 16, 2010 2:14 pm

QM1 and QM2 have channels between them. On QM1, make a remote queue that aims at a remote queue on QM2, which in turn aims back at the remote queue on QM1: an endless loop. Drop one non-persistent message into the remote queue. As the message loops back and forth, it will give you a good idea of how fast MQ can move the message, and it eliminates any bottlenecks that your apps may be introducing. I would expect this message to loop back and forth entirely in memory if the message is non-persistent and the channel speed is set to fast.

Put-inhibit one of the remote queues to stop the loop.
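A minimal MQSC sketch of that loop, assuming the channels and transmission queues between QM1 and QM2 already exist (the queue and XMITQ names are hypothetical):

Code:
* On QM1: LOOP.Q resolves to LOOP.Q on QM2
DEFINE QREMOTE('LOOP.Q') RNAME('LOOP.Q') RQMNAME('QM2') XMITQ('QM2.XMITQ')

* On QM2: LOOP.Q resolves back to LOOP.Q on QM1
DEFINE QREMOTE('LOOP.Q') RNAME('LOOP.Q') RQMNAME('QM1') XMITQ('QM1.XMITQ')

* Put one non-persistent message to LOOP.Q on either side to start
* the loop; to stop it, put-inhibit one of the remote queues:
ALTER QREMOTE('LOOP.Q') PUT(INHIBITED)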
_________________
Peter Potkay
Keep Calm and MQ On
sunny_30
PostPosted: Sun Jul 18, 2010 12:51 pm

saper:
Quote:
Look at the bottleneck on disk

We don't see any problem with disk I/O (for now).
We looked at vmstat and nmon reports during the tests; they don't show any disk-level contention.

peter:
Quote:
As the message loops back and forth it will give you a good idea how fast MQ can move the message

This is a great idea! But how can we calculate the transit times using this approach: by running MQ trace?
More importantly, once we get the results, how do we deduce whether there is room for more improvement in per-channel throughput?
(keeping the same environment)

I ran some tests over the weekend to monitor the network at the TCP level:
For our tests, iptrace showed single-channel performance to be largely bounded by OS-level (AIX) TCP socket parameters such as the path MTU size (1500), the TCP send/receive buffer sizes (320 KB), and ipqmaxlen (100) at the IP layer.
These socket-level settings appear to define the TCP/IP hard limit for a single MQ channel's performance at the process level. We might get more performance if the network-level settings were modified, but we certainly don't want to risk altering the default values, especially in our case where multiple remote routers and Ethernet interfaces are involved across the WAN.
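For anyone who wants to inspect those limits rather than change them, a sketch (the AIX network options are shown read-only here; the SndBuffSize/RcvBuffSize values in the queue manager's qm.ini TCP stanza are illustrative assumptions, not recommendations):

Code:
# AIX: display the current TCP socket buffer and IP queue settings
no -o tcp_sendspace
no -o tcp_recvspace
no -o ipqmaxlen

# qm.ini: per-queue-manager socket buffers used by channels
TCP:
   SndBuffSize=262144
   RcvBuffSize=262144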

Both MQ compression types, ZLIBFAST and ZLIBHIGH, have shown roughly a 13x increase in throughput over the WAN. Having multiple channels (each with a dedicated XMITQ) looks to be the key to increasing overall usage of the available bandwidth. We used 5 parallel channels, which resulted in almost a 5x increase in throughput (during normal network activity hours) and didn't saturate the overall network either.
We also tested a single-channel connection during peak network hours (WAN utilization graphs later in the day showed the whole available bandwidth actually occupied), which didn't alter the per-channel throughput at all.
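A sketch of adding one such parallel channel with its own dedicated transmission queue; all names and the CONNAME are hypothetical, a matching receiver channel is needed on the remote queue manager, and applications reach it through a remote queue bound to the new XMITQ:

Code:
DEFINE QLOCAL('QM2.XMITQ.2') USAGE(XMITQ)
DEFINE CHANNEL('TO.QM2.2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('qm2host(1414)') XMITQ('QM2.XMITQ.2') +
       COMPMSG(ZLIBFAST) NPMSPEED(FAST)
DEFINE QREMOTE('APP.Q.2') RNAME('APP.Q') RQMNAME('QM2') XMITQ('QM2.XMITQ.2')
START CHANNEL('TO.QM2.2')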

However, as I dig more into the network and OS layer intricacies, my initial assumption (the topic subject) that a single MQ channel would automatically scale up to consume the available bandwidth is turning out to be both naive and over-ambitious thinking on my part...

Any suggestions are welcome.
Thanks for your replies.
PeterPotkay
PostPosted: Sun Jul 18, 2010 3:50 pm

sunny_30 wrote:
peter:
Quote:
As the message loops back and forth it will give you a good idea how fast MQ can move the message

This is a great idea! But how can we calculate the transit times using this approach: by running MQ trace?
More importantly, once we get the results, how do we deduce whether there is room for more improvement in per-channel throughput?
(keeping the same environment)

Watch the Message Bytes Sent attribute of the channel. Fiddle with the MQ channel settings and repeat the test. Keep fiddling and repeating until you see that the channel is performing as well as it can.
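That attribute is visible in the channel status. A sketch of sampling it from runmqsc (the channel name is hypothetical); take two readings a known interval apart and divide the byte delta by the elapsed time:

Code:
DISPLAY CHSTATUS('TO.QM2') BYTSSNT MSGS BATCHES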

With non-persistent messages, on a fast channel, with batch interval set to zero, I'd be curious to know whether you can make it perform any better with MQ settings alone. If the messages are large character data and you have ample CPU cycles, then channel compression settings will probably also help.
_________________
Peter Potkay
Keep Calm and MQ On
bruce2359
PostPosted: Sun Jul 18, 2010 4:59 pm

Quote:
Both MQ type compressions: ZLIBFAST & ZLIBHIGH...

Are you looking solely at network bandwidth, to the exclusion of CPU at both ends for compression?

Is this primarily a study of the network, or of overall throughput?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
gbaddeley
PostPosted: Mon Jul 19, 2010 4:01 pm

In the real world, it's usually application design and behaviour that determine maximum transaction throughput, e.g. hosting environments (JVMs are notorious), application processing, database access, integration adapters. MQ performance is often called into question when applications are running slowly, but it is rare to find that MQ is the bottleneck.
_________________
Glenn
bruce2359
PostPosted: Tue Jul 20, 2010 6:01 am

Quote:
...my initial assumption (the topic subject) that a single MQ channel would automatically scale up to consume the available bandwidth is turning out to be both naive and over-ambitious thinking on my part...

WMQ neither scales up nor scales down based on bandwidth.

Are you thinking that WMQ channel ends behave like modems and negotiate (down to) the highest transmission rate based on transmission success/failure? Not so.

WMQ channels are abstracted from the underlying network-layer transmission protocol (TCP/IP, VTAM LU6.2) and from the MAC layer (Ethernet, token ring, wireless, ATM, etc.).

WMQ will transmit messages as soon as they arrive on a transmission queue and conditions such as batch interval and batch size are met. These channel attributes (and others) can be tuned by the sysadmin to maximize message channel throughput.

While technically not a network issue, the application's choice of persistence influences effective aggregate throughput, because persistent messages require pre-transmission and post-transmission logging, along with pre- and post-batch commit/backout negotiation by the channel-end MCAs.
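To illustrate the persistence point, a minimal MQSC sketch (the queue name is hypothetical): messages that take the queue default are then non-persistent and avoid log I/O entirely.

Code:
* non-persistent by default; applications that put with
* MQPER_PERSISTENCE_AS_Q_DEF inherit this setting
DEFINE QLOCAL('APP.Q') DEFPSIST(NO)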
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
sunny_30
PostPosted: Wed Jul 21, 2010 7:07 am

Quote:
Are you solely looking at network bandwidth - to the exclusion of cpu at both ends for compression?

We are stress-testing the WAN connecting two continents. We don't see CPU as an issue: peak usage doesn't exceed 25% of the CPU pool available on either MQ server. The same MQ channel settings over the LAN produced about 4.5 times the throughput at around the same level of CPU usage. So yes, our focus is primarily on improving per-channel WAN network throughput.

Quote:
WMQ neither scales up, nor scales down, based on bandwidth.

Thank you for the confirmation. We clearly observe the same.
The network had about 30 Mbps of available bandwidth; a single channel does not exceed 2.3 Mbps.

Here are some numbers:
Over the WAN, ZLIBHIGH-compressed 4 MB XML data produced an effective data rate of ~30 Mbps (non-persistent, default BATCHSZ of 50, BATCHINT 0, circular logging, single channel connection).
PipeLineLength 1 and 2 both produce the same result. The real post-compression volume traveling over the network is ~2.3 Mbps, which is where the ~13x compression gain comes from (30 / 2.3 ≈ 13).
Five channels (with compression) result in ~150 Mbps effective throughput.
Over the LAN, 5 parallel channels increased throughput by about 2.5 times.

Thanks again