Channel speed
jeevan (Grand Master)
Joined: 12 Nov 2005, Posts: 1432
Posted: Tue Jun 24, 2008 8:12 am    Post subject: Channel speed
In one of our recent tests, where a lot of messages were sent to a queue, I realised that messages were piling up on the xmit queue. I am not sure how many messages can pass per minute/per second through a channel or queue - where can I find this info?
The scenario is like this:
A client connects to a cluster queue manager which does not hold the actual physical queue. The client puts the message in. It then obviously travels through the cluster transmit queue and over the cluster channel.
What are the limitations of the channel in terms of speed? Or is it only a size limitation, as I left the default size? Or is there a speed limit as well? And what about a queue, as the messages are passing through a queue?
Could someone please shed light on this?
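A first step in a situation like this (a sketch, assuming `runmqsc` access on the qmgr the client connects to, and the default cluster transmission queue name) is to check how deep the xmitq actually is:

```
DIS QL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) CURDEPTH
```

Run this inside a `runmqsc <qmgr-name>` session and repeat it a few times; whether the depth is growing, shrinking, or steady tells you more than a single snapshot.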
bruce2359 (Poobah)
Joined: 05 Jan 2008, Posts: 9471, Location: US: west coast, almost. Otherwise, enroute.
Posted: Tue Jun 24, 2008 10:53 am
Some thoughts:
Physical channel bandwidth imposes a speed limitation; but this is not likely the real issue for you.
Clients require more network flows to ship the MQI calls and messages. Each mqput is a message flow across the channel. No batchsize/batchinterval. Client channels are not the stellar performers that point-to-point mq channels are.
Logging of persistent messages on the qmgr with the SVRCONN channel before the message is put to the xmit queue, AND the mqget from the xmit queue, add some delay. Then your message flows to the qmgr where the real queue is - logging again. Batchsize/batchinterval impact throughput.
Is there a disparity of performance between the two hardware platforms? Sending data from a mainframe to a Windows qmgr, the limitation will be the slower of the two.
Network errors will result in poor performance. These may be discovered at the TCP layer and redriven - mq may not find out about them if TCP/IP recovers (retransmits packets) successfully.
Compression and encryption add to the delay.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
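The batchsize/batchinterval attributes mentioned above can be displayed in MQSC (a sketch; the channel name here is a placeholder for your cluster sender channel):

```
DIS CHANNEL(TO.BACKEND.QMGR) BATCHSZ BATCHINT
```

The usual defaults are BATCHSZ(50) and BATCHINT(0), i.e. a batch ends as soon as the xmitq is empty or 50 messages have been sent, whichever comes first.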
jeevan (Grand Master)
Joined: 12 Nov 2005, Posts: 1432
Posted: Tue Jun 24, 2008 12:54 pm
bruce2359 wrote:
Some thoughts:
Physical channel bandwidth imposes a speed limitation; but this is not likely the real issue for you.
Clients require more network flows to ship the MQI calls and messages. Each mqput is a message flow across the channel. No batchsize/batchinterval. Client channels are not the stellar performers that point-to-point mq channels are.
Logging of persistent messages on the qmgr with the SVRCONN channel before the message is put to the xmit queue, AND the mqget from the xmit queue, add some delay. Then your message flows to the qmgr where the real queue is - logging again. Batchsize/batchinterval impact throughput.
Is there a disparity of performance between the two hardware platforms? Sending data from a mainframe to a Windows qmgr, the limitation will be the slower of the two.
Network errors will result in poor performance. These may be discovered at the TCP layer and redriven - mq may not find out about them if TCP/IP recovers (retransmits packets) successfully.
Compression and encryption add to the delay.
Bruce,
Thank you for the response.
Both the queue manager the client connects to and the one that holds the message (where the messages flow) are on distributed platforms. The connecting qmgr is on Windows 2003 and the backend (which holds the message) is a Solaris 10 LPAR (or zone).
Both of these have a similar hardware configuration. Both have 2 GB of RAM.
The dev/test folks are using a C++ based tool to send messages.
I realised that not all the messages they are sending reach the destination queue (the cluster queue) immediately. They are in transmission; messages are piled up on the cluster transmission queue. So I am curious to know why these messages are on the xmitq and not all going to the destination queue.
What are the factors that may impact this:
The GET function
What else?
Both the cluster xmitq and the cluster channel have default attributes. Nothing has been changed.
I would appreciate your insight.
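When messages sit on the cluster xmitq, one thing worth checking (a sketch; the channel name is a placeholder for your cluster sender channel) is whether the channel is actually running and how fast it is moving messages:

```
DIS CHSTATUS(TO.BACKEND.QMGR) STATUS MSGS BATCHES XMITQ
```

If no status is returned, the channel has never started; a RETRYING status points at network, listener, or definition problems on the receiving end, and MSGS/BATCHES show how much the current instance has actually shipped.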
Esa (Grand Master)
Joined: 22 May 2008, Posts: 1387, Location: Finland
Posted: Wed Jun 25, 2008 10:59 pm
How many queue managers do you have in your cluster? How many clients do you have active simultaneously?
I think you could try increasing the qmgr attributes maxchannels and maxactivechannels. It is possible that the cluster sender channel cannot start because it is waiting for a free slot. Or it could be the cluster receiver on the receiving end that cannot start a new instance for the same reason. You should check the attributes on both ends.
Esa
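On distributed platforms these limits live in the Channels stanza of qm.ini (a sketch; the values below are examples, not recommendations - size them for your own connection counts):

```
Channels:
   MaxChannels=200
   MaxActiveChannels=200
```

The queue manager needs a restart to pick up qm.ini changes, and as Esa says, the limits have to be adequate on both ends of the channel.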
fjb_saper (Grand High Poobah)
Joined: 18 Nov 2003, Posts: 20756, Location: LI,NY
Posted: Thu Jun 26, 2008 1:32 am
Looking at batch processing from a mainframe towards a distributed qmgr, we have also seen a case where using 2 point-to-point channels gave us better throughput than using only 1. This is a case of very high volume in a short time frame (over 1 million messages).
Enjoy
_________________
MQ & Broker admin
bruce2359 (Poobah)
Joined: 05 Jan 2008, Posts: 9471, Location: US: west coast, almost. Otherwise, enroute.
Posted: Thu Jun 26, 2008 6:41 am
jeevan wrote:
...message were piled on the xmit queue
Hmmm. Can you be a little more specific? Are there thousands of messages waiting on the xmit queue? Or hundreds? If you frequently display queue depth, does queue depth remain fairly constant over time? Or is it erratic?
What are the batchsize and batchinterval settings on both ends of the channel?
Do you see similar performance for your point-to-point channels? Any other symptoms? Any errors written to the error logs?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
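Watching the depth over time, as suggested above, can be done with a simple loop (a sketch, assuming a Unix shell and `runmqsc` on the PATH; the qmgr name QM1 is a placeholder):

```
while true
do
  echo "DIS QL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) CURDEPTH" | runmqsc QM1 | grep CURDEPTH
  sleep 5
done
```

A steadily climbing depth suggests the channel cannot keep up with the put rate; an erratic depth suggests bursty puts that the channel eventually drains.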