zpat |
Posted: Fri Nov 16, 2018 11:31 am Post subject: Improving throughput of sender channel? |
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
Assuming we have a sender channel to an external party over TCP/IP - both ends are z/OS MQ (one v7.1, one v8.0).
Most messages are around 2 KB and non-persistent. The xmit queue is held in a coupling facility for QSG failover reasons.
We seem to be reaching a maximum message throughput even though network bandwidth utilisation is no more than 30%.
What's the best way to improve throughput - for example:
1. Increase TCP/IP buffersize?
2. Run more than one sender channel in parallel (different xmit queues)?
3. Mess about with batchsize and such like?
Am I right to assume we can't run two sender channels (at the same time) pulling from the same xmit queue due to the way channel triggering works? _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error. |
bruce2359 |
Posted: Fri Nov 16, 2018 1:07 pm Post subject: |
 Poobah
Joined: 05 Jan 2008 Posts: 9470 Location: US: west coast, almost. Otherwise, enroute.
More details, please.
Is this a new problem? What has changed?
30% of what bandwidth?
How many messages in what time period? 1000/second, for example?
What are your batchsize and batchinterval values?
How many packet collisions? How far away is the receiver end? 20 ft? 20 miles?
Private network? If so, who is the provider? Public internet? If so, who is your ISP? Have you asked them for help? _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
exerk |
Posted: Fri Nov 16, 2018 1:49 pm Post subject: Re: Improving throughput of sender channel? |
 Jedi Council
Joined: 02 Nov 2006 Posts: 6339
zpat wrote: |
Am I right to assume we can't run two sender channels (at the same time) pulling from the same xmit queue due to the way channel triggering works? |
Unless it's a CLUSSDR, the XMITQ is opened for input exclusively by the channel. _________________ It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys. |
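So to run two senders in parallel you need a second transmission queue and channel pair, plus some way of splitting traffic between the two XMITQs. A minimal MQSC sketch, with made-up object names (adjust to your own standards):

```mqsc
* Hypothetical names - for illustration only.
* Second transmission queue, triggered to start the second channel:
DEFINE QLOCAL('PARTNER.XMITQ.2') USAGE(XMITQ) +
       TRIGGER TRIGTYPE(FIRST) TRIGDATA('TO.PARTNER.2') +
       INITQ('SYSTEM.CHANNEL.INITQ')
* Second sender channel, reading only from its own XMITQ:
DEFINE CHANNEL('TO.PARTNER.2') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('partner.example.com(1414)') +
       XMITQ('PARTNER.XMITQ.2') BATCHSZ(50) NPMSPEED(FAST)
```

The partner end needs a matching RCVR channel, and the split of traffic across the two XMITQs is up to you (e.g. separate remote queue definitions resolving to each channel).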
RogerLacroix |
Posted: Fri Nov 16, 2018 3:53 pm Post subject: |
 Jedi Knight
Joined: 15 May 2001 Posts: 3264 Location: London, ON Canada
Why don't you turn on channel compression? You could even add channel header compression. And do it in both directions.
Regards,
Roger Lacroix
Capitalware Inc. _________________ Capitalware: Transforming tomorrow into today.
Connected to MQ!
Twitter |
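For reference, compression is negotiated at channel start-up, so both ends have to permit a common value. Something along these lines (the channel name is hypothetical):

```mqsc
* Sender end:
ALTER CHANNEL('TO.PARTNER') CHLTYPE(SDR) +
      COMPMSG(ZLIBFAST) COMPHDR(SYSTEM)
* Matching receiver definition at the partner end:
ALTER CHANNEL('TO.PARTNER') CHLTYPE(RCVR) +
      COMPMSG(ZLIBFAST) COMPHDR(SYSTEM)
```

COMPMSG and COMPHDR accept lists of values; the channel uses the first one both ends support.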
zpat |
Posted: Sat Nov 17, 2018 12:05 am Post subject: |
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
It's a 100 Mbps private circuit.
It's reaching message rate capacity due to growth in volumes.
BATCHINT is zero, BATCHSZ is 50.
I did think about compression but we are so far from using all the bandwidth that it seems pointless and may increase processing time in MQ.
NETTIME on channel is about 27k microseconds. XQTIME is about 150k microseconds. _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error. |
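As a rough sanity check on those numbers - assuming, purely as a simplification, that the sender pays one NETTIME round trip per batch of BATCHSZ messages (FAST channels with non-persistent messages don't strictly work that way):

```python
batchsz = 50                  # channel BATCHSZ
nettime_s = 27_000 / 1e6      # NETTIME ~27k microseconds, in seconds
xqtime_s = 150_000 / 1e6      # XQTIME ~150k microseconds, in seconds

# If every batch costs one network round trip, the per-channel
# ceiling is roughly BATCHSZ / NETTIME:
ceiling = batchsz / nettime_s
print(f"~{ceiling:.0f} msg/s per channel ceiling")

# XQTIME is several times NETTIME: messages wait on the XMITQ far
# longer than a network round trip takes, which points at the channel,
# not the network, as the likely bottleneck.
print(f"XQTIME/NETTIME ratio: {xqtime_s / nettime_s:.1f}")
```

On that crude model a single channel tops out somewhere under 2000 msg/s, and the XQTIME/NETTIME ratio suggests the channel itself is the constraint.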
bruce2359 |
Posted: Sat Nov 17, 2018 5:27 am Post subject: |
 Poobah
Joined: 05 Jan 2008 Posts: 9470 Location: US: west coast, almost. Otherwise, enroute.
zpat wrote: |
It's a 100 Mbps private circuit. |
What does the circuit provider say about this issue? What do their network analysts tell you? _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
tczielke |
Posted: Sat Nov 17, 2018 6:42 am Post subject: |
Guardian
Joined: 08 Jul 2010 Posts: 941 Location: Illinois, USA
This is from MP16 - "Capacity Planning and Tuning for IBM MQ for z/OS" - in case it helps:
Quote: |
Tuning channels - BATCHSZ, BATCHINT, and NPMSPEED
To get the best from your system you need to understand the channel attributes BATCHSZ,
BATCHINT and NPMSPEED, and the difference between the batch size specified in the BATCHSZ
attribute, and the achieved batch size. The following settings give good defaults for several scenarios:
1. For a synchronous request/reply model with a low message rate per channel (tens of messages
per second or less), where there might be persistent messages, and a fast response is needed
specify BATCHSZ(1) BATCHINT(0) NPMSPEED(FAST).
2. For a synchronous request/reply model with a low message rate per channel (tens of messages
per second or less), where there are only non-persistent messages, specify BATCHSZ(50)
BATCHINT(10000) NPMSPEED(FAST).
3. For a synchronous request/reply model with a low message rate per channel (tens of messages
per second or less), where there might be persistent messages and a short delay of up to 100
milliseconds can be tolerated specify BATCHSZ(50) BATCHINT(100) NPMSPEED(FAST).
4. For bulk transfer of a pre-loaded queue specify BATCHSZ(50) BATCHINT(0) NPMSPEED(FAST).
5. If you have trickle transfer for deferred processing, (the messages are typically persistent)
specify BATCHSZ(50) BATCHINT(5000) NPMSPEED(FAST).
6. If you are using large messages, over 100000 bytes you should use a smaller batch size such as
10, and if you are processing very large messages such as 1 MB, you should use a BATCHSZ(1).
7. For messages under 5000 bytes, if you can achieve a batch size of 4 messages per batch then
the throughput can be twice, and the cost per message half that of a batch size of 1.
|
_________________ Working with MQ since 2010. |
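The achieved batch size that MP16 distinguishes from the BATCHSZ attribute can be estimated from the channel status counters. A quick illustration, with made-up numbers:

```python
# DISPLAY CHSTATUS reports cumulative MSGS and BATCHES for the current
# channel instance; their ratio is the achieved batch size.
msgs = 20_000      # illustrative values, not from this thread
batches = 5_000

achieved = msgs / batches
print(f"achieved batch size: {achieved:.1f} msgs/batch")

# Per MP16's point 7: for small messages, getting from 1 to ~4
# msgs/batch can roughly halve the per-message cost.
```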
gbaddeley |
Posted: Sun Nov 18, 2018 3:29 pm Post subject: Re: Improving throughput of sender channel? |
 Jedi Knight
Joined: 25 Mar 2003 Posts: 2538 Location: Melbourne, Australia
zpat wrote: |
Assuming we have a sender channel to an external party over TCP/IP - both ends are z/OS MQ (one v7.1, one v8.0).
Most messages are around 2 KB and non-persistent. The xmit queue is held in a coupling facility for QSG failover reasons.
We seem to be reaching a maximum message throughput even though network bandwidth utilisation is no more than 30%.
What's the best way to improve throughput - for example |
Hi zpat,
Determine what resource is constraining the throughput....
1. What is the message volume? (msg/sec)
2. Are you seeing queueing on the transmission queue?
3. What range of depths?
From the info you have provided, 1000 msgs/sec capacity should not be beyond reach. _________________ Glenn |
zpat |
Posted: Sat Nov 24, 2018 4:46 am Post subject: |
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
I think the volume is about 18 million per day, but not evenly spread out.
Xmit queue depth does sometimes spike to around 100.
What I was really getting at - will two sender channels sharing the same load achieve a higher throughput than one sender channel (over the same link)? _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error. |
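For scale, 18 million a day is a fairly modest average rate - the real question is how far above it the peaks sit:

```python
msgs_per_day = 18_000_000
seconds_per_day = 24 * 60 * 60   # 86,400

avg_rate = msgs_per_day / seconds_per_day
print(f"~{avg_rate:.0f} msg/s average over the day")
```

Even if peaks run at, say, 5x the average, that is still only around 1000 msg/s.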
bruce2359 |
Posted: Sat Nov 24, 2018 6:13 am Post subject: |
 Poobah
Joined: 05 Jan 2008 Posts: 9470 Location: US: west coast, almost. Otherwise, enroute.
zpat wrote: |
What I was really getting at - will two sender channels sharing the same load achieve a higher throughput than one sender channel (over the same link)? |
You haven’t determined where the bottleneck exists, so we can’t say for sure.
Are you missing SLAs?
What do SMF and RMF reports indicate? Your MVS sysprogs should be able to help. What do they tell you?
Are the z hardware platforms similarly provisioned? I've seen this behavio(u)r when a z/OS image is sending to a much smaller Win/UNIX midrange platform - not enough horsepower at the receiver to keep up with the sender.
What do the circuit provider folks tell you? Excessive network collisions or other delays? What other traffic on this circuit? Like video conferencing? Image mirroring? _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live. |
fjb_saper |
Posted: Sat Nov 24, 2018 9:11 pm Post subject: |
 Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
zpat wrote: |
I think the volume is about 18 million per day, but not evenly spread out.
Xmit queue depth does sometimes spike to around 100.
What I was really getting at - will two sender channels sharing the same load achieve a higher throughput than one sender channel (over the same link)? |
A running XMITQ spiking at 100 msgs is trivially shallow. Assuming your batches are 50 msgs per batch, that means you never have more than 2 batches on the xmitq at any time (including delays for commits on puts). Now, depending on the size of those batches and the network latency, does that make you miss any SLAs? If not, you're fine.  _________________ MQ & Broker admin |
gbaddeley |
Posted: Sun Nov 25, 2018 2:44 pm Post subject: |
 Jedi Knight
Joined: 25 Mar 2003 Posts: 2538 Location: Melbourne, Australia
zpat wrote: |
I think the volume is about 18 million per day, but not evenly spread out. Xmit queue depth does sometimes spike to around 100. |
At a peak time, what is the average throughput?
(look at the MSGS and BYTSSENT attributes of channel status, say at 5-minute intervals, and do a calculation)
E.g. BYTSSENT(9616) MSGS(15) _________________ Glenn |
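In other words, take two snapshots and diff the counters. Sketched in Python with made-up sample values (MSGS and BYTSSENT are cumulative for the current channel instance, so only diff samples taken from the same instance):

```python
interval_s = 5 * 60   # 5-minute sampling interval

# Two DISPLAY CHSTATUS snapshots (illustrative numbers):
s1 = {"MSGS": 120_000, "BYTSSENT": 260_000_000}
s2 = {"MSGS": 180_000, "BYTSSENT": 390_000_000}

msg_rate = (s2["MSGS"] - s1["MSGS"]) / interval_s
byte_rate = (s2["BYTSSENT"] - s1["BYTSSENT"]) / interval_s
print(f"{msg_rate:.0f} msg/s, {byte_rate / 1e6:.2f} MB/s")
```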