zpat |
Posted: Mon Nov 16, 2020 5:49 am Post subject: Effect of shared conversation on CHIN/MCA overhead
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
Let's assume we have an application which connects to MQ for every message that it puts to or gets from its queues (request/reply), then disconnects.
I know this is not best practice, but what I would like to know is how much less overhead results from using shared conversations?
Looking at SupportPac MP16, it suggests that MQ clients connecting directly to z/OS and sending one message per connect/disconnect incur about 5 times as much CPU overhead on the CHIN (per message) as those that send 50 messages per connection.
How much would using shared conversations reduce that? Presumably each conversation still has to go through security checking?
The QM is z/OS 9.0.0; the client is the unmanaged .NET client, version 7.5.09.
The channel has SHARECNV set to 10. _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
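The MP16 ratio is what you get from amortising a fixed per-connection cost over the messages on that connection. A minimal sketch of the arithmetic (the cost units are made up for illustration, not MP16's actual measurements; only the ratio matters):

```python
# Back-of-envelope model: average CHIN CPU per message when one
# connection carries N messages. Cost units are hypothetical.

def chin_cpu_per_message(connect_cost, msg_cost, msgs_per_connection):
    """Per-connection cost amortised over N messages, plus per-message cost."""
    return (connect_cost + msg_cost * msgs_per_connection) / msgs_per_connection

# Hypothetical costs chosen so that 1 msg/connection comes out at roughly
# 5x the per-message cost of 50 msgs/connection, matching the MP16 ratio.
connect_cost = 4.5   # per MQCONN/MQDISC pair (hypothetical units)
msg_cost = 1.0       # per MQPUT or MQGET (hypothetical units)

one_per_conn = chin_cpu_per_message(connect_cost, msg_cost, 1)     # 5.5
fifty_per_conn = chin_cpu_per_message(connect_cost, msg_cost, 50)  # 1.09
print(round(one_per_conn / fifty_per_conn, 1))  # → 5.0
```

The point of the model: anything that removes part of the fixed connect cost (e.g. a shared socket) lowers the ratio, but the parts that still happen per connection (security checks) keep it well above 1.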
fjb_saper |
Posted: Mon Nov 16, 2020 9:30 am
 Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
You have to reflect on what the application is doing.
Think about it this way: imagine your application is a putting application and it puts only sparsely; shared conversations should not be a problem.
Now imagine your application is a getting application and there is not much time between messages; a shared conversation should still be possible.
Assuming that socket waits across the connected threads are not an issue for your application, the main point of shared conversations is to reduce the number of channel instances counted against MAXINSTC and MAXINST. _________________ MQ & Broker admin
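For example, a sketch of the attributes involved on the SVRCONN definition (channel name and limits here are hypothetical):

```
* Hypothetical SVRCONN definition: with SHARECNV(10), each TCP channel
* instance can carry up to 10 client conversations, so 50 concurrent
* client connections consume only 5 instances against the limits below.
DEFINE CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) +
       SHARECNV(10) MAXINST(20) MAXINSTC(5)
```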
hughson |
Posted: Tue Nov 17, 2020 12:35 am
 Padawan
Joined: 09 May 2013 Posts: 1959 Location: Bay of Plenty, New Zealand
Remember that the total cost of your connection is going to be very heavily affected by the connection cost. You don't tell us enough to determine quite how costly that is, but it could include TLS handshakes, exits, and network costs.
The one put and one get will be a small percentage of the overall cost.
Sharing conversations allows performance enhancements like Read Ahead on get to be utilised, but if you only get one message and then disconnect, there is no benefit there.
Sharing conversations allows heartbeating to occur from either end of the channel, but your connection doesn't sound like it hangs around long enough for a heartbeat to even flow.
Sharing conversations could also allow multiple connections (from a single application process) to use the same socket, and so avoid the cost of the socket connection and TLS handshake for second and subsequent MQCONN calls. Authentication and authorisation checks still need to take place. Are your applications running in the same process, so that they can take advantage of a pre-existing TCP/IP socket up to the queue manager?
Cheers,
Morag _________________ Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software |
zpat |
Posted: Tue Nov 17, 2020 7:43 am
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
A TLS 1.2 cipher is used. The network is internal, so pretty fast. No exits.
I have observed 5 different connections with 10 conversations allowed (and active) per connection.
So it is re-using the TCP connection as expected. But clearly it will repeat the authorisation checks.
RACF will cache recently used profiles in storage to avoid I/O at least.
What's the best way to measure the effect on CPU - just look at the CHIN CPU or MSTR or both?
It's hard to distinguish one channel's effect from the rest when the QM is busy with multiple applications. I may be able to get some dedicated time. _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
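One way to confirm the socket re-use is to display channel status and compare the number of instances with the number of conversations each is carrying (the channel name here is hypothetical):

```
* Each line of output is one channel instance (one socket). CURSHCNV
* shows the conversations it is currently carrying; MAXSHCNV shows the
* value negotiated at channel start.
DISPLAY CHSTATUS(APP1.SVRCONN) CURSHCNV MAXSHCNV
```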
elkinsc |
Posted: Tue Nov 17, 2020 8:52 am Post subject: Look at the CHIN
 Centurion
Joined: 29 Dec 2004 Posts: 138 Location: Indy
Unfortunately the channel accounting data does not include the CPU costs for the MQ verbs that are issued, which would be a real help (and has been requested).
Getting dedicated time, defining the tests, and measuring the CPU use in the CHIN is really the only way to get estimates of how much this behavior costs. Most people try it in a test or QA environment, then extrapolate those costs to production; that works well enough when the environments are similar.
Another thing you should be looking at is the channel initiator statistics, especially if your queue managers have been in production for a while. If there are too few dispatcher or adapter tasks allocated, that can cause bottlenecks when you add direct client connections.
Good Luck!
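On z/OS (MQ V8 and later, if I have the trace classes right), the channel initiator statistics and channel accounting data mentioned above are enabled with class 4 of the STAT and ACCTG traces:

```
* SMF 115 channel initiator statistics: dispatcher, adapter, SSL task
* and DNS task usage - useful for spotting too few dispatchers/adapters.
START TRACE(STAT) CLASS(4)
* SMF 116 per-channel accounting data:
START TRACE(ACCTG) CLASS(4)
```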
gbaddeley |
Posted: Tue Nov 17, 2020 2:46 pm
 Jedi Knight
Joined: 25 Mar 2003 Posts: 2538 Location: Melbourne, Australia
zpat wrote:
What's the best way to measure the effect on CPU - just look at the CHIN CPU or MSTR or both?
It's hard to measure the effect on CPU due to connect/disconnect per message. It would mainly show up in the CHIN, which handles the MQI and TCP interactions for clients. A connect from the CHIN to the MSTR would be fast and efficient.
If the client app was changed to connect at start-up, and do transactional puts/gets under long-running connection handle(s)/object(s), the MQ elapsed times would drop significantly (by 5 to 10 times?) and it would provide increased capacity. _________________ Glenn
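The capacity point can be sketched with a toy model; no real MQ calls here, FakeQueueManager is a made-up stand-in that only counts the expensive connect operations (each of which, in a real client, would be a TCP+TLS handshake plus authentication on the queue manager):

```python
# Toy comparison of the two connection patterns discussed in this thread.

class FakeQueueManager:
    """Stand-in for an MQ connection; counts connects, does no real I/O."""
    connects = 0

    def __init__(self):
        FakeQueueManager.connects += 1   # the expensive part

    def put_and_get_reply(self, msg):
        pass  # request/reply round trip elided

    def disconnect(self):
        pass

def connect_per_message(messages):
    for m in messages:
        qm = FakeQueueManager()          # fresh connection every message
        qm.put_and_get_reply(m)
        qm.disconnect()

def long_running_connection(messages):
    qm = FakeQueueManager()              # connect once at start-up
    for m in messages:
        qm.put_and_get_reply(m)
    qm.disconnect()

msgs = range(1000)

FakeQueueManager.connects = 0
connect_per_message(msgs)
per_message = FakeQueueManager.connects     # 1000 handshakes

FakeQueueManager.connects = 0
long_running_connection(msgs)
long_running = FakeQueueManager.connects    # 1 handshake

print(per_message, long_running)  # → 1000 1
```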
bruce2359 |
Posted: Tue Nov 17, 2020 5:27 pm
 Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.
Good theories, all. But are you missing SLAs, or seeing some other symptoms? _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
zpat |
Posted: Wed Nov 18, 2020 11:18 pm
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
So the question is really: does using shared conversations mitigate the increased overhead (e.g. 5 times the CPU) of making a fresh connection for each message, for an MQ client application connecting to a z/OS QM?
The answer seems to be: only partly. It saves the network and SSL handshake overhead, but not a lot else. Actual measurement will be needed. _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
hughson |
Posted: Thu Nov 19, 2020 12:44 am
 Padawan
Joined: 09 May 2013 Posts: 1959 Location: Bay of Plenty, New Zealand
zpat wrote:
So the question is really: does using shared conversations mitigate the increased overhead (e.g. 5 times the CPU) of making a fresh connection for each message, for an MQ client application connecting to a z/OS QM?
The answer seems to be: only partly. It saves the network and SSL handshake overhead, but not a lot else. Actual measurement will be needed.
That is a good summary yes.
Although, in my limited experience, network and SSL handshake overhead is a hefty part of the cost of the connection. You should indeed measure, but I imagine you will find that the "not a lot else" is not so much to worry about.
Cheers,
Morag _________________ Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software
bruce2359 |
Posted: Thu Nov 19, 2020 4:27 am
 Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.
As usual for me, I asked about the problem or issue that underlies the OP's question. Is this a performance-tuning issue? If so, shared conversations are not low-hanging fruit.
In any good compare/contrast, one needs to look at benefits, not solely costs. Interesting article, but it only looks at CPU utilization, and not literally the 'cost' of that CPU utilization - a licensing issue.
A well-provisioned z15 box can have 190 configurable processors, with 22 devoted to I/O, so lots of CPUs and CPU time available, and lots of concurrent I/O. In a Parallel Sysplex, up to 32 z/OS instances can share a shared queue.
For the casual reader, it is the DB2 data-sharing layer of software that is involved in shared queues, not a DB2 database.
Back to my question: what is the problem/issue? All three tips of the triad (cost, time, quality) need to be addressed - not just cost. You only get to have two of these. If you are looking for concurrency for millions of transactions per second, z is a good choice. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
zpat |
Posted: Thu Nov 19, 2020 4:44 am
 Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
The issue is that, for various reasons, the developers of an application (already in production) want to change their connection model from one connect / many messages to one connect / one message.
I am trying to persuade them not to - which would necessitate a re-design on their part. As always, developers want to do as little work as possible - but they don't pay the CPU costs.
Conserving CPU is always desirable, to delay hardware upgrades etc.; also, their volumes might increase, and they might re-use the less desirable connection model for other applications.
At present their connecting direct to z/OS MQ is a given, although moving to local QMs is possible (license costs would be increased). _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
gbaddeley |
Posted: Thu Nov 19, 2020 2:41 pm
 Jedi Knight
Joined: 25 Mar 2003 Posts: 2538 Location: Melbourne, Australia
zpat wrote:
The issue is that, for various reasons, the developers of an application (already in production) want to change their connection model from one connect / many messages to one connect / one message.
I am trying to persuade them not to - which would necessitate a re-design on their part. As always, developers want to do as little work as possible - but they don't pay the CPU costs.
From a design and efficiency perspective they are making a questionable change. In technical, cost, and capacity terms there is no valid justification for it.
Tell the development manager, before it's too late. Get agreement that they assume full responsibility for the risk and the consequences. _________________ Glenn
bruce2359 |
Posted: Wed Nov 25, 2020 4:20 pm
 Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.
I’m curious: What values do you have for CHIN DISPATCHERS and ADAPTERS? _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
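For reference, those values can be seen with the DISPLAY CHINIT command issued from the z/OS console (the +cpf command prefix is installation-specific); the counts themselves come from CHIDISPS and CHIADAPS in the CSQ6CHIP system parameters:

```
+cpf DISPLAY CHINIT
```

The response reports, among other things, how many dispatchers and adapter subtasks are started versus requested.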