MQSeries.net Forum Index » General IBM MQ Support » Effect of shared conversation on CHIN/MCA overhead

zpat
PostPosted: Mon Nov 16, 2020 5:49 am    Post subject: Effect of shared conversation on CHIN/MCA overhead

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

Let's assume we have an application that connects to MQ for every message it puts or gets (request/reply), then disconnects.

I know this is not best practice, but what I would like to know is how much overhead is saved by using shared conversations?

Looking at SupportPac MP16, it suggests that MQ clients connecting directly to z/OS and sending one message per connect/disconnect incur about 5 times as much CPU overhead on the CHIN (per message) as those that send 50 messages per connection.
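As a back-of-envelope illustration of that ratio (a sketch only: the connect and message costs below are invented units, not MP16's measured figures; only the roughly-5x outcome echoes the figure quoted above):

```python
# Back-of-envelope model of the per-message CHIN CPU cost. The absolute
# cost units are invented for illustration; only the shape of the
# formula (fixed connect cost amortised over messages) is the point.
CONNECT_COST = 400   # hypothetical cost of MQCONN/MQDISC per connection
MSG_COST = 100       # hypothetical cost of one MQPUT/MQGET request/reply

def cpu_per_message(messages_per_connection):
    """Average cost per message when the connect cost is amortised
    over `messages_per_connection` messages."""
    return (CONNECT_COST + MSG_COST * messages_per_connection) / messages_per_connection

one_shot = cpu_per_message(1)    # 500.0 - connect cost dominates
batched = cpu_per_message(50)    # 108.0 - connect cost amortised away
print(round(one_shot / batched, 1))  # prints 4.6, close to the quoted 5x
```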

How much would using shared conversations reduce that? Presumably each conversation still has to go through security checking?

The QM is MQ 9.0.0 on z/OS; the client is the unmanaged .NET client, version 7.5.09.

Channel has SHARECNV set to 10.
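For reference, a hedged MQSC sketch of such a channel definition (the channel name APP1.SVRCONN is an example only, not the real one):

```mqsc
* Hypothetical SVRCONN definition - the channel name is an example.
* SHARECNV(10) lets up to 10 client conversations share one TCP socket.
DEFINE CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) SHARECNV(10)

* Display the current setting:
DIS CHANNEL(APP1.SVRCONN) SHARECNV
```

Note that the effective value is negotiated down to the lower of the client's and server's limits at channel start.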
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
fjb_saper
PostPosted: Mon Nov 16, 2020 9:30 am    Post subject:

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20696
Location: LI,NY

You have to reflect on what the application is doing.

Think about it this way: if yours is a putting application and it puts only sparsely, sharing conversations should not be a problem.

Now imagine yours is a getting application and there is not much time between messages; a shared conversation should be possible there too.

Assuming the wait on the shared socket is not significant across the connected threads for your application, the main point of shared conversations is to reduce the number of connections counted against MAXINSTC and MAXINST.
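That instance-counting point can be sketched as simple arithmetic (a sketch, assuming conversations pack fully onto each channel instance up to the negotiated SHARECNV limit, which is the client's default behaviour):

```python
import math

def channel_instances(conversations, sharecnv):
    """Channel instances (sockets) needed for a number of concurrent
    conversations, assuming they pack fully up to the SHARECNV limit."""
    if sharecnv <= 1:   # SHARECNV(0) or (1): one conversation per instance
        return conversations
    return math.ceil(conversations / sharecnv)

# 50 concurrent conversations with SHARECNV(10) count as only 5 instances
# against MAXINST / MAXINSTC; with SHARECNV(1) they would count as 50.
print(channel_instances(50, 10))   # 5
print(channel_instances(50, 1))    # 50
```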
_________________
MQ & Broker admin
hughson
PostPosted: Tue Nov 17, 2020 12:35 am    Post subject:

Padawan

Joined: 09 May 2013
Posts: 1914
Location: Bay of Plenty, New Zealand

Remember that your total connection cost is going to be very heavily affected by the connection setup cost. You don't tell us enough to determine quite how costly that is, but it could include TLS handshakes, exits and network costs.

The one put and one get will be a small percentage of the overall cost.

Sharing conversations allows performance enhancements like Read Ahead on get to be utilised, but if you only get one message and then disconnect, there is no benefit there.

Sharing conversations allows heartbeating to occur from either end of the channel, but your connection doesn't sound like it hangs around long enough for a heartbeat to even flow.

Sharing conversations could also allow multiple connections (from a single application process) to utilise the same socket and so exclude the cost of the socket connection and TLS handshakes from 2nd and subsequent MQCONN calls. Authentication and Authorisation checks still need to take place. Are your applications running in the same process so that they can take advantage of using a pre-existing TCP/IP socket up to the queue manager?
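A rough way to picture that split (the cost units below are invented for illustration; the point is only that socket setup is paid once per socket, while the authentication and authorisation checks are paid by every conversation):

```python
import math

# Illustrative cost model: TCP/TLS setup is paid once per socket, while
# auth checks are paid per conversation and cannot be shared.
# The numbers are invented units, not measurements.
TCP_TLS_SETUP = 50   # per socket: TCP connect + TLS handshake
AUTH_CHECK = 5       # per conversation: authentication/authorisation

def setup_cost(conversations, sharecnv):
    """Total setup cost when connections from one process share
    sockets up to the SHARECNV limit."""
    sockets = math.ceil(conversations / sharecnv)
    return sockets * TCP_TLS_SETUP + conversations * AUTH_CHECK

print(setup_cost(10, 10))  # 1 socket:   50 + 10*5 = 100
print(setup_cost(10, 1))   # 10 sockets: 500 + 10*5 = 550
```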

Cheers,
Morag
_________________
Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software
zpat
PostPosted: Tue Nov 17, 2020 7:43 am    Post subject:

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

TLS 1.2 cipher is used. Network is internal so pretty fast. No exits.

I have observed 5 different connections with 10 conversations allowed (and active) per connection.

So it is re-using the TCP connection as expected. But clearly it will repeat the authorisation checks.

RACF will cache recently used profiles in storage to avoid I/O at least.

What's the best way to measure the effect on CPU - just look at the CHIN CPU or MSTR or both?

It's hard to distinguish one channel's effect from the rest when the QM is busy with multiple applications. I may be able to get some dedicated time.
elkinsc
PostPosted: Tue Nov 17, 2020 8:52 am    Post subject: Look at the CHIN

Centurion

Joined: 29 Dec 2004
Posts: 138
Location: Indy

Unfortunately the channel accounting data does not include the CPU costs for the MQ verbs that are issued, which would be a real help (and has been requested).
Getting dedicated time, defining the tests, and measuring the CPU use in the CHIN is really the only way to get estimates of how much this behavior costs. Most people try it in a test or QA environment, then extrapolate those costs to production; that works well enough when the environments are similar.
Another thing you should be looking at is the channel initiator statistics, especially if your queue managers have been in production for a while. If there are too few dispatcher or adapter tasks allocated, that can cause bottlenecks when you add direct client connections.
Good Luck!
gbaddeley
PostPosted: Tue Nov 17, 2020 2:46 pm    Post subject:

Jedi

Joined: 25 Mar 2003
Posts: 2492
Location: Melbourne, Australia

zpat wrote:
What's the best way to measure the effect on CPU - just look at the CHIN CPU or MSTR or both?

It's hard to measure the CPU effect of connect/disconnect per message. It would mainly show up in the CHIN, which handles the MQI and TCP interactions for clients. A connect from the CHIN to the MSTR would be fast and efficient.

If the client app were changed to connect at start-up and do transactional puts/gets under long-running connection handle(s)/object(s), the MQ elapsed times would drop significantly (by 5 - 10 times?) and it would provide increased capacity.
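The difference between the two patterns can be sketched with a toy connection class (FakeClient is invented purely to count connects; a real application would use the MQ client API):

```python
# Hypothetical stand-in for an MQ client connection, invented purely to
# count how many expensive connects each pattern performs.
class FakeClient:
    handshakes = 0

    def __init__(self):
        FakeClient.handshakes += 1   # stands in for TCP + TLS + MQCONN

    def put_and_get(self, msg):
        return msg.upper()           # stands in for the request/reply pair

    def disconnect(self):
        pass

messages = ["m%d" % i for i in range(100)]

# Pattern 1: connect/disconnect per message -> 100 connects
for m in messages:
    c = FakeClient()
    c.put_and_get(m)
    c.disconnect()
per_message = FakeClient.handshakes

# Pattern 2: one long-running connection -> 1 more connect
c = FakeClient()
for m in messages:
    c.put_and_get(m)
c.disconnect()
print(per_message, FakeClient.handshakes - per_message)  # prints: 100 1
```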
_________________
Glenn
bruce2359
PostPosted: Tue Nov 17, 2020 5:27 pm    Post subject:

Poobah

Joined: 05 Jan 2008
Posts: 9394
Location: US: west coast, almost. Otherwise, enroute.

Good theories all. But are you missing SLAs, or seeing some other symptoms?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
zpat
PostPosted: Wed Nov 18, 2020 11:18 pm    Post subject:

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

So the question really is: does using shared conversations mitigate the increased overhead (e.g. 5 times the CPU) of making a fresh connection for each message, for an MQ client application connecting to a z/OS QM?

The answer seems to be: only partly. It saves the network and SSL handshake overhead, but not a lot else. Actual measurement will be needed.
hughson
PostPosted: Thu Nov 19, 2020 12:44 am    Post subject:

Padawan

Joined: 09 May 2013
Posts: 1914
Location: Bay of Plenty, New Zealand

zpat wrote:
So the question really is: does using shared conversations mitigate the increased overhead (e.g. 5 times the CPU) of making a fresh connection for each message, for an MQ client application connecting to a z/OS QM?

The answer seems to be: only partly. It saves the network and SSL handshake overhead, but not a lot else. Actual measurement will be needed.


That is a good summary, yes.

Although, in my limited experience, the network and SSL handshake overhead is a hefty part of the cost of the connection. You should indeed measure, but I imagine you will find that the "not a lot else" is not so much to worry about.

Cheers,
Morag
zpat
PostPosted: Thu Nov 19, 2020 3:53 am    Post subject:

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

This blog is interesting (The cost of connecting to a z/OS queue manager)


https://community.ibm.com/community/user/imwuc/viewdocument/the-cost-of-connecting-to-a-zos-qu?CommunityKey=b382f2ab-42f1-4932-aa8b-8786ca722d55

It mentions that using a shared listener (I presume that is the same thing as a QSG group listener) doubles the CPU overhead of a connect, measured across the CHIN, MSTR and DB2 address spaces.
bruce2359
PostPosted: Thu Nov 19, 2020 4:27 am    Post subject:

Poobah

Joined: 05 Jan 2008
Posts: 9394
Location: US: west coast, almost. Otherwise, enroute.

zpat wrote:
This blog is interesting (The cost of connecting to a z/OS queue manager)

https://community.ibm.com/community/user/imwuc/viewdocument/the-cost-of-connecting-to-a-zos-qu?CommunityKey=b382f2ab-42f1-4932-aa8b-8786ca722d55

It mentions that using a shared listener (I presume that is the same thing as a QSG group listener) doubles the CPU overhead of a connect, measured across the CHIN, MSTR and DB2 address spaces.

As usual for me, I asked about the problem or issue that underlies the OP's question. Is this a performance tuning issue? If so, shared conversations are not low-hanging fruit.

In any good compare/contrast, one needs to look at benefits, not solely costs. It is an interesting article, but it only looks at CPU utilization, not literally the 'cost' of that CPU utilization - which is a licensing issue.

A well-provisioned z15 box can have 190 configurable processors, with 22 devoted to I/O, so lots of CPUs and CPU-time available and lots of concurrent I/O. In Parallel Sysplex, up to 32 z/OS instances can share a Shared Queue.

For the casual reader, it is the DB2 data-sharing layer of software that is involved in Shared Queues, and not a DB2 data base.

Back to my question: what is the problem or issue? All three corners of the triad (cost, time, quality) need to be considered - not just cost - and you only get to have two of them. If you are looking for concurrency for millions of transactions per second, z is a good choice.
zpat
PostPosted: Thu Nov 19, 2020 4:44 am    Post subject:

Jedi Council

Joined: 19 May 2001
Posts: 5849
Location: UK

The issue is that for various reasons the developers of an application (already in production) want to change their connection model from one connect - many messages, to one connect - one message.

I am trying to persuade them not to, which would necessitate a re-design on their part. As always, developers want to do as little work as possible - but they don't pay the CPU costs.

Conserving CPU is always desirable to delay hardware upgrades etc.; also, their volumes might increase, and they might re-use the less desirable connection model for other applications.

At present their connection direct to z/OS MQ is a given, although moving to local QMs is possible (license costs would be increased).
gbaddeley
PostPosted: Thu Nov 19, 2020 2:41 pm    Post subject:

Jedi

Joined: 25 Mar 2003
Posts: 2492
Location: Melbourne, Australia

zpat wrote:
The issue is that for various reasons the developers of an application (already in production) want to change their connection model from one connect - many messages, to one connect - one message.
I am trying to persuade them not to - which would necessitate a re-design on their part. As always developers want to do as little work as possible - but they don't pay the CPU costs.

From a design and efficiency perspective they are making a questionable change; in technical, cost and capacity terms there is no valid justification for it.
Tell the development manager before it's too late, and get agreement that they assume full responsibility for the risk and consequences.
RogerLacroix
PostPosted: Tue Nov 24, 2020 3:17 pm    Post subject:

Jedi Knight

Joined: 15 May 2001
Posts: 3252
Location: London, ON Canada

You should have a read of a blog posting I did last year:
https://www.capitalware.com/rl_blog/?p=5362

Regards,
Roger Lacroix
Capitalware Inc.
_________________
Capitalware: Transforming tomorrow into today.
Connected to MQ!
Twitter
bruce2359
PostPosted: Wed Nov 25, 2020 4:20 pm    Post subject:

Poobah

Joined: 05 Jan 2008
Posts: 9394
Location: US: west coast, almost. Otherwise, enroute.

I’m curious: What values do you have for CHIN DISPATCHERS and ADAPTERS?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.