Relative overhead of the various MQI calls?
zpat |
Posted: Tue Jun 20, 2023 6:41 am    Post subject: Relative overhead of the various MQI calls?
Jedi Council
Joined: 19 May 2001   Posts: 5866   Location: UK
A regular issue with our developers or vendors is that they don't appreciate the impact of poor coding practice. All they generally understand is HTTP, which is a connectionless protocol (good for losing your data, as someone at Hursley once said!).
So we see bad coding practices such as:
- Connecting to MQ for every message (or every request/reply pair) instead of reusing a connection handle.
- Opening/closing a queue for each message instead of reusing an open handle.
This applies whether they are using the MQI (unlikely these days), JMS, or more likely some Spring Boot sample they have come across. Invoking Spring Boot for each message (as a microservice) is a case in point.
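To make the difference concrete, here is a minimal JMS sketch of the two patterns (host, channel, queue manager and queue names are illustrative, not from any real environment):

Code:
import javax.jms.*;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ConnReuse {
    public static void main(String[] args) throws JMSException {
        JmsConnectionFactory cf = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER)
                                                   .createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mqhost");       // illustrative
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN");    // illustrative
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");      // illustrative
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        // Anti-pattern: MQCONN + MQOPEN + MQPUT + MQCLOSE + MQDISC per message
        for (int i = 0; i < 100; i++) {
            try (Connection c = cf.createConnection()) {                  // MQCONN
                Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer p = s.createProducer(s.createQueue("APP.Q")); // MQOPEN
                p.send(s.createTextMessage("msg " + i));                  // MQPUT
            }                                                             // MQCLOSE + MQDISC
        }

        // Reuse: connect and open once, then it is one MQPUT per message
        try (Connection c = cf.createConnection()) {                      // one MQCONN
            Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer p = s.createProducer(s.createQueue("APP.Q")); // one MQOPEN
            for (int i = 0; i < 100; i++) {
                p.send(s.createTextMessage("msg " + i));                  // MQPUT only
            }
        }
    }
}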
I try to tell them that no matter what they wrap it in, MQCONN has a relatively high overhead. MQOPEN is also high, but not as high.
Are there any figures for these relative overhead (e.g. CPU) impacts (on a remote QM, ideally z/OS)?
I would guess that if an MQGET/MQPUT has a relative overhead value of one, then MQOPEN might be around 10 and MQCONN might be around 100.
Any IBM published figures?
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
gbaddeley |
Posted: Tue Jun 20, 2023 5:11 pm    Post subject:
Jedi Knight
Joined: 25 Mar 2003   Posts: 2538   Location: Melbourne, Australia
There are relative overheads for these MQI operations, but comparative performance depends on resource constraints such as network, CPU, disk I/O, and the OS.
I don't think the ratio is as wide as 1:10:100 for an MQ client across a network; it's more like 1:5:10 (at a guess).
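If you want a number for your own environment rather than a guess, a rough timing sketch along these lines will show the ratio (the connection factory is assumed to be configured elsewhere, the queue name is illustrative, and one-shot timings are noisy, so loop and average):

Code:
import javax.jms.*;

public class MqiRatio {
    // 'cf' must be a configured IBM MQ ConnectionFactory (client bindings);
    // run this many times and average, as single-shot timings are noisy.
    static void measure(ConnectionFactory cf) throws JMSException {
        long t0 = System.nanoTime();
        Connection c = cf.createConnection();                          // ~MQCONN
        Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
        long t1 = System.nanoTime();
        MessageProducer p = s.createProducer(s.createQueue("APP.Q"));  // ~MQOPEN
        long t2 = System.nanoTime();
        p.send(s.createTextMessage("ping"));                           // ~MQPUT
        long t3 = System.nanoTime();
        System.out.printf("connect %dus open %dus put %dus%n",
                (t1 - t0) / 1000, (t2 - t1) / 1000, (t3 - t2) / 1000);
        c.close();                                                     // MQCLOSE + MQDISC
    }
}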
Connect / Open / Close / Disconnect for every message may be quite tolerable in low-volume situations, e.g. <10 msgs/sec for smallish messages, but it may not scale up. There will be a plateau in performance when a resource reaches its limit. It's best to implement a solid design from the outset, and do performance / stress testing.
_________________
Glenn
hughson |
Posted: Tue Jun 20, 2023 8:44 pm    Post subject:
Padawan
Joined: 09 May 2013   Posts: 1959   Location: Bay of Plenty, New Zealand
Also remember the cost of MQCONN will vary depending on many things, such as TLS usage, exits, compression, etc.
I have seen some mention of folks doing a single connection from a framework and keeping it open so that all subsequent connections share the same pre-existing socket, but that is working around the problem rather than addressing it. And that doesn't help for repeated MQOPEN calls of course.
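For reference, that single-shared-connection pattern looks something like this in JMS. Whether the extra sessions really share one socket depends on the SVRCONN channel's SHARECNV setting; the sketch below is illustrative only:

Code:
import javax.jms.*;

public class SharedSocket {
    // One long-lived Connection; each Session is a separate conversation that
    // can share the existing TCP socket when the channel has SHARECNV > 1.
    static void run(ConnectionFactory cf) throws JMSException {
        Connection shared = cf.createConnection();  // one MQCONN, one socket
        Session s1 = shared.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Session s2 = shared.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... hand s1/s2 to worker threads; note that every createProducer or
        // createConsumer is still an MQOPEN, so this pattern does nothing for
        // repeated open/close costs.
        shared.close();
    }
}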
Cheers,
Morag
_________________
Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software
zpat |
Posted: Wed Jun 21, 2023 6:42 am    Post subject:
Jedi Council
Joined: 19 May 2001   Posts: 5866   Location: UK
Thanks. MP16 has some info on this.
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
Andyh |
Posted: Thu Jun 22, 2023 10:51 am    Post subject:
Master
Joined: 29 Jul 2010   Posts: 239
It's not clear from the original post if the question relates to a particular platform.
IBM stopped measuring individual API calls on the distributed platforms many years (decades!) ago. This was primarily a human resource decision, allowing the limited (even then) performance resources to focus on workloads rather than individual API calls. When the distributed MQ code was first released, uniprocessors were the norm and simplistic measurements made more sense. Nowadays it's normal to have far more processors, and the focus has been more towards allowing MQ to exploit the available resources more fully. For example, prior to MQ V8, MQOPEN was quite heavily serialized because the internal index of MQ objects was protected by a single coarse lock. In MQ V8 the index was restructured to allow a fine-grained locking mechanism, thus allowing multiple MQOPENs to progress concurrently.
Similarly, in MQ V9 changes were made to allow more concurrency in MQCONN and much higher rates of client connect/disconnect activity.
On the distributed platforms it's certainly still true that MQCONN/MQDISC is MUCH more expensive than MQOPEN/MQCLOSE/MQGET/MQPUT. The cost of each API call will vary very significantly with the options; for example, an MQOPEN of a dynamic queue implies creating a 'new' queue. MQ manages pools of reusable queue infrastructures, so the cost of creating a new queue would in turn depend heavily on whether the appropriate pool was empty. An MQPUT or an MQGET would vary very significantly with whether the message is persistent and with the transactionality. In the case of a persistent operation the I/O latency of the file system hosting the recovery log would be critical (although this should scale well with multiple concurrent persistent operations). All of the API calls would vary significantly in cost with the type of the connection, for example local vs client bindings, and in the case of client bindings the MQIBindType option of the listener and the level of encryption.
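In JMS terms, the persistence and transactionality levers just mentioned map onto settings like the following (a sketch only; the queue name is illustrative):

Code:
import javax.jms.*;

public class PutCostKnobs {
    static void send(Connection c, String body) throws JMSException {
        // A transacted session puts under syncpoint; commit() drives an MQCMIT.
        Session s = c.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer p = s.createProducer(s.createQueue("APP.Q"));
        // NON_PERSISTENT messages are not forced to the recovery log, so the
        // per-put cost is not dominated by the log's I/O latency.
        p.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        p.send(s.createTextMessage(body));
        s.commit();
    }
}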
Looking at the MQI statistics makes it very easy to see the ratio of actual messaging operations (MQPUT/MQGET) to other MQI calls, and this would typically be used as early input into a performance analysis of any distributed MQ system.
As with almost all performance analysis, having a benchmark which is a good simulation of the target environment is essential.
zpat |
Posted: Thu Jun 22, 2023 11:18 am    Post subject:
Jedi Council
Joined: 19 May 2001   Posts: 5866   Location: UK
I mentioned z/OS, and also MP16, which is the MQ performance report for z/OS.
MQI behaviour and costs are a real concern when MQ client applications connect directly to the mainframe, as mainframe CPU is very expensive and shared with many other applications. Unless they are well written, of course. Sadly, z/OS MQ lacks the application activity trace (AAT) feature, which would make analysis of their MQI calls easy.
JMS in Spring Boot may be good or bad in this sense; some of the defaults are definitely more suited to having a local QM (bindings) on a distributed host rather than an MQ client connection to an expensive mainframe.
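The usual mitigation is to put a caching layer in front of the MQ connection factory so that Spring stops driving the connect/open cycle for every message. A minimal sketch (the cache size is illustrative):

Code:
import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class JmsConfigSketch {
    // 'mqCf' is the underlying IBM MQ ConnectionFactory, configured elsewhere.
    static JmsTemplate template(ConnectionFactory mqCf) {
        CachingConnectionFactory caching = new CachingConnectionFactory(mqCf);
        caching.setSessionCacheSize(10);  // cache sessions; the default is 1
        caching.setCacheProducers(true);  // keep producers, i.e. the MQOPEN handles
        return new JmsTemplate(caching);  // sends now reuse the connection and handles
    }
}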
I have seen some short IBM Technotes on tuning the Spring JMS defaults.
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.