Choice of using Standard Bindings vs Fastpath bindings
czech77
Posted: Sat May 07, 2022 7:49 pm. Post subject: Choice of using Standard Bindings vs Fastpath bindings
Newbie. Joined: 07 May 2022. Posts: 5
I have gone through over 20 posts on this forum about Standard vs Fastpath bindings and they were excellent, but there is one question I still need information on.

Our queue manager (currently 9.2.0.2, on an MQ Appliance) handles millions of messages a day, including some applications that move large files. We have Fastpath bindings configured, and all was fine until a recent P1 incident brought our queue manager down due to a huge pile-up of messages on SYSTEM.CLUSTER.TRANSMIT.QUEUE. On analysis, the root cause was identified as the amqrmppa process, and one of the suggestions that came out (apart from others, such as applying the latest fix pack, 9.2.0.5) was to use Standard bindings instead of the existing Fastpath bindings.

I understand there is some performance trade-off with Standard bindings, but I do not have a data point on what that trade-off would be. Is there any documentation, performance analysis report, or other metric that gives me a view of what it would look like? We need to make this decision quickly. I appreciate your inputs and suggestions.
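For anyone chasing the same symptom, the transmit-queue build-up described above can be watched from runmqsc with standard MQSC display commands (queue and channel names shown are the defaults; the root-cause analysis itself still needs the FDC/ProbeId data):

```
DISPLAY QLOCAL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) CURDEPTH MAXDEPTH
DISPLAY CHSTATUS(*) STATUS MSGS XQTIME
```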
bruce2359
Posted: Sun May 08, 2022 7:02 am
Poobah. Joined: 05 Jan 2008. Posts: 9469. Location: US: west coast, almost. Otherwise, enroute.
You’ve omitted the ProbeId, error message, and APAR number, which might help us here. amqrmppa is the receiver-side MCA (Message Channel Agent).

Way back in the old and slow single- or dual-processor days, running MQ channels as trusted could improve performance by a few percentage points. The well-documented risks and restrictions still apply today; in today's blisteringly fast multi-processor environments the performance benefits have substantially diminished, while the risks remain. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
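For context, on software (non-Appliance) distributed queue managers, running channels as trusted is controlled by the Channels stanza of qm.ini; a minimal sketch, assuming the usual stanza layout (the Appliance does not expose qm.ini for direct editing, so this is illustrative only):

```
Channels:
   # MQIBindType controls whether channel MCAs run trusted (FASTPATH)
   # inside the queue manager's address space, or as STANDARD bindings.
   MQIBindType=FASTPATH
```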
bruce2359
Posted: Sun May 08, 2022 7:59 am
Poobah. Joined: 05 Jan 2008. Posts: 9469. Location: US: west coast, almost. Otherwise, enroute.
Moved to Installation/Configuration Support forum.
hughson
Posted: Sun May 08, 2022 10:03 pm
Padawan. Joined: 09 May 2013. Posts: 1959. Location: Bay of Plenty, New Zealand
As I recall, the question of whether one should use FASTPATH bindings was always about whether you could trust the code running in the same process as the queue manager agent. Since you are on the MQ Appliance, you cannot have any user code in the system, i.e. no channel exits, so there would seem to be nothing in the picture that isn't trustworthy IBM code. That would normally make it an excellent candidate for FASTPATH.

Who suggested that you stop using FASTPATH, and what was their reasoning?

Cheers,
Morag _________________ Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software
czech77
Posted: Mon May 09, 2022 12:11 am
Newbie. Joined: 07 May 2022. Posts: 5
hughson wrote:
Who was it that suggested to you to stop using FASTPATH and what was their reasoning?
Cheers,
Morag
What you describe absolutely makes sense: the choice of bindings shouldn't be a concern for MQ on the Appliance. I'm surprised, then, that this suggestion came from an IBM Distinguished Engineer (DE) at Hursley Labs; his exact words are quoted below:

------ Reply from IBM Labs ---------
Yes, in summary, the MQ Labs are strongly advising you move to the latest FixPack (9.2.0.5) to pick up the identified APAR IT36729 as well as a number of others in the same area. This should mean you do not encounter the same Memory Error which caused the amqrmppa process to Fail.

The recommendation to move to Standard Bindings over Fastpath Bindings is to restrict the potential impact of any future issues in the amqrmppa process to that process only, and not create the same Failure Scenario, albeit for a different issue with the amqrmppa process.

Hope that helps clarify things; please let us know how you will be looking to progress?

Thanks and regards ... Adrian (for Bob).
----- End of the Reply from IBM Labs ----------
bruce2359
Posted: Mon May 09, 2022 2:10 am
Poobah. Joined: 05 Jan 2008. Posts: 9469. Location: US: west coast, almost. Otherwise, enroute.
czech77
Posted: Mon May 09, 2022 11:17 pm
Newbie. Joined: 07 May 2022. Posts: 5
Yes, we plan to apply 9.2.0.5, which includes the fix you mentioned above. My question on the choice of bindings (Standard vs Fastpath) still remains, though. I'll update this thread after our call with the IBM Hursley Labs person, for everybody's benefit.
PeterPotkay
Posted: Tue May 10, 2022 3:35 pm
Poobah. Joined: 15 May 2001. Posts: 7722
hughson wrote:
The question over whether one should use FASTPATH bindings or not was always, as I recall anyway, about whether you could trust the code running in the same process as the queue manager agent. Since you are on the MQ Appliance, you cannot have any user code in the system, i.e. no channel exits, so there would seem to be nothing that isn't trust-worthy IBM code. This would normally be an excellent choice for FASTPATH.
And yet IBM decided it's better to ship the appliance without the default being FASTPATH.

czech77, did you do a performance comparison between Standard and Fastpath bindings? If so, was the difference relevant and worth the additional risk? Relevant, because let's say it reduced latency by 50% (wow, how great!). But oh wait, it's 1 microsecond instead of 2 microseconds; so while technically 50% less latency, it's not noticeable and thus irrelevant. I don't know the actual performance lift (if any); I'm just using that as an example. You may be chasing irrelevant performance gains at the cost of real risk to stability. _________________ Peter Potkay
Keep Calm and MQ On
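Peter's 50%-versus-1-microsecond point is simple arithmetic, but worth making concrete; a throwaway sketch using his illustrative (not measured) numbers:

```python
# Hypothetical per-message round-trip latencies in microseconds
# (Peter's example figures, not real MQ measurements).
standard_us = 2.0
fastpath_us = 1.0

relative_gain = (standard_us - fastpath_us) / standard_us  # fraction of latency saved
absolute_gain_us = standard_us - fastpath_us               # wall-clock saving per message

print(f"relative gain: {relative_gain:.0%}")       # 50%: sounds dramatic
print(f"absolute gain: {absolute_gain_us:g} us")   # 1 microsecond: rarely noticeable
```

The same relative improvement can be decisive or irrelevant depending on the absolute numbers behind it, which is why only a test against your own workload settles the question.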
hughson
Posted: Tue May 10, 2022 8:12 pm
Padawan. Joined: 09 May 2013. Posts: 1959. Location: Bay of Plenty, New Zealand
PeterPotkay wrote:
hughson wrote:
The question over whether one should use FASTPATH bindings or not was always, as I recall anyway, about whether you could trust the code running in the same process as the queue manager agent. Since you are on the MQ Appliance, you cannot have any user code in the system, i.e. no channel exits, so there would seem to be nothing that isn't trust-worthy IBM code. This would normally be an excellent choice for FASTPATH.
And yet IBM decided it's better to ship the appliance without the default being FASTPATH.
Is that true, Peter? I had been under the impression that the default on the Appliance was indeed FASTPATH.
Cheers,
Morag
PeterPotkay
Posted: Wed May 11, 2022 5:40 am
Poobah. Joined: 15 May 2001. Posts: 7722
hughson wrote:
PeterPotkay wrote:
hughson wrote:
The question over whether one should use FASTPATH bindings or not was always, as I recall anyway, about whether you could trust the code running in the same process as the queue manager agent. Since you are on the MQ Appliance, you cannot have any user code in the system, i.e. no channel exits, so there would seem to be nothing that isn't trust-worthy IBM code. This would normally be an excellent choice for FASTPATH.
And yet IBM decided it's better to ship the appliance without the default being FASTPATH.
Is that true, Peter? I had been under the impression that the default on the Appliance was indeed FASTPATH.
Cheers,
Morag
I don't know! I don't have access to an MQ Appliance.

I ASSumed it defaulted to Standard, inferring that from the fact that IBM is telling czech77 not to use Fastpath. I mean, why would IBM ship something with a default setting they don't want customers to use?

But I admit, I don't know what the MQ Appliance's default is for Fastpath versus Standard.
dware
Posted: Wed May 11, 2022 5:41 am
Novice. Joined: 18 Nov 2013. Posts: 13
I can confirm that fastpath is the default for the MQ Appliance.

On the original question: fastpath for channels definitely can make a difference to overall message throughput (not just latency), although it will depend on the scenario. The most benefit would likely be seen in CPU-intensive scenarios, for example client-connected, non-persistent messaging applications, although benefits in persistent scenarios have also been seen. You'd really need to test your style of workload to understand the relative impact and whether it's acceptable in your situation.

In the case of the MQ Appliance there is no possibility of user code, which is why fastpath channels are enabled by default in that environment. I would expect the original suggestion to disable fastpath channels on the appliance to have been predominantly a temporary measure, to mitigate the risk from the known MQ problem identified in the version that was running. With the fix applied, that risk would be removed.

The fact that this situation arose shows that there are never zero implications to running fastpath channels, even without exits in the picture. But then, as we all know, that's true for any piece of logic, so it's a constant balance between levels of protection and performance. MQ's view is that when no external code is in the equation, fastpath channels make sense for the performance benefits they bring. However, the option to be more cautious, at the cost of reduced top-end performance, is still available to you.

It's probably worth saying that any situation where fastpath causes a problem with MQ-only channel logic is treated as a product defect, and IBM would obviously work to resolve it.

David Ware
IBM MQ
Andyh
Posted: Wed May 11, 2022 10:58 am
Master. Joined: 29 Jul 2010. Posts: 239
There are also very significant memory consumption savings in using fastpath bindings for channels, particularly where large numbers of clients are attached.

MQ supports 4 types of MQI binding:
1. Client
2. Isolated
3. Shared (aka standard)
4. Fastpath

With client and isolated bindings there is no writable shared memory attached in the 'application' process, so no matter how badly a client- or isolated-bound application misbehaves, it cannot damage any other MQ process by overwriting MQ shared memory.

Both shared- and fastpath-bound processes run the risk that a badly behaved process could overwrite memory shared with other MQ processes, potentially leading to a wider queue-manager-scope issue.

Fastpath-bound applications access the most shared state and can additionally execute blocks of code in which MQ cannot tolerate a failure. Any failure in one of these "must complete" blocks would be detected and result in the queue manager aborting: when a heavily multi-threaded process exits abruptly while any of its threads is executing in a must-complete block, the queue manager terminates abruptly (with no loss of persistent message integrity). The risk of running badly behaved 'applications' in fastpath mode is therefore higher than running them with shared bindings.

The queue manager treats an amqrmppa process much like any other multi-threaded application in this regard, except that amqrmppa can ONLY use shared or fastpath bindings. Thus a badly behaved amqrmppa (APARable, in the absence of any user code, i.e. exits) always implies some risk of an abrupt queue manager termination. Given that a positive identification of such an APARable issue has been made, it is important that the customer applies the appropriate fix, either by applying current service or by applying the correct specific fix.
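For locally bound applications (as distinct from channels), the binding type Andyh lists can often be changed without touching code: IBM MQ documents the MQ_CONNECT_TYPE environment variable, which influences the binding a locally bound application receives, subject to the connect options the application passed on MQCONNX. A sketch:

```shell
# Request standard bindings for MQ applications started from this shell
# (takes effect in combination with the application's own connect options).
export MQ_CONNECT_TYPE=STANDARD
```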