SVRCONN active channels VERSUS queue manager performance
jlc
Posted: Thu Nov 27, 2008 12:03 pm    Post subject: SVRCONN active channels VERSUS queue manager performance
Novice
Joined: 27 Nov 2008    Posts: 10
Hi all, we are having some performance issues - slowness - on a queue manager, and we have not been able to identify the root cause of the problem.
This particular queue manager is a member of an MQ cluster and usually handles around 30,000 messages per day (roughly 6 GB in total volume).
We recently started receiving JMS SVRCONN connections from an SAP XI environment, and we realized that as the number of active channels increases, the performance of the queue manager degrades. I'm not talking about hitting the maximum number of active channels, but about how many active connections my queue manager can handle without its performance being affected.
The business requires that this queue manager handle around 1,000 active SVRCONN channels simultaneously. The server the queue manager runs on is an AIX box with plenty of hardware resources available (memory, processors, etc.). The MQ version running there is 5.3 with the latest CSD applied.
The slowness is intermittent: today the queue manager performs fine, and tomorrow, without anything having changed, messages start flowing through it slowly. Message processing speed in this environment is really critical from a business perspective. Looking at the AIX topas utility while the problem is happening shows no issue on the server side: CPU usage is fine, and disks, memory, everything else look good.
So my question is: even with server resources available, does the queue manager have a limit on the number of SVRCONN channels running at the same time? The queue manager logs show no failures or problems, and no FDC files are being created.
Any help will be greatly appreciated!
SAFraser
Posted: Thu Nov 27, 2008 12:24 pm    Post subject:
Shaman
Joined: 22 Oct 2003    Posts: 742    Location: Austin, Texas, USA
What symptom has led you to conclude that the latency is in MQ, rather than in the putting or getting application? Just curious, as it is so often the application and so seldom MQ.
jlc
Posted: Thu Nov 27, 2008 1:09 pm    Post subject:
Novice
Joined: 27 Nov 2008    Posts: 10
SAFraser wrote:
What symptom has led you to conclude that the latency is in MQ, rather than in the putting or getting application? Just curious, as it is so often the application and so seldom MQ.
Thanks for your reply, SAFraser! I left out of my question what led me to this conclusion.
Most of the 30,000 messages that flow through this queue manager travel over cluster sender/receiver channels, and it was performing fine before the JMS side started exchanging a few messages with this queue manager over SVRCONN channels. As soon as we notice the queue manager's performance degrading, the workaround is to stop the SVRCONN channels immediately - about 1,000 SVRCONN channels are ended and the connectivity with the JMS side is lost. As soon as that's done, the queue manager's performance improves and the cluster's message flow returns to normal speed.
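In case it helps, this is essentially the workaround we run from runmqsc (the channel name SAP.SVRCONN is just illustrative, not our real one):
* end all running instances of the client channel (illustrative name)
stop channel(SAP.SVRCONN) mode(quiesce)
* confirm no instances are left running
dis chstatus(SAP.SVRCONN)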
Since 99% of the messages handled by this queue manager flow over cluster channels, we just wait until the entire backlog is processed, and only then do we restart the SVRCONN channels and go back to monitoring the queue manager's performance.
That is why I think that managing a large number of SVRCONN channels affects the performance of the queue manager.
Please let me know if you need any other information. And thanks a lot for your help!
fjb_saper
Posted: Thu Nov 27, 2008 2:35 pm    Post subject:
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
OK,
it looks like what you're experiencing may be a backup on the cluster sender channel => a backup on the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
You need to check which of the destination queue managers shows messages in the DLQ with reason 2053, destination queue full.
Either increase MAXDEPTH on the offending queues, or assign more resources so the messages are processed faster and in parallel.
Alternatively, you can look into the cluster receiver channel parameters to have the messages moved to the DLQ much faster than happens by default; a sketch of both follows.
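Something along these lines in runmqsc (the queue, channel, and depth values are just for illustration):
* on the sending qmgr: is the cluster transmit queue backing up?
dis ql(SYSTEM.CLUSTER.TRANSMIT.QUEUE) curdepth
* on the destination qmgr: messages on the DLQ? target queue full?
dis ql(SYSTEM.DEAD.LETTER.QUEUE) curdepth
dis ql(myprocess) curdepth maxdepth
* give the offending queue more headroom
alter ql(myprocess) maxdepth(500000)
* have the cluster receiver give up on a full queue sooner, so messages
* reach the DLQ faster (the defaults are mrrty 10, mrtmr 1000 ms)
alter chl(myqmgr.online) chltype(clusrcvr) mrrty(1) mrtmr(100)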
Another alternative is to create an overlapping cluster with its own channels to handle batch processes exclusively, so that online processes keep a higher priority and are not held up by backlogs. This still means you need enough processing power to keep the queues as near to empty as possible. Even with a higher priority, it still takes some time for a message to make it from the back of the queue to the front.
When creating overlapping clusters, make sure you have either two queues, one in each cluster, to process the messages, or, if you want to keep the processing on a single queue, at least a clustered queue alias that carries the batch cluster information...
example:
* one local queue, exposed to both clusters through aliases
def ql(myprocess) maxdepth(500000)
def qa(myprocess.online) cluster(online) targq(myprocess) defprty(9)
def qa(myprocess.batch) cluster(batch) targq(myprocess) defprty(0)
* one cluster receiver per cluster on this queue manager
def chl(myqmgr.online) chltype(clusrcvr) conname('myhost(port)') cluster(online)
def chl(myqmgr.batch) chltype(clusrcvr) conname('myhost(port)') cluster(batch)
This has the advantage that by setting one or the other of the aliases to put(inhibited), you can force batch or online traffic to a separate group of servers, and you can add or remove queue managers from the online processing group at will, without the applications having to change or even being aware of it.
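For example (same illustrative names as above):
* drain batch traffic away from this qmgr's instance of the queue;
* the cluster workload algorithm skips put-inhibited instances
alter qa(myprocess.batch) put(disabled)
* re-enable it when you want the traffic back
alter qa(myprocess.batch) put(enabled)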
You will also need to check whether you need different cluster sender channels for each of the overlapping clusters, or whether a single cluster sender channel (with a cluster namelist) will do; see the sketch below.
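The namelist variant would look something like this (the repository host and channel name are just placeholders):
* one namelist covering both clusters
def nl(both.clusters) names(online, batch)
* a single manually defined cluster sender to the full repository,
* shared by both overlapping clusters
def chl(to.myrepos) chltype(clussdr) conname('repohost(port)') clusnl(both.clusters)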
Have fun
_________________
MQ & Broker admin
PeterPotkay
Posted: Thu Nov 27, 2008 9:35 pm    Post subject:
Poobah
Joined: 15 May 2001    Posts: 7722
As soon as I hear "JMS" and "performance problem" together, I immediately suspect improper use of message selectors, which can be a HUGE drag on performance if the queues get deep. If you are using selectors, and the queues are deep, and you have 1,000 of these JMS clients all doing the same thing, no wonder the QM is slowing down.
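You can usually see the signature of this from the MQ side with something like the following (the queue and channel names are examples, substitute your own): a deep queue plus lots of handles open for browse, because at this MQ level the JMS client implements selectors by browsing messages and evaluating the selector against each one.
* how deep is the queue the JMS clients consume from?
dis ql(SAP.REQUEST.QUEUE) curdepth ipprocs
* who has it open, and are the handles browsing?
dis qstatus(SAP.REQUEST.QUEUE) type(handle) appltag browse
* how busy are the SVRCONN instances?
dis chstatus(SAP.SVRCONN) status msgs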
Before you do what FJ suggests, please give more details about what these JMS clients are doing, how many queues you have and how deep they get, and the MQ version the clients are using.
_________________
Peter Potkay
Keep Calm and MQ On
fjb_saper
Posted: Thu Nov 27, 2008 9:49 pm    Post subject:
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
PeterPotkay wrote:
As soon as I hear "JMS" and "performance problem" together, I immediately suspect improper use of message selectors, which can be a HUGE drag on performance if the queues get deep. If you are using selectors, and the queues are deep, and you have 1,000 of these JMS clients all doing the same thing, no wonder the QM is slowing down.
Before you do what FJ suggests, please give more details about what these JMS clients are doing, how many queues you have and how deep they get, and the MQ version the clients are using.
And if you are using message selectors, you need to show us HOW you built the selectors, as this is going to have a major influence on how fast they are.
_________________
MQ & Broker admin
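For example (the property names below are invented, just to show the shape): the MQ JMS client can map a selector on JMSCorrelationID or JMSMessageID straight onto MQMD CorrelId/MsgId matching, while a selector on application-defined properties forces it to browse and parse every message on the queue:
JMSCorrelationID = 'ID:...'
OrderType = 'URGENT' AND Region = 'EMEA'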
jlc
Posted: Thu Nov 27, 2008 11:42 pm    Post subject:
Novice
Joined: 27 Nov 2008    Posts: 10
I really appreciate your feedback, thank you.
I agree with Peter; I'd like to try to fix the JMS performance first instead of changing the cluster setup. The main reason is that every time I disable the SVRCONN channels, the cluster channels' performance improves (messages flowing through transmission queues, local queues, etc. speed up).
Unfortunately, the people who developed these JMS applications are not that friendly, so I really think it will take some time to get the information you requested... wish me luck!
I really appreciate your time,
Thanks and regards!
fjb_saper
Posted: Fri Nov 28, 2008 6:52 am    Post subject:
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
Just talk to them politely and ask for their help. Tell them you've been asked by the experts to provide some information to help solve the performance problem you are seeing, and that you need their help to do so...
It should go a long way...
_________________
MQ & Broker admin