Jump in number of connections opened by a queue manager
andrewfemin (Acolyte, Joined: 26 Aug 2017, Posts: 54)
Posted: Mon Dec 16, 2019 9:35 pm    Post subject: Jump in number of connections opened by a queue manager
Hi,
We have 2 full repositories (FRs), 80+ partial repositories (PRs), 2 remote QMs and 3 applications connecting through SVRCONN channels in our production environment. The remote QMs and the SVRCONN applications all connect to FR1, making FR1 a gateway QM.
We have been observing a strange behaviour. Every Monday, at precisely 6 PM CST, the CLUSSDR channels in all the PRs go to BINDING or INITIALIZING status. Once we restart FR1, the CLUSSDR channels go back to RUNNING.
On raising a case with IBM, we were told that MQ was hitting an OS limit with pthread_create, and we were asked to contact our platform team.
When we tried to troubleshoot further, we found that DIS CONN(*) returned 47 connections at 5:59 PM and that the count starts increasing drastically after 6:00 PM. By around 6:04 PM it reaches about 5500 connections, and that is when the PRs start having the connection issue. It looks like the OS (or the QM) is hitting some limit and is unable to create further connections.
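For anyone wanting to reproduce the count, a loop along these lines works (FR1 is just an example name; use whichever queue manager shows the growth):
Code:
# Watch the connection count once a minute:
while true; do
    count=$(echo "DIS CONN(*) TYPE(CONN)" | runmqsc FR1 | grep -c "AMQ8276")
    echo "$(date '+%H:%M:%S')  connections: $count"
    sleep 60
done

# Group the connections by owning PID to see which process holds them:
echo "DIS CONN(*) TYPE(CONN) PID" | runmqsc FR1 |
    grep -o "PID([0-9]*)" | sort | uniq -c | sort -rn | head

Mapping a PID back to its process (for example, ps -p 1449) at least shows whether it is amqrmppa, the channel pooling process, or something else.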
We are also trying to find out which application (or channel) is creating this many connections, but when I display one of the 5500 connections it does not carry much information. Here's the result:
Code:
AMQ8276: Display Connection details.
CONN(1E71F45DFDDC472B)
EXTCONN(414D51434C534D515052442020202020)
TYPE(CONN)
PID(1449) TID(161)
APPLDESC(IBM MQ Channel) APPLTAG(amqrmppa)
APPLTYPE(SYSTEM) ASTATE(NONE)
CHANNEL( ) CLIENTID( )
CONNAME( )
CONNOPTS(MQCNO_HANDLE_SHARE_BLOCK,MQCNO_SHARED_BINDING)
USERID(mqm) UOWLOG( )
UOWSTDA( ) UOWSTTI( )
UOWLOGDA( ) UOWLOGTI( )
URTYPE(QMGR)
EXTURID(XA_FORMATID[] XA_GTRID[] XA_BQUAL[])
QMURID(0.0) UOWSTATE(NONE)
The channel name is empty and APPLTAG is amqrmppa, so this does not help much in identifying the culprit. In such cases, is there a way to identify which component is creating that many connections? Also, DIS CHS(*) during the issue does not show any increase in the number of channels, so I'm unable to proceed with the troubleshooting. Kindly help me out with your suggestions.
Thanks,
Andrew
LJM (Novice, Joined: 05 Jul 2018, Posts: 22)
Posted: Tue Dec 17, 2019 6:21 am    Post subject:
andrewfemin (Acolyte, Joined: 26 Aug 2017, Posts: 54)
Posted: Tue Dec 17, 2019 11:53 am    Post subject:
We are running MQ 9.0.0.4 on SUSE Linux Enterprise Server 12 SP3. So it must be a different issue?
gbaddeley (Jedi Knight, Joined: 25 Mar 2003, Posts: 2538, Location: Melbourne, Australia)
Posted: Tue Dec 17, 2019 3:17 pm    Post subject:
At 6:00 PM, when the connections begin, start MQ trace, and end it after a few seconds. This will most likely show the source. Make sure you have plenty of space in /var/mqm/trace.
_________________
Glenn
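A minimal sketch of that trace capture, assuming a default Linux install and using FR1 as an illustrative queue manager name:
Code:
# Start queue manager trace just before 6:00 PM (run as the mqm user):
strmqtrc -m FR1 -t all -t detail

# Let the connections climb for a few seconds, then stop the trace:
endmqtrc -m FR1

# Format the binary trace files for reading:
dspmqtrc /var/mqm/trace/*.TRC

The formatted .FMT files left in /var/mqm/trace should show which process and thread are driving the connects.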
PeterPotkay (Poobah, Joined: 15 May 2001, Posts: 7722)
Posted: Tue Dec 17, 2019 5:32 pm    Post subject:
andrewfemin wrote:
    SUSE Linux Enterprise Server 12 SP3
andrewfemin wrote:
    MQ was hitting an OS limit with pthread_create
When our SUSE servers went from SLES 11 to SLES 12 SP4, we hit the same problems: instability, with FDCs complaining about pthread_create failures.
An O/S parameter called DefaultTasksMax was the culprit in our case. Out of the box, SLES 12 SP3 has it set to a stupidly low value (512, I think), limiting how many threads/processes your mqm account is able to spawn.
The command to view it is "systemctl show --property DefaultTasksMax".
Also check the settings for pid_max and threads-max, as well as nofile (-Hn, -Sn) and nproc (-Hu, -Su).
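A rough way to check all of those on SLES, and to raise DefaultTasksMax (the 32768 figure is only an illustration, not a recommendation):
Code:
# systemd per-unit task (thread/process) limit:
systemctl show --property DefaultTasksMax

# Kernel-wide ceilings:
cat /proc/sys/kernel/pid_max
cat /proc/sys/kernel/threads-max

# Per-user limits for the mqm account (run these as mqm):
ulimit -Hn; ulimit -Sn    # open files (nofile)
ulimit -Hu; ulimit -Su    # max user processes (nproc)

# To raise DefaultTasksMax, set it in /etc/systemd/system.conf, e.g.
#   DefaultTasksMax=32768
# then re-execute systemd (or reboot) so the new value is picked up:
systemctl daemon-reexec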
I found the overall problem-resolution process very frustrating, with nobody able to give us a way to identify exactly which O/S parameter was causing MQ to barf on pthread_create. Furious Googling and some dumb luck led us to another poor sap dealing with pthread_create issues caused by DefaultTasksMax being set too low by SUSE.
_________________
Peter Potkay
Keep Calm and MQ On