crashdog
Posted: Fri Feb 02, 2018 2:17 am Post subject: amqrmppa memory leak on AIX 7.1 with MQ 7.5.0.8
Voyager
Joined: 02 Apr 2017 Posts: 77
We are experiencing a memory leak with MQ 7.5.0.8 running on AIX 7.1. The process involved is the infamous amqrmppa.
The behaviour is slightly different on our test system and on the production system.
The difference is that on the production system, after restarting the queue manager we see 4 amqrmppa processes. One of them starts growing in size and the other amqrmppa processes "disappear" one after the other over time. On the test system one amqrmppa process also grows, but the non-growing ones remain running.
No FDCs are thrown on either system.
The main change shortly before we started to observe the memory leak was one application, running over a SVRCONN channel, that started to use message properties. It is a C program and it calls MQINQMP to inquire the message properties. The application also runs on an AIX system. However, the developer does not have a debugger, so I tried to debug it in my own dev environment, which runs on Sun 5.10 SPARC using Sun Studio 12.1 and MQ 7.5.0.4. I have seen some minor memory leaks in the application, but I could not reproduce the issue on the dev AIX MQ server. So there are some doubts that the application is the cause of the memory leak.
We opened a PMR with IBM some weeks ago and sent them several traces, including a gencore of the amqrmppa process that grows, but so far no solution has been delivered.
I have looked at probably all the IBM Support pages reporting anything on amqrmppa and memory leaks, but none appear to fit this case.
On the production system we changed the SHARECNV attribute from 10 to 1, but that made no change in behaviour.
Maybe someone in the MQ community has experienced something similar, or has a suggestion for what we could try?
Kind Regards,
Gerhard _________________ You win again gravity ! |
fjb_saper
Posted: Fri Feb 02, 2018 5:47 am Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
Hello Gerhard,
One way of investigating this problem is to run other client applications in isolation and see if you get the same problem with the memory leak. If you don't, it's clearly the application that's at fault.
Hope it helps  _________________ MQ & Broker admin |
gbaddeley
Posted: Sun Feb 04, 2018 2:39 pm Post subject:
Jedi Knight
Joined: 25 Mar 2003 Posts: 2538 Location: Melbourne, Australia
Google for "amqrmppa memory" shows a lot of hits.
AFAIK, each amqrmppa process handles 64 active MCAs (Message Channel Agents). A new process is started each time that threshold is reached as new channel instances are initiated.
I haven't seen MQ terminate one of these processes when all its MCAs have ended, but that's not to say it doesn't happen. _________________ Glenn |
PaulClarke
Posted: Sun Feb 04, 2018 8:23 pm Post subject:
Grand Master
Joined: 17 Nov 2005 Posts: 1002 Location: New Zealand
Could you give us a bit more information? How much memory are we talking about here? How many threads? i.e. what is the memory usage per thread? etc. _________________ Paul Clarke
MQGem Software
www.mqgem.com |
crashdog
Posted: Mon Feb 05, 2018 6:53 am Post subject:
Voyager
Joined: 02 Apr 2017 Posts: 77
Unfortunately we only have one test system. It's going to be difficult to make the other applications stop their testing. But maybe we have to go that way.
Quote: |
Google for "amqrmppa memory" shows a lot of hits.
|
I have; none of the results bear any similarity to this case.
By the way, we have two production AIX queue managers, and we only see the memory leak on the one running the application that does the message property inquiry.
The production systems have 16 GB of real memory; the test and dev systems each have 8 GB. Paging space is 24 GB on prod and test, and 12 GB on dev.
On prod we currently have 71 threads in the growing process:
Code: |
ps -o pid,comm,user,thcount -p 13303832
PID COMMAND USER THCNT
13303832 amqrmppa mqm 71
svmon -P 13303832 -m
-------------------------------------------------------------------------------
Pid Command Inuse Pin Pgsp Virtual 64-bit Mthrd 16MB
13303832 amqrmppa 501199 9974 0 501176 Y Y N
PageSize Inuse Pin Pgsp Virtual
s 4 KB 464735 86 0 464712
m 64 KB 2279 618 0 2279
L 16 MB 0 0 0 0
S 16 GB 0 0 0 0
Vsid Esid Type Description PSize Inuse Pin Pgsp Virtual
8fb10f 17 work text data BSS heap sm 65536 0 0 65536
81d041 16 work text data BSS heap sm 65536 0 0 65536
9bc6fb 19 work text data BSS heap sm 65536 0 0 65536
80cae0 18 work text data BSS heap sm 65536 0 0 65536
91c7b1 15 work text data BSS heap sm 65536 0 0 65536
97d0f7 1a work text data BSS heap sm 65536 0 0 65536
810841 14 work text data BSS heap sm 52183 0 0 52183
990019 90000000 work shared library text m 1610 0 0 1610
20002 0 work kernel segment m 664 615 0 664
8b05cb 1b work text data BSS heap sm 6099 0 0 6099
94ca34 11 work text data BSS heap sm 3479 0 0 3479
a000a 9ffffffd work shared library sm 2640 13 0 2640
8479a4 a0000005 work default shmat/mmap sm 2603 0 0 2603
9b001b 90020014 work shared library s 2140 0 0 2140
8eceae 9001000a work shared library data sm 589 0 0 589
880708 - work System Segment s 462 73 0 462
950755 a0000001 work default shmat/mmap sm 391 0 0 391
9072b0 13 work text data BSS heap sm 337 0 0 337
97c817 a0000004 work default shmat/mmap sm 274 0 0 274
81cba1 12 work text data BSS heap sm 141 0 0 141
8071e0 f00000002 work process private m 5 3 0 5
89cee9 a0000006 work default shmat/mmap sm 50 0 0 50
e000e 9ffffffe work shared library sm 45 0 0 45
83ca63 80020014 work USLA heap sm 31 0 0 31
980058 9fffffff clnt USLA text,/dev/hd2:8250 s 20 0 - -
82ca62 ffffffff work application stack sm 10 0 0 10
8d79ad a0001000 work default shmat/mmap sm 7 0 0 7
8bcccb 8fffffff work private load data s 6 0 0 6
93cbd3 10 clnt text data BSS heap, s 3 0 - -
/dev/hd2:102749
9070d0 a0000000 work default shmat/mmap sm 3 0 0 3
970277 a0000008 work default shmat/mmap sm 2 0 0 2
980558 a0000003 work default shmat/mmap sm 2 0 0 2
9c703c a0000007 work default shmat/mmap sm 1 0 0 1
846fa4 a0000002 work default shmat/mmap sm 1 0 0 1
90c610 fffffff5 work application stack sm 0 0 0 0
8b248b fffffffb work application stack sm 0 0 0 0
8fc6ef fffffff9 work application stack sm 0 0 0 0
8ec96e fffffff8 work application stack sm 0 0 0 0
9b091b fffffffd work application stack sm 0 0 0 0
817b81 fffffffe work application stack sm 0 0 0 0
9c06dc fffffffc work application stack sm 0 0 0 0
9fbc1f fffffff7 work application stack sm 0 0 0 0
9bcbdb fffffff6 work application stack sm 0 0 0 0
84cba4 fffffff1 work application stack sm 0 0 0 0
9ec93e fffffffa work application stack sm 0 0 0 0
8e73ce fffffff2 work application stack sm 0 0 0 0
8b40ab fffffff0 work application stack sm 0 0 0 0
93cad3 fffffff4 work application stack sm 0 0 0 0
94c634 fffffff3 work application stack sm 0 0 0 0
|
Is there a better way to check memory per thread?
The client application in question is restarted on a daily basis. It is terminated gracefully, meaning it calls MQCLOSE followed by MQDISC. When it is started again it calls MQCONN, MQOPEN and MQCRTMH once.
MQINQ and MQINQMP are called for each message as it appears.
In its retreive_message function I see the following:
Code: |
char *bufferPtr;
MQLONG compCode = MQCC_OK, codeReason = MQRC_NONE;
MQMD msgDesc = {MQMD_DEFAULT};
MQGMO getMsgOpts = {MQGMO_DEFAULT};
MQLONG bufferLength;
MQLONG dataLength;
getMsgOpts.MsgHandle = utilCntlPtr->inputMessHandle;
getMsgOpts.Version = MQGMO_VERSION_4;
getMsgOpts.MatchOptions = MQMO_NONE;
getMsgOpts.Options = MQGMO_NO_WAIT;
getMsgOpts.Options += MQGMO_SYNCPOINT;
|
The *bufferPtr is malloc'd and never freed, which causes a local leak in the application. But that should not concern the queue manager.
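For comparison, a version of the retrieve path that cannot leak on the client side would look roughly like the sketch below. This is illustrative only, not the real application code: the function name, the property name and the buffer sizes are placeholders, and error handling is trimmed.
Code: |
#include <stdlib.h>
#include <cmqc.h>   /* MQI definitions */

/* Sketch: get one message and inquire one property, reusing fixed buffers
   and freeing the payload buffer on every call. Names are illustrative.   */
void retrieve_one(MQHCONN hConn, MQHOBJ hObj, MQHMSG hMsg)
{
    MQMD    msgDesc    = {MQMD_DEFAULT};
    MQGMO   getMsgOpts = {MQGMO_DEFAULT};
    MQLONG  compCode, reason, dataLength;

    getMsgOpts.Version   = MQGMO_VERSION_4;
    getMsgOpts.MsgHandle = hMsg;                        /* handle created once by MQCRTMH */
    getMsgOpts.Options   = MQGMO_NO_WAIT + MQGMO_SYNCPOINT;

    MQLONG bufferLength = 4 * 1024 * 1024;              /* assumed maximum message size */
    char  *bufferPtr    = malloc(bufferLength);
    if (bufferPtr == NULL) return;

    MQGET(hConn, hObj, &msgDesc, &getMsgOpts,
          bufferLength, bufferPtr, &dataLength, &compCode, &reason);

    if (compCode != MQCC_FAILED)
    {
        /* Inquire one property into a fixed stack buffer - nothing to free here */
        MQIMPO  impo = {MQIMPO_DEFAULT};
        MQPD    pd   = {MQPD_DEFAULT};
        MQCHARV name = {MQCHARV_DEFAULT};
        MQLONG  type = MQTYPE_STRING;
        MQLONG  propLen;
        char    propValue[256];

        impo.Options |= MQIMPO_CONVERT_TYPE;            /* return the value as a string  */
        name.VSPtr    = "myapp.someProperty";           /* placeholder property name     */
        name.VSLength = MQVS_NULL_TERMINATED;

        MQINQMP(hConn, hMsg, &impo, &name, &pd, &type,
                sizeof(propValue), propValue, &propLen, &compCode, &reason);

        /* ... process payload and property value ... */
    }

    free(bufferPtr);   /* release the payload buffer every time - the free missing in our code */
}
|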
_________________ You win again gravity !
Last edited by crashdog on Mon Feb 05, 2018 7:30 am; edited 1 time in total |
fjb_saper
Posted: Mon Feb 05, 2018 7:29 am Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
Does the application ever release the memory acquired to deal with the properties?
Does the application acquire new memory to deal with the properties for every message?
 _________________ MQ & Broker admin |
crashdog
Posted: Mon Feb 05, 2018 7:40 am Post subject:
Voyager
Joined: 02 Apr 2017 Posts: 77
I will run it through the debugger again now and check for that. When I ran it earlier with "check mem" on in Sun Studio it didn't show me any local leaks other than the bufferPtr.
Can you think of a small program containing MQINQ and MQINQMP that would provoke such a memory leak? _________________ You win again gravity ! |
fjb_saper
Posted: Mon Feb 05, 2018 7:49 am Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
crashdog wrote: |
I will run it through the debugger again now and check for that. When I ran it earlier with "check mem" on in Sun Studio it didn't show me any local leaks other than the bufferPtr.
Can you think of a small program containing MQINQ and MQINQMP that would provoke such a memory leak? |
My understanding is it's all done through buffers and buffer pointers.
However, if you are not careful you might acquire memory for the properties of each message (buffer) and never release it, thus creating a leak...
In the same way, verify the memory handling of the message payload.
It's easier in managed .NET or Java because these things are garbage collected.
In C you have to manage the memory yourself.
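To make that lifecycle concrete, here is a rough sketch in C (the function name is a placeholder and error handling is omitted): create the message handle once after connecting, reuse it and fixed value buffers for every message, and delete it before disconnecting.
Code: |
#include <cmqc.h>

/* Sketch: tie the message handle to the connection, not to each message. */
void property_handle_lifecycle(MQHCONN hConn)
{
    MQCMHO cmho = {MQCMHO_DEFAULT};
    MQDMHO dmho = {MQDMHO_DEFAULT};
    MQHMSG hMsg = MQHM_UNUSABLE_HMSG;
    MQLONG compCode, reason;

    MQCRTMH(hConn, &cmho, &hMsg, &compCode, &reason);   /* create once, after MQCONN   */

    /* ... main loop: MQGET with gmo.MsgHandle = hMsg, then MQINQMP per message,
           reusing this one handle and fixed value buffers ...                    */

    MQDLTMH(hConn, &hMsg, &dmho, &compCode, &reason);   /* delete once, before MQDISC  */
}
|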
Hope it helps  _________________ MQ & Broker admin |
PaulClarke
Posted: Mon Feb 05, 2018 8:41 am Post subject:
Grand Master
Joined: 17 Nov 2005 Posts: 1002 Location: New Zealand
I am sorry, I am not familiar with that tool. When I asked about memory usage per thread, I guess what I was really asking was: have you got a successive set of readings that show that memory is leaking? It is perfectly normal for AMQRMPPA to use more memory as it supports more and more connections. As those connections disconnect you would expect the memory to drop, but if you see one AMQRMPPA that uses 10 times more memory than another it doesn't mean that it is leaking memory - it probably means that it is just running more SVRCONN channels. You can, to some extent, validate this theory by doing a DIS CHS(*) and looking at JOBNAME. This will give you the process id of the AMQRMPPA that is running each channel.
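For example, something along these lines (QM1 is just a placeholder for your queue manager name; the pid is the growing process from your svmon output):
Code: |
# map each running channel instance to the amqrmppa process (JOBNAME) hosting it
echo "DIS CHS(*) STATUS JOBNAME" | runmqsc QM1

# then take repeated readings of virtual size and thread count for the suspect process
ps -o pid,vsz,thcount -p 13303832
|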
_________________ Paul Clarke
MQGem Software
www.mqgem.com |
crashdog
Posted: Mon Feb 05, 2018 9:40 am Post subject:
Voyager
Joined: 02 Apr 2017 Posts: 77
On the production system there is now only one amqrmppa process again (it was four). When I list those JOBNAMEs I get 43 thread identifiers. Two appear to be sender/receiver channels and the rest SVRCONN.
The number of threads reported by the shell command stays stable at 71.
Those numbers appear to be stable over time... only the memory is growing.
Of course, to really check for a memory leak in the strict sense of the word, IBM has to check their process with something like Purify. But from the behaviour, continuously growing memory that is only released when the process is terminated, it sounds much like what I would expect to be a memory leak. Checking it from the client side is limited to what is reported by Sun Studio's check -memuse, and that only shows me some local leaks that I can't associate with the memory growth on the MQ server side.
Kind Regards,
Gerhard _________________ You win again gravity ! |
Vitor
Posted: Mon Feb 05, 2018 10:02 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
crashdog wrote: |
IBM has to check their process with something like Purify. But from the behaviour, continuously growing memory that is only released when the process is terminated, it sounds much like what I would expect to be a memory leak. |
If you're that certain, it's time for a PMR. _________________ Honesty is the best policy.
Insanity is the best defence. |
crashdog
Posted: Mon Feb 05, 2018 11:53 am Post subject:
Voyager
Joined: 02 Apr 2017 Posts: 77
See first post  _________________ You win again gravity ! |
Vitor
Posted: Mon Feb 05, 2018 12:10 pm Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
crashdog wrote: |
See first post  |
My bad.
You've clearly done your due diligence but I'm with @PaulClarke - the amqrmppa process using a lot of memory usually means it's doing a lot of work rather than leaking memory.
I look forward to your updates. _________________ Honesty is the best policy.
Insanity is the best defence. |
crashdog
Posted: Fri Feb 16, 2018 1:38 am Post subject:
Voyager
Joined: 02 Apr 2017 Posts: 77