nallabalu
Posted: Thu Jun 17, 2010 11:03 am Post subject: High Memory Usage After Migrating to v7.0
Novice
Joined: 29 Mar 2005 Posts: 19 Location: Long Island, NY
Hi,
I upgraded my production queue managers from WMQ 6.0.2.8 and v5.3 to WMQ 7.0.1.1 two days ago. Everything migrated fine: all the queue managers started and are processing messages (we use pub/sub mostly). But then I got a call from support saying the memory usage on the migrated server is too high, and I saw that each queue manager process is using more system memory than it did on the previous versions. Comparing this server with the other production servers still at MQ 6.x, the migrated queue managers use considerably more memory. We bounced all the queue managers yesterday when memory usage hit its maximum; that released some space, but it started growing again today. Page file usage (PF Usage) is hitting a high of 5.35 GB, compared to a maximum of 2.5 GB with the v6.0 queue managers.
I have 10 queue managers on each server
server - win 2003 sp1
physical memory - 3GB
Memory utilised - 2.5GB
Websphere MQ version - 7.0.1.1
Any idea what I should check? Please also let me know if any specific information is required. Any solution/help is appreciated.
Thanks.
aditya.aggarwal
Posted: Thu Jun 17, 2010 11:30 am Post subject:
Master
Joined: 13 Jan 2009 Posts: 252
Have you bounced the server and checked the results?
Which MQ process is growing the memory usage for the queue manager?
Is this happening with all 10 queue manager processes?
nallabalu
Posted: Thu Jun 17, 2010 12:02 pm Post subject:
Novice
Joined: 29 Mar 2005 Posts: 19 Location: Long Island, NY
Thanks for your response.
aditya.aggarwal wrote:
Have you bounced the server and checked the results?
I have bounced the server, and after a period of time the memory usage is back at the same high level.
Quote:
Which MQ process is growing the memory usage for the queue manager?
amqzlaa0 - 452,000 K and 513,468 K respectively across the queue managers
Quote:
Is this happening with all 10 queue manager processes?
Yes
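If it helps anyone tracking this, here is a minimal sketch of totalling the amqzlaa0 figures programmatically. It assumes the CSV layout produced by the Windows command `tasklist /FI "IMAGENAME eq amqzlaa0.exe" /FO CSV`; the helper name `total_kb` is my own invention, not an MQ tool.

```python
import csv
import io

def total_kb(tasklist_csv):
    """Sum the 'Mem Usage' column (values like '452,000 K') from
    `tasklist /FO CSV` output, counting only amqzlaa0.exe rows."""
    total = 0
    for row in csv.reader(io.StringIO(tasklist_csv)):
        if row and row[0].lower() == "amqzlaa0.exe":
            # Last column is memory usage, e.g. "452,000 K"
            total += int(row[-1].rstrip(" K").replace(",", ""))
    return total

# Hypothetical capture matching the figures reported above:
sample = ('"Image Name","PID","Session Name","Session#","Mem Usage"\n'
          '"amqzlaa0.exe","6616","Services","0","452,000 K"\n'
          '"amqzlaa0.exe","7120","Services","0","513,468 K"\n')
print(total_kb(sample))  # 965468
```

Run on a schedule, this gives a trend line to attach to a PMR instead of point-in-time Task Manager screenshots.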
aditya.aggarwal
Posted: Thu Jun 17, 2010 12:11 pm Post subject:
Master
Joined: 13 Jan 2009 Posts: 252
What type of logging is used at all 10 qmgrs?
Are there any FDCs reported?
Do you suspect a memory leak caused by the amqzlaa0 process?
aditya.aggarwal
Posted: Thu Jun 17, 2010 12:25 pm Post subject:
Master
Joined: 13 Jan 2009 Posts: 252
One more thing: in MQ 7 the memory requirement increased due to new features and functions.
I would recommend opening a PMR if there is any kind of memory leak.
There was a memory leak reported for the amqzlaa0 process in MQ 7; check whether this helps you:
http://www-01.ibm.com/support/docview.wss?rs=171&uid=swg1IZ43157
exerk
Posted: Thu Jun 17, 2010 12:30 pm Post subject:
Jedi Council
Joined: 02 Nov 2006 Posts: 6339
nallabalu wrote:
...amqzlaa0 - 452000k, 513468k respectively for all queue managers...
An old one, but one that may still be pertinent HERE. If nothing else, install the latest Fix Pack (7.0.1.2) and raise a PMR if it persists.
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
aditya.aggarwal
Posted: Thu Jun 17, 2010 12:44 pm Post subject:
Master
Joined: 13 Jan 2009 Posts: 252
nallabalu
Posted: Thu Jun 17, 2010 1:03 pm Post subject:
Novice
Joined: 29 Mar 2005 Posts: 19 Location: Long Island, NY
Thanks for your responses.
aditya.aggarwal wrote:
What type of logging is used at all 10 qmgrs?
Circular
aditya.aggarwal wrote:
Are there any FDCs reported?
Yes, there are a lot of FDCs.
exerk wrote:
An old one, but one that may still be pertinent HERE. If nothing else, install the latest Fix Pack (7.0.1.2) and raise a PMR if it persists.
I have opened a PMR and they are investigating with the information and logs I provided. Meanwhile, I am also getting my ESF team's approval to install the latest Fix Pack, 7.0.1.2.
Thanks for the links. I also found some on the internet; I will go through them and get back to you.
fjb_saper
Posted: Thu Jun 17, 2010 1:27 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
Wasn't there some kind of stack limitation on Windows such that you could not run more than 9 qmgrs simultaneously?
If you need to run a lot of qmgrs on the same box, you might want to move to Linux or another non-Windows platform.
Have fun
_________________
MQ & Broker admin
nallabalu
Posted: Thu Jun 17, 2010 1:42 pm Post subject:
Novice
Joined: 29 Mar 2005 Posts: 19 Location: Long Island, NY
fjb_saper wrote:
stack limitation on Windows that you could not run more than 9 qmgrs simultaneously?
Is this something new in WMQ 7.0? Since this was a migration, the same number of queue managers were running on MQ 6.0 without any memory issues.
aditya.aggarwal wrote:
http://www-01.ibm.com/support/docview.wss?rs=171&uid=swg1IZ43157
This link points to Fix Pack 7.0.1.0, which is a bit older than the 7.0.1.1 I am currently using. I assume fix packs are cumulative, so all the fixes in previous releases are included in the latest one?
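Fix packs are indeed cumulative, so whether an APAR's fix is already installed comes down to comparing the installed level (as reported by `dspmqver`) against the level the fix shipped in. A minimal sketch of that comparison; `level_at_least` is a made-up helper name, not an MQ command:

```python
def level_at_least(installed, required):
    """True if a cumulative fix-pack level `installed` already
    contains everything shipped at level `required`.
    Compares dotted version strings numerically, not lexically."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(required)

# IZ43157 shipped in fix pack 7.0.1.0, so a 7.0.1.1 install has it:
print(level_at_least("7.0.1.1", "7.0.1.0"))  # True
# ...but 7.0.1.1 does not contain the 7.0.1.2 fixes:
print(level_at_least("7.0.1.1", "7.0.1.2"))  # False
```

The numeric tuple comparison matters: a plain string comparison would mis-order levels like "7.0.1.10" and "7.0.1.2".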
Vitor
Posted: Thu Jun 17, 2010 1:53 pm Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
nallabalu wrote:
Is this something new in WMQ 7.0? Since this was a migration, the same number of queue managers were running on MQ 6.0 without any memory issues.
I have to say I never managed that number of queue managers simultaneously on Windows without weirdness happening, but I'm not aware of any changes.
nallabalu wrote:
aditya.aggarwal wrote:
http://www-01.ibm.com/support/docview.wss?rs=171&uid=swg1IZ43157
This link points to Fix Pack 7.0.1.0, which is a bit older than the 7.0.1.1 I am currently using. I assume fix packs are cumulative, so all the fixes in previous releases are included in the latest one?
_________________
Honesty is the best policy.
Insanity is the best defence.
nallabalu
Posted: Thu Jun 17, 2010 2:58 pm Post subject:
Novice
Joined: 29 Mar 2005 Posts: 19 Location: Long Island, NY
Yes, I agree with you, Vitor. In fact, I suggested long ago that having a few queue managers per server, each with a lot of queues, would be more efficient than having many queue managers with only a few queues each. But management isn't willing to change a design they have had for a long time, other than upgrading it.
bruce2359
Posted: Thu Jun 17, 2010 3:15 pm Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9469 Location: US: west coast, almost. Otherwise, enroute.
Quote:
Yes, there are a lot of FDCs.
And did you look at the FDCs? Did you research the errors noted in the FDCs?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
nallabalu
Posted: Thu Jun 17, 2010 5:42 pm Post subject:
Novice
Joined: 29 Mar 2005 Posts: 19 Location: Long Island, NY
Yes. Most of the FDCs record that MQ was unable to obtain enough storage, and a few of them record "AMQ6118: An internal WebSphere MQ error has occurred". I did a search on the probe ID but found no hits.
Here is one of the FDCs generated (host name and QMGR name edited):
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Wed June 16 2010 16:14:06 Eastern Daylight Time |
| UTC Time :- 1276719246.174000 |
| UTC Time Offset :- 60 (Eastern Daylight Time) |
| Host Name :- 1234PROMIN03 |
| Operating System :- Windows Server 2003, Build 3790: SP2 |
| PIDS :- 5724H7220 |
| LVLS :- 7.0.1.1 |
| Product Long Name :- WebSphere MQ for Windows |
| Vendor :- IBM |
| Probe Id :- KN013001 |
| Application Name :- MQM |
| Component :- kpiMQPUT |
| SCCS Info :- lib/lqm/amqkputa.c, 1.286.1.2 |
| Line Number :- 339 |
| Build Date :- Dec 22 2009 |
| CMVC level :- p701-101-091221 |
| Build Type :- IKAP - (Production) |
| UserID :- MQSeries |
| Process Name :- E:\WebSphereMQ\bin\amqzlaa0.exe |
| Addressing mode :- 32-bit |
| Process :- 00006616 |
| Thread :- 00000260 |
| QueueManager :- QM_XYZ_ABCD124 |
| UserApp :- FALSE |
| ConnId(1) IPCC :- 20974 |
| ConnId(2) QM :- 322 |
| Last HQC :- 2.5.5-1673212 |
| Last HSHMEMB :- 2.10.20-106340 |
| Major Errorcode :- xecS_E_NONE |
| Minor Errorcode :- OK |
| Probe Type :- MSGAMQ6037 |
| Probe Severity :- 2 |
| Probe Description :- AMQ6037: WebSphere MQ was unable to obtain enough |
| storage. |
| FDCSequenceNumber :- 0 |
| Arith1 :- 536895543 20006037 |
| |
+-----------------------------------------------------------------------------+
MQM Function Stack
zlaMainThread
zlaProcessMessage
zlaProcessMQIRequest
zlaMQPUT
zsqMQPUT
kpiMQPUT
xcsFFST
MQM Trace History
--------------{ xllSpinLockRelease
--------------} xllSpinLockRelease rc=OK
--------------{ hosPostWaitPostArea
---------------{ xcsPostEventSem
---------------} xcsPostEventSem rc=OK
--------------} hosPostWaitPostArea rc=OK
--------------{ xcsWaitEventSem
--------------} xcsWaitEventSem rc=OK
-------------} mqlInitIOandWait rc=OK
------------} mqlIsLSNOnDisk rc=OK
-----------} mqlWriteLogRecord rc=OK
----------} hlgWriteLogRecord rc=OK
---------} almLogIt rc=OK
---------{ atmUpdateTranLastLSN
---------} atmUpdateTranLastLSN rc=OK
---------{ atxLockTranQHandles
----------{ aocLockQHandle
-----------{ xllLongLockRequest
-----------} xllLongLockRequest rc=OK
----------} aocLockQHandle rc=OK
---------} atxLockTranQHandles rc=OK
---------{ aqmSyncPoint
----------{ atmEnquireTranLastLSN
----------} atmEnquireTranLastLSN rc=OK
----------{ adhQueryOpen
-----------{ aduLocateFileCtl
-----------} aduLocateFileCtl rc=arcE_OBJECT_MISSING
----------} adhQueryOpen rc=OK
----------{ aqsStartQOp
----------} aqsStartQOp rc=OK
----------{ aqhSyncPointAction
-----------{ aqqEnsureDeferred
------------{ aqqGetDeferredLink
-------------{ aqqGetPreviousLink
-------------} aqqGetPreviousLink rc=OK
------------} aqqGetDeferredLink rc=OK
------------{ aqpWriteData
-------------{ aqpOpen
--------------{ adhOpen
---------------{ aduLocateFileCtl
---------------} aduLocateFileCtl rc=arcE_OBJECT_MISSING
---------------{ aduAllocFileCtl
---------------} aduAllocFileCtl rc=OK
---------------{ aduBuildOSName
---------------} aduBuildOSName rc=OK
---------------{ adiOpenFile
---------------} adiOpenFile rc=OK
--------------} adhOpen rc=OK
-------------} aqpOpen rc=OK
-------------{ adhWrite
--------------{ adiWriteFile
Data: 0x00000000 0x00000a48
Data: 0xffffffff 0xffffffff
Data: 0x00000000 0x02a0e808
---------------{ xcsQueryMTimeFn
---------------} xcsQueryMTimeFn rc=OK
---------------{ xcsQueryMTimeFn
---------------} xcsQueryMTimeFn rc=OK
--------------} adiWriteFile rc=OK
-------------} adhWrite rc=OK
-------------{ adhClose
--------------{ adiCloseFile
--------------} adiCloseFile rc=OK
--------------{ aduReleaseFileCtl
--------------} aduReleaseFileCtl rc=OK
-------------} adhClose rc=OK
------------} aqpWriteData rc=OK
-----------} aqqEnsureDeferred rc=OK
-----------{ aqhAddMsg
------------{ kpiTickle
------------} kpiTickle rc=OK
-----------} aqhAddMsg rc=OK
-----------{ aqhRecoverQueue
-----------} aqhRecoverQueue rc=OK
----------} aqhSyncPointAction rc=OK
----------{ aqsRecoverQOp
----------} aqsRecoverQOp rc=OK
----------{ adhQueryOpen
-----------{ aduLocateFileCtl
-----------} aduLocateFileCtl rc=arcE_OBJECT_MISSING
----------} adhQueryOpen rc=OK
----------{ aocUnlockQHandle
-----------{ xllLongLockRelease
-----------} xllLongLockRelease rc=OK
----------} aocUnlockQHandle rc=OK
---------} aqmSyncPoint rc=OK
--------} atxPerformCommit rc=OK
-------} atxCommit rc=OK
-------{ attReleaseTransaction
Data: 0x14a40800 0x00000001
Data: 0x00000000 0x00000000
--------{ atmUnlockDataMutex
---------{ xllLongLockRelease
---------} xllLongLockRelease rc=OK
--------} atmUnlockDataMutex rc=OK
--------{ atmLockTTMutex
---------{ xllLongLockRequest
---------} xllLongLockRequest rc=OK
--------} atmLockTTMutex rc=OK
Data: 0x14a40800 0x00000001
Data: 0x00000000 0x00000000
--------{ attRemoveTransaction
---------{ aocHash
---------} aocHash rc=OK
--------} attRemoveTransaction rc=OK
--------{ xllLongLockRelease
--------} xllLongLockRelease rc=OK
--------{ attDeallocateTransaction
---------{ almReleaseSpace
----------{ hlgReleaseLogSpace
-----------{ mqlReserveSpace
-----------} mqlReserveSpace rc=OK
----------} hlgReleaseLogSpace rc=OK
---------} almReleaseSpace rc=OK
--------} attDeallocateTransaction rc=OK
-------} attReleaseTransaction rc=OK
------} atmSyncPoint rc=OK
-----} apiSyncPoint rc=OK
-----{ kqiFastnetStartChannels
-----} kqiFastnetStartChannels rc=OK
----} kpiSyncPoint rc=OK
---} zsqMQCMIT rc=OK
--} zlaMQCMIT rc=OK
--{ zcpDeleteMessage
--} zcpDeleteMessage rc=OK
--{ zcpSendOnPipe
---{ xcsResetEventSem
---} xcsResetEventSem rc=OK
---{ xcsPostEventSem
---} xcsPostEventSem rc=OK
--} zcpSendOnPipe rc=OK
-} zlaProcessMQIRequest rc=OK
} zlaProcessMessage rc=OK
{ zcpReceiveOnPipe
-{ xcsWaitEventSem
-} xcsWaitEventSem rc=OK
} zcpReceiveOnPipe rc=OK
{ zlaCheckStatus
} zlaCheckStatus rc=OK
{ zlaProcessMessage
-{ zlaProcessMQIRequest
--{ zlaMQPUT
---{ zcpCreateMessage
---} zcpCreateMessage rc=OK
---{ zsqMQPUT
----{ zsqVerifyQueueOrTopicObj
-----{ zsqVerifyObj
-----} zsqVerifyObj rc=OK
----} zsqVerifyQueueOrTopicObj rc=OK
----{ zsqVerMsgDescForPut
----} zsqVerMsgDescForPut rc=OK
----{ zsqVerOptForPutPut1
----} zsqVerOptForPutPut1 rc=OK
----{ zsqSetKernelPutParams
----} zsqSetKernelPutParams rc=OK
----{ kpiMQPUT
-----{ kqiPutTopic
------{ kqiTopicGetMSStatus
------} kqiTopicGetMSStatus rc=OK
------{ kqiVerOptForPut
------} kqiVerOptForPut rc=OK
------{ kqiQPathCheck
------} kqiQPathCheck rc=OK
------{ kqiResolveTopic
-------{ kqiTopicLocateParent
--------{ xlsRWMutexRequest
---------{ xlsRWLockState
---------} xlsRWLockState rc=OK
---------{ xlsRWUnlockState
---------} xlsRWUnlockState rc=OK
--------} xlsRWMutexRequest rc=OK
-------} kqiTopicLocateParent rc=OK
-------{ kqiTopicLocateParent
-------} kqiTopicLocateParent rc=krcE_NOT_FOUND
-------{ xlsRWMutexRelease
--------{ xlsRWLockState
--------} xlsRWLockState rc=OK
--------{ xlsRWUnlockState
--------} xlsRWUnlockState rc=OK
-------} xlsRWMutexRelease rc=OK
------} kqiResolveTopic rc=OK
------{ kqiVerMsgForPutPutList
------} kqiVerMsgForPutPutList rc=OK
------{ zrfParse
-------{ zrfLocateMQRFH2
--------{ xcsConvertString
--------} xcsConvertString rc=OK
-------} zrfLocateMQRFH2 rc=OK
-------{ xcsReallocMemFn
--------{ xcsGetMemFn
--------} xcsGetMemFn rc=OK
-------} xcsReallocMemFn rc=OK
-------{ xcsGetMemFn
-------} xcsGetMemFn rc=OK
-------{ zrfParseInitialize
--------{ zrfQueryCCSIDType
---------{ xcsQueryCCSIDType
---------} xcsQueryCCSIDType rc=OK
--------} zrfQueryCCSIDType rc=OK
-------} zrfParseInitialize rc=OK
-------{ zrfParseUTF8FolderName
-------} zrfParseUTF8FolderName rc=OK
-------{ zrf_folder_parse
-------} zrf_folder_parse rc=OK
-------{ zrfLocatePropertyNode
-------} zrfLocatePropertyNode rc=MQRC_PROPERTY_NOT_AVAILABLE
-------{ zrfParseUTF8FolderName
-------} zrfParseUTF8FolderName rc=OK
-------{ zrf_folder_parse
-------} zrf_folder_parse rc=OK
-------{ zrfLocatePropertyNode
-------} zrfLocatePropertyNode rc=MQRC_PROPERTY_NOT_AVAILABLE
-------{ zrfParseUTF8FolderName
-------} zrfParseUTF8FolderName rc=OK
-------{ zrf_folder_parse
--------{ xcsReallocMemFn
--------} xcsReallocMemFn rc=OK
-------} zrf_folder_parse rc=OK
-------{ zrfLocatePropertyNode
-------} zrfLocatePropertyNode rc=MQRC_PROPERTY_NOT_AVAILABLE
-------{ xcsReallocMemFn
Data: 0x00013c50
-------} xcsReallocMemFn rc=xecS_E_NONE
-------{ xcsFreeMemFn
-------} xcsFreeMemFn rc=OK
------} zrfParse rc=xecS_E_NONE
Data: 0x00000000 0x00000000
------{ kqiTopicPublishComplete
------} kqiTopicPublishComplete rc=OK
------{ kqiErrorEvent
------} kqiErrorEvent rc=OK
-----} kqiPutTopic rc=xecS_E_NONE
-----{ xcsFFST
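With that many FDCs, it can help to tally them by probe ID and error code before (or while) working the PMR, to see whether it is one failure repeating or several distinct ones. A minimal sketch that assumes the `Probe Id :- ...` header layout shown above; gathering the actual *.FDC files from the errors directory is left out:

```python
import re
from collections import Counter

def summarize_fdcs(fdc_text):
    """Tally 'Probe Id' values and leading AMQnnnn error codes from
    the concatenated text of one or more WebSphere MQ FDC files."""
    probes = Counter(re.findall(r"Probe Id\s*:-\s*(\S+)", fdc_text))
    errors = Counter(re.findall(r"Probe Description\s*:-\s*(AMQ\d+)",
                                fdc_text))
    return probes, errors

# Hypothetical excerpt in the header format shown above:
sample = ("| Probe Id            :- KN013001 |\n"
          "| Probe Description   :- AMQ6037: WebSphere MQ was unable |\n"
          "| Probe Id            :- KN013001 |\n")
probes, errors = summarize_fdcs(sample)
print(probes.most_common())  # [('KN013001', 2)]
```

A skew toward one probe ID (here KN013001 / AMQ6037) is exactly the kind of summary IBM support asks for alongside the raw FDCs.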
PeterPotkay
Posted: Thu Jun 17, 2010 7:36 pm Post subject:
Poobah
Joined: 15 May 2001 Posts: 7722
http://www.mqseries.net/phpBB2/viewtopic.php?t=47676
It's not unreasonable to expect that a new version of any software with lots of new features will need more memory. Newer servers typically have more memory than older servers of the same type, and software vendors know and assume this when weighing the pros and cons of increasing memory requirements. Sometimes that is taken to an extreme and results in lazy programming.
Any design that calls for 10 queue managers on one Windows server is questionable, and you are finding out why. 3 GB of memory to support 10 QMs is not a lot, unless they are not doing much work; and if they are not doing much work, there is probably no reason to have so many. 1 QM managing 100 queues will typically perform better than 10 QMs on the same server, each managing 10 queues.
Refusing to reevaluate a design just because it worked fine yesterday is a one-way ticket to eventual problems.
_________________
Peter Potkay
Keep Calm and MQ On