High Queue Depth AM Report
cicsprog |
Posted: Wed Sep 21, 2005 6:46 am    Post subject: High Queue Depth AM Report

Partisan
Joined: 27 Jan 2002 Posts: 347
I admin 65+ z/OS MQMs only. We have Omegamon XE, but we are not doing any historical reporting (no money for a centralized server to house this data, no MIPS, yada yada). I would like to produce a consolidated (i.e. one report) morning report of QLOCALs exceeding a QUEUE DEPTH of 500. I am looking for ideas.
One hare-brained scheme is to add a step (or steps) to the nightly MQM backups to read all the QLOCALs in the MQM and create MQ messages for QUEUEs exceeding the depth bogey, to be sent to a central queue for reporting. However, I don't see anything in the MQ manuals for the MQI that allows sequential retrieval of QUEUE names to do an MQINQ on (I looked at MQOPEN also). So the only thought I have is to add a step with CSQUTIL to produce a list of QLOCALs as input to a program that interrogates QUEUE DEPTH, or to have CSQUTIL deliver the QUEUE DEPTH and report from that file.
Any other ideas appreciated.
EddieA |
Posted: Wed Sep 21, 2005 8:20 am

Jedi
Joined: 28 Jun 2001 Posts: 2453 Location: Los Angeles
Quote:
I don't see anything in the MQ manuals for the MQI that allows sequential retrieval of QUEUE names to do an MQINQ on

You can send a command to the Command Server (plain text for v5, PCF for v6) and get the list back. Parsing the data returned by v5 is a bit of a pain, but it can be done.
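For the v6 PCF route, a C program along these lines could build an Inquire Q command and put it to the command server. This is a sketch only, not tested: MQA1 and MYSTATS.Q are placeholder names, and (if memory serves) on z/OS the PCF command header must be version 3 with type MQCFT_COMMAND_XR. Error handling is omitted.
Code:
/* Sketch: PCF Inquire Q asking for queue name and current depth. */
#include <string.h>
#include <cmqc.h>            /* MQI definitions */
#include <cmqcfc.h>          /* PCF definitions */

int main(void)
{
    MQHCONN  hConn;
    MQHOBJ   hObj;
    MQOD     od  = {MQOD_DEFAULT};
    MQMD     md  = {MQMD_DEFAULT};
    MQPMO    pmo = {MQPMO_DEFAULT};
    MQLONG   cc, rc;
    MQCHAR48 qmName = "MQA1";                  /* placeholder qmgr name */
    MQBYTE   buffer[256] = {0};
    MQLONG   offset;

    MQCFH  *cfh = (MQCFH *)buffer;
    MQCFST *cfst;
    MQCFIL *cfil;

    /* Command header: Inquire Q */
    cfh->Type           = MQCFT_COMMAND_XR;    /* plain MQCFT_COMMAND off z/OS */
    cfh->StrucLength    = MQCFH_STRUC_LENGTH;
    cfh->Version        = MQCFH_VERSION_3;
    cfh->Command        = MQCMD_INQUIRE_Q;
    cfh->MsgSeqNumber   = 1;
    cfh->Control        = MQCFC_LAST;
    cfh->ParameterCount = 2;
    offset = MQCFH_STRUC_LENGTH;

    /* Parameter 1: generic queue name "*" (padded to a 4-byte multiple) */
    cfst = (MQCFST *)(buffer + offset);
    cfst->Type           = MQCFT_STRING;
    cfst->StrucLength    = MQCFST_STRUC_LENGTH_FIXED + 4;
    cfst->Parameter      = MQCA_Q_NAME;
    cfst->CodedCharSetId = MQCCSI_DEFAULT;
    cfst->StringLength   = 1;
    memcpy(cfst->String, "*   ", 4);
    offset += cfst->StrucLength;

    /* Parameter 2: which attributes we want back */
    cfil = (MQCFIL *)(buffer + offset);
    cfil->Type        = MQCFT_INTEGER_LIST;
    cfil->StrucLength = MQCFIL_STRUC_LENGTH_FIXED + 2 * sizeof(MQLONG);
    cfil->Parameter   = MQIACF_Q_ATTRS;
    cfil->Count       = 2;
    cfil->Values[0]   = MQCA_Q_NAME;
    cfil->Values[1]   = MQIA_CURRENT_Q_DEPTH;
    offset += cfil->StrucLength;

    MQCONN(qmName, &hConn, &cc, &rc);

    strncpy(od.ObjectName, "SYSTEM.COMMAND.INPUT", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING, &hObj, &cc, &rc);

    memcpy(md.Format, MQFMT_ADMIN, MQ_FORMAT_LENGTH);  /* PCF request */
    md.MsgType = MQMT_REQUEST;
    strncpy(md.ReplyToQ, "MYSTATS.Q", MQ_Q_NAME_LENGTH);

    MQPUT(hConn, hObj, &md, &pmo, offset, buffer, &cc, &rc);

    MQCLOSE(hConn, &hObj, MQCO_NONE, &cc, &rc);
    MQDISC(&hConn, &cc, &rc);
    return 0;
}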
Cheers,
_________________
Eddie Atherton
IBM Certified Solution Developer - WebSphere Message Broker V6.1
IBM Certified Solution Developer - WebSphere Message Broker V7.0
jefflowrey |
Posted: Wed Sep 21, 2005 8:56 am

Grand Poobah
Joined: 16 Oct 2002 Posts: 19981
Yeah, for v5 the best thing to do is send "dis ql(*) curdepth" to the command server.
That will return the queue name and the current depth for every queue on the queue manager.
_________________
I am *not* the model of the modern major general.
RogerLacroix |
Posted: Wed Sep 21, 2005 9:16 am

Jedi Knight
Joined: 15 May 2001 Posts: 3264 Location: London, ON Canada
Hi,
This isn't pretty, but it will work.
(1) On one queue manager (MQA1) create a local queue (MYSTATS.Q) with a large Max Q Depth (500,000).
(2) Now on every other queue manager, create a remote queue pointing to MYSTATS.Q on MQA1, or create a queue manager alias pointing to MQA1.
(3) Now create a very simple program (COBOL/C/Rexx) to put a message to SYSTEM.COMMAND.INPUT with the following message data (see the sketch below):
Code:
display qlocal(*) curdepth

Set the MQMD fields:
- Format to MQSTR
- MsgType to Request
- Reply-To-Q to MYSTATS.Q
- Reply-To-QMgr to MQA1 (do not set if using a remote queue).
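A minimal C sketch of such a program, assuming the standard MQI header cmqc.h and the names from the steps above (MYSTATS.Q, MQA1); error handling is trimmed for readability:
Code:
#include <string.h>
#include <cmqc.h>

int main(void)
{
    MQHCONN  hConn;
    MQHOBJ   hObj;
    MQOD     od  = {MQOD_DEFAULT};
    MQMD     md  = {MQMD_DEFAULT};
    MQPMO    pmo = {MQPMO_DEFAULT};
    MQLONG   cc, rc;
    MQCHAR48 qmName = "";                     /* connect to the local qmgr */
    char     cmd[]  = "DISPLAY QLOCAL(*) CURDEPTH";

    MQCONN(qmName, &hConn, &cc, &rc);

    strncpy(od.ObjectName, "SYSTEM.COMMAND.INPUT", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING, &hObj, &cc, &rc);

    /* MQMD fields exactly as listed above */
    memcpy(md.Format, MQFMT_STRING, MQ_FORMAT_LENGTH);      /* "MQSTR   " */
    md.MsgType = MQMT_REQUEST;
    strncpy(md.ReplyToQ,    "MYSTATS.Q", MQ_Q_NAME_LENGTH);
    strncpy(md.ReplyToQMgr, "MQA1", MQ_Q_MGR_NAME_LENGTH);  /* omit if using a remote queue */

    MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(cmd), cmd, &cc, &rc);

    MQCLOSE(hConn, &hObj, MQCO_NONE, &cc, &rc);
    MQDISC(&hConn, &cc, &rc);
    return 0;
}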
At midnight, run this program against every single queue manager. You will then get a message for every local queue, from every queue manager, sent to your stats queue.
Note: You get 2 extra messages per command issued: a header message (CSQN205I) and a trailer message (CSQ9022I). You can discard these messages.
The only trick is knowing where each message came from (which queue manager sent it to your stats queue). You have 2 places to check in the message's MQMD: the Reply-To-QMgr field or the MsgId field. Both will contain the originating queue manager name.
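On the MQA1 side, a hypothetical reader along these lines could drain MYSTATS.Q each morning and tag every reply with the queue manager it came from (again a sketch, with error handling left out):
Code:
#include <stdio.h>
#include <string.h>
#include <cmqc.h>

int main(void)
{
    MQHCONN  hConn;
    MQHOBJ   hObj;
    MQOD     od  = {MQOD_DEFAULT};
    MQGMO    gmo = {MQGMO_DEFAULT};
    MQLONG   cc, rc, dataLen;
    MQCHAR   buffer[4096];
    MQCHAR48 qmName = "MQA1";

    MQCONN(qmName, &hConn, &cc, &rc);

    strncpy(od.ObjectName, "MYSTATS.Q", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_INPUT_AS_Q_DEF | MQOO_FAIL_IF_QUIESCING, &hObj, &cc, &rc);

    gmo.Options = MQGMO_NO_WAIT | MQGMO_CONVERT | MQGMO_FAIL_IF_QUIESCING;

    for (;;)
    {
        MQMD md = {MQMD_DEFAULT};             /* fresh MsgId/CorrelId per get */
        MQGET(hConn, hObj, &md, &gmo, sizeof(buffer), buffer, &dataLen, &cc, &rc);
        if (rc != MQRC_NONE)
            break;                            /* MQRC_NO_MSG_AVAILABLE = done */

        /* The CSQN205I header and CSQ9022I trailer messages mentioned
         * above could be filtered out here before reporting. */
        printf("%.48s : %.*s\n", md.ReplyToQMgr, (int)dataLen, buffer);
    }

    MQCLOSE(hConn, &hObj, MQCO_NONE, &cc, &rc);
    MQDISC(&hConn, &cc, &rc);
    return 0;
}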
Hope that helps.
Regards,
Roger Lacroix
_________________
Capitalware: Transforming tomorrow into today.
Connected to MQ!
Twitter
fjb_saper |
Posted: Wed Sep 21, 2005 3:31 pm

Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
We do it twice a day through QPasa and distribute it by email to the relevant user community.
The trigger is a crontab job that puts a message on our report trigger queue; it clears the queue after 1 min.
We have different rules, and a total of over 70 queues are checked across 3 different platforms (AIX, Solaris, z/OS). Our rule is to enter a queue into the message, with its depth, if the curdepth is over 0. You could just as well set it to 500.
The filtering is done via a Perl script. Of course, all the queues checked are being monitored by QPasa.
Works like a charm.
Ask your Omegamon support to help you set it up, under the motto "if QPasa can do it, show me how to do it with Omegamon"...
And by the way, this is instant monitoring and has nothing to do with history... Historical reporting involves reports over a period of time, not instant alerts...
Enjoy
hopsala |
Posted: Wed Sep 21, 2005 3:46 pm

Guardian
Joined: 24 Sep 2004 Posts: 960
I don't see how creating such a report every morning is a good solution; if you had a problem during the day, you would only know about it the day after...
So, another suggestion, which I find slightly better, in steps:
1. Set performance events with a high and a low threshold for every application queue.
2. If you wish to collect this auditing data somewhere other than your z/OS, delete the local SYSTEM.ADMIN.PERFM.EVENT queue and create a parallel remote queue that points to the queue manager you use to collect all the data. Alternatively, write the program in step 3 as a client (a better solution, I believe).
3. Write a small program (or download a SupportPac) that reads from this queue and reports any infractions (see the sketch after this list).
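A rough C sketch of step 3, under some assumptions: performance events are enabled (ALTER QMGR PERFMEV(ENABLED), plus QDPHIEV(ENABLED) and QDEPTHHI on each queue), the program reads whichever queue the events land on, and the queue name travels in the MQCA_BASE_Q_NAME parameter (the exact constant may differ between versions). Event messages are PCF, an MQCFH followed by parameter structures:
Code:
#include <stdio.h>
#include <string.h>
#include <cmqc.h>
#include <cmqcfc.h>

int main(void)
{
    MQHCONN  hConn;
    MQHOBJ   hObj;
    MQOD     od  = {MQOD_DEFAULT};
    MQGMO    gmo = {MQGMO_DEFAULT};
    MQLONG   cc, rc, dataLen, offset, i;
    MQBYTE   buffer[4096];
    MQCHAR48 qmName = "";                     /* default queue manager */

    MQCONN(qmName, &hConn, &cc, &rc);

    strncpy(od.ObjectName, "SYSTEM.ADMIN.PERFM.EVENT", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_INPUT_AS_Q_DEF | MQOO_FAIL_IF_QUIESCING, &hObj, &cc, &rc);

    gmo.Options      = MQGMO_WAIT | MQGMO_CONVERT | MQGMO_FAIL_IF_QUIESCING;
    gmo.WaitInterval = 30000;                 /* wait up to 30 seconds */

    for (;;)
    {
        MQMD md = {MQMD_DEFAULT};
        MQGET(hConn, hObj, &md, &gmo, sizeof(buffer), buffer, &dataLen, &cc, &rc);
        if (rc != MQRC_NONE)
            break;

        MQCFH *cfh = (MQCFH *)buffer;
        if (cfh->Reason != MQRC_Q_DEPTH_HIGH)
            continue;                         /* report only high-depth events */

        /* Walk the parameters; every PCF structure starts with Type and
         * StrucLength, so we can step through them generically. */
        offset = MQCFH_STRUC_LENGTH;
        for (i = 0; i < cfh->ParameterCount; i++)
        {
            MQCFST *p = (MQCFST *)(buffer + offset);
            if (p->Type == MQCFT_STRING && p->Parameter == MQCA_BASE_Q_NAME)
                printf("High depth event for queue %.*s\n",
                       (int)p->StringLength, p->String);
            offset += p->StrucLength;
        }
    }

    MQCLOSE(hConn, &hObj, MQCO_NONE, &cc, &rc);
    MQDISC(&hConn, &cc, &rc);
    return 0;
}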
I always find a trigger-oriented solution (push technology) to be better than a sampling solution (pull technology), for many understandable reasons.
You can also trigger this application on FIRST on the PERFM queue, thus getting an online report whenever some queue reaches high depth, and possibly send an SNMP trap to Omegamon (if it accepts them).
(P.S. Just a note: what you're talking about is not historical reporting, but online reporting...)