amiivas
Posted: Tue Jan 02, 2018 2:44 pm    Post subject: Running MQSC commands in a scheduled script
Hi All,
I am planning to run a few MQSC commands in a scheduled script run by a cron job every 5 seconds.
The script contains a few MQSC commands to capture statistics.
Will there be any impact on MQ performance if the scheduler runs every 5 seconds?
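To illustrate what I mean, here is a rough sketch of the wrapper I have in mind (the queue manager name, paths and the DIS QMSTATUS placeholder are just examples). Since cron only fires at one-minute granularity, the 5-second cadence would come from an inner loop:
Code:
#!/bin/sh
# capture_mq_stats.sh -- hypothetical wrapper; queue manager name and paths are placeholders.
# cron fires at most once per minute, so a 5-second cadence needs an inner loop.
# Example crontab entry:  * * * * * /opt/mqadmin/capture_mq_stats.sh
QMGR=QM1
OUT=/var/log/mq/mq_stats.log

i=0
while [ "$i" -lt 12 ]; do                        # 12 iterations x 5 s = one minute
    date '+%Y-%m-%d %H:%M:%S' >> "$OUT"
    # the actual stats MQSC commands would go here
    echo "DIS QMSTATUS" | runmqsc "$QMGR" >> "$OUT"
    sleep 5
    i=$((i + 1))
done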
bruce2359
Posted: Tue Jan 02, 2018 3:43 pm
Yes.
What commands exactly?
Vitor
Posted: Wed Jan 03, 2018 6:59 am    Post subject: Re: Running MQSC commands in a scheduled script
amiivas wrote:
The script contains a few MQSC commands to capture statistics.
This also begs the question of why you're using a script rather than having the queue manager generate stats for you.
amiivas
Posted: Wed Jan 03, 2018 10:12 am
bruce2359 wrote:
What commands exactly?
The commands that will be used are:
dis ql(*) where(CURDEPTH ne 0)
dis chstatus(*)
and dis qs for some queues.
This will be done to build a historical chart of connections, threads and queue depth for our performance analysis.
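For concreteness, a rough sketch of how that would look in the script (queue manager name, queue name and log path are placeholders):
Code:
#!/bin/sh
# Run the three status commands in one runmqsc call and append the raw
# output to a log file for later formatting.
QMGR=QM1                            # placeholder queue manager name
OUT=/var/log/mq/raw_status.log      # placeholder output file

runmqsc "$QMGR" >> "$OUT" <<'EOF'
DIS QL(*) WHERE(CURDEPTH NE 0)
DIS CHSTATUS(*)
DIS QS(APP.QUEUE.1) CURDEPTH IPPROCS OPPROCS
EOF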
Vitor wrote:
This also begs the question of why you're using a script rather than having the queue manager generate stats for you.
I agree we can collect queue manager, queue and channel statistics from the queue manager statistics and accounting options, but here are the reasons I am not using them:
1. Too much unnecessary data for our current requirement.
2. The messages have to be parsed into a readable format.
3. The logs will be saved in a Splunk database, so they should be small and precise, with a specific log format for creating dashboards.
4. A definite impact on performance.
With scripting, I have more control over what data is captured, when it is captured and what format I need [I am using a small Java program for formatting].
The collection interval is open, depending on the impact on performance.
I am open to suggestions, and thank you for your replies on the topic so far.
Thanks.
Vitor
Posted: Wed Jan 03, 2018 10:28 am
amiivas wrote:
bruce2359 wrote:
What commands exactly?
The commands that will be used are:
dis ql(*) where(CURDEPTH ne 0)
dis chstatus(*)
and dis qs for some queues.
This will be done to build a historical chart of connections, threads and queue depth for our performance analysis.
Doing this every 5 seconds will absolutely affect performance.
amiivas wrote:
Vitor wrote:
This also begs the question of why you're using a script rather than having the queue manager generate stats for you.
I agree we can collect queue manager, queue and channel statistics from the queue manager statistics and accounting options, but here are the reasons I am not using them:
Ok.......
amiivas wrote:
1. Too much unnecessary data for our current requirement.
And what does this cost you?
amiivas wrote:
2. The messages have to be parsed into a readable format.
There are utilities and sample code (amqsmon et al) for this.
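For example, a typical invocation of the amqsmon sample looks something like this (queue manager name and file paths are placeholders; check the options available at your MQ version):
Code:
# Format statistics messages from SYSTEM.ADMIN.STATISTICS.QUEUE into readable
# text that a downstream script can trim down for Splunk.
amqsmon -m QM1 -t statistics > /tmp/mq_statistics.txt

# Accounting messages can be processed the same way.
amqsmon -m QM1 -t accounting > /tmp/mq_accounting.txt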
amiivas wrote:
3. The logs will be saved in a Splunk database, so they should be small and precise, with a specific log format for creating dashboards.
So you extract only the data you need and put that into Splunk. In 3-6 months, when someone wants more data added to the dashboard (and they will, they will), the capture mechanism is already in place and you only need to extend the extraction.
amiivas wrote:
4. A definite impact on performance.
I put it to you that a mechanism specifically designed for this purpose by a room full of clever people at IBM will not unduly affect performance. At a minimum, allowing the queue manager to capture and output the statistics at a convenient point in its internal processing will be less disruptive than whacking it with at least 3 (and potentially a lot more) administrative commands which it then has to stop and process every 5 seconds.
(5 seconds? Really? You really, really need a dashboard that updates every 5 seconds? Or is that just what some high level management type thought sounded like a nice number? How volatile is this data on your site?)
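For reference, the built-in collection is switched on with a handful of MQSC attributes, along these lines (the 300-second interval and object names are illustrative; check the attributes available at your MQ version):
Code:
# Enable queue manager, MQI, queue and channel statistics at a 5-minute interval.
# Statistics messages arrive on SYSTEM.ADMIN.STATISTICS.QUEUE.
runmqsc QM1 <<'EOF'
ALTER QMGR STATINT(300) STATMQI(ON) STATQ(ON) STATCHL(MEDIUM)
ALTER QLOCAL(APP.QUEUE.1) STATQ(QMGR)
EOF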
amiivas wrote:
With scripting, I have more control over what data is captured, when it is captured and what format I need [I am using a small Java program for formatting].
Explain to me how this is different from using a small Java program to extract data from the built-in monitoring.
Also, how is this script passing the output of the MQSC commands to your Java? If it's a file, then that's a lot of I/O you've just added to the server, which will in turn impact performance. Especially at that collection speed.
amiivas
Posted: Wed Jan 03, 2018 11:53 am
5 seconds was really a
Quote:
high level management type thought sounded like a nice number
which we are never going to implement. At present we are thinking of collecting every 2 or 5 minutes, based on the size of the logs and the performance impact.
Regarding the I/O operation:
The Java program is only meant to read the output of the command created for that particular interval, and it then appends to an already existing log file. This file is then streamed to the log servers for Splunk to index.
This is going to remain the same irrespective of which method we use to get the stats, as the output has to be formatted.
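To make the formatting step concrete, here is a rough shell/awk equivalent of what the formatter does for the queue status output (attribute names and the log path are placeholders; the real program is in Java):
Code:
#!/bin/sh
# Turn "DIS QS" output into pipe-delimited lines (queue|curdepth|ipprocs|opprocs)
# ready to be appended to the log that is streamed to Splunk.
QMGR=QM1                                        # placeholder queue manager name

echo "DIS QSTATUS(*) CURDEPTH IPPROCS OPPROCS" | runmqsc "$QMGR" |
awk '
  # runmqsc prints attributes as NAME(VALUE); a new QUEUE( starts a new record.
  {
    while (match($0, /[A-Z]+\([^)]*\)/)) {
      pair = substr($0, RSTART, RLENGTH)
      $0   = substr($0, RSTART + RLENGTH)
      p    = index(pair, "(")
      name = substr(pair, 1, p - 1)
      val  = substr(pair, p + 1, length(pair) - p - 1)
      if (name == "QUEUE") {
        if (q != "") printf "%s|%s|%s|%s\n", q, d, i, o
        q = val; d = ""; i = ""; o = ""
      }
      if (name == "CURDEPTH") d = val
      if (name == "IPPROCS")  i = val
      if (name == "OPPROCS")  o = val
    }
  }
  END { if (q != "") printf "%s|%s|%s|%s\n", q, d, i, o }
' >> /var/log/mq/queue_stats.log                # placeholder log path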
Every solution has its pros and cons, so here it is:
Option 1: running the above MQSC commands every 5 minutes.
Option 2: collecting statistics information for queues and channels every 5 minutes.
Assumptions:
1. Both options will require some formatting to be done.
2. Option 2 will require two levels of formatting.
Also keep in mind the amount of data that needs to be collected for this exercise, and whether, in production with the queue manager processing a high load, externally induced probing is better or worse than an additional internal task for the queue manager to save the statistics.
While voting, please consider the complete solution and requirement. I am somehow tending towards Option 1.
Vitor
Posted: Wed Jan 03, 2018 12:00 pm
amiivas wrote:
Assumptions: Both options will require some formatting to be done. Option 2 will require two levels of formatting.
If Option 2 requires two levels of formatting (parse the stats message and format the Splunk log), how does that differ from Option 1 (parse the text output and format the Splunk log)?
Not seeing the difference here.
amiivas
Posted: Wed Jan 03, 2018 12:09 pm
Option 2: the two levels of parsing are:
1. First, amqsmon is used to get the output in a readable format in a file.
2. A Java program then formats it into a pipe-delimited format.
Option 1: one level of parsing: command output to a file [no parsing or formatting, just the plain output] --> parse it into the Splunk log.
One benefit of Option 2 which I can think of now is that parsing by amqsmon is required only 2-3 times a day to clear the backlog of messages from the queue.
Vitor
Posted: Wed Jan 03, 2018 12:19 pm
Humph
Well, I vote Option 2, call upon the assembled multitude to cast their votes, and point out that at the end of the day it's your system and you have to build what you can comfortably support.
amiivas
Posted: Wed Jan 03, 2018 1:39 pm
I agree with your point, and I will certainly give Option 2 a try as well. Thank you for your comments, which are always to the point and effective.
amiivas
Posted: Wed Jan 03, 2018 1:49 pm
One more point I want to bring into the discussion is the way the statistics are collected.
Say we enable statistics at 5-minute intervals: the queue manager has to work internally on saving the statistics for those 5 minutes and then put them to the system queue, so there will always be some load on the queue manager.
But if we run an external command, the command gets the data for that particular point in time, and the load on the queue manager ends when that command completes.
Do you think this changes anything, considering the small amount of statistics that is required?
Vitor
Posted: Thu Jan 04, 2018 5:41 am
amiivas wrote:
Do you think this changes anything, considering the small amount of statistics that is required?
I continue to vote no to Option 1. The statistics collection is cycled into the queue manager's processing and is part of an IBM-optimized process. Any MQSC command acts as an interrupt where the command processor has to spin up, read the message from the queue (remember that runmqsc is just a human-friendly way of adding PCF messages to the command queue), do what is required (it has no way of knowing or differentiating between a request for status and a request to define a new queue), then squirt out the output.
gbaddeley
Posted: Sun Jan 07, 2018 2:28 pm
amiivas wrote:
One more point I want to bring into the discussion is the way the statistics are collected.
Say we enable statistics at 5-minute intervals: the queue manager has to work internally on saving the statistics for those 5 minutes and then put them to the system queue, so there will always be some load on the queue manager.
The overhead of collection is quite small. We collect MQ queue statistics for 10,000+ queues every 5 minutes on a very busy queue manager, and there is no noticeable increase in CPU, load or storage.