Reading Monitoring Events and Writing to File(s)?
shashivarungupta
Posted: Tue Aug 30, 2016 5:42 pm    Post subject: Reading Monitoring Events and Writing to File(s)?
Grand Master
Joined: 24 Feb 2009    Posts: 1343    Location: Floating in space on a round rock.
Hi,
We have Message Broker applications (mostly SOAP- and MQ-based) with multiple message/sub-flows.
Monitoring events are enabled on selected built-in nodes for logging and auditing purposes.
The monitoring events are published to a topic and arrive on an MQ queue.
Which utility or tool can read those messages from the MQ queue and write them to file(s), without parsing them, and without affecting the performance of the existing system?
Performance is a broad term, but my point is that writing messages (300 per second) to file(s) shouldn't eat up CPU; the impact on overall system performance should be as small as possible.
Options:
1. Use custom code within Broker to pick up the messages and write them to file. Which parser should be used to write the messages to file(s) in a readable format?
2. Use a custom or readily available utility (which one?) to write the messages to file, without parsing them, in a readable format?
Kindly suggest. Thanks in advance.
Products:
WMB v8.0.0.4
WMQ v7.0.1.9
_________________
*Life will beat you down, you need to decide to fight back or leave it.
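The drain-and-append loop that option 2 describes can be sketched as below. This is a minimal, hypothetical sketch, not any product's utility: `get_next_message` stands in for a destructive MQ get (what the amqsget sample or an MQI call would perform) and is simulated here with an in-memory deque. Since the replies point at file I/O as the real cost, the sketch batches messages and flushes once per batch rather than once per message.

```python
import os
import tempfile
from collections import deque

def get_next_message(source):
    """Stand-in for a destructive MQ get (e.g. MQGET via the amqsget
    sample). Returns None when the queue is empty."""
    return source.popleft() if source else None

def drain_to_file(source, path, batch_size=100):
    """Append raw messages to a file, newline-delimited, without parsing.
    Batching the writes keeps the number of I/O system calls low
    relative to a 300 msg/s arrival rate."""
    written = 0
    with open(path, "ab") as f:
        batch = []
        while (msg := get_next_message(source)) is not None:
            batch.append(msg)
            if len(batch) >= batch_size:
                f.write(b"\n".join(batch) + b"\n")
                f.flush()
                written += len(batch)
                batch.clear()
        if batch:  # final partial batch
            f.write(b"\n".join(batch) + b"\n")
            written += len(batch)
    return written

# Simulated queue of monitoring-event payloads (already readable XML)
events = deque(b"<event id='%d'/>" % i for i in range(250))
path = os.path.join(tempfile.mkdtemp(), "events.log")
count = drain_to_file(events, path)
```

Even with batching, every byte still crosses the queue manager's log and the filesystem, which is the I/O double-hit the replies below warn about.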
|
Back to top |
|
 |
smdavies99
Posted: Tue Aug 30, 2016 10:07 pm    Post subject: Re: Reading Monitoring Events and Writing to File(s)?
Jedi Council
Joined: 10 Feb 2003    Posts: 6076    Location: Somewhere over the Rainbow this side of Never-never land.
shashivarungupta wrote:
    Performance is a broad term, but my point is that writing messages (300 per second) to file(s) shouldn't eat up CPU; the impact on overall system performance should be as small as possible.

You are dreaming, aren't you?
At one place I worked we had an event monitoring system in place. We only used it for newly deployed flows, or where we were having problems, precisely to avoid a CPU problem.
In my current role we log everything in and out, but most of the 'in' logging is turned off in a stable environment. A 2-core E3 runs our plants fine, but we don't have 300 messages per second.
IMHO, no solution will write that amount of data to file without CPU loading.
By the way, do you have people to actually sift through those huge files?
How long does it take to find an issue?
Why not use a DB for the job? Sorting through lots of data is what they were designed for.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
|
Back to top |
|
 |
Vitor
Posted: Wed Aug 31, 2016 5:29 am    Post subject: Re: Reading Monitoring Events and Writing to File(s)?
Grand High Poobah
Joined: 11 Nov 2005    Posts: 26093    Location: Texas, USA
shashivarungupta wrote:
    Which utility or tool can read those messages from the MQ queue and write them to file(s), without parsing them, and without affecting the performance of the existing system?

None. A utility that reads without parsing is the amqsget sample, but it will still have an impact on the system.

shashivarungupta wrote:
    Performance is a broad term, but my point is that writing messages (300 per second) to file(s) shouldn't eat up CPU; the impact on overall system performance should be as small as possible.

Probably not CPU, but the I/O hit will be significant, both from the queue manager and from the utility doing the file writes.

shashivarungupta wrote:
    Options:
    1. Use custom code within Broker to pick up the messages and write them to file. Which parser should be used to write the messages to file(s) in a readable format?
    2. Use a custom or readily available utility (which one?) to write the messages to file, without parsing them, in a readable format?

The messages are already in a readable format, or can be. XML is perfectly readable.

shashivarungupta wrote:
    Kindly suggest.

Use a database, as my worthy associate suggests. Not only does that offload the I/O to the database server (with all the caching, buffering and other assorted goodies), but even if you could inexpensively produce a file as you describe, how would you use it? What human is going to read a file which gets 300 new lines of data per second? The only practical way would be a search facility, which brings us right back to a database.
_________________
Honesty is the best policy.
Insanity is the best defence.
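The search argument above can be made concrete with a small sketch. SQLite (Python stdlib) is used here purely as a stand-in for whatever database server would actually host the events; the table and column names are illustrative, not from any product.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE monitoring_event (
        msg_id     TEXT,
        flow_name  TEXT,
        event_time TEXT,
        payload    TEXT
    )""")
# The index on msg_id is what makes "find me this message" cheap;
# grepping a flat file growing at 300 lines/s has no equivalent.
conn.execute("CREATE INDEX ix_event_msg ON monitoring_event (msg_id)")

# Simulated batch of monitoring events, inserted in one round trip
rows = [(f"ID{i:06d}", "OrderFlow", "2016-08-30T17:42:00", f"<event n='{i}'/>")
        for i in range(1000)]
conn.executemany("INSERT INTO monitoring_event VALUES (?,?,?,?)", rows)

# The search facility Vitor describes: one indexed lookup
hit = conn.execute(
    "SELECT payload FROM monitoring_event WHERE msg_id = ?",
    ("ID000042",)).fetchone()
```

The same lookup against a multi-gigabyte flat file means a full scan per query, which is exactly the point being made.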
|
Back to top |
|
 |
Craig B
Posted: Mon Sep 19, 2016 9:41 pm    Post subject:
Partisan
Joined: 18 Jun 2003    Posts: 316    Location: UK
If you do consider the database solution (as the previous responses suggest), you could also look at the Record/Replay functionality built into WMB/IIB from V8 onwards. You can configure one of the integration servers (execution groups) as a recorder: it makes the subscriptions for you, receives the monitoring events, and inserts them into the nominated database. You can then use the WebUI to view the stored data, or use the REST/Integration API to get at it programmatically. If you are concerned about I/O performance, this has the advantage of being built into a process you already have running on the system, one that might otherwise be idly waiting for work. You can also configure this feature to work across multiple integration nodes. Information can be found in the documentation:
http://www.ibm.com/support/knowledgecenter/SSMKHH_10.0.0/com.ibm.etools.mft.doc/bj23550_.htm
_________________
Regards
Craig
|
Back to top |
|
 |
smdavies99
Posted: Mon Sep 19, 2016 10:33 pm    Post subject:
Jedi Council
Joined: 10 Feb 2003    Posts: 6076    Location: Somewhere over the Rainbow this side of Never-never land.
Craig B wrote:
    If you do consider the database solution (as the previous responses suggest), you could also look at the Record/Replay functionality built into WMB/IIB from V8 onwards.

Craig does offer a solution, however I firmly believe that rolling your own can turn the information store into a valuable asset to the system.
We add all sorts of information to our logs, including:
- on all records, the EG name, flow label (name) and event time
- on input, the message transport (MQ, TCPIP, HTTP etc.), the message ID, and the original message in parsed form
- on output, the message ID (which links back to the input), plus specific fields (indexed) that allow the support people to see what happened, and when, to a particular entity.
It gives us the perfect answer to other people who point the finger at the ESB/Broker. In other words, non-repudiation: we know what was sent where and when, and we can prove it.
By enhancing the logging we have a valuable tool.
In my industry we don't get called upon to do replays (due to time sensitivity), but the original (in and out) messages are there if needed. In the last 4.5 years we have only had to do that once, and that was to help out another system that went TITSUP at a critical time.
YMMV (and probably will)
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
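The field set listed above translates naturally into a schema along these lines. This is a hypothetical sketch (SQLite standing in for the real database, names invented for illustration); the `input_msg_id` column on the output row provides the link back to the input message that makes the "we can prove what we sent" claim work.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE flow_log (
        event_time   TEXT,
        eg_name      TEXT,    -- execution group name
        flow_label   TEXT,
        direction    TEXT,    -- 'IN' or 'OUT'
        transport    TEXT,    -- MQ, TCPIP, HTTP etc. (IN rows only)
        msg_id       TEXT,
        input_msg_id TEXT,    -- OUT rows: links back to the IN row
        payload      TEXT
    );
    -- indexed lookup fields are what let support staff trace an entity
    CREATE INDEX ix_log_msg ON flow_log (msg_id);
""")
conn.execute("INSERT INTO flow_log VALUES (?,?,?,?,?,?,?,?)",
             ("2016-09-19T22:33:00", "EG1", "BillingFlow", "IN",
              "MQ", "MSG1", None, "<bill/>"))
conn.execute("INSERT INTO flow_log VALUES (?,?,?,?,?,?,?,?)",
             ("2016-09-19T22:33:01", "EG1", "BillingFlow", "OUT",
              None, "MSG2", "MSG1", "<billOut/>"))

# "What did we send, and for which input?" -- the non-repudiation query
pair = conn.execute("""
    SELECT i.payload, o.payload
    FROM flow_log o JOIN flow_log i ON o.input_msg_id = i.msg_id
    WHERE o.direction = 'OUT'""").fetchone()
```

Keeping both payloads means a replay remains possible even if, as above, it is only ever needed once in 4.5 years.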