Indexing Monitoring (record and replay) with ElasticSearch
ruimadaleno |
Posted: Tue May 10, 2016 1:28 am Post subject: Indexing Monitoring (record and replay) with ElasticSearch
Master
Joined: 08 May 2014  Posts: 274
Hi all,
We are running Broker 8.0.0.6 on Windows and using record and replay monitoring to gather log info.
We have a standard in place: each deployed msgflow records a set of events depending on the node used (for example, for a SOAPInput node the transaction start is recorded and the payload is included).
Monitoring info is recorded in a database table, and it has proven valuable in several troubleshooting scenarios. Of course, this database has regular, automated "cleaning procedures".
We are now evaluating ELK (Elasticsearch + Logstash + Kibana) and trying to build a proof of concept that indexes all monitoring events emitted by all message flows and lets us search through them. The goal is to provide a "Google search" over the monitoring events emitted by flows, plus some dashboards (using Kibana).
The first step is to get the event info into Logstash. Right now the events are being recorded to the database, so the question is:
1) Should we keep the events recorded in the database and have Logstash read the events from the database tables on a regular basis?
2) Can we configure dataCaptureStore/source/destination to write the events to the database AND the filesystem?
3) Other options?
Best regards,
Rui Madaleno
stoney |
Posted: Tue May 10, 2016 4:08 am Post subject:
Centurion
Joined: 03 Apr 2013  Posts: 140
Quote:
1) Should we keep the events recorded in the database and have Logstash read the events from the database tables on a regular basis?
Logstash has a JDBC input to slurp data from a database on a regular basis, so this would work fine.
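A rough, untested sketch of what that pipeline could look like - the driver, connection string, and tracking column here are assumptions for a DB2-hosted monitoring store, so adjust them to your own schema:
Code:
input {
  jdbc {
    # Driver and URL assume DB2; swap these for whatever hosts your monitoring DB
    jdbc_driver_library => "/opt/ibm/db2jcc4.jar"
    jdbc_driver_class => "com.ibm.db2.jcc.DB2Driver"
    jdbc_connection_string => "jdbc:db2://dbhost:50000/MONDB"
    jdbc_user => "db2user"
    jdbc_password => "secret"
    # Poll every 5 minutes; :sql_last_value holds the highest key already fetched
    schedule => "*/5 * * * *"
    statement => "SELECT * FROM WMB_MSGS WHERE WMB_MSGKEY > :sql_last_value ORDER BY WMB_MSGKEY"
    use_column_value => true
    tracking_column => "wmb_msgkey"   # hypothetical ascending key column
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "iib-monitoring"
  }
}

Using a tracking column means each scheduled run picks up only the new rows instead of re-reading the whole table.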
Quote:
2) Can we configure dataCaptureStore/source/destination to write the events to the database AND the filesystem?
Record and replay cannot write to a file system, so that option is not available to you.
Quote:
3) Other options?
Monitoring events are published over MQ topics.
Record and replay sets up MQ managed subscriptions that put the messages onto a queue, which the recorder then consumes and inserts into the database.
You could set up something else to consume these MQ publications and send them into Logstash.
For example, you could deploy a message flow that has an MQInput and then a TCPIPClientOutput to write to Logstash's TCP input.
Logstash doesn't have an MQ input as far as I can see, but you could try writing one.
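If you go the TCPIPClientOutput route, the Logstash side could be as simple as this (untested sketch; the port number and the one-JSON-document-per-line framing are assumptions - match whatever the flow actually sends):
Code:
input {
  tcp {
    port => 5043              # arbitrary; must match the TCPIPClientOutput node
    codec => "json_lines"     # assumes the flow emits one JSON event per line
  }
}

The monitoring events come off the topic as XML, so either convert them to JSON in the flow before sending, or receive them as plain lines and parse them in Logstash with the xml filter.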
ruimadaleno |
Posted: Fri May 13, 2016 6:40 am Post subject:
Master
Joined: 08 May 2014  Posts: 274
Hi Stoney (and other readers of this thread),
I have successfully set up the Logstash JDBC input to get the events from the monitoring tables (later I will think about a way to make this extraction work automagically on a regular basis) and dumped a few thousand monitoring events into Elasticsearch.
Now it's time to build some graphs, a proof of concept to understand whether the combination of Logstash + Elasticsearch + Kibana can deliver some value for broker log search/analysis.
I have decided not to write a dedicated msgflow to read events from the topic and dump them into files. After some investigation I found that Logstash is a better tool for this job (also, we save processing power for the "real" msgflows).
Right now the challenge is to correlate events; an example makes the issue clearer.
We have a lot of msgflows deployed using SOAP* nodes, and we have configured IIB to record some "interesting events" like transactionStart, transactionEnd, etc. This way we can measure the approximate time a certain msgflow takes to deliver a response (roughly the time from SoapInput.transactionStart to SoapReply.terminalIn):
SoapInput.transactionStart, datetimeA, eventid123
SoapReply.terminalIn, datetimeB, eventid123
SoapInput.transactionStart, datetimeC, eventid124
SoapReply.terminalIn, datetimeD, eventid124
SoapInput.transactionStart, datetimeE, eventid125
SoapReply.terminalIn, datetimeF, eventid125
So, in Elasticsearch, how do we correlate these events? How do we tell Elasticsearch that a new field named "duration" should be calculated as (datetimeB - datetimeA) for every unique eventid?
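One idea I am exploring is to compute the duration at ingest time instead of inside Elasticsearch, using Logstash's aggregate filter, which correlates a start and an end event by a shared key. A rough, untested sketch (the field names event_name, eventid and ts are placeholders for the real monitoring columns; the syntax is for the Logstash 2.x event API):
Code:
filter {
  if [event_name] == "SoapInput.transactionStart" {
    aggregate {
      task_id => "%{eventid}"                  # correlate start/end by eventid
      code => "map['start'] = event['ts'].to_f"
      map_action => "create"
    }
  }
  if [event_name] == "SoapReply.terminalIn" {
    aggregate {
      task_id => "%{eventid}"
      code => "event['duration'] = event['ts'].to_f - map['start']"
      map_action => "update"
      end_of_task => true                      # drop the map entry once matched
      timeout => 120                           # seconds to keep an unmatched start
    }
  }
}

Note that the aggregate filter only works with a single pipeline worker (-w 1), since the start and end events must pass through the same filter instance in order.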
_________________
Best regards,
Rui Madaleno
manoj5007 |
Posted: Fri May 13, 2016 6:31 pm Post subject:
Acolyte
Joined: 15 May 2013  Posts: 64
On a lighter note, how efficient is record and replay?
Is it able to register the events of all transactions successfully? And how good is the performance of the Web UI? When I tried it earlier, the Web UI performance was not up to the mark.
I will be happy to hear if you have done any extra configuration for better performance.
ruimadaleno |
Posted: Mon May 16, 2016 7:55 am Post subject:
Master
Joined: 08 May 2014  Posts: 274
manoj5007 wrote:
On a lighter note, how efficient is record and replay?
Is it able to register the events of all transactions successfully? And how good is the performance of the Web UI? When I tried it earlier, the Web UI performance was not up to the mark.
I will be happy to hear if you have done any extra configuration for better performance.
On the server side, record and replay is very efficient. Events emitted by flows are published to a topic and land on a queue; this is done asynchronously, so message flows don't have to wait for the event to be written to the database. Later, a dataCaptureStore/source is used to write the info to the monitoring database.
On the Web UI side there are problems.
This is one of the drivers for looking at Elasticsearch's capabilities. In a production environment the monitoring tables can easily accumulate many thousands of events, and the first screen in the Web UI -> data capture store just issues a "SELECT * FROM WMB_MSGS", causing a full table scan on a potentially very big table.
We are using Broker 8.0.0.6. Maybe higher versions have better Web UI performance?
_________________
Best regards,
Rui Madaleno
mqjeff |
Posted: Mon May 16, 2016 8:41 am Post subject:
Grand Master
Joined: 25 Jun 2008  Posts: 17447
ruimadaleno wrote:
We are using Broker 8.0.0.6. Maybe higher versions have better Web UI performance?
Probably.
I'd suggest it might be easier to find a JMS adapter for Logstash to populate it instead of writing to a database and then selecting back out.
The JMS adapter could directly subscribe to the published statistics/event messages and ship them to the Logstash server without passing through an additional system.
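There is a community logstash-input-jms plugin that should be able to do this; a minimal, untested sketch (the YAML file path, its section name, and the broker/flow names in the topic string are all placeholders):
Code:
input {
  jms {
    # Connection details (factory class, jars, host, channel) live in the YAML file;
    # for WMQ that would be an MQ JMS connection factory plus the MQ client jars.
    yaml_file => "/etc/logstash/jms.yml"
    yaml_section => "wmq"
    pub_sub => true           # subscribe to a topic rather than read a queue
    destination => "$SYS/Broker/MYBROKER/Monitoring/default/MYFLOW"
  }
}

That subscribes directly to the monitoring topic, so the events never need to touch the record and replay database at all.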
Correlating the event messages with each other is a separate question. How to do that in general should maybe be discussed in the Record/Replay documentation; how to do it with ELK is really a question for the ELK community.
_________________
chmod -R ugo-wx /