Profiling Approaches
goffinf (Chevalier, joined 05 Nov 2005, 401 posts)
Posted: Sun Jan 31, 2010 7:05 am    Post subject: Profiling Approaches
Environment:
Broker version 6.1.0.3
MQ v6
Platform AIX
I have been asked to look at a number of flows that are experiencing some performance and scalability issues, and the occasional unexplained crash!
There are a number of fairly apparent areas of concern, but before I go stamping around changing this, that and the other, I want to establish the current set of flows and their resource-use profile as a baseline, then go from there, making incremental changes and re-profiling to assess the impact of each change.
I am certainly no expert here so I wanted to know what people typically use to profile both the flows and the resources used (at least memory, cpu, io, threads).
Since this is a new subject to me I'm not even sure yet what tooling and other resources we have available either as part of Broker, MQ or platform tools.
Well, obviously in Broker we have flow statistics (flowstats) and tracing, but I can't say I'm certain which might prove most useful. We *may* have some Tivoli agents (not sure which yet), and in the past we have used some nmon scripts provided by IBM. I had a scoot through the SupportPacs yesterday to see what there might be to help both 'drive' testing and harvest results. Obviously, the performance reports for my environment are going to be worth reading through, along with a few papers from people like Tim Dunn, but beyond that I need to be able to understand the specifics of my environment and my flows.
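For the flowstats side, a minimal command-line sketch of turning snapshot statistics on for a single flow and reading the results back out; the broker name BRK1, execution group default and flow name MyFlow are hypothetical, so substitute your own:

```shell
# Hypothetical names throughout (BRK1, default, MyFlow) - substitute your own.
# Enable snapshot statistics for one flow, written to the user trace:
mqsichangeflowstats BRK1 -s -e default -f MyFlow -c active -n basic -t basic -o usertrace

# Check what is currently being collected:
mqsireportflowstats BRK1 -s -e default -f MyFlow

# Pull the snapshot records out of the user trace and format them:
mqsireadlog BRK1 -u -e default -o flowstats.xml
mqsiformatlog -i flowstats.xml -o flowstats.txt
```

The same command with -a instead of -s drives archive statistics for longer soak runs, and -c inactive turns collection off again when you are done, since collection itself has an overhead.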
I will have other people from our test teams to assist me (they are not Broker experts), but I want to get a basic feel first for the things I should be looking at (flows, machine resources, queues, the broker database, Java settings, etc.) and how best to make sense of the data gathered, so that I can propose some changes and then measure their effect. I know enough to understand that 'guessing' is OK as a starting point, but no more than that.
In some cases I certainly will want to test the flows under load and for reasonable periods of time (there is some suggestion of memory leakage for example, which occurs gradually but has an increasing effect). That might mean that there is a lot of data gathered (and I need to understand the overhead of that processing too).
I would really welcome comments from anyone who does or has done this kind of work before in terms of their approach and the tooling support.
Just by way of a simple résumé of the flows under scrutiny:
The flows are used for relatively high throughput (circa 15 messages/second) and make numerous calls out to external [web] services, both within and outside the organisation. They don't [currently] make a great deal of use of MQ (although that may change). The logic contained is of moderate to high complexity. Some flows contain custom Java.
Many thanks, all comments welcome.
Regards
Fraser.
fjb_saper (Grand High Poobah, joined 18 Nov 2003, 20756 posts, LI, NY)
Posted: Sun Jan 31, 2010 2:36 pm
Dear Fraser,
One piece of due diligence I would suggest is to search IBM's site and this site for material on performance. Check the Info Center as well. There is a paper out there (I don't have the URL handy) that talks about how to program the transformation of big messages in order to minimise memory usage and maximise speed.
You did not say anything about Java Compute Nodes. If you use them, make sure you release memory.
15 msg per second => 900 msg/min. How many brokers / instances of the flow do you have handling this? What is the maximum time you can afford on those (timeout)? Think about the implications of any outside influence (DB speed) moving your flow time from 200 ms to 400 ms... with that, scaling takes on a completely different meaning...
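That arithmetic can be pushed one step further with Little's Law (concurrent instances needed ≈ arrival rate × service time), which is a quick way to see why a doubled flow time changes the scaling picture. A rough sketch; the 15 msg/s figure is from the post, and the service times are the 200 ms and 400 ms examples:

```shell
# Little's Law sketch: instances needed ~= arrival rate (msg/s) x service time (s)
rate=15   # messages per second, from the post
for svc_ms in 200 400; do
  awk -v r="$rate" -v s="$svc_ms" \
    'BEGIN { printf "service time %d ms -> ~%.0f concurrent flow instances\n", s, r * s / 1000 }'
done
```

So at 200 ms per message roughly 3 instances need to be busy at once to keep up; at 400 ms, roughly 6, before queues start to build.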
Have fun
_________________
MQ & Broker admin
goffinf (Chevalier, joined 05 Nov 2005, 401 posts)
Posted: Mon Feb 01, 2010 1:29 am
Thanks for that, good points all.
And yes, there is a good amount of Java used, both as JavaCompute nodes and a few custom nodes (we are looking to reduce these if possible).
What I (think I) need at this stage is a way to measure the performance of the flows and the resources used by them, and a way to consistently record those results, so that when we do get around to making changes we have a solid way of telling what the effects are (positive and negative). Intuitively, I expect to be in the business of trading off some aspects of the non-functional requirements against others, and indeed against preferred development practice.
But before I get carried away (again), are there any suggestions about how best to gather and record metrics, and about approaches/tooling to drive performance testing and tuning?
Thanks
Fraser.
elvis_gn (Padawan, joined 08 Oct 2004, 1905 posts, Dubai)
Posted: Mon Feb 01, 2010 1:56 am
Hi goffinf,
If you only wanted a real-time analysis of your flows and resources, I would have suggested IS02 or something leaner and simpler.
If you want to gather data over days and weeks, I would suggest Tivoli OMEGAMON.
UNLESS you have already built your flows to audit transaction information.
Regards.
nathanw (Knight, joined 14 Jul 2004, 550 posts)
Posted: Mon Feb 01, 2010 2:50 am
OK, I'm not an expert in the OS you have, but for a snapshot I tend to use the following on Linux/UNIX:
top
Once this is up, press:
u - then enter the username of the broker; this cuts out unwanted processes
P (uppercase P) - sorts by CPU usage
H (uppercase H) - toggles the display of individual threads
c (lowercase c) - shows the full commands that are running, in this case the execution groups
So as long as you know which execution groups hold which flows, you can see which ones are hitting the machine heavily.
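The interactive recipe above can be approximated in batch mode, which also makes it easy to record over a period of time; the broker service user is an assumption here, and DataFlowEngine is the process each execution group runs as:

```shell
# Snapshot of CPU/memory per process for the broker's service user,
# sorted by CPU, with the full command line shown so execution groups
# (DataFlowEngine processes) can be told apart.
BROKER_USER="${BROKER_USER:-$(id -un)}"   # assumption: the user the broker runs under
ps -u "$BROKER_USER" -o pid=,pcpu=,pmem=,args= | sort -k2 -nr | head -10
```

Run that from cron every minute or so, redirecting to a dated file, and you have a crude resource log without installing any agents.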
This may not be as detailed as you may like, but as a snapshot I find it very useful. Also, I am sure there are ways to record these over a period of time.
_________________
Who is General Failure and why is he reading my hard drive?
Artificial Intelligence stands no chance against Natural Stupidity.
Only the User Trace Speaks The Truth
fjb_saper (Grand High Poobah, joined 18 Nov 2003, 20756 posts, LI, NY)
Posted: Mon Feb 01, 2010 1:54 pm
elvis_gn wrote:
    If you only wanted a real time analysis of your flows and resources, I would have suggested the IS02 or something leaner and simpler.
    If you want to gather data over days and weeks, I would suggest the Tivoli OMEGAMON.
    UNLESS you have already built your flows to audit transaction information.
If you don't have the Tivoli nodes deployed and don't want to change your flows, you can monitor and save data using other commercial products. BMC's middleware performance monitoring (e.g. QPasa) comes to mind...
Have fun
_________________
MQ & Broker admin