zpat
Posted: Thu May 21, 2015 9:51 pm Post subject:
Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
Increase the log buffer pages (e.g. to 2048) in qm.ini.
If you use a SAN that guarantees writes, then change TripleWrite to SingleWrite. This makes a big difference to throughput. _________________ Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
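For reference, a hedged sketch of what the relevant Log stanza in qm.ini might look like (values are illustrative, and SingleWrite is only safe under the conditions Andyh describes in the next post):

Code:
Log:
   LogBufferPages=2048
   LogWriteIntegrity=SingleWrite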
Andyh
Posted: Thu May 21, 2015 11:00 pm Post subject:
Master
Joined: 29 Jul 2010 Posts: 239
Changing TripleWrite to SingleWrite without fully understanding the implications puts message integrity at risk.
In order to safely exploit SingleWrite it is essential that the synchronous writing of aligned 4KB pages is atomic; that is, if a failure occurs (e.g. a power failure) immediately after a 4KB write is scheduled, the outcome is either NO CHANGE or the FULL 4KB WAS WRITTEN. If there is ANY chance that less than 4KB and more than 0KB (e.g. a 512-byte sector) could be written, then it is NOT safe to use SingleWrite.
If there is sufficient concurrency in the workload there will be virtually no difference in performance between TripleWrite and SingleWrite.
In the case of a heavy workload of persistent messages it is generally good advice to set LogBufferPages to the maximum value (4096).
It is also essential for good performance on a heavily loaded queue (e.g. the SCTQ) that persistent messages are put and got inside syncpoint, as in the sketch below.
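To illustrate that last point, a minimal JMS sketch (queue manager, host, channel and queue names are hypothetical) of putting a persistent message inside syncpoint by using a transacted session; the same idea applies to the MQI via MQPMO_SYNCPOINT / MQGMO_SYNCPOINT, and the get side would use the same transacted session.

Code:
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class SyncpointPut {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setQueueManager("GWQM");                      // hypothetical gateway qmgr
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        cf.setHostName("mqhost");
        cf.setPort(1414);
        cf.setChannel("APP.SVRCONN");

        Connection conn = cf.createConnection();
        // true = transacted session: puts stay under syncpoint until commit()
        Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer =
                session.createProducer(session.createQueue("queue:///APP.REQUEST"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        producer.send(session.createTextMessage("request payload"));
        session.commit();   // the log is forced here, once per commit, not per message

        session.close();
        conn.close();
    }
}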
moonwalker
Posted: Fri May 22, 2015 6:56 am Post subject:
Apprentice
Joined: 02 Jun 2009 Posts: 42
The depth of 212 was only seen once in a while, not every second. We have a script that keeps recording the depth of the SCTQ, and its output shows 200 or 212 as the depth only occasionally.
Just to add a different dimension to the discussion:
I am sensing a problem with the source application in placing the requests. They run out of connections all the time. They open the queue, place the request and close it, then wait for a response on a different queue. It's a WAS-based application.
One other point: at low load, say about 40 requests per second, we have no problem in terms of response times; everything is absolutely fine. But as the load increases we have a problem.
For some reason I don't sense a problem at the broker or in the cluster neighbourhood, but I strongly suspect the WAS application. The WAS application owners, however, are blaming the round-robin method, which they say is delaying the cluster and eventually delaying the response from the broker.
Any help or thoughts are deeply appreciated. IBM labs shut the door on the PMR, saying MQ 6.* isn't supported anymore.
mqjeff
Posted: Fri May 22, 2015 7:00 am Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
moonwalker wrote:
They run out of connections all the time.

effect

moonwalker wrote:
They open the queue, place the request and close it.

cause

moonwalker wrote:
It's a WAS-based application.

JMS. They're either not using it or using it wrong.
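In other words, the open/put/close-per-request pattern is the likely cause. A hedged sketch (class and queue names are hypothetical) of the shape you want instead, creating the JMS objects once and reusing them for every request:

Code:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;

// Created once at application start-up, reused for every request,
// closed only at shutdown; never open/put/close per message.
public class RequestGateway implements AutoCloseable {
    private final Connection connection;
    private final Session session;
    private final MessageProducer producer;

    public RequestGateway(ConnectionFactory cf, String queueName) throws Exception {
        connection = cf.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(session.createQueue(queueName));
        connection.start();
    }

    public void send(String payload) throws Exception {
        producer.send(session.createTextMessage(payload));
    }

    @Override
    public void close() throws Exception {
        session.close();
        connection.close();
    }
}

In a WAS container the same effect normally comes from the container-managed connection and session pools rather than hand-rolled caching; the point is simply that the queue should not be opened and closed for every request.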
fjb_saper
Posted: Fri May 22, 2015 7:09 am Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
Upgrade to MQ 8. There is a significant increase in throughput for a SVRCONN channel with SHARECNV(1).
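For reference, a sketch of the MQSC change (the channel name is hypothetical):

Code:
ALTER CHANNEL(WAS.SVRCONN) CHLTYPE(SVRCONN) SHARECNV(1)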
Also you have to look at the WAS configuration. How are they reading the message off the queue? What method are they using? What is the polling interval, if they are using one? What are the connection pool size, session pool size, etc.?
Can you disclose what your real concern is there? A queue depth of 250 with the rate you mention is nothing to look badly at... It could just be a result of the snapshot and due to uncommitted messages.
What is your average response time, and what is your max response time (end to end)?
What is your average / max processing time (total elapsed time: queue waiting + flow time)?
What is your average / max transmission time (includes all the network stuff)?
What is your goal/SLA for those measurements?
Have fun _________________ MQ & Broker admin
moonwalker
Posted: Fri May 22, 2015 9:43 am Post subject:
Apprentice
Joined: 02 Jun 2009 Posts: 42
Well, we do plan to upgrade to MQ 8 eventually.
Below are the answers to the questions posted in the earlier post.
Can you disclose what your real concern is there? A queue depth of 250 with the rate you mention is nothing to look badly at... It could just be a result of the snapshot and due to uncommitted messages.
The real concern is that at a load of about 100 to 200 messages per second things tend to break apart, with response times increasing to 4 or 4.5 seconds. The expected response time is 300 to 500 milliseconds.
What is your average response time, and what is your max response time (end to end)? The average is 4 seconds; the max I have seen so far is close to 5 seconds.
What is your average / max processing time (total elapsed time: queue waiting + flow time)? The request put time on the queue is 2-odd seconds, the response get time is 4 seconds, and the flow processing time is less than a millisecond.
What is your average / max transmission time (includes all the network stuff)? No issues at the network level, as confirmed by the network teams.
What is your goal/SLA for those measurements? A 300 to 500 millisecond response time at 400 transactions per second; that is why we have put in four 8-processor broker boxes.
Questions that came out of further testing:
1. Would it help if we placed an alias queue at the gateway QM, just so that the put application deals with a physical MQ object rather than waiting for queue resolution every time it wishes to put a request?
2. Is upgrading the gateway QM to a full repository going to be of any help? It is currently only a partial repository.
fjb_saper
Posted: Fri May 22, 2015 12:44 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
moonwalker wrote:
Well, we do plan to upgrade to MQ 8 eventually.
Below are the answers to the questions posted in the earlier post.
Can you disclose what your real concern is there? A queue depth of 250 with the rate you mention is nothing to look badly at... It could just be a result of the snapshot and due to uncommitted messages.
The real concern is that at a load of about 100 to 200 messages per second things tend to break apart, with response times increasing to 4 or 4.5 seconds. The expected response time is 300 to 500 milliseconds.
What is your average response time, and what is your max response time (end to end)? The average is 4 seconds; the max I have seen so far is close to 5 seconds.
What is your average / max processing time (total elapsed time: queue waiting + flow time)? The request put time on the queue is 2-odd seconds, the response get time is 4 seconds, and the flow processing time is less than a millisecond.
Nicely dodging the question here. It wasn't without reason that I asked for elapsed time and not processing time or CPU time... So, again, for the flow: what are the average and peak values for total ELAPSED time? See the flow statistics.
moonwalker wrote:
What is your average / max transmission time (includes all the network stuff)? No issues at the network level, as confirmed by the network teams.
What is your goal/SLA for those measurements? A 300 to 500 millisecond response time at 400 transactions per second; that is why we have put in four 8-processor broker boxes.
Questions that came out of further testing:
1. Would it help if we placed an alias queue at the gateway QM, just so that the put application deals with a physical MQ object rather than waiting for queue resolution every time it wishes to put a request?
It depends on how the put application does it. If it needs to acquire the queue every time... At such a rate I would expect some kind of "hand-off" to a service that declares the queue once and then runs in some kind of MDB setup, keeping the queue handle open until either the qmgr or the server shuts down... So on the put side, and with the connection pool open, this should not take up to 2 seconds if the message is small. Check with IBM and open a PMR.
As stated, you might want to set "requires new" for the transaction on your MDB (outbound) so that the put happens in a transactional context but is not tied to your overall transaction waiting for the response.
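A hedged sketch of what that could look like on the outbound send bean (bean and method names are hypothetical):

Code:
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class OutboundRequestSender {

    // Runs in its own transaction, so the request put is committed and visible
    // on the queue immediately, instead of waiting for the caller's transaction
    // that is still blocked waiting for the response.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void sendRequest(String payload) {
        // JMS send of the request message goes here
    }
}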
Also check the app server for other bottlenecks that can affect MQ throughput and create peaks, like DB bottlenecks pushing the commit to release thousands of messages at once instead of a few hundred...
moonwalker wrote:
2. Is upgrading the gateway QM to a full repository going to be of any help? It is currently only a partial repository.
I don't think so. With the rate at which you are using those objects, I suspect the cached cluster information doesn't have time to go stale between two usages.
There is a maximum throughput per channel, however. So look at whether defining multiple cluster receiver channels for a single qmgr may speed things up...
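A sketch of the kind of additional definition meant here (channel, connection and cluster names are hypothetical):

Code:
DEFINE CHANNEL(TO.GWQM.2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('gwhost(1414)') CLUSTER(APPCLUS)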
Did you check your box stats for I/O, CPU and disk usage (MQ queue files and MQ logs...)?
You need to find exactly where your bottleneck is, and maybe sending a 1 to 2 minute trace to the PMR can help you there... _________________ MQ & Broker admin