adubya
Posted: Tue Jan 30, 2018 6:34 am
Partisan
Joined: 25 Aug 2011 Posts: 377 Location: GU12, UK
PeterPotkay wrote:
Huh.
Did you make 4 separate Transmission Queues as well? Or do the 4 Remote Queues all resolve to the same Transmit Queue?
Is it repeatable - 4 queues always faster, one queue always slower?
The "new architecture" described has the QMs remote from IIB; I took this to mean a client connection from IIB -> MQ, i.e. no transmission queues.
_________________
Independent Middleware Consultant
andy@knownentity.com
phil.h
Posted: Tue Jan 30, 2018 1:51 pm
Newbie
Joined: 24 Jan 2018 Posts: 7
As per the last post, the connections are all MQ client connections from IIB to the remote MQ QMgr via an IIB MQEndpoint policy. The tests have been repeated and the results are consistent: using a separate qlocal on the single remote QMgr for each of the 4 IIB broker MQ client connections is significantly quicker than having all four client connections share a single qlocal. The timings vary only marginally around the 3m 10s and 8m 30s results when the tests are repeated. I will also run the same load with 2 and 6 brokers to determine the scalability factor (at least an approximation of it).
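As a rough illustration of why separate queues can scale better, here is a toy Python model (not MQ code; the names and the one-lock-per-queue model are my assumptions): each put serialises on the lock guarding its target queue, so with one shared qlocal all four client connections contend on a single lock, while with four qlocals they proceed independently.

```python
import threading

def run_clients(num_queues, clients=4, msgs_per_client=250):
    """Toy model: one lock per queue stands in for the serialisation
    of puts on a shared qlocal. Not real MQ behaviour, just the shape."""
    locks = [threading.Lock() for _ in range(num_queues)]
    depths = [0] * num_queues

    def client(i):
        q = i % num_queues  # each broker connection targets one queue
        for _ in range(msgs_per_client):
            with locks[q]:      # all clients contend here if num_queues == 1
                depths[q] += 1  # stand-in for the MQPUT itself

    threads = [threading.Thread(target=client, args=(i,)) for i in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return depths

# Same total work either way; only the contention pattern differs.
single = run_clients(1)  # all 1000 puts funnel through one lock
four = run_clients(4)    # 250 puts per queue, four locks in parallel
```

The total message count is identical in both runs; the observed 3m 10s vs 8m 30s difference comes from where the contention lands, not from the amount of work.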
PeterPotkay
Posted: Tue Jan 30, 2018 2:48 pm
Poobah
Joined: 15 May 2001 Posts: 7722
AH, OK, now it makes a bit more sense.
As FJ suggested earlier, is there a local queue manager for the IIB broker (node)? If yes, I imagine you would get even better performance putting those messages locally (and letting MQ move them to the remote QM) versus incurring the overhead of the network for every put.
_________________
Peter Potkay
Keep Calm and MQ On
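For illustration, such a store-and-forward setup on the broker's local QMgr might look like the MQSC sketch below. All object, queue manager, and host names here are assumptions for the example, not taken from this thread:

```
* On the local QMgr: a transmission queue named after the central QMgr,
* a remote queue definition the broker puts to, and a sender channel.
DEFINE QLOCAL(CENTRALQM) USAGE(XMITQ)
DEFINE QREMOTE(TXN.INPUT) RNAME(TXN.INPUT) RQMNAME(CENTRALQM) XMITQ(CENTRALQM)
DEFINE CHANNEL(LOCALQM.TO.CENTRALQM) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('central.host.example(1414)') XMITQ(CENTRALQM)
```

A matching receiver channel of the same name would be needed on the central QMgr. The broker then puts to TXN.INPUT locally and the channel moves the messages asynchronously.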
phil.h
Posted: Tue Jan 30, 2018 4:07 pm
We do have local QMgrs on our IIB servers, but we only use them to enable the broker functions that require the SYSTEM.BROKER.* queues; we may deprecate those functions in future, and with them the need for a local QMgr. The reason we use a remote QMgr via MQ client connections for transaction data is that the central QMgr is on an HA cluster with HA storage. So in the event of an (unscheduled) outage of one of our IIB brokers (which are on cloud servers), we won't lose the data that would otherwise be sitting on that broker's local QMgr (or have its processing delayed), and another broker can pick up any transactions that were rolled back by the broker that incurred the outage.
adubya
Posted: Wed Jan 31, 2018 7:56 am
How vital is the logging information you're capturing? Does it have to be exchanged using persistent messages, or would non-persistent suffice?
_________________
Independent Middleware Consultant
andy@knownentity.com
phil.h
Posted: Wed Jan 31, 2018 2:11 pm
The log messages are as important as the transactions: they provide problem resolution, transaction replay and audit functions. To some extent the logging is more important than the transactions, so the requirement for the logging is message persistence. Tests have been completed with 1, 2, 4 and 6 brokers, and they scaled as expected for processing the transaction messages. However, now that the transactions are being processed we are seeing a backlog in our logging, which we are now tuning.
Andyh
Posted: Sun Mar 18, 2018 12:13 pm
Master
Joined: 29 Jul 2010 Posts: 239
MQ 9.0.5 has now been announced, and it's worth noting that it's highly likely 9.0.5 would not have needed multiple queues to achieve the desired performance in this scenario.
The underlying serialization benefits of syncpoint still exist in the bowels of the 9.0.5 queue manager. However, if the queuing engine detects that multiple application hObjs are concurrently open for output on an object, the queue manager will interpret an MQPUT outside of syncpoint by an application that doesn't have an active unit of work as an MQPUT inside syncpoint followed by an immediate MQCMIT. The effect is to allow the requests to progress in parallel and to very significantly limit the serialization impact of MQPUT outside syncpoint.
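To make the described 9.0.5 behaviour concrete, here is a hypothetical Python sketch of the decision as I read it. The class and method names are mine for illustration, not the actual queue manager internals: a put outside syncpoint from an application with no active unit of work, on a queue with multiple output handles, is reinterpreted as a put inside syncpoint plus an immediate commit.

```python
from dataclasses import dataclass, field

@dataclass
class ToyQueue:
    # Number of handles (hObjs) concurrently open for output on this object.
    output_handles: int = 1
    committed: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def put_in_syncpoint(self, msg):
        self.pending.append(msg)

    def commit(self):  # stand-in for MQCMIT
        self.committed.extend(self.pending)
        self.pending.clear()

def mqput_outside_syncpoint(q, msg, uow_active=False):
    """Toy sketch of the 9.0.5 reinterpretation described above."""
    if not uow_active and q.output_handles > 1:
        # Reinterpreted as MQPUT inside syncpoint + immediate MQCMIT,
        # letting concurrent putters progress in parallel.
        q.put_in_syncpoint(msg)
        q.commit()
    else:
        # Classic out-of-syncpoint put, which serializes putters.
        q.committed.append(msg)
```

Either way the message ends up committed on the queue; what changes is that the multi-handle path goes through the syncpoint machinery, which is what removes the serialization penalty.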