MQSeries.net Forum Index » WebSphere Message Broker (ACE) Support » Large Open for Output count on transmit queue

meekings
Posted: Thu Feb 17, 2005 11:13 am    Post subject: Large Open for Output count on transmit queue

Voyager

Joined: 28 Jun 2001
Posts: 86
Location: UK, South West

If I use rfhutil (from SupportPac IH03) to write a message to two different queues on a remote queue manager, I see the “open for output count” (OOC) on the transmit queue increase by 1.
If I instead put together a trivial broker message flow that takes a queue manager name and queue name from its input message, builds a destination list, and sends out the same message, and then repeat the previous scenario (two messages into the flow, each specifying the same remote queue manager but a different queue), I see the OOC on the transmit queue increase by 2.
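(For reference, a rough sketch of one way to watch this count programmatically, using the MQ classes for Java - the queue manager and transmit queue names below are only placeholders; DIS QL(<xmitq name>) OPPROCS in runmqsc shows the same number.)

import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

public class ShowOpenOutputCount {
    public static void main(String[] args) throws MQException {
        // Placeholder names - substitute the real broker queue manager and XMITQ
        MQQueueManager qmgr = new MQQueueManager("BROKER.QM");
        // MQOO_INQUIRE lets us read queue attributes such as the open output count
        MQQueue xmitq = qmgr.accessQueue("REMOTE.QM.XMITQ", MQC.MQOO_INQUIRE);
        System.out.println("Open output count: " + xmitq.getOpenOutputCount());
        xmitq.close();
        qmgr.disconnect();
    }
}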
The documentation says that the broker caches queue handles for performance reasons, but this behaviour is causing problems because our broker flows are responding to incoming requests for data, which are returned to temporary queues. These are mostly different for every incoming request, so that the OOC climbs rapidly.
I realize I could use MAXHANDS on the queue manager to limit the number of open handles, but with 10 instances of a flow inside an execution group, the number is still potentially large.
Some questions:
1. Is this expected behaviour? Why does the broker need a different handle on the transmit queue just because the destination queue is different (rfhutil doesn’t)?
2. How is performance impacted by this?
3. Is there any way to change it – eg, limit the cache size?
_________________
Brian Meekings
Information Design, Inc.
Craig B
Posted: Mon Feb 21, 2005 1:42 am

Partisan

Joined: 18 Jun 2003
Posts: 316
Location: UK

Hi,

Hopefully you will find this useful ...

When a message flow is processing messages it is very likely that WMQ queues will be opened and used, especially when using standard IBM primitives such as the MQInput and MQOutput nodes. This description concentrates on when such queues are opened and closed.

When a message flow contains an MQInput node, the input queue is opened as soon as the flow is deployed, and it is never closed while the flow is active in the broker. This is because the MQInput node continually tries to retrieve messages from the input queue, and there would be performance implications if the queue were opened and closed frequently.

At this point it is worth noting that starting and stopping message flows does NOT close the handles on queues that the message flow has open. Stopping a message flow just stops the message flow from retrieving messages from the open queue. It does not close the queue handle on the input queue and does not close the handles on any output queues!

At times during MQInput node processing, a Backout Requeue queue and/or the Dead Letter Queue has to be opened on the user's behalf. When such queues are used they are closed again immediately, since we do not expect them to be needed very often.

WMQ queues named in MQOutput nodes (either directly, in a DestinationList, or via the reply-to queue) are not opened until first needed, and are then held open until an open queue threshold is reached, at which point queues are considered for closing. This threshold is discussed in more detail later in this description. Generally a user will not see MQOutput queues being closed, but output queues could potentially be closed at some point.

The potential closing of queues referred to here happens in the normal running of the broker and is not the result of a user action. Queue handles will, however, be closed by the following user actions:

1. The broker is stopped or an execution group is restarted. In this case any queue handles will be closed, but note that input queues will be re-opened as soon as the flows restart.
2. A message flow is removed from an execution group. At this point any queue handles the flow has open will be closed.
3. A message flow is redeployed. When a message flow is redeployed the old version is deleted and the new version is deployed, so point (2) applies and the queue handles are closed; once again, though, the input queue will be re-opened when the flow starts.
The queue handles opened by a message flow are specific to that message flow and are not shared between flows. So if MessageFlow A opens Queue1 and then MessageFlow B needs to use Queue1, MessageFlow B will open its own queue handle to that queue. However, if MessageFlow A needs to use Queue1 again, it will perform a look-up in its queue cache, find that it still has Queue1 open, and use its own existing handle.

As you can tell from this last sentence, the execution group has a queue cache that stores open queue handles keyed on thread, queue manager name, queue name and the type of handle (i.e. input, output, etc.). When a message flow requires access to a queue, it passes in its own thread identifier, performs a hashed look-up on the queue manager name and queue name being specified, and also specifies whether it needs the queue for input or output. If a match is found, the message flow is given its existing queue handle; otherwise the queue is opened and the new handle is stored in the queue cache.
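To picture the look-up, here is a rough sketch in Java of the behaviour just described. This is only an illustration, not broker source code, and all of the names in it are invented:

import java.util.HashMap;
import java.util.Map;

// Illustrative model of the cache described above - not broker source code.
// The key combines thread id, queue manager name, queue name and usage (input/output).
class QueueCacheSketch {
    private final Map<String, Object> cache = new HashMap<String, Object>();

    Object accessQueue(long threadId, String qmgrName, String queueName, String usage) {
        String key = threadId + "|" + qmgrName + "|" + queueName + "|" + usage;
        Object handle = cache.get(key);
        if (handle == null) {
            handle = openQueue(qmgrName, queueName, usage); // MQOPEN only on a cache miss
            cache.put(key, handle);                         // the handle is then held open
        }
        return handle; // cache hit: the flow reuses its own existing handle
    }

    private Object openQueue(String qmgrName, String queueName, String usage) {
        return new Object(); // stands in for a real MQOPEN
    }
}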

Generally in a message flow the queue names are static, and so the execution group (DataFlowEngine process) does not maintain many queue handles. However, if the user sets a Destination Mode of 'Destination List' on the MQOutput node, then the queue name/queue manager combination can vary within the message flow. Depending on the flow logic, this combination could be different on every input message. For example, if an XML tag in the input message is used to specify the output queue name, the user could set the queueName in the DestinationData record to that tag value. If message 1 specified Queue1, message 2 specified Queue2, and so on up to message N specifying QueueN, then the message flow would open Queue1 through QueueN while processing these messages. So if N is 100, the message flow will be holding 100 queues open.

This may seem an unrealistic example, but when you factor in reply message logic this 'different queue name' scenario can be quite normal. The MQOutput node also has a Destination Mode of 'Reply To Queue', which instructs the MQOutput node to send a reply message based on the current message tree. In this case the MQMD.ReplyToQMgr and MQMD.ReplyToQ are used as the queue manager name and queue name to open. As with any other queue open request, the message flow will search its queue cache to see if it has a queue open with that queue manager and queue name combination; if not, it opens one. It is likely that the queue manager name specified will be different from our own queue manager name, and so this becomes what is known as a fully qualified open, i.e. an MQOPEN that specifies an MQOD.ObjectQMgrName different from the local queue manager. In this case the MQOPEN will obtain a handle to one of the following:

A remote queue definition that has the same name as the queue manager name.
A local queue with the same name as the queue manager that is defined with USAGE(XMITQ).
If the queue manager name is known in our cluster, the SYSTEM.CLUSTER.TRANSMIT.QUEUE will be used.
If a default transmission queue is defined on the queue manager and exists, then that queue will be used.
Hence, when the reply-to-queue functionality is used, a handle will be obtained to a transmission queue. Since the MQReply node performs an almost identical task (it will not surprise you to hear that the MQReply node and the MQOutput node in 'Reply To Queue' mode drive very similar code paths), the MQReply node will also cause similar handles to be opened.
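For what it's worth, this is roughly what such a fully qualified open looks like from an ordinary application using the MQ classes for Java (a sketch only; the queue manager and queue names are just examples):

import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

public class FullyQualifiedOpen {
    public static void main(String[] args) throws MQException {
        MQQueueManager qmgr = new MQQueueManager("QMGRA"); // the local (broker) queue manager
        // The ObjectQMgrName "QMGRB" differs from the local queue manager, so the open
        // resolves through the steps listed above and the handle actually ends up on a
        // transmission queue (or the SYSTEM.CLUSTER.TRANSMIT.QUEUE).
        MQQueue q = qmgr.accessQueue("QUEUE1",
                MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING,
                "QMGRB", null, null);
        q.close();
        qmgr.disconnect();
    }
}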

So why is this reply-to-queue scenario different from the multiple local queue scenario shown with the DestinationList? The difference is that when queues are opened on the local queue manager, individual separate handles will be seen on each queue. However, when different queue names are 'opened' on a different queue manager, multiple handles will be obtained on the SAME XMITQ (including the SYSTEM.CLUSTER.TRANSMIT.QUEUE). So for example, consider running a message flow with an MQReply node in a broker on QMGRA, which receives three messages with the following combinations:

1) MQMD.ReplyToQMgr = QMGRB, ReplyToQ = QUEUE1
2) MQMD.ReplyToQMgr = QMGRB, ReplyToQ = QUEUE2
3) MQMD.ReplyToQMgr = QMGRB, ReplyToQ = QUEUE3

If QMGRB is defined as a local XMITQ on our broker's queue manager, this will cause the message flow to acquire three queue handles on the QMGRB XMITQ on our local queue manager. If QMGRB were in our cluster, the flow would instead end up with three open handles on the SYSTEM.CLUSTER.TRANSMIT.QUEUE. If each input message has a different ReplyToQ, the broker will open N handles on the same XMITQ, and the user may find this strange.

Another unrealistic scenario? Maybe not! Consider a requester application that needs information from the broker and only runs when needed. It sends a request to the broker and expects a reply back. To do this it creates a temporary dynamic queue to use as its reply-to queue, and on the MQOPEN it asks WMQ to generate a unique name. That name is specified as the ReplyToQ and will be different on every request. As seen earlier, when the message flow responds to the application's request it will open that queue manager and queue name combination, and hence it opens a new XMITQ queue handle on EVERY request.
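A typical requester of this kind looks something like the following sketch using the MQ classes for Java (the queue manager, request queue and dynamic queue prefix names are only examples):

import com.ibm.mq.MQC;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

public class Requester {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("APPQM"); // example application queue manager

        // Create a temporary dynamic reply queue; WMQ generates a unique name from the
        // prefix, so the ReplyToQ is different on every run of the requester.
        MQQueue replyQ = qmgr.accessQueue("SYSTEM.DEFAULT.MODEL.QUEUE",
                MQC.MQOO_INPUT_EXCLUSIVE, null, "REQ.REPLY.*", null);

        MQMessage request = new MQMessage();
        request.messageType = MQC.MQMT_REQUEST;
        request.replyToQueueName = replyQ.name;    // the generated temporary queue name
        request.replyToQueueManagerName = "APPQM"; // the broker replies here via its XMITQ
        request.writeString("<Request><Id>42</Id></Request>");

        MQQueue requestQ = qmgr.accessQueue("BROKER.REQUEST.QUEUE", MQC.MQOO_OUTPUT);
        requestQ.put(request);

        // Wait for the reply on the temporary queue
        MQMessage reply = new MQMessage();
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = MQC.MQGMO_WAIT;
        gmo.waitInterval = 30000;
        replyQ.get(reply, gmo);

        requestQ.close();
        replyQ.close(); // the temporary dynamic queue is deleted when it is closed
        qmgr.disconnect();
    }
}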

In these cases, the message flow (and hence the execution group) will have legitimate open handles not only on many queues but potentially a large number of handles on individual XMITQs, including the SYSTEM.CLUSTER.TRANSMIT.QUEUE.

So where do cluster queues fit into this? In the previous paragraph we mentioned the possibility of handles on the SYSTEM.CLUSTER.TRANSMIT.QUEUE. These were not the result of opening a cluster queue, but of a fully qualified open on a remote queue manager that happens to be in our queue manager's cluster. When the user opens a cluster queue, they specify the cluster queue name and a blank queue manager name. When this queue handle is cached, we cache it with the queue name and no queue manager name. The queue manager indirectly gives us a handle to the SYSTEM.CLUSTER.TRANSMIT.QUEUE (S.C.T.Q). When opening cluster queues, we will only be given more handles on the S.C.T.Q if we open a different cluster queue with a different name.

Therefore, from the message flow perspective, opening cluster queues is no different from opening local queues. However, when considering the closing of output queues, the message flow will NEVER close a cluster queue. This is because the message flow may need to maintain affinity with its chosen back-end server. When the message flow opens a cluster queue, the queue manager will provide it with a destination queue. The cluster queue will have either NOTFIXED or OPEN (bind on open) defined as its DEFBIND attribute. If bind-on-open is used, the application must put all of its messages to the instance chosen at open time, so every MQPUT on this open handle will go to that back-end server. If the broker were to close this queue and open it again, it might get a different back-end server. Therefore we cannot close cluster queues, in case this affinity with a specific back-end server is lost.
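In application terms, what the flow is holding is equivalent to an MQOPEN with the bind-on-open option, along these lines (a sketch using the MQ classes for Java; the cluster queue and queue manager names are just examples):

import com.ibm.mq.MQC;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

public class ClusterBindOnOpen {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QMGRA"); // example broker queue manager

        // Blank/null ObjectQMgrName: the cluster workload algorithm picks an instance.
        // With bind-on-open (DEFBIND(OPEN), or MQOO_BIND_ON_OPEN as here) that choice is
        // fixed for the life of the handle, so every put below goes to the same instance.
        MQQueue clusterQ = qmgr.accessQueue("CLUSTER.TARGET.QUEUE",
                MQC.MQOO_OUTPUT | MQC.MQOO_BIND_ON_OPEN, null, null, null);

        for (int i = 0; i < 3; i++) {
            MQMessage msg = new MQMessage();
            msg.writeString("message " + i);
            clusterQ.put(msg); // all three go to the instance chosen at MQOPEN time
        }

        // Closing and re-opening could select a different instance, which is exactly
        // why the broker keeps cluster queue handles open.
        clusterQ.close();
        qmgr.disconnect();
    }
}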

The Publish nodes in the broker also put to WMQ queues. In fact the Publish node internally builds DestinationData records for each destination it needs to publish on, and then calls the MQOutput node's functionality to process a Destination Mode of 'Destination List'. This means that Publish nodes obey the same queue handle rules as discussed above, i.e. any queues opened by Publish nodes will be held open.

So when does a message flow consider closing output queue handles?

As previously mentioned, the execution group maintains a queue cache of open handles. For this queue cache there is a concept of an Ideal Maximum number of handles the execution group wants to hold. When this Ideal Maximum value is reached for the execution group, it will close the oldest queue handles that it can. It should be noted that the execution group only performs this processing when it is asked to open or access a queue: queues are only considered for closing when a request is made to access a queue and the Ideal Maximum size has been reached.

The concept of this Ideal Maximum is actually different between the V2.1 product and the V5 product. In V2.1, the execution group owns the queue cache and each message flow stores its own handles in the same cache. The Ideal Maximum threshold applied to all handles across the execution group, i.e. the total number of open handles for the execution group was considered to see if the Ideal Maximum had been reached. If it had, then queues not owned by the message flow currently making the access request would be considered for closing.

In V5, each thread has its own queue cache, and so the Ideal Maximum value applies to each message flow in the execution group. Since each thread has its own queue cache, when its own Ideal Maximum threshold is reached it considers closing queues that it owns, since it is not taking up any other thread's space. The Ideal Maximum threshold is actually the queueCacheMaxSize property of the ComIbmMQConnectionManager object. This can be altered using the mqsichangeproperties command, as follows:

mqsichangeproperties brokerName
-e ExecGpName
-o ComIbmMQConnectionManager
-n queueCacheMaxSize
-v nn
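
For example, to set a limit of 50 cached handles for an execution group called 'default' on a broker called MYBROKER (both names here are just examples), the command would be:

mqsichangeproperties MYBROKER -e default -o ComIbmMQConnectionManager -n queueCacheMaxSize -v 50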

In V2.1, if queueCacheMaxSize is set to 50, this means the execution group will cache 50 queue handles before it considers closing the older queue handles. In V5, if queueCacheMaxSize is set to 50 for an execution group, this means each message flow will hold 50 open queue handles before it starts closing them. Therefore the execution group can actually hold 50 * M handles, where M is the number of message flows in the execution group.

The default queueCacheMaxSize is 30, and this is stored in the broker database for each execution group. The same default should be used in V5; however, this value is not currently being picked up, and so a default of 240 is used instead. APAR IY67112 was raised to fix the problem where the Ideal Maximum is not set from the queueCacheMaxSize variable.

Until this is fixed, any one message flow can hold 240 queues open. This should not present any immediate problems since the WMQ max handles per thread default is 256.
_________________
Regards
Craig
meekings
Posted: Mon Feb 21, 2005 6:12 am

Voyager

Joined: 28 Jun 2001
Posts: 86
Location: UK, South West

This must be one of the longest and most comprehensive responses I've seen here! EXTREMELY helpful. Thanks, Craig.
_________________
Brian Meekings
Information Design, Inc.