Share your RFE's
mqjeff
PostPosted: Fri Jun 03, 2016 5:05 am    Post subject: Share your RFE's

Grand Master

Joined: 25 Jun 2008
Posts: 17447

I know a lot of people have made announcements of the RFE's they have opened.

But those announcements tend to get lost as newer questions get posted.

Take a moment here to remind people of the RFE's you've created, that you think are especially valuable or useful.
_________________
chmod -R ugo-wx /
bruce2359
PostPosted: Sat Jun 04, 2016 8:52 am

Poobah

Joined: 05 Jan 2008
Posts: 9394
Location: US: west coast, almost. Otherwise, enroute.

A new forum for RFE's? An email off to Mehrdad shortly.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
cicsprog
PostPosted: Thu Jun 16, 2016 8:13 am

Partisan

Joined: 27 Jan 2002
Posts: 314

Add RECOVERY progression message(s) after an abnormal mqm restart. Supposedly this one is coming.
mqjeff
PostPosted: Thu Jun 16, 2016 8:16 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

cicsprog wrote:
Add RECOVERY progression message(s) after an abnormal mqm restart. Supposedly this one is coming.


Ok... but what's the link to the RFE?

So more people can read and vote for it...

The point of this thread is to have people post links to their RFE's, to remind people that they exist...
_________________
chmod -R ugo-wx /
cicsprog
PostPosted: Thu Jun 16, 2016 8:19 am

Partisan

Joined: 27 Jan 2002
Posts: 314

mqjeff wrote:
cicsprog wrote:
Add RECOVERY progression message(s) after an abnormal mqm restart. Supposedly this one is coming.


Ok... but what's the link to the RFE?

So more people can read and vote for it...

The point of this thread is to have people post links to their RFE's, to remind people that they exist...


here ya go
Link: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=86784

You may want to apply UI37315 if you process large numbers of XA transactions. We had a 6-hour recovery because the XA recovery routines were poorly written. The outcome of the issue was a Level 3 rewrite of the XA recovery routines (UI37315) and this RFE requesting progression messages during extended recovery.
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:14 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=56017

Headline:
Allow hot backups of a DataPower appliance

Description:
We are requesting that DataPower be enhanced to allow the secure-backup command and/or the export command to be run without having to quiesce the appliance first. Customers expect modern solutions to allow backups of configuration data to be taken without an outage.
Should the backup command encounter conditions that preclude a successful backup, it should end gracefully with an appropriate reason code and not hang the appliance.

Use case:
While the commands can be executed currently without first quiescing, we have experienced rare failures where the secure-backup command will hang. Transactions continue to process, but all further commands are ignored in the default domain. The referenced PMR concluded this is working as designed - the hung command necessitating a reboot is not a problem since "best practices" were not followed and the appliance was not quiesced first.

It is not a trivial matter to quiesce an appliance hosting dozens of domains and hundreds of services. Outages are hard to come by. Requesting an outage just to take a backup of configurations is seen as a gap in the product by a business community used to giant databases and servers being backed up while they run.

We execute our backups via automated scripts on a central server that contacts all our appliances via ssh. The backup script becomes exponentially more complicated if it also has to quiesce an appliance with dozens of domains, wait and verify they are all down, bring them back up, and check that they are all up again.

Adding a quiesce is not without risk. There is the potential that the unquiesce will encounter issues for one or more domains. In an unattended script that becomes a problem. Yes, we have monitoring, but it takes time to respond to the alerts, during which time we are in an unplanned extended outage.

We wish to be able to schedule our secure-backups during periods of low activity on the appliance and not have to take an outage on that appliance, to not have to incur the risk and complexity of quiescing and unquiescing multiple domains. The presumed risk is that the unquiesce runs into trouble and we leave the appliance with one or more domains down.

We wish to be able to run the secure-backup knowing that if it encounters problems, it ends gracefully with an appropriate error code, and doesn't leave us with an appliance that needs to be rebooted.

I appreciate that in a lab environment it's super easy to quiesce the appliance, or to reboot it if the command hangs. In a production environment, scheduling outages is difficult. Incurring outages on a regular basis just to get regular backups is not ideal.
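
For context, a minimal sketch of the kind of unattended backup driver described above, run from the central server over ssh. Appliance host names, the certificate alias and the FTP destination are placeholders, and the exact secure-backup argument order can vary by firmware level, so treat this purely as illustrative:

  #!/bin/sh
  # Hypothetical central backup driver; appliance names, cert alias and
  # destination URL are placeholders, not a verified configuration.
  STAMP=$(date +%Y%m%d)
  for dp in dp-prod-01 dp-prod-02; do
    ssh admin@"$dp" <<EOF
  configure terminal
  secure-backup backup-cert ftp://backupsrv/dp/${dp}-${STAMP}
  exit
  EOF
  done
  # The RFE asks that this be safe without a prior quiesce, and that a failed
  # secure-backup end with an error code rather than hang the default domain.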
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:22 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=61701

Headline:
Option to force MQ authority checking against the queue name before it's placed in the transmission queue

Description:
Create a QM attribute that allows the MQ Administrator to force the queue manager to execute MQ Authority checking for the User ID against the queue name even when the MQOPEN / MQPUT1 call is specifying a queue name that exists on another queue manager.

The ClusterQueueAccessControl=RQMName feature in the qm.ini file does not suffice - it still allows the user to address a message to every queue on the destination queue manager.

Changing the Receiver, Requestor or Cluster Receiver channel to run with PUTAUT=Context is not realistic in a production environment dealing with hundreds of users and thousands of queues across dozens of queue managers. It would mean users being known and authorized on multiple systems, which does not scale in the real world.

Use Case Set Up:
The intention is for MyUser, executing MyApp connected to QM1, to have access only to queues that start with ABC.QUEUE.*, regardless of which QM hosts those queues, and no access to any other queues on QM1 or on any QM that QM1 can talk to.

The MQ Admin grants MyUser +put on ABC.QUEUE*.** on QM1, along with +connect on QM1. No access is given to any other queues. No access is given to any XMITQs. ABC.QUEUE* queues exist on QM1, as well as on other QMs in the environment that QM1 is connected to.
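
As a minimal sketch, the set-up above corresponds to roughly these OAM grants (queue manager, principal and profile names are taken from the RFE's example, not a real environment):

  # Connect authority on the queue manager, put authority on the ABC.QUEUE* profile
  setmqaut -m QM1 -t qmgr -p MyUser +connect
  setmqaut -m QM1 -t queue -n 'ABC.QUEUE*.**' -p MyUser +put
  # Deliberately nothing on any XMITQ or on SYSTEM.CLUSTER.TRANSMIT.QUEUE,
  # which is where use cases 3, 4 and 5 below run into trouble.
  dspmqaut -m QM1 -t queue -n ABC.QUEUE.1 -p MyUser   # verify the grant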

Use case:
Use Case 1
MyUser wants to put a message to ABC.QUEUE.1 which is a local q on QM1. MyUser issues MQOPEN specifying the Destination Q name as "ABC.QUEUE.1", with blanks for the Destination QM name. This works and is a use case covered by MQ for many years. No change required. RFE should not change this behavior.


Use Case 2
MyUser wants to put a message to ABC.QUEUE.2 which is a locally defined Alias or Remote q on QM1. MyUser issues MQOPEN specifying the Destination Q name as "ABC.QUEUE.2", with blanks for the Destination QM name. This works and is a use case covered by MQ for many years. No change required. RFE should not change this behavior.

Use Case 3 - (non MQ cluster example)
MyUser wants to put a message to ABC.QUEUE.3 which is a queue defined on QM3. MyUser issues MQOPEN specifying the Destination Q name as "ABC.QUEUE.3", with "QM3" for the Destination QM name. This fails. The MQ Admin must give +put access to the XMITQ that routes to QM3. MyUser can now send a message to QM3 addressed to ABC.QUEUE.3. MyUser can also now address a message to every other q on QM3.

Potential Solution 3.a
Defining a Remote Q Def on QM1 that resolves to ABC.QUEUE.3 on QM3 does not help, as it will be ignored by QM1 if MyApp is specifying the Destination QM name on the MQOPEN, which is typical in a Request/Reply scenario.

Potential Solution 3.b
Change the Receiver channel on QM3 to PUTAUT=Context. Technically would work, but does not scale as described in the Description.

RFE Solution: If the QM was told to check the q name on every MQOPEN, it would intercept that API call even though it was resolving to the QM3 XMITQ and only allow it to complete if MyApp had access to the q name being specified. Access to the XMITQ would also be required - desirable as it would control if the app could send off the QM or not, and to which QMs.

Use Case 4 - (MQ cluster example A)
MyUser wants to put a message to ABC.QUEUE.4 which is a q defined on QM4. It might be clustered, it might not be. Doesn't matter in this use case. MyUser issues MQOPEN specifying the Destination Q name as "ABC.QUEUE.4", with "QM4" for the Destination QM name. This fails. The MQ Admin must give +put access to the S.C.T.Q that routes to QM4. MyUser can now send a message to QM4 addressed to ABC.QUEUE.4. MyUser can also now address a message to every other q on QM4, as well as every q on every other QM in the cluster.

Potential Solution 4.a
Same as 3.a

Potential Solution 4.b
Same as 3.b

Potential Solution 4.c: Use ClusterQueueAccessControl=RQMName and only allow access to QM4 for MyApp on QM1. This does nothing to prevent MyApp from addressing any queue on QM4.

RFE Solution: If the QM was told to check the q name on every MQOPEN call, it would intercept that API call even though it was resolving to the cluster XMITQ. The QM would only allow the MQOPEN to complete if MyApp had access to the q name being specified. Access to the XMITQ would also be required - desirable as it would be a way to control whether the app could send off the QM at all. Using ClusterQueueAccessControl=RQMName should ideally still work with the new solution, to allow us to control, in this scenario, which QMs in the cluster MyApp could address.

Use Case 5 - (MQ cluster example B)
MyUser wants to put a message to ABC.QUEUE.5 which is a clustered q defined on QM5 thru QM500. MyUser issues MQOPEN specifying the Destination Q name as "ABC.QUEUE.5", with blanks for the Destination QM name. This fails. The MQ Admin must give +put access to the S.C.T.Q that routes to QM5 thru QM500. MyUser can now send a message that will be load balanced to QM5 thru QM500 addressed to ABC.QUEUE.5. MyUser can also now address a message to every other queue on QM5 thru QM500.

Potential Solution 5.a
Define an Alias Q on QM1 for ABC.QUEUE.5. This does work. No change required. RFE should not change this behavior.

Potential Solution 5.b
Same as 3.b & 4.b

Potential Solution 5.c: Use ClusterQueueAccessControl=RQMName. This does nothing to prevent MyApp from addressing any and every queue on QM5 thru QM500 if the desire is to load balance to ABC.QUEUE.5 on QM5 thru QM500.

RFE Solution: If the QM was told to check the q name on every MQOPEN call, it would intercept that API call even though it was resolving to the cluster XMITQ, and only allow it to complete if MyApp had access to the q name being specified. This would allow us to eliminate the Alias Q on QM1. Access to the XMITQ would also be required - desirable as it would be a way to control whether the app could send off the QM at all. Using ClusterQueueAccessControl=RQMName would be optional and should ideally still work with the new solution, to allow us to control, in this scenario, which QMs in the cluster MyApp could address. We could restrict access to ABC.QUEUE.5 on QM5 thru QM200, while allowing it to QM201 thru QM500, for example.
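
To make the gap concrete: for use cases 4 and 5, granting put authority on the cluster transmission queue is what lets the messages flow today, and it is exactly the over-grant this RFE wants to avoid (a sketch, using the same example names as above):

  # Required today for MyUser to put to any clustered queue via QM1 -
  # and it implicitly allows addressing every queue on every QM in the cluster:
  setmqaut -m QM1 -t queue -n SYSTEM.CLUSTER.TRANSMIT.QUEUE -p MyUser +put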
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:23 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=70743

Headline:
DataPower - We need the ability to comprehensively audit who did what and when

Description:
Considering how security-focused DataPower is, I'm puzzled why there is not a comprehensive audit log that simply shows who did what and when, across all domains, as it relates to changes, starts, and stops.
I know, someone will say go to the Web GUI and check Status...View Logs...Audit Logs. Next to useless. I want a comprehensive audit log that simply shows who did what and when across all domains as it relates to changes, starts, and stops.
I'm playing around with Log Targets and capturing Auth and Mgmt Event Categories down to the Information level, but no good. I cannot definitively tell who did what and when.
Maybe I have not hit the magic combination of Event Categories and Filter Subscriptions / Suppressions in a user-defined Log Target, or maybe there is some secret command to make the default Audit Log (at Status...View Logs...Audit Logs) comprehensive.
Again, it seems very odd to have to struggle to get this info on a piece of equipment that touts its security capabilities.

Further research by my IBM contacts has indicated that this type of logging does exist for some objects in DataPower today (for example, a certificate object), but not all objects. However, if the appliance is configured in common criteria mode you will get the audit logging you desire for all objects, but common criteria mode brings along a number of other restrictions you may not want. We want the type of object audit logging currently done in common criteria mode to be configurable, so that it can be enabled outside of common criteria mode.
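
For reference, this is roughly the kind of user-defined Log Target experiment described above, expressed in CLI form. The target name, file path and host are placeholders, and the exact event-category and priority keywords can differ by firmware level, so treat this as an assumption-laden sketch rather than verified syntax:

  # Illustrative only - object names are placeholders and keyword spellings
  # may differ on your firmware; this still did not yield a usable "who did
  # what and when" trail, which is the point of the RFE.
  ssh admin@dp-prod-01 <<'EOF'
  configure terminal
  logging target change-audit
    type file
    local-file logtemp:///change-audit.log
    event-subscription auth info
    event-subscription mgmt info
  exit
  EOF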

Use case:
Something changed last night. We want to know what got changed, who changed it, and when. Joe Schmoe, Jon Smith and Jane Doe were all logged into the appliance at the same time last night, so reviewing the list of logins/logouts is not good enough.

We simply need a log that tracks exactly what objects got changed, who made the change, and what time it got changed. It would be nice to have the option to choose a higher level of logging that would not only show what object got changed, but the before and after values of the attribute(s) that got changed on that object.

The log needs to survive appliance restarts.
The log must be large enough to store days, if not weeks, of changes, not just a couple of hours' worth.
It would be nice to have the ability to keep this info on the appliance, and/or send it off appliance via typical Log Target methods.
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:24 pm

Poobah

Joined: 15 May 2001
Posts: 7717

They Declined this one - vote for it anyway!

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=31297

Headline:
MQ stats for PUTs on Remote Queues

Description:
MQ provides statistics for the number of PUTs and GETs, and the last date/time of PUTs and GETs, for local queues. It would be really helpful to have these statistics available for individual remote queues. Since XMITQs are often used by multiple remote queues, it's not feasible to look at the PUT statistics for a XMITQ to understand how many PUTs an individual remote queue had.
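
To illustrate the gap (queue and queue manager names are examples): local queues expose last-put information through QSTATUS when real-time monitoring is enabled, but a remote queue definition has no equivalent, and the XMITQ's figures aggregate every remote queue that resolves to it:

  # Local queue: last put date/time are available (with MONQ enabled)
  echo "DISPLAY QSTATUS(APP.LOCAL.QUEUE) TYPE(QUEUE) LPUTDATE LPUTTIME" | runmqsc QM1
  # The XMITQ's numbers lump together every remote queue that points at it
  echo "DISPLAY QSTATUS(QM2.XMITQ) TYPE(QUEUE) LPUTDATE LPUTTIME" | runmqsc QM1
  # There is no per-remote-queue equivalent - that is what this RFE asks for.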

Use case:
Self explanatory.
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:26 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=42528

Headline:
Have the Queue Manager produce a read-only message showing all the parameters it's running with

Description:
A queue manager gets many of its characteristics from the qm.ini file. Environment variables set for the mqm ID also dictate how the queue manager behaves. Currently there is no way for an MQ Admin to know what all these values are with an MQ client-based tool. The MQ Admin is forced to log onto the server, or to rely on a local agent installed on the MQ server that they then interact with.

This Request For Enhancement is to have the queue manager allow suitably authorized MQ client-based applications to read these attributes. A running queue manager knows what all the characteristics of its logging are (type, size, number, buffer, etc.). Could the queue manager dump all the attributes that are not available via runmqsc into a new queue (maybe SYSTEM.QM.INI.VALUES) at start-up, and allow only suitably authorized apps to read the message(s) on this queue? If the messages are persistent and small, maybe keep the old ones on the queue and just add a new one each time the QM restarts, to provide an audit trail or history of the parameters this queue manager ran with.

If the queue manager is aware of which environment variables are set, it would be helpful to have those values captured as well - for example, whether MQS_REPORT_NOAUTH is set to TRUE.
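
A short sketch of the contrast (paths and the queue manager name are examples; the proposed queue is the hypothetical SYSTEM.QM.INI.VALUES named above):

  # Today: local access to the server is required to see this information
  cat /var/mqm/qmgrs/QM1/qm.ini        # Log, Channels, TuningParameters, etc.
  env | grep '^MQS_'                   # e.g. MQS_REPORT_NOAUTH=TRUE
  # Proposed: a suitably authorized client could simply browse a queue the QM
  # populates at start-up (queue name is the RFE's suggestion; it does not exist today)
  amqsbcg SYSTEM.QM.INI.VALUES QM1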

Use case:
An MQ Admin using MQ Explorer, MO71 or some other tool (if those tools were enhanced to take advantage of this) could then remotely see the values of all the qm.ini related parameters the QM is running with. MQ Client based applications that already allow an MQ Admin to compare multiple QMs based off of PCF commands/responses would be able to fully compare 2 or more queue managers without the MQ Admin having to log onto each server to complete the job.
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:28 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=44622

Headline:
Allow dedicated, separate storage on a per-queue basis for MQ on Windows and Unix

Description:
Similar to z/OS, we would like the ability to have one or more queues use a separate and dedicated area of storage on Windows and Unix MQ systems.


Use case:
For example on Windows, I would like to be able to create a T:\ drive and then have the SYSTEM.DEAD.LETTER.QUEUE use that for its storage, while the rest of the Queue Manager's queues reside on the E:\ drive. On Linux, I would like to be able to create a new file system separate from /var/mqm and put QM1.XMITQ and QM2.XMITQ there, while all other queues reside in the default location.

Another use case might be to have all the queue manager's system queues (other than the DLQ) reside on the default storage, have the DLQ reside on its own storage, and have the app queues reside on a third section of storage. If a new app comes to this queue manager and occasionally needs a lot of deep queues, we could create a fourth area of storage and build its queues there.

Another use case would be to get one large qTree, maybe 750 GB, to be used for Dead Letter Queues. Connect this shared storage to each MQ server over 10 gigabit Ethernet, and then aim each QM's DLQ at this one common qTree. The odds of multiple QMs needing a large amount of storage for their DLQs at the same time are very small. But at any one time each QM would have the ability to queue up to 750 GB of dead letter messages, without having to give 750 GB of storage to QM1, another 750 GB to QM2, another 750 GB to QM3, etc., which would all end up sitting unused 99.99% of the time, but could certainly be useful at any time.

Currently we have to use a RCVR channel's Message Retry Count and Interval to try to throttle how fast messages get offloaded to a DLQ, with serious impacts to that channel's throughput for all the other innocent apps' messages trying to share it. Having giant jumbo DLQs would allow us to tune down these message retry counts and intervals, making for better channel performance in a shared environment when one app starts misbehaving.

Business justification:
If we could segregate the system queues from the app queues and the DLQ, we would make it less likely that the QM hits a disk-full situation because some app on some other queue manager keeps sending massive volumes of messages that spill over to the DLQ. The common queues like DLQs and XMITQs need to be able to handle an occasional BIG message, and the occasional burst of millions of little messages, so we are forced to set the Max Q Depth and Max Message Length of these queues in such a way that we are vulnerable to having a queue hold millions of small-to-big messages, overwhelming the underlying storage. On a shared queue manager with dozens or hundreds of queues it's not realistic to set every queue's Max Q Depth and Max Message Size low enough that there is zero chance of filling the single pool of disk space for the entire QM. Being able to segregate queues with a higher likelihood of having to hold a large amount of data onto separate storage would be very useful. Being able to segregate an app's queues onto dedicated storage would let us put the app that requires a big Max Q Depth and Max Message Size on its own storage, which we could then charge back to them.
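
A sketch of what is possible on Linux today and where it stops (device names and mount points are placeholders): you can create dedicated storage, but there is no queue-level attribute to point MQ at it, which is what this RFE asks for.

  # Create and mount a dedicated filesystem intended for DLQ spillover
  mkfs.xfs /dev/vg_mq/lv_dlq
  mkdir -p /mqdata/dlq
  mount /dev/vg_mq/lv_dlq /mqdata/dlq
  # There is no supported MQSC keyword to place SYSTEM.DEAD.LETTER.QUEUE (or
  # QM1.XMITQ) on /mqdata/dlq; everything lives under the queue manager's data
  # directory. A per-queue storage attribute is the enhancement requested here.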
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:29 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=55753

Headline:
MQ Application Activity Trace - Allow MQ administrator to limit tracing by queue name, channel name, Connection ID

Description:
The ApplicationTrace stanza in the mqat.ini file currently allows the MQ Admin to restrict Application Activity tracing by application name, by using the ApplName parameter.

This RFE is to add additional parameters to allow the MQ Admin to limit the application activity tracing even further - by queue name, or by MQ Client Channel name, or by Connection ID.

In a highly shared environment with multiple high-volume applications sharing the same queue manager, the App Activity Trace is of limited value if those apps all have the same name, due to the massive amount of trace data sent to SYSTEM.ADMIN.TRACE.ACTIVITY.QUEUE. By implementing this RFE we could use the powerful App Activity Trace feature more often to solve problems on our own, hopefully reducing the number of PMRs we need to open.
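
For reference, a sketch of today's mqat.ini stanza plus the kind of additional keys this RFE asks for. The ApplicationTrace stanza with ApplClass, ApplName and Trace is real syntax; the commented QueueName / ChannelName / ConnectionId lines are the proposed (hypothetical) parameters and are not valid today. Paths and values are examples:

  # Append an application activity trace stanza to the queue manager's mqat.ini
  cat >> /var/mqm/qmgrs/QM1/mqat.ini <<'EOF'
  ApplicationTrace:
     ApplClass=User
     ApplName=DataFlowEngine
     Trace=ON
  #  QueueName=ABC.QUEUE.*      proposed by this RFE - not a valid key today
  #  ChannelName=APP1.SVRCONN   proposed by this RFE - not a valid key today
  #  ConnectionId=414D5143...   proposed by this RFE - not a valid key today
  EOF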

Use case:
Our queue managers all support multiple applications. Frequently there are dozens of instances of an application connected to the same queue manager, and restricting the App Activity Trace to an appl name still produces a massive amount of App Activity Trace output. For example, on a WMB Queue Manager, all the message flows are "DataFlowEngine". And on our queue managers where DataPower connects with their Front Side Handlers, there are dozens of "WebSphere Datapower MQClient".

Allowing us to restrict the App Activity Trace to a specific queue name, or to a set of queue names with a wildcard, would let us focus the App Activity Trace on the one app whose activity we would like to trace. Other options to focus the trace could be by MQ Client Channel Name and/or by Connection ID.

Business justification:
In a highly shared environment with multiple high volume applications sharing the same queue manager, the App Activity Trace is of limited value if those apps all have the same name due to the massive amount of trace data sent to SYSTEM.ADMIN.TRACE.ACTIVITY.QUEUE. By implementing this RFE we could definitely use the powerful App Activity Trace feature more often to solve problems on our own, hopefully reducing the # of PMRs we need to open.
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:31 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=46015

Headline:
Provide statistics on how long a database interaction takes from inside a Compute node, without having to Trace

Description:
When a database call is made inside a Compute node, with many lines of non-database-related ESQL code before and after the database call, we would like WMB to expose information on how long it's taking for that database call to complete. Accounting & Statistics, and Exits, only get us down to the node level. You can tell the Compute node is the slow portion, but you can't prove the database call is the culprit. Using trace is tough in a busy production environment - it produces tremendous amounts of data, and may cause its own performance problems. We need the ability to quickly get database timings for a specific flow in a specific execution group, without having to restart or redeploy anything.
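
What exists today, for contrast (broker, execution group and flow names are examples): snapshot accounting and statistics can be turned on per flow without a redeploy and will show which node is slow, but not how long the database call inside that node took, which is the gap this RFE addresses.

  # Turn on snapshot statistics with node-level detail for one flow
  mqsichangeflowstats MYBROKER -s -e MYEG -f MYFLOW -c active -n advanced -o xml
  # Turn it back off afterwards
  mqsichangeflowstats MYBROKER -s -e MYEG -f MYFLOW -c inactive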

Use case:
The fire alarm is pulled. "WMB is slow!!!" We know it's almost surely the call to the database, but how do we prove it? We have dozens of execution groups with hundreds of message flows, processing thousands of transactions a minute. Turning on trace is not feasible. There is concern the trace will cause its own performance impacts. There is concern that a trace will simply be millions of lines of output showing what all the flows on the broker are doing, making it next to impossible to see what we need to see. There is concern the trace output is meant for L2 and L3 support, and won't be useful to us.

I would like the ability to run an mqsi* command where I specify the particular message flow that causes WMB to start producing Statistics and / or puts out a very targeted log file that shows start and end times for the database calls made by that one message flow, along with success/failure codes. And optionally showing the content of the request to the DB and the reply from the DB.

Business justification:
We waste hours trying to prove WMB is NOT the issue, that WMB is the victim of a slow database. Sometimes the problem mysteriously goes away (coincidentally after the DBAs get pulled into the call), no one admits to doing anything, and the problem is closed out as "WMB was slow and started working better on its own."

We need a way to quickly identify external resources as the slow component, so that we don't waste hours trying to prove this and WMB's reputation doesn't take a hit.
_________________
Peter Potkay
Keep Calm and MQ On
PeterPotkay
PostPosted: Mon Jun 20, 2016 3:33 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Bookmarkable URL:
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=39902

Headline:
Allow MQ Clustering to load balance at the queue level

Description:
Consider a small MQ cluster where QM1 hosts the sending application and QM2 and QM3 host the destination clustered local queues. Currently the MQ cluster load balancing algorithm considers the traffic over the MQ cluster channels when deciding the most eligible destination for the next message. While this does even out the number of messages sent from QM1 to QM2 and QM3, it means that the number of messages sent to any one set of clustered queues can be affected by the traffic going to another set of clustered local queues. The request is to allow the MQ Admin to choose to have the sending QM load balance messages at the queue level instead of the channel level.
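
An MQSC sketch of the scenario described in the use case below (cluster and queue names are examples). There is no queue-level setting that tells QM1 to balance QueueA's messages independently of QueueB's traffic, which is the enhancement requested:

  # On QM2, QM3, QM4 and QM5:
  echo "DEFINE QLOCAL(QUEUEA) CLUSTER(MYCLUS) DEFBIND(NOTFIXED)" | runmqsc QM2
  # Later, on QM2 and QM3 only:
  echo "DEFINE QLOCAL(QUEUEB) CLUSTER(MYCLUS) DEFBIND(NOTFIXED)" | runmqsc QM2
  # From this point QueueB's traffic skews the channel-based algorithm and,
  # with it, where QueueA's messages from QM1 end up.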

Use case:
QM1, QM2, QM3, QM4, QM5 are all in the same MQ cluster.
QM1 hosts the sending apps.
QueueA is clustered on QM2, QM3, QM4, QM5.
App A puts to QM1, and the load is distributed evenly to QueueA on QM2, QM3, QM4 and QM5, roughly 25% each.
QueueB is added to QM2 and QM3 only. It is not added to QM4 or QM5.
App B starts putting to QM1.
"B" messages are load balanced between QM2 and QM3, roughly 50% each.
But now "A" messages find themselves routed primarily to QM4 and QM5, because "B" messages are using only the channels to QM2 and QM3 and are influencing the routing that QM1 is doing.
The routing of App A's messages is being influenced by App B's messages, and that is a bad thing.
If implemented, this RFE would allow the MQ Admin to tell QM1 to load balance at the queue level, and this would restore App A's load balancing back to 25%/25%/25%/25%, regardless of what App B, C, D ... App Z are doing.
_________________
Peter Potkay
Keep Calm and MQ On
mqjeff
PostPosted: Tue Jun 21, 2016 3:40 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

That's the spirit, Peter!
_________________
chmod -R ugo-wx /