Multiple Cluster Receiver Channels in Repository QManager
vsathyan
Posted: Thu Jul 02, 2015 10:13 am    Post subject: Multiple Cluster Receiver Channels in Repository QManager
Is the below setup valid?
REPOS1 - host1, port 1001
Two listeners running on ports - 1001, 21001
TO.REPOS1 - CLUSRCVR - conname('host1(1001)')
REPOS2 - host2, port 1002
Two listeners running on ports - 1002, 21002
TO.REPOS2 - CLUSRCVR - conname('host2(1002)')
Can I add one more cluster receiver channel in REPOS1, like this?
TO.REPOS1.SECURE - CLUSRCVR - conname('host1(21001)')
The corresponding cluster sender, TO.REPOS1.SECURE, will be added on the partial repository queue managers and will use the same conname as specified in the definition above.
Alternatively, is it better to have multiple connames in the cluster receiver channel rather than the approach above, like below?
TO.REPOS1 - CLUSRCVR - conname('host1(21001),host1(1001)')
In this case, if the first conname fails, the sender connects using the second port.
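For clarity, here is a minimal MQSC sketch of the two alternatives (the cluster name MYCLUS is made up for illustration):
Code:
    * Approach 1: a second, dedicated cluster receiver on the extra port.
    DEFINE CHANNEL(TO.REPOS1.SECURE) CHLTYPE(CLUSRCVR) +
           CONNAME('host1(21001)') CLUSTER(MYCLUS) REPLACE

    * Approach 2: one cluster receiver advertising both ports; a sender
    * that cannot reach the first conname retries with the second.
    DEFINE CHANNEL(TO.REPOS1) CHLTYPE(CLUSRCVR) +
           CONNAME('host1(21001),host1(1001)') CLUSTER(MYCLUS) REPLACE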
I'm asking this question to fix a problem we are facing: one of the partial repositories is in a PCI zone behind a firewall and cannot communicate using port 21001. Port 1001 is open and works fine.
I know I can get the firewall port opened and update the cluster sender, but which of the two approaches above should I take?
Please add your inputs/comments and advise. Thanks.
fjb_saper
Posted: Thu Jul 02, 2015 10:29 am
You can separate the ports for secure and non-secure communication, but with MQ you don't have to. The channel definition is what carries the SSL setup, and depending on the channel used it will encrypt/decrypt the communication. The port is not relevant.
Most of the time the reason to use multiple ports is to avoid a denial of service attack on a specific port... although using the channel limits to restrict the number of instances is pretty effective... and then there is that thing called an external firewall...
The other reason to use multiple ports is if you are hitting some kind of I/O limit that would saturate the port... or the hardware capabilities...
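To illustrate (a sketch; the cluster name MYCLUS is made up, and the CipherSpec is just one example that must be available at your MQ level): the encryption comes from SSLCIPH on the channel definition, regardless of which port the listener uses.
Code:
    DEFINE CHANNEL(TO.REPOS1.SECURE) CHLTYPE(CLUSRCVR) +
           CONNAME('host1(1001)') CLUSTER(MYCLUS) +
           SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256) REPLACE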
Have fun
vsathyan
Posted: Thu Jul 02, 2015 10:40 am
Thanks!
The bottom line here is: in a repository queue manager, is it advisable to have multiple cluster receiver channels rather than a single cluster receiver channel with multiple connames?
Which one is preferable? Thanks again in advance.
mqjeff
Posted: Thu Jul 02, 2015 11:04 am
It's not clear why you are using two listeners.
fjb_saper
Posted: Thu Jul 02, 2015 11:54 am
vsathyan wrote:
    Thanks!
    The bottom line here is: in a repository queue manager, is it advisable to have multiple cluster receiver channels rather than a single cluster receiver channel with multiple connames?
    Which one is preferable? Thanks again in advance.
One should have nothing to do with the other...
If you have multiple cluster receivers for the same cluster, I could understand it if one had an internal conname and the other an external one... but usually those things are catered for by a DNS server or a hosts entry, and you don't need more than one cluster receiver for the same cluster...
You would if you were running a transition from non-SSL to SSL in the cluster. But again, that is at the channel level and not at the conname level...
If your queue manager is a multi-instance queue manager, you would have both host(port) pairs in the conname...
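For example (a sketch with hypothetical host names): a multi-instance queue manager advertises the active and standby hosts in a single conname, so even that case needs only one cluster receiver.
Code:
    DEFINE CHANNEL(TO.REPOS1) CHLTYPE(CLUSRCVR) +
           CONNAME('hostA(1001),hostB(1001)') CLUSTER(MYCLUS) REPLACE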
So far you have not presented a compelling case...
vsathyan
Posted: Thu Jul 02, 2015 6:47 pm
Here is my case:
Operating System: Linux
MQ Version: 7.5.0.2
We have around 52 queue managers, all partial repositories in a single cluster, plus two full repository queue managers. Initially we had only one port on each queue manager.
Applications connected on the same port that the cluster receiver channels used to reach the queue manager. However, we found problems with this approach:
1. When an application makes more connections than the ulimit can support, a TCP backlog builds up and queue manager performance degrades.
2. When we stop a server connection channel in forced mode, the port is still open and listening. The application can still attempt to connect to the queue manager from the client side. Though the attempts fail, if it retries within a short amount of time it saturates the port, which then stops accepting any more connections.
Because the port is unresponsive on a queue manager with a high connection count, cluster updates are not received. For example, if a cluster queue is added during such a situation, it does not appear on that queue manager, because the port is not responding and cannot receive cluster updates.
Hence, we decided to have a separate listener for application connections and another for cluster receiver communication (sketched below).
When we added the second listener to all the queue managers, including partial and full repositories, there was one queue manager in the PCI zone for which the firewall was not opened on the new port. Because of this, communication from the PCI queue manager to the cluster repositories failed.
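The split looked roughly like this (a sketch; listener names are made up and the ports are illustrative):
Code:
    * One listener for application connections, one for cluster traffic.
    DEFINE LISTENER(APP.LST) TRPTYPE(TCP) PORT(1001) CONTROL(QMGR) REPLACE
    DEFINE LISTENER(CLUS.LST) TRPTYPE(TCP) PORT(21001) CONTROL(QMGR) REPLACE
    START LISTENER(APP.LST)
    START LISTENER(CLUS.LST)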
Now, to apply a fix, we can either:
1. roll back the changes and use a single port for cluster and application connections;
2. use multiple connames in the repository cluster receivers so that they accept connections from the PCI and the other partial repository queue managers; or
3. create a separate cluster receiver for the PCI queue manager and keep the one already created for all the other partial repository queue managers to communicate in the cluster.
Hence the question of whether one more cluster receiver can be added, dedicated to the one queue manager in the PCI zone, to communicate in the cluster.
As I mentioned earlier, multiple connames in the repository cluster receiver seems the better option.
Thanks.
 |
fjb_saper
Posted: Thu Jul 02, 2015 7:02 pm
Now this is presenting the case properly...
First, I would check whether limiting the SVRCONN channels using MAXINST and MAXINSTC restricts the number of connections enough to keep the qmgr-to-qmgr connections always active. Remember you will still be subject to the max channels setting in the qmgr (see the qm.ini stanzas for max channels and max active channels).
This should take care of poorly behaving applications by limiting their connections.
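For reference, the qm.ini stanza in question looks like this (the values shown are purely illustrative, not recommendations):
Code:
    CHANNELS:
       MaxChannels=800
       MaxActiveChannels=800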
Second, you could create a second listener and, by convention, not release that information to the applications. That way you could force all qmgr-to-qmgr connections onto the second listener... but you'll still be subject to the max channels limit in the qmgr, so remember to implement point 1...
You will notice that adding a listener/port will do nothing for you if you're already hitting the max channels limit on the qmgr... Please do not think you can handle this like an HTTP server. It doesn't work that way.
Whichever works for you
vsathyan
Posted: Thu Jul 02, 2015 7:34 pm
1. We have the MAXINST and MAXINSTC attributes set on each svrconn channel.
2. We have the max channels value set in qm.ini
3. (Important) Even with the svrconn channel disabled/stopped on the queue manager and the listener running, an application can saturate the port by making endless connection attempts, even though none of them succeed in connecting to the queue manager.
4. Applications hosted on WebLogic or Oracle Service Bus have these problems, as the user does not know how the adapter is coded. Everyone develops for the happy path, not considering the repercussions or side effects if things go off track.
For example, I worked with our enterprise OSB architects and learned that the OSB console does not have an option to set the CCSID of a message before reading it (MQMessage.CharacterSet = x before reading it off the queue). Though that is not the point of concern here, the point I want to make is that these adapters do not expose what they do with MQ, and the options are limited.
Let me explain one peculiar problem. When an application running on OSB is unable to open a queue with specific options because the permission is not granted, say setall, it drops the connection altogether and makes a new connection attempt to the queue manager rather than logging the error and stopping. Because of this we have seen a lot of connections to the queue manager, eventually bringing the qmgr to a halt. Even when we increased the ulimit settings, the app reached 15K connections before maxing out the queue manager. Please note that the channel instances never exceeded the value set in the channel parameters; however, the same instances were used to make multiple connections using shared conversations. Setting shared conversations had other side effects.
Setting up and stabilizing the environment with different app services, whether custom written or deployed on an enterprise bus, is still challenging.
Anyway, thanks very much for your inputs and time. I'll do some R&D before I conclude which approach suits me best. Have a great day.
fjb_saper
Posted: Fri Jul 03, 2015 5:31 am
Quote:
    Even when we increased the ulimit settings, the app reached 15K connections before maxing out the queue manager
I understand the saturation of the port... This is why you may want to add a second listener. At the same time, if all those attempts are unsuccessful, your max channels limit is not reached and other applications should still be able to connect on a different channel. Giving the application the choice between 2 ports is not the solution.
What I suspect here is that the application gets an exception and does not release the resources acquired upon the exception, keeping the connections alive and thus maxing out the qmgr at MAXINSTC. If that limit is too high you can quickly max out max channels as well (think of all svrconn channels being maxed out at either MAXINST or MAXINSTC, plus the cluster traffic). This is equivalent to a denial of service attack. Providing the application with 2 ports will only get you to the problem quicker...
Remember that max channels is the count across all channels on the queue manager. Due to the cluster you already have the potential for 112 concurrent channels, just from the cluster, on each of the queue managers... This does not take into account any point-to-point channel pairs or any svrconn channels...
What is the total of MAXINST across all svrconn channels, compared to max channels / max active channels?
How do MAXINST and MAXINSTC compare on each svrconn channel?
Example: the application is expected to use up to 10 channels and runs 5 instances on 5 different boxes. Say we double the values to leave a buffer.
So you have MAXINST = 5 * 10 * 2 = 100 and MAXINSTC = 10 * 2 = 20.
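In MQSC, applying those worked-out values to a hypothetical channel name:
Code:
    ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) +
          MAXINST(100) MAXINSTC(20)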
fjb_saper
Posted: Fri Jul 03, 2015 5:39 am
vsathyan wrote:
    Let me explain one peculiar problem. When an application running on OSB is unable to open a queue with specific options because the permission is not granted, say setall, it drops the connection altogether and makes a new connection attempt to the queue manager rather than logging the error and stopping. Because of this we have seen a lot of connections to the queue manager, eventually bringing the qmgr to a halt. Even when we increased the ulimit settings, the app reached 15K connections before maxing out the queue manager. Please note that the channel instances never exceeded the value set in the channel parameters; however, the same instances were used to make multiple connections using shared conversations. Setting shared conversations had other side effects.
    Setting up and stabilizing the environment with different app services, whether custom written or deployed on an enterprise bus, is still challenging.
    Anyway, thanks very much for your inputs and time. I'll do some R&D before I conclude which approach suits me best. Have a great day.
You might want to set shared conversations to 1 and see how that plays out for you...
Quote:
    it tries to dump the connection altogether
Define "tries to dump the connection"?
Does the exception just leave the connection open? Any connection abandoned because of an exception needs to be properly closed and released. This is easier with JMS 2 and MQ 8.
What is the count of channel status entries, or of CONN(*), for the channel?
If you get to 15K instances, you missed the boat somewhere on MAXINST and MAXINSTC...
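A quick way to check (the channel name here is hypothetical): channel status shows the running instances and their conversation counts, and DISPLAY CONN shows the API connections behind them.
Code:
    * One entry per running channel instance, with current/max shared conversations:
    DISPLAY CHSTATUS(OSB.SVRCONN) CURSHCNV MAXSHCNV
    * The connections attributable to that channel:
    DISPLAY CONN(*) TYPE(CONN) WHERE(CHANNEL EQ OSB.SVRCONN)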
vsathyan
Posted: Mon Jul 06, 2015 12:00 am
We had implemented shared conversations set to 1 with the disconnect interval set to 0, MAXINST at 256 and MAXINSTC at 128, since we have two app server nodes (case 1). The problem occurred as soon as the app started.
Later we changed SHARECNV to 0 and the disconnect interval to 600 (10 minutes) (case 2); the problem did not appear until the application made 15K connections.
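In MQSC terms, the two configurations were roughly as follows (the channel name is hypothetical):
Code:
    * Case 1: one conversation per channel instance; DISCINT(0) means idle
    * clients are never disconnected.
    ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) +
          SHARECNV(1) DISCINT(0) MAXINST(256) MAXINSTC(128)
    * Case 2: no shared conversations (pre-V7 behaviour), 10-minute
    * disconnect interval.
    ALTER CHANNEL(APP.SVRCONN) CHLTYPE(SVRCONN) +
          SHARECNV(0) DISCINT(600)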
We had increased the OS parameters to support 15K connections. Using internally developed tools we were able to reproduce that once connections to the queue manager reach 15,500+, the TCP backlog starts.
We have applications making a maximum of 2,000 connections today in production. Not sure how it reached 15K+ connections, saturating the port.
As you mentioned, we suspected this:
Quote:
    What I suspect here is that the application gets an exception and does not release the resources acquired upon the exception
We have faced this issue only with applications hosted on Oracle Service Bus (OSB). The OSB adapters, for example, do not expose an option to set the CCSID when reading messages from MQ.
We also had memory leak issues in 7.5.0.2 and confirmed with IBM that they are fixed in 7.5.0.5; we tested that level and are applying the maintenance now. The environment is much more stable.
fjb_saper
Posted: Mon Jul 06, 2015 2:12 am
vsathyan wrote:
    The OSB adapters, for example, do not expose an option to set the CCSID when reading messages from MQ.
    We also had memory leak issues in 7.5.0.2 and confirmed with IBM that they are fixed in 7.5.0.5; we tested that level and are applying the maintenance now. The environment is much more stable.
Glad you're doing better now. As for the CCSID, I'd expect you to set it on the connection factory in JMS. That way the TextMessages received will be in the CCSID of the ConnectionFactory...
Read the OSB documentation carefully. I believe the CCSID may be something you have to configure with additional properties on the adapter... I know that was the case for the MQ AQ bridge...
|
Back to top |
|
 |
|
|
 |
|
Page 1 of 1 |
|
You cannot post new topics in this forum You cannot reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum
|
|
|
|