Boomn4x4
Posted: Mon Feb 20, 2012 6:26 am  Post subject: Cluster channels explanation
Disciple | Joined: 28 Nov 2011 | Posts: 172
I have been reading about cluster channels, sending and receiving, but I'm struggling to understand what, exactly, is happening. The thing I'm struggling with most is the significance of the "inactive" cluster-sender channel.
To simplify things, I have 2 queue managers set up in a cluster, "ClientQM" and "ServerQM". ClientQM sends messages to a "ServerQ" that is hosted on ServerQM. On ClientQM, I have defined a "To.ServerQM" sender channel and a "To.ClientQM" receiver channel. On ServerQM I have defined a "To.ServerQM" receiver channel.
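For reference, the definitions behind that description look roughly like this. The cluster name, host names and ports are made up for this post, and I'm assuming ServerQM is the one acting as the full repository:

Code:
* On ServerQM (assumed here to be the full repository)
ALTER QMGR REPOS(DEMOCLUS)
DEFINE CHANNEL(TO.SERVERQM) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('serverhost(1414)') CLUSTER(DEMOCLUS)
DEFINE QLOCAL(SERVERQ) CLUSTER(DEMOCLUS)

* On ClientQM
DEFINE CHANNEL(TO.CLIENTQM) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('clienthost(1414)') CLUSTER(DEMOCLUS)
DEFINE CHANNEL(TO.SERVERQM) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('serverhost(1414)') CLUSTER(DEMOCLUS)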
Now, it's my understanding that once I define a connection between "To.ServerQM" and "To.ClientQM", the SYSTEM.CLUSTER.TRANSMIT.QUEUE handles the passing of messages from the client to the server, and that channels are automatically started to relay those messages.
What has me confused, however, is why the sender/receiver channels show as "inactive". Why, even when I "Stop" those channels, do messages still flow through? I hate to sound so naive about this, especially since I've seen other people post similar questions to which the replies are typically something along the lines of "Read the documentation, dumbass". Well, I have been reading the documentation, and it just isn't clicking as to what exactly is happening.
Thanks for your patience.
Vitor
Posted: Mon Feb 20, 2012 6:47 am  Post subject: Re: Cluster channels explanation
Grand High Poobah | Joined: 11 Nov 2005 | Posts: 26093 | Location: Texas, USA
Boomn4x4 wrote:
    To simplify things, I have 2 queue managers set up in a cluster, "ClientQM" and "ServerQM". ClientQM sends messages to a "ServerQ" that is hosted on ServerQM. On ClientQM, I have defined a "To.ServerQM" sender channel and a "To.ClientQM" receiver channel. On ServerQM I have defined a "To.ServerQM" receiver channel.
That's not really going to simplify things for you. A WMQ cluster is not arranged client/server; it is peer-to-peer. The only difference is that 2 of the queue managers in a cluster are nominated to act as repositories for the cluster information.
Boomn4x4 wrote:
    Now, it's my understanding that once I define a connection between "To.ServerQM" and "To.ClientQM", the SYSTEM.CLUSTER.TRANSMIT.QUEUE handles the passing of messages from the client to the server, and that channels are automatically started to relay those messages.
No. The messages are passed via automatically defined channels.
Boomn4x4 wrote:
    What has me confused, however, is why the sender/receiver channels show as "inactive". Why, even when I "Stop" those channels, do messages still flow through?
Because the manually defined channels carry cluster information. The automatically defined channels carry the data.
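You can see the split for yourself in runmqsc. Something along these lines (using the names from your example) shows which cluster-senders were defined by hand and which ones the queue manager defined for itself:

Code:
DISPLAY CLUSQMGR(*) DEFTYPE QMTYPE STATUS
* DEFTYPE(CLUSSDR)  - a cluster-sender you defined manually
* DEFTYPE(CLUSSDRA) - a cluster-sender defined automatically
* DEFTYPE(CLUSSDRB) - defined both manually and automatically
DISPLAY CHSTATUS('TO.*')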
Boomn4x4 wrote:
    I hate to sound so naive about this, especially since I've seen other people post similar questions to which the replies are typically something along the lines of "Read the documentation, dumbass". Well, I have been reading the documentation, and it just isn't clicking as to what exactly is happening.
Well, I wouldn't have used the word "dumbass", but it is documented behaviour. You certainly need to get a better grip on the structure of a cluster if you think it's arranged client/server.
_________________
Honesty is the best policy.
Insanity is the best defence.
Boomn4x4
Posted: Mon Feb 20, 2012 7:03 am
Disciple | Joined: 28 Nov 2011 | Posts: 172
I used the terms client/server loosely. I do understand how they are arranged. I used those words only because my example had one QM sending messages to a "hosting" QM.
So, going off what you said, the sender/receiver channels that I created are only used to pass information about the QMGRs, not the messages themselves? The actual transmission of messages happens on automatic channels that are started and stopped behind the scenes? So by stopping the cluster-sender channel, all I was doing was stopping that QMGR from sending information about itself to the full repositories?
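In other words, all I did was the equivalent of this (channel name from my example above), and test messages put from ClientQM still arrived on ServerQ:

Code:
STOP CHANNEL(TO.SERVERQM)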
Vitor
Posted: Mon Feb 20, 2012 7:12 am
Grand High Poobah | Joined: 11 Nov 2005 | Posts: 26093 | Location: Texas, USA
Boomn4x4 wrote:
    So by stopping the cluster-sender channel, all I was doing was stopping that QMGR from sending information about itself to the full repositories?
Yes. Given that your cluster only has 2 queue managers, both of them should be full repositories. WMQ clusters have exactly 2, always 2 and only 2 full repositories.
Unless they have 3.
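For what it's worth, nominating a queue manager as a full repository is a one-liner, and DISPLAY QMGR shows what it is a repository for (using the made-up cluster name from the first post):

Code:
ALTER QMGR REPOS(DEMOCLUS)
DISPLAY QMGR REPOS REPOSNL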
_________________
Honesty is the best policy.
Insanity is the best defence.
Boomn4x4
Posted: Mon Feb 20, 2012 7:19 am
Disciple | Joined: 28 Nov 2011 | Posts: 172
Vitor wrote:
    Boomn4x4 wrote:
        So by stopping the cluster-sender channel, all I was doing was stopping that QMGR from sending information about itself to the full repositories?
    Yes. Given that your cluster only has 2 queue managers, both of them should be full repositories. WMQ clusters have exactly 2, always 2 and only 2 full repositories.
    Unless they have 3.
You have me scared... Our current development environment has 10 QMGRs in a single cluster, 2 of which are full repositories (my example used one partial repository putting messages to one full repository). There have been some discussions as to how many full repositories we should have in production. The network will be composed of several thousand partial repositories, and I have heard suggestions of as many as 6 full repositories to ensure reliability. You say 2, maybe 3... is more than 3 overkill?
fjb_saper
Posted: Mon Feb 20, 2012 7:26 am
Grand High Poobah | Joined: 18 Nov 2003 | Posts: 20756 | Location: LI, NY
Boomn4x4 wrote:
    Vitor wrote:
        Boomn4x4 wrote:
            So by stopping the cluster-sender channel, all I was doing was stopping that QMGR from sending information about itself to the full repositories?
        Yes. Given that your cluster only has 2 queue managers, both of them should be full repositories. WMQ clusters have exactly 2, always 2 and only 2 full repositories.
        Unless they have 3.
    You have me scared... Our current development environment has 10 QMGRs in a single cluster, 2 of which are full repositories (my example used one partial repository putting messages to one full repository). There have been some discussions as to how many full repositories we should have in production. The network will be composed of several thousand partial repositories, and I have heard suggestions of as many as 6 full repositories to ensure reliability. You say 2, maybe 3... is more than 3 overkill?
It all depends on your topology, geographic distribution and network reliability.
If your 2 FRs are set up to be highly available (hardware cluster or multi-instance), then 2 should be sufficient, on the assumption that the network is always available with near-zero latency...
Have fun
_________________
MQ & Broker admin
Vitor
Posted: Mon Feb 20, 2012 7:56 am
Grand High Poobah | Joined: 11 Nov 2005 | Posts: 26093 | Location: Texas, USA
Boomn4x4 wrote:
    I have heard suggestions of as many as 6 full repositories to ensure reliability. You say 2, maybe 3... is more than 3 overkill?
You'll find a lot of discussion in this forum on how many repositories you need in a WMQ cluster. You talk about reliability; the point many people overlook is that all the FRs do is hold cluster information. If all the full repositories in a cluster go down, the cluster still works, because they're not involved in the message transfer.
One possible reason for 3 FRs is if you have a very geographically diverse cluster and make a lot of changes.
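If you want to check which members of the cluster are the full repositories, and what cluster objects a partial repository currently knows about, something like this run on any queue manager in the cluster will do it:

Code:
DISPLAY CLUSQMGR(*) QMTYPE STATUS
* QMTYPE(REPOS) marks a full repository, QMTYPE(NORMAL) a partial one
DISPLAY QCLUSTER(*) CLUSQMGR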
_________________
Honesty is the best policy.
Insanity is the best defence.
rammer
Posted: Mon Feb 20, 2012 8:05 am
Partisan | Joined: 02 May 2002 | Posts: 359 | Location: England
This comment is going to tempt fate, but where I work we have only ever had 2 FRs and approximately 20 PRs, and we have never had an availability issue with the FRs (they sit on a SAN with failover capability). The only times we have had issues were in the early days of MQ version 5, when clustering was not very good, or when "someone" has changed something in recent years...
bruce2359
Posted: Mon Feb 20, 2012 8:11 am
Poobah | Joined: 05 Jan 2008 | Posts: 9469 | Location: US: west coast, almost. Otherwise, enroute.
Vitor wrote:
    If all the full repositories in a cluster go down, the cluster still works, because they're not involved in the message transfer.
...for existing cluster objects, that is, objects already known to the remaining PRs.
The official doc says that only 2 FRs will be used in a cluster.
If you intend to have a single, small, localized, reliable cluster, then 2 FRs work.
If you intend to have a geographically dispersed cluster (state-wide or nation-wide), you may want to create smaller, geographically localized clusters and interconnect or overlap them. In this case, each smaller cluster will have 2 FRs; each interconnected cluster will have 2 FRs; wash, rinse, repeat.
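As a sketch only (every name here is invented), a gateway queue manager that participates in two overlapping clusters, and acts as an FR for both, would be defined against a namelist rather than a single cluster name:

Code:
DEFINE NAMELIST(MY.CLUSTERS) NAMES(CLUS.EAST,CLUS.WEST)
ALTER QMGR REPOSNL(MY.CLUSTERS)
DEFINE CHANNEL(TO.GATEQM) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('gatehost(1414)') CLUSNL(MY.CLUSTERS)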
IBM offers a 3-day course, WM250 Designing and Architecting Clustering Solutions.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
Vitor
Posted: Mon Feb 20, 2012 8:28 am
Grand High Poobah | Joined: 11 Nov 2005 | Posts: 26093 | Location: Texas, USA
bruce2359 wrote:
    If you intend to have a geographically dispersed cluster (state-wide or nation-wide), you may want to create smaller, geographically localized clusters and interconnect or overlap them. In this case, each smaller cluster will have 2 FRs; each interconnected cluster will have 2 FRs; wash, rinse, repeat.
It's a viable solution. You'd need to judge the network topology against the 3-FR solution, but it could easily be a better way.
_________________
Honesty is the best policy.
Insanity is the best defence.
bruce2359
Posted: Mon Feb 20, 2012 11:16 am
Poobah | Joined: 05 Jan 2008 | Posts: 9469 | Location: US: west coast, almost. Otherwise, enroute.
Is it possible that the Search button above is inoperative? Or that Google has ceased to function?
Current posts lead me to believe so.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
Vitor
Posted: Mon Feb 20, 2012 11:53 am
Grand High Poobah | Joined: 11 Nov 2005 | Posts: 26093 | Location: Texas, USA
bruce2359 wrote:
    Is it possible that the Search button above is inoperative? Or that Google has ceased to function?
    Current posts lead me to believe so.
Just seems like Monday to me...
_________________
Honesty is the best policy.
Insanity is the best defence.