MQSeries.net Forum Index » Clustering » Validating a cluster config.

aboggis
Posted: Thu Sep 11, 2003 9:39 am    Post subject: Validating a cluster config.

Centurion

Joined: 18 Dec 2001
Posts: 105
Location: Auburn, California

Here's my cluster scenario:

Code:

    +------+                    +------+
    |      |  CLUSRCVR CLUSRCVR |      |
    | QMA1 |<-TO.QMA1  TO.QMA2->| QMA2 |
    |  FR  |                    |      |
    |      |<------+  +---------|      |
    +------+       |  |CLUSSDR  +------+
       ^ |         |  |TO.QMB1
CLUSSDR| |CLUSSDR  |  |
TO.QMA1| |TO.QMB1  +--------------+
       | |            |   CLUSSDR |
       | V     +------+   TO.QMA1 |
    +------+   |                +------+
    |      |<--+                |      |
    | QMB1 |           CLUSRCVR | QMB2 |
    |  FR  |CLUSRCVR   TO.QMB2->|      |
    |      |<-TO.QMB1           |      |
    +------+                    +------+
        ^
        |         ...<--------------+
 CLUSSDR|                    CLUSSDR|                     
 TO.QMB1|                    TO.QMA1|
    +------+                    +------+
    |      |           CLUSRCVR |      |
    | QMB1C| CLUSRCVR TO.QMB2C->| QMB2C|
    |      |<-TO.QMB1C          |      |
    |      |                    |      |
    +------+                    +------+

Six queue managers.
One cluster.
Two full repositories.

QMA1 (FR = Full Repository)
CLUSSDR: TO.QMB1
CLUSRCVR: TO.QMA1

QMB1 (FR = Full Repository)
CLUSSDR: TO.QMA1
CLUSRCVR: TO.QMB1

QMA2
CLUSSDR: TO.QMB1
CLUSRCVR: TO.QMA2

QMB2
CLUSSDR: TO.QMA1
CLUSRCVR: TO.QMB2

QMB1C
CLUSSDR: TO.QMB1
CLUSRCVR: TO.QMB1C

QMB2C
CLUSSDR: TO.QMA1
CLUSRCVR: TO.QMB2C


QMA1 & QMA2 each run on their own box.
QMB1 & QMB1C share a box, listening on different ports.
QMB2 & QMB2C share a box, listening on different ports.
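
As a sketch, the MQSC for one full repository (QMA1) and one partial repository (QMA2) might look like the following. The cluster name DEMO.CLUS, host names, and ports are illustrative assumptions, not taken from the actual configuration:

Code:

    * QMA1 -- full repository
    ALTER QMGR REPOS(DEMO.CLUS)
    DEFINE CHANNEL(TO.QMA1) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qma1host(1414)') CLUSTER(DEMO.CLUS)
    DEFINE CHANNEL(TO.QMB1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
           CONNAME('qmb1host(1414)') CLUSTER(DEMO.CLUS)

    * QMA2 -- partial repository
    DEFINE CHANNEL(TO.QMA2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
           CONNAME('qma2host(1414)') CLUSTER(DEMO.CLUS)
    DEFINE CHANNEL(TO.QMB1) CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
           CONNAME('qmb1host(1414)') CLUSTER(DEMO.CLUS)

Each queue manager needs only its own CLUSRCVR plus one manually defined CLUSSDR to a full repository; the remaining channels are created automatically as cluster-sender channels.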

I am confusing myself and am not sure whether I have the right cluster sender/receiver pairs defined. All six queue managers are members of the same cluster, with QMA1 & QMB1 defined as full repositories.

Will the queue managers function correctly if QMA1, QMB1 & QMB1C are offline, given that QMA1 & QMB1 are the only full repositories? For system testing, there are times when we wish to bring down QMA1, QMB1 & QMB1C. Since this is where the repositories are hosted, will this affect the remaining qmgrs? Or should I make it so that QMA1 & QMA2 are full repositories?

I have a cluster workload exit installed ensuring that messages are not sent to a cluster member that is not running.

Comments welcome.
EddieA
Posted: Thu Sep 11, 2003 10:01 am

Jedi

Joined: 28 Jun 2001
Posts: 2453
Location: Los Angeles

Tony,

Yes, you've got the right channels defined. MQ will automagically define any others needed.

If both full repositories are offline at the same time, the cluster will continue to operate normally, as long as you don't define a new object on one of the partial repositories and expect to use it from another partial before one of the full repositories comes back online, and don't reference an object for the first time from a partial.
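
One way to check what a partial repository currently knows about the cluster (a hedged example; run it in runmqsc on any of the remaining queue managers) is:

Code:

    DIS CLUSQMGR(*) QMTYPE DEFTYPE STATUS

This lists every queue manager the local repository has cached, whether each is a full or partial repository (QMTYPE), how the channel was defined (DEFTYPE), and the current channel status.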

There's no need to write a workload exit for what you've described. MQ itself will take care of not sending messages to queue managers that aren't available.

Cheers,
_________________
Eddie Atherton
IBM Certified Solution Developer - WebSphere Message Broker V6.1
IBM Certified Solution Developer - WebSphere Message Broker V7.0
aboggis
Posted: Thu Sep 11, 2003 10:14 am


Quote:
There's no need to write a workload exit for what you've described. MQ itself will take care of not sending messages to queue managers that aren't available.


BUT... if there are messages for a cluster queue that is hosted on one of the unavailable queue managers, isn't the message still put on the cluster xmit queue for delivery when the queue manager becomes available?

BTW, Eddie... been in any "high places" recently?
EddieA
Posted: Thu Sep 11, 2003 11:39 am


The decision as to which queue manager to send the message to is made at the time the message is put to the XMIT queue. If the queue is hosted ONLY on queue manager(s) that are unavailable, then yes, the message will be put to the queue to wait for one of those queue managers to become available again. But MQ decides which queue manager at that point. If there were 2 and both were unavailable, MQ could make a 'wrong guess' as to which will become available first. Is that what you were trying to pre-empt in your exit? Or were you planning on changing the destination queue in your exit (if that's possible), in which case, yes, you would need your own.

If the queue exists on multiple queue managers, and at least one queue manager is available, then the messages will be routed to ONLY the available queue manager(s).

Chapter 11 of the Cluster manual begins with a breakdown of how the 'built-in' algorithm works.

Once a message is on the XMIT queue, it cannot be re-routed to another queue manager without some 'application' reading the message and re-sending it.

As to the last question: Not recently, but I do get the occasional hankering to get my 'knees in the breeze'. So, it could be a possibility.
Ratan
Posted: Fri Sep 12, 2003 1:06 pm

Grand Master

Joined: 18 Jul 2002
Posts: 1245

I think you also need to consider BIND(OPEN) and BIND(NOTFIXED) here. If you set bind-on-open, then the messages will be put to the unavailable QM as well as the available QM, which means that if 2 QMs are hosting the queue, every alternate message is left on the XMIT queue to the unavailable QM. With bind-not-fixed, messages are only put to the available QMs.
_________________
-Ratan
aboggis
Posted: Fri Sep 12, 2003 2:25 pm


Good point. For that very reason all of our cluster queues are defined with DEFBIND(NOTFIXED).
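
For reference, a minimal sketch of such a definition (the queue name APP.REQUEST and cluster name DEMO.CLUS are illustrative, not from the actual setup):

Code:

    DEFINE QLOCAL(APP.REQUEST) CLUSTER(DEMO.CLUS) DEFBIND(NOTFIXED)

An application can also override the queue's default at open time by passing MQOO_BIND_NOT_FIXED (or MQOO_BIND_ON_OPEN) on the MQOPEN call.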
PeterPotkay
Posted: Sat Sep 13, 2003 4:43 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Quote:

If you set bind-on-open, then the messages will be put to the unavailable QM as well as the available QM, which means that if 2 QMs are hosting the queue, every alternate message is left on the XMIT queue to the unavailable QM.


I do not agree.

Whether or not the bind-on-open option is set, MQ will not send a message to a QM that is unavailable.

If you issue an MQOPEN, MQ looks for an available QM. If one of the 2 is not up, you bind to the one that is up, and for the duration of that connection to that queue, all subsequent puts will go to that one queue on the live QM. MQ will not send every other message to the down QM.

If you have 3 QMs in your cluster all hosting the queue (QM1 is down, QM2 and QM3 are up), and you issue an MQOPEN with bind-on-open, MQ will bind ALL your messages for that queue connection to either QM2 or QM3. It will never choose QM1. If you choose bind-not-fixed in this scenario, MQ will round-robin the messages between QM2 and QM3, again ignoring QM1 (until it comes up, anyway).


If you issue an MQOPEN against a bind-on-open queue, and MQ happens to bind you to QM1, which is up, all messages go to QM1 for the duration of that connection. If QM1 goes down in the middle of this connection, all the messages will stack up in the cluster XMIT queue until the QM comes back.
_________________
Peter Potkay
Keep Calm and MQ On