cluster issue
Vitor
Posted: Mon Jan 11, 2010 9:10 am

Grand High Poobah
Joined: 11 Nov 2005  Posts: 26093  Location: Texas, USA
rsk33 wrote:
    on QM1
    DEFINE QLOCAL ('a') DEFBIND(NOTFIXED) boqname('Backout') CLUSTER('A') SHARE REPLACE;
    on QM2
    DEFINE QLOCAL ('a') DEFBIND(NOTFIXED) boqname('Backout') CLUSTER('B') SHARE REPLACE;

Is this supposed to be telling us something?

It certainly doesn't explain how messages are being distributed, as the "same" queue isn't in both clusters - each cluster has a separate queue called 'a'.

Which, in a moment of revelation, is why you're getting cluster resolution errors now that QM2 has gone. The application (or something) must be addressing a given copy of the queue, and something's trying to use QM2.

I'll also point out for the record that DEFBIND is exactly that - a default, like persistence. The application doesn't have to honour it, and in this instance it sounds like it isn't.

If this set-up was intended to provide message failover in the event of queue manager failure, it's been set up wrong. As you've discovered. If you want all messages to go to QM1.a when QM2.a is unavailable, the queues have to be in the same cluster, not two different ones. And the putting application has to use BIND_NOT_FIXED and anonymous addressing, which it can't in this set-up because it has to bridge the clusters.
_________________
Honesty is the best policy.
Insanity is the best defence.
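
[Editor's note: a minimal MQSC sketch of the failover arrangement Vitor describes - the same queue advertised to one shared cluster from both queue managers. Reusing the cluster name 'A' from the original post is illustrative; any single cluster both queue managers belong to would do:

    * On QM1 and again on QM2 - identical definition, identical cluster
    DEFINE QLOCAL('a') DEFBIND(NOTFIXED) BOQNAME('Backout') CLUSTER('A') SHARE REPLACE

The putting application must then open the queue with MQOO_BIND_NOT_FIXED (or MQOO_BIND_AS_Q_DEF, letting the queue's DEFBIND(NOTFIXED) take effect) and leave ObjectQMgrName blank in the object descriptor, so that cluster workload management is free to route each message to whichever instance is available.]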
bruce2359
Posted: Mon Jan 11, 2010 9:14 am

Poobah
Joined: 05 Jan 2008  Posts: 9472  Location: US: west coast, almost. Otherwise, enroute.
Vitor wrote:
    It certainly doesn't explain how messages are being distributed as the "same" queue isn't in both clusters ...

The original post didn't state that this was the case, but I believe it was rsk33's intent.

My next question to rsk33 will be: have you read the WMQ Clusters manual?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
rsk33
Posted: Mon Jan 11, 2010 9:19 am

Centurion
Joined: 21 Aug 2006  Posts: 141
The front-end QM is a partial repository for both cluster A and cluster B.
Vitor
Posted: Mon Jan 11, 2010 9:20 am

Grand High Poobah
Joined: 11 Nov 2005  Posts: 26093  Location: Texas, USA
rsk33 wrote:
    Front end QM is partial repository for both cluster A and cluster B.

You've already said that, and it makes no difference at all to any of the comments made about your set-up and your problem.
_________________
Honesty is the best policy.
Insanity is the best defence.
rsk33
Posted: Mon Jan 11, 2010 9:26 am

Centurion
Joined: 21 Aug 2006  Posts: 141
I agree with Vitor that both QMs should be in the same cluster.
But if the cluster is corrupted, both will fail. The above setup was configured with that cluster corruption in mind.
bruce2359
Posted: Mon Jan 11, 2010 9:29 am

Poobah
Joined: 05 Jan 2008  Posts: 9472  Location: US: west coast, almost. Otherwise, enroute.
Quote:
    front end QM is partial repository for both cluster A and Cluster B

Exactly how did you accomplish this?

Do you have cluster-sender channels to both full repositories? What do these channel definitions look like?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
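
[Editor's note: for context on what bruce2359 is asking about - the usual way a queue manager becomes a partial repository in two clusters is one cluster-receiver channel per cluster, plus one manually defined cluster-sender channel per cluster pointing at a full repository in that cluster. A sketch only; every channel name and connection name below is illustrative, not taken from the thread:

    * Cluster A: advertise this QM to cluster A, and point at a full repository for A
    DEFINE CHANNEL('TO.FRONTQM.A') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('frontqm.example(1414)') CLUSTER('A')
    DEFINE CHANNEL('TO.FULLREPOS.A') CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('fullreposa.example(1414)') CLUSTER('A')
    * Cluster B: likewise, against a full repository for B
    DEFINE CHANNEL('TO.FRONTQM.B') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME('frontqm.example(1414)') CLUSTER('B')
    DEFINE CHANNEL('TO.FULLREPOS.B') CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME('fullreposb.example(1414)') CLUSTER('B')

If the definitions in use differ from this pattern, that difference is exactly what bruce2359's question is probing.]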
Vitor
Posted: Mon Jan 11, 2010 9:31 am

Grand High Poobah
Joined: 11 Nov 2005  Posts: 26093  Location: Texas, USA
rsk33 wrote:
    But if the cluster is corrupted, both will fail. The above setup was configured with that cluster corruption in mind.

I echo my associate's comment - have you read the Clusters manual?

I repeat my previous comment - there's no such object as a cluster!

What, exactly, do/did you think would be corrupted and cause both queue managers to be inaccessible? What, exactly, do/did you think was the advantage of this set-up?

Given that this is the situation you were trying to protect against, i.e. corruption by hard disk failure, do you now agree that you were wrong about both of the above points? As it's not even slightly failed over?
_________________
Honesty is the best policy.
Insanity is the best defence.
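
[Editor's note: to illustrate Vitor's point that there is no "cluster" object to corrupt - cluster state is just repository data held independently by each queue manager (in SYSTEM.CLUSTER.REPOSITORY.QUEUE), and each queue manager's view of it can be inspected with standard MQSC display commands, for example:

    * What this queue manager knows about the cluster's members
    DISPLAY CLUSQMGR(*) ALL
    * Which cluster queues this queue manager has learned about
    DISPLAY QCLUSTER(*) ALL

Corrupting one queue manager's repository does not destroy "the cluster"; the other members retain their own repository data.]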