StefanSievert
Posted: Tue Apr 30, 2002 9:29 am    Post subject:
Partisan
Joined: 28 Oct 2001    Posts: 333    Location: San Francisco
Hi there,
Is anybody out there using MQSeries V5.2 for AIX in an HACMP cluster where both boxes are 'hot', i.e. running under normal circumstances?
Usually, an HA cluster is set up in a standby configuration where only one node is processing work and the standby node takes over only in a failover situation.
We want to have two queue managers on two boxes in an HACMP cluster that are both up and running, performing their duties. Each queue manager should be able to fail over to the other box in case one box dies. Both queue managers are MQ-clustered with two front-end queue managers running outside the HACMP cluster.
Is what I am describing possible with HACMP, or am I dreaming? Sorry if this is a bit more of an AIX/HACMP question than an MQ question.
Any insight is highly appreciated!
Thanks a lot,
Stefan
_________________ Stefan Sievert
IBM Certified * WebSphere MQ
mrlinux
Posted: Tue Apr 30, 2002 7:26 pm    Post subject:
Grand Master
Joined: 14 Feb 2002    Posts: 1261    Location: Detroit, MI USA
No, you're not dreaming; that is perfectly acceptable.
_________________ Jeff
IBM Certified Developer MQSeries
IBM Certified Specialist MQSeries
IBM Certified Solutions Expert MQSeries
mcruse
Posted: Tue May 14, 2002 2:55 am    Post subject:
Novice
Joined: 15 Apr 2002    Posts: 13    Location: Germany
Hey Stefan,
I think that HACMP supports only standby solutions, because a filesystem must belong to a volume group and a volume group is accessible by only one system at a time. During a takeover, the first system has to vary off the volume group before the second system can vary it on.
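For illustration, the manual equivalent of what HACMP does during such a takeover looks roughly like this (mqvg and /var/mqm/qmgrs are hypothetical names for the shared volume group and its mount point; HACMP normally drives these steps itself):
Code:
# On the releasing node: unmount the shared filesystem, then release the VG
umount /var/mqm/qmgrs          # hypothetical mount point
varyoffvg mqvg                 # hypothetical volume group name

# On the taking-over node: acquire the VG, then mount the filesystem
varyonvg mqvg
mount /var/mqm/qmgrs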
I hope this helps.
wkr
Markus
Best regards from Swabia!
smahon
Posted: Tue May 14, 2002 6:04 am    Post subject: It works, but first.....
Apprentice
Joined: 24 Apr 2002    Posts: 29
This is absolutely a workable solution and I have done it many times. Each queue manager must be on a different port, and each queue manager's filesystems must be in a different volume group. PLUS, the following must be configured:
Since MQ derives its semaphore and shared memory IDs from inode numbers, these IDs can collide when you run multiple queue managers on one box and their files, as in this case, reside in different filesystems (two files on different filesystems can have the same inode number). MQ will simply report "unknown error" when attempting to start the second queue manager. The workaround is to relocate the following directories to a local, unshared filesystem and create a symbolic link from the old location to the new location:
...qmgrs/<QMGRNAME>/@ipcc/esem
...qmgrs/<QMGRNAME>/@ipcc/isem
...qmgrs/<QMGRNAME>/@ipcc/msem
...qmgrs/<QMGRNAME>/@ipcc/shmem
...qmgrs/<QMGRNAME>/@ipcc/ssem
...qmgrs/<QMGRNAME>/esem
...qmgrs/<QMGRNAME>/isem
...qmgrs/<QMGRNAME>/msem
...qmgrs/<QMGRNAME>/shmem
...qmgrs/<QMGRNAME>/ssem
This must be done for both queue managers, on both machines: it is sufficient to use mkdir to create the new directories on each machine. The symbolic links need to be created only once, since they sit on the shared filesystem and will be imported in a failover situation.
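A minimal sketch of that relocation for one queue manager, assuming a hypothetical queue manager QM1 with its data on the shared filesystem under /var/mqm/qmgrs and a local, unshared filesystem mounted at /mqlocal (all three names are illustrative, and the queue manager must be stopped first):
Code:
#!/bin/ksh
QMGR=QM1                        # hypothetical queue manager name
SHARED=/var/mqm/qmgrs/$QMGR     # shared (HACMP-managed) location
LOCAL=/mqlocal/qmgrs/$QMGR      # local, unshared target

for d in esem isem msem shmem ssem
do
    # New local directories: create these on BOTH machines.
    mkdir -p $LOCAL/@ipcc/$d $LOCAL/$d

    # Replace the shared directories with symbolic links: needed only
    # once, on the node that currently has the volume group varied on,
    # since the links sit on the shared filesystem and fail over with it.
    rm -r $SHARED/@ipcc/$d
    ln -s $LOCAL/@ipcc/$d $SHARED/@ipcc/$d
    rm -r $SHARED/$d
    ln -s $LOCAL/$d $SHARED/$d
done
On the second machine only the mkdir part is needed, since the links come across with the shared volume group.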
smahon
Posted: Mon May 20, 2002 4:21 am    Post subject:
Apprentice
Joined: 24 Apr 2002    Posts: 29
SupportPac MC63 also describes this configuration.
kavithadhevi
Posted: Thu May 23, 2002 8:16 am    Post subject:
Master
Joined: 14 May 2002    Posts: 201    Location: USA
Hi,
I am getting the following error:
AMQ6150: MQSeries semaphore is busy.
EXPLANATION:
MQSeries was unable to acquire a semaphore within the normal timeout period of 0 minutes.
ACTION:
MQSeries will continue to wait for access. If the situation does not resolve
itself and you suspect that your system is locked then investigate the process
which owns the semaphore. The PID of this process will be documented in the accompanying FFST.
Can anyone help me, please? I need some explanation of why this occurs and how to solve it.
Thanks,
-- Kavitha.