MQSeries.net Forum Index » General IBM MQ Support » Multi-instance QM with Solaris NFS server

Multi-instance QM with Solaris NFS server
Esa
PostPosted: Mon Mar 18, 2013 2:16 am    Post subject: Multi-instance QM with Solaris NFS server

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

If I understand IBM's Testing and support statement for WebSphere MQ multi-instance queue managers correctly, multi-instance queue managers don't support Solaris as an NFS v4 server.

So you can run multi-instance queue managers on Solaris (with certain patches), but the NAS can only be an NFS v4 server on an IBM System Storage N series system running Data ONTAP 7.3.2, or Veritas Storage Foundation V5.1 SP1 Cluster File System?

Please tell me how wrong I am again!
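For what it's worth, you don't have to rely only on the support matrix: IBM ships a file system checker, amqmfsck, that you can run against the mount yourself. A rough sketch of how it's typically used (the mount point /mnt/mqha/qmdata is a hypothetical example):

```shell
# Basic test of POSIX lock behaviour on the shared directory (one node)
amqmfsck /mnt/mqha/qmdata

# Concurrency test: run the same command on BOTH nodes at the same time,
# so concurrent writes through NFS are exercised
amqmfsck -c /mnt/mqha/qmdata

# Lock-handover test: run on both nodes; one should take the lock and the
# other should see it released when the first process is killed
amqmfsck -w /mnt/mqha/qmdata
```

If -w misbehaves on the Solaris-served mount, that is exactly the failover-lock mechanism a multi-instance queue manager depends on.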
ramires
PostPosted: Mon Mar 18, 2013 2:50 am

Knight

Joined: 24 Jun 2001
Posts: 523
Location: Portugal - Lisboa

I guess the important part of that document is this

"multi-instance queue managers were designed to use the less restrictive advisory file locking scheme and are not compatible with mandatory file locking. IBM has encountered mandatory file locking only with contact admin devices from the EMC Celerra family."

I had this issue with EMC. You can open a PMR asking for more details.
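The advisory/mandatory distinction is easy to demonstrate outside MQ. A minimal, POSIX-only Python sketch (illustrative only, not MQ's code: MQ uses fcntl byte-range locks, while flock is used here so that two file descriptors conflict even inside a single process):

```python
import fcntl
import os
import tempfile

# Advisory locking: only processes that ASK about the lock are affected.
path = os.path.join(tempfile.mkdtemp(), "master")  # hypothetical lock file
open(path, "w").close()

active = open(path, "r+")
fcntl.flock(active, fcntl.LOCK_EX | fcntl.LOCK_NB)  # "active instance" takes the lock

standby = open(path, "r+")
try:
    fcntl.flock(standby, fcntl.LOCK_EX | fcntl.LOCK_NB)
    standby_got_lock = True
except BlockingIOError:
    standby_got_lock = False  # "standby instance" is politely refused

# But a writer that never asks about the lock is NOT stopped: the lock is
# advisory. A mandatory-locking filesystem (as on some EMC Celerra NAS
# devices) would block this write, which is what breaks MI queue managers.
standby.write("scribble")
standby.close()
print("standby got lock:", standby_got_lock)  # standby got lock: False
```

The failover design depends on the standby being refused while it polls, yet nothing else on the system being forcibly blocked.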
_________________
Obrigado / Thank you
Esa
PostPosted: Mon Mar 18, 2013 4:14 am

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

Thanks, ramires.

I wonder if a "Yes" in the MIQM operating system/Network storage system matrix (see above link) means "Yes, it's supported" or "Yes, we have tested this"?

I also wonder if it would make any sense to run a multi-instance queue manager on top of a hardware-based HA system, like Veritas Cluster, to decrease planned downtime. Can a multi-instance QM failover cause an unnecessary hardware failover?
Vitor
PostPosted: Mon Mar 18, 2013 4:58 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Esa wrote:
I also wonder if it would make any sense to run a multi-instance queue manager on top of a hardware-based HA system, like Veritas Cluster. To decrease planned downtime.


I'm not sure what you're getting at here. I don't see what you'd get out of this, except possibly using an MI to facilitate DR in the event of the entire Veritas Cluster going down, so what do you mean? What does "decrease planned downtime" convey to you? If downtime is planned, it doesn't matter what you're using, because you'll shut it down.

Esa wrote:
Can a multi-instance QM failover cause an unnecessary hardware failover?


It depends on how your hardware failover (Veritas in your example) is configured. If Veritas is simply monitoring hardware then probably not, but Veritas is typically configured to monitor the health of its active node by the processes running on it (because you're now using an MQ QMGR and want it to come up on the passive node in the event of failure). So Veritas could easily react to the transfer of a queue manager as a failure and issue a failover. Hence my comment above about using MI after Veritas, not before it.
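To make that concrete, a process-level probe of the kind a Veritas Cluster Server application agent runs might look roughly like this (a hypothetical sketch; QM1 and the agent wiring are assumptions, and the exit values follow the VCS monitor convention of 110 for online and 100 for offline):

```shell
#!/bin/sh
# Hypothetical VCS-style application monitor: it probes the queue manager
# state, not the hardware. If the MI queue manager has failed over to the
# other node, this probe reports "offline" on the node VCS thinks is
# active, and the cluster may then start a hardware failover of its own.
if dspmq -m QM1 | grep -q 'STATUS(Running)'; then
    exit 110   # VCS convention: resource online
else
    exit 100   # VCS convention: resource offline
fi
```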
_________________
Honesty is the best policy.
Insanity is the best defence.
Esa
PostPosted: Mon Mar 18, 2013 8:19 am

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

Thanks, Vitor. I think your comments have helped me back to the correct path. I think I may have bumped into a somewhat unorthodox cluster setup where the terminology used differs from the Veritas documentation in a confusing way. I'm afraid the application that gets switched over is not an MQ service but a virtual machine running both an OS and MQ, so there is only one copy of it. To apply maintenance to MQ or the OS you have to take the services down, with no possibility to switch over. I hope I have misunderstood. I'm trying to get the documentation of the environment into my hands - if there is any...
mqjeff
PostPosted: Mon Mar 18, 2013 8:59 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

You can certainly run an MI qmgr in an environment as you describe. But what you're talking about isn't an HA qmgr and it isn't an MI qmgr.

It's a regular qmgr in an HA OS instance.

But the thing is that you need two OS images to run an MI qmgr, or for that matter an HA qmgr.

What you have is similar to what IBM Hypervisor or PureSystems does.
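As a reminder of what the two-image setup looks like in practice, the MI start commands are run once per node against the same shared data (the queue manager name QM1 is a hypothetical example):

```shell
# Node A: start the active instance; -x permits a standby elsewhere
strmqm -x QM1

# Node B: start the standby instance; it waits on the file lock held
# by the active instance and takes over if that lock is released
strmqm -x QM1

# Either node: display which instance is active and which is standby
dspmq -x -m QM1
```

With a single OS image, as described above, there is simply nowhere for the second strmqm -x to run.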
Esa
PostPosted: Mon Mar 18, 2013 11:50 pm

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

Thanks, mqjeff

So there is hope that I don't need to tear it all down to make it work.

Do you have an opinion on my other question?

Esa wrote:

I wonder if a "Yes" in the MIQM operating system/Network storage system matrix (see above link) means "Yes, it's supported" or "Yes, we have tested this"?
mqjeff
PostPosted: Tue Mar 19, 2013 3:47 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

Esa wrote:
Thanks, mqjeff

So there is hope that I don't need to tear it all down to make it work.

Do you have an opinion on my other question?

Esa wrote:

I wonder if a "Yes" in the MIQM operating system/Network storage system matrix (see above link) means "Yes, it's supported" or "Yes, we have tested this"?


I have several opinions.

None of them count.

The opinion that is the most relevant is that it doesn't matter. Because, again, it's not an MI qmgr.
Esa
PostPosted: Tue Mar 19, 2013 4:24 am

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

mqjeff wrote:

The opinion that is the most relevant is that it doesn't matter. Because, again, it's not an MI qmgr.


I'm investigating the possibility to make it one, so the question is relevant?
mqjeff
PostPosted: Tue Mar 19, 2013 4:43 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

Esa wrote:
mqjeff wrote:

The opinion that is the most relevant is that it doesn't matter. Because, again, it's not an MI qmgr.


I'm investigating the possibility to make it one, so the question is relevant?


You can't convert a qmgr to an MI qmgr; you have to start from scratch.

It's worth running experiments to determine what is the fastest failover.

I'd expect that a VM image failover would likely happen fastest, if the system manages to keep a proper image of the OS RAM state, such that it doesn't have to "reboot" or "restart" anything to instantiate the OS image in the new location.

And, again, to use an MI qmgr, (or even an HA managed qmgr) you need *two* separate OS images running *simultaneously*. That may incur licensing charges for the vm system and/or the os image itself.
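For the start-from-scratch route, the usual MI recipe is roughly the following (a hedged sketch; the mount points, the QM name QM1, and the stanza values are assumptions, and the real addmqinf arguments should be taken from the dspmqinf output, not from this example):

```shell
# Node A: create the queue manager with data and logs on the shared NFS v4 mount
crtmqm -md /mnt/mqha/qmdata -ld /mnt/mqha/qmlog QM1

# Node A: print the exact addmqinf command the second node needs
dspmqinf -o command QM1

# Node B: run the addmqinf command printed above; it will look something like
# addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm \
#          -v DataPath=/mnt/mqha/qmdata/QM1

# Then start with strmqm -x on both nodes as usual
```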
Esa
PostPosted: Tue Mar 19, 2013 5:11 am

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

mqjeff wrote:
Esa wrote:
mqjeff wrote:

The opinion that is the most relevant is that it doesn't matter. Because, again, it's not an MI qmgr.


I'm investigating the possibility to make it one, so the question is relevant?


You can't convert a qmgr to an MI qmgr, you have to start from scratch.

It's worth running experiments to determine what is the fastest failover.

I'd expect that a VM image failover would likely happen fastest, if the system manages to keep a proper image of the OS RAM state, such that it doesn't have to "reboot" or "restart" anything to instantiate the OS image in the new location.

And, again, to use an MI qmgr, (or even an HA managed qmgr) you need *two* separate OS images running *simultaneously*. That may incur licensing charges for the vm system and/or the os image itself.


I agree with everything you say. This is about creating a new MI QM, and I'm afraid I may have to apply the same approach that has been used with the existing one.

Yes, I think the HA cluster is enough for failover. I'm not too familiar with Solaris zones yet (I have a pure AIX background), but I'm afraid applying maintenance to MQ, or maybe even the OS, may cause downtime. Unless you make the QM MI. I know there will be other issues, for example with making the NFS server highly available...
mqjeff
PostPosted: Tue Mar 19, 2013 5:24 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

Esa wrote:
Solaris zones


Yikes!
bruce2359
PostPosted: Tue Mar 19, 2013 5:31 am

Poobah

Joined: 05 Jan 2008
Posts: 9472
Location: US: west coast, almost. Otherwise, enroute.

Esa wrote:
... I'm afraid applying maintenance to MQ or maybe even the OS may cause downtime.

While there is risk to applying maintenance, there is also risk from NOT applying maintenance.

Preventive maintenance (PM) updates your software with fixes to problems that others have encountered. Not applying PM leaves you exposed to those problems.

Policy should address the risk your organization will tolerate, and the extent and types of mitigation (MI, HA, DR, etc.) implemented.

The IT conundrum: We get no points for doing IT perfectly; we get negative points for doing IT wrong.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
Esa
PostPosted: Tue Mar 19, 2013 8:29 am

Grand Master

Joined: 22 May 2008
Posts: 1387
Location: Finland

mqjeff wrote:
Esa wrote:
Solaris zones


Yikes!


A zone seems to be the Solaris equivalent of an AIX WPAR, the poor man's LPAR?
Vitor
PostPosted: Tue Mar 19, 2013 8:32 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Esa wrote:
mqjeff wrote:
Esa wrote:
Solaris zones


Yikes!


A zone seems to be the Solaris equivalent of AIX wpar, the poor man's lpar?


Solaris Zones Are Cool.
_________________
Honesty is the best policy.
Insanity is the best defence.