hopsala
Posted: Wed Oct 05, 2005 4:02 am    Post subject: Discourse \ Why more than one QM per machine?
Guardian
Joined: 24 Sep 2004    Posts: 960
Maybe I'm old-fashioned, being an MQ veteran and all, but there's something I can't quite figure out. I had educated myself to think one does not need more than one QM per machine, and that all the applications on that machine will MQCONN to it. True, sometimes you need different QMs for prod/prdt/test, but usually these are on different machines, and this only accounts for a maximum of three.
I'm not saying an organization shouldn't have more than one QM, of course not - at the last place I worked we had over 300 QMs, some of which had to support at least 5 large applications apiece, with a throughput of ~100MB/sec or more. (So a single QM can take any load you'll give 'im; no need to split load across a few QMs.)
Thing is, I've seen many posts from people with dozens of QMs on the same machine, and although I asked why, I never got a reply. Hence, what I'm asking is simply this: why should one have more than one QM per machine?
Example of said sightings: Sun Solaris with lots of WMQ
(P.S. By per-machine I mean the smallest OS unit - an image on VMware, an LPAR on z/OS, etc.)
Carl Bloy
Posted: Wed Oct 05, 2005 5:04 am    Post subject:
Acolyte
Joined: 16 Dec 2003    Posts: 69    Location: England
In production we only have one qmgr per machine, apart from WBIMB, where it is one per broker. I can really see no reason to have more than one qmgr per machine in production.
In test we have multiple qmgrs per machine to support multiple test environments of the same application.
_________________
Regards
Carl
zpat
Posted: Wed Oct 05, 2005 5:27 am    Post subject:
Jedi Council
Joined: 19 May 2001    Posts: 5866    Location: UK
Different service levels. We have a couple of message brokers (each with a QM, of course).
One is 24x7 and not stopped overnight.
The other is stopped for an hour to run a synchronised backup across multiple platforms, to ensure their data is backed up in a consistent state.
PeterPotkay
Posted: Wed Oct 05, 2005 5:29 am    Post subject:
Poobah
Joined: 15 May 2001    Posts: 7722
QM1P runs on ProdServer1.
QM2P runs on ProdServer2.
QM1Q runs on QAServer1.
QM2Q runs on QAServer2.
QM1D and QM2D run on DEVServer1, because we don't want to spend the $$$ for a second server in DEV. But we still want 2 separate QMs to try and make DEV "look" as much like QA and PROD as possible.
_________________
Peter Potkay
Keep Calm and MQ On
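For anyone following along, two queue managers coexist on one box simply by giving each its own name and its own listener port. A minimal sketch - the QM names come from Peter's post, but the port numbers are invented examples, not anything stated in the thread:

```shell
# Create and start both DEV queue managers on the single DEV server
crtmqm QM1D
crtmqm QM2D
strmqm QM1D
strmqm QM2D

# Each qmgr needs its own TCP port for inbound channels
# (1414 and 1415 are example ports only)
runmqlsr -m QM1D -t tcp -p 1414 &
runmqlsr -m QM2D -t tcp -p 1415 &

# Verify both queue managers are running
dspmq
```

Apart from the port, the two qmgrs can be defined identically, which is what makes DEV "look" like the two-server QA and PROD layouts.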
PGoodhart
Posted: Thu Oct 06, 2005 3:47 am    Post subject:
Master
Joined: 17 Jun 2004    Posts: 278    Location: Harrisburg PA
I do the same as Peter.
I don't want to confuse our webheads, so they get two queue managers to represent two machines.
They seem to find and use/exploit any minor difference, and then when it doesn't work exactly the same at the QA level they act like it is all your fault - so, two queue managers with no connectivity between them, to represent two machines.
_________________
Patrick Goodhart
MQ Admin/Web Developer/Consultant
WebSphere Application Server Admin
hopsala
Posted: Thu Oct 06, 2005 6:02 am    Post subject:
Guardian
Joined: 24 Sep 2004    Posts: 960
Thanks for the prompt replies, but I was leaning in a different direction:
1. Mainly production systems; although more than a couple of QMs for a test system is still odd at best.
2. A lot of QMs.
I mean, look at the link I posted - the guy had 19 QMs and 10 brokers on the same machine! (He even said "about 19" - he lost count, you see.)
PGoodhart
Posted: Thu Oct 06, 2005 6:19 am    Post subject:
Master
Joined: 17 Jun 2004    Posts: 278    Location: Harrisburg PA
I see your point, and that whole message was weird. I think it was license-driven (cheap rather than good). Some people also end up with lots of queue managers/brokers because of consolidation of equipment (also cheap rather than good). I can see it on z/OS, where you have lots of regions/LPARs, or on Linux/Windows/Solaris/AIX virtual machines. It makes some sense if you have applications with very differing needs, or are in a situation where you have some load balancing/failover going on (so one queue manager takes a hit or is down, and another picks up...). But this really is not something most of us need to get into. The case I see as most likely is when you are running applications for many external clients on the same machine and there is some contractual need for separation of data/resources.
_________________
Patrick Goodhart
MQ Admin/Web Developer/Consultant
WebSphere Application Server Admin
kevinf2349
Posted: Thu Oct 06, 2005 5:08 pm    Post subject:
Grand Master
Joined: 28 Feb 2003    Posts: 1311    Location: USA
We have a production UNIX box with 2 queue managers on it, but that is simply because a vendor insisted on us having 'their' queue manager named the way they wanted it and kept separate. As I see it, this was mainly because the company in question doesn't have a clue about MQ. Their focus was the application code, and it was working with the way they defined 'their' objects.
Our z/OS production has two queue managers: one just for SDSF, the other for everything else.
The AS/400 has one per environment. The Windows production machines have one per server.
My machine has as many as I decide to put there, but then I play around on mine all the time
hopsala
Posted: Fri Oct 07, 2005 1:51 am    Post subject:
Guardian
Joined: 24 Sep 2004    Posts: 960
kevinf2349 wrote:
Our z/OS production has two queue managers, one just for SDSF. The other for everything else.

What do you mean by a QM for SDSF? If memory serves, SDSF is a JES2 utility used to view the spool (job output) etc.; how does WMQ apply here?
kevinf2349
Posted: Fri Oct 07, 2005 5:04 am    Post subject:
Grand Master
Joined: 28 Feb 2003    Posts: 1311    Location: USA
hopsala
Posted: Fri Oct 07, 2005 7:20 am    Post subject:
Guardian
Joined: 24 Sep 2004    Posts: 960
Sounds nifty indeed; wish I had an MF handy to test it out. I actually had to write a small utility to send jobs between LPARs in my MF admin days, somewhere between a JES assembler exit and an HACMP chpid configuration. Yes, those were the days...
Anyway, back to the topic at hand - I have yet to receive a satisfying answer
tinye
Posted: Wed Oct 12, 2005 12:09 pm    Post subject:
Novice
Joined: 02 Aug 2005    Posts: 17    Location: OHIO
It may be that I have enough MQ smarts to be dangerous, and that's why I have more than one QM on a server.
I receive incoming customer connections, and I like to isolate each customer within a QM. That way, I know that everything within a QM is for a particular customer, and if I'm monkeying with something for a specific customer, I'm not hitting all customers.
It's probably not the best way to go, but it works for me right now. As the architecture of a single QM becomes clearer to me, I may shift to that in time.
ramires
Posted: Wed Oct 12, 2005 2:29 pm    Post subject:
Knight
Joined: 24 Jun 2001    Posts: 523    Location: Portugal - Lisboa
In my case, inside the same organization, there is a Windows box with several qmgrs. Each qmgr serves a different department. We have different qmgr names and the same queues. The qmgr can be viewed as a "service" we provide. If needed, we can stop one and let the others run.
If a new department needs our "service", we create a new qmgr and replicate the whole environment (except port numbers and channel names).
Regards
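ramires' replication step can be scripted. A hedged sketch, assuming the common objects are kept in a standard MQSC definitions file; the qmgr name, file name, and port below are invented for illustration, not taken from his setup:

```shell
# Provision a new per-department queue manager from the standard template
crtmqm DEPT_NEW.QM
strmqm DEPT_NEW.QM

# Replay the common object definitions, so every departmental qmgr
# carries the same queues (standard.objects.mqsc is a hypothetical file
# of DEFINE QLOCAL / DEFINE CHANNEL statements)
runmqsc DEPT_NEW.QM < standard.objects.mqsc

# Only the listener port (and the channel names in the MQSC file)
# differ from one department's qmgr to the next
runmqlsr -m DEPT_NEW.QM -t tcp -p 1420 &
```

Keeping the definitions in one file is what makes "replicate the whole environment" a one-command step rather than a manual rebuild.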
hopsala
Posted: Wed Oct 12, 2005 9:33 pm    Post subject:
Guardian
Joined: 24 Sep 2004    Posts: 960
ramires wrote:
If a new department needs our "service", we create a new qmgr and replicate the whole environment (except port numbers and channel names)

This was the exact point of this post - why do you treat a QM as a service, instead of a queue as a service? I mean, it's like having several applications on the same z/OS and creating an LPAR for each one, or like creating one inetd daemon for each application; the examples go on. The term "service" is hierarchic, so the QM is the general service for WMQ services, and queues are the per-application service; at least that's how I see it, and I'm sure MQ has the technology (auth, service, recovery etc.) to support this approach, meaning that IBM implicitly endorses it as well.

ramires wrote:
If needed, we can stop one and let the others run.

While this is a common argument, I've always found it problematic at best: stopping a QM is never an application requirement, always a maintenance requirement - so you gain nothing by having a QM per application. On the contrary, every FixPack, version upgrade, configuration task, backup task, and design change will have to be reproduced per QM, instead of once.
You could say that when you want to bring a QM down for maintenance, this gives you the option of stopping only one application at a time (given you don't have to reboot the server, which is often the case) - but is it really worth the maintenance overhead? Besides, use of failover/balancing solutions clears up that issue in a much more elegant way.
So, what am I missing here?
hopsala
Posted: Wed Oct 12, 2005 9:45 pm    Post subject:
Guardian
Joined: 24 Sep 2004    Posts: 960
tinye wrote:
It may be that I have enough MQ smarts to be dangerous, and that's why I have more than one QM on a server.
I receive incoming customer connections, and I like to isolate each customer within a QM. That way, I know that everything within a QM is for a particular customer; and if I'm monkeying with something for a specific customer, I'm not hitting all customers.

(In addition to my response to ramires)
What per-customer monkeying about do you do that endangers the QM as a server? If you call configuring a channel or a queue monkeying about, then you should probably learn to trust MQ better than that.
Think of it this way - a QM with x applications needs to be brought down y times per month for maintenance, and y=x (to simplify); so, if you have 10 applications on the same QM, you'll have to bring it down 10 times per month, and if you have 10 applications, each on a separate QM, each of them will have to be brought down once per month - a sum total of 10 per month...
I guess my point is that, as opposed to servers and OSs, MQ does not become unstable as throughput and client count increase, other than at very high throughputs (100MB/sec); so I don't think the term "dangerous" applies.
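hopsala's y=x arithmetic above can be checked with a trivial calculation (a toy model of his simplification, nothing more):

```shell
# Toy model: each queue manager needs one maintenance window per
# application it hosts (the y = x simplification), so the total number
# of monthly down-events is the same either way.

apps=10
consolidated=$(( apps * 1 ))   # one QM hosting all 10 apps: 10 downs/month
split=$(( 1 * apps ))          # 10 QMs with one app each: 10 downs/month

echo "consolidated=$consolidated split=$split"
[ "$consolidated" -eq "$split" ] && echo "same total"
```

The totals come out equal, which is his point: splitting applications across QMs doesn't reduce the overall maintenance burden, it just slices it differently.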