essive (Novice, Joined: 15 Nov 2004, Posts: 24)
Posted: Wed Jan 31, 2007 7:55 am
Post subject: Routing messages on Put in clustered queues
Is there any way to have a publishing client application do an MQ Put such that messages of the same type all go to the same queue manager in a cluster?
Here's my requirement: three production MQ servers in a cluster. I want to distribute messages across the clustered queue, but messages with the same classification (basically a type) must go to the same physical queue manager.

jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Wed Jan 31, 2007 7:56 am
Yes: BIND_ON_OPEN.
But it's a poor idea to design any affinity between messages.
And you can't use BIND_ON_OPEN in pub/sub.
Depending on your topic hierarchy, or if you're using WMB as your broker, you can subscribe each individual queue on each queue manager to different types.
_________________
I am *not* the model of the modern major general.

essive (Novice, Joined: 15 Nov 2004, Posts: 24)
Posted: Wed Jan 31, 2007 7:59 am
BIND_ON_OPEN requires naming the QMGR, correct? There's no way to say I want these messages to route to the same destination (I don't care which one) without specifying a QMGR?

jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Wed Jan 31, 2007 8:06 am
No, BIND_ON_OPEN does not require the queue manager name.

ashoon (Master, Joined: 26 Oct 2004, Posts: 235)
Posted: Wed Jan 31, 2007 8:18 am
Post subject: easy way
An easy way would be to have a different queue for each of your types, with only one local instance of each type's queue shared in the cluster...

essive (Novice, Joined: 15 Nov 2004, Posts: 24)
Posted: Wed Jan 31, 2007 8:32 am
Ok, let me see if I have this straight:
1. The remote client must use BIND_ON_OPEN to get similar messages to the same destination.
2. However, you must use different queues for the types, each defined as a local queue on one server in the cluster.

So if I have 3 servers, each with 1 QMGR - call them QM1, QM2 and QM3, all in a cluster - and, say, 6 categories, I would define local queues:
QLOCAL1, QLOCAL2 on QM1
QLOCAL3, QLOCAL4 on QM2
QLOCAL5, QLOCAL6 on QM3

The remote client would then put with BIND_ON_OPEN to a local queue name chosen by whatever business logic I want to group these messages by. In my example I have a business dept field with 30 depts, so I would route by deptID to one of the six QLOCALs based on that ID (say, deptID mod #ofqueues).

So there's no way to define a clustered queue just called QLOCAL, visible to all QMs, where the put sends only the type I define to one physical destination? In other words - divide my depts across the 3 queue managers, but keep the same dept on a single QMGR.
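The deptID-mod-#ofqueues scheme above can be sketched as a small helper. This is client-side routing logic, not an MQ API call; the queue names come from the example, and the function name is hypothetical. The affinity only holds if every putting application uses the same function and the same queue list in the same order:

```python
# Hypothetical routing helper for the deptID mod #ofqueues scheme.
# QLOCAL1..QLOCAL6 are the example's queue names, not a real configuration.
QUEUES = ["QLOCAL1", "QLOCAL2", "QLOCAL3", "QLOCAL4", "QLOCAL5", "QLOCAL6"]

def queue_for_dept(dept_id: int) -> str:
    """Map a department ID onto one of the type-specific cluster queues.

    The same dept always hits the same queue, so per-dept ordering is
    preserved as long as every publisher uses this exact mapping.
    """
    return QUEUES[dept_id % len(QUEUES)]
```

With 30 depts and 6 queues, depts 1, 7, 13, 19 and 25 all land on QLOCAL2, and each queue serves exactly five departments.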

Vitor (Grand High Poobah, Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA)
Posted: Wed Jan 31, 2007 8:39 am
essive wrote:
    So there's no way to define a clustered queue just called QLOCAL, visible to all QMs, where the put sends only the type I define to one physical destination? In other words - divide my depts across the 3 queue managers, but keep the same dept on a single QMGR.

By default, messages are spread evenly across all the appropriate queues, so a QLOCAL defined across all the queue managers will get an even spread of messages. If you open a specific queue on a specific queue manager to deliver a specific kind of message, you defeat much of the advantage of clustering and take a performance hit (OPEN is a relatively expensive operation).
And that assumes you can even do BIND_ON_OPEN in a pub/sub environment; I'm with jefflowrey on this.
As has been posted before, what you're describing is message affinity, and it's a bad thing, not least for the reasons you're encountering.
What you could do is put in a custom version of the cluster workload exit that contains the routing logic. Exits are not for the beginner, and it's not something I'd use myself in your position, but I mention it so you have all the options before you.
_________________
Honesty is the best policy.
Insanity is the best defence.

jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Wed Jan 31, 2007 9:45 am
Okay.
BIND_ON_OPEN is only necessary if you stick with one queue for all types.
And all it really does is ensure that each client application sends all messages from one MQ session (defined by opening and closing a queue) to the same instance of a queue that is shared in a cluster.
But that doesn't help you decide that message type A goes over here, while message type B goes THERE.
If you have a different queue for each type, then every putting application can presumably NOT set BIND_ON_OPEN, and distribute messages across the instances of each type-specific queue.
And then you can distribute the instances of each type-specific queue across the queue managers in your cluster in whatever way makes sense.
If you're using pub/sub, the only mechanism you have for controlling which messages go to which queues is the subscription. Depending on which broker you are using for pub/sub, you have different levels of granularity for the subscription.
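One way to picture the subscription-based control described above: put the classification (here, the department) into the topic string, and have the queue on each queue manager subscribe only to its own departments. The topic layout, the function names, and the dept-to-QMGR split below are all assumptions for illustration, not MQ API calls:

```python
# Hypothetical topic layout EVENTS/DEPT/<id>; the split of 30 departments
# across QM1-QM3 is invented for illustration.
SUBSCRIPTIONS = {
    "QM1": {f"EVENTS/DEPT/{d}" for d in range(1, 11)},
    "QM2": {f"EVENTS/DEPT/{d}" for d in range(11, 21)},
    "QM3": {f"EVENTS/DEPT/{d}" for d in range(21, 31)},
}

def topic_for(dept_id: int) -> str:
    """Topic string a publisher would use for one department's events."""
    return f"EVENTS/DEPT/{dept_id}"

def receiving_qmgrs(dept_id: int) -> list:
    """Queue managers whose subscription matches this department's topic."""
    topic = topic_for(dept_id)
    return [qm for qm, topics in SUBSCRIPTIONS.items() if topic in topics]
```

Because each department's topic appears in exactly one queue manager's subscription set, every event for a given dept drains to one place, which is the affinity being asked for, achieved by subscription rather than by BIND_ON_OPEN.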

essive (Novice, Joined: 15 Nov 2004, Posts: 24)
Posted: Wed Jan 31, 2007 11:26 am
Thank you all for the responses. The real reason (and madness) behind this is that I must be able to sequence my events for each Dept ID, so that Creates, Updates and Deletes occur in order. That is why I want a kind of destination affinity for each dept across the cluster.
Now, maybe I'm looking at this all wrong. Is there a better way to sequence messages in a clustered/scaled environment? We're talking 1 to 20 million records a day.

jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Wed Jan 31, 2007 11:40 am
If you're trying to synchronize these types of events across large periods of time - i.e. anything more than a few minutes - then you're in a whole world of hurt, and you need to add another large block of hours to your design time.
Otherwise, what you can do is set the Priority on Creates higher than on Updates or Deletes, and so on.
But then what if someone needs to do a Delete/Create instead of an Update? Or what if the user sends an Update without ever having sent a Create?
Your overall best approach is to ensure that every event is treated independently.
Maintaining message sequence across a large cluster is a complicated problem.
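A minimal sketch of the priority idea: MQ priorities run 0 to 9, and higher-priority messages are delivered ahead of lower-priority ones waiting on the same queue. The specific priority values, and the in-memory heap standing in for a real queue, are assumptions. Note the limitation jefflowrey points at: priority only reorders messages that are waiting together; it cannot help an Update that arrives after its Create was already consumed.

```python
import heapq

# Assumed priority assignment; MQ priorities are 0-9, higher served first.
PRIORITY = {"Create": 9, "Update": 5, "Delete": 5}

def delivery_order(messages):
    """Return (kind, body) pairs as a priority-aware queue would deliver
    them: highest priority first, arrival order within a priority level."""
    heap = [(-PRIORITY[kind], seq, kind, body)
            for seq, (kind, body) in enumerate(messages)]
    heapq.heapify(heap)
    out = []
    while heap:
        _, _, kind, body = heapq.heappop(heap)
        out.append((kind, body))
    return out
```

For example, if an Update, a Create and a Delete are all waiting, the Create is delivered first even though it arrived second, while the Update and Delete keep their relative arrival order.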

essive (Novice, Joined: 15 Nov 2004, Posts: 24)
Posted: Wed Jan 31, 2007 11:52 am
I agree - it is complex. My events will be independent Creates, Updates and Deletes; I just need each dept to go to the same destination. Are there any best practices for sequencing?

jefflowrey (Grand Poobah, Joined: 16 Oct 2002, Posts: 19981)
Posted: Wed Jan 31, 2007 11:58 am
Normally, I'd argue against having different queues holding the same kind of logical data for different business groups. But in this case, you kind of need it.
Also, you keep using the word "publish", but you haven't confirmed that you're actually using the pub/sub model. If you are, then you can put the department in the topic hierarchy and create an individual subscriber for each department ID. You could even then start to share queues (although I think it's a poor idea) using subscriber IDs. Each subscriber would receive messages for a particular department, and be tied to a particular queue on a particular queue manager in the cluster.
But you still have to deal with the case of requests arriving from the end user in a bad order.