indyjoe
Posted: Wed Jan 25, 2012 10:44 am Post subject: Message Broker and Put restriction question
Newbie
Joined: 25 Jan 2012 Posts: 5
Hello,
I tried to do a search and couldn't find much, other than advice not to check the depth of an input queue.
I have a flow that reads a message off an input queue, queries a database, and can put multiple messages on an output queue.
My flow can put messages onto the output queue faster than the application on the other side can read them. This is sometimes a problem because the data can get old. I would like to prevent a put to the output queue if the depth is over a threshold or if the messages are over a certain age. Is this reasonable to do in Message Broker?
I cannot speed up processing on the other end because this is a vendor application.
Would it be better to restrict the queue depth and adapt my flow to retry the put?
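For concreteness, that second option would amount to lowering MAXDEPTH on the output queue and having the putting side catch MQRC_Q_FULL and back off before retrying. The following is only a minimal sketch using the IBM MQ classes for Java, with hypothetical queue manager and queue names; the replies below question whether this is a good idea at all.
Code:
// Hypothetical sketch: put with retry when a depth-restricted queue reports MQRC_Q_FULL (2053).
// Queue manager and queue names are made up for illustration.
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class RetryingPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("BROKER.QM");      // bindings-mode connection
        MQQueue outQ = qmgr.accessQueue("VENDOR.REPLY.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);

        MQMessage msg = new MQMessage();
        msg.format = CMQC.MQFMT_STRING;
        msg.writeString("<reply>...</reply>");

        for (int attempt = 1; attempt <= 10; attempt++) {
            try {
                outQ.put(msg, new MQPutMessageOptions());
                break;                                              // put succeeded
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_Q_FULL) {
                    Thread.sleep(30_000);                           // queue at MAXDEPTH: back off and retry
                } else {
                    throw e;                                        // anything else is a real error
                }
            }
        }
        outQ.close();
        qmgr.disconnect();
    }
}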
adubya
Posted: Wed Jan 25, 2012 10:52 am Post subject:
Partisan
Joined: 25 Aug 2011 Posts: 377 Location: GU12, UK
Use MQ message expiry and have MQ remove "stale" messages?
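As a rough illustration of that suggestion (a sketch only; the queue manager and queue names are hypothetical), expiry is set per message in the MQMD before the put, and is measured in tenths of a second:
Code:
// Hedged sketch: give each reply a 24-hour expiry so MQ discards it if the vendor
// application has not read it in time. MQMessage.expiry is in tenths of a second.
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class ExpiringPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("BROKER.QM");      // hypothetical names
        MQQueue outQ = qmgr.accessQueue("VENDOR.REPLY.QUEUE", CMQC.MQOO_OUTPUT);

        MQMessage msg = new MQMessage();
        msg.format = CMQC.MQFMT_STRING;
        msg.expiry = 24 * 60 * 60 * 10;                             // 24 hours, in tenths of a second
        msg.writeString("<reply>...</reply>");

        outQ.put(msg, new MQPutMessageOptions());
        outQ.close();
        qmgr.disconnect();
    }
}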
indyjoe
Posted: Wed Jan 25, 2012 10:57 am Post subject:
Newbie
Joined: 25 Jan 2012 Posts: 5
They only become "stale" if another data update has occurred on the final DB before the message is read from my output queue.
indyjoe
Posted: Thu Jan 26, 2012 7:05 am Post subject:
Newbie
Joined: 25 Jan 2012 Posts: 5
Anyone have any ideas?
Back to top |
|
 |
Vitor |
Posted: Thu Jan 26, 2012 7:45 am Post subject: Re: Message Broker and Put restriction question |
|
|
 Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
|
indyjoe wrote:
Is this reasonable to do in Message Broker?

It's not reasonable to do at all. How can you calculate ahead of time what depth to stop at? Or how old is too old? How do you know if another DB update has happened?
_________________
Honesty is the best policy.
Insanity is the best defence.
Vitor
Posted: Thu Jan 26, 2012 7:55 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
indyjoe wrote:
Anyone have any ideas?

Show a little patience? We're all volunteers here.
So the situation is this:
A message arrives with some kind of database key. You read the database and use the result to send 1-n messages. These sit on an output queue waiting to be read off. Separately from this, some process updates the part of the database you read to produce the messages, meaning the messages you produced now contain the previous data.
You have serious process issues.
If this were me, I'd be inclined to move to some kind of pub/sub model where changes to the database result in a new retained publication (see the sketch after this post). Or perhaps a trigger on the database to detect the change and reissue your messages. Or get the consuming application to acknowledge what it has read and compare that with the current state, reissuing the messages if needed.
Or, most likely, get a different vendor with a better consuming application.
_________________
Honesty is the best policy.
Insanity is the best defence.
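To make the retained-publication idea concrete, here is a minimal hypothetical sketch using the IBM MQ classes for Java: each database change is published to a topic with MQPMO_RETAIN, so the queue manager keeps only the latest value for subscribers. The topic string and queue manager name are invented for illustration, and this is not anything the thread itself prescribes.
Code:
// Hedged sketch of a retained publication: each database change is published to a topic
// with MQPMO_RETAIN, so the queue manager keeps only the latest state for subscribers.
// Queue manager name and topic string are hypothetical.
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.MQTopic;
import com.ibm.mq.constants.CMQC;

public class RetainedPublisher {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("BROKER.QM");

        // Open the topic string for publication (no predefined topic object).
        MQTopic topic = qmgr.accessTopic("Corporate/Data/Item42", null,
                CMQC.MQTOPIC_OPEN_AS_PUBLICATION, CMQC.MQOO_OUTPUT);

        MQMessage state = new MQMessage();
        state.format = CMQC.MQFMT_STRING;
        state.writeString("<currentState>...</currentState>");

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_RETAIN | CMQC.MQPMO_FAIL_IF_QUIESCING; // keep only the last value

        topic.put(state, pmo);                                          // replaces any prior retained value
        topic.close();
        qmgr.disconnect();
    }
}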
mqjeff
Posted: Thu Jan 26, 2012 8:03 am Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
Rethink the problem?
It seems to me that you're creating messages that represent database updates, that you have a possibility of creating duplicate update messages, and that you're trying to code something that will "guess" whether an update message has been processed or not.
You could potentially use report messages to get notified when the consumer has gotten a given message.
You could potentially use expiry and report messages to get a copy of messages that have not been processed because they expired.
Or you could simply trust that your message will get delivered and processed, and take steps to ensure that you do not create a duplicate update message.
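For illustration only (names are hypothetical, and this is a sketch rather than anything the flow necessarily does), both report-message suggestions come down to setting report options in the MQMD before the put: MQRO_COD requests a confirm-on-delivery report when the consumer gets the message, and MQRO_EXPIRATION_WITH_FULL_DATA returns the whole message as a report if it expires unread.
Code:
// Hedged sketch: ask the queue manager for report messages so the sender can tell
// which updates were consumed and which expired unread. Names are hypothetical.
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class ReportingPut {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("BROKER.QM");
        MQQueue outQ = qmgr.accessQueue("VENDOR.REPLY.QUEUE", CMQC.MQOO_OUTPUT);

        MQMessage msg = new MQMessage();
        msg.format = CMQC.MQFMT_STRING;
        msg.expiry = 24 * 60 * 60 * 10;                        // 24 hours, in tenths of a second
        msg.report = CMQC.MQRO_COD                             // confirm-on-delivery report when consumed
                   | CMQC.MQRO_EXPIRATION_WITH_FULL_DATA;      // full copy back if it expires unread
        msg.replyToQueueName = "BROKER.REPORT.QUEUE";          // where the queue manager sends the reports
        msg.writeString("<update>...</update>");

        outQ.put(msg, new MQPutMessageOptions());
        outQ.close();
        qmgr.disconnect();
    }
}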
lancelotlinc
Posted: Thu Jan 26, 2012 8:14 am Post subject:
Jedi Knight
Joined: 22 Mar 2010 Posts: 4941 Location: Bloomington, IL USA
Another way to accomplish this is to use a SOAPRequest node to call a web service which does the updates. Both the MQ solution and the SOAP solution have trade-offs to consider.
I agree with mqjeff that you have not designed this thoroughly enough. Push back on your System Architect and have him/her redo the design.
_________________
http://leanpub.com/IIB_Tips_and_Tricks
Save $20: Coupon Code: MQSERIES_READER
smdavies99
Posted: Thu Jan 26, 2012 8:39 am Post subject:
Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
It seems that a few of us have missed this bit.
Quote:
I cannot speed up processing on the other end because this is a vendor application.

This is IMHO the biggest issue here. Without knowing whether the vendor application can handle duplicates etc., anything we do could be like 'doing No 1's into the wind'.
Unless the consuming application can be changed/speeded up, then:
He can't move to a Pub/Sub model.
He can't use any of the other methods mentioned here.
However, I wonder if the OP is missing something.
He is sending a batch of messages to the consumer app via a queue.
Surely this is what queues are for? Well, they were when I learnt Queuing Theory in 1975.
If the consumer app can handle duplicates then it might not matter if it can't consume them quickly enough. It will eventually, UNLESS the amount of data is so great that the consumer will never empty the queue over a complete work cycle. If that is the case, then that is a whole different problem.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
Vitor
Posted: Thu Jan 26, 2012 9:09 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
smdavies99 wrote:
It seems that a few of us have missed this bit.
Quote:
I cannot speed up processing on the other end because this is a vendor application.

I got that! Yay me!

smdavies99 wrote:
He can't move to a Pub/Sub model.
He can't use any of the other methods mentioned here.

smdavies99 wrote:
He is sending a batch of messages to the consumer app via a queue.
Surely this is what queues are for? Well, they were when I learnt Queuing Theory in 1975.
If the consumer app can handle duplicates then it might not matter if it can't consume them quickly enough. It will eventually, UNLESS the amount of data is so great that the consumer will never empty the queue over a complete work cycle. If that is the case, then that is a whole different problem.

I think the issue here is that only one trigger message arrives, resulting in output to the vendor application which includes, or is based on, the contents of the database. If an unconnected process then updates the database, there are no duplicates for the vendor application to process (and handle or not), just messages containing the old information.
It's interesting (and perhaps telling) that the OP's original solution was to delete such messages before they could be processed. This implied to me that their loss was not a problem because they would be reproduced by a later transaction.
The bottom line is that this design is hosed up.
_________________
Honesty is the best policy.
Insanity is the best defence.
smdavies99
Posted: Thu Jan 26, 2012 9:25 am Post subject:
Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
Vitor wrote:
I got that! Yay me!
The bottom line is that this design is hosed up.

Well done, Sir. I couldn't have said it better.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
mqjeff
Posted: Thu Jan 26, 2012 9:29 am Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
I think if you read what I posted, you'll see that it doesn't imply any changes to the consumer.
Otherwise, I think the only part of this that is necessarily wrong is the notion of deleting previously sent messages.
If the database state at time T0 is sent as a message, and then the state is changed at time T1, and the state is again sent as a message, then the consumer will eventually represent the correct state of the database, after it has processed both the T0 and the T1 messages.
It's hard to see why there is a need to delete the T0 message at the time of sending the T1 message, if indeed T0 has not yet been processed.
indyjoe
Posted: Thu Jan 26, 2012 10:47 am Post subject:
Newbie
Joined: 25 Jan 2012 Posts: 5
I would like to describe the situation a little better.
The request comes from our vendor application. A Message Broker process reads the data and goes out to a Corporate Data Store that is updated nightly. The vendor application then updates a separate Oracle database.
The issue that can occur is that sometimes we have a larger than normal volume on this process. What can then happen is that the data from the Message Broker process is older than what comes in on a separate queue into the application. This can only occur if the data sits on the reply queue for more than 24 hours. The process can only handle about 10K an hour, and sometimes we need to process 500K; that means 50 hours. Mind you, there are other things going on, so we can't always run 24 hours a day.
This is why I was looking to restrict the number of messages on the queue at one time.
Thank you guys for all your input, and sorry for my impatience.
This is what I was envisioning for the flow:
Vendor app sends request -> Message Broker process reads request -> check reply queue depth; if greater than 10K, wait until it is lower -> query Corporate Data Store to get current data and generate message for reply queue -> vendor app processes the data and updates the vendor DB.
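For reference, the "check reply queue depth" step would amount to an inquire on the queue's current depth, something like the hypothetical sketch below with the IBM MQ classes for Java. The names are invented, the 10K threshold is the one from the post, and the replies that follow argue against building the flow this way.
Code:
// Hedged sketch of the "wait while the reply queue is deep" step the OP describes.
// Queue manager and queue names are hypothetical.
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class DepthGate {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("BROKER.QM");
        MQQueue replyQ = qmgr.accessQueue("VENDOR.REPLY.QUEUE",
                CMQC.MQOO_INQUIRE | CMQC.MQOO_FAIL_IF_QUIESCING);

        // Block until the backlog drops below the 10K threshold mentioned in the post.
        while (replyQ.getCurrentDepth() > 10_000) {
            Thread.sleep(60_000);                               // re-check once a minute
        }

        replyQ.close();
        qmgr.disconnect();
        // ...only now query the Corporate Data Store and put the reply messages...
    }
}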
Vitor
Posted: Thu Jan 26, 2012 11:32 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
indyjoe wrote:
Vendor app sends request -> Message Broker process reads request -> check reply queue depth; if greater than 10K, wait until it is lower -> query Corporate Data Store to get current data and generate message for reply queue -> vendor app processes the data and updates the vendor DB.

Aside from the fact that you can't do that with WMB or WMQ, the problem here is that your 10K is an estimate. Suppose it's running slowly because something's wrong with the box and it only manages 9K? Then you've got 1,000 messages which you think are right (because you've implemented this system somehow) but are in fact wrong, or you've got WMB continuously retrying to squeeze things in. So this solution gives you a false sense of security.
I stand by my assertion that the process is hosed. How can you process new messages from this separate queue when you've not finished processing all the updates from the WMB queue? That's ridiculous. Those new requests should be queued (pun intended) and made to wait until all the updates are done.
If the response to that is "we can't wait 50 hours for this", then I riposte with my earlier comment about you needing a better vendor.
_________________
Honesty is the best policy.
Insanity is the best defence.
smdavies99
Posted: Thu Jan 26, 2012 12:31 pm Post subject:
Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
To add my thruppence worth:
Being general:
It appears that this is another case where no one has done a proper 'end-to-end' design. If they had, then the issue of volume and the capacity of the third-party application to consume the data would have come up at that point.
Once upon a time, I used to hear a lot about data rates, processing volumes, etc.
These days, I hear about it very infrequently.
When I raised the issue not so long ago, I was given some really dirty looks and told to stop questioning the designs of the so-called Architects.
The project failed to process the expected volumes.
Being specific:
Because of the above, the previous comments about the design being hosed are perfectly correct.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.