MQSeries.net Forum Index » General Discussion » controlling dequeue rate

controlling dequeue rate
Chrismcc
PostPosted: Fri Nov 06, 2009 8:27 am    Post subject: controlling dequeue rate

Hi All

Is there a way to configure my MQS infrastructure such that I can control the rate at which messages are dequeued?

My customer dumps 300k messages in a queue and our app dequeues them as fast as it can, which ends up killing the backend.

Even tuning down our app as much as we can, we still cannot constrict the message flow enough.

So I wonder if there is anything I can do in the MQS infrastructure to even out the flow, e.g. setting up an alias or mirror queue and having the QM limit the flow between them?

All ideas welcome!

Cheers
Chris
Vitor
PostPosted: Fri Nov 06, 2009 8:52 am    Post subject: Re: controlling dequeue rate

Chrismcc wrote:
Even tuning down our app as much as we can, we still cannot constrict the message flow enough.


I find that interesting. Not only that you have so little control over your app but also that the backend runs that slowly.

Chrismcc wrote:
if there is anything I can do in the MQS infrastructure to even out the flow, e.g. setting up an alias or mirror queue and having the QM limit the flow between them?


WMQ will only deliver messages when your app gets them off the queue - there's nothing in WMQ to only respond to a get request if it's n seconds after the last one.

Given that an alias queue is just a pointer to a real one, and even a mirror queue is going to be local to your queue manager, I think you'll find the queue manager's ability to move the messages around will still outpace the required speed.

Chrismcc wrote:
All ideas welcome?


You need to get on top of why your app is so much faster than the backend. If you can't fix that, two possible options are:

- latch the app and the backend together, so the backend can signal when it's ready
- put another queue in, so your app reads its current queue and writes to a new queue from which the backend reads

Other, possibly better, solutions are undoubtedly available.
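The intermediate-queue idea can be sketched with Python's standard-library queues standing in for WMQ queues. This is purely illustrative (none of it is WMQ API); a real relay would MQGET from one queue and MQPUT to the other, with the sleep providing the pacing:

```python
import queue
import threading
import time

def throttled_relay(inbound, outbound, max_per_second, stop_event):
    """Move messages from inbound to outbound at no more than
    max_per_second, smoothing bursts for a slow downstream consumer."""
    interval = 1.0 / max_per_second
    while not stop_event.is_set():
        try:
            # Short timeout so the relay can notice the stop signal
            msg = inbound.get(timeout=0.1)
        except queue.Empty:
            continue
        outbound.put(msg)
        time.sleep(interval)  # pace deliveries to the backend's queue
```

The burst of 300k messages then piles up harmlessly on the intermediate queue while the backend sees a bounded arrival rate.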
_________________
Honesty is the best policy.
Insanity is the best defence.
elvis_gn
PostPosted: Fri Nov 06, 2009 10:19 am    Post subject:

Hi Chrismcc,

Rather than making the app slower, why don't you try making the backend faster? I know it's easier said than done...

Anyway, what is the backend app? How is it picking messages from MQ? Is it multithreaded? Is it sync or async? Is it fire-and-forget or request-reply?

Regards.
Vitor
PostPosted: Fri Nov 06, 2009 10:31 am    Post subject:

elvis_gn wrote:
Anyway, what is the backend app? How is it picking messages from MQ?


I was working under the impression that the app was picking messages from the queue and using RPC (or another non-WMQ method) to pass them to the backend, hence my suggestion to put a queue between them.

If the backend is pulling messages itself, then it shouldn't matter how fast the app is running, as messages will just pile up on the backend's queue.

And if the backend has such poor impulse control that it picks up messages as soon as the app puts them (i.e. it's on a TRIGGER(EVERY)), then I've just had a better idea about how to fix this problem....
Chrismcc
PostPosted: Thu Nov 12, 2009 12:10 pm    Post subject:

Hi All

thanks for the replies.

Re-reading my question, I may have been a little ambiguous.

The frontend is a Unisys system that dumps a batch of 300,000 msgs (each a couple of KB in size) into a queue.

The client is Microsoft's MQ client adapter (base client), which reads continually under a cursor until the queue is empty.

Unfortunately we cannot reduce the batch size at the Unisys end to fewer than 300,000.

Even tuning the MQ adapter to read messages one at a time, it still overwhelms our upstream db app.

I wish we could scale up the performance of our db app, but that would be a big job.

Ideally we would even out the flow from Unisys so we wouldn't have to deal with such a flood of messages, but the customer is resistant to changing this part of the system.

Cheers
Chris
PeterPotkay
PostPosted: Thu Nov 12, 2009 12:28 pm    Post subject:

Chrismcc wrote:
Even tuning the MQ adapter to read messages one at a time, it still overwhelms our upstream db app.

So if 100 messages hit the queue at the same time, it gets overwhelmed? If 10 hit at the same time? Two at the same time?

If it's doing one message at a time, what does it care whether there are 299,999, 10, 1 or 0 more messages left behind in the queue?
_________________
Peter Potkay
Keep Calm and MQ On
bruce2359
PostPosted: Thu Nov 12, 2009 1:43 pm    Post subject:

Quote:
Even tuning the MQ adapter to read messages one at a time, it still overwhelms our upstream db app.

So, one message overwhelms your db app? There's not much you can do, other than stopping the requesting app from sending any messages. If the db server is under-provisioned, upgrade the hardware.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
Michael Dag
PostPosted: Thu Nov 12, 2009 2:16 pm    Post subject:

What can you change? What is under your control?
What's wrong with the MQ-DB adapter? Does it have parameters?

Anything to go on?

Are both applications connecting to the same queue manager as clients?
_________________
Michael



MQSystems Facebook page
gbaddeley
PostPosted: Thu Nov 12, 2009 4:41 pm    Post subject: Re: controlling dequeue rate

Chrismcc wrote:
Is there a way to configure my MQS infrastructure such that I can control the rate at which messages are dequeued.


MQ does not have any throttling features built in. It is designed to process messages as fast as possible.

Quote:
My customer dumps 300k messages in a queue and our app dequeues them as fast as it can, which ends up killing the backend.
Even tuning down our app as much as we can, we still cannot constrict the message flow enough.


How about adding a configurable sleep just before the MQGET? Tune the sleep value to match the poor performance of the back end. A bit kludgy, but it might work.

A more complicated solution is to store the messages in a DB and then drip feed them to the back end in a separate process.
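The sleep-before-get idea can be sketched as a small Python loop. Here `get_message` and `handle` are hypothetical stand-ins for the adapter's MQGET and the hand-off to the db, not any real adapter API:

```python
import time

def paced_consumer(get_message, handle, delay_seconds, max_messages=None):
    """Drain a queue one message at a time, sleeping before each get
    so the downstream system sees a bounded arrival rate.

    get_message() returns the next message, or None when the queue
    is empty. Returns the number of messages processed."""
    processed = 0
    while max_messages is None or processed < max_messages:
        time.sleep(delay_seconds)   # the configurable throttle
        msg = get_message()
        if msg is None:
            break                   # queue drained
        handle(msg)
        processed += 1
    return processed
```

With `delay_seconds` tuned to the back end's sustainable rate, a 300,000-message burst just takes longer to drain instead of arriving all at once.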
_________________
Glenn
Chrismcc
PostPosted: Fri Nov 13, 2009 7:32 am    Post subject:

Hi All

Again thanks for your replies and help.

To add more clarification: even configuring Microsoft's MQS adapter to pull off messages one at a time (I believe this is done on a single thread), it still does this so quickly, and returns for more so quickly, that all 300,000 messages in the queue are consumed and sent to the db app.

I think the final statement is the kicker here: MQS is designed to process as fast as possible, and the MQS adapter adheres to this design philosophy.

Thanks for all your help!

Cheers
Chris
bruce2359
PostPosted: Fri Nov 13, 2009 7:39 am    Post subject:

Can you explain exactly what you mean when you say that the db app is overwhelmed? High CPU? High I/O?

Is the db app on the same server?
mqjeff
PostPosted: Fri Nov 13, 2009 8:04 am    Post subject:

Also, are you issuing a DB commit for each message, or piling all of them in the same transaction?
PeterPotkay
PostPosted: Fri Nov 13, 2009 8:38 am    Post subject:

Is this BizTalk?
Chrismcc
PostPosted: Fri Nov 13, 2009 10:48 am    Post subject:

Hi All

Yep, BizTalk is part of the solution, but there are other components/web services that get called.

Part of the challenge is that BizTalk consumes the messages as fast as they arrive and then invokes the rest of the solution components.

We do not have any problems with respect to Tx enlistment. It is all about high CPU and I/O load on SQL and the backend components.

The solution has a high maximum sustained throughput, several multiples of production requirements, but it cannot elegantly handle a sudden burst of 300,000 msgs.

A burst of 3,000 is no sweat, 30,000 is easy, but a 300,000-msg instantaneous burst is a problem.

However, I am getting the impression that there isn't anything in the WMQ architecture that I can tweak to throttle or even out this flood of messages.

It seems I must figure out another way to go.
Cheers
Chris
Vitor
PostPosted: Fri Nov 13, 2009 11:57 am    Post subject:

Chrismcc wrote:
However, I am getting the impression that there isn't anything in the WMQ architecture that I can tweak to throttle or even out this flood of messages.


No.

As you say, and as I and others indicated earlier in this thread, WMQ will make messages on a queue available as soon as they are delivered. There's nothing to make an application requesting a message wait before the message is returned. Conversely, if no application requests the message, it'll sit there forever.

(Yes, yes, I know that's simplistic...)

So the software is doing what it's designed to do - deliver messages without delay. You'll need to insert some kind of throttle between the adapter & the db.
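One generic way to build such a throttle (my suggestion, not anything WMQ or BizTalk provides) is a token bucket, which absorbs small bursts while capping the sustained rate handed to the db:

```python
import time

class TokenBucket:
    """Token-bucket throttle: allows short bursts up to `capacity`
    while capping the sustained rate at `rate` messages per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full, so a small burst passes
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill tokens in proportion to elapsed time, up to capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)
```

Calling `bucket.acquire()` before each message is handed downstream means a burst of 3,000 or 300,000 looks the same to the db: at most `capacity` messages at once, then a steady `rate` per second.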
Powered by phpBB © 2001, 2002 phpBB Group

Copyright © MQSeries.net. All rights reserved.