MQSeries.net Forum Index » General IBM MQ Support » IPPROCS Practical Limit

SAFraser
PostPosted: Thu Jul 26, 2012 3:04 pm    Post subject: IPPROCS Practical Limit

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

We have an application in stress test right now that is not consuming quickly enough to match message input. The most recent test had 605 listeners on a single queue and the results were not satisfactory. The uncommitted message count in the queue is slowly increasing, too.

The developer is going to double the listener count, so we will have 1200+ IPPROCS on a single queue.

The queue manager itself has sufficient MaxChannels to handle this. We are running on very nice T5120 Solaris servers, plenty of system resources. The messages are non-persistent and are not under syncpoint control. (MQ v7.0.1.6.)

I don't feel that increasing the listener count is best practice. If the consuming application cannot be optimized any further, then maybe there should be multiple input queues.

What are the pitfalls of such a high IPPROCS count? I wonder whether we have file-locking contention on the queue file, and whether that might be the cause of the uncommitted messages.

Any thoughts will be appreciated.
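For reference, these are the numbers I am watching via runmqsc (the queue name is a placeholder):

```
DISPLAY QSTATUS('APP.IN.QUEUE') TYPE(QUEUE) CURDEPTH IPPROCS UNCOM
```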
exerk
PostPosted: Fri Jul 27, 2012 12:08 am

Jedi Council

Joined: 02 Nov 2006
Posts: 6339

You state "...The uncommitted message count in the queue is slowly increasing, too..." and later "...The messages are non-persistent and are not under syncpoint control...". Are the messages arriving via a RCVR channel? If so, what about dropping the batch size down to a low single figure to see what effect that has on the uncommitted messages?
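For example (channel name is a placeholder; BATCHSZ is negotiated down to the lower of the two ends, so altering the sender side is enough):

```
ALTER CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) BATCHSZ(5)
```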
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
fjb_saper
PostPosted: Fri Jul 27, 2012 5:13 am    Post subject: Re: IPPROCS Practical Limit

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20756
Location: LI,NY

SAFraser wrote:
We have an application in stress test right now that is not consuming quickly enough to match message input. The most recent test had 605 listeners on a single queue and the results were not satisfactory. The uncommitted message count in the queue is slowly increasing, too.


Somewhere in the manuals there is a reference to performance with this kind of pattern, and it states that for optimal performance when multiple threads are doing gets, it is best to have the gets under syncpoint. I can't remember exactly where, or I would have provided the link (I believe it was a developerWorks article). Obviously read ahead is not a concern for you (you cannot have syncpoint with read ahead).

You are talking about a listener count, which makes me think of a J2EE app with message beans. Also take into account that, depending on other resource usage (DB etc.), more is not always better. There is a threshold above which any additional thread affects overall throughput negatively. At that point it is unclear (i.e. to be tested) whether even adding another instance of the consuming application will help, as the resource bottleneck is downstream in the consuming application.

Another trick would be to check the channels, make sure shareconv is set to 1, and see whether it makes a difference. Remember you have to STOP the channel (status STOPPED) and restart it after the change for it to take effect. I would not want the socket to block waiting while I am expecting a message on one queue and a message is available on another...
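For example (channel name invented):

```
ALTER CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) SHARECNV(1)
STOP CHANNEL('APP.SVRCONN') STATUS(STOPPED)
START CHANNEL('APP.SVRCONN')
```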

And please define "not satisfactory":

  • qdepth vs. number of uncommitted messages in the queue
  • average message size
  • average dequeue rate
  • average processing time
  • number of consuming apps
  • number of threads per consuming app
  • average time to process the message (or to process x messages)


Make sure to verify the variation of the last 3 parameters to gauge overall throughput...
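As a toy sketch of that threshold effect (all the numbers below are invented, not measured):

```python
# Toy model of the threshold effect: aggregate dequeue rate scales with
# consumer threads only until a downstream resource (e.g. the DB) saturates.
# All rates here are invented for illustration.

def throughput(threads, per_thread_rate, db_limit):
    """Messages/sec consumed: linear in threads, capped by the downstream limit."""
    return min(threads * per_thread_rate, db_limit)

# Assume 50 msgs/sec per thread and a database capped at 5,000 inserts/sec:
for n in (10, 100, 605, 1210):
    print(n, throughput(n, 50.0, 5000.0))
# At and beyond 100 threads the DB is the bottleneck; doubling listeners
# from 605 to 1210 adds nothing, while the extra threads still cost resources.
```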

Hope it helps
_________________
MQ & Broker admin


Last edited by fjb_saper on Fri Jul 27, 2012 5:29 am; edited 3 times in total
mqjeff
PostPosted: Fri Jul 27, 2012 5:16 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

exerk wrote:
...what about dropping the batch size down to a low single figure to see what effect that has on the uncommitted messages?

If the app is proven to not be using a syncpoint, then the only way you get uncommitted messages is from the channel.

You should monitor the channel and the log activity during this test, as well, to see if you're getting slower movement across the channel than expected.

As the retired jarhead says, altering the batch size of the channel is a good way to alter its performance.
PeterPotkay
PostPosted: Fri Jul 27, 2012 2:14 pm

Poobah

Joined: 15 May 2001
Posts: 7722

Do they really have that many listeners on the queue? Or do they have hokey code that freaks out when it gets an abnormal MQRC (what?!? 2033?!?), doesn't close the queue or disconnect from the QM, but instead just opens the queue again. And the next 'error' opens it again. And again.
_________________
Peter Potkay
Keep Calm and MQ On
fjb_saper
PostPosted: Fri Jul 27, 2012 3:45 pm    Post subject: Re: IPPROCS Practical Limit

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20756
Location: LI,NY

SAFraser wrote:
We have an application in stress test right now that is not consuming quickly enough to match message input. The most recent test had 605 listeners on a single queue and the results were not satisfactory. The uncommitted message count in the queue is slowly increasing, too.


Shirley knows her stuff well, so I would take that to mean that the qdepth is greater than the uncommitted count and the consuming application is just not keeping up with the rate at which messages are being put to the queue.

This may be due to varying factors as stated in my previous post.
The worst kind of scenario would be a DB bottleneck that just doesn't allow more than x inserts per minute while she is getting y > x messages per minute on the queue...
SAFraser
PostPosted: Mon Jul 30, 2012 4:01 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

Gentlemen,

I apologize for the delay in responding to all of your very helpful posts. Let me answer a few of your inquiries, then give an update on our progress.

This is a J2EE client, using the MQ provider that is installed natively with WebSphere Application Server 7. We are using WAS Activation Specifications (with Message Beans) to listen/consume messages, as they are the IBM implementation of JCA 1.5 (IBM has stabilized listener ports). The message producers are running in remote WAS JVMs.

There is no syncpoint control. These messages are small and are used to track the duration of a transaction in certain applications. These "performance log" messages are written from several sets of JVM clusters, then consumed by another JVM cluster and inserted into a database. The developers are not measuring the performance of MQ at all, but rather the performance of the applications.

Messages are traveling via svrconn channels. All servers are in the same data center.

Yes, they really have that many listeners on the queue. In our stress test environment, which is sized at 25% of production, we are seeing enqueue/dequeue rates around 15,000 per minute. At that rate, even a slight drop in the dequeue rate soon fills the queue. As fjb_saper has correctly surmised (and thanks for giving me credit for not being a bonehead), the dequeue rate does not match the enqueue rate, and thus it is a performance issue for us.
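As a rough sanity check, Little's law relates those numbers: consumers needed ≈ arrival rate × per-message processing time. The service time and headroom factor below are made-up placeholders, not measured figures:

```python
# Rough consumer sizing via Little's law: threads ≈ arrival_rate * service_time.
# The 15,000 msgs/min figure is the observed test rate; the per-message
# service time and the 50% headroom factor are hypothetical placeholders.
import math

def consumers_needed(msgs_per_min, service_time_sec, headroom=1.5):
    arrival_per_sec = msgs_per_min / 60.0
    return math.ceil(arrival_per_sec * service_time_sec * headroom)

print(consumers_needed(15000, 0.1))   # at 100 ms/message -> 38 threads
```

If that kind of arithmetic says a few dozen threads should suffice, hundreds of listeners points at a per-message cost far higher than assumed, or a bottleneck elsewhere.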

The developer has asked us to increase the thread count to match the server session count for the Activation Specs. In other words, if we allow 150 instances of the listener, then we set the threads to the same. We should have done that from the start (this is rather new for us). This is a proper technical step, but I doubt it will improve performance.

Our friends in development sent along some actual errors from the SystemOut log: "marked for browse message not found". Aha! A clue!

From reading, there are two possible fixes to try (neither of which involves putting 1200 listeners on a single queue -- geesh).

1) We are at WAS 7.0.0.15, which has MQ 7.0.1.3. This MQ client version has a bug (fixed in 7.1.0.6) where a server session tries to access a marked message and can't find it. The workaround is to set SHARECNV to 0.

2) Increase the queue manager MARKINT from its default of 5 seconds to something greater.

If we are manifesting the bug, #1 would help. If the application is slow, then #2 would be a band-aid for a slow application (or, as fjb_saper mentions, a DB bottleneck). Both steps have consequences, of course, which we must monitor and assess.
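In runmqsc terms, the two candidate changes look like this (the channel name is a placeholder; MARKINT is specified in milliseconds):

```
* Fix 1 workaround: disable shared conversations on the client channel
ALTER CHANNEL('APP.SVRCONN') CHLTYPE(SVRCONN) SHARECNV(0)
* Fix 2: raise the browse-mark interval from its 5000 ms default
ALTER QMGR MARKINT(15000)
```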

The developer has agreed to take a measured approach. Stay tuned. More excitement to follow. THANKS SO MUCH for the discussion, keep it coming!

Shirley

PS I have URLs for Activation Spec overview and the two fixes mentioned above. Happy to post them if there is any interest.
SAFraser
PostPosted: Mon Jul 30, 2012 5:51 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

fjb_saper, et al.,

Do you think there is a practical difference between setting SHARECNV to 1 vs. 0? Perhaps an under-the-covers difference in the way in which the channel instantiates?

Enquiring minds want to know.
fjb_saper
PostPosted: Mon Jul 30, 2012 6:08 am

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20756
Location: LI,NY

SAFraser wrote:
fjb_saper, et.al.,

Do you think there is a practical difference between setting SHARECNV to 1 vs. 0? Perhaps an under-the-covers difference in the way in which the channel instantiates?

Enquiring minds want to know.

Setting shareconv to 0 gives you V6 behavior (half duplex).
Setting shareconv to 1 gives you V7 behavior (full duplex), but still only one conversation per socket...
Andyh
PostPosted: Wed Aug 29, 2012 3:22 am

Master

Joined: 29 Jul 2010
Posts: 239

The queue manager is capable of handling massively higher non-persistent message rates without using anywhere near this number of consumer threads. If the processing time in the application is high, then a large number of consumers might still be required.
When dealing with small non-persistent messages it is important that messages are delivered directly from the MQPUT to a waiting MQGET.
You might find that using non-ASF mode very significantly increases the percentage of messages that are delivered directly, and increases throughput massively. You will also need to ensure that NPMSPEED(FAST) is specified on any channels over which these messages are delivered.
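For example (channel name is a placeholder):

```
ALTER CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) NPMSPEED(FAST)
```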
bruce2359
PostPosted: Wed Aug 29, 2012 4:07 am

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

ASF-mode vs. non-ASF mode explanation here: http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=%2Fcom.ibm.websphere.base.doc%2Finfo%2Faes%2Fae%contact admin.html
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
mqjeff
PostPosted: Wed Aug 29, 2012 4:53 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

And for the version of WAS that Shirley is *actually* using...

http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=%2Fcom.ibm.websphere.base.doc%2Finfo%2Faes%2Fae%contact admin.html

not that she couldn't find that on her own...
SAFraser
PostPosted: Wed Aug 29, 2012 5:57 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

Thank you everyone! We are knee deep in alligators due to code releases, but I will digest this excellent link next week. I was absolutely unfamiliar with this topic.

(The code that prompted this question was deployed but has been turned off; it's not running, due to the dreadful performance during testing. So this will heat up again in the next week or two when testing resumes.)

You all rock.
Vitor
PostPosted: Wed Aug 29, 2012 6:30 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

SAFraser wrote:
You all rock.


Sometimes backward and forwards, whimpering quietly.
_________________
Honesty is the best policy.
Insanity is the best defence.