MQ v7.0.1.8 Linux x86_64 - FDC generated - Probe ID ZS402020
bruce2359
PostPosted: Wed Sep 19, 2012 5:25 am

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

vanushreevyas wrote:
Is there any limit to the number of clusters that a queue manager can be part of? Each of the 8 queue managers here is a member of between 230 and 410 clusters. Not sure if that kind of load is causing the repos process problems.

Here you say there are (only) eight queue managers. Yet in your most recent post ...
vanushreevyas wrote:
...we have more than 800 MQ servers on our estate, with one or more queue managers on them.

Simple math (my limit) is that you have somewhere between 800 and 1600 qmgrs.

Did I miss something?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
nathanw
PostPosted: Wed Sep 19, 2012 5:31 am

Knight

Joined: 14 Jul 2004
Posts: 550

vanushreevyas wrote:
we have more than 800 MQ servers on our estate, with one or more queue managers on them. The queue managers on this server are in MQ clusters with quite a few of the other MQ queue managers across the rest of the estate. Hence so many clusters.


Wow, I wish I had been the salesman for that sale.
_________________
Who is General Failure and why is he reading my hard drive?

Artificial Intelligence stands no chance against Natural Stupidity.

Only the User Trace Speaks The Truth
vanushreevyas
PostPosted: Wed Sep 19, 2012 5:31 am

Novice

Joined: 28 Nov 2011
Posts: 20

Agree with you that the number of clusters should not have been so high; probably the design was not well thought through. But we are already in this situation now.
Looking for pointers to prove that we can safely blame the number of clusters, and the associated traffic load, for the amqrrmfa process consuming high CPU.
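
A rough way to gather that evidence on one of the affected servers might look like this (a sketch only; QM1 is a placeholder for your queue manager name, and the grep on the name assumes it appears in the amqrrmfa arguments, as it normally does via -m):

Code:
# CPU use of the cluster repository manager for this queue manager
ps -eo pcpu,pmem,args | grep '[a]mqrrmfa' | grep 'QM1'

# Approximate count of clusters this queue manager is currently a member of
echo "DIS CLUSQMGR(QM1) CLUSTER" | runmqsc QM1 | grep -c 'CLUSTER('

# Work queued up for amqrrmfa: depth of the cluster command queue
echo "DIS QL(SYSTEM.CLUSTER.COMMAND.QUEUE) CURDEPTH" | runmqsc QM1

If the CPU figure tracks the cluster count and the command queue depth across the eight queue managers, that is the correlation being asked for.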
exerk
PostPosted: Wed Sep 19, 2012 5:35 am

Jedi Council

Joined: 02 Nov 2006
Posts: 6339

vanushreevyas wrote:
Looking for pointers to prove that we can safely blame the number of clusters, and the associated traffic load, for the amqrrmfa process consuming high CPU.

Suggest to management that you rationalise the clusters into something more manageable and documentable - I'll wager you can't easily tell which clusters any given queue manager is in, or which queue managers are full repositories (FRs) for a given cluster.
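
Something like this against each queue manager would be a starting point for that map (a sketch only; QM1 and the namelist name are placeholders):

Code:
# Clusters this queue manager holds a full repository for (REPOS / REPOSNL)
echo "DIS QMGR REPOS REPOSNL" | runmqsc QM1

# If REPOSNL names a namelist, list the clusters in it
echo "DIS NAMELIST(MY.REPOS.NL) NAMES" | runmqsc QM1

# Every cluster queue manager QM1 knows about; QMTYPE(REPOS) marks the full repositories
echo "DIS CLUSQMGR(*) CLUSTER QMTYPE" | runmqsc QM1

Run that across the estate and you have the raw material for the membership picture you currently lack.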
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
Vitor
PostPosted: Wed Sep 19, 2012 6:18 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

vanushreevyas wrote:
Looking for pointers to prove that we can safely blame the number of clusters, and the associated traffic load, for the amqrrmfa process consuming high CPU.


The amqrrmfa process is principally concerned with cluster management and replication. I don't think you need a pointer to the fact that with that number of clusters these functions are going to consume a lot of CPU.
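
To get a feel for how much repository state that means per queue manager, something like this gives a crude count (a sketch only; QM1 is a placeholder):

Code:
# Cluster queue manager records this queue manager's repository currently holds
echo "DIS CLUSQMGR(*) CLUSTER" | runmqsc QM1 | grep -c 'CLUSQMGR('

# Cluster queue records it currently holds
echo "DIS QC(*) CLUSTER" | runmqsc QM1 | grep -c 'QUEUE('

Every one of those records is something amqrrmfa has to maintain, refresh and republish as things change.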
_________________
Honesty is the best policy.
Insanity is the best defence.
nathanw
PostPosted: Wed Sep 19, 2012 6:23 am

Knight

Joined: 14 Jul 2004
Posts: 550

Vitor wrote:
vanushreevyas wrote:
Looking for pointers to prove that we can safely blame the number of clusters, and the associated traffic load, for the amqrrmfa process consuming high CPU.


The amqrrmfa process is principally concerned with cluster management and replication. I don't think you need a pointer to the fact that with that number of clusters these functions are going to consume a lot of CPU.




I am by no means an expert on clustering, but I have been involved in enough of it in the past to know that the numbers being talked about here would need serious monitoring, and an environment specced up to handle them.

Remember: just because you can do something does not mean you should. E.g. if the maximum number of handles is 255, that does not mean you should run at 255.
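
(To stretch that handles example: the limit is an ordinary queue manager attribute you can check, and raise only for a demonstrated need. A sketch, with QM1 as a placeholder:

Code:
# Current maximum number of open handles per connection
echo "DIS QMGR MAXHANDS" | runmqsc QM1

# Raise it only because you need to, not because you can
echo "ALTER QMGR MAXHANDS(512)" | runmqsc QM1

The same reasoning applies to cluster membership.)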
_________________
Who is General Failure and why is he reading my hard drive?

Artificial Intelligence stands no chance against Natural Stupidity.

Only the User Trace Speaks The Truth
Vitor
PostPosted: Wed Sep 19, 2012 6:27 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

vanushreevyas wrote:
1) why so many clusters? - we have more than 800 MQ servers on our estate, with one or more queue managers on them. The queue managers on this server are in MQ clusters with quite a few of the other MQ queue managers across the rest of the estate. Hence so many clusters.


This does not explain why you have so many clusters; it explains what the clusters you have are doing. My question was, accepting that you have 800 - 1600 queue managers in your estate, why are there so many clusters in use?

vanushreevyas wrote:
2) why so many clusters? - Because of the number of queue managers on the estate, and the number of MQ interfaces across the entire MQ estate.


Again, that's not an answer. Having a large number of queue managers doesn't imply a large number of clusters, and if you have a number of interfaces, that should be driving the number of clusters down, not up.

vanushreevyas wrote:
3) what requirement does this meet? - We had more than 1000 MQ servers, with each application having its own server and queue manager.


Which is nothing to do with which clusters the queue manager is in.

vanushreevyas wrote:
We have tried to reduce the number of MQ servers by sharing one server among a few applications. This reduces the need for applications to have an MQ server installed, as they can use the MQ client.


Granted, but that's nothing to do with clusters either. If the applications want (or need) to have a local queue manager, that's fine. We're discussing clusters here, not the number of queue managers. 1600 queue managers is large but not excessive (except in terms of license costs!)

vanushreevyas wrote:
4) what idiot thought this was a good idea? - I am myself looking for that idiot.


You should not be compounding the idiocy by just saying "we are already in this situation now" and trying to apply a fix to the current problem. You should be fixing the topology before you need to bounce queue managers every hour.
_________________
Honesty is the best policy.
Insanity is the best defence.