MQSeries.net » WebSphere Message Broker (ACE) Support
Need an advice on this topology (Page 2 of 3)

yalmasri
Posted: Wed Nov 20, 2013 4:00 am

Vitor, I see you didn't overlook any detail in my post... I appreciate that!
Vitor wrote:
Neither WMB nor an EG is WAS. The EG does not handle the JVM the same way that WAS does

What's the difference?

Vitor wrote:
And if you see it running low what are you going to do? Change the resource profile in the WAS admin console that broker doesn't have? That kind of change requires a restart, and 700 flows starting in a single group is going to take a while.

No, I'm going to use mqsichangeproperties the same way you'll be using it when you want to change the memory settings for your 100-EG broker, and in either case the whole broker needs to be restarted, not just your EGs. So this is irrelevant to how many EGs you have in your broker.
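(For concreteness, the kind of command in question — a sketch assuming a broker named BRK and an execution group named EG1; the heap value is illustrative:)
Code:
# set the max JVM heap (in bytes) for one execution group's JVM
mqsichangeproperties BRK -e EG1 -o ComIbmJVMManager -n jvmMaxHeapSize -v 536870912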

Now you tell me: what would you do if your WAS started throwing OoM exceptions and you used that nifty admin console to change your JVM settings? Are your exceptions going to disappear out of the blue, or do you have to restart the server as well?!

Vitor wrote:
yalmasri wrote:
What was the OS and Broker versions within which these situations occurred?


Linux, Solaris & AIX on v6.1 & v7. None of these OSes reacts the way you claim under all circumstances.

I won't be surprised to learn it was Solaris.

Vitor wrote:
yalmasri wrote:
Let me know because I have cases in Broker Performance Reports that tell you otherwise


Post a link.

Yea? Go to the performance reports page and download IP6P
http://www-01.ibm.com/support/docview.wss?uid=swg27007159
If you go to the section "Growing message throughput" you'll find two sub-sections: "Using Additional Instances" and "Using Multiple Execution Groups". In the first sub-section, record #6 of the first table shows a message rate of 7407.2 at 48.6% CPU utilization, while in the first table of the second sub-section, record #5 shows that at 49.0% CPU utilization only a message rate of 6513.2 was achieved. Normalizing linearly to the same CPU utilization, the rate for this flow in single-process multi-threaded mode comes to 7434.0, against 6513.2 in multi-process multi-threaded mode. That is a significant difference of about 12%.
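(Checking the arithmetic on those two rates:)
Code:
# shortfall of the multi-process rate relative to the single-process rate
awk 'BEGIN { printf "%.1f%%\n", (7434.0 - 6513.2) / 7434.0 * 100 }'   # prints 12.4%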

Vitor wrote:
1. Define "efficient". You can't mean it takes longer to build 2 brokers than it does one. It's exactly the same point as how many queue managers do you have on a single server?

No. Using two brokers necessarily means two queue managers, and thus the performance of two instances of a given flow deployed on one broker is certainly higher than that of one instance on each of two brokers linked through queue managers. There's no need to explain why, no?

Vitor wrote:
2. There's more reasons than application separation (like administrative control of broker) to have multiple EGs

So far the reasons mentioned by people here seem tolerable given my setup

Vitor wrote:
3. The kind of automated build & deploy tool you need to be using if you have 700 flows to keep versioned and controlled will easily manage flow location.

Yes, but when you have a new deployment, you have to do a whole new study of usage patterns, scalability requirements, current machine resource availability, etc. in order to make a good decision on where to put it.

Vitor wrote:
4. Yes, "almost". You're gambling that the workload distribution at the transport level is even, that all flows take about the same time & resource and the OS is smart enough to manage resources via additional instances without you dropping a hint in the form of an additional EG (process) with another copy of the flow in it.

It's a good gamble, isn't it?

Vitor wrote:
6. This isn't WAS. WMB doesn't handle DB connections like that.

I'm saying that each EG has its own pools, which means more pools overall, and thus the minimum idle connections accumulate.

Vitor wrote:
7. Cite your sources, with special reference to how 700 running flows use significantly less CPU and memory resources than the same flows in 10 EGs, 70 to an EG. Be clear on how this significantly lower resource cost more than makes up for the additional administrative and control overhead.

This is a simple argument; every new empty EG will have its own memory and CPU requirements, which have to share system resources with your flows.

Vitor wrote:
yalmasri wrote:
Any solid facts for not going down this road?


Put 700 flows in a single EG. Stop the broker & issue an mqsisetdbparms for a non-DB service (or any other change that requires the broker to be stopped). Issue an mqsistart and note the time. Note the time again when the 700th flow shows as running. Compare the difference in these times to the customer's tolerance for outage.

Again, why an outage? I still have other instances of the broker running.
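(As an aside, Vitor's timing experiment could be scripted roughly like this — a sketch assuming a broker named BRK with all 700 flows in one EG named EG1; the mqsisetdbparms resource name and the mqsilist output pattern are illustrative and vary by version:)
Code:
mqsistop BRK
mqsisetdbparms BRK -n odbc::MYDB -u dbuser -p dbpass   # any change that needs the broker stopped
start=$(date +%s)
mqsistart BRK
# poll until all 700 flows report as running
while [ "$(mqsilist BRK -e EG1 | grep -c 'is running')" -lt 700 ]; do
  sleep 5
done
echo "Startup to 700 running flows: $(( $(date +%s) - start ))s"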

yalmasri
Posted: Wed Nov 20, 2013 5:40 am

rbicheno wrote:
With your design using clustering, if you get a misbehaving app sending bad messages, it could send to all brokers, so they could all be taken down simultaneously; with a restart time of 40 minutes I wouldn't be a happy customer. In a perfect world all apps would behave well and all flows would be bug free, but I would bet you don't have 700 perfect flows!

This reasoning holds some water. Although this is a very unlikely case, such a grim situation would cost me a lot. If I have, for example, 10 EGs, not only have I limited the outage to a subset of services, I've also made the recovery roughly 10 times faster.

But what do people normally do when this situation occurs in WAS?

yalmasri
Posted: Wed Nov 20, 2013 5:44 am

mqjeff wrote:
Even with four active nodes, there can easily be reasons why you'd need to restart all four of them, and having two EGs on each node really does allow you to stagger the restarts whilst still maintaining uptime.

This solution sounds appealing. But how would two EGs benefit me in situations where clustering could not?

Vitor
Posted: Wed Nov 20, 2013 5:46 am

yalmasri wrote:
No, I'm going to use mqsichangeproperties the same way you'll be using it when you want to change the memory settings for your 100-EG broker, and in either case the whole broker needs to be restarted, not just your EGs. So this is irrelevant to how many EGs you have in your broker.


No, you restart the EG, not the broker.
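(For reference — a sketch assuming broker BRK and execution group EG1; mqsireload bounces just the one EG while the rest of the broker keeps running:)
Code:
mqsireload BRK -e EG1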

yalmasri wrote:
Now you tell me: what would you do if your WAS started throwing OoM exceptions and you used that nifty admin console to change your JVM settings? Are your exceptions going to disappear out of the blue, or do you have to restart the server as well?!


It's not my WAS but no, it doesn't need to be restarted.

Vitor wrote:
yalmasri wrote:
Let me know because I have cases in Broker Performance Reports that tell you otherwise


Post a link.

yalmasri wrote:
Yea? Go to the performance reports page and download IP6P
http://www-01.ibm.com/support/docview.wss?uid=swg27007159
If you go to the section "Growing message throughput" you'll find two sub-sections: "Using Additional Instances" and "Using Multiple Execution Groups". In the first sub-section, record #6 of the first table shows a message rate of 7407.2 at 48.6% CPU utilization, while in the first table of the second sub-section, record #5 shows that at 49.0% CPU utilization only a message rate of 6513.2 was achieved. Normalizing linearly to the same CPU utilization, the rate for this flow in single-process multi-threaded mode comes to 7434.0, against 6513.2 in multi-process multi-threaded mode. That is a significant difference of about 12%.


I don't see 12% as significant enough to overcome the objections. But that's what I think, and I'll return to this point in a moment.

yalmasri wrote:
There's no need to explain why, no?


Yes. You said this topology was more "efficient". How are you measuring "efficient"?

yalmasri wrote:
Vitor wrote:
2. There's more reasons than application separation (like administrative control of broker) to have multiple EGs

So far the reasons mentioned by people here seem tolerable given my setup


I told you we'd return to this point. It's your setup and you need to make the final choice. You came here and asked for advice; a lot of people have given advice and most of it seems to oppose this idea. You also mentioned a lot of debate before you posted, so there would seem to be some misgivings at your end.

But it remains your choice.

yalmasri wrote:
Vitor wrote:
3. The kind of automated build & deploy tool you need to be using if you have 700 flows to keep versioned and controlled will easily manage flow location.

Yes, but when you have a new deployment, you have to do a whole new study of usage patterns, scalability requirements, current machine resource availability, etc. in order to make a good decision on where to put it.


Yes, and 10 minutes later you feed the decision into the tool. Done.

yalmasri wrote:
Vitor wrote:
4. Yes, "almost". You're gambling that the workload distribution at the transport level is even, that all flows take about the same time & resource and the OS is smart enough to manage resources via additional instances without you dropping a hint in the form of an additional EG (process) with another copy of the flow in it.

It's a good gamble, isn't it?


No. If I want to gamble I go to Vegas. I don't gamble with the client's systems; I measure, evaluate and mitigate risk, as I'm sure you do. But I don't see, and you've not mentioned, the steps you've taken to mitigate it for this topology, and my point is that you're relying on a lot of unquantifiables and on other components all working properly all of the time. It would make me nervous.

Thus we're back at the "your setup" point I mentioned earlier.

yalmasri wrote:
Vitor wrote:
6. This isn't WAS. WMB doesn't handle DB connections like that.

I'm saying that each EG has its own pools, which means more pools overall, and thus the minimum idle connections accumulate.


I'm saying they don't, certainly not the ODBC ones. You can also review a number of posts on this forum about WMB handling DB connections in unexpected and non-optimal ways, especially from Java people who think it works like WAS with connection pools.

yalmasri wrote:
Vitor wrote:
7. Cite your sources, with special reference to how 700 running flows use significantly less CPU and memory resources than the same flows in 10 EGs, 70 to an EG. Be clear on how this significantly lower resource cost more than makes up for the additional administrative and control overhead.

This is a simple argument; every new empty EG will have its own memory and CPU requirements, which have to share system resources with your flows.


So, as it's so simple, answer my question about how 1 EG uses significantly less CPU and memory than 10. I assert the overhead you correctly mention is not as large as you're claiming.
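(One way to take the guesswork out of this is to measure it — a sketch assuming Linux, where each EG runs as a separate DataFlowEngine process:)
Code:
# per-EG CPU and resident memory
ps -C DataFlowEngine -o pid,pcpu,rss,args
# total resident memory (KB) across all EG processes
ps -C DataFlowEngine -o rss= | awk '{ sum += $1 } END { print sum " KB" }'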

yalmasri wrote:
Again, why outage? I still have other instances of Broker running


And is all the traffic being routed away from the downed broker? Even after it's 10 minutes into its 40-minute start up and the ports are responsive? Is the traffic diverted away automatically? You're sure nothing can cause a need to restart all the servers to deal with a single problem? That you'll never need the capability to start some flows before others, which you can do if they're in different EGs but not if they're in a single one?
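(For reference, that start-ordering control is scriptable per EG or per flow — a sketch assuming broker BRK, execution group EG1 and a hypothetical flow name:)
Code:
# start one named flow ahead of the others (flow name is illustrative)
mqsistartmsgflow BRK -e EG1 -m CriticalFlow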

You asked for advice. You've got it from me and from others. Make your choice and go in peace with it.

yalmasri
Posted: Wed Nov 20, 2013 6:09 am

mqjeff wrote:
That said, you are trying to conserve resources by avoiding the overhead of additional EGs, but compared to 700 flows, that overhead is not very big! So you're saving a penny, and possibly costing a dollar to do so.

Actually this overhead is a small part of the story. If I go with 10 EGs and split my flows among them, then I'm again running into a resource management nightmare. Upon every deployment of a new flow, you have to hold back for some time to decide where the flow should go. Even with the current distribution of flows, you constantly have to busy yourself with the thought that you might have been better off deploying a flow here rather than there, or you discover that two flows receive huge load at almost the same time of day so they'd better sit in different EGs, or that some EGs are granted too much memory while others are hitting their max, and so on. You're effectively going to suffer from what's called the multi-pool syndrome.

mqjeff
Posted: Wed Nov 20, 2013 6:12 am

yalmasri wrote:
mqjeff wrote:
Even with four active nodes, there can easily be reasons why you'd need to restart all four of them, and having two EGs on each node really does allow you to stagger the restarts whilst still maintaining uptime.

This solution sounds appealing. But how would two EGs benefit me in situations where clustering could not?


It's the least worst alternative to the terrible setup you've proposed.

All of the reasons you've been given for why your setup is bad explain exactly how this minor change I've suggested is better than your original idea. And, again, I did actually explain.

You've been given advice that the setup you're proposing is terrible. When you've been given recommendations that are not terrible, you've asked us to explain why they are not terrible.

Your proposed topology is terrible. It's better, by only a very small margin, than the opposite topology of having one flow per execution group - and thus 700 execution groups. But it causes the entirely opposite set of problems.

As my esteemed colleague said, you asked for advice. You've been given advice. It's your setup and your problem. Listen to the advice or not.

As my other esteemed colleague said, if someone came to me and proposed putting 700 flows into a single EG, I would firmly believe that they didn't know Message Broker very well.

The advice you've been given is from people who understand Message Broker very well, and there's been a fair amount of explanation of why and how the advice fixes problems with your proposed topology.

Go in peace.

yalmasri
Posted: Wed Nov 20, 2013 7:09 am

mqjeff wrote:
It's the least worst alternative to the terrible setup you've proposed.

All of the reasons you've been given for why your setup is bad explain exactly how this minor change I've suggested is better than your original idea. And, again, I did actually explain.

You've been given advice that the setup you're proposing is terrible. When you've been given recommendations that are not terrible, you've asked us to explain why they are not terrible.

Your proposed topology is terrible. It's better, by only a very small margin, than the opposite topology of having one flow per execution group - and thus 700 execution groups. But it causes the entirely opposite set of problems.

As my esteemed colleague said, you asked for advice. You've been given advice. It's your setup and your problem. Listen to the advice or not.

As my other esteemed colleague said, if someone came to me and proposed putting 700 flows into a single EG, I would firmly believe that they didn't know Message Broker very well.

The advice you've been given is from people who understand Message Broker very well, and there's been a fair amount of explanation of why and how the advice fixes problems with your proposed topology.

Go in peace.

If I were you I would have applied more self-control before submitting such a post. More than anything else, this hurts you as a professional and undercuts your long experience.

I'm leaving it without comment.


PeterPotkay
Posted: Wed Nov 20, 2013 7:17 am

yalmasri wrote:
A client approached us for migrating their 6.1 broker to 7 (some consultant told them 8 is not stable enough!).


Perhaps the consultant made that statement when the only version of WMB 8 was 8.0.0.0. I would agree.
Even if only 8.0.0.1 was out, my opinion would be to let the other suckers find the major bugs inherent in all new versions of complicated software.
By the time the second Fix Pack is out, things are generally good.

WMB 8.0.0.3 is out. Unless they can point to a specific thing even at 8.0.0.3 that is unstable, I bet the statement that WMB is unstable is probably more than a year old.

Ah, the old How Many Execution Groups debate, an oldie but a goodie. Definitely as much art as it is science.

Some more considerations on why you might not want to put all your flows in one basket, er, Execution Group.

Security - you can set up a dedicated trust store per execution group, but you cannot do it per flow. And WMB does not allow us to filter on the distinguished name field like we can in MQ. So any time a new flow needs to trust an SSL server or SSL client, you implicitly grant access to each and every certificate signed by that CA. Repeat that 700 times. At that point you are only doing busy work with SSL and not securing anything.
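(For context, the per-EG trust store is a property of that EG's JVM — a sketch assuming broker BRK, execution group EG1 and a hypothetical path; the EG needs a restart to pick it up:)
Code:
mqsichangeproperties BRK -e EG1 -o ComIbmJVMManager -n truststoreFile -v /var/mqsi/ssl/EG1-trust.jks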

Additional Instances. How many threads can one process run on your O/S? I don't know. But what if 50 of your 700 message flows need additional instances set to 100? Now your process needs to run 50*100 + 650 = 5,650 threads. Will it be able to?
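(On Linux you can at least bound the answer — rough checks; real limits also depend on per-thread stack size and address space:)
Code:
cat /proc/sys/kernel/threads-max   # system-wide maximum number of threads
ulimit -u                          # max user processes (threads count against this)
ulimit -s                          # default per-thread stack size in KB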

So if 700 in one is not great, what do you go to? 10 EGs? 50? 63?

I don't know; I've searched for a formula. See my post on here on the topic. There's too much "It Depends" for IBM to give us a formula.

On Red Hat, we've dealt with hung execution groups that stop responding to deploy commands. All we can do is kill the thing. That was at 6.1. Is the situation better at 8.0, or 7.0?

Vitor
Posted: Wed Nov 20, 2013 7:57 am

PeterPotkay wrote:
On Red Hat, we've dealt with hung execution groups that stop responding to deploy commands. All we can do is kill the thing. That was at 6.1. Is the situation better at 8.0, or 7.0?




Vitor
Posted: Wed Nov 20, 2013 7:59 am

yalmasri wrote:
I'm leaving it without comment.


Apart from this comment?

Vitor
Posted: Wed Nov 20, 2013 8:03 am

PeterPotkay wrote:
Ah, the old How Many Execution Groups debate, an oldie but a goodie. Definitely as much art as it is science.




So, more considerations on why you might not want to put all your flows in one basket, er, Execution Group.

PeterPotkay wrote:
Security - you can set up a dedicated trust store per execution group, but you cannot do it per flow. And WMB does not allow us to filter on the distinguished name field like we can in MQ. So any time a new flow needs to trust an SSL server or SSL client, you implicitly grant access to each and every certificate signed by that CA. Repeat that 700 times. At that point you are only doing busy work with SSL and not securing anything.


Good point

PeterPotkay wrote:
Additional Instances. How many threads can one process run on your O/S? I don't know. But what if 50 of your 700 message flows need additional instances set to 100? Now your process needs to run 50*100 + 650 = 5,650 threads. Will it be able to?


Great point.

PeterPotkay wrote:
So if 700 in one is not great, what do you go to? 10 EGs? 50? 63?


The thrust of this thread. 700 flows in 1 EG is terrible. 700 EGs with 1 flow each is terrible. Where do you make the cut? This is mostly why I've been banging on about administrative control, given that I overlooked the 2 excellent examples you quote above.

PeterPotkay wrote:
There's too much "It Depends" for IBM to give us a formula.


And too many site-specific variables.

mqjeff
Posted: Wed Nov 20, 2013 8:07 am

Vitor wrote:
PeterPotkay wrote:
On Red Hat, we've dealt with hung execution groups that stop responding to deploy commands. All we can do is kill the thing. That was at 6.1. Is the situation better at 8.0, or 7.0?




It's not even better in v9. The only way it could reasonably be better would be if the deploy processor were a separate process from the DFE (the DataFlowEngine, i.e. the execution group process).

But that would still need some mechanism for the DFE to receive input from the deploy processor, and if all of the threads in the DFE are hung and won't allow any changes, you're still back at someone needing to kill the process.

The separate deploy processor would be able to kill the DFE automatically. But it's not clear that such a thing should be assumed to be the right choice in all cases where the EG is "hung".

I.e. "The Halting Problem".

smdavies99
Posted: Wed Nov 20, 2013 8:40 am

Quote:

The thrust of this thread. 700 flows in 1 EG is terrible. 700 EGs with 1 flow each is terrible. Where do you make the cut? This is mostly why I've been banging on about administrative control, given that I overlooked the 2 excellent examples you quote above.


My personal experience with Broker on Slowaris is that much more than 40 flows per EG makes the EG startup times unacceptable. Granted, that was with my particular mix of flow types and complexity.

Moving the broker, in the same config, to a 64-bit Intel RHEL platform halved the EG startup times.

Just saying...

sarasu
Posted: Wed Nov 20, 2013 8:50 am

Please visit the link below, which may be helpful.

ftp://public.dhe.ibm.com/software/integration/support/supportpacs/individual/ip6p.pdf

Thanks

Vitor
Posted: Wed Nov 20, 2013 9:02 am

smdavies99 wrote:
My personal experience with Broker on Slowaris is that much more than 40 flows per EG makes the EG startup times unacceptable. Granted, that was with my particular mix of flow types and complexity.

Moving the broker, in the same config, to a 64-bit Intel RHEL platform halved the EG startup times.

Just saying...


We're moving, we're moving.....and I didn't pick the original platform.

I just got agreement we could move to RHEL