chrisgclark
Posted: Tue Mar 02, 2010 6:59 am Post subject: how do i increase memory for execution group (not jvm heap)
Apprentice
Joined: 26 Mar 2009 Posts: 35
Hi,
I have a message flow that processes a large message, splits it into 360 msgs, calls a service with each of these messages, then aggregates them back together. My execution group is crashing with an 'unable to allocate memory' abend file.
If I watch the process ID for the execution group, its memory increases from 200MB to 1.03GB, where it crashes.
How do I increase the execution group process heap size? My flow is near completion when it crashes, so giving the execution group an extra 0.5GB of process heap is likely to fix this issue.
The flow is ESQL and aggregation, so increasing the JVM heap will not help here.
Thanks,
Chris C
MrSmith
Posted: Tue Mar 02, 2010 7:11 am Post subject:
Master
Joined: 20 Mar 2008 Posts: 215
Chris,
Are you able to pinpoint whether the crash point is the aggregate reply or the aggregate control / request, by placing a debug point where the aggregates have all been returned? Depending on where you put the message, if it goes to an MQ queue it would be kinda too large anyway. Details of what the flow does and its failure point might help the forum.
chrisgclark
Posted: Tue Mar 02, 2010 7:22 am Post subject:
Apprentice
Joined: 26 Mar 2009 Posts: 35
MrSmith, thanks for your reply.
From stepping through in debug and watching the memory I can see my execution group (EG) initialises at 200MB. When I put my large CSV message onto an MQ queue it is parsed by the broker using a message set and my EG memory increases to 600MB. Then my flow reads a lot of data from a database and the EG grows to 800MB. Messages are then sent out under aggregation control. When the reply messages come back and start being aggregated together (AggregateReply), the EG increases to 1030MB, where it crashes.
As you can see, the memory usage is not just in the aggregate reply node but spread across the functions of the flow, and the aggregate reply just pushes the flow over the EG memory limit. Therefore I would like to increase the memory limit on the EG.
Chris
smdavies99
Posted: Tue Mar 02, 2010 7:35 am Post subject:
Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
So you are doing something like the following:
1 in --> 360 out
360 in --> 1 out
That is some fan-out/fan-in ratio.
Why don't you do something like:
1 in --> 36 out
then
36 flows where there is
1 in --> 10 out
then
36 instances of
10 in --> 1 out
and finally
36 in --> 1 out
Split the thing up into smaller chunks. This way I would guess that you won't break the EG as you are currently doing. (A rough sketch of the chunking is below.)
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
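Purely as an illustration of that chunking step (the XMLNSC domain, the Batch/Item element names and the chunk size below are invented for the example, not taken from the flow in question), a Compute node along these lines propagates one output message per group of ten items instead of one per item:
Code:
  CREATE COMPUTE MODULE SplitIntoChunks
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
      -- Sketch only: walk the repeating Item elements and emit one
      -- output message per chunk of ten, rather than one per item.
      DECLARE chunkSize INTEGER 10;
      DECLARE itemsInChunk INTEGER 0;
      DECLARE inItem REFERENCE TO InputRoot.XMLNSC.Batch.Item[1];

      CALL CopyMessageHeaders();   -- the Toolkit-generated procedure
      WHILE LASTMOVE(inItem) DO
        SET itemsInChunk = itemsInChunk + 1;
        -- field-to-field assignment copies the whole Item subtree
        SET OutputRoot.XMLNSC.Chunk.Item[itemsInChunk] = inItem;
        IF itemsInChunk = chunkSize THEN
          PROPAGATE TO TERMINAL 'out' DELETE NONE;  -- keep OutputRoot for reuse
          SET OutputRoot.XMLNSC.Chunk = NULL;       -- clear the data, keep the headers
          SET itemsInChunk = 0;
        END IF;
        MOVE inItem NEXTSIBLING REPEAT TYPE NAME;
      END WHILE;
      IF itemsInChunk > 0 THEN
        PROPAGATE TO TERMINAL 'out' DELETE NONE;    -- the final partial chunk
      END IF;
      RETURN FALSE;  -- everything already propagated; suppress the implicit one
    END;
  END MODULE;
Because each PROPAGATE uses DELETE NONE, the copied headers survive across iterations and only the data folder is rebuilt per chunk.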
WMBDEV1
Posted: Tue Mar 02, 2010 8:00 am Post subject:
Sentinel
Joined: 05 Mar 2009 Posts: 888 Location: UK
MrSmith
Posted: Tue Mar 02, 2010 8:15 am Post subject:
Master
Joined: 20 Mar 2008 Posts: 215
Additional to what smdavies99 has said, because that is some fan-out/fan-in: can you not break the flow down into smaller categories? The other implication of pushing such a large number of requests and replies through, and collating them afterwards, is that with so high a number there is surely far more room for timeouts and for responses not equalling requests.
chrisgclark
Posted: Wed Mar 03, 2010 3:10 am Post subject:
Apprentice
Joined: 26 Mar 2009 Posts: 35
Hi,
Thanks for the replies.
One option is definitely to decrease the number of fan-out/fan-in messages (e.g. instead of 360 output messages, put 10 requests in each and send out only 36 requests). This would help with the aggregation reply memory usage.
Thanks for the link. My OS is AIX and my process data ulimit is 1GB, so I could increase the process data ulimit to 1.5 or 2GB. (To check the ulimits on AIX, run ulimit -a.)
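Something along these lines should do it for the user the broker runs under (the values and user name below are only examples, and note the units: ulimit -d takes kilobytes, while the data attribute in /etc/security/limits is, as far as I know, in 512-byte blocks):
Code:
  # show the current limits for the broker's user
  ulimit -a

  # raise the data segment soft limit for this shell to ~2GB (value in KB),
  # then restart the broker from this shell so the EG inherits it
  ulimit -d 2097152

  # or make it permanent for that user (~2GB in 512-byte blocks);
  # raise data_hard as well if the hard limit is lower
  chuser data=4194304 wmbadmin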
The way I solved this problem, though, was to split the work across 2 execution groups. My process had 3 flows in 1 EG; now I have 2 EGs, one with 1 flow and the other with the remaining 2 flows. Without changing the ulimit this gives the flows 2GB of memory in total (i.e. 1GB for each EG process), and this was sufficient.
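For anyone doing the same, newer broker releases let you create the extra EG from the command line (the broker and EG names below are made up, and the exact flags differ between versions, so check the command reference for your release):
Code:
  mqsicreateexecutiongroup MYBROKER -e EG2
Then redeploy the BAR containing the two remaining flows to the new EG from the Toolkit (or with mqsideploy) and remove them from the original one.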
Thanks for the ideas.
Chris