siljcjose
Posted: Wed Jul 06, 2011 6:51 pm    Post subject: Handling large messages in WMB
Apprentice
Joined: 18 Aug 2005    Posts: 27
Hi,
I have a scenario where I collect a batch of messages on an MQ queue. When the batch ends, a trigger message is sent, which starts a flow that reads all the messages from the queue using an MQGet node.
The problem I have is that each message holds a lot of records (comma-delimited data), and there are many such messages on the queue. As I read each message from the queue, I store it in an environment variable as a BLOB; each subsequent message is appended to the existing BLOB in the environment:

-- envBlobRef is a reference (declared elsewhere) to Environment.Variables.InputMsg, if it exists
DECLARE inputMsgName CHARACTER FIELDNAME(envBlobRef);
IF inputMsgName = 'InputMsg' THEN
	-- Append the next message to the accumulated BLOB
	SET Environment.Variables.InputMsg.BLOB = Environment.Variables.InputMsg.BLOB || CRLF || InputRoot.BLOB.BLOB;
ELSE
	-- First message: save the headers and start the BLOB with the header record
	-- (headerRecBLOB, declared elsewhere, holds the header record)
	SET Environment.Variables.InputMsg.MQMD = InputRoot.MQMD;
	SET Environment.Variables.InputMsg.Properties = InputRoot.Properties;
	SET Environment.Variables.InputMsg.BLOB = headerRecBLOB || CRLF || InputRoot.BLOB.BLOB;
END IF;

My flow abends once the accumulated message in the environment variable reaches around 10 MB.
Is there a better way to handle this? Other flows in my environment handle messages of around 20 MB, but those arrive as a single message on the input queue.
Any suggestion on this is appreciated.
Thanks.
fjb_saper
Posted: Wed Jul 06, 2011 7:44 pm    Post subject:
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
Well, seeing the way you do things, this would obviously not be our first choice...
Now, you did not give us any information as to the purpose of what you're doing, or why you're doing it this way...
Have fun _________________ MQ & Broker admin
siljcjose
Posted: Wed Jul 06, 2011 7:55 pm    Post subject:
Apprentice
Joined: 18 Aug 2005    Posts: 27
What would the first choice be then? A Collector node? My requirement is to collect all the messages on a queue and make one message out of them, and to create one more record on top of it, which is a header record. I am on Broker 7.
fjb_saper
Posted: Wed Jul 06, 2011 7:59 pm    Post subject:
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
siljcjose wrote:
What would the first choice be then? A Collector node? My requirement is to collect all the messages on a queue and make one message out of them, plus a header record on top. I am on Broker 7.

I was thinking more along the lines of the FileInput and FileOutput nodes... _________________ MQ & Broker admin
rglack10
Posted: Wed Jul 06, 2011 11:37 pm    Post subject: Re: Handling large messages in WMB
Apprentice
Joined: 06 Jul 2011    Posts: 34
siljcjose wrote:
My flow is abending when the size of the message becomes around 10 MB, on the environment variable. Is there any better way I can handle this?
10 MB shouldn't really be an issue, I would think. You could try tuning the execution group: JVM heap size, stack size, etc. Is that all of your code?
WGerstma
Posted: Mon Jul 18, 2011 9:19 pm    Post subject:
Acolyte
Joined: 18 Jul 2011    Posts: 55
We had basically the same issue with our Broker 6.0.2 on Linux.
We had a timed trigger every evening that started to MQGet, in a loop, all the messages accumulated over the day, put each message into the environment, and finally aggregated the content into a single MQ message passed downstream for further processing. That was up to 20,000 messages a day.
This worked perfectly on Windows machines but collapsed on our test and production environments running Linux.
So we worked around it:
The flow counts each pass through the MQGet loop and breaks at ~200 messages (200 was always stable, while 300 crashed about 1 time in 50). It aggregates those according to the business requirements into a single message, then sends that message via an MQOutput node. The same flow then reads that message back via an MQInput node and uses it as the trigger to grab the next 200 messages. Within this entirely new transaction, we had no problem aggregating 200 more together with the content of the trigger message. This repeats until the MQGet node reports NoMessage.
Not so nice: the answer we got from IBM support was something like "You are using the MQGet node in a loop?", so we did not invest in promoting this problem into a PMR.
I assume this is not a problem with the environment itself, or with the size of a variable within the environment; the trigger message is also put directly into the environment, and its size is no problem. But I suspect the MQGet node itself holds some minimal thread or MQ handle, and the execution group or MQ (or whatever) runs out of them.
As stated, Broker on Windows did not show this problem. Do you have a Windows installation to cross-check?
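The workaround above can be sketched, outside Broker, as a plain simulation: aggregate at most 200 messages per pass, send the partial aggregate back as the next trigger, and repeat until the queue is empty. The batch size and CRLF joining follow the post; the function and variable names are invented for illustration:

```python
from collections import deque

BATCH_LIMIT = 200  # empirically stable batch size from the post

def drain_in_batches(queue, batch_limit=BATCH_LIMIT):
    """Aggregate at most batch_limit messages per 'transaction'; the
    partial aggregate is sent back as the trigger for the next pass,
    so each pass runs in a fresh transaction (as in the workaround)."""
    aggregate = None
    while queue:
        # The previous aggregate arrives as the trigger message.
        batch = [] if aggregate is None else [aggregate]
        for _ in range(batch_limit):
            if not queue:
                break  # MQGet would report NoMessage here
            batch.append(queue.popleft())
        # MQOutput -> MQInput: this becomes the next trigger.
        aggregate = "\r\n".join(batch)
    return aggregate

msgs = deque(f"rec{i}" for i in range(500))
final = drain_in_batches(msgs)  # three passes: 200, 200, 100
```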
smdavies99
Posted: Mon Jul 18, 2011 10:16 pm    Post subject:
Jedi Council
Joined: 10 Feb 2003    Posts: 6076    Location: Somewhere over the Rainbow this side of Never-never land.
Windows does break as well: on 32-bit Server 2003 I found the limit was about 600. This was in a slightly different situation:
Trigger --> Read DB --> Start loop --> Read DB --> MQPut --> Loop
All the messages were actually held in the DB.
In my case, I front-ended the flow with another one that calculated how many messages were to be retrieved from the DB. It then divided that total by 500 (configurable) and sent that number of trigger messages to the flow that looped.
This is fairly basic 'Conservation of Resources' stuff, IMHO. _________________ WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt, think and investigate before you ask silly questions.
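The front-end arithmetic described above is just a ceiling division; a minimal sketch, where the function name is invented and 500 stands in for the configurable batch size:

```python
import math

BATCH_SIZE = 500  # configurable, as in the post

def trigger_count(total_messages: int, batch_size: int = BATCH_SIZE) -> int:
    """Number of trigger messages the front-end flow must send so the
    looping flow handles at most batch_size messages per invocation."""
    if total_messages <= 0:
        return 0
    return math.ceil(total_messages / batch_size)
```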
j.f.sorge
Posted: Mon Jul 18, 2011 10:31 pm    Post subject: Re: Handling large messages in WMB
Master
Joined: 27 Feb 2008    Posts: 218
siljcjose wrote:
My flow is abending when the size of the message becomes around 10 MB, on the environment variable.

We had a similar problem, but it depended on the number of messages read within the loop. What helped us was a Compute node that propagates to the MQGet node until a flag in the Environment indicates there are no more messages. The PROPAGATE statement lets the broker free the memory allocated by the MQGet node. The flag is set by ESQL in a Filter node when the MQGet node leaves through its NoMessage terminal. _________________ IBM Certified Solution Designer - WebSphere MQ V6.0
IBM Certified Solution Developer - WebSphere Message Broker V6.0
IBM Certified Solution Developer - WebSphere Message Broker V6.1
IBM Certified Solution Developer - WebSphere Message Broker V7.0
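The propagate-until-empty pattern described above might be sketched in ESQL roughly as follows; the module name, terminal name, and Environment flag are assumptions for illustration, not taken from the post. The flag would be set by the Filter node wired to the MQGet node's NoMessage terminal:

-- Sketch only: DriveMQGetLoop, 'out', and NoMoreMessages are illustrative names.
CREATE COMPUTE MODULE DriveMQGetLoop
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		SET Environment.Variables.NoMoreMessages = FALSE;
		WHILE Environment.Variables.NoMoreMessages = FALSE DO
			-- Each PROPAGATE drives the MQGet node once; when the propagated
			-- path completes, the broker can release the memory that
			-- invocation allocated (Environment persists across propagates).
			PROPAGATE TO TERMINAL 'out' DELETE NONE;
		END WHILE;
		RETURN FALSE; -- nothing left to propagate at the end of Main
	END;
END MODULE;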
rbicheno
Posted: Tue Jul 19, 2011 4:24 am    Post subject:
Apprentice
Joined: 07 Jul 2009    Posts: 43
As suggested earlier, I would take a step back and look at the file nodes; they have streaming logic for exactly these kinds of scenarios, reading large amounts of data in and out while keeping memory use to a minimum. You are dealing with MQ messages, yet you are effectively building a CSV file. For example, you could have a flow:
MQInput -> FileOutput
where the contents of each message are appended to a CSV file with CRLF as the delimiter, i.e. each message becomes a record in the file. If you need that file as a single message later, it is easy to trigger another flow to do FileIn -> MQOut.
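As a sketch of the MQInput -> FileOutput half: a Compute node before the FileOutput node can steer the destination file via the LocalEnvironment (the directory and file name below are invented for illustration; the record/delimiter behaviour itself is configured on the FileOutput node):

-- Sketch only: directory and file name are illustrative.
SET OutputLocalEnvironment.Destination.File.Directory = '/var/mqsi/batches';
SET OutputLocalEnvironment.Destination.File.Name = 'batch.csv';
-- With the FileOutput node's record definition set to delimited records
-- (delimiter CRLF), each propagated message is appended as one record.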