Handling large message
mqjeff
Posted: Tue Apr 08, 2014 7:01 am
Grand Master
Joined: 25 Jun 2008, Posts: 17447
mattynorm wrote:
Quote:
It will happily reclaim that memory and use it for other things.

So if a single EG has 5 GB of memory assigned, will other EGs be able to grab it if required (assuming it's not being used by the EG in question)?

No.
The EG will reuse that memory for other things.
As I said, it will not release it back to the OS.
Esa
Posted: Tue Apr 08, 2014 11:40 am
Grand Master
Joined: 22 May 2008, Posts: 1387, Location: Finland
mattynorm wrote:
which I guess means the FileOutput is doing the lion's share of the work. The only properties I have changed from the defaults are setting the file mode to stage it in the mqsitransit directory, timestamp-archive and replace an existing file, and, under Records and Elements, setting 'Record is Delimited Data' (I thought I would have to do this if propagating it out line by line), which I think sets the delimiter to 'Broker System Line End' by default.
Are any of those likely to have a significant performance impact?
I think 'Record is Whole File' should be better.
Also, the File Input node should have 'Whole File' as its Record detection.
And it's essential that the File Input node's Parse timing setting (under Parser Options) is set to 'On Demand'.
I recall that when creating a mutable input body, the trick is to copy the parser before it has started parsing. Your code copies an element under the parser, and that probably triggers premature parsing of InputBody, which is exactly what you need to avoid.
When the input is an MQ Input node, copying the parser makes the new parser refer to the original unparsed BLOB. When the input node is a File Input node, the copied parser will obviously attach to the original FileInputStream.
So try something like this:
Code:
-- Copy the DFDL parser itself (before it has started parsing),
-- so the copied tree still refers to the original unparsed input.
CREATE FIRSTCHILD OF rowCachedInputMsg DOMAIN('DFDL');
SET rowCachedInputMsg.DFDL = InputRoot.DFDL;

-- Navigate down to the first record via the copied parser.
DECLARE inRef REFERENCE TO rowCachedInputMsg.DFDL;
IF NOT LASTMOVE(inRef) THEN
    THROW USER EXCEPTION VALUES('File Not Valid');
END IF;
MOVE inRef FIRSTCHILD;
IF NOT LASTMOVE(inRef) THEN
    THROW USER EXCEPTION VALUES('File Not Valid');
END IF;
MOVE inRef FIRSTCHILD NAME recordElementName;
Because of that small change, you need to delete the line that DECLAREs inRef at the beginning of your own code. But that change is irrelevant in the context of your memory problem.
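A record-by-record propagation loop continuing from inRef could then look roughly like the sketch below. It is only a sketch under assumptions: recordElementName comes from the snippet above, while the 'out' terminal, the Properties copy and the OutputRoot layout are guesses about the rest of the flow.
Code:
-- Hypothetical continuation: propagate one record at a time, so only a
-- single record is materialised in OutputRoot per iteration.
WHILE LASTMOVE(inRef) DO
    -- Rebuild the output message for this record
    SET OutputRoot.Properties = InputRoot.Properties;
    CREATE LASTCHILD OF OutputRoot DOMAIN('DFDL');
    SET OutputRoot.DFDL.{recordElementName} = inRef;   -- copy just this record
    PROPAGATE TO TERMINAL 'out';                        -- OutputRoot is cleared after each propagate
    MOVE inRef NEXTSIBLING NAME recordElementName;      -- step to the next record of the same name
END WHILE;
RETURN FALSE;  -- everything has already been propagated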
mattynorm
Posted: Wed Apr 09, 2014 2:35 am
Acolyte
Joined: 06 Jun 2003, Posts: 52
Thanks Esa

Quote:
I think 'Record is Whole File' should be better.

As it stands, I don't think this will work, because I am propagating each record out one by one, so I just get an output file with the two headers and the first record.
BUT... your other suggestion appears to be all kinds of awesome: memory usage has dropped to about 300 MB. So I can give it a go shoving out the output message in one propagation and using 'Record is Whole File' on the FileOutput, see if that improves performance, and revert back if it doesn't.
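A rough sketch of that single-propagation variant, assuming the same rowCachedInputMsg tree from Esa's snippet; note that copying the whole subtree into OutputRoot is likely to force the remaining records to be parsed, so some of the memory saving may be given back.
Code:
-- Hypothetical whole-file variant: hand the complete body to the FileOutput
-- node in a single message, with 'Record is Whole File' set on that node.
SET OutputRoot.Properties = InputRoot.Properties;
CREATE LASTCHILD OF OutputRoot DOMAIN('DFDL');
SET OutputRoot.DFDL = rowCachedInputMsg.DFDL;  -- copies the whole tree, driving a full parse
RETURN TRUE;  -- one propagation of the complete message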