kirank
Posted: Wed Jun 26, 2013 8:12 am    Post subject: File Output Node Large File
Centurion
Joined: 10 Oct 2002 Posts: 136 Location: California
Hi,

I am using WMB V7.0.0.0 for file transfer, and I have a requirement to transfer very large files. Searching this forum earlier, I learned that Broker has no file size limit as such; the limit is on record length. I therefore set the environment variable MQSI_FILENODES_MAXIMUM_RECORD_LENGTH to 10737418240 (10 GB). After this change the FileInput node is able to consume a 500 MB file, but the FileOutput node gives an Out of Memory error. I can increase the JVM heap size, which is currently set at 512 MB, but beyond a certain point increasing the heap is not going to help. I raised a PMR with IBM and the response came back that Broker only supports file transfers up to 100 MB. I was able to process a 250 MB file with the current settings, so clearly there is a disconnect.

Has anyone processed whole files larger than 1 GB through Broker for file transfer requirements?

Regards
Kiran
mqjeff
Posted: Wed Jun 26, 2013 8:19 am
Grand Master
Joined: 25 Jun 2008 Posts: 17447
First of all, please upgrade to a much later fix pack of v7 than 7.0.0.0 - 7.0.0.5 or later.

Secondly, Broker can process as large a file as you want, as long as you process the file in very small pieces - i.e. records.

Thirdly, it's always a bad idea in any server environment to load the entire contents of a very large file into memory in order to process it, because doing so requires at least as much memory as the entire file.

So, to repeat: update to the most recent fix pack of Broker v7, and process the file as individual records, not as the entire file.
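For illustration, here is a minimal plain-Java sketch of the memory argument - this is not Broker code, and the file name is made up. Reading the whole file needs a heap at least as large as the file; reading record-sized pieces needs only the buffer.

Code:
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class WholeFileVsRecords {
    public static void main(String[] args) throws IOException {
        // Whole-file approach: the byte[] alone needs heap equal to the
        // file size, and a Java array cannot exceed 2147483647 elements.
        // byte[] all = java.nio.file.Files.readAllBytes(
        //         java.nio.file.Paths.get("big.dat"));

        // Record-at-a-time approach: heap usage is one fixed buffer,
        // however large the file is.
        byte[] record = new byte[65536];
        long total = 0;
        try (InputStream in = new FileInputStream("big.dat")) {
            int n;
            while ((n = in.read(record)) > 0) {
                total += n;   // process one record's worth of data here
            }
        }
        System.out.println("Processed " + total + " bytes");
    }
}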
Vitor
Posted: Wed Jun 26, 2013 9:24 am    Post subject: Re: File Output Node Large File
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
kirank wrote:
I am using WMB V7.0.0.0 for file transfer.

If you simply want to move files from one point to another, consider using FTE or Connect:Direct, which will be faster than rolling through a large file one record at a time.

And as my associate says, don't use a version of WMB that old for anything.

I think the PMR from IBM shows a disconnect between what you thought you were asking & what IBM thought you were asking. WMB will refuse to process any file larger than the variable you mention, which defaults to 100 MB. If you raise that limit, as you have done, WMB will attempt to process files up to the new limit, but stability is not guaranteed and problems you hit (like the broker falling over) will not be supported if a new PMR is raised.
_________________
Honesty is the best policy.
Insanity is the best defence.
kirank
Posted: Wed Jun 26, 2013 9:35 am
Centurion
Joined: 10 Oct 2002 Posts: 136 Location: California
Yes, the requirement is simply to move files as a BLOB, so I cannot read the file record by record. I am already looking at other MFT tools to meet this requirement; since we are using Broker already, I thought I would explore that option, but it looks like it is not a good fit.

Product release cycles move fast, and for business reasons we cannot afford to upgrade every other year. So we will probably upgrade to IIB V9 once it's stable.

Regards
Kiran
Vitor
Posted: Wed Jun 26, 2013 9:39 am
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
kirank wrote:
We cannot afford to upgrade every other year for business reasons.

WMB v7.0.0.5 isn't an upgrade, it's a fix pack. All it costs is the time to install and regression test.

Even when IIB v9 is "stable", I bet they'll do the same thing & keep releasing fixes for it, so you'll soon be behind the curve again.
_________________
Honesty is the best policy.
Insanity is the best defence.
Esa
Posted: Wed Jun 26, 2013 11:08 pm
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
kirank wrote:
Yes, the requirement is simply to move files as a BLOB, so I cannot read the file record by record.

Sure you can. I have moved videos and other large files with no problems. Set Record Detection to 'Fixed length'.

I even did some performance testing a couple of years ago and concluded that a record length of about 100,000 bytes gave optimal performance.
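If you want to repeat that kind of test on your own hardware, a rough plain-Java timing harness could look like the sketch below. The file name and candidate record lengths are assumptions, and OS file caching will skew repeated runs, so treat the numbers as indicative only.

Code:
import java.io.*;

public class RecordLengthTimer {
    public static void main(String[] args) throws IOException {
        File src = new File("big.dat");   // hypothetical test file
        int[] recordLengths = {4096, 32768, 100000, 1048576};
        for (int len : recordLengths) {
            File dst = File.createTempFile("copy", ".tmp");
            byte[] buf = new byte[len];
            long start = System.nanoTime();
            try (InputStream in = new FileInputStream(src);
                 OutputStream out = new FileOutputStream(dst)) {
                int n;
                while ((n = in.read(buf)) > 0) {
                    out.write(buf, 0, n);
                }
            }
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println(len + " bytes/record: " + ms + " ms");
            dst.delete();
        }
    }
}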
Tibor
Posted: Thu Jun 27, 2013 1:37 am    Post subject: Re: File Output Node Large File
Grand Master
Joined: 20 May 2001 Posts: 1033 Location: Hungary
kirank wrote:
I therefore set the environment variable MQSI_FILENODES_MAXIMUM_RECORD_LENGTH to 10737418240 (10 GB).

You cannot exceed 2147483647 bytes (2 GB - 1), the largest value a Java Integer can hold, because the environment variable is processed like this:
Code:
{
    String str1 = "<init>";
    if (Trace.isOn()) Trace.logNamedDebugEntry("ComIbmFileReadNode", str1);
    String str2 = System.getenv("MQSI_FILENODES_MAXIMUM_RECORD_LENGTH");
    if (null != str2) {
        try {
            // Parsed as a Java int: anything above 2147483647 throws
            // NumberFormatException and the setting is ignored.
            int i = Integer.parseInt(str2);
            if (i > 0) {
                maxRecordLength = i;
            }
        }
        catch (NumberFormatException localNumberFormatException)
        {
            // The failure is only reported when service trace is on.
            if (Trace.isOn()) Trace.logNamedDebugTraceData("ComIbmFileReadNode", str1, "Value of MQSI_FILENODES_MAXIMUM_RECORD_LENGTH is not a valid integer: ", str2);
        }
    }
    if (Trace.isOn()) Trace.logNamedDebugExitData("ComIbmFileReadNode", str1, "maxRecordLength='" + maxRecordLength + "'");
}
As you can see, if you don't switch tracing on, you never see the error message.
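A quick standalone check shows what happens to the 10 GB value - nothing Broker-specific here:

Code:
public class MaxRecordLengthCheck {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);  // 2147483647, the ceiling
        try {
            Integer.parseInt("10737418240");    // the 10 GB value kirank set
        } catch (NumberFormatException e) {
            // Exceeds Integer.MAX_VALUE, so the parse fails and
            // maxRecordLength silently keeps its previous value.
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}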
By the way, it is possible to process files up to 2 GB with a JavaCompute node without setting MaxHeapSize to 2 GB.
mqjeff
Posted: Thu Jun 27, 2013 3:00 am    Post subject: Re: File Output Node Large File
Grand Master
Joined: 25 Jun 2008 Posts: 17447
Tibor wrote:
By the way, it is possible to process files up to 2 GB with a JavaCompute node without setting MaxHeapSize to 2 GB.

Yes, using streaming techniques.

It's also possible to process a file as big as you want in chunks using the BLOB domain, without any knowledge of the file's structure to use as "records". FileInput and FileOutput both support fixed-length sections instead of record sections, and FileOutput can append multiple chunks to the same file.
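As a rough plain-Java analogy for that pattern - this is not the Broker API, and the file names and chunk size are assumptions - each fixed-length chunk is read and appended independently, so only one chunk is ever in memory:

Code:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedBlobCopy {
    public static void main(String[] args) throws IOException {
        byte[] chunk = new byte[100000];   // one fixed-length "record"
        try (InputStream in = new FileInputStream("big.dat")) {
            int n;
            while ((n = in.read(chunk)) > 0) {
                // Open in append mode for each chunk, mirroring a flow
                // where every propagated message adds one record to the
                // same output file.
                try (OutputStream out = new FileOutputStream("copy.dat", true)) {
                    out.write(chunk, 0, n);
                }
            }
        }
    }
}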
bdebruin
Posted: Fri Aug 09, 2013 2:49 pm
Novice
Joined: 14 Apr 2010 Posts: 12
We had an issue where we had to process the file as a whole so that WTX could transform it. We could not use burst mode and process a record at a time, because the file was an 835 and the adjustment segments were at the end, so we had to treat the file as a whole.

Our size requirement was 1 GB, but we were able to process 10 GB files by gzipping the file with a shell script and passing the compressed file (roughly 100:1 compression) through the FileInput node. The files were copybook-formatted - 4000-byte records with the majority of the data being spaces.

More information at debruinconsulting.com/fast.php
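The compression step itself is just gzip from a script; the Java equivalent, for reference, is only a few lines (file names here are made up). Space-padded copybook records are repetitive enough that ratios on the order of 100:1 are plausible for this kind of data.

Code:
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

public class GzipForFileInput {
    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[8192];
        try (InputStream in = new FileInputStream("edi835.dat");
             OutputStream out = new GZIPOutputStream(
                     new FileOutputStream("edi835.dat.gz"))) {
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);   // space-heavy records compress very well
            }
        }
    }
}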