Large file/Message processing in WBIMB
frikkieo
Posted: Fri Jul 01, 2005 6:02 am    Post subject: Large file/Message processing in WBIMB

Novice
Joined: 16 Jul 2004  Posts: 13  Location: South Africa
I have been asked to investigate whether WBIMB can work through a file/message that can contain more than 200 000 records. The first question I was asked is: is it possible? The second: what would the performance be like?
My first reaction was that if it is a large message and the machine you want to run it on has the resources, it will be possible; otherwise you will run into resource problems.
What would be the best way of doing it? Should each record be written as a separate message (obviously the better option regarding resource usage), or would it be better to write the whole file as a single message and then grind through it (resources permitting)?
Has anyone come across something like this before?
Regards
Frikkie
jefflowrey
Posted: Fri Jul 01, 2005 6:08 am

Grand Poobah
Joined: 16 Oct 2002  Posts: 19981
It all depends on the record size: if it's a 5-byte record, you should be fine!
If the record size is somewhat large, you may not be able to fit all 200 000+ records into a single 100 MB message.
In the general case, it is best to have one record per message. In your specific case, it may be easier or more logical to batch records into larger groups than that.
I think almost everyone has come across this kind of thing before.
_________________
I am *not* the model of the modern major general.
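To make the capacity point concrete, here is a back-of-envelope check (a sketch, not a broker limit: it assumes the queue manager's default 100 MB MaxMsgLength and ignores any per-record parsing overhead):

```python
# Rough capacity check: how large can each record be, on average,
# if all 200 000 records must fit in a single 100 MB MQ message?
MAX_MESSAGE_BYTES = 100 * 1024 * 1024  # assumed 100 MB MaxMsgLength
RECORD_COUNT = 200_000

max_record_bytes = MAX_MESSAGE_BYTES // RECORD_COUNT
print(max_record_bytes)  # ~524 bytes per record
```

So anything beyond roughly half a kilobyte per record already rules out the single-message approach at this volume.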
frikkieo
Posted: Fri Jul 01, 2005 6:37 am

Novice
Joined: 16 Jul 2004  Posts: 13  Location: South Africa
Thanks Jeff. I have previously parsed and transformed ASCII messages of up to and just over 1 MB. But yes, I agree that small messages should be fine to parse and transform, depending on the code written. In a case like this the developer should try to stay away from overusing the Environment.
JohnMetcalfe
Posted: Fri Jul 01, 2005 9:05 am

Apprentice
Joined: 02 Apr 2004  Posts: 40  Location: Edinburgh, Scotland
We have done quite a bit in this space: we are processing file-like transmissions of 40,000+ records through WMQI.
Initially we took the approach of 'messagising' the file, i.e. creating a single MQ message for every record in the file, and grouping the messages together using MQ grouping. We found this very difficult from a support point of view: with loads of messages flowing round the infrastructure, a failure of any single one would compromise the integrity of the transmission. In addition, we hit some performance problems on the end-point adapters; getting 40,000+ messages off the queue could take some time, simply from the overhead of the MQGETs.
The approach we would take now is to partition the file transmission: the sending application adapter chunks up the file and creates a number of messages, each with, say, 10,000 rows in it, then groups them using the MQ grouping functionality. You get a smaller number of messages flowing through the infrastructure, so you can see what is going on, but you keep a lid on the message sizes. It is nice and scalable as well.
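The partitioning approach described above can be sketched in outline. This is a minimal illustration of the chunking and grouping bookkeeping only; the record source, chunk size, and message fields are assumptions, and a real sender would carry the group id, sequence number, and last-in-group flag in the MQMD (GroupId / MsgSeqNumber and the last-message-in-group flag) rather than in a dict:

```python
import itertools
import uuid

def chunk_records(records, chunk_size=10_000):
    """Yield successive lists of at most chunk_size records."""
    it = iter(records)
    while True:
        chunk = list(itertools.islice(it, chunk_size))
        if not chunk:
            return
        yield chunk

def build_group_messages(records, chunk_size=10_000):
    """Build one logical message per chunk, all tagged with the same
    group id and an ascending sequence number, mirroring MQ message
    grouping semantics (GroupId / MsgSeqNumber / last-in-group)."""
    group_id = uuid.uuid4().hex  # stands in for the MQ GroupId
    chunks = list(chunk_records(records, chunk_size))
    messages = []
    for seq, chunk in enumerate(chunks, start=1):
        messages.append({
            "group_id": group_id,
            "seq": seq,                        # MsgSeqNumber starts at 1
            "last_in_group": seq == len(chunks),
            "payload": chunk,
        })
    return messages

# Example: 25 000 records become three messages of 10 000/10 000/5 000 rows.
msgs = build_group_messages(range(25_000))
print([len(m["payload"]) for m in msgs])  # [10000, 10000, 5000]
print(msgs[-1]["last_in_group"])          # True
```

The receiving side can then use the sequence numbers and the last-in-group flag to verify that the whole transmission arrived before processing it, which is what restores the integrity that the one-message-per-record approach made hard to track.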
frikkieo
Posted: Sun Jul 03, 2005 10:12 pm

Novice
Joined: 16 Jul 2004  Posts: 13  Location: South Africa
Thanks guys, I really appreciate the response.
Cheers