What to do if MQPUT process fails?
AlexeiSkate
Posted: Tue May 14, 2002 7:32 am Post subject: What to do if MQPUT process fails?

Centurion
Joined: 10 Apr 2002
Posts: 123
Hi,
I have a simple Java program that opens a file and uses the MQSeries Java API to perform an MQPUT onto a local queue. What are the recommended ways to handle errors if the MQPUT fails, e.g. the TCP/IP connection is disrupted, or the number of messages on the queue exceeds MAXDEPTH?
I was thinking of doing the puts under a unit of work, so that I don't commit until the whole file content has been read and loaded. But isn't there a limit on the number of messages that can be processed under a UOW? If so, how do I handle messages that are already loaded onto the queue when the process fails unexpectedly? I would like the file-loading process to be an all-or-nothing event.
Thanks,
Alex
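A minimal sketch of that unit-of-work approach, using the base MQSeries classes for Java (com.ibm.mq). The queue manager name, queue name and file handling are illustrative assumptions, not Alex's actual program:

import java.io.BufferedReader;
import java.io.FileReader;
import com.ibm.mq.MQC;
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;

public class FileLoader {
    public static void main(String[] args) throws Exception {
        String qmgrName  = "MYQMGR";            // hypothetical queue manager name
        String queueName = "FILE.LOAD.QUEUE";   // hypothetical local queue
        String fileName  = args[0];

        MQQueueManager qmgr = null;
        MQQueue queue = null;
        try {
            qmgr = new MQQueueManager(qmgrName);
            queue = qmgr.accessQueue(queueName,
                    MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);

            MQPutMessageOptions pmo = new MQPutMessageOptions();
            pmo.options = MQC.MQPMO_SYNCPOINT;   // every put joins the unit of work

            BufferedReader in = new BufferedReader(new FileReader(fileName));
            String record;
            while ((record = in.readLine()) != null) {
                MQMessage msg = new MQMessage();
                msg.writeString(record);
                queue.put(msg, pmo);             // not visible to getters until commit
            }
            in.close();

            qmgr.commit();                       // all records become visible at once
            System.out.println("File loaded and committed.");
        } catch (MQException mqe) {
            // e.g. MQRC_Q_FULL (2053), MQRC_CONNECTION_BROKEN (2009),
            // MQRC_SYNCPOINT_LIMIT_REACHED (2024)
            System.err.println("Load failed, MQ reason code " + mqe.reasonCode);
            if (qmgr != null) {
                try {
                    qmgr.backout();              // discard everything put so far
                } catch (MQException ignored) {
                    // if the connection is gone, the queue manager rolls back
                    // the in-flight unit of work itself
                }
            }
        } finally {
            try { if (queue != null) queue.close(); } catch (MQException ignored) { }
            try { if (qmgr != null) qmgr.disconnect(); } catch (MQException ignored) { }
        }
    }
}

Because nothing is committed until the end, a failure part-way through leaves no messages visible on the queue, which gives the all-or-nothing behaviour (subject to the uncommitted-message and log limits discussed below).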
kolban
Posted: Tue May 14, 2002 8:03 am Post subject:

Grand Master
Joined: 22 May 2001
Posts: 1072
Location: Fort Worth, TX, USA
There is indeed a limit to the number of messages that can be put to a queue under syncpoint, but it is configurable on a per-queue-manager basis and is also pretty darn large. You should ensure that you have allocated sufficient resources (log files) to accommodate it. How many messages could be in a file? How big (on average) is each message?
AlexeiSkate
Posted: Tue May 14, 2002 12:22 pm Post subject:

Centurion
Joined: 10 Apr 2002
Posts: 123
Thanks.
The maximum number of records in the file should be 500,000 or less, and each record shouldn't be more than 400 characters long. Since the maximum number of uncommitted messages per syncpoint can be set anywhere from 1 to 999,999,999, can I assume that I'm safe uploading the whole file under one commit?
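For scale, a quick back-of-the-envelope figure from the numbers above (ignoring MQ message headers and per-message log overhead, which add to this):

    500,000 records x 400 bytes per record = 200,000,000 bytes, roughly 190 MB

If the messages are persistent, all of that has to be held in the queue manager's active log until the single commit completes, so the log configuration, rather than the uncommitted-message limit, is likely to be the binding constraint.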
bduncan
Posted: Tue May 14, 2002 2:41 pm Post subject:

Padawan
Joined: 11 Apr 2001
Posts: 1554
Location: Silicon Valley
Remember, as a unit of work grows bigger and bigger, more and more system resources (semaphores, file handles, etc.) are devoted to maintaining it. Typically the hardware and/or operating system will limit the number of messages in a unit of work to far less than whatever hardcoded limit may be present in the MQSeries product. For instance, we noticed that on our Linux machines, as we approached about 10,000 16K messages under one unit of work, the queue manager would suddenly fail.
You should also keep in mind that with 500,000-message units of work, you are running dangerously close to the maximum limit for queue depth (640,000). Since your receiving application can't start processing anything until the entire unit of work has been committed to the queue, the queue depth is guaranteed to go up to the number of messages in the UOW, possibly more if you are sending multiple UOWs to the same queue at once. Also, the receiving application may still be processing a particular UOW when another UOW starts coming in.
Have you considered batching the records together? Instead of 500,000 one-record messages, perhaps consolidate them into 100,000 five-record messages?
The key here, as Neil pointed out, is the log files, though. They are really what allow you to accommodate large UOWs. One of the MQSeries manuals has a section that can help you calculate the size of the logs required for your situation...
_________________
Brandon Duncan
IBM Certified MQSeries Specialist
MQSeries.net forum moderator
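A rough sketch of that batching idea, again with the base MQSeries Java classes. The five-records-per-message figure is just the example above, the newline delimiter and the class/method names are assumptions, and the receiving application would need to split each message back into records:

import java.io.BufferedReader;
import com.ibm.mq.MQC;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;

public class BatchedLoader {
    private static final int RECORDS_PER_MSG = 5;   // batch size from the suggestion above

    // Reads records and puts one message per RECORDS_PER_MSG records,
    // still under syncpoint; the caller commits or backs out as before.
    static void loadBatched(BufferedReader in, MQQueue queue) throws Exception {
        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = MQC.MQPMO_SYNCPOINT;

        StringBuffer batch = new StringBuffer();
        int count = 0;
        String record;
        while ((record = in.readLine()) != null) {
            batch.append(record).append('\n');
            if (++count == RECORDS_PER_MSG) {
                putBatch(queue, pmo, batch);
                batch.setLength(0);
                count = 0;
            }
        }
        if (count > 0) {                             // flush the final partial batch
            putBatch(queue, pmo, batch);
        }
    }

    private static void putBatch(MQQueue queue, MQPutMessageOptions pmo,
                                 StringBuffer batch) throws Exception {
        MQMessage msg = new MQMessage();
        msg.writeString(batch.toString());
        queue.put(msg, pmo);
    }
}

This cuts both the uncommitted-message count and the queue depth by the batching factor, at the cost of a slightly more complicated consumer.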