gpklos
Posted: Mon Aug 05, 2002 9:51 am    Post subject: MQBrowse(cursor) vs. MQGET
Centurion
Joined: 24 May 2002    Posts: 108
We have a client application that removes messages from a queue. The process basically reads an "error queue" with a browse (MQGET with MQGMO_BROWSE_NEXT) and then removes the message if another condition is met, using the MQGMO_MSG_UNDER_CURSOR option. Once we are done with the error queue we read another queue using just a plain MQGET, which removes each message from the queue immediately.

We ran into a case where the C program that does the processing just quits while it is working through the "error" queue and there are more than 763 messages on it. However, it can remove any number of messages from the other queue. The only difference between the two queues is that we browse and then remove the messages from the first queue, and just use a destructive get on the second.

We think it is a memory issue with the C program. Does the browse cursor take up more memory than just doing an MQGET? I.e., does it take more memory to browse and then delete 800 messages versus just destructively getting 800 of them? Does the cursor mean a pointer is held to all the records?

Any help would really be appreciated.
Gary
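
For reference, here is a minimal sketch of the pattern described above: browse the error queue, then destructively get the message under the cursor when the extra condition is met. It assumes hConn and hObj come from earlier MQCONN/MQOPEN calls (queue opened with MQOO_BROWSE | MQOO_INPUT_SHARED); message_matches_condition() is a hypothetical stand-in for the application's own test, and error handling is abbreviated.

Code:

/*
 * Browse the error queue and remove matching messages with
 * MQGMO_MSG_UNDER_CURSOR.  Sketch only; not the poster's actual code.
 */
#include <string.h>
#include <cmqc.h>

int message_matches_condition(MQBYTE *msg, MQLONG len);   /* hypothetical */

void drain_error_queue(MQHCONN hConn, MQHOBJ hObj)
{
    MQMD   md      = {MQMD_DEFAULT};
    MQGMO  browse  = {MQGMO_DEFAULT};
    MQGMO  destroy = {MQGMO_DEFAULT};
    MQBYTE buffer[4096];
    MQLONG dataLen, compCode, reason;

    browse.Options  = MQGMO_BROWSE_NEXT | MQGMO_NO_WAIT | MQGMO_FAIL_IF_QUIESCING;
    destroy.Options = MQGMO_MSG_UNDER_CURSOR | MQGMO_NO_WAIT;

    for (;;)
    {
        /* Reset the ids so the browse is not limited to the values the
         * previous MQGET copied back into the message descriptor.      */
        memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));
        memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId));

        MQGET(hConn, hObj, &md, &browse, sizeof(buffer), buffer,
              &dataLen, &compCode, &reason);
        if (reason == MQRC_NO_MSG_AVAILABLE)
            break;                             /* nothing left to browse */
        if (compCode == MQCC_FAILED)
            break;                             /* log reason and stop    */

        if (message_matches_condition(buffer, dataLen))
        {
            /* Destructively get the message the browse cursor is on. */
            MQGET(hConn, hObj, &md, &destroy, sizeof(buffer), buffer,
                  &dataLen, &compCode, &reason);
        }
    }
}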
poki
Posted: Mon Aug 05, 2002 3:35 pm
Newbie
Joined: 05 Aug 2002    Posts: 9    Location: US
There are a lot of performance issues involved with MQ Browse. One should always avoid MQ Browse when there is a large volume of messages to browse over.
We don't have statistics on memory usage, but MQ Browse plus MQ Delete was taking more time than MQ Get (fact).
RogerLacroix
Posted: Mon Aug 05, 2002 7:58 pm
Jedi Knight
Joined: 15 May 2001    Posts: 3264    Location: London, ON Canada
I gotta get me one of those new MQ Delete calls.
Yes, two MQ API calls will take longer than one MQ API call, but sometimes an application requires this design for a variety of reasons (that's not to say a little re-design wouldn't help).
Now back to the problem. gpklos, what was the completion / reason code from the last MQ API call? Your problem could be as simple as the MQ administrator GET-disabling the "error queue", in which case your program just exited gracefully.
If you can get a little more info about the last few operations the program did (with reason codes), it would help to narrow this down.
later
Roger...
gpklos
Posted: Tue Aug 06, 2002 4:34 am    Post subject: mqbrowse vs mqget
Centurion
Joined: 24 May 2002    Posts: 108
All completion codes are normal. The program displays the messages as they come in, and then it just quits. No error, nothing. Plus, the queue is still enabled. Also, if you immediately trigger the queue after the program stops, it will process another 763 messages and then stop again. It does this until the total count of messages gets below 763, and then it clears off the queue.
Thanks!
Gary
bob_buxton
Posted: Tue Aug 06, 2002 4:43 am
Master
Joined: 23 Aug 2001    Posts: 266    Location: England
It is not clear from the initial post why you suspect a memory issue, or how many of the 763 messages on the queue satisfy your conditions for removal.
It is possible to miss messages when doing a browse. The browse maintains a cursor on the last message seen and looks for messages beyond that point, but messages the browse hasn't yet seen can appear on the other side of the cursor, where the browse won't find them. For example, if the queue is in priority order and you have browsed past the high-priority messages, a newly arriving high-priority message won't be seen by a browse-next. Commits of in-syncpoint puts and rollbacks of gets can cause similar problems.
To avoid these problems it is often necessary to periodically restart the browse (see the sketch below), but that leads to re-reading the same messages, which at the least impacts performance and may, depending on what else you are doing, be undesirable.
MQ works best when destructive gets are used to keep queue sizes small.
Bob
_________________
Bob Buxton
Ex-WebSphere MQ Development
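
Under the same assumptions as the earlier sketch (hConn/hObj already obtained, queue open for browse), a periodic restart might look something like this; the function name and buffer handling are illustrative, not the poster's actual code.

Code:

/*
 * Browse the next message; if the cursor has run off the end of what it
 * can see, reposition it to the head of the queue with MQGMO_BROWSE_FIRST
 * so messages that arrived behind the cursor are picked up on the next
 * pass (at the cost of re-reading earlier ones).
 */
#include <string.h>
#include <cmqc.h>

MQLONG browse_next_with_restart(MQHCONN hConn, MQHOBJ hObj, MQMD *md,
                                MQBYTE *buffer, MQLONG bufLen,
                                MQLONG *dataLen, MQLONG *compCode)
{
    MQGMO  gmo = {MQGMO_DEFAULT};
    MQLONG reason;

    gmo.Options = MQGMO_BROWSE_NEXT | MQGMO_NO_WAIT;
    memcpy(md->MsgId,    MQMI_NONE, sizeof(md->MsgId));
    memcpy(md->CorrelId, MQCI_NONE, sizeof(md->CorrelId));

    MQGET(hConn, hObj, md, &gmo, bufLen, buffer, dataLen, compCode, &reason);

    if (reason == MQRC_NO_MSG_AVAILABLE)
    {
        /* Restart the browse from the front of the queue. */
        gmo.Options = MQGMO_BROWSE_FIRST | MQGMO_NO_WAIT;
        MQGET(hConn, hObj, md, &gmo, bufLen, buffer, dataLen, compCode, &reason);
    }
    return reason;
}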
gpklos
Posted: Tue Aug 06, 2002 6:16 am    Post subject: followup
Centurion
Joined: 24 May 2002    Posts: 108
We suspect a memory condition because the unix administrators told us they put a cap on the amount of memory available to certain processes. Plus, like I said, the program just stops with no error messages anywhere. It is very strange, I admit. Since it is very consistent and it occurs only after browsing so many messages, that was the conclusion. We are (hopefully) testing the application today to try and recreate the situation. If we do recreate it, we will change from a browse to a get and see if it still occurs. The browse is very important because we don't want to take the message off the queue until we have processed the information in the message into an Oracle database. Once we get a good return code from Oracle, we remove the message from the queue using MQGMO_MSG_UNDER_CURSOR.
I hope this explains some of it better.
Gary
bduncan
Posted: Tue Aug 06, 2002 9:56 am
Padawan
Joined: 11 Apr 2001    Posts: 1554    Location: Silicon Valley
Gary,
Your method is not the standard way of doing such an operation. If your desire is to not remove the message until you get a good completion code from Oracle, then you need to use Syncpoint. Removing the message within the context of a unit of work (or better yet XA coordination with Oracle) guarantees that the message won't actually be removed from the queue until you issue the MQCMIT. If there is an error in Oracle, simply issue an MQBACK. If you use XA, you get the added benefit of combining the MQSeries and Oracle transactions into a single unit of work that can be backed out or committed as an atomic action. Issuing a single MQGET under syncpoint followed by an MQCMIT is far more efficient than doing a browse, followed by another MQGET.
Plus, your method doesn't protect against the second MQGET failing. What if your application dies after making the update in Oracle, but hasn't yet completed the MQGET to remove the message? Now you have the potential for duplicate updates, etc...
Syncpoint is the way to go.
_________________
Brandon Duncan
IBM Certified MQSeries Specialist
MQSeries.net forum moderator
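
A minimal sketch of the syncpoint approach described above, assuming hConn/hObj from earlier MQCONN/MQOPEN calls and queue-manager-coordinated (single-phase) commit rather than full XA; update_oracle() is a hypothetical stand-in for the application's database update.

Code:

/*
 * Get a message under syncpoint, update Oracle, then commit or back out.
 * Sketch only; error handling is abbreviated.
 */
#include <string.h>
#include <cmqc.h>

int update_oracle(MQBYTE *msg, MQLONG len);   /* hypothetical: 0 = success */

int process_one_message(MQHCONN hConn, MQHOBJ hObj)
{
    MQMD   md  = {MQMD_DEFAULT};
    MQGMO  gmo = {MQGMO_DEFAULT};
    MQBYTE buffer[4096];
    MQLONG dataLen, compCode, reason;

    gmo.Options = MQGMO_SYNCPOINT | MQGMO_NO_WAIT | MQGMO_FAIL_IF_QUIESCING;
    memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));
    memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId));

    /* The get is destructive, but the message is not actually removed
     * from the queue until the unit of work is committed.              */
    MQGET(hConn, hObj, &md, &gmo, sizeof(buffer), buffer,
          &dataLen, &compCode, &reason);
    if (compCode == MQCC_FAILED || reason == MQRC_NO_MSG_AVAILABLE)
        return 0;

    if (update_oracle(buffer, dataLen) == 0)
        MQCMIT(hConn, &compCode, &reason);    /* get becomes permanent   */
    else
        MQBACK(hConn, &compCode, &reason);    /* message reappears       */

    return 1;
}

With single-phase coordination there is still a small window where the Oracle commit succeeds but the MQCMIT fails and the message is redelivered, so the database update should tolerate a repeat; the XA coordination Brandon mentions closes that window by making both updates one atomic unit of work.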
bob_buxton
Posted: Wed Aug 07, 2002 12:19 am    Post subject: Re: followup
Master
Joined: 23 Aug 2001    Posts: 266    Location: England
gpklos wrote:
... Plus, like I said, the program just stops with no error messages anywhere. It is very strange, I admit. ...
Gary
Even if your program does not issue error messages, it must have a loop to process messages and some logic to terminate that loop when it thinks it has run out of messages. We need to determine which of your loop-terminating conditions has occurred. I would not expect you to get a normal 2033 (no message available) code from MQ if you are running short of storage, but if you simply exit when you receive anything other than an OK response from MQ or Oracle, your program may be concealing a significant problem from you.
Your mention of Oracle makes me wonder whether your problems are on that side. How often do you commit your updates to Oracle? Could it be running short of storage trying to maintain a large unit of work?
I certainly agree with Brandon that using syncpoint to coordinate your MQ and Oracle updates is the correct way to go. You may not need to issue a commit after every MQGET/Oracle update pair; you could do it after every 20 messages (see the sketch below). Then if you do have a failure you have a maximum of 20 messages to reprocess.
Bob
_________________
Bob Buxton
Ex-WebSphere MQ Development
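
A sketch of the batched commit suggested above, assuming a hypothetical get_and_update_in_syncpoint() helper that issues one MQGET under MQGMO_SYNCPOINT plus the matching Oracle update, returns 0 when the queue is empty, and leaves the commit to the caller.

Code:

/*
 * Commit once per batch of messages instead of once per message, so that
 * a failure means reprocessing at most BATCH_SIZE messages.
 */
#include <cmqc.h>

#define BATCH_SIZE 20

int get_and_update_in_syncpoint(MQHCONN hConn, MQHOBJ hObj);  /* hypothetical */

void process_in_batches(MQHCONN hConn, MQHOBJ hObj)
{
    MQLONG compCode, reason;
    int    inBatch = 0;

    while (get_and_update_in_syncpoint(hConn, hObj))
    {
        if (++inBatch == BATCH_SIZE)
        {
            MQCMIT(hConn, &compCode, &reason);  /* harden the batch */
            inBatch = 0;
        }
    }
    if (inBatch > 0)
        MQCMIT(hConn, &compCode, &reason);      /* commit the final partial batch */
}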