MQSeries.net Forum Index » General IBM MQ Support » MQ Port Automatically Stopping - Resolution Query

osullivj35
PostPosted: Thu Feb 05, 2009 9:16 am    Post subject: MQ Port Automatically Stopping - Resolution Query

Newbie

Joined: 13 Jan 2009
Posts: 6

Hi,

We have had an ongoing problem whereby an MQ port in WebSphere would stop for some unknown reason. We finally identified the error below in our SystemOut logs.

I looked at the article - http://www.ibm.com/developerworks/websphere/library/techarticles/0405_titheridge/0405_titheridge.html - and it appears that we may be getting some poison messages and could be handling them more gracefully. The suggestion is to set the backout threshold property > 0 and less than the listener port's Maximum retries property.

    * Does this seem like a reasonable cause of the problem?
    * Are there any other impacts my suggested change might have?
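For anyone following along: the backout threshold and backout queue are attributes of the underlying MQ queue, while Maximum retries is a property of the WAS listener port. A minimal MQSC sketch of the queue side, using hypothetical queue names (the JMS destination jms/BPP_RLL_PR_REQUEST would resolve to whatever local queue it is actually bound to):

```mqsc
* Hypothetical queue names - substitute your real input queue.
DEFINE QLOCAL('BPP.RLL.PR.REQUEST.BACKOUT') REPLACE

ALTER QLOCAL('BPP.RLL.PR.REQUEST') +
      BOTHRESH(1) +
      BOQNAME('BPP.RLL.PR.REQUEST.BACKOUT')
```

Per the article, BOTHRESH must be greater than 0 and less than the listener port's Maximum retries (e.g. BOTHRESH 1, Maximum retries 2), so the JMS provider requeues the message before the listener shuts itself down.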


Thanks
Jerry

Quote:
[2/4/09 12:13:13:041 GMT] 11f4b84a TranManagerIm I WTRN0041I: Transaction
57415344:00000000000025210000009447e9980529e3310521cf1692c01301b7f2730b0d47454f50524453657276657232[]
has been rolled back.
[2/4/09 12:13:13:147 GMT] 11f4b84a ServerSession W WMSG0036E: Maximum message delivery retry count of 0 reached for
MDB BPSRiskMDBBean, JMSDestination jms/BPP_RLL_PR_REQUEST, MDBListener stopped
bruce2359
PostPosted: Thu Feb 05, 2009 9:40 am

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

The application program(mer) must decide what the appropriate action is when a poison message arrives on the queue. The URL you referenced offers three possibilities.

Backing out the message seems like an odd choice since the consuming application would just get the same message a second time, and discover that it is poison, back it out once again, then get it a third time, a fourth time, a fifth time...

Perhaps the application should move the message to some other queue for special handling.

Poison messages don't mysteriously show up; rather, some program created them. So, the corrective action involves both moving the poison message elsewhere AND getting the application that creates the poison messages to stop doing so.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
mqjeff
PostPosted: Thu Feb 05, 2009 9:49 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

After the backout count has exceeded the backout threshold, messages should be moved to the backout requeue queue, and not put to the input queue again. In plain MQ applications, the application programmer must explicitly program this behavior. In Message Broker and JMS, the server environment (MQInput node or JMS provider) will handle this for you.

The information osullivj35 has posted is how to configure the JMS definitions to cause the JMS provider to perform backout requeue processing.
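In a plain MQ application, the check is against the message's BackoutCount from the message descriptor. A minimal, self-contained Java sketch of just the decision logic (the actual MQGET/MQPUT calls, queue names, and threshold lookup are application-specific and omitted here):

```java
// Sketch of explicit backout-requeue decision logic for a plain MQ app.
// backoutCount would come from the MQMD BackoutCount field; the threshold
// from the input queue's BOTHRESH attribute.
public class BackoutPolicy {
    public enum Action { PROCESS, REQUEUE_TO_BACKOUT }

    public static Action decide(int backoutCount, int backoutThreshold) {
        // Once the message has been backed out as many times as the
        // threshold allows, stop retrying and move it aside.
        if (backoutThreshold > 0 && backoutCount >= backoutThreshold) {
            return Action.REQUEUE_TO_BACKOUT;
        }
        return Action.PROCESS;
    }

    public static void main(String[] args) {
        System.out.println(decide(0, 1)); // first delivery: process it
        System.out.println(decide(1, 1)); // already backed out once: requeue
    }
}
```

The requeue branch is where the application would put the message to the BOQNAME queue (under syncpoint, in the same unit of work as the get) before committing.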
SAFraser
PostPosted: Thu Feb 05, 2009 11:00 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

As a general rule, I create a backout queue explicitly for each MQ queue where messages will be processed by an MDB. (Or you can have a single backout queue where backed-out messages are commingled.) I set the BO threshold to 1 on the MQ input queue, and the max retries on the WAS listener port to 2.

The exception to this would be if an application team has a business requirement such that they want the MDB to die if a message can't be processed. In that case, of course, I wouldn't set a max retry value. And in some cases, you might want the BO threshold to be greater than 1; just depends on your business requirements.

With that said, there is a reason that the message can't be processed, and I would press the application team for that. Also, I find that application teams allow backout messages to languish in the backout queue rather than investigating them, so that requires a bit of pushing too.
bruce2359
PostPosted: Thu Feb 05, 2009 11:17 am

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

A well-behaved application design should contemplate poison messages, as well as the more traditional problems: messages that end up on the DLQ, missing or duplicate requests, missing or duplicate replies.

And I'm not holding my breath ...
SAFraser
PostPosted: Thu Feb 05, 2009 11:26 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

In this case, the application never gets the message in order to handle it.

The application itself (on our site) has a second queue to which it can write messages that it can't process. Our most well-behaved applications repackage the input message with additional data that describes the nature of the failure.
mqjeff
PostPosted: Thu Feb 05, 2009 11:51 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

bruce2359 wrote:
A well-behaved application design should contemplate poison messages


Yes, and using Backout Retry Queues is a reasonable way to do it.
bruce2359
PostPosted: Thu Feb 05, 2009 11:57 am

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

I continue to be amazed and amused by developers who ask "...why would that happen?!" (why would a poison message end up in a queue), and "...why should I be concerned about it?!" (doesn't the qmgr take care of that)

One developer asked me what ReasonCode he should expect to receive if the message was a poison message.
Vitor
PostPosted: Thu Feb 05, 2009 12:39 pm

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

bruce2359 wrote:
Backing out the message seems like an odd choice since the consuming application would just get the same message a second time, and discover that it is poison, back it out once again, then get it a third time, a fourth time, a fifth time...


This allows for the message to be retried in the event of an external problem; the message itself is fine, but can't be processed at the time of first reading.

bruce2359 wrote:
Perhaps the application should move the message to some other queue for special handling.


There does come a point where you give in and move it to a backout queue.

bruce2359 wrote:
Poison messages don't mysteriously show up; rather, some program created it. So, the corrective action involves both moving the poison message elsewhere AND having the contact admin message creating application stop doing so.


There is another class of poison message. Consider this:

The message is read, processed and results in a database update. For reasons utterly unconnected to the message or the reading application the page on the database is locked (e.g. a separate application updating separate items). The lock's not going to last very long so it's an accidental collision that can be resolved by retrying.

Consider also this:

The message is read, and further processing requires information from a remote system. The remote system's having a spot of bother, and replies are timing out. Nothing wrong with the input message or the request, but no reply. It's reasonable to have another go to see if there's more luck next time.

In both these instances you don't want the backout threshold too high, because there's a point where you give in (either there's a persistent problem or the message is in fact invalid). Where "too high" should be set is a design decision based on many, many factors.
_________________
Honesty is the best policy.
Insanity is the best defence.
SAFraser
PostPosted: Thu Feb 05, 2009 1:21 pm

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

As is often the case, Vitor states the case more clearly than I have (despite his inexplicable continuing allegiance to that wretched Windows logo).

The message can be rejected before it is processed by the application (as in my previous example, for a structural problem with the message itself). In the absence of a retry strategy, the MDB will shut itself down. We route these messages, via max retries and BOTHRESH, to a backout queue. The input message itself was bad.

However, as Vitor points out, the message may be good (not poison at all). But a downstream process may be down, a record may not be found in a DB, a lock exists someplace. In this case (in our experience), in the absence of exception handling in the application, the data goes to the Great Bit Bucket in the Sky. From the MDB's perspective, the message was successfully consumed.

So, we have two queues: one for backout (input message could not be consumed) and one for exceptions (input message was consumed but could not be processed).
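The two-queue split described above can be sketched as pure routing logic (queue names are hypothetical; in a real MDB the "backout" path is a thrown exception and rollback, while the "exception" path is an explicit put performed by the application):

```java
// Sketch of the two-queue pattern: messages that cannot be CONSUMED
// (structurally bad) are rolled back and eventually routed to the backout
// queue via BOTHRESH/BOQNAME; messages that are consumed but cannot be
// PROCESSED (downstream failure) are written to an exception queue by the
// application itself. Queue names here are hypothetical.
public class MessageRouter {
    public static String route(boolean structurallyValid, boolean processedOk) {
        if (!structurallyValid) {
            // Roll back; the provider requeues it to the backout queue
            // once the backout threshold is reached.
            return "BACKOUT.QUEUE";
        }
        if (!processedOk) {
            // Consumed fine, but e.g. DB locked or downstream timeout:
            // repackage with failure details, put to the exception queue.
            return "EXCEPTION.QUEUE";
        }
        return "DONE";
    }

    public static void main(String[] args) {
        System.out.println(route(false, false));
        System.out.println(route(true, false));
        System.out.println(route(true, true));
    }
}
```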
bruce2359
PostPosted: Thu Feb 05, 2009 1:35 pm

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

Yes, my example of a single defective (invalid data) message with no message affinity, application affinity, DB affinity, or downstream application affinity, is simple. If the message is poison at first get, it will continue to be poison no matter how many times the same application gets it and tries to process it.

I agree that a poison message doesn't have to be defective; rather, it may not meet the application's requirements for that instance of the transaction (DB not available, downstream app not responding, ...).

For more convoluted (less simple) applications, a backout threshold greater than one may be appropriate. Using your examples, the appropriate re-processing will be entirely application dependent and equally convoluted.
fjb_saper
PostPosted: Thu Feb 05, 2009 7:17 pm

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20756
Location: LI,NY

SAFraser wrote:
However, as Vitor points out, the message may be good (not poison at all). But a downstream process may be down, a record may not be found in a DB, a lock exists someplace. In this case (in our experience), in the absence of exception handling in the application, the data goes to the Great Bit Bucket in the Sky. From the MDB's perspective, the message was successfully consumed.

So, we have two queues: one for backout (input message could not be consumed) and one for exceptions (input message was consumed but could not be processed).


You mean to say that you have MDBs with transaction "not supported" set? Ours all have "requires new" set for transactionality. This would mean that in the case where you cannot process, you raise the correct exception and the message gets rolled back ... until it lands either on the DLQ (no backout queue specified) or the backout queue, or the MDB stops (no BOTHRESH).
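For reference, the container transaction attribute for an MDB is declared in the ejb-jar.xml assembly descriptor, along these lines (bean name taken from the log earlier in the thread; note the EJB spec only allows Required or NotSupported for an MDB's onMessage method, not RequiresNew):

```xml
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>BPSRiskMDBBean</ejb-name>
      <method-name>onMessage</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```

With Required, a thrown exception (or setRollbackOnly) rolls the get back and the redelivery/backout machinery described above takes over.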
_________________
MQ & Broker admin
bruce2359
PostPosted: Fri Feb 06, 2009 7:16 am

Poobah

Joined: 05 Jan 2008
Posts: 9469
Location: US: west coast, almost. Otherwise, enroute.

What's a good definition of a transaction?

A simple one, a local UofW, involves an application making changes to qmgr objects only, via MQI calls: MQGET, MQPUT, MQCMIT, MQBACK.

A slightly more complex transaction (global UofW, XA-compliant) might involve DB updates in the same UofW as the queue updates - but both on the same o/s instance, or enabled by appropriate enabling software (the Extended Transactional Client, for example).

A much more complicated UofW might, as suggested above, involve downstream updates to other resources outside the usual o/s scope. Or a multi-legged application (a broad, non-technical view of a transaction) may span o/s instances and time, with dependencies beyond the usual (simple) transaction model that WMQ addresses with elegance.
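The simple case, a local unit of work, sketched in MQI pseudocode (queue names illustrative):

```
MQCONN  'QMGR'                       connect to the queue manager
MQOPEN  input queue
MQGET   with MQGMO_SYNCPOINT         destructive get, inside the UofW
MQOPEN  output queue
MQPUT   with MQPMO_SYNCPOINT         put, inside the same UofW
if processing succeeded:
    MQCMIT                           commit: the get and the put both take effect
else:
    MQBACK                           back out: the message returns to the input
                                     queue and its BackoutCount is incremented
MQDISC
```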