(Resolved) AdoptNewMCA Question
csmith28
Posted: Fri Nov 05, 2004 2:33 pm    Post subject: (Resolved) AdoptNewMCA Question
Grand Master
Joined: 15 Jul 2003   Posts: 1196   Location: Arizona
MQ5.3.0.6
AIX5.1.0.0
Occasionally I encounter an issue in which, for reasons* that are not apparent, one of my MQManager Servers, which services roughly 20 remote 5.3.0.0** Client applications (mostly WebSphere Application Server 4.0.0.6 Java and JMS connections), will drop off the network for anywhere from a few seconds to 30 minutes. I have taken to calling it a Network Napp.
When this happens the stdout.logs for these applications begin to fill very rapidly with "2009 MQRC_CONNECTION_BROKEN" errors. Historically these application instances have to be bounced to restore service, but my SDR/RCVR channels to remote MainFrame, AS/400 and Tandem MQManagers pick right up where they left off when the network connection is restored. Just as they should.
What I am wondering is, if I set:
Code:
AdoptNewMCA=ALL
AdoptNewMCACheck=ALL
in my qm.ini file, will that allow these Client applications to reconnect to my MQManager without having to be bounced, or does this not apply to SVRCONN channels?
KeepAlive is already set to YES.
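For reference, a sketch of where those attributes would sit in qm.ini, using the standard stanza layout; the values shown are only illustrative of what the poster is proposing:

```ini
; qm.ini for the queue manager (e.g. /var/mqm/qmgrs/MQMGR/qm.ini)
CHANNELS:
   AdoptNewMCA=ALL
   AdoptNewMCACheck=ALL
TCP:
   KeepAlive=Yes
```

The queue manager reads these stanzas at startup, so a restart is needed for changes to take effect.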
*No .FDC files are generated; errpt -a | more shows no hardware failures regarding the NIC and no software failures regarding inetd or TCP/IP. My /var/mqm/qmgrs/MQMGR/errors/AMQERR01.LOG only shows AMQ9999 (channel ended abnormally) for my SDR/RCVR Channels and a gank load of "TCP/IP Connection reset by peer" entries. I got the Network Group to put a Sniffer on the MQ Server for about three weeks once. The problem recurred once while the Sniffer was in place, but by the time we got someone from the Network group to join the Bridge, service had been restored and the Sniffer Buffer was already full of normal traffic, having pushed out any information from the "Network Napp" that the server took.
** Yeah I know. I am working on upgrading the Clients to 5.3.0.6 right now. It's a whole nother story.
Oddly enough, historically this only seems to happen to the Production MQManager Server. It never happens in the DEV, IntTest or QA environments.
_________________
Yes, I am an agent of Satan but my duties are largely ceremonial.
Last edited by csmith28 on Fri Nov 05, 2004 4:30 pm; edited 2 times in total
fjb_saper
Posted: Fri Nov 05, 2004 2:46 pm
Grand High Poobah
Joined: 18 Nov 2003   Posts: 20756   Location: LI,NY
From what I understand, the AdoptNewMCA attribute allows you to reallocate the existing channel instance to the new request, instead of keeping it allocated on the server and allocating a new resource to the request.
So in essence your application will still have to handle the reconnect after the 2009 disconnect. AdoptNewMCA lets you optimize handling of the resource so you don't run out of available channel instances because you kept the ones that disconnected due to network problems.
Enjoy
csmith28
Posted: Fri Nov 05, 2004 3:21 pm
Grand Master
Joined: 15 Jul 2003   Posts: 1196   Location: Arizona
Yeah, when I was reading about AdoptNewMCA and the related attributes I noticed that SVRCONN was not an option that could be specified. It did say something about channels that use TCP, and something else about channels managed by the amqcrsta process, which can include SVRCONN channels, but I guess none of this applies to SVRCONN Channels even if ALL is specified.
Wishful thinking I guess.
_________________
Yes, I am an agent of Satan but my duties are largely ceremonial.
PeterPotkay
Posted: Fri Nov 05, 2004 4:06 pm    Post subject: Re: (Resolved) AdoptNewMCA Question
Poobah
Joined: 15 May 2001   Posts: 7722
csmith28 wrote:
MQ5.3.0.6
AIX5.1.0.0
Occasionally I encounter an issue in which, for reasons* that are not apparent, one of my MQManager Servers, which services roughly 20 remote 5.3.0.0** Client applications (mostly WebSphere Application Server 4.0.0.6 Java and JMS connections), will drop off the network for anywhere from a few seconds to 30 minutes. I have taken to calling it a Network Napp.
When this happens the stdout.logs for these applications begin to fill very rapidly with "2009 MQRC_CONNECTION_BROKEN" errors. Historically these application instances have to be bounced to restore service, but my SDR/RCVR channels to remote MainFrame, AS/400 and Tandem MQManagers pick right up where they left off when the network connection is restored. Just as they should.
MQ working as designed. A RCVR channel listens on a socket for the next transmission from the SNDR side, and will not allow another SNDR to grab on. You can allow this by enabling AdoptNewMCA, and/or (preferably and) using Heartbeats, so the RCVR channel knows when the next transmission from the SNDR will arrive (the HB), and it can give up on that orphaned connection and get ready for a new one.
A SVRCONN channel is different. The SVRCONN channel only acts as a "model" for new SVRCONN/CLNTCONN connections. The base SVRCONN doesn't get hung up on an old connection, because it doesn't have any connections. Its children do. The way they get cleaned up off of orphaned connections is KeepAlive. The better way is for the apps to send an MQDISC, but sometimes that doesn't happen.
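A sketch of the heartbeat tuning Peter describes, in MQSC; the channel names here are made up, and the interval (in seconds) is only illustrative:

```
* Hypothetical channel names; HBINT is negotiated between the two ends
ALTER CHANNEL(TO.PROD.QM) CHLTYPE(RCVR) HBINT(300)
ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) HBINT(300)
```

On a SVRCONN, the heartbeat only flows while a client is waiting in an MQGET, so KeepAlive in the TCP stanza remains the main cleanup mechanism for orphaned client connections.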
I guess a way to think of SVRCONN channels versus RCVR type channels is that SVRCONN defs are like Model queues. When an app opens a Model queue, you get a new separate queue, and the parent model queue is left alone. Same thing for SVRCONN channels. A successful connection attempt spawns off new channel instances.
Back to your problem. SNDR channels, the MCAs really, are the first MQ programs ever written. They know the right way to get and put messages. And they were coded correctly to retry their connection to the remote side according to the channel defs until they reconnect.
Like fjb_saper said, it doesn't sound like your client apps are coded to give up on the orphaned queue handle. They just keep banging their head and getting the 2009. Instead they should drop into a fresh MQCONN / MQOPEN at the first 2009 error and get back to work with a new connection, or keep retrying that MQCONN until network nappy time is over.
_________________
Peter Potkay
Keep Calm and MQ On
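A minimal sketch of the retry pattern Peter recommends. This is not IBM's MQ/JMS API — the exception class, reason-code field, and helper are all stand-ins — it only shows the shape of "give up the orphaned handle, back off, reconnect" that a real app would wrap around its MQCONN/MQOPEN (or JMS connection factory) calls:

```java
import java.util.concurrent.Callable;

public class ReconnectSketch {
    // 2009 = MQRC_CONNECTION_BROKEN
    static final int MQRC_CONNECTION_BROKEN = 2009;

    // Stand-in for a JMS/MQ exception carrying an MQ reason code.
    static class BrokenConnection extends Exception {
        final int reasonCode;
        BrokenConnection(int rc) { super("MQ reason " + rc); reasonCode = rc; }
    }

    // Run the operation; on a 2009, back off and retry up to maxAttempts
    // times. A real app would close/MQDISC the dead handle here and then
    // MQCONN/MQOPEN fresh before the next attempt.
    static <T> T withReconnect(Callable<T> op, int maxAttempts, long backoffMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (BrokenConnection e) {
                if (e.reasonCode != MQRC_CONNECTION_BROKEN || attempt >= maxAttempts)
                    throw e;                 // not recoverable, or out of retries
                Thread.sleep(backoffMs);     // wait out the "Network Napp"
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a connection that is broken for the first two attempts.
        int[] failuresLeft = {2};
        String result = withReconnect(() -> {
            if (failuresLeft[0]-- > 0)
                throw new BrokenConnection(MQRC_CONNECTION_BROKEN);
            return "reconnected";
        }, 5, 10);
        System.out.println(result); // prints "reconnected"
    }
}
```

The key design point is that the loop discards the broken handle and builds a new one each pass, rather than retrying the dead handle and collecting 2009s forever.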
csmith28
Posted: Fri Nov 05, 2004 4:49 pm
Grand Master
Joined: 15 Jul 2003   Posts: 1196   Location: Arizona
Thanks Peter & fjb.
_________________
Yes, I am an agent of Satan but my duties are largely ceremonial.