MQSeries.net Forum Index » Clustering » Default Workflow does not seem happy
msklie
Posted: Wed Jan 28, 2004 7:18 am    Post subject: Default Workflow does not seem happy

Newbie

Joined: 11 Jul 2002
Posts: 7

We have QM1 and QM2 as the full repositories. Both host Q.TEST. One day QM3 (part of the cluster, but not a repository) was sending messages to Q.TEST as usual when the channel TO.QM2 went into RETRY. It seemed as though half of our messages stayed on SYSTEM.CLUSTER.TRANSMIT.QUEUE until the TO.QM2 channel went back to RUNNING.

No special user exit is used here. I'm wondering why those messages didn't just get sent over TO.QM1 instead, since QM1 also hosts the clustered queue Q.TEST.

We were running MQ 5.2 (no CSD) at the time. We have since upgraded to CSD 8, but have not been able to test yet. Any thoughts will be greatly appreciated.
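For anyone hitting something similar, a few runmqsc commands on the sending queue manager show where the messages are bound. This is only a sketch using the channel and queue names from this thread; also worth checking is the queue's DEFBIND setting, since messages put under the default DEFBIND(OPEN) stay fixed to the instance chosen at MQOPEN rather than being re-routed when a channel goes into retry:

```
DIS CHSTATUS(TO.QM2) STATUS MSGS
DIS CHSTATUS(TO.QM1) STATUS MSGS
DIS QL(SYSTEM.CLUSTER.TRANSMIT.QUEUE) CURDEPTH
DIS QCLUSTER(Q.TEST) CLUSQMGR DEFBIND
```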
mqonnet
Posted: Wed Jan 28, 2004 8:39 am

Grand Master

Joined: 18 Feb 2002
Posts: 1114
Location: Boston, MA, USA

I am not sure about the platform and OS specifics (or whether any fixes were applied on top of the base level). But what should have happened is this: when the channel to one full repository is detected in retry state, the queue manager should forward messages to the other full repository queue manager, unless of course there were errors on that side as well. There can be many other factors involved too, such as message size, traffic, and how busy the CPU is.

At times all of these make it look as though the messages are stuck, since the putting application is "always" faster than the MQ resources that have to service it.

So, the questions to ask would be: when your channel went into retry, how much workload was there, and how many applications were putting to this clustered queue? And how do you determine, for sure, that the messages were in fact not being forwarded to the other queue manager, the one whose channel connection is fine? Is there a message ID or other unique identifier with which you can confirm this? Without that, it is guesswork to say that the messages were not being transmitted. Another question to ask: did you ever look at the cluster transmit queue when everything is fine?
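One way to answer that question objectively is to compare the per-channel counters in runmqsc before and after the incident window. MSGS and BYTSSENT only move when a channel actually transmits, so if TO.QM1's counters kept climbing while TO.QM2's stalled, forwarding was in fact happening. A sketch, using the channel names assumed from this thread:

```
DIS CHSTATUS(TO.QM1) STATUS MSGS BYTSSENT LSTMSGTI
DIS CHSTATUS(TO.QM2) STATUS MSGS BYTSSENT LSTMSGTI
```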

Cheers
Kumar
msklie
Posted: Wed Jan 28, 2004 9:12 am

Newbie

Joined: 11 Jul 2002
Posts: 7

Sorry, I left out the OS. It's a Solaris 8 environment. I was watching CURDEPTH grow on the transmit queue, and by checking the MSGS count I verified that every other message was going out over the channel that was still RUNNING.
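If it happens again, browsing the transmit queue shows which channel each waiting message is bound to: messages on SYSTEM.CLUSTER.TRANSMIT.QUEUE carry the destination cluster-sender channel name in their CorrelId, so the amqsbcg browse sample makes it visible whether the stuck half were all bound to TO.QM2. A sketch, with the queue manager name assumed:

```
amqsbcg SYSTEM.CLUSTER.TRANSMIT.QUEUE QM3
```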
mqonnet
Posted: Wed Jan 28, 2004 9:23 am

Grand Master

Joined: 18 Feb 2002
Posts: 1114
Location: Boston, MA, USA

What does the error log say on the partial repository queue manager?

I would assume you have only one app that is putting to this cluster queue.

By the way, does this happen all the time, or did it only happen this once?

Check on the IBM web site for any updates or fixes in this area.

Cheers
Kumar
msklie
Posted: Wed Jan 28, 2004 9:51 am

Newbie

Joined: 11 Jul 2002
Posts: 7

-------------------------------------------------------------------------------
01/22/04 23:43:18
AMQ9558: Remote Channel is not currently available.

EXPLANATION:
The channel program ended because the channel 'TO.ZBLGGOPP' is not currently
available on the remote system. This could be because the channel is disabled
or that the remote system does not have sufficient resources to run a further
channel.
ACTION:
Check the remote system to ensure that the channel is available to run, and
retry the operation.
------------------------------------------------------------------
-------------------------------------------------------------------------------
01/23/04 16:40:37
AMQ9209: Connection to host 'blgaix01 (192.168.46.6)' closed.

EXPLANATION:
An error occurred receiving data from 'blgaix01 (192.168.46.6)' over TCP/IP.
The connection to the remote host has unexpectedly terminated.
ACTION:
Tell the systems administrator.
-------------------------------------------------------------------------------
01/23/04 16:40:37
AMQ9999: Channel program ended abnormally.

EXPLANATION:
Channel program 'TO.ZBLGGOPP' ended abnormally.
ACTION:
Look at previous error messages for channel program 'TO.ZBLGGOPP' in the error
files to determine the cause of the failure.
-------------------------------------------------------------------------------


This is the first time any issue has occurred, but then the cluster has only been running for one month. We know that the queue manager on the other end abruptly died (or so we were told).

Yes, it is one WebLogic app putting to this queue. We have since applied CSD 8, since the CSD 6 and 7 notes reference workflow issues, but I am still interested in why this happened given this configuration.