pauls
Posted: Sun Sep 20, 2009 1:43 pm    Post subject: MQ7 fails to start FDC error lpiRC_LOG_NOT_AVAILABLE
Novice
Joined: 20 Sep 2009  Posts: 11
Hi,
Our QA environment is RHEL 5.3 with WebSphere MQ 7 and Message Broker 6.1 configured in an HA cluster. Following an application deployment, the queue manager fails to start on either cluster node. The FDC summary is attached (couldn't post the full FDC on my first post).
I have checked the logs; they exist and the permissions appear correct. Any suggestions?
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Fri September 18 2009 15:21:06 EST |
| UTC Time :- 1253251266.004690 |
| UTC Time Offset :- 600 (EST) |
| Host Name :- srv-qs-mqq3 |
| Operating System :- Linux 2.6.18-028stab062.3 |
| PIDS :- 5724H7230 |
| LVLS :- 7.0.0.2 |
| Product Long Name :- WebSphere MQ for Linux (x86-64 platform) |
| Vendor :- IBM |
| Probe Id :- ZX000001 |
| Application Name :- MQM |
| Component :- ExecCtrlrMain |
| SCCS Info :- cmd/zmain/amqzxma0.c, 1.223.1.7 |
| Line Number :- 738 |
| Build Date :- Apr 24 2009 |
| CMVC level :- p700-002-090421 |
| Build Type :- IKAP - (Production) |
| UserID :- 00000500 (mqm) |
| Program Name :- amqzxma0 |
| Addressing mode :- 64-bit |
| Process :- 9411 |
| Process(Thread) :- 9411 |
| Thread :- 1 |
| ThreadingModel :- PosixThreads |
| QueueManager :- QAQSQM01 |
| ConnId(1) IPCC :- 2 |
| ConnId(2) QM :- 2 |
| ConnId(3) QM-P :- 2 |
| ConnId(4) App :- 2 |
| Last HQC :- 2.0.0-688768 |
| Last HSHMEMB :- 2.4.4-5040 |
| Major Errorcode :- xecF_E_UNEXPECTED_RC |
| Minor Errorcode :- lpiRC_LOG_NOT_AVAILABLE |
| Probe Type :- MSGAMQ6118 |
| Probe Severity :- 2 |
| Probe Description :- AMQ6118: An internal WebSphere MQ error has occurred |
| (7017) |
| FDCSequenceNumber :- 0 |
| Arith1 :- 28695 7017 |
| |
+-----------------------------------------------------------------------------+
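
For reference, this is roughly how I checked that the logs exist and that the permissions look right (assuming the default Linux data paths; the minor errorcode points at the transaction/restart logs rather than the error logs):

Code:
  # The log path the queue manager will try to open at start-up
  grep -A 6 '^Log:' /var/mqm/qmgrs/QAQSQM01/qm.ini

  # The transaction log files themselves; all should be owned by mqm:mqm
  ls -l /var/mqm/log/QAQSQM01/active/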
fjb_saper
Posted: Sun Sep 20, 2009 7:08 pm    Post subject:
Grand High Poobah
Joined: 18 Nov 2003  Posts: 20756  Location: LI, NY
Which logs did you check? The error logs or the restart logs?
_________________
MQ & Broker admin
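
For clarity, the two sets live in different places by default on Linux:

Code:
  # Error logs (readable AMQERRnn.LOG files, plus the *.FDC reports):
  ls /var/mqm/errors/                   # installation-wide
  ls /var/mqm/qmgrs/QAQSQM01/errors/    # per-queue-manager

  # Restart (transaction) logs - the ones lpiRC_LOG_NOT_AVAILABLE is about:
  ls /var/mqm/log/QAQSQM01/active/      # S0000000.LOG, S0000001.LOG, ...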
pauls
Posted: Sun Sep 20, 2009 7:26 pm    Post subject:
Novice
Joined: 20 Sep 2009  Posts: 11

[Post body not captured; it linked a document on checking the restart logs.]
fjb_saper
Posted: Sun Sep 20, 2009 9:09 pm    Post subject:
Grand High Poobah
Joined: 18 Nov 2003  Posts: 20756  Location: LI, NY
Good thing to check the restart logs. However, the link refers to MQ V5 (the embedded MQ in WAS V5)...
You are running V7.0.0.2 of WebSphere MQ.
Searching the IBM site, I found this: http://www.ibm.com/support/docview.wss?uid=swg21268419
Are both the primary and failover nodes of your cluster on the same version of MQ?
_________________
MQ & Broker admin
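
A quick way to compare is to run this on each node and diff the output:

Code:
  # Reports the version, CMVC level and build type of the installation
  dspmqver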
pauls
Posted: Sun Sep 20, 2009 10:09 pm    Post subject:
Novice
Joined: 20 Sep 2009  Posts: 11
Yes, both are on the same version of MQ with the same maintenance level. It's a custom HA solution, as the Red Hat nodes are virtualised under Virtuozzo and, despite claims on the website, it doesn't support RHCS. I am not on site today, but I just got a note from the tech services guys that they replaced the logs and were able to get the queue manager running again.
At this stage it looks like log corruption, probably related to the custom HA solution.
Thanks for the suggestions.
mvic
Posted: Mon Sep 21, 2009 9:03 am    Post subject:
Jedi
Joined: 09 Mar 2004  Posts: 2080
pauls wrote:
    At this stage it looks like log corruption, probably related to the custom HA solution.

That would be my conclusion too.
Check out the new features available in MQ 7.0.1.0 - multi-instance queue managers and client reconnect - these are intended to cater for this sort of scenario.
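
Roughly, a multi-instance setup looks like this (it needs 7.0.1 and shared network storage; the /MQHA mount point and paths below are just illustrative):

Code:
  # Both nodes: sanity-check the shared file system first
  amqmfsck /MQHA/logs

  # Node 1: create the queue manager with data and logs on the shared mount
  crtmqm -md /MQHA/qmgrs -ld /MQHA/logs QAQSQM01

  # Node 1: print the addmqinf command that defines the same queue
  # manager on the second node, and run it there
  dspmqinf -o command QAQSQM01

  # Node 1 then Node 2: start with -x; the second instance becomes the
  # standby, waiting on the file lock held by the active instance
  strmqm -x QAQSQM01

  # Either node: show which instance is active and which is standby
  dspmq -x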
pauls
Posted: Mon Sep 21, 2009 11:46 am    Post subject:
Novice
Joined: 20 Sep 2009  Posts: 11
Thanks for the suggestion. I did notice the feature in my searches today and am still trying to understand fully how it would apply to our setup. It may be worth checking with the lock command to ensure that only one node can start the queue manager. The plan is to have only one of the nodes running at any point in time; however, we do need to test uncontrolled parallel startup.
Back on the log issue: I spent a bit of time investigating and could still reproduce the log access problem with a new set of logs. I ended up recreating the third queue manager, and the log problem persisted. Strangely, the broker user could start two queue managers but not the third one, the Configuration Manager's. mqm could start the third queue manager, though, so it looks like a combination of permission and profile-setting problems that could have been exposed by the corruption. While this is the standby node, it did work for a while with manual and automated failover.
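
For the record, the permission checks I have been running are along these lines (mqbrk and CFGMGRQM below are stand-ins for our actual broker service user and the Configuration Manager's queue manager):

Code:
  # The user starting a queue manager must be in the mqm group
  id mqbrk

  # Compare ownership on the working and failing queue managers;
  # everything should be owned mqm:mqm with group read/write access
  ls -lR /var/mqm/log/CFGMGRQM/
  ls -lR /var/mqm/qmgrs/CFGMGRQM/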