Socket files in /tmp directory
bduncan (Padawan)
Posted: Thu Jul 12, 2001 4:01 pm
Joined: 11 Apr 2001, Posts: 1554, Location: Silicon Valley
I have noticed that on Unix systems running MQSeries, there are files in the /tmp directory with names of the format "MQSeries.XXXXX", where XXXXX is some number. When I issue the "file" command against one of these files, the system reports that they are socket files. The number appended to the filename seems to correspond to the PID of one of the running channel programs: for instance, I have 8 running channels, and I see 8 of these files in /tmp.

I have been trying to find out more about these files and their function (beyond the obvious) and have been unable to track anything down, except for the following information from APAR SA92142 for AS/400:
DESCRIPTION OF PROBLEM FIXED FOR APAR SA92142 :
-----------------------------------------------
The MQSeries Listener process creates and uses temporary
socket files in IFS directory '/tmp'. These files have
Size = *SOCKET, and names of the format 'MQSeries.nnnnn'.
.
The problem is that no housekeeping is performed to delete
these files after they are no longer required. More and more
files accumulate as new Listener jobs are started. Over a
long period of time, there is potential for depleted resource
problems.
CORRECTION FOR APAR SA92142 :
-----------------------------
All temporary socket files will be deleted during a full
quiesce - ie: ENDMQM with option ENDCCTJOB(*YES).
* Additional fixes are included for the defects listed below :-
* --------------------------------------------------------------
*
* Poor exception handling when MQSeries profile passwords expire
* ref: 53357
* --------------------------------------------------------------
* Version 5.2 ships user profiles QMQM and QMQMADM with password
* expiration set to *NOMAX. However, user profiles migrated from
* version 5.1 may use the system value *SYSVAL for password
* expiration. This could cause MQSeries to fail without logging
* error data which clearly indicates the cause of such a problem
* With this PTF applied, MQSeries checks explicitly for profiles
* in a disable state or with expired passwords, and saves
* message AMQ6666 in the error log.
CIRCUMVENTION FOR APAR SA92142 :
--------------------------------
These temporary socket files can be deleted at any time when
there are no MQSeries Listener jobs active.
For example, after a complete quiesce of all Queue Managers.
Does anyone know anything about these socket files beyond what I have been able to discover? I am in the process of watching some of these files over time on a Solaris system to determine if they are being disposed of properly (which I assume is the case) or if they suffer from the same problem on AS/400 that was fixed by the APAR... Any ideas?
_________________
Brandon Duncan
IBM Certified MQSeries Specialist
MQSeries.net forum moderator
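The observation above is easy to reproduce with a short script. This is a minimal sketch, assuming a Unix system and Python 3; the `MQSeries.<pid>` filename pattern is taken from the post, and `find_mq_sockets` is an illustrative helper name, not an MQ tool.

```python
import os
import re
import stat

def find_mq_sockets(tmpdir="/tmp"):
    """Return (path, pid) pairs for socket files named MQSeries.<number>.

    The pattern and the PID interpretation come from the observation
    above; only entries whose mode reports S_ISSOCK are included.
    """
    found = []
    for name in os.listdir(tmpdir):
        m = re.fullmatch(r"MQSeries\.(\d+)", name)
        if not m:
            continue
        path = os.path.join(tmpdir, name)
        try:
            mode = os.stat(path).st_mode
        except OSError:
            continue  # file vanished between listdir() and stat()
        if stat.S_ISSOCK(mode):
            found.append((path, int(m.group(1))))
    return found

if __name__ == "__main__":
    for path, pid in find_mq_sockets():
        print(f"{path}: socket, apparent owner PID {pid}")
```

Cross-referencing the returned PIDs against `ps -ef` output should confirm (or refute) the channel-program correspondence described above.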
kolban (Grand Master)
Posted: Thu Jul 12, 2001 4:38 pm
Joined: 22 May 2001, Posts: 1072, Location: Fort Worth, TX, USA
Brandon, I have no familiarity with these files and, to be honest, never knew they were there. I would use the "lsof" utility to examine their nature; see
http://www.transarc.ibm.com/Support/dce/debug_tools/lsof.html
This tool will tell you which process (if any) has them open, which should also help determine when they can safely be removed.
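For the curious, on Linux the essence of what lsof reports for a unix-domain socket can be sketched by hand: /proc/net/unix maps the socket path to an inode, and each process's /proc/&lt;pid&gt;/fd entries for open sockets read as `socket:[inode]`. A rough, Linux-only sketch (`unix_socket_holders` is an illustrative name, not an MQ or lsof command):

```python
import os

def unix_socket_holders(path):
    """Find PIDs holding the unix-domain socket at `path` open (Linux only).

    Mimics what lsof does: /proc/net/unix maps the socket path to an
    inode; an open socket's /proc/<pid>/fd link reads 'socket:[inode]'.
    """
    inodes = set()
    with open("/proc/net/unix") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # columns: Num RefCount Protocol Flags Type St Inode Path
            if len(fields) >= 8 and fields[7] == path:
                inodes.add(fields[6])
    targets = {f"socket:[{i}]" for i in inodes}
    holders = []
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        fd_dir = f"/proc/{pid}/fd"
        try:
            links = os.listdir(fd_dir)
        except OSError:
            continue  # no permission, or the process exited
        for fd in links:
            try:
                if os.readlink(os.path.join(fd_dir, fd)) in targets:
                    holders.append(int(pid))
                    break
            except OSError:
                continue
    return holders
```

If this returns an empty list for a given /tmp/MQSeries.nnnnn file, nothing has it open and it is a candidate for cleanup.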
bduncan (Padawan)
Posted: Fri Jul 13, 2001 8:10 am
Joined: 11 Apr 2001, Posts: 1554, Location: Silicon Valley
Well, apparently they are disposed of by the channel initiator when the channel that was using the socket is ended. According to the APAR, if this doesn't happen, you should be able to simply delete the file, assuming the channel is stopped. It points out that the only way to be certain they are safe to delete is to end the queue manager before removing them. I have personally seen these files on Linux and Solaris, so I assume all Unix flavors of MQ behave this way. Sometime today I should be able to confirm my suspicion from yesterday that these files are deleted automatically. Will keep everyone updated...
bduncan (Padawan)
Posted: Fri Jul 13, 2001 9:17 am
Joined: 11 Apr 2001, Posts: 1554, Location: Silicon Valley
So I have been able to confirm that during normal operation of the queue manager, these files are constantly being deleted and created as necessary. I had started 8 channels last week on a single queue manager, and noticed 8 socket files on that system yesterday. The dates on these files were yesterday, as opposed to last week, which would have made more sense, since that was when I started the channels. My only theory was that the socket files were being overwritten periodically. Sure enough, when I checked today, the socket files had creation dates of today...
Charro (Newbie)
Posted: Fri Apr 18, 2003 8:41 am
Post subject: amqrmppa does not die....
Joined: 09 Apr 2003, Posts: 1
With regard to the local socket files /tmp/MQSeries.pid#: if there is a cleanup job that deletes "old" files from your /tmp filesystem (which any good admin will set up...), it will destroy any local socket files in /tmp old enough to meet the cleanup criteria, whether or not they are still in use.
The result is that if you then quiesce the queue manager (endmqm), any running amqrmppa processes WILL NOT DIE! I have seen this behavior on AIX, and will test it on Linux.
Any thoughts?
Personally, I question placing these files in /tmp...
gmabrito (Apprentice)
Posted: Sat Mar 05, 2005 2:29 am
Joined: 19 Mar 2002, Posts: 35
What does it mean if these temp files are always empty? I am trying to track down a recurring FDC that says AMQ9213: A communications error for bind occurred. We are on 5.3 CSD08 and regularly get these FDCs every couple of hours. We are using amqcrsta with inetd instead of runmqlsr, and we have lots of clients connecting.
fjb_saper (Grand High Poobah)
Posted: Sat Mar 05, 2005 6:32 am
Joined: 18 Nov 2003, Posts: 20756, Location: LI, NY
In 5.3 you should be using runmqlsr, not inetd. There have been some changes versus 5.2 and the performance is much better. There is an IBM paper out there about the changes; I just don't have the URL at hand.
We do not touch these temp files and have had no problem communicating with our queue managers. If you want to be sure you can delete one of these files, check whether the relevant PID still exists on your system. You should not delete the file as long as that process has not died.
Enjoy
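The check described above can be sketched as follows: parse the PID out of the filename and probe it with signal 0, which delivers nothing and only tests for existence. The helper names are illustrative, not MQ commands.

```python
import os

def pid_alive(pid):
    """True if a process with this PID currently exists.

    os.kill(pid, 0) sends no signal; it only checks existence.
    PermissionError means the process exists but belongs to
    another user (e.g. the mqm user), so it counts as alive.
    """
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True
    return True

def safe_to_delete(socket_path):
    """A /tmp/MQSeries.<pid> file is only a deletion candidate
    when the PID embedded in its name no longer exists."""
    pid = int(socket_path.rsplit(".", 1)[1])
    return not pid_alive(pid)
```

This also addresses the hazard Charro raised earlier: an age-based /tmp cleanup job has no such liveness check and will happily remove sockets that are still in use.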
PeterPotkay (Poobah)
Posted: Sat Mar 05, 2005 7:59 am
Joined: 15 May 2001, Posts: 7722
fjb_saper wrote:
In 5.3 you should be using runmqlsr and not via inetd. There have been some changes vs 5.2 and the performance is much better. There is an ibm paper out there about the changes just don't have the url at hand.
http://www.developer.ibm.com/tech/faq/results/0,1322,1%253A401%253A416%253A148%253AGeneral,00.html#q148
Quote:
In WebSphere MQ 5.3, why is runmqlsr now the recommended listener over inetd?
A: In MQSeries 5.2 and previous releases, runmqlsr ran each inbound connection as a new thread within itself. If runmqlsr ran out of resources (memory, threads, file descriptors), then it would not accept any new connections. This massively threaded approach worked well on systems with a limited number of channels, but on very busy systems it was necessary to set up multiple listeners and balance connections across them.
The inetd daemon starts a new amqcrsta process for each inbound connection. There is no chance an amqcrsta responsible for only one channel will run out of resources, so even the busiest of queue managers requires only a single port in inetd. However, this massively unthreaded approach means that busy systems may have hundreds of amqcrsta processes, forcing administrators to increase maxuproc. Inetd has no idea when the queue manager is inactive, so it will start amqcrsta processes even when the queue manager is shut down.
WebSphere MQ 5.3 removes the listener scalability problem once and for all. Rather than running each inbound connection as a thread within itself, runmqlsr now passes connections to one of the amqrmppa channel pooling processes. These amqrmppa's are threaded, but not massively so. This means they do not exhaust per-process resources or force administrators to increase maxuproc. The listener will start new amqrmppa processes as needed, so a single listener can now handle an unbounded number of connections. The listener is aware of the queue manager's status at all times, so it is also very quick to deny connections when the queue manager is down.
I read somewhere that each amqrmppa can have 60 or 64 threads. Once you have more than that many channels, you see another amqrmppa process started. I am not 100% sure of these numbers, though; Jason or Nigel will hopefully correct me.
_________________
Peter Potkay
Keep Calm and MQ On
JT (Padawan)
Posted: Sat Mar 05, 2005 9:20 am
Joined: 27 Mar 2003, Posts: 1564, Location: Hartford, CT.
Nigelg (Grand Master)
Posted: Mon Mar 07, 2005 12:45 am
Joined: 02 Aug 2004, Posts: 1046
Peter, you have the numbers right, except for HPUX.
On AIX (and Solaris, Linux) each amqrmppa process will have up to 64 threads, running all types of channel with MCATYPE(THREAD), i.e. SDR, RCVR, SVRCONN, etc. If you ever get to 100 amqrmppa processes (6400 threads), each amqrmppa will then ramp up its number of threads to 100 per process. 64 is the OPTIMUM number, 100 the MAXIMUM number.
If you are using pipelining, then each SDR (or RCVR) will use 2 threads in the process.
On HPUX this is reduced to 30 threads per amqrmppa, because of the low default limit (64) on the number of threads allowed per process.
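These numbers imply a rough rule of thumb for how many amqrmppa processes to expect for a given channel count. The sketch below only encodes the figures stated in this thread (64 threads per process as the optimum, a cap of 100 processes, ramp-up to 100 threads each beyond that, and 2 threads per pipelined channel); the function name is illustrative, and real behavior may differ by platform and service level.

```python
import math

def expected_amqrmppa(channels, pipelining=False,
                      optimum=64, pool_cap=100):
    """Estimate the number of amqrmppa channel-pooling processes.

    Encodes the rule described above: roughly 64 threads per process
    until 100 processes exist (6400 threads), after which the pool
    count stays at 100 and per-process threads ramp up toward 100.
    Pipelined SDR/RCVR channels use 2 threads each. For HPUX, pass
    optimum=30 per the note above.
    """
    threads = channels * (2 if pipelining else 1)
    if threads <= optimum * pool_cap:
        return max(1, math.ceil(threads / optimum))
    # beyond optimum * pool_cap threads, no new processes are started;
    # existing ones grow toward the per-process maximum instead
    return pool_cap
```

For example, 130 non-pipelined channels would land in 3 pooling processes, while 64 pipelined sender channels (128 threads) would need 2.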