Multi-Instance Queue Manager - Architectural Gotchas
Infraguy
Posted: Mon Jun 18, 2012 10:00 am Post subject: Multi-Instance Queue Manager - Architectural Gotchas
The setup at my company may not be the same as other environments, but I am sure there are similarities if you are using:
MQ 7.0.1 - Multi-Instance Queue Managers (MIQM)
Linux (RHEL)
NFS v4 (NetApp)
I thought I would share some of the details and lessons learned.
Over the course of several months, we've been working with the three main vendors on settings to optimize file system performance (reads, writes, access) on both the NFS client and server side.
These variables shouldn't be changed on a whim! However, the out-of-the-box configurations for many of the infrastructure components in our messaging stack were under-tuned, so identifying the 'hidden' parameters and understanding the impact of changing their values is worth sharing. If you have different infrastructure components, the variables may be named differently!
Had this information been here, or anywhere on the interwebz, some time could have been saved.
On the NFS Server: TCP, NFS, and ATIME
The TCP receive window needs to be as high as possible on busy systems
nfs.tcp.recvwindowsize <value>
NFS Data Flow Control
nfs.ifc.rcv.high <value>
nfs.ifc.rcv.low <value>
ATIME disabled (example commands for all three server-side settings are sketched after this list)
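As a rough sketch, assuming the NFS server is a NetApp filer running Data ONTAP 7-mode (our setup; your platform and option names may differ), these would be set from the filer console along these lines:
options nfs.tcp.recvwindowsize <value>
options nfs.ifc.rcv.high <value>
options nfs.ifc.rcv.low <value>
vol options <mq_volume> no_atime_update on
The volume name and values are placeholders; work with your storage vendor to size them for your workload.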
For the Linux (RHEL) NFS client:
idmapd logging - extremely helpful for NFSv4
Set the Verbosity value in /etc/idmapd.conf to "-1" (minus one)
This stops the idmapd process, which is required for NFSv4, from waiting on syslog to finish writing entries before it responds to the NFS server's ID-mapping (GET ID) requests. A config sketch follows below.
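For reference, a minimal sketch of the relevant stanza in /etc/idmapd.conf on the RHEL client (the Domain value is a placeholder for your own NFSv4 domain name):
[General]
Verbosity = -1
Domain = example.com
After changing it, restart the ID-mapping service (service rpcidmapd restart on RHEL 5/6) so the new verbosity takes effect.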
RPC slot table tuning
The default RPC slot table size is 16 in RHEL. Setting it to the maximum of 128 allows more outstanding requests per connection. The setting takes effect after the file system has been remounted.
We found 16 RPC slots far too low for the amount of chatter when a QMGR spikes with traffic; a sketch of the change follows.
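On RHEL this is the sunrpc module's tcp_slot_table_entries parameter. A sketch, assuming the MQ file systems are mounted under /MQHA (a placeholder mount point; the modprobe.d file name is also an arbitrary choice):
sysctl -w sunrpc.tcp_slot_table_entries=128
echo "options sunrpc tcp_slot_table_entries=128" > /etc/modprobe.d/sunrpc.conf
umount /MQHA && mount /MQHA
The sysctl changes it at runtime, the modprobe.d entry makes it persistent across reboots, and the remount is needed because, as noted above, the setting only takes effect when the file system is (re)mounted.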
One final piece: qtrees
Qtrees on the same volume, mounted from the same Linux server, will have the same file system ID (FSID). This forces all of the RPC tasks through the same RPC connection, which has to be managed with care (see RPC slot table tuning above). If too many data and log tasks are funneled through the same RPC connection, long file access waits will occur. A quick way to check for this is sketched below.
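One way to see whether two mounts are being treated as the same file system is to compare their FSIDs from the client; the mount points below are placeholders for wherever your queue manager data and logs live:
stat -f -c 'fsid=%i  %n' /MQHA/QM1/data /MQHA/QM1/log
If both lines report the same fsid, the data and log traffic is sharing one RPC connection, which is exactly when the RPC slot table tuning above matters most.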
Not setting these parameters correctly can result in MIQM failovers. That can be painful for applications that aren't built to handle the nuances of MIQM (automatic client reconnect, for example); a minimal reconnect sketch follows.
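For completeness, a minimal sketch of enabling automatic reconnection for a V7 client application, assuming an mqclient.ini on the client and a client channel listing both queue manager instances (the channel name, host names, port, and queue manager name are placeholders):
In mqclient.ini:
CHANNELS:
   DefRecon=YES
And the client-connection channel defined with both instances in the CONNAME:
DEFINE CHANNEL(APP.SVRCONN) CHLTYPE(CLNTCONN) CONNAME('hosta(1414),hostb(1414)') QMNAME(QM1)
With DefRecon=YES the client will try to reconnect automatically when the queue manager fails over to the standby instance, though the application still has to tolerate the delay while reconnection is in progress.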