Linux TCP tuning for WMQ
romankhar
PostPosted: Wed Mar 05, 2014 12:11 pm    Post subject: Linux TCP tuning for WMQ

Novice

Joined: 23 Jan 2014
Posts: 12

Folks,
I have looked at the WMQ manual and the performance reports, but there is no discussion of tuning TCP settings on RHEL.

There are plenty of articles on the web about TCP tuning on RHEL, but are there any best practices for tuning TCP on RHEL 6.5 specifically for WMQ 7.5 on 64-bit x86?
PaulClarke
PostPosted: Wed Mar 05, 2014 12:50 pm

Grand Master

Joined: 17 Nov 2005
Posts: 1002
Location: New Zealand

I'm not sure I follow why you would think the MQ manuals are a good place to look for TCP tuning. It perhaps depends on what type of tuning you are looking for. Generally speaking, you should not have to do anything to your TCP network for MQ to perform well on your system. If you have extreme requirements - for example, you are sending very large amounts of data and want to dedicate as much resource as possible to MQ - then you can set large TCP receive buffers using MQ tuning parameters.
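For example, something along these lines in qm.ini would do it. Treat it as a sketch rather than gospel: 'QM1' is just a placeholder queue manager name, and the RcvBuffSize/SndBuffSize attributes are the TCP stanza names as I remember them for WMQ 7.x on distributed platforms, so check them against the documentation for your exact level before using them.

Code:
# Sketch only: add (or extend an existing) TCP stanza in a queue manager's qm.ini.
# 'QM1' is a placeholder queue manager name; verify the attribute names against
# the WMQ 7.5 documentation for your platform.
cat >> /var/mqm/qmgrs/QM1/qm.ini <<'EOF'

TCP:
   SndBuffSize=262144
   RcvBuffSize=262144
EOF

# Restart the queue manager so the stanza is re-read.
endmqm -w QM1
strmqm QM1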

Perhaps you could start by telling us the problem you are having and the symptoms you see.

Cheers,
Paul.
_________________
Paul Clarke
MQGem Software
www.mqgem.com
SAFraser
PostPosted: Wed Mar 05, 2014 2:05 pm

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

Except, perhaps, TCP keep alive?
PaulClarke
PostPosted: Wed Mar 05, 2014 2:40 pm

Grand Master

Joined: 17 Nov 2005
Posts: 1002
Location: New Zealand

As I said, it depends on what kind of tuning romankhar is looking for. I would not regard switching on keepalive as 'tuning', though perhaps I am being blinded by the word 'Performance' in his question. We really need him to tell us what the issue is first.

P.
_________________
Paul Clarke
MQGem Software
www.mqgem.com
SAFraser
PostPosted: Wed Mar 05, 2014 6:45 pm

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

To clarify my point:

He said he looked through the manual and the performance reports. I took that to mean he is setting up MQ and wants to tune the OS optimally, not that he has a current problem.

If he intends to "switch on" TCP keepalive (I assume you are referring to the KeepAlive parameter in the TCP stanza of qm.ini), then he needs to look at these OS settings on Linux:
tcp_keepalive_time
tcp_keepalive_intvl
tcp_keepalive_probes

The system defaults will be far too long for this purpose - Linux waits two hours before sending the first probe - so they need to come down.
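Something like this is typical (illustrative values only, assuming the stock RHEL defaults of 7200/75/9; pick numbers that fit whatever idle timeouts your firewalls impose):

Code:
# Illustrative keepalive values only - not a recommendation.
# RHEL defaults: tcp_keepalive_time=7200, tcp_keepalive_intvl=75, tcp_keepalive_probes=9.
sysctl -w net.ipv4.tcp_keepalive_time=300     # idle seconds before the first probe
sysctl -w net.ipv4.tcp_keepalive_intvl=30     # seconds between unanswered probes
sysctl -w net.ipv4.tcp_keepalive_probes=5     # failed probes before the connection is dropped
# Add the same three keys to /etc/sysctl.conf to keep them across reboots.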

All that said, of course it's best if the OP tells us more about his need.
romankhar
PostPosted: Thu Mar 06, 2014 5:05 am    Post subject: TCP tuning

Novice

Joined: 23 Jan 2014
Posts: 12

I am currently doing performance testing of WMQ for non-persistent messaging, with message sizes of 256 bytes, 1K, 10K, 100K and 1MB.

I can't seem to drive the 8-core x86 machine to its limits: looking at iostat, I still have about 27% of CPU capacity and 26% of network capacity left on the 100K and 1MB tests, even with two queue managers running. I use JMSPerfHarness as the client.
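For reference, this is roughly how I watch utilization during a run (standard sysstat tools, each left running in its own terminal; adjust the intervals to taste):

Code:
# CPU utilization every 5 seconds (iostat -c is the CPU-only report)
iostat -c 5
# Per-core breakdown, useful for spotting a single hot core
mpstat -P ALL 5
# Per-interface network throughput every 5 seconds
sar -n DEV 5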

Perhaps this question is more appropriate for the Linux and VMware forums, since my client and server communicate over a private VMware ESX network using the VMXNET3 adapter. Enabling jumbo frames does not seem to make a difference. I would have thought VMXNET3 should give me more than 3.4 Gbit/sec (~430 MB/sec). I started writing this script for TCP tuning:

Code:
#!/bin/bash

#
# DESCRIPTION:
#    This script is to be used for Linux TCP tuning on RHEL.
#

# Some useful tips about error checking in bash found here: http://www.davidpashley.com/articles/writing-robust-shell-scripts/
# This prevents running the script if any of the variables have not been set
set -o nounset
# This automatically exits the script if any error occurs while running it
set -o errexit

ECHON=${ECHON-echo}
me=$(basename $0)

##############################################################################
# regexify
##############################################################################
# Convert a string to a regular expression that matches only the given string.
#
# Parameters
# 1 - The string to convert
##############################################################################
regexify() {
   # Won't work on HPUX
   echo $1 | sed -r 's/([][\.\-\+\$\^\\\?\*\{\}\(\)\:])/\\\1/g'
}

###############################################
# delAndAppend
###############################################
# Delete a line containing a REGEX from a file,
# then append a new line.
#
# Parameters
# 1 - The REGEX to search for and delete
# 2 - The line to append
# 3 - The file to edit
###############################################
delAndAppend() {
   echo "Updating entry in $3: $2"
   awk '{if($0!~/^[[:space:]]*'$1'/) print $0}' $3 > $3.new
   mv $3.new $3
   echo "$2" >> $3
}

###################################################
# backupFile
###################################################
# Copy the given file to a file with the same path,
# but a timestamp appended to the name.
#
# Parameters
# 1 - The name of the file to backup
###################################################
backupFile() {
   cp $1 $1.`date +%Y-%m-%d.%H%M`
}

#############################################
# updateSysctl
# Update values in the /etc/sysctl.conf file.
#
# Parameters
# 1 - The value to update
# 2 - The new value
#############################################
updateSysctl() {
   delAndAppend `regexify $1` "$1 = $2" /etc/sysctl.conf
}

#############################################
# UpdateSysctl
# Update values in the /etc/sysctl.conf file.
#
# Parameters
#    none
#############################################
UpdateSysctl() {
   # First we need to make a backup of existing kernel settings before we change anything
   backupFile /etc/sysctl.conf

   # Now can set some new values
   # System tuning (sysctl)
   echo "" >> /etc/sysctl.conf
   echo "# The following values were changed by $me [`date`]." >> /etc/sysctl.conf

   # The maximum number of file-handles that can be held concurrently
#   updateSysctl fs.file-max $FDMAX

   # The maximum and minimum port numbers that can be allocated to outgoing connections
   updateSysctl net.ipv4.ip_local_port_range '1024 65535'

   # From what I can gather, this is the maximum number of disjoint (non-contiguous),
   # sections of memory a single process can hold (i.e. through calls to malloc).
   # This doesn't mean that a process can have no more variables than this,
   # but performance may become degraded if this the number of variables exceeds this value,
   # as the OS has to search for memory space adjacent to an existing malloc.
   # This is my best interpretation of stuff written on the internet; it could be completely wrong!
   #   - Rowan (10/10/2013)
   updateSysctl vm.max_map_count 1966080

   # The maximum PID value. When the PID counter exceeds this, it wraps back to zero.
   updateSysctl kernel.pid_max 4194303

   # Tunes IPC semaphores. Values are:
   #  1 - The maximum number of semaphores per set
   #  2 - The system-wide maximum number of semaphores
   #  3 - The maximum number of operations that may be specified in a call to semop(2)
   #  4 - The system-wide maximum number of semaphore identifiers
   updateSysctl kernel.sem '1000 1024000 500 8192'

   # The maximum size (in bytes) of an IPC message queue
   updateSysctl kernel.msgmnb  131072

   # The maximum size (in bytes) of a single message on an IPC message queue
   updateSysctl kernel.msgmax  131072

   # The maximum number of IPC message queues
   updateSysctl kernel.msgmni  2048

   # The maximum number of shared memory segments that can be created
   updateSysctl kernel.shmmni  8192

   # The maximum number of pages of shared memory
   updateSysctl kernel.shmall  536870912

   # The maximum size of a single shared memory segment
   updateSysctl kernel.shmmax  137438953472

   # TCP keep alive setting
   updateSysctl net.ipv4.tcp_keepalive_time 300
      
   # TCP tuning options are taken from several different sources:
   # http://kaivanov.blogspot.com/2010/09/linux-tcp-tuning.html

   # To increase the TCP max buffer size settable using setsockopt():
   updateSysctl net.core.rmem_max 33554432
   updateSysctl net.core.wmem_max 33554432
   updateSysctl net.core.rmem_default 65536
   updateSysctl net.core.wmem_default 65536
   
   # Increase the Linux autotuning TCP buffer limits (min, default and max bytes):
   # set the max to 16MB for 1GE, and 32M or 54M for 10GE.
   # The values must be quoted so they reach updateSysctl as a single argument.
   updateSysctl net.ipv4.tcp_rmem '4096 87380 33554432'
   updateSysctl net.ipv4.tcp_wmem '4096 65536 33554432'

   # Window scaling, timestamps and SACK should already be at their default of 1;
   # tcp_no_metrics_save=1 stops TCP caching metrics from old connections.
   updateSysctl net.ipv4.tcp_window_scaling 1
   updateSysctl net.ipv4.tcp_timestamps 1
   updateSysctl net.ipv4.tcp_sack 1
   updateSysctl net.ipv4.tcp_no_metrics_save 1
      
   updateSysctl net.core.netdev_max_backlog 30000

   echo "" >> /etc/sysctl.conf

   # there is a bug in RHEL where bridge settings are set by default and they should not be, so we have to pass '-e' option here
   # read more: https://bugzilla.redhat.com/show_bug.cgi?id=639821
   sysctl -e -p
   sysctl -w net.ipv4.route.flush=1
}

#############################################
# MAIN BODY starts here
#############################################
echo ""
echo "------------------------------------------------------------------------------"
echo " This script will tune TCP parameters on your RHEL OS"
"------------------------------------------------------------------------------"
echo ""

UpdateSysctl

# Use JumboFrames for TCP
ip link set eth3 mtu 9000

# Test that jumbo frames work end to end: prohibit fragmentation (-M do) and send
# the largest payload that fits in a 9000-byte MTU (9000 - 20 IP - 8 ICMP = 8972)
ping -M do -s 8972 -c 4 mqhost


Last edited by romankhar on Thu Mar 06, 2014 8:19 am; edited 1 time in total
PaulClarke
PostPosted: Thu Mar 06, 2014 5:16 am

Grand Master

Joined: 17 Nov 2005
Posts: 1002
Location: New Zealand

Yes, somehow I thought you might be talking about tuning for performance.

What do you mean by 'network maxed out'? If your network really is maxed out - i.e. you are driving it at or very near the available bandwidth - then I struggle to see how you could go faster than that. There is, after all, only so much stuff you can push through a pipe.

From an MQ point of view, as I said before, perhaps the best way to get better throughput is to ensure you have fairly large TCP receive buffer sizes.

Cheers,
Paul.
_________________
Paul Clarke
MQGem Software
www.mqgem.com
romankhar
PostPosted: Thu Mar 06, 2014 8:25 am

Novice

Joined: 23 Jan 2014
Posts: 12

I have corrected my previous post - I should have been clear that I am using 73% of all 8 cores and 74% of the network capacity. I am not maxing out either of these. My goal is to drive the system to its limits to get the best performance. So far I am not sure what the bottleneck is, or why I cannot get to 100% of either CPU or network.

So far I get 1565 msgs/sec (100KB messages) on 20 queues using a requestor/responder pattern with a remote requestor and a local responder. That gives me 313 MB/sec (1565 round trips/sec x 2 messages x 100 KB). This appears to be around 74% of the total network capacity I have between client and server - both on the same ESX host, connected by VMXNET3.

PaulClarke wrote:
Yes, somehow I thought you might be talking about tuning for performance.

What do you mean by 'network maxed out'? If your network really is maxed out - i.e. you are driving it at or very near the available bandwidth - then I struggle to see how you could go faster than that. There is, after all, only so much stuff you can push through a pipe.

From an MQ point of view, as I said before, perhaps the best way to get better throughput is to ensure you have fairly large TCP receive buffer sizes.

Cheers,
Paul.
PaulClarke
PostPosted: Thu Mar 06, 2014 10:09 am

Grand Master

Joined: 17 Nov 2005
Posts: 1002
Location: New Zealand

Ok, well that makes a bit more sense. I suspect you may be right to talk to some Linux performance people; there could well be tools you can use to analyse your data. The key to driving more data into MQ is often to ensure you have a decent level of parallelism. You say you have 20 queues, so you should be getting a fair amount of that, but exactly what is optimal is hard to guess and may be better determined by trial and error.

Of course, one of the questions would be 'How fast do you need it to be?' 313 MB/sec seems pretty fast to me; it would be a pretty big workload to sustain over 300 MB/sec continually.

Cheers,
Paul.
_________________
Paul Clarke
MQGem Software
www.mqgem.com
SAFraser
PostPosted: Thu Mar 06, 2014 12:56 pm

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

Paul 1 - Shirley 0.
Tibor
PostPosted: Sat Mar 08, 2014 1:48 am

Grand Master

Joined: 20 May 2001
Posts: 1033
Location: Hungary

Have you already measured the raw network performance? You should establish that first.
romankhar
PostPosted: Sat Mar 08, 2014 5:47 am

Novice

Joined: 23 Jan 2014
Posts: 12

I did measure the network performance with iperf and it is a lot less than I expected - I am trying to figure it out on another forum: http://serverfault.com/questions/580521/vmxnet3-performance-on-linux-on-esx-5-0
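For anyone repeating this, a basic iperf run looks roughly like this (iperf2 syntax; 'mqhost' is the same placeholder host name used in my script above):

Code:
# On the receiving side (the 'mqhost' box):
iperf -s

# On the sending side: 4 parallel streams for 30 seconds
iperf -c mqhost -P 4 -t 30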

For now there is nothing I can do to fix network performance, so while I am waiting for new hardware, I will work on persistent messaging tests as it does not need as much bandwidth.

Tibor wrote:
Have you already measured the raw network performance? You should establish that first.
bruce2359
PostPosted: Sat Mar 08, 2014 6:56 am

Poobah

Joined: 05 Jan 2008
Posts: 9394
Location: US: west coast, almost. Otherwise, enroute.

romankhar wrote:
I will work on persistent messaging tests as it does not need as much bandwidth.

Really? A 100Meg persistent message uses less bandwidth than a 100Meg non-persistent message?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
fjb_saper
PostPosted: Sat Mar 08, 2014 7:58 am

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20696
Location: LI,NY

bruce2359 wrote:
romankhar wrote:
I will work on persistent messaging tests as it does not need as much bandwidth.

Really? A 100Meg persistent message uses less bandwidth than a 100Meg non-persistent message?

What I believe he meant is that with persistent messages you introduce some disk-related delays, so you push fewer messages in the same time frame and therefore use slightly less bandwidth (number of messages over a given interval)...
_________________
MQ & Broker admin
Michael Dag
PostPosted: Sat Mar 08, 2014 9:07 am

Jedi Knight

Joined: 13 Jun 2002
Posts: 2602
Location: The Netherlands (Amsterdam)

It would be interesting to see if you could package some of these tests to run on different hardware and different setups, so that results could be compared...
_________________
Michael



MQSystems Facebook page