MQSeries.net Forum Index » IBM MQ Installation/Configuration Support » Performance Issues with WebSphere MQ 7

Volvic
PostPosted: Tue Mar 08, 2011 2:13 am    Post subject: Performance Issues with WebSphere MQ 7

Apprentice

Joined: 14 Oct 2009
Posts: 30

Hello there WebSphere MQ professionals,

I am posting today because we are having trouble achieving the message throughput that, in my opinion, we should be able to get out of our servers. Our test is fairly simple and runs against a single queue manager: we start 30/60/90 WebSphere MQ clients simultaneously (SupportPac MA0T) to write 30000/60000/90000 persistent messages (20 kB each) to a local queue, i.e. 1000 messages per client. I would expect about 1500 messages/second written to the queue. Unfortunately we reach at most 450 messages per second, which is very poor.

90000 msgs written / 200 seconds (for all clients) = 450 msgs / second
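For clarity, the arithmetic behind those numbers (a quick sanity check in Python; the figures are the ones from the test above):

```python
# Sanity check of the throughput figures from the 90-client run.
clients = 90
msgs_per_client = 1000
total_msgs = clients * msgs_per_client      # 90000 messages
elapsed_s = 200                             # seconds until all clients finished

print(total_msgs / elapsed_s)               # measured rate -> 450.0 msgs/second

target_rate = 1500
print(total_msgs / target_rate)             # run time we hoped for -> 60.0 seconds
```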

The MQ and client configuration is as follows.

Server:
Code:

HP DL380G7
1st Processor HP E5640 2.66GHz QC Kit
2nd Processor HP E5640 2.66GHz QC Kit
16 GB RAM
SAN Storage with /MQHA/<Qmgr>/data and /MQHA/<Qmgr>/log (both on same SAN disk)


OS:
Code:
Red Hat Enterprise Linux Server release 5.5 (Tikanga)


Kernel parameters:
Code:

# sysctl parameters for mq
kernel.msgmni = 1024
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 500 256000 250 1024
fs.file-max = 32768
net.ipv4.tcp_keepalive_time = 300


When running IBM's mqconfig script to check the configuration, we get:
Code:

mqconfig: Analyzing Red Hat Enterprise Linux Server release 5.5 (Tikanga)
          settings for WebSphere MQ 7.0

System V Semaphores
  semmsl (sem:1)      500 semaphores                     IBM:500           OK
  semmns (sem:2)      5012 of 256000 semaphores  (1%)    IBM:256000        OK
  semopm (sem:3)      250 operations                     IBM:250           OK
  semmni (sem:4)      48 of 1024 sets            (4%)    IBM:1024          OK

System V Shared Memory
  shmmax              68719476736 bytes                  IBM:268435456     OK
  shmmni              47 of 4096 sets            (1%)    IBM:4096          OK
  shmall              33182 of 2097152 pages     (1%)    IBM:2097152       OK

Other Settings
  file-max            32768 files                        IBM:32768         OK
  tcp_keepalive_time  300 seconds                        IBM:300           OK


Network:
Currently 100 Mbit, but we barely reach 40 Mbit even when 90 clients are running simultaneously, each putting 1000 messages of 20 kB.
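As a sanity check on the link itself (a rough sketch, assuming 20 kB = 20 × 1024 bytes of payload and ignoring MQ headers and TCP overhead), the 1500 msgs/second target would not even fit through a 100 Mbit network:

```python
# Back-of-the-envelope wire bandwidth, payload only (no MQ/TCP overhead).
msg_bytes = 20 * 1024                        # 20 kB per message (binary kB assumed)

measured_mbit = 450 * msg_bytes * 8 / 1e6    # at the observed 450 msgs/second
target_mbit = 1500 * msg_bytes * 8 / 1e6     # at the hoped-for 1500 msgs/second
print(round(measured_mbit, 1), round(target_mbit, 1))   # -> 73.7 245.8
```

So payload alone at the target rate is roughly 2.5 times the link capacity; the network would need upgrading before 1500 msgs/second of 20 kB messages is achievable, whatever the disks do.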

WebSphere MQ qm.ini:
Code:

ExitPath:
   ExitsDefaultPath=/var/mqm/exits/
   ExitsDefaultPath64=/var/mqm/exits64/
Log:
   LogPrimaryFiles=30
   LogSecondaryFiles=20
   LogFilePages=16384
   LogType=CIRCULAR
   LogBufferPages=4096
   LogPath=/MQHA/<Qmgr>/log/<Qmgr>/
   LogWriteIntegrity=SingleWrite
Service:
   Name=AuthorizationService
   EntryPoints=13
ServiceComponent:
   Service=AuthorizationService
   Name=MQSeries.UNIX.auth.service
   Module=/opt/mqm/lib64/amqzfu
   ComponentDataSize=0
QMErrorLog:
   ErrorLogSize=10485760
   ExcludeMessage=7234
   SuppressMessage=9001,9002,9202
   SuppressInterval=30
CHANNELS:
   MaxChannels=3000
   MaxActiveChannels=2000
   MQIBindType=FASTPATH
TCP:
   KeepAlive=Yes
TuningParameters:
   DefaultQBufferSize=1048576
   DefaultPQBufferSize=1048576

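For reference, the sizes these Log stanza values imply (a sketch assuming MQ's usual 4 KB log page; the unit-of-work figure comes from the 1000 × 20 kB puts per client described above):

```python
# Sizes implied by the qm.ini Log: stanza (4 KB log pages assumed).
page = 4096
log_file_bytes = 16384 * page            # LogFilePages    -> 64 MB per log extent
log_buffer_bytes = 4096 * page           # LogBufferPages  -> 16 MB log buffer
primary_bytes = 30 * log_file_bytes      # LogPrimaryFiles -> ~1.9 GB of primary log

uow_bytes = 1000 * 20 * 1024             # one client committing 1000 x 20 kB puts
print(uow_bytes > log_buffer_bytes)      # -> True: one commit exceeds the log buffer
```

If each client really commits 1000 messages in a single unit of work, that is roughly 20 MB per commit against a 16 MB log buffer, so the logger has to flush mid-transaction.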

Environment variables:
Code:

MQ_CONNECT_TYPE=FASTPATH


Channel configuration:
Code:

AMQ8414: Display Channel details.
   CHANNEL(TEST)                       CHLTYPE(SVRCONN)
   ALTDATE(2011-03-04)                     ALTTIME(23.25.42)
   COMPHDR(NONE)                           COMPMSG(NONE)
   DESCR( )                                HBINT(300)
   KAINT(AUTO)                             MAXINST(999999999)
   MAXINSTC(999999999)                     MAXMSGL(4194304)
   MCAUSER( )                              MONCHL(QMGR)
   RCVDATA( )                              RCVEXIT( )
   SCYDATA( )                              SCYEXIT( )
   SENDDATA( )                             SENDEXIT( )
   SHARECNV(0)                             SSLCAUTH(REQUIRED)
   SSLCIPH( )                              SSLPEER( )
   TRPTYPE(TCP)


Testqueue:
Code:

AMQ8409: Display Queue details.
   QUEUE(IN)                           TYPE(QLOCAL)
   ACCTQ(QMGR)                             ALTDATE(2011-03-07)
   ALTTIME(17.24.30)                       BOQNAME( )
   BOTHRESH(0)                             CLUSNL( )
   CLUSTER( )                              CLWLPRTY(0)
   CLWLRANK(0)                             CLWLUSEQ(QMGR)
   CRDATE(2011-03-04)                      CRTIME(11.23.42)
   CURDEPTH(0)                             DEFBIND(OPEN)
   DEFPRTY(0)                              DEFPSIST(YES)
   DEFPRESP(SYNC)                          DEFREADA(NO)
   DEFSOPT(SHARED)                         DEFTYPE(PREDEFINED)
   DESCR( )                                DISTL(NO)
   GET(ENABLED)                            HARDENBO
   INITQ( )                                IPPROCS(0)
   MAXDEPTH(500000)                        MAXMSGL(4194304)
   MONQ(QMGR)                              MSGDLVSQ(PRIORITY)
   NOTRIGGER                               NPMCLASS(NORMAL)
   OPPROCS(0)                              PROCESS( )
   PUT(ENABLED)                            PROPCTL(COMPAT)
   QDEPTHHI(80)                            QDEPTHLO(20)
   QDPHIEV(DISABLED)                       QDPLOEV(DISABLED)
   QDPMAXEV(ENABLED)                       QSVCIEV(NONE)
   QSVCINT(999999999)                      RETINTVL(999999999)
   SCOPE(QMGR)                             SHARE
   STATQ(QMGR)                             TRIGDATA( )
   TRIGDPTH(1)                             TRIGMPRI(0)
   TRIGTYPE(FIRST)                         USAGE(NORMAL)


MA0T script configuration for test clients:
Code:

<?xml version="1.0"?>
<MsgTest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

xsi:noNamespaceSchemaLocation="C:\MsgTest\Schema\MsgTest.xsd">
    <Control>
        <Connection>
            <QMgr><QMgr></QMgr>
            <Channel><Channel></Channel>
            <Host><Host></Host>
            <Port>1414</Port>
        </Connection>
        <TestReport>
            <File>%ScriptName%-%1%.rpt</File>
            <Dir>c:\MsgTest\TestScripts\Reports</Dir>
        </TestReport>
    </Control>
    <!-- ================================================================================ -->
    <!-- Test MessageRate BURST mode at bursts of 10 messages a second. BURST mode means  -->
    <!-- put the messages onto the queue as fast as possible until the value calculated   -->
    <!-- from the From="" and To="" parameters is reached, in this case 10 as both        -->
    <!-- parameters are the same.                                                          -->
    <!-- ================================================================================ -->
    <Test Name="#1 Put Messages">
        <GetFile>
      <Dir>c:\MsgTest\</Dir>
      <File>Testmessage-%1%.txt</File>
      <Buffer>StressTest-%1%</Buffer>
   </GetFile>
        <!-- First put contains an implicit MQOPEN so get it done -->
        <!-- for IntervalStart/IntervalEnd measurements           -->
        <PutMsg>
   <Buffer>StressTest-%1%</Buffer>
            <Q>IN</Q>
       <IntervalInMsgId Name="Put"/>
            <Syncpoint/> 
        </PutMsg>
        <!-- Put a series of messages using BURST mode -->
   <IntervalStart Name="PutMsg"/>
        <For Name="BigSix" From="1" To="1000" Incr="1">
            <PutMsg>
      <Buffer>StressTest-%1%</Buffer>
                <Q>IN</Q>
        <Syncpoint/>   
            </PutMsg>
        </For>
   <Commit/>
   <IntervalEnd Name="PutMsg"/>
    </Test>
</MsgTest>


Any ideas on this? What should our next steps be? Or are we perhaps testing the wrong way?
_________________
Volvic


Last edited by Volvic on Tue Mar 08, 2011 1:01 pm; edited 2 times in total
zpat
PostPosted: Tue Mar 08, 2011 2:28 am

Jedi Council

Joined: 19 May 2001
Posts: 5866
Location: UK

450 persistent messages a second seems fairly decent to me. It's likely to be limited by the speed of your disk subsystem, fast write-caching and that sort of thing.

Have you read the IBM performance report support pac and compared your results to Tim Dunn's? Tim might also offer some advice if you contacted him.
Volvic
PostPosted: Tue Mar 08, 2011 3:10 am


zpat wrote:
450 persistent messages a second seems fairly decent to me. It's likely to be limited by the speed of your disk subsystem, fast write-caching and that sort of thing.

Have you read the IBM performance report support pac and compared your results to Tim Dunn's? Tim might also offer some advice if you contacted him.


Well I have seen "WebSphere MQ Linux V7.0 - Performance Evaluations on xSeries 64 bit" and they achieve 1665 round trips per second when using 112 clients and WMQ 7.0 (see chapter 3.2.2.2).

I'll check the disk subsystem configuration and will try to contact Tim...
zpat
PostPosted: Tue Mar 08, 2011 3:16 am


Are they using persistent or non-persistent messages? The difference in performance will be very significant.

Synchronous (in effect) request/reply is typically done with non-persistent messages.

I assume you have seen this (fairly old) article.

http://www.ibm.com/developerworks/websphere/library/techarticles/0712_dunn/0712_dunn.html

WMQ v7 has client performance improvements for certain scenarios (read ahead, and asynchronous put, which does not wait for the MQRC to come back and so saves a line turnaround). These mostly apply to non-persistent messages, because disk I/O will generally limit the performance of persistent messages. In most cases the client application will need to enable these improvement options.
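For illustration, asynchronous put response can be defaulted at the queue with MQSC, though the application still has to put with MQPMO_RESPONSE_AS_Q_DEF (the default) for it to take effect, and as far as I know the SVRCONN must allow shared conversations for the v7 client features; your channel above shows SHARECNV(0), which keeps it in v6-compatible mode. A sketch (adjust to your own naming):

```mqsc
* Sketch: default the test queue to asynchronous put response
ALTER QLOCAL(IN) DEFPRESP(ASYNC)

* Enable shared conversations on the SVRCONN (needed for the v7 client features)
ALTER CHANNEL(TEST) CHLTYPE(SVRCONN) SHARECNV(10)
```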
Volvic
PostPosted: Tue Mar 08, 2011 3:27 am


zpat wrote:
Are they using persistent or non-persistent messages - the difference in performance will be very significant?
...

They are evaluating both non-persistent and persistent messages. I was talking about the persistent-message round trips.

I have read Tim's article; searching the web on this issue was part of my "small" research.

The MQ client from MA0T is compiled with V6.0 libraries, but our customers/developers were also testing with Java applications and V7.0 libraries, which resulted in pretty much the same or worse throughput.
zpat
PostPosted: Tue Mar 08, 2011 3:51 am


As I mentioned, most of the V7 client improvements result from the MQ client application specifying (enabling) the appropriate new options in the MQI calls. It won't happen automatically because it is an application decision whether to use the new features or not.

Round-trip persistent messages don't really make sense. If the response is not received in time, the requestor can repeat the request. That means you don't need MQ to persist the message, and you can benefit from the much, much higher performance of non-persistent messages.

As with many things in MQ, it's a trade off between high performance or high recoverability of messages. If you feel you need more advice, then open a PMR or contact Tim (I don't work for IBM but they usually don't mind as long as you have tried the obvious and documented things first).
Volvic
PostPosted: Tue Mar 08, 2011 3:59 am


I think round trips with persistent messages are still useful, as long as the purpose is performance measurement.

We already opened a PMR, but IBM stated that performance consulting cannot be done through a PMR; it has to be handled as a separate, fee-based service.

We'll see if IBM and/or Tim can help.
_________________
Volvic
mqjeff
PostPosted: Tue Mar 08, 2011 4:02 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

In general, performance issues are outside the scope of the PMR process.

This doesn't mean you won't get some assistance.

20 kB messages are relatively small, but message size is a hugely significant factor in performance. As zpat said, look at things like disk subsystem performance. In particular, confirm that you have put /var/mqm/log on a separate disk from /var/mqm/qmgrs, so that writes to the log and to the queue file can happen concurrently.
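As a quick way to verify that, a small Python sketch (stdlib only; the /MQHA paths are the ones from the original post) that checks whether two directories sit on different devices:

```python
import os

def on_separate_devices(path_a: str, path_b: str) -> bool:
    """True if the two paths live on different filesystems/devices."""
    return os.stat(path_a).st_dev != os.stat(path_b).st_dev

# e.g. on_separate_devices("/MQHA/<Qmgr>/log", "/MQHA/<Qmgr>/data")
print(on_separate_devices("/", "/"))     # same path -> False
```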

You should be able to get a build of MA0T for v7 if you send the author an email... or at least instructions on how to recompile it yourself if it comes with full source.