MQSeries.net Forum Index » WebSphere Message Broker (ACE) Support » CICSREQUEST node throughput

PEPERO
PostPosted: Wed Nov 26, 2014 3:43 am    Post subject: CICSREQUEST node throughput


Hi all;
I've designed a message flow in which the input endpoint is a TCPIPSERVER INPUT node. The message is transformed by some Compute nodes and then passed to a CICSREQUEST node; when the response is ready, it is written to a TCPIPSERVER OUTPUT node to complete the thread of execution. The broker runtime runs on a zSeries server on which the CICS subsystem is also installed, so the communication from WMB to CICS is local.
After tuning the TCPIPSERVER INPUT/RECEIVE and OUTPUT nodes, and also the CICS system, the overall throughput of the system won't exceed 350 tps. In my opinion it should be much higher, since the load test uses only 10 additional instances on the broker and the CICS server is not busy (up to 19 simultaneous conversations against the default receive count of 100).
Increasing the number of additional instances not only fails to improve the throughput, it actually decreases the overall system throughput.
At first I suspected the machine pumping in the load, the LAN switches and routers, or even the network bandwidth, but when I moved the pump stub code to Unix System Services on the same zSeries and ran it there (a local run, i.e. pumping on the same host where the WMB server runs), nothing changed.
The CICS program called from the CICSREQUEST node is a tuned program with fairly low I/O overhead, and its Concurrency attribute is set to Quasirent.
So I would be thankful for any recommendation on this issue.
pmasters
PostPosted: Wed Nov 26, 2014 5:23 am


Defining a program as Quasirent means it will only run on the CICS QR thread, and is therefore single-threaded. Multiple receive sessions on the IPIC link and additional flow instances therefore won't help you if the program isn't threadsafe and accounts for a significant proportion of the message processing time in the flow.
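
(For reference, the attribute in question is CONCURRENCY on the CICS PROGRAM definition; switching an existing program over, with placeholder program and group names, would look something like:

Code:
CEDA ALTER PROGRAM(MYPROG) GROUP(MYGROUP) CONCURRENCY(THREADSAFE)

though, as I say below, whether the program logic really is threadsafe is a separate question.)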

What I suspect is that by increasing the additional instances, you are simply causing more resources (flow threads, IPIC sessions) to bottleneck on the backend resource and hence you see the system resource usage go up.

Whether or not your program can be defined as threadsafe or openapi is a separate discussion, and needs to be understood before you change the definition. I won't cover that here as there are numerous CICS documents and presentations on this subject.

What sort of TPS are you seeing if you have no additional instances of your flow? I would also recommend using the message flow statistics to give you an idea of what proportion of message processing time is spent in the CICS call (easily visualised via Web UI if you have v9). If the CICS call is the majority of the time, you won't see an improvement by using additional instances for a single-threaded CICS call. If that is the case, then I would suggest looking at making the target CICS program threadsafe or using CICS TG to spray the work across multiple regions.
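
If they aren't already on, snapshot statistics can be switched on from the command console; something along these lines (broker, execution group and flow names are placeholders, and the exact options may differ slightly between versions):

Code:
mqsichangeflowstats MYBROKER -s -e MYEG -f MYFLOW -c active -t basic -n advanced -o xml

The node-level data is what lets you work out the proportion of elapsed time spent in the CICSREQUEST node.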

Thanks, Peter
===========
Peter Masters
IIB Development
PEPERO
PostPosted: Fri Nov 28, 2014 10:54 pm


Thanks for the answer.
When no additional instances are defined, the total system throughput is 40-50 tps.
I had previously turned on the message flow statistics, and they showed considerable time spent in the CICSREQUEST node. Since we are using WMB v7 there is no support for the gateway URL option for using CTG, so I think I have to try the threadsafe solution.
pmasters
PostPosted: Mon Dec 01, 2014 1:27 am


The flow statistics will show you the amount of time in the CICS program. If that program is doing a lot, you might well expect it to be high. The reason for asking was that working out the percentage of time you spend in CICS vs the amount of time spent in the rest of the flow will tell you (with a single threaded program) whether you are maxing out your CICS task. The other issue you will have is that if you're quasi-reentrant, then you'll be sharing the QR thread with everyone else out there.

Your figure of ~50tps and your maximum tps of 350 suggest you aren't going to see much improvement beyond 7-8 additional instances unless you can either multi-thread, or route to multiple CICS regions.
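
(Rough arithmetic behind that estimate, assuming the CICS call is the only serialised part: one instance of the flow gives ~50 tps, and the single-threaded CICS program tops out around 350 tps, so 350 / 50 = 7 concurrent copies of the flow are enough to keep the QR TCB permanently busy; instances beyond that just queue behind it.)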

We added 3-tier support via CTG in v7 (fixpack 2 I believe, rather than the GA release). However, I think the better option is always to look at whether it is possible to make the CICS programs threadsafe, as that would give much better throughput.
PEPERO
PostPosted: Mon Dec 01, 2014 6:49 am


Before considering full reentrancy (using open TCBs instead of the QR TCB), I designed a load test scenario in which the clients connect directly to a CTG on the mainframe.
The CTG on the mainframe sprays the received requests to a single CICS server instance. The result was ~450-500 tps for calling a single low-I/O CICS program, the same one that is called from the CICSREQUEST node using WMB.
So in my opinion the problem can't be in CICS or the resources defined within it. I believe there must be something wrong in WMB, either the message flow or the JVM (?!!) ...
In the message flow, the TCPIPSERVER INPUT/RECEIVE and OUTPUT nodes read from and write to the sockets. In the middle there is a COMPUTE node for some transformations, and then the CICSREQUEST node is called. There is no complexity in the COMPUTE node. The TCPIPSERVER endpoints are tuned for the input/output streams, and the message flow statistics show no delay on these nodes.
fjb_saper
PostPosted: Mon Dec 01, 2014 8:57 am


Just wondering about your architecture. Sometimes establishing the connections is the most costly part. If you have MQ on the mainframe, you could run the CICS trigger program and have a triggered transaction...
Don't know if it would be any faster, but you can try it and see where the limits are...

This way MQ provides the transport with low latency, since the channel is always busy and there is no cost for establishing the connection every time... Optimized I/O on the channel and optimized I/O on the MF....

Let us know how you fare...
_________________
MQ & Broker admin
pmasters
PostPosted: Tue Dec 02, 2014 2:17 am


Ok - I think it sounds like your CTG shows the max rate for your CICS program in your setup to be about 450tps. In the IIB case, that seems to be ~350tps.

What are your client applications in the "client"->CTG->CICS case? If they are Java CTG applications, the main question I'd have is "are you using extended transactions?". In IIB, the default for new CICS nodes is transactionMode=auto, which means that (again by default) every 5 messages we drive a commit, which incurs an overhead, though I wouldn't expect that to be anything like 100 tps worth.

In the CTG test you ran, you are presumably doing a tight loop calling the CICS program. The message flow equivalent is effectively a slightly more open loop as it is doing a few other things in between messages. As I said, it looks like as a proportion of message processing time, your message flow spends about 1/8th of its time in CICS, and thus you can keep the CICS QR busy constantly with ~8 copies of the flow running.

Another thought. In WMB v7, we use an older IPIC driver which was updated considerably in v9. The principal difference is that in v7, CTG internally creates a single TCP/IP connection to CICS and multiplexes the requests over it. In v9, it makes multiple connections (and still multiplexes). This might be affecting the throughput. In WMB v7, one way to work around this is to switch to using a newer CTG (v8 or above) as a middle tier, rather than going WMB->CICS directly. That would also make the comparison with your CTG test clearer.

Hope that helps, Peter
===========
Peter Masters
IIB Development
PEPERO
PostPosted: Wed Dec 03, 2014 4:29 am


Quote:
fjb_saper wrote:
If you have MQ on the mainframe, you could run the CICS trigger program and have a triggered transaction

Yes, we have it on the mainframe, and I previously replaced the CICSREQUEST node with the following nodes
Quote:
JCN => MQOUTPUT => MQGET => JCN

to be able to use the CICS MQ bridge facilities (DPL), but it never exceeded 250 tps for calling the same CICS application program.
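
For anyone trying the same route, the fiddly part is building the MQCIH for the DPL bridge in the first JCN. A rough sketch is below; it is an illustration only, not our exact code. It assumes the outgoing message already carries an MQMD, that "MQCIH" is accepted as a parser name by createElementAfter, and that the target program name and COMMAREA are placed at the front of the BLOB body elsewhere in the flow; check all of that against your broker and MQ-CICS bridge documentation.

Code:
import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbElement;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbMessage;
import com.ibm.broker.plugin.MbMessageAssembly;
import com.ibm.broker.plugin.MbOutputTerminal;

public class BuildDplBridgeRequest extends MbJavaComputeNode {

    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        MbOutputTerminal out = getOutputTerminal("out");

        // Copy the incoming message (assumed to carry an MQMD already) so the
        // CICS bridge header can be inserted into it
        MbMessage outMessage = new MbMessage(inAssembly.getMessage());
        MbMessageAssembly outAssembly = new MbMessageAssembly(inAssembly, outMessage);

        MbElement root = outMessage.getRootElement();
        MbElement mqmd = root.getFirstElementByPath("MQMD");

        // The bridge recognises the request by MQMD.Format naming the CICS header
        mqmd.getFirstElementByPath("Format").setValue("MQCIH   ");

        // Build the MQCIH immediately after the MQMD ("MQCIH" parser name assumed)
        MbElement cih = mqmd.createElementAfter("MQCIH");
        cih.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "Version", Integer.valueOf(2));
        cih.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "Format", "MQSTR   ");           // format of the data after the CIH
        cih.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "LinkType", Integer.valueOf(1)); // MQCLT_PROGRAM, i.e. DPL
        cih.createElementAsLastChild(MbElement.TYPE_NAME_VALUE, "ReplyToFormat", "MQSTR   ");

        // For a DPL request the target program name occupies the first 8 bytes
        // of the message data, ahead of the COMMAREA (built elsewhere in the flow)

        out.propagate(outAssembly);
    }
}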

Quote:
pmasters wrote:
What are your client applications in the "client"->CTG->CICS case?

The client application is written in Java, and the ECIRequest is instantiated with ECIRequest.ECI_NO_EXTEND. Good news today: when I increased the number of clients pumping the CTG (increasing the sockets), the rate reached 750-800 tps. I'm going to try fixpack 2 so I can use the CICSREQUEST node in three-tier mode, and hope that resolves the problem.
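
Roughly, the shape of that client is as below (a sketch only, not our exact code; the gateway host/port, CICS server and program names are placeholders, and the constructor should be checked against your CTG version):

Code:
import com.ibm.ctg.client.ECIRequest;
import com.ibm.ctg.client.JavaGateway;

public class EciLoadClient {

    public static void main(String[] args) throws Exception {
        // Placeholder gateway address/port and CICS server/program names
        JavaGateway gateway = new JavaGateway("ctghost.example.com", 2006);

        byte[] commarea = new byte[100];              // request/response COMMAREA
        for (int i = 0; i < 100000; i++) {
            ECIRequest request = new ECIRequest(
                    "CICSSRV1",                       // CICS server as defined to CTG
                    null, null,                       // userid / password
                    "MYPROG",                         // target CICS program
                    commarea,
                    ECIRequest.ECI_NO_EXTEND,         // each call is its own unit of work
                    ECIRequest.ECI_LUW_NEW);
            int gwRc = gateway.flow(request);         // synchronous ECI call
            if (gwRc != 0 || request.getCicsRc() != ECIRequest.ECI_NO_ERROR) {
                System.err.println("ECI call failed, rc=" + gwRc
                        + " cicsRc=" + request.getCicsRc());
                break;
            }
        }
        gateway.close();
    }
}
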
Before starting with fixpack 2, I had changed the flow to use two CICSREQUEST nodes, with a Filter node testing whether the current millisecond is odd or even to fire the true or false terminal. But the results were the same. If the single multiplexed TCP/IP connection to CICS were really the suspect, wouldn't this test have broken through the previous throughput figures?