Vitor
Posted: Fri Jan 23, 2009 7:18 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
Mr Butcher wrote:
or am i wrong ?!?

No, you're right, and I apologise if I've expressed it badly. But in this situation how could you run a CKTI in each CICS region to trigger the transactions? Or fire cross-region transactions from a single CKTI?
You could of course bind the 2 CICS regions to a single queue manager and have long-running transactions in both regions servicing the requests. I've assumed that (as these are replacements of 3270 screen contact admin) triggering is being used to allow for periods of idleness. _________________ Honesty is the best policy.
Insanity is the best defence.

Maximus
Posted: Fri Jan 23, 2009 7:22 am Post subject:
Acolyte
Joined: 14 Jun 2004 Posts: 57
Vitor, regarding the trigger monitor (CKTI) on z/OS: the trigger type will be FIRST ("on first") and it will process thousands (I should get a better estimate this afternoon) of requests per day.
The client implemented a CICS Adapter using this pattern: http://www.ibm.com/developerworks/websphere/library/techarticles/0511_suarez/0511_suarez.html
This means that the CICS Adapter will be launched/executed hundreds if not thousands of times per day by the CKTI. Is this acceptable on z/OS? Or would removing the trigger monitor (CKTI) and making the built-in CICS Adapter (based on the pattern above) permanently resident (always running) in memory be more efficient in this case?
I know that on Windows, if you process thousands of messages per day, having a process that is always up and running is more efficient than using the trigger monitor to start the process every time there are messages in the queue. Is this also the case on z/OS?
Also, having a process always up and running to process the messages has the advantage that you can start the same process more than once if you want load balancing. I could use the same technique on z/OS if the CICS Adapter could be modified to be always running. Then I could start this built-in CICS Adapter in both regions using only 1 QM. That would be another way to load balance CICS transactions between 2 CICS regions. Is this feasible or acceptable on z/OS?
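For reference, this kind of CKTI triggering is normally set up in MQSC roughly as sketched below; the object names (PAYMENT.REQUEST, PAYMENT.PROC, PAY1, CICS01.INITQ) are invented for illustration, not the client's actual definitions.

Code:
* Hypothetical object names, for illustration only.
* The PROCESS object names the CICS transaction that CKTI will start.
DEFINE PROCESS('PAYMENT.PROC') +
       APPLTYPE(CICS) +
       APPLICID('PAY1')
*
* TRIGTYPE(FIRST): a trigger message is generated only when the queue
* depth goes from 0 to 1, so one started instance drains the queue.
DEFINE QLOCAL('PAYMENT.REQUEST') +
       TRIGGER +
       TRIGTYPE(FIRST) +
       INITQ('CICS01.INITQ') +
       PROCESS('PAYMENT.PROC')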

fjb_saper
Posted: Fri Jan 23, 2009 7:44 am Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
Maximus wrote:
Vitor, regarding the trigger monitor (CKTI) on z/OS: the trigger type will be FIRST ("on first") and it will process thousands (I should get a better estimate this afternoon) of requests per day.
The client implemented a CICS Adapter using this pattern: http://www.ibm.com/developerworks/websphere/library/techarticles/0511_suarez/0511_suarez.html
This means that the CICS Adapter will be launched/executed hundreds if not thousands of times per day by the CKTI. Is this acceptable on z/OS? Or would removing the trigger monitor (CKTI) and making the built-in CICS Adapter (based on the pattern above) permanently resident (always running) in memory be more efficient in this case?
I know that on Windows, if you process thousands of messages per day, having a process that is always up and running is more efficient than using the trigger monitor to start the process every time there are messages in the queue. Is this also the case on z/OS?
Also, having a process always up and running to process the messages has the advantage that you can start the same process more than once if you want load balancing. I could use the same technique on z/OS if the CICS Adapter could be modified to be always running. Then I could start this built-in CICS Adapter in both regions using only 1 QM. That would be another way to load balance CICS transactions between 2 CICS regions. Is this feasible or acceptable on z/OS?

Great article. Most CICS shops, however, use trigger type EVERY AND set a limit on how many transactions (as triggered) can run in parallel, thus avoiding the damaging resource drain you get if resource utilization is not limited... It is a different way of scaling that avoids being limited to a single thread with trigger type FIRST....
Now if you run both CICS regions attached to the same qmgr, a long-running process might allow you to balance better. At the same time, be aware that with the processing power at your disposal you might have no load balancing at all. The rule for dequeueing is first come, first served.
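To make the trigger EVERY approach concrete, a rough sketch follows; the names (ORDER.REQUEST, ORDER.PROC, MQTCL, MQGRP, PAY1, ORDRPGM) are invented for illustration, and the CEDA attributes should be checked against your CICS level.

Code:
* TRIGTYPE(EVERY): one trigger message per arriving message, so CKTI
* starts one instance of the transaction for each request.
DEFINE QLOCAL('ORDER.REQUEST') +
       TRIGGER +
       TRIGTYPE(EVERY) +
       INITQ('CICS01.INITQ') +
       PROCESS('ORDER.PROC')

The cap on how many of those triggered transactions run in parallel then comes from CICS, e.g. a transaction class:

Code:
CEDA DEFINE TRANCLASS(MQTCL) GROUP(MQGRP) MAXACTIVE(10)
CEDA DEFINE TRANSACTION(PAY1) GROUP(MQGRP) PROGRAM(ORDRPGM) TRANCLASS(MQTCL)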
_________________ MQ & Broker admin

Maximus
Posted: Fri Jan 23, 2009 8:26 am Post subject:
Acolyte
Joined: 14 Jun 2004 Posts: 57
fjb_saper wrote:
Great article. Most CICS shops, however, use trigger type EVERY AND set a limit on how many transactions (as triggered) can run in parallel, thus avoiding the damaging resource drain you get if resource utilization is not limited... It is a different way of scaling that avoids being limited to a single thread with trigger type FIRST....
Now if you run both CICS regions attached to the same qmgr, a long-running process might allow you to balance better. At the same time, be aware that with the processing power at your disposal you might have no load balancing at all. The rule for dequeueing is first come, first served.
You guys are great! That's another way of load balancing: use trigger EVERY and set the transaction class to control how many run in parallel. But if I use this technique I need to cluster 2 QMs to have 2 CKTIs and to load balance CICS transaction executions between the 2 CICS regions, plus I get the extra parallelism (in each CICS region) of trigger EVERY with the transaction class set. Using this option, I could greatly simplify the built-in CICS Adapter, which currently manages all this parallelism (coded by the client).
I have another issue that I didn't discuss with you guys yet... you may have some good ideas....
The client decided to create one queue per CICS transaction, and all those queues are triggered on FIRST. At the moment the client has about 30 CICS transactions, so 30 queues, and will soon have more than 100 CICS transactions, which means over 100 queues. The triggered program behind each of those queues is exactly the same, except that it is given a transaction-specific name for each of the CICS transactions. This was done to be able to monitor the execution time of each CICS transaction with TMON. Is this common on z/OS?
Right now the built-in CICS Adapter is a CICS transaction that does the MQGET, looks at the content, executes a service program with the message content, gets back the result, builds a response message and does the MQPUT. A parallelism mechanism is also coded to start more than one service program (the general shape of this loop is sketched below).
I want to try to convince the client to use only one queue for all the CICS transactions. My main reasons are easier maintenance, and that at the moment the consumer (the web app sending the requests) needs to know which queue to put its request on depending on which CICS transaction it wants to execute, which does not make sense to me.
Do you guys have other cons/pros on using one queue per CICS transaction versus one queue for all the CICS transactions?
I would like to find a solution that would permit the use of one queue for all the CICS transactions and still be able to monitor the CICS transactions with TMON.
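Purely as an illustration of that loop, a minimal C/MQI sketch is shown here. The queue name, the invoke_service() dispatcher and the 30-second idle timeout are assumptions, not the client's actual code; in a real CICS program the dispatch would be an EXEC CICS LINK with a COMMAREA or channel, and the unit of work would end with EXEC CICS SYNCPOINT.

Code:
#include <string.h>
#include <cmqc.h>                       /* IBM MQ MQI definitions */

/* Hypothetical dispatcher: a real adapter would pick a program from the
   request content and EXEC CICS LINK to it; here we just echo the request. */
static MQLONG invoke_service(char *req, MQLONG reqlen, char *rsp, MQLONG rspmax)
{
    MQLONG n = (reqlen < rspmax) ? reqlen : rspmax;
    memcpy(rsp, req, n);
    return n;
}

void serve_requests(void)
{
    MQOD   od  = {MQOD_DEFAULT};
    MQMD   md  = {MQMD_DEFAULT};
    MQGMO  gmo = {MQGMO_DEFAULT};
    MQHOBJ hobj;
    MQLONG cc, rc, datalen, rsplen;
    char   buffer[4096], reply[4096];

    /* Under the CICS-MQ adapter no explicit MQCONN is needed; the default
       connection handle is already usable.  INPUT_SHARED lets several
       instances of this transaction drain the same queue in parallel. */
    strncpy(od.ObjectName, "CICS.REQUEST.QUEUE", MQ_Q_NAME_LENGTH);
    MQOPEN(MQHC_DEF_HCONN, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
           &hobj, &cc, &rc);
    if (cc != MQCC_OK) return;

    gmo.Options      = MQGMO_WAIT | MQGMO_SYNCPOINT | MQGMO_CONVERT
                     | MQGMO_FAIL_IF_QUIESCING;
    gmo.WaitInterval = 30000;           /* go idle and exit after 30 seconds */

    for (;;)
    {
        /* Accept any message: reset MsgId/CorrelId before every MQGET. */
        memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));
        memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId));

        MQGET(MQHC_DEF_HCONN, hobj, &md, &gmo,
              sizeof(buffer), buffer, &datalen, &cc, &rc);
        if (rc == MQRC_NO_MSG_AVAILABLE) break;  /* idle: let triggering restart us */
        if (cc == MQCC_FAILED) break;

        /* Route on the request content, not on the queue name, so one
           queue can feed many different CICS transactions/programs. */
        rsplen = invoke_service(buffer, datalen, reply, sizeof(reply));

        /* Send the reply to the queue the requester named, correlated
           by the request MsgId. */
        {
            MQOD  rod = {MQOD_DEFAULT};
            MQMD  rmd = {MQMD_DEFAULT};
            MQPMO pmo = {MQPMO_DEFAULT};

            strncpy(rod.ObjectName,     md.ReplyToQ,    MQ_Q_NAME_LENGTH);
            strncpy(rod.ObjectQMgrName, md.ReplyToQMgr, MQ_Q_MGR_NAME_LENGTH);
            memcpy(rmd.CorrelId, md.MsgId,     sizeof(rmd.CorrelId));
            memcpy(rmd.Format,   MQFMT_STRING, sizeof(rmd.Format));
            pmo.Options = MQPMO_SYNCPOINT | MQPMO_FAIL_IF_QUIESCING;

            MQPUT1(MQHC_DEF_HCONN, &rod, &rmd, &pmo, rsplen, reply, &cc, &rc);
        }
        /* Commit the get+put pair here (EXEC CICS SYNCPOINT in CICS). */
    }

    MQCLOSE(MQHC_DEF_HCONN, &hobj, MQCO_NONE, &cc, &rc);
}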

rtsujimoto
Posted: Fri Jan 23, 2009 9:21 am Post subject:
Centurion
Joined: 16 Jun 2004 Posts: 119 Location: Lake Success, NY
Multiple CICS regions running on the same LPAR can access the same queue manager. We run 4-5 CICS regions against the same queue manager.

bruce2359
Posted: Fri Jan 23, 2009 9:47 am Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9471 Location: US: west coast, almost. Otherwise, enroute.
There is a restriction with the CICS-MQ adapter, namely: Only one queue manager at a time can be connected to a CICS region. The adapter does the MQCONN. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.

Maximus
Posted: Fri Jan 23, 2009 9:55 am Post subject:
Acolyte
Joined: 14 Jun 2004 Posts: 57

bruce2359
Posted: Fri Jan 23, 2009 12:26 pm Post subject:
Poobah
Joined: 05 Jan 2008 Posts: 9471 Location: US: west coast, almost. Otherwise, enroute.
Quote:
Also the client has built its own CICS-MQ Adapter ...

Are you saying that your customer has written their own CICS-MQ adapter, and is using it instead of the one supplied with the MQ product?
If so, you will need to ask your customer how it works, and how it's different from the supplied adapter.
The supplied adapter is the facility that allows MQ calls to cross from CICS to MQ. It has a restriction that only one queue manager at a time can be connected to a particular CICS region. A CICS application with embedded MQ calls can code the MQCONN call, but the adapter ignores it. When the adapter is started, it MQCONNects to the queue manager.
The supplied CICS trigger monitor makes use of the adapter as well. _________________ I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.

rtsujimoto
Posted: Tue Jan 27, 2009 10:52 am Post subject:
Centurion
Joined: 16 Jun 2004 Posts: 119 Location: Lake Success, NY
The tech article that the poster references describes a software layer that does two things: 1. manage the number of tasks that handle the load of incoming messages; 2. hand the inbound message over to the appropriate program via EXEC CICS LINK and COMMAREA. The objectives are to avoid overloading CICS with too many transactions started by CKTI, and to load balance the handling of inbound messages by using dedicated programs to handle them. The only thing CKTI is used for is to start this software layer. I suspect a lot of CICS shops have implemented something similar to this.

Maximus
Posted: Tue Jan 27, 2009 11:13 am Post subject:
Acolyte
Joined: 14 Jun 2004 Posts: 57
rtsujimoto wrote:
The tech article that the poster references describes a software layer that does two things: 1. manage the number of tasks that handle the load of incoming messages; 2. hand the inbound message over to the appropriate program via EXEC CICS LINK and COMMAREA. The objectives are to avoid overloading CICS with too many transactions started by CKTI, and to load balance the handling of inbound messages by using dedicated programs to handle them. The only thing CKTI is used for is to start this software layer. I suspect a lot of CICS shops have implemented something similar to this.

The supplied CICS Bridge uses the COMMAREA and is therefore limited to 32K; this is another reason why my client decided to write its own CICS Bridge.

GSI
Posted: Thu Feb 05, 2009 7:20 pm Post subject:
Novice
Joined: 16 Apr 2008 Posts: 18
Maximus,
Did you figure out a solution? Could you please share it with the others?
I think Butcher has a point here, with the CICS regions residing on the same LPAR and a single queue manager for the CICS regions.... Butcher, could you elaborate on this?

GSI
Posted: Thu Feb 05, 2009 7:33 pm Post subject:
Novice
Joined: 16 Apr 2008 Posts: 18
Cons/pros of using one queue per CICS transaction versus one queue for all the CICS transactions?
This is interesting, masters... would anybody like to comment on this?

fjb_saper
Posted: Thu Feb 05, 2009 7:40 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
I believe the point being made was that you can only connect a single qmgr to a CICS region. But that does not say what the limit is on CICS regions connected to a single qmgr... subtle difference, all running in the same LPAR.
All the other points were variations on scaling and throughput optimization.
Have fun _________________ MQ & Broker admin

bob_buxton
Posted: Fri Feb 06, 2009 3:30 am Post subject:
Master
Joined: 23 Aug 2001 Posts: 266 Location: England
It is good to consider how you would handle high transaction volumes and continuous application availability when designing an application.
You don't want to be approaching your Christmas sales peak and realize that you can't keep up with the transaction flow without a fundamental application redesign. Having a design that can cope with multiple simultaneous instances executing, without excessive serialization and affinities, is good basic programming practice.
However, that doesn't mean you necessarily have to implement all of the possible techniques for achieving parallel execution initially.
CICS and MQ are both designed to handle high transaction volumes.
The OP mentioned volumes in terms of thousands of transactions per day. Obviously it will depend on how much work you need to do in each transaction, but CICS and MQ can handle thousands of transactions per second.
I was recently looking at a CICS region that was processing 100,000 CICS bridge transactions per hour, and for substantial periods CICS was idle!
Obviously you need to do sizing calculations based on expected message rates, transaction execution times etc. to ensure you are in the right ballpark. But start out simply with a single trigger-first queue; then, provided you had a good design to start with, you can always add multiple queues or a transaction monitor (based on the referenced design pattern) and additional CICS regions as needed to meet your capacity and availability requirements. _________________ Bob Buxton
Ex-Websphere MQ Development

Maximus
Posted: Fri Feb 06, 2009 5:16 am Post subject:
Acolyte
Joined: 14 Jun 2004 Posts: 57
bob_buxton wrote:
...start out simply with a single trigger-first queue; then, provided you had a good design to start with, you can always add multiple queues or a transaction monitor (based on the referenced design pattern) and additional CICS regions as needed to meet your capacity and availability requirements.

Thanks Bob for your comment. I told my client that it's overkill to have so many trigger queues to start with, and that it could even have a negative effect. I recommended having 3 trigger queues: one for high-priority request/response, one for normal-priority request/response and one for batch processing. This is an acceptable in-between solution... since right now the client has around 30 trigger queues.
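For illustration, that three-queue layout could be as simple as the MQSC below; the queue and process names are invented here, with each queue triggered on FIRST into the same initiation queue and each pointing at its own PROCESS so the started transactions can still be distinguished by name in TMON.

Code:
DEFINE QLOCAL('APP.REQUEST.HIGH') +
       TRIGGER TRIGTYPE(FIRST) INITQ('CICS01.INITQ') PROCESS('APP.PROC.HIGH')
DEFINE QLOCAL('APP.REQUEST.NORMAL') +
       TRIGGER TRIGTYPE(FIRST) INITQ('CICS01.INITQ') PROCESS('APP.PROC.NORM')
DEFINE QLOCAL('APP.REQUEST.BATCH') +
       TRIGGER TRIGTYPE(FIRST) INITQ('CICS01.INITQ') PROCESS('APP.PROC.BATCH')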