MQMB&WAS (Centurion; Joined: 12 Jun 2016; Posts: 130)
Posted: Mon Apr 23, 2018 4:58 pm    Post subject: Client connections to Mainframe vs Distributed qmgrs
Hi experts,
We have some applications connecting to the mainframe via MQ on Linux (local to the app). I'm just curious why the apps can't connect directly to the mainframe qmgrs using client connections. Would there be any performance issues or any other known issues with such connections? Thanks for your time.
bruce2359 (Poobah; Joined: 05 Jan 2008; Posts: 9469; Location: US: west coast, almost. Otherwise, enroute.)
Posted: Mon Apr 23, 2018 6:52 pm    Post subject: Re: Client connections to Mainframe vs Distributed qmgrs
MQMB&WAS wrote:
> Hi experts,
> We have some applications connecting to the mainframe via MQ on Linux (local to the app). I'm just curious why the apps can't connect directly to the mainframe qmgrs using client connections. Would there be any performance issues or any other known issues with such connections? Thanks for your time.
Are you saying that an app:
- MQCONNects to a Linux qmgr,
- MQOPENs a QRemote definition,
- MQPUTs a message that ends up in a transmission queue,
- and the message is sent across a Sender-Receiver channel pair to a z/OS qmgr?
What version of MQ on z/OS? Any other information you can share will help us help you.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
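For readers following along, here is a minimal MQI sketch (in C) of the flow bruce2359 describes: the application binds locally to the Linux queue manager and simply puts to a queue name; when that name resolves to a QRemote definition, the message lands on the transmission queue and the Sender-Receiver channel pair delivers it to z/OS. The queue manager name LINUX.QM and queue name APP.REQUEST are hypothetical, and error handling is trimmed for brevity.

```c
/* Sketch only: the application side of the "local bindings + QRemote" flow.  */
/* LINUX.QM and APP.REQUEST are hypothetical names; APP.REQUEST would be      */
/* defined on LINUX.QM as a remote queue pointing at the z/OS queue manager.  */
#include <string.h>
#include <stdio.h>
#include <cmqc.h>

int main(void)
{
    MQHCONN hConn = MQHC_UNUSABLE_HCONN;
    MQHOBJ  hObj  = MQHO_UNUSABLE_HOBJ;
    MQOD    od    = {MQOD_DEFAULT};
    MQMD    md    = {MQMD_DEFAULT};
    MQPMO   pmo   = {MQPMO_DEFAULT};
    MQLONG  compCode, reason;
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "LINUX.QM";   /* hypothetical */
    char    msg[] = "hello z/OS";

    /* Local-bindings connection to the Linux queue manager */
    MQCONN(qmName, &hConn, &compCode, &reason);
    if (compCode == MQCC_FAILED)
    {
        printf("MQCONN failed, reason %d\n", (int)reason);
        return 1;
    }

    /* Open the remote queue definition; name resolution does the rest */
    strncpy(od.ObjectName, "APP.REQUEST", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
           &hObj, &compCode, &reason);

    if (compCode != MQCC_FAILED)
    {
        /* The put lands on the transmission queue; the SDR/RCVR channel
           pair moves it to the z/OS queue manager asynchronously.        */
        MQPUT(hConn, hObj, &md, &pmo,
              (MQLONG)strlen(msg), msg, &compCode, &reason);
        MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    }

    MQDISC(&hConn, &compCode, &reason);
    return 0;
}
```

Nothing in the application refers to the transmission queue or the channel; name resolution on the Linux qmgr and the channel pair handle the hop to z/OS.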
Vitor (Grand High Poobah; Joined: 11 Nov 2005; Posts: 26093; Location: Texas, USA)
Posted: Tue Apr 24, 2018 5:09 am    Post subject: Re: Client connections to Mainframe vs Distributed qmgrs
MQMB&WAS wrote:
> Would there be any performance issues or any other known issues with such connections?
Increased license costs, for one; the Client Attach Facility on z/OS allows 5 administrative connections unless you buy more capacity. Queue manager to queue manager connections are free.
Another is security & firewall configuration; it takes specific configuration on z/OS to make the TCP/IP stack service the number of dynamic connections that a client setup allows. Also (given what most sites use and store on their z/OS systems), the security people don't like that many holes in the firewall with that many possible endpoints. They prefer a specific hole pointing to a specific whitelisted static IP address, which is usually a server.
A lesser consideration is the volume of material traveling to or from the mainframe. It's often better to let the application dump material onto the transmission queue and let MQ work through it.
_________________
Honesty is the best policy.
Insanity is the best defence.
MQMB&WAS (Centurion; Joined: 12 Jun 2016; Posts: 130)
Posted: Tue Apr 24, 2018 5:09 am    Post subject: Re: Client connections to Mainframe vs Distributed qmgrs
bruce2359 wrote:
> Are you saying that an app:
> - MQCONNects to a Linux qmgr,
> - MQOPENs a QRemote definition,
> - MQPUTs a message that ends up in a transmission queue,
> - and the message is sent across a Sender-Receiver channel pair to a z/OS qmgr?
> What version of MQ on z/OS? Any other information you can share will help us help you.
Correct. The app connects to the Linux qmgr with local bindings. The current version of MQ on both Linux and z/OS is v8.
grrttlucas1 (Newbie; Joined: 10 Jul 2015; Posts: 4)
Posted: Tue Apr 24, 2018 5:43 am    Post subject:

Client attachment feature for MQ z/OS was removed with version 8. No longer limited to 5 client connections.
bruce2359 (Poobah; Joined: 05 Jan 2008; Posts: 9469; Location: US: west coast, almost. Otherwise, enroute.)
Posted: Tue Apr 24, 2018 5:51 am    Post subject: Re: Client connections to Mainframe vs Distributed qmgrs
MQMB&WAS wrote:
> bruce2359 wrote:
> > Are you saying that an app:
> > - MQCONNects to a Linux qmgr,
> > - MQOPENs a QRemote definition,
> > - MQPUTs a message that ends up in a transmission queue,
> > - and the message is sent across a Sender-Receiver channel pair to a z/OS qmgr?
> > What version of MQ on z/OS? Any other information you can share will help us help you.
> Correct. The app connects to the Linux qmgr with local bindings. The current version of MQ on both Linux and z/OS is v8.
Have you tried connecting as a client to the z/OS qmgr? What were the results?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
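For anyone experimenting with this suggestion, the MQI calls stay the same; only the connection setup changes. A minimal sketch of a programmatic client connection, assuming a hypothetical z/OS queue manager ZOS.QM, SVRCONN channel ZOS.SVRCONN, and listener at zoshost(1414):

```c
/* Sketch only: connecting as a client straight to the z/OS queue manager. */
/* ZOS.QM, ZOS.SVRCONN and zoshost(1414) are hypothetical names.           */
#include <string.h>
#include <stdio.h>
#include <cmqc.h>
#include <cmqxc.h>   /* MQCD for client channel definitions */

int main(void)
{
    MQHCONN hConn = MQHC_UNUSABLE_HCONN;
    MQCNO   cno   = {MQCNO_DEFAULT};
    MQCD    cd    = {MQCD_CLIENT_CONN_DEFAULT};
    MQLONG  compCode, reason;
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "ZOS.QM";   /* hypothetical */

    /* Describe the client channel directly instead of relying on MQSERVER/CCDT */
    strncpy(cd.ChannelName, "ZOS.SVRCONN", MQ_CHANNEL_NAME_LENGTH);
    strncpy(cd.ConnectionName, "zoshost(1414)", MQ_CONN_NAME_LENGTH);
    cd.TransportType = MQXPT_TCP;

    cno.ClientConnPtr = &cd;
    cno.Version       = MQCNO_VERSION_2;   /* needed for ClientConnPtr */

    MQCONNX(qmName, &cno, &hConn, &compCode, &reason);
    if (compCode == MQCC_FAILED)
    {
        printf("MQCONNX failed, reason %d\n", (int)reason);
        return 1;
    }

    /* ... MQOPEN / MQPUT / MQGET exactly as in the bindings case ... */

    MQDISC(&hConn, &compCode, &reason);
    return 0;
}
```

The same switch can often be made without code changes by linking the application with the MQ client libraries and setting the MQSERVER environment variable (or using a CCDT); either way, the licensing, security/firewall, and CHIN cost considerations raised in this thread still apply.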
elkinsc (Centurion; Joined: 29 Dec 2004; Posts: 138; Location: Indy)
Posted: Tue Apr 24, 2018 5:53 am    Post subject: Connecting as a client directly
You can connect directly to a z/OS queue manager from a client application running on any distributed platform. There is no longer an additional charge for the Client Attach Feature; that charge was removed in the V7 time frame. HOWEVER, we often recommend using a client concentrator queue manager for a couple of reasons. The biggest is that clients may not be well behaved: if they connect, do one thing, disconnect, and repeat that over and over, the associated CPU costs can substantially increase MLC charges. Also, even if you are using the VUE version, the CPU increase can impact other workload. The second reason is less typical: the channel initiator address space cannot support the number of channels and the buffers required for client access to messages.
Vitor (Grand High Poobah; Joined: 11 Nov 2005; Posts: 26093; Location: Texas, USA)
Posted: Tue Apr 24, 2018 6:04 am    Post subject:
grrttlucas1 wrote:
> Client attachment feature for MQ z/OS was removed with version 8. No longer limited to 5 client connections.
I stand by my comments regarding security & managing volume.
_________________
Honesty is the best policy.
Insanity is the best defence.
elkinsc (Centurion; Joined: 29 Dec 2004; Posts: 138; Location: Indy)
Posted: Tue Apr 24, 2018 6:36 am    Post subject: The CAF
When it was removed, it was made retroactive for all queue managers supported at that time. I just remembered that it was available to V7 queue managers, because when I really started getting the angry calls about CHIN CPU, most of the queue managers were V7.
And you are right, poking holes in firewalls is a thing of beauty and a joy forever!
There are some very good reasons to connect directly to z/OS queue managers - using shared queues is the best one - but it can come at a high cost, especially if the clients are not under your control.
It would be nice if the expensive connect/disconnect code could be offloaded to zAAPs, like DB2's DRDA. But my opinion and 65 cents get you a Snickers bar when they are on sale.
Happy Tuesday!
MQMB&WAS (Centurion; Joined: 12 Jun 2016; Posts: 130)
Posted: Tue Apr 24, 2018 7:21 am    Post subject: Re: Connecting as a client directly
elkinsc wrote:
> The biggest is that clients may not be well behaved: if they connect, do one thing, disconnect, and repeat that over and over, the associated CPU costs can substantially increase MLC charges.
Apparently, this was the complaint from the mainframe team when the app tried to connect directly to z/OS. Also, there was some issue with logs filling up.
Just curious, how is this not a concern on distributed?
And what changes do the apps need to make to not cause these issues?
Vitor (Grand High Poobah; Joined: 11 Nov 2005; Posts: 26093; Location: Texas, USA)
Posted: Tue Apr 24, 2018 7:31 am    Post subject: Re: Connecting as a client directly
MQMB&WAS wrote:
> Just curious, how is this not a concern on distributed?
A z/OS queue manager does not work the same way as a distributed one. Do not assume that z/OS is just another OS; it is unique and this impacts everything an application does, including disc file handling.
MQMB&WAS wrote:
> And what changes do the apps need to make to not cause these issues?
Write applications according to best practice. Seriously.
The pattern of connect, do something, disconnect, repeat is hideously inefficient even on distributed platforms. Connection and disconnection are the most resource-hungry MQ operations, and applications will see significant improvements in both CPU & I/O consumption if they connect once at startup, reuse that connection for all operations, and disconnect once at close down.
_________________
Honesty is the best policy.
Insanity is the best defence.
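To make the contrast concrete, here is a rough sketch of the two patterns, with hypothetical names QM1 and APP.QUEUE and error checking omitted for brevity. The first function pays the connect/open/close/disconnect cost for every single message; the second pays it once for the whole batch.

```c
/* Sketch only: connect-per-message vs connect-once.                        */
/* QM1 and APP.QUEUE are hypothetical names; error checking is omitted.     */
#include <string.h>
#include <cmqc.h>

static void put_one(MQHCONN hConn, MQHOBJ hObj, const char *text)
{
    MQMD   md  = {MQMD_DEFAULT};
    MQPMO  pmo = {MQPMO_DEFAULT};
    MQLONG compCode, reason;
    MQPUT(hConn, hObj, &md, &pmo, (MQLONG)strlen(text),
          (PMQVOID)text, &compCode, &reason);
}

/* Anti-pattern: every message pays the full connect/open/close/disconnect
   cost - the part that drives CHIN CPU (and MLC charges) on z/OS.          */
static void put_per_connection(const char *text)
{
    MQHCONN hConn;
    MQHOBJ  hObj;
    MQOD    od = {MQOD_DEFAULT};
    MQLONG  compCode, reason;
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM1";

    MQCONN(qmName, &hConn, &compCode, &reason);
    strncpy(od.ObjectName, "APP.QUEUE", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
           &hObj, &compCode, &reason);
    put_one(hConn, hObj, text);
    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    MQDISC(&hConn, &compCode, &reason);
}

/* Best practice: connect and open once, reuse the connection for the whole
   batch, disconnect once at close down.                                    */
static void put_batch(const char *texts[], int count)
{
    MQHCONN hConn;
    MQHOBJ  hObj;
    MQOD    od = {MQOD_DEFAULT};
    MQLONG  compCode, reason;
    MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH] = "QM1";
    int     i;

    MQCONN(qmName, &hConn, &compCode, &reason);
    strncpy(od.ObjectName, "APP.QUEUE", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
           &hObj, &compCode, &reason);

    for (i = 0; i < count; i++)        /* one connection, many puts */
        put_one(hConn, hObj, texts[i]);

    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    MQDISC(&hConn, &compCode, &reason);
}
```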
markt (Knight; Joined: 14 May 2002; Posts: 508)
Posted: Tue Apr 24, 2018 8:47 am    Post subject:
The big difference is that it doesn't (transparently) cost money to run badly written apps on a distributed platform. So CPU costs are irrelevant! Though that might change as more people use cloud deployments and start to get charged there too.
This is despite the fact that operations which cost a lot of processing time on z/OS cost just as much processing time on Unix (to a first level of approximation).
bruce2359 (Poobah; Joined: 05 Jan 2008; Posts: 9469; Location: US: west coast, almost. Otherwise, enroute.)
Posted: Tue Apr 24, 2018 9:03 am    Post subject:
elkinsc wrote:
> The second reason is less typical: the channel initiator address space cannot support the number of channels and the buffers required for client access to messages.
Can you be a bit more precise here please?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
Vitor (Grand High Poobah; Joined: 11 Nov 2005; Posts: 26093; Location: Texas, USA)
Posted: Tue Apr 24, 2018 9:26 am    Post subject:
markt wrote:
> Though that might change as more people use cloud deployments and start to get charged there too.
Good point! I hadn't considered that myself, but that could indeed be a short, sharp shock to a lot of people's systems.
_________________
Honesty is the best policy.
Insanity is the best defence.
elkinsc (Centurion; Joined: 29 Dec 2004; Posts: 138; Location: Indy)
Posted: Tue Apr 24, 2018 9:53 am    Post subject: Hi Bruce
When using clients, the CHIN has to have space for the messages, so it has to use address-space storage. The CHIN is pretty smart about how that storage is used, but if there are a large number of clients using very large messages and a lot of connections (each of which has a chunk of storage associated with it), we have seen the CHIN run out of storage. It does not use above-the-bar storage; I think there is an RFE requesting that, as it would allow for many more connections as well. As I said, I have not seen this often, but I have seen it more than once.
I like to tell people: somewhere, something has to supply the physical storage for darn near everything.