Clients utilising F5 to connect to Queue Manager
aspre1b
PostPosted: Wed Feb 13, 2019 3:37 am    Post subject: Clients utilising F5 to connect to Queue Manager

Hello

I'm looking to utilise an F5 appliance to control client connections to one of two gateway queue managers. I require the F5 to send all connections to the primary queue manager while it is up, and only send to the secondary if the primary is unavailable.

I've been told that, to get this to work, the F5 needs to utilise a CCDT to connect to MQ. Does anyone have experience with how an F5 detects that an MQ instance is up? Is it a complicated setup?

NB: At present, we cannot have a cross-site multi-instance queue manager to act as the Gateway due to pipe speed constraints.
Vitor
PostPosted: Wed Feb 13, 2019 5:51 am    Post subject: Re: Clients utilising F5 to connect to Queue Manager

aspre1b wrote:
I've been told that, to get this to work, the F5 needs to utilise a CCDT to connect to MQ.


Who told you that? You don't.

aspre1b wrote:
Does anyone have experience with how an F5 detects that an MQ instance is up?


Yes. We pass all client connections through F5 for load balancing & failover.

aspre1b wrote:
Is it a complicated setup?


No.
aspre1b
PostPosted: Wed Feb 13, 2019 6:17 am

Thanks Vitor!

How easy is it to set up the F5 so that it sends all connections to a single primary queue manager? If that becomes uncontactable, the secondary is promoted to primary, and connections only move back to the original primary if the newly promoted primary itself fails.

UPDATE:
Looks like I have two options:
- A simple primary and secondary can be achieved using priority group activation to designate a primary and a secondary.
- If I need the primary role to flip permanently to the promoted member (i.e. no automatic failback), an iRule is required (https://devcentral.f5.com/codeshare/single-node-persistence).
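For anyone wondering what the F5 actually checks: a basic TCP monitor simply confirms that the queue manager's listener port completes a TCP handshake, and priority group activation then keeps traffic on the higher-priority member while that check passes. Below is a rough Python sketch of that selection logic only; the hostnames, port and timeout are invented, and this illustrates the behaviour rather than being F5 configuration.

Code:
import socket

# Hypothetical gateway queue manager listeners; in the F5 these would be
# pool members, with the primary given the higher priority group.
PRIMARY = ("qmgr1.example.com", 1414)
SECONDARY = ("qmgr2.example.com", 1414)

def listener_is_up(host, port, timeout=2.0):
    """Rough equivalent of a basic TCP health monitor: can we complete a
    TCP handshake with the MQ listener?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_gateway():
    """Priority-group-style selection: always the primary while it passes
    its check, fall back to the secondary only when the primary fails."""
    if listener_is_up(*PRIMARY):
        return PRIMARY
    if listener_is_up(*SECONDARY):
        return SECONDARY
    raise RuntimeError("Neither gateway queue manager is reachable")

if __name__ == "__main__":
    host, port = choose_gateway()
    print(f"Connections would be sent to {host}({port})")

Note that a bare TCP check only proves the listener is accepting connections, not that the queue manager behind it is fully healthy; some sites layer a deeper application-level monitor on top.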
Vitor
PostPosted: Wed Feb 13, 2019 8:01 am

aspre1b wrote:
How easy is it to set up the F5 so that it sends all connections to a single primary queue manager?


As easy as setting up an F5 to direct all connections to a single web server, except you have to make sure the F5 considers the session sticky and doesn't redistribute 2 consecutive calls from the same client to 2 different queue managers.

aspre1b wrote:
If that becomes uncontactable, the secondary is promoted to primary, and connections only move back to the original primary if the newly promoted primary itself fails.

UPDATE:
Looks like I have two options:
- A simple primary and secondary can be achieved using priority group activation to designate a primary and a secondary.
- If I need the primary role to flip permanently to the promoted member (i.e. no automatic failback), an iRule is required (https://devcentral.f5.com/codeshare/single-node-persistence).


If you don't want all the connections to shift back to the original primary when it comes back up, then yes, you'll need some logic in the F5 as you describe. As I said, we use groups, so if one queue manager goes down the other n queue managers take up the load. We don't forcibly redistribute when the failed queue manager returns, but let natural attrition and load balancing do the work.

One political gotcha I'll mention: we did get some pushback from persons who thought that, because the F5 redirected connections to a working queue manager, they didn't have to check for 2009, 2019, 2059, etc. reason codes any more "because the F5 was managing the connections". There was some surprise and disquiet when it was explained that when the queue manager failed, the connection would still be lost, and it was the reconnection request that would be "managed" by the F5 to a working queue manager.

There was some suggestion that this meant the new system was "no better than the old one because the application still has to reconnect". The rebuttal was that when the application reconnects, it's routed to a working queue manager, not left to scratch & whine at the door of the dead queue manager for hours.

(This site didn't use CCDT, MI or other strategies. You got a queue manager, you used that queue manager, it went down, you were SOL until it came back. Long, political, historical story I fixed with a bunch of F5s and the network team).
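To make the reconnect point concrete, here is a rough sketch of the client-side logic being described, using pymqi purely for illustration. The VIP hostname, channel and queue name are invented; the point is that the application still traps 2009/2059-style reason codes and reconnects, and the F5 simply routes that new connection to a working queue manager.

Code:
import time
import pymqi
from pymqi import CMQC

# Hypothetical names: the F5 VIP in front of the gateway queue managers,
# a client channel and an application queue.
VIP_CONN = "mq-gateway.example.com(1414)"
CHANNEL = "APP.SVRCONN"
QUEUE = "APP.REQUEST.QUEUE"

RETRYABLE = (
    CMQC.MQRC_CONNECTION_BROKEN,    # 2009
    CMQC.MQRC_Q_MGR_NOT_AVAILABLE,  # 2059
)

def connect_via_vip():
    # "*" accepts whichever queue manager name the VIP routes us to
    return pymqi.connect("*", CHANNEL, VIP_CONN)

def put_with_retry(message, attempts=5, delay=5):
    for attempt in range(attempts):
        qmgr = None
        try:
            qmgr = connect_via_vip()
            q = pymqi.Queue(qmgr, QUEUE)
            q.put(message)
            q.close()
            qmgr.disconnect()
            return
        except pymqi.MQMIError as e:
            if qmgr is not None:
                try:
                    qmgr.disconnect()
                except pymqi.MQMIError:
                    pass  # connection already gone
            if e.reason in RETRYABLE and attempt < attempts - 1:
                time.sleep(delay)  # the F5 routes the retry to a live QM
                continue
            raise

put_with_retry(b"hello via the VIP")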
PeterPotkay
PostPosted: Wed Feb 13, 2019 11:36 am

Vitor,
How do you ensure that clients that need to control which MQ queue manager they connect to don't use the F5 address?
Vitor
PostPosted: Wed Feb 13, 2019 11:50 am

PeterPotkay wrote:
Vitor,
How do you ensure that clients that need to control which MQ queue manager they connect to don't use the F5 address?


It's twofold. Firstly, if the client needs to connect to a specific queue manager, they need to fill out so many forms to justify their decision that we deliver them in a 3-volume set bound in leather. Our HA / DR strategy hinges on our ability to connect a client to any (within reason) still-operational queue manager so that they can keep working. So we have (for example) a group of 3 identically configured but differently named queue managers assigned to a given business unit. Their applications should expect to connect to one of them and perform what they need to do; the topology allows the reply-to queue manager to be resolved in all cases.

Secondly, the actual DNS name of any given queue manager is available. There's nothing technological preventing a client quoting a specific DNS name (or IP address if they're completely contact admin) and connecting. We could prevent this, but to quote the relevant manager, "the juice isn't worth the squeezing". Our queue managers and web servers, for the most part, trust anything with an SSL certificate signed by our internal CA or that's on our trusted internal network.

Where the wheels will come off for these people is if their use of the specific DNS name comes up in one of the network audits, or more likely when we do an HA or DR test and their application stops working because it can't connect to the specific queue manager it wants. Then it's time to pull up the comfy chair, grab the popcorn and watch an enraged mob of auditors and operational risk people chasing the application owner with pitchforks & burning torches.

It's plausible someone could fill out all the forms, get their design through committee and into production with some kind of "wake me up when you go, go" manual failover procedure to change where their application connects to in the event of a problem. The policies permit this, but no-one's tried it.

I'm on one of the committees they have to get through.
PeterPotkay
PostPosted: Wed Feb 13, 2019 12:21 pm

Maybe you don't have one aspect of our topology.

In your 3 QM example, do you have a clustered local queue on each of the 3 QMs, with incoming work being MQ cluster load balanced across the 3 QMs? If yes, the consuming application needs to ensure each of its 3 queues has one or more listening threads. If the consuming application uses the F5 to connect to the group, it has lost that control and may find itself load balanced in such a way that one or more of its queues have no consumers and messages are accumulating, while the other instances of the queue have too many consumers racing for the share of the messages being load balanced there.

I'm interested in your setup because we considered F5s, tested them to prove the idea works, and then decided against it. I had no way to keep the clients that shouldn't use the F5 from doing so. Yeah, we could have exploded our QM topology and had F5-fronted QMs exclusively for clients that don't care which QM they connect to, and, for clients that needed to control which QM they connected to (my clustered example above), another set of QMs for which we never set up F5 addresses.

But the in-between use case was the trickiest: the classic semi-synchronous request/reply. The app can connect to any QM to send a request; the network biffs, causing it to lose its connection, and when it reconnects to go after its reply, it gets F5 load balanced off to some other QM in the group while its reply is waiting on the original QM. OK, connection persistence can tell the F5 to send connection attempts from the "same" client to the same MQ server for x minutes, but that is not foolproof.
Vitor
PostPosted: Wed Feb 13, 2019 12:53 pm

We don't use MQ clusters.

What we would expect is that a client using the 3 queue manager example I quoted above would have at least 3 listening threads which would be round-robined across the 3 queue managers; in practice production applications have 10s or 100s of connections, especially when they're in an app server.

We also have monitoring to detect when IPPROCS is 0 on a queue which should have someone consuming it.
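For reference, a minimal sketch of that sort of check using pymqi (the queue manager, channel, host and queue names are invented). It inquires MQIA_OPEN_INPUT_COUNT, which runmqsc reports as IPPROCS.

Code:
import pymqi
from pymqi import CMQC

# Hypothetical connection details for one of the gateway queue managers
QMGR = "GW.QM1"
CHANNEL = "ADMIN.SVRCONN"
CONN_INFO = "qmgr1.example.com(1414)"
QUEUE = "APP.REQUEST.QUEUE"   # a queue that should always have a consumer

qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
try:
    # Open the queue for inquiry only and read its open-input count
    q = pymqi.Queue(qmgr, QUEUE, CMQC.MQOO_INQUIRE)
    ipprocs = q.inquire(CMQC.MQIA_OPEN_INPUT_COUNT)
    q.close()
    if ipprocs == 0:
        print(f"ALERT: {QUEUE} on {QMGR} has no consumers (IPPROCS=0)")
finally:
    qmgr.disconnect()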

PeterPotkay wrote:
But the in-between use case was the trickiest: the classic semi-synchronous request/reply. The app can connect to any QM to send a request; the network biffs, causing it to lose its connection, and when it reconnects to go after its reply, it gets F5 load balanced off to some other QM in the group while its reply is waiting on the original QM. OK, connection persistence can tell the F5 to send connection attempts from the "same" client to the same MQ server for x minutes, but that is not foolproof.


I'm interested in this use case. In my world (which may not be the real world or a world others inhabit), if an application is doing a synchronous request/reply and the network biffs (as if such a thing could happen!), then the application needs to consider the transaction lost and retry it. The reply will obviously be orphaned, and will expire unloved and unremarked.

We also make a clear distinction between synchronous & asynchronous request/reply. We don't permit applications to hang around on a queue for 3 or 4 minutes waiting for a reply to turn up; the server threads are not union and don't get to hang around drinking coffee. They either wait for a reply for a couple of seconds, or they send a request and get on with the next one.
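To illustrate the synchronous pattern and the orphaned reply expiring "unloved": a sketch of a requester (pymqi again, with invented names) that puts an expiry on the request, asks for the remaining expiry to be passed on to the reply, and waits only a couple of seconds before treating the transaction as lost.

Code:
import pymqi
from pymqi import CMQC

QMGR, CHANNEL, CONN = "GW.QM1", "APP.SVRCONN", "mq-gateway.example.com(1414)"
REQUEST_Q, REPLY_Q = "APP.REQUEST.QUEUE", "APP.REPLY.QUEUE"

qmgr = pymqi.connect(QMGR, CHANNEL, CONN)
try:
    # Build the request: expire in 30 seconds (Expiry is in tenths of a
    # second) and ask for the remaining expiry to be carried over to the
    # reply, so an orphaned reply eventually discards itself.
    md = pymqi.MD()
    md.MsgType = CMQC.MQMT_REQUEST
    md.ReplyToQ = REPLY_Q.encode()
    md.Expiry = 300
    md.Report = CMQC.MQRO_PASS_DISCARD_AND_EXPIRY | CMQC.MQRO_DISCARD_MSG

    req_q = pymqi.Queue(qmgr, REQUEST_Q)
    req_q.put(b"price request", md)
    req_q.close()

    # Wait only a couple of seconds for the matching reply, then move on.
    reply_md = pymqi.MD()
    reply_md.CorrelId = md.MsgId   # responder expected to echo MsgId as CorrelId
    gmo = pymqi.GMO(
        Options=CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING,
        WaitInterval=2000,         # milliseconds
    )
    reply_q = pymqi.Queue(qmgr, REPLY_Q)
    try:
        reply = reply_q.get(None, reply_md, gmo)
        print("got reply:", reply)
    except pymqi.MQMIError as e:
        if e.reason == CMQC.MQRC_NO_MSG_AVAILABLE:
            print("no reply in time; treat the transaction as lost and retry")
        else:
            raise
    finally:
        reply_q.close()
finally:
    qmgr.disconnect()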
PeterPotkay
PostPosted: Wed Feb 13, 2019 4:22 pm

Vitor wrote:
We don't use MQ clusters.

You probably have more hair on your head as a result.

Vitor wrote:
What we would expect is that a client using the 3 queue manager example I quoted above would have at least 3 listening threads which would be round-robined across the 3 queue managers; in practice production applications have 10s or 100s of connections, especially when they're in an app server.

10s or 100s, you would be vulnerable to load balancing sooner or later putting you in the position of a queue with no listeners. Maybe only briefly, maybe not for weeks or months even, but it would happen eventually.

Vitor wrote:

We also have monitoring to detect when IPPROCS is 0 on a queue which should have someone consuming it.

Better than nothing, but it's reactive. I'd rather design to avoid it in the first place, which means (to me anyway) that in this use case the clients must have control over which QMs they connect to, and the responsibility to always have one or more threads on each queue.
Vitor
PostPosted: Thu Feb 14, 2019 6:14 am

PeterPotkay wrote:
Vitor wrote:
We don't use MQ clusters.

You probably have more hair on your head as a result.


It's not working out that way.

PeterPotkay wrote:
Vitor wrote:
What we would expect is that a client using the 3 queue manager example I quoted above would have at least 3 listening threads which would be round-robined across the 3 queue managers; in practice production applications have 10s or 100s of connections, especially when they're in an app server.

10s or 100s, you would be vulnerable to load balancing sooner or later putting you in the position of a queue with no listeners. Maybe only briefly, maybe not for weeks or months even, but it would happen eventually.


OK, I'm not seeing your point here, and we're currently in our 4th year of this topology without issues.

So if I have (for example) 5 WAS servers with client connections load balancing over 3 queue managers, how would you get to a position where the F5 has decided not to route any connection requests to one of the queue managers? Granted, you could have a situation where all the connections to one queue manager closed because all the instances in all the WAS servers connected to that queue manager finished processing at the same time (no matter how implausible, it could happen). But the next connection will go to that queue manager because the F5 knows it's much more lightly loaded than the other 2.

I literally don't see how this could happen and would welcome an explanation, because I'm slightly freaking out here. It's going to be awkward if it turns out our topology is flawed...

PeterPotkay wrote:
Vitor wrote:

We also have monitoring to detect when IPPROCS is 0 on a queue which should have someone consuming it.

Better than nothing, but it's reactive. I'd rather design to avoid it in the first place, which means (to me anyway) that in this use case the clients must have control over which QMs they connect to, and the responsibility to always have one or more threads on each queue.

Same here, and no matter the design you still need to react to problems. One good network biff and you're dead in the water, and I'd sooner be notified shortly after the event that no-one seems to be reading the queue any more than be notified hours, days or weeks later (if at all) that there's a network problem.

The network team being, as always, so proactive about fault monitoring and cascading information to other teams.


PeterPotkay
PostPosted: Thu Feb 14, 2019 5:00 pm

You may not have the scenario I am talking about that precludes the use of a VIP. Please hold all freaking out.

Scenario:
- 5 WAS apps with lots of instances each.
- 3 QMs, each with a local queue that one or more threads must always be MQGETting from. These are long-lived connections: connect once, open the queue once, MQGET in a loop "forever".
- An F5 VIP fronts these 3 QMs for the clients to use.
- Other QMs send to these QMs, but the QM-to-QM channels do not use the VIP.

At start-up, all the instances on the 5 WAS servers issue MQCONNX and the F5 does a beautiful job of load balancing the connections. All the queues have roughly an even distribution of connections. As messages arrive they get consumed.

If any individual connection drops, on the next reconnect the F5 will round-robin the connection. You may no longer have an exactly equal distribution, but no worry: you have so many connections you can tolerate some variance.

Then one day the network connection to QM#2 fails, for whatever reason. All the WAS threads listening on QM#2 get MQRC 2009 and drop into their reconnect logic. The F5 VIP does its job and sends all the reconnect attempts to QM#1 and QM#3. They connect once, open once and settle into the MQGET loops. All is well.

QM#2 comes back online. The QM to QM channels reconnect on their own. Messages start being delivered to QM#2. But all the WAS connections are happily connected to QM#1 and QM#3 with no reason to close/disconnect and reconnect to be load balanced again. Even if for some reason they did, they would still have a 66.6% chance of being sent to a QM other than QM#2, where messages continue to accumulate in the queue. Problem.


If your apps are constantly connecting/disconnecting, you don't have an issue. The F5 VIP is only an issue for the long-lived connections of apps responsible for queues that must always have at least one listening instance. These apps can't use a VIP. They must have control to go to the specific QM that hosts the specific instance of the queue they are responsible for.
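A sketch of the consumer pattern being described, pinned to one named queue manager rather than the VIP (pymqi, with invented names): connect once, open the queue once, MQGET in a wait loop "forever". One of these per queue manager is what keeps every instance of the clustered queue drained.

Code:
import pymqi
from pymqi import CMQC

# Deliberately the real queue manager's own DNS name, NOT the F5 VIP:
# this listener is responsible for this specific instance of the queue.
QMGR = "GW.QM2"
CHANNEL = "APP.SVRCONN"
CONN_INFO = "qmgr2.example.com(1414)"
QUEUE = "APP.WORK.QUEUE"

qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
queue = pymqi.Queue(qmgr, QUEUE)

# Wait up to 30 seconds per MQGET, then loop and wait again ("forever").
gmo = pymqi.GMO(
    Options=CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING,
    WaitInterval=30 * 1000,
)

while True:
    md = pymqi.MD()
    try:
        msg = queue.get(None, md, gmo)
    except pymqi.MQMIError as e:
        if e.reason == CMQC.MQRC_NO_MSG_AVAILABLE:
            continue  # nothing arrived in this wait interval
        raise         # e.g. 2009: reconnect logic would go here,
                      # back to the SAME queue manager
    print("processing", len(msg), "bytes from", QMGR)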
Vitor
PostPosted: Fri Feb 15, 2019 5:01 am

PeterPotkay wrote:
You may not have the scenario I am talking about that precludes the use of a VIP. Please hold all freaking out.


But the view from up here on the ceiling is so interesting.

PeterPotkay wrote:
3 QMs, each with a local queue that one or more threads must always be MQGETting from. These are long-lived connections: connect once, open the queue once, MQGET in a loop "forever".


Yeah, we don't allow "forever" connections. Like I said earlier, our process threads are non-union. I'll climb down now.

PeterPotkay wrote:

If your apps are constantly connecting/disconnecting, you don't have an issue. The F5 VIP is only an issue for the long-lived connections of apps responsible for queues that must always have at least one listening instance. These apps can't use a VIP. They must have control to go to the specific QM that hosts the specific instance of the queue they are responsible for.


You make a valid design point that you need to bear in mind if you plan to use an F5. And as I said up front, we expose the DNS of the actual queue managers if an application is in the situation you describe.
PeterPotkay
PostPosted: Fri Feb 15, 2019 6:37 am

But if you ever come to have some of these apps that use long-lived connections, you will need to figure out a way to ensure they don't use the VIP.
Easy enough on Day 1 of the new implementation, with you standing at the front of the room banging your shoe on the podium. But months and years down the road, when they catch wind of magical VIPs that front MQ? Hey, let's use those!

The only option I could come up with was to host queues for apps like that on queue managers that never have an F5 VIP, while the other queues, for apps where an F5 VIP is appropriate, are hosted on QMs that do have one set up.

Oh, one other gotcha with F5 VIPs in front of MQ: the queue manager then sees the source IP address as the F5's, not the actual MQ client's.
Vitor
PostPosted: Fri Feb 15, 2019 7:28 am

PeterPotkay wrote:
But if you ever come to have some of these apps that use long-lived connections, you will need to figure out a way to ensure they don't use the VIP.


I wasn't joking about the number of forms. That stops most people. Attaching to something other than the F5 is a design exception that needs to be approved. Having a connection sitting on a queue waiting is a design exception that needs to be approved. Not using the standard DR is a design exception that needs to be approved. I could go on.

Even if they decide to avoid all the paperwork (the preferred method is to flat-out lie about how their application works), their sins find them out on the first DR test, assuming the network or server monitoring doesn't grass them up first.

I agree this is a problem for less fascist sites and that's why this discussion is a valuable reference for future readers.

PeterPotkay wrote:
Easy enough on Day 1 of the new implementation with you standing at the front of the room, banging your shoe on the podium. But months and years down the road when they catch wind of magical VIPs that front MQ? Hey, let's use those!


And then the auditors take them out into the parking lot.

Like I said, we didn't try to secure the actual queue manager addresses because we didn't think it was worth the effort. If the peasants were ever to rise in revolt and start bypassing the F5s, then it would be worth the effort and we'd block it.