MQSeries.net Forum Index » Clustering » Clustering with 3rd parties
Clustering with 3rd parties
seanb
PostPosted: Wed Mar 31, 2004 2:01 am    Post subject: Clustering with 3rd parties

Apprentice

Joined: 02 Aug 2003
Posts: 39

(Sorry in advance for the long post)

I have been following the thread 'Clustering through a firewall' with interest, in particular where the issue of clustering with 3rd parties was brought up, along with its associated security concerns and vulnerabilities. Comments such as 'clustering should be for internal use only' and 'create all sorts of MQ changes / additions / deletions to any and all (yes, ALL) of the QMs in all of your clusters' (is this in reference to overlapping clusters only?) and '12 different attacks that have nothing to do with clustering', etc.

Currently, we communicate with a 'trusted' 3rd party (how far can you really trust a 3rd party over which you have no control?) using distributed queuing. We were looking to change this to clustering, primarily for failover reasons. I have always had some concerns about clustering with a 3rd party, but after following the above-mentioned thread, I am not so keen to expose our environment to clustering.

Some relevant information...

a) Our current environment is ... Our host qmgr (HQM) connects to 2 gateway qmgrs (GWQMs) (on different servers) using distributed queuing. These GWQMs connect to 2 GWQMs in our 'trusted' 3rd party, also using distributed queuing. This gives us load balancing and some failover ability (although it requires manual intervention). The GWQMs are in a DMZ.

b) Our proposed environment was going to be ... Our HQM clusters with 2 GWQMs. These 2 GWQMs cluster with the 2 GWQMs in our 'trusted' 3rd party. This gives us load balancing and failover ability. It introduces overlapping clusters.

c) After reading the earlier thread, I am no longer keen to cluster with the 3rd party; instead I will only cluster our HQM with our GWQMs and stay with distributed queuing to the 3rd party. But before making the final decision, I am hoping to obtain answers to the following.

The main difference (as I see it) between b) and c) is that should one of our GWQMs fail, in b) MQ will stop sending messages from the 3rd party to our failed QM, whereas in c) we will need to manually stop the 3rd party from trying to send messages to the failed QM (i.e., the messages will build up on the xmitq). As we haven't had such a failure in over 12 months (which doesn't mean we won't in the future), we can probably live with this risk.

My dilemma... (I am relatively new to clustering and haven't had much chance to 'play' and see how someone could break in)

I have discussed my concerns with our 3rd party and mentioned we will not cluster, but am getting comments like 'our design requires clustering and you need to prove what the security/vulnerability issues are'. I am thinking our 3rd party needs to prove there are no security/vulnerability issues before we will accept clustering, especially wrt my comments in the previous paragraph (we do have load balancing, we do have a degree of failover). Plus, we can just say 'no, we will not cluster, end of story'.

1) I am particularly concerned by comments such as 'create all sorts of MQ changes / additions / deletions to any and all (yes, ALL) of the QMs in all of your clusters'. Based on this comment, I am under the impression that the 3rd party, if included in a cluster with our GWQMs, will be able to make changes to our HQM. Am I understanding things correctly here? Is this only possible when I am using overlapping clusters? Will an 'edge' queue manager overcome this? Extending this out, it means that if our 'trusted' 3rd party clusters with other 3rd parties (who are not 'trusted' by us), a non-trusted 4th party could make such changes to our HQM. (Our GWQMs are in a DMZ ... does this make a difference here?) Going further, will our 'trusted' 3rd party be able to create alias queues pointing to our local queues and start putting messages, bypassing any security we may have?

2) If I introduce an 'edge' queue manager, will this address my concerns in 1), or will someone still be able to hack into my clusters further down the line? I.e., if I cluster my HQM to the 2 GWQMs, then have the GWQMs use distributed queuing to 'edge' queue managers which are clustered to the 3rd party GWQMs. Or am I still unnecessarily exposed by the fact that our 'trusted' 3rd party is also clustering with another 3rd party?

3) In relation to '12 different attacks ... I can crash your QM (that talks to us directly) so fast its not funny ...' and clustering vulnerabilities, etc. What are they? I know you probably cannot post such information to a newsgroup for obvious reasons (I know I wouldn't), but are you (PeterPotkay or oz1ccg or anyone else) able to shed some light on this (either in posts to this newsgroup or to my private email)? I can think of some attacks but haven't really given it too much thought, especially with clustering. I understand if you choose not to, but it would sure help me to protect my environments and put some weight behind my arguments against clustering with the 3rd party.

Thanks.
Sean.
Michael Dag
PostPosted: Wed Mar 31, 2004 3:14 am    Post subject: Re: Clustering with 3rd parties

Jedi Knight

Joined: 13 Jun 2002
Posts: 2602
Location: The Netherlands (Amsterdam)

seanb wrote:
We were looking to change this to clustering, primarily for failover reasons.


This is the magic sentence, Clustering is NOT for failover!
(and I would not cluster with an external party either... )
_________________
Michael
MQSystems Facebook page
seanb
PostPosted: Wed Mar 31, 2004 5:01 am

Apprentice

Joined: 02 Aug 2003
Posts: 39

Yes, that is true. However, for what we want (queue manager stops, server stops for whatever reason), my understanding is that MQ will stop, or at least try to stop, sending messages to the failed queue manager. I also understand messages may be stuck on the failed queue manager until it starts up again, and that for true failover we need to look at other clustering technologies. We accept this.

On the clustering with 3rd parties, do you have any reasons I can use in my defense (of not clustering)? As I mentioned, we are getting pressure from the 3rd party to cluster, and at this stage all I can say is that it is (strongly) recommended not to cluster with external parties.
jefflowrey
PostPosted: Wed Mar 31, 2004 5:17 am

Grand Poobah

Joined: 16 Oct 2002
Posts: 19981

You might review the documentation that comes with the MQ Internet Passthrough (IPT) Support pack to see if it discusses security issues with multiple parties. It probably won't address clustering per se, but it may have something that you can use one way or another.
_________________
I am *not* the model of the modern major general.
oz1ccg
PostPosted: Sun Apr 04, 2004 3:16 am

Yatiri

Joined: 10 Feb 2002
Posts: 628
Location: Denmark

Hi there,

First of all, if you open your MQ cluster to "outsiders", you'll see some side effects with NAT/FIREWALL/DMZ etc.
I guess that HMQ will have one connection-name/IP-addr. on your inside network and another on the public network. This gives you a problem, because the CLUSRCVR published by the FULL-REPOS gives all partners its connection-name.

Is HMQ a full-repos QMGR ??

Your business partner will have to expose some of his MQ network to you too, and will have to deal with NAT/FW/DMZ etc. I started to describe some effects on cluster security here: http://www.mrmq.dk/Cluster_security1.htm It's not covering all aspects, because it's a big challenge to do; there are a lot of sources on this topic too, both here on MQSeries.net and on other web sites.

How would I do something like what you're requesting?
I would use an HA solution like HACMP or Microsoft Cluster, where it's hardware failover, so your QMGR, channels, connection names etc. are standard. This gives you the possibility to use MCAUSER, security exits etc. to control the access offered to your client. And you can prevent access to e.g. SYSTEM.ADMIN.COMMAND.QUEUE and other vital queues, and only give positive access to the queues that are needed.
There are no problems here with NAT/FW/DMZ... Just business as usual.

You can add workload balancing using MQ-clustering when you are inside your own company. It's not so difficult. I did a small description on the topic some time ago:
http://www.mrmq.dk/cluster_qmb_3.htm QMX1 is something like HMQ.

I was asked for a reason for not using clustering for "public" communication. One of the reasons is that the SYSTEM.ADMIN.COMMAND.QUEUE has to be open, and the command server has to be running, to run the cluster..... Any queue in the cluster is open to all cluster participants.

I hope it can help you a bit.

Just my $0.02
_________________
Regards, Jørgen
Home of BlockIP2, the last free MQ Security exit ver. 3.00
Cert. on WMQ, WBIMB, SWIFT.
PeterPotkay
PostPosted: Sun Apr 04, 2004 6:05 pm

Poobah

Joined: 15 May 2001
Posts: 7717

If I were designing it from scratch, I would also make a dedicated Edge Queue Manager that is not in any cluster be the means of communication between my clusters and their clusters. Putting this QM on a hardware-clustered machine means it will always be available. And since this queue manager has no other function, you can completely lock it down from all sorts of perspectives so that only messages destined for valid queues are allowed to pass through. Your firewall rules will prevent them from directly connecting to any QMs beyond this edge QM, even if they somehow figured out those servers' names and channels.

Your dedicated Edge QM will have regular channels to and from one of their QMs, which now becomes a single point of failure on their side. The easy answer is to have them OS-cluster this QM as well, but maybe they can't, or won't.


So, let's see how we can do this.


Your company has a cluster called MyCluster.

You create 2 new Queue Managers in this cluster, each on separate servers at separate sites. The first one is EdgeGW1, the second is EdgeGW2. They have no purpose other than as gateways to an outside company. These 2 servers are placed in the DMZ. Your firewall rules will only allow the outside company to get to these 2 QMs. So we can assume direct contact to all your existing QMs in MyCluster is not possible for them.

On EdgeGW1 and 2, all the channels are made inaccessible to the outside party. You do this by taking TO.EdgeGW1.MyCluster, the CLUSRCVR, and adding SSL to it. Now you know only valid QMs (your internal ones) can use this channel to talk to these QMs. There should be no other channels on these QMs. All the SYSTEM.DEF.* channels have their MCAUSER set to "STAY_OUT". Since you didn't give this ID STAY_OUT any rights, all those other channels are now useless. If you have any other valid internal channels on these QMs, use SSL again. But there shouldn't be any, since these are dedicated QMs with a single purpose, right?
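As a sketch of that lockdown (the channel name is the hypothetical one from this example; the cipher spec and the powerless STAY_OUT ID are illustrative, not prescriptive), the MQSC might look something like:

```shell
# Hypothetical lockdown sketch for the edge QM described above.
# SSLCIPH value is just an example cipher spec; STAY_OUT is a dummy
# ID that has been granted no authorities at all.
runmqsc EdgeGW1 <<'EOF'
* Require SSL on the internal cluster-receiver so only our own QMs connect
ALTER CHANNEL('TO.EdgeGW1.MyCluster') CHLTYPE(CLUSRCVR) +
      SSLCIPH(TRIPLE_DES_SHA_US) SSLCAUTH(REQUIRED)

* Neutralize the default channels by running them under a powerless ID
ALTER CHANNEL(SYSTEM.DEF.RECEIVER)  CHLTYPE(RCVR)     MCAUSER('STAY_OUT')
ALTER CHANNEL(SYSTEM.DEF.REQUESTER) CHLTYPE(RQSTR)    MCAUSER('STAY_OUT')
ALTER CHANNEL(SYSTEM.DEF.SVRCONN)   CHLTYPE(SVRCONN)  MCAUSER('STAY_OUT')
ALTER CHANNEL(SYSTEM.DEF.CLUSRCVR)  CHLTYPE(CLUSRCVR) MCAUSER('STAY_OUT')
EOF
```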

At this point, we are going to make the assumption that these 2 QMs are completely secured and locked down from all outside MQ access. Don't proceed unless you are confident in this assessment.


The other company has a cluster called TheirCluster. We are going to add EdgeGW1 and EdgeGW2 into TheirCluster, to fulfill the goal of having 2 QMs in both clusters.

The following applies equally to EdgeGW1 and 2. I'll just say EdgeGW*.

1. Create a new CLUSRCVR called TO.EdgeGW*.TheirCluster (I know, the name is too long; make it work within 20 chars somehow).

2. Change the MCAUSER of TO.EdgeGW*.TheirCluster to ABC123, or whatever. Add SSL to this channel if you want to ensure only they can use it.

3. Create Remote Queue Defs on EdgeGW* that point to your internal queues. These remote queue def names you will share with the other company. They can only send messages into your cluster through these remote queue defs.

4. Run setmqaut commands on the ID ABC123 to deny all access, except the ability to connect / put messages to these Remote Queue Defs. You might want to give the ID ABC123 access to put to the DLQ, so that if they try to send something they shouldn't, it will go to the DLQ with a 2035. (You must take precautions so they can't flood that DLQ; hint: the MessageRetry attributes of the CLUSRCVR channel.)

5. Set up any necessary QMAliases or Remote Queue Defs for your messages to get to them (as I described in Clustering Through a Firewall). Since your internal channels are not running under any ID, they will be running as mqm, and will be able to put to these queues that go to them, so no need for setmqaut commands for these queues for your messages. (Unfortunately, you can't make QMAliases for all your internal QMs for them to use. The reason is you don't run setmqaut commands against QMAliases; you run them against the underlying XMITQ. And you cannot give ABC123 access to the SYSTEM.CLUSTER.TRANSMIT.QUEUE on EdgeGW*, as that leaves the door to your internal cluster world wide open.)

6. Create CLUSSNDRs from EdgeGW* to their Full Repositories in TheirCluster to complete the loop.
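A hedged sketch of steps 3 and 4 above (queue and QM names are invented examples; ABC123 is the MCAUSER from step 2):

```shell
# Hypothetical sketch of steps 3-4. All object names are examples only.
# With no XMITQ specified, the remote queue def resolves RQMNAME('HQM')
# via the internal cluster, which HQM and EdgeGW1 are both in.
runmqsc EdgeGW1 <<'EOF'
DEFINE QREMOTE('PARTNER.ORDERS.IN') +
       RNAME('ORDERS.IN') RQMNAME('HQM')
EOF

# Strip all authorities from ABC123, then grant only what it needs:
# connect to the QM, put to the advertised remote queue def, and
# (optionally) put to the DLQ so bad messages land there with a 2035.
setmqaut -m EdgeGW1 -t qmgr -p ABC123 -all
setmqaut -m EdgeGW1 -t qmgr -p ABC123 +connect
setmqaut -m EdgeGW1 -t queue -n PARTNER.ORDERS.IN -p ABC123 +put
setmqaut -m EdgeGW1 -t queue -n SYSTEM.DEAD.LETTER.QUEUE -p ABC123 +put
```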


What goes wrong at this point? Messages start ending up in the DLQ of EdgeGW*. Why? Because these are messages destined for SYSTEM.CLUSTER.COMMAND.QUEUE on EdgeGW*. These are messages from TheirCluster, from their Full Repositories. Remember, you never gave ABC123 access to SYSTEM.CLUSTER.COMMAND.QUEUE.

And now it is decision time. Do we give this ID access or not to SYSTEM.CLUSTER.COMMAND.QUEUE??? If we want to join TheirCluster, we have to. But what are the security risks?

Well, by giving access to SYSTEM.CLUSTER.COMMAND.QUEUE, we can join their cluster, and we will find out about any queues in TheirCluster. No harm there.

They can't create / delete / change any queues or namelists or channels on EdgeGW*, because ABC123 has no access to SYSTEM.ADMIN.COMMAND.QUEUE.

They can't send a message to the SYSTEM.ADMIN.COMMAND.QUEUE of any QM in MyCluster, because ABC123 can only put messages to the remote queue defs we set up for them. They will go to the DLQ if they try to go around these.

They can't go directly to my internal QMs and not use EdgeGW*, because the firewall rules prevent that.

So the only possible exposure here is that they can put something to the cluster command queue. I don't think they can join MyCluster, since EdgeGW* is not a Full Repository. I don't think they can send a message to SYSTEM.CLUSTER.COMMAND.QUEUE on EdgeGW* that can change anything on these 2 QMs as they pertain to MyCluster. I suppose they could SUSPEND EdgeGW* from TheirCluster, or eject EdgeGW* from TheirCluster with the RESET command, but that is only harming themselves.

Carefully set up as I laid out above, I just don't *think* there is anything they can do to any objects that are part of MyCluster.

A lot of clustering is still magic under the covers. If you feel safe that giving the other company put access to SYSTEM.CLUSTER.COMMAND.QUEUE is not a problem, AND you own the EdgeGW*s and can lock them down as I said, you can probably safely overlap your 2 clusters with no more security exposure than if you had 2 regular QMs from separate companies talking to each other.

An Edge QM has a lot of things that need to be done to protect you and it from the other side. I wrote a 6-page internal doc on the subject alone. Whether that Edge QM is in a cluster, or a regular QM, or an overlapping cluster, really doesn't change things.

It all boils down to: Do you feel lucky giving them access to SYSTEM.CLUSTER.COMMAND.QUEUE?

Because I don't *know* it is safe, I prefer not to overlap clusters with other companies. If they somehow figure out how to hack in through this "hole", and start doing something with MQ messages in your cluster that deal with people's fortunes or lives, do you want to answer to your boss? It may be OK, I just don't know for sure.
_________________
Peter Potkay
Keep Calm and MQ On
seanb
PostPosted: Mon Apr 05, 2004 4:49 am

Apprentice

Joined: 02 Aug 2003
Posts: 39

Hi All,

Thanks heaps for your most valuable input to my investigations. It is most appreciated.

Seems it all comes down to do we / don't we give access to the SYSTEM.CLUSTER.COMMAND.QUEUE.


Jeff:
I had only downloaded the IPT guide before and never had a chance to read it. On first glance, it seems to offer some interesting possibilities, both dealing internally and externally. I hope to fit it somewhere in my busy schedule to read it in the next few days.


Jorgen:
I was planning on using hostnames to solve the NAT'ing issue (I haven't tried this yet but I believe it should work). After reading your Cluster_Security1 page I can see where using hostnames in large clusters would become an impractical error-prone administrative nightmare.

HMQ is a full-repos (mainframe), so too is one of the GWs (Solaris). This is the InternalCluster which I will use for load balancing from the mainframe to the 3rd party. We were planning on creating a second cluster (ExternalCluster) between my two GWs and their two Windows GWs, with one full-repos being on either side.

Quote:
It's not covering all aspects, because it's a big challenge to do

Yes, the whole issue of MQ security across the enterprise and beyond is a huge challenge (MQ people with little or no security expertise (or desire), and security experts with little or no MQ knowledge). I have just recently finished writing (in conjunction with our security dept) an MQ security policy document and various procedures documents (OS/390, Unix, Windows), and with what I have learnt in the last 9 months, it scares me just how vulnerable we really are.

I have also downloaded BlockIP and LogIP and will look at using them. They definitely appear useful for an overall security strategy and address some of the questions my security dept has asked, especially wrt logging potential unauthorized access attempts.

Your comment stating the command server must be running when using clustering interests me, as the cluster I have *seems* to be running OK without the command server running. I think I'd better re-check! I'm not keen on running the command server on the GWs.

I may look at hardware or OS clustering at a later stage but just not now (I'm not an expert in this area). We are 'under the pump' to get something in quickly (business as usual!).


Peter:
I like the idea of dedicated Edge QMs with regular channels. I will investigate this option and see whether we are in a position to do this.

(fyi). Our current settings are (I don't think I'm giving anything away here) . . .
- We have MCAUSERs on all channels
- I have shut down all SYSTEM.DEF.* channels. It's amazing how few people seem to do this.
- I don't run the command server on the GWs. (We currently use regular channels).
- I don't permit use of SVRCONN channels on the GWs and hence don't permit the use of MQ explorer and all other remote admin tools. Makes admin a little more difficult but nothing that isn't solved with shell scripting.
- I don't permit the 3rd party to access any transmission queues on the GWs that are used to send messages to internal QMs. Access to internal QMs is only permitted via remote queue definitions.
- The 3rd party only gets access to the minimum number of queues required.
- I don't use generic (setmqaut) profiles when granting access to any queues. This can lead to accidental exposures.
- The GWs are in a DMZ
- For what it's worth, I don't use port 1414

(fyi). I was planning . . .
- To implement SSL.
- I wasn't planning on using any security exits, but will now look at using BlockIP, LogIP and see what IPT can do for us. Thanks Jeff, Jorgen.
- We *may* yet introduce clustering with the 3rd party, but I need to think about your last 2 paragraphs. I don't want to be that guy!
- If we introduce clustering with them, separate cluster receiver channels will be created for each cluster. eg., InternalCluster.EdgeGW1 and ExternalCluster.EdgeGW1. We fit within the 20 char limit. I've always been curious why channel names are limited to so few characters.
- We only expose one clustered QMA to the 3rd party and not many cluster queues. We then use local alias queues on the GWs to pass the reply messages to the locally defined remote queues - all these queues sit outside the cluster. I will use setmqaut to give access to these queues only.
- I will also only use load balance clustering going from our mainframe to our GWs but not from our GWs to our mainframe. I plan on using regular queuing back to our mainframe queue manager. When the time is right, I'll opt for QSGs rather than clusters as they offer everything a cluster does plus more (eg, peer recovery, no need to cluster our secure mainframe queue manager to unix/Windows queue managers, high performance, availability, etc).

Thanks Peter for your extensive answer. You have validated that I am mostly on the right track. This is significant, as I haven't had anyone able to check my configuration.

I do give access to the DLQ. I never considered the DLQ flooding. I will look at MessageRetry.

I'm not a big fan of having channels (either internal or external) without a MCAUSER so I don't have any that will run under the mqm, channel initiator or any other 'super' user id. I prefer to specify the exact access all incoming channels have.

I don't give any user access to the SYSTEM.CLUSTER.TRANSMIT.QUEUE. I hope this doesn't cause any problems.

Based on your input, I am now happy we have done as much as we can to minimize the likelihood of unauthorised access. The only guaranteed way is to switch the box off altogether.

Hmmm, one of my GWs is a full-repos. Based on your comment, I will need to look at this.


Based on all the input, it seems the big question is do I give them access to the SYSTEM.CLUSTER.COMMAND.QUEUE. Something I'll need to ponder for a while . . . but I'm thinking 'better safe than sorry'. Seems to be the common thinking here!

Once again, thanks all for your input.

Cheers,
Sean.
PeterPotkay
PostPosted: Mon Apr 05, 2004 2:26 pm

Poobah

Joined: 15 May 2001
Posts: 7717

We use MQIPT here. Basically, it allows you to open a secure "tunnel" between 2 entities.
It is secure because it runs with SSL.
It is a tunnel because it allows any number of channels or channel types to communicate between the 2 sides.

One other nice feature is that you can have QM1 on Server1 talk to MQIPT on Server2, which talks to MQIPT on Server3, which talks to QM2 on Server4. The two QMs never know what the other's port number or IP address actually is. And by changing the MQIPT config file, you can reroute messages with a refresh of MQIPT and no changes on the sending side (other than a stop and start to reset the sequence numbers).
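As a rough sketch of what such a forwarding route might look like, assuming the two-hop topology just described (hostnames, ports, and thread count are all invented; check the MQIPT doc for the full property list):

```shell
# Hypothetical mqipt.conf for the MQIPT instance on Server2.
# It listens on port 1500 and forwards everything to the MQIPT
# on Server3, which in turn forwards to QM2's real listener.
cat > mqipt.conf <<'EOF'
[global]
MaxConnectionThreads=50

[route]
ListenerPort=1500
Destination=server3.example.com
DestinationPort=1500
EOF
# To reroute later, edit Destination above and refresh MQIPT
# (no change needed on the sending queue manager).
```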

The command server does not need to be running to have a cluster work. The command server only processes messages from the SYSTEM.ADMIN.COMMAND.QUEUE, which have nothing to do with application messages flying through a cluster once all the queue and channel defs have been defined.

It is the Repository Manager, which you have no control over, that processes the cluster-specific admin messages sent to and read from the SYSTEM.CLUSTER.COMMAND.QUEUE. Once the cluster is established, as long as there are no changes to the cluster, even this queue is not used. A stable, established cluster with only application messages flying through will only be sending stuff through the SYSTEM.CLUSTER.TRANSMIT.QUEUE.
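One way to see what the Repository Manager currently knows is the standard MQSC display commands, run here against the hypothetical edge QM from earlier in the thread:

```shell
# Inspect the repository manager's current view of the cluster:
# which queue managers it knows about, and which cluster queues
# they have advertised.
runmqsc EdgeGW1 <<'EOF'
DISPLAY CLUSQMGR(*) ALL
DISPLAY QCLUSTER(*) ALL
EOF
```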

Maybe after the initial communications with the other companies' Full Repositories, you can then shut off access to SYSTEM.CLUSTER.COMMAND.QUEUE??? Obviously you wouldn't know about any new queues in their cluster, or any other Cluster specific changes sent out by their Full Repositories, but maybe that's OK. I have never tried this; I just came up with it, actually, but it might be worth testing.

Obviously, a Full Repository will always need access to its SYSTEM.CLUSTER.COMMAND.QUEUE, so that is the reason I don't want my "exposed" QMs to be Full Repositories.

You can deny access to the DLQ, but realize that if they send a bad message, the channel blocks and nothing valid behind it will make it through. You also lose evidence of an attempted security breach. (Check Out Authority Events for the Queue Manager, and turn these on for those Edge QMs regardless of what you do with the DLQ.)

You may do the right thing and set the message retry values on the channel to throttle back a flood of bad messages filling your DLQ. If you did that to stop me, I would then send 100 MB dummy messages over as fast as you would let me, filling your server's disk and crashing the QM as a few dozen 100 MB messages landed in the DLQ. Check out the Max Message Size and set it as small as possible for your apps!
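A minimal sketch of capping message sizes at the edge, assuming the channel name from earlier and an invented 4 KB application limit:

```shell
# Hypothetical sketch: cap message sizes on the exposed channel and
# on the DLQ so a flood of huge messages can't fill the disk.
# 4096 bytes is an invented app limit; size it for your real payloads.
runmqsc EdgeGW1 <<'EOF'
ALTER CHANNEL('TO.EdgeGW1.TheirCluster') CHLTYPE(CLUSRCVR) MAXMSGL(4096)
ALTER QLOCAL(SYSTEM.DEAD.LETTER.QUEUE) MAXMSGL(4096)
EOF
```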

The next fun I could have is to start up 1000 clients and have them all repeatedly try to connect to your listener, over and over, until it crashes. You should have a dedicated listener running for each company that this EdgeQM talks to, and have each one running on a separate port. That way if CompanyOne crashes their listener, the other listeners and companies continue to run.
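On the queue managers of this era, one listener per port can be sketched simply with runmqlsr (port numbers and company names are invented examples):

```shell
# Hypothetical per-company listeners on separate ports, so one partner
# crashing its listener doesn't take the other partners down.
runmqlsr -m EdgeGW1 -t tcp -p 1501 &   # CompanyOne's dedicated port
runmqlsr -m EdgeGW1 -t tcp -p 1502 &   # CompanyTwo's dedicated port
```

Firewall rules would then restrict each partner to its own port, as described elsewhere in the thread.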

Don't set Put Authority to Context on the RCVR channels. It makes it too easy to get into your system with mqm authority.

It goes on and on. So, do you know all the ways MQ can be used to make a security attack? Maybe you do, maybe you don't. Kinda makes sense to make a dedicated EdgeQM, doesn't it? If the bad guys are gonna crash a QM, let it be one that does nothing but talk to the other company.
_________________
Peter Potkay
Keep Calm and MQ On
seanb
PostPosted: Wed Apr 07, 2004 2:03 am

Apprentice

Joined: 02 Aug 2003
Posts: 39

Sounds like IPT is a very useful support pack. I especially like the hiding of the IP and port. I assume the overhead of running IPT is minimal!?!

Quote:
The command server does not need to be running to have a cluster work

Oh good, that's what I thought. I will keep it stopped.

Quote:
Once the cluster is established, as long as there are no changes to the cluster, even this queue is not used.

With the design we have, we only have *one* QMA visible to the cluster. All queues on the GW are locally defined and will not be visible in the cluster. Should we need to add a new queue, this too will be hidden from the cluster. We use the QMA to allow MQ to send the messages from the 3rd party qmgr to our GW qmgrs (and vice versa), at which time MQ will discover the locally defined queue and use that. I know this is a little different from the usual clustering (where the queues are visible) and, yes, it does mean there are a few more definitions than there would be otherwise, but it allows us to exploit the features of clustering while minimizing the number of definitions visible in each other's cluster. It also helps us overcome any naming conflicts between organizations (different organizations have different naming standards and could potentially have the same queue name as ours).

Quote:
Maybe after the initial communications with the other companies' Full Repositories, you can then shut off access to SYSTEM.CLUSTER.COMMAND.QUEUE???

I like this idea. I have spoken to our 3rd party about this and we will give it a try. I will let you know of the outcome. My only concern is that if there are any problems, they may not show up for quite a while, and if Murphy has anything to say, it will happen when we're in production on a high-volume day.

Quote:
Obviously you wouldn't know about any new queues in their cluster

With our design this should be ok.

Quote:
or any other Cluster specific changes

Hmmm. Does this mean that if I suspend a queue manager from a cluster or shut it down, MQ will not be able to publish this information to the full-repos, and hence MQ will continue to try to send messages to a queue manager that is no longer there? I will need to do some testing.

Quote:
You can deny access to the DLQ

I think it best to provide access. That way 2035 messages will end up there, and I don't really want to block the channels if I can help it.

Quote:
Check Out Authority Events for the Queue Manager

I think I'd better check this one out. I don't think I have it on.

Quote:
Check out the Max Message Size and set it as small as possible for your apps!

I do that on all queues except the DLQ, xmitqs and channels. Hmmm, something to think about.

Quote:
You should have a dedicated listener running for each company that this EdgeQM talks to, and have each one running on a separate port

Yes, we do this. I also sometimes have separate queue managers for separate companies/interfaces (where appropriate) with firewall rules preventing one interface accessing another interface port. This further isolates the different interfaces from each other. I may look at IPT here also, especially when using clients (internally).

Quote:
Don't set Put Authority to Context on the RCVR channels.

I never use context; I have never really seen the point (even on the mainframe, where you can validate against both the context user id and the MCAUSER id, although in this case it adds just one more thing a potential intruder needs to find. As with all security, it is impossible to be totally secure; we just need to make it as hard as possible for an intruder to get in). As you said, I can specify anything I want, including mqm. Plus, doing so means I need to define potentially many user ids to my O/S. Although, if I use expiry report messages, I need to do this anyway (one thing that really irks me with MQ).

Quote:
So, do you know all the ways MQ can be used to make a security attack? Maybe you do, maybe you don't.

I'm of the belief only a fool thinks he does.

Thanks for your further input. Gives me a few more things I need to address.

Cheers,
Sean.
PeterPotkay
PostPosted: Wed Apr 07, 2004 5:13 am

Poobah

Joined: 15 May 2001
Posts: 7717

Quote:
I assume the overhead of running IPT is minimal!?!
My channels do seem to take a second or two to start running, but I don't know if this is a function of MQIPT overhead, or the fact that the other QM is halfway across the country being talked to over the regular internet, or the SSL negotiation taking place. Probably all 3.

There is a section on Performance Tuning in the PDF that comes with the MQIPT support pack, but there are no numbers that say "Oh, MQIPT will add x% to your overall transaction time."
_________________
Peter Potkay
Keep Calm and MQ On
oz1ccg
PostPosted: Wed Apr 07, 2004 6:23 am

Yatiri

Joined: 10 Feb 2002
Posts: 628
Location: Denmark

I just snipped a couple of issues from the cluster manual which point out something on this topic.

Quote:
Preventing queue managers joining a cluster
If you want to ensure that only certain authorized queue managers attempt to join a cluster, you must either use a security exit program on the cluster-receiver channel, or write an exit program to prevent unauthorized queue managers from writing to SYSTEM.CLUSTER.COMMAND.QUEUE. Do not restrict access to SYSTEM.CLUSTER.COMMAND.QUEUE such that no queue manager can write to it, or you would prevent any queue manager from joining the cluster.

Because repositories retain information for only 90 days, after that time a queue manager that was forcibly removed can reconnect to a cluster. It does this automatically (unless it has been deleted). If you want to prevent a queue manager from rejoining a cluster, you need to take appropriate security measures.


So using clusters without exits is not a good idea; you can use SSL to help, but it will not make you bulletproof.
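Attaching a security exit to the cluster-receiver is only a channel attribute change. A hedged MQSC sketch, assuming an invented channel name, exit path/entry point, and config file (the exact SCYEXIT format varies by platform, and the exit shown is BlockIP2-style, not necessarily BlockIP2's actual syntax):

```
* Invoke a security exit on every inbound connection to the
* cluster-receiver, so unknown partners are rejected before
* they can write to SYSTEM.CLUSTER.COMMAND.QUEUE.
ALTER CHANNEL('TO.EDGEQM') CHLTYPE(CLUSRCVR) +
      SCYEXIT('/var/mqm/exits/BlockIP2(BlockExit)') +
      SCYDATA('FN=/var/mqm/exits/blockip.cfg')
```

Keeping the allow-list in the SCYDATA config file means you can change partners without recompiling or redefining the channel.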

About having more than one listener: it would not help you, because the CLUSRCVR of the receiving qmgr is used for creating the partner's CLUSSDR. You could instead create a cluster for each connecting partner, and only let that partner connect to its own cluster.

Just my $0.02
_________________
Regards, Jørgen
Home of BlockIP2, the last free MQ Security exit ver. 3.00
Cert. on WMQ, WBIMB, SWIFT.
seanb
PostPosted: Wed Apr 07, 2004 8:23 am    Post subject: Reply with quote

Apprentice

Joined: 02 Aug 2003
Posts: 39

With the 90-day retention (I forgot about that), it doesn't look like I can remove the partner's access to SYSTEM.CLUSTER.COMMAND.QUEUE.

We were planning on DMZ plus SSL, but I will now strongly consider exits also.

Cheers,
Sean.
PeterPotkay
PostPosted: Wed Apr 07, 2004 1:05 pm    Post subject: Reply with quote

Poobah

Joined: 15 May 2001
Posts: 7717

SSL on the CLUSRCVR will force an authentication of who the other side is. This is a relatively strong way of knowing who you are talking to. But I suppose the other guy could take the SSL certificates from Server1/QM1, which is a valid QM you allowed, and use those certs on another QM on the same server or a different server.

I guess that's where a security exit is a bit stronger than plain old SSL. The security exit can check things like the source IP address, queue manager name, port number, etc. That is a bit harder to spoof, but certainly not impossible. Keeping that in mind, maybe SSL will be enough, and simpler than writing exits. Then again, it may be worth using the exit, perhaps in conjunction with SSL.
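Pinning the CLUSRCVR to one partner's certificate at least forces the borrowed-cert attack to use that specific certificate. A hedged MQSC sketch, assuming an invented channel name and distinguished name (available CipherSpecs vary by MQ version and platform):

```
* Require the partner to present a certificate, and only accept
* one whose distinguished name matches the SSLPEER filter.
ALTER CHANNEL('TO.EDGEQM') CHLTYPE(CLUSRCVR) +
      SSLCIPH(TRIPLE_DES_SHA_US) +
      SSLCAUTH(REQUIRED) +
      SSLPEER('CN=PARTNERQM, O=Trusted Partner Ltd')
```

Without SSLCAUTH(REQUIRED) the partner is not obliged to send a certificate at all, so the SSLPEER check would never fire.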


But even a channel exit is not bulletproof. If your exit allows QM1 on Server1 from port 1418, I can see the bad guy creating a new server, calling it Server1, building a QM1, setting the LOCLADDR to 1418, and trying to connect to you. If you have AdoptNewMCA set, he bumps you off and takes your place. OK, a bit far-fetched, but possible!



Quote:

About having more than one listener, would not help you, because the CLUSRCVR of the receiving qmgr is used for creating the partners CLUSSDR......


True, the CLUSRCVR definition is used as a model for creating the CLUSSDR on the sending side. However, if you have multiple outside QMs coming into your one EdgeQM, all those connection attempts must be routed through the one default listener on your QM. So one of the external QMs can bombard you with connection attempts, crashing that one listener and knocking everyone else out of the water. Separate listeners avoid this.
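Running one listener per partner is just a matter of starting runmqlsr on distinct ports; the queue manager name and port numbers here are invented, and these commands obviously need an MQ installation to run:

```shell
# One listener (and port) per external partner on the edge QM,
# so a flood against one partner's port cannot take the others down.
runmqlsr -m EDGEQM -t tcp -p 1415 &
runmqlsr -m EDGEQM -t tcp -p 1416 &
```

Combined with firewall rules that let each partner reach only its own port, this also gives the per-interface isolation mentioned earlier in the thread.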
_________________
Peter Potkay
Keep Calm and MQ On
jefflowrey
PostPosted: Wed Apr 07, 2004 1:27 pm    Post subject: Reply with quote

Grand Poobah

Joined: 16 Oct 2002
Posts: 19981

Peter -
You should really take the time to re-collate all the information in this thread into a white paper and publish it somewhere, if only in the Tips&Tricks...

I'd do it, but it's really your work.
_________________
I am *not* the model of the modern major general.
seanb
PostPosted: Fri Apr 09, 2004 3:01 am    Post subject: Reply with quote

Apprentice

Joined: 02 Aug 2003
Posts: 39

The important thing with SSL is that you must protect your private key at all costs. This leads to interesting challenges, such as certificate distribution, especially if you are dealing with many clients scattered throughout the country.

I have been told by my security department that if you make the key non-exportable, people will not be able to copy it. I'm not sure how this fits with MQ, as I keep seeing the statement "make the private key exportable". I'm under the impression this means it must be exportable so it can be imported elsewhere, but once imported it should be marked non-exportable. I think I have done this in Windows, but I cannot remember for sure (it was a while ago).

I have also been told by my security department that you can generate a certificate that is unique to the server, which means that even if people were to copy it, it would be useless. I have yet to verify this, but I trust what they say.
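MQ itself manages certificates through its own key-store tooling (iKeyman/gsk), but purely as an illustration of tying a certificate to one host, here is a hedged openssl sketch; the file names and distinguished name are invented, and in practice you would use a CA-signed certificate rather than a self-signed one:

```shell
# Generate a key pair and a self-signed certificate whose CN names
# this specific server; a peer check against the DN (e.g. SSLPEER)
# then rejects the cert if it is presented from anywhere else.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout edgeqm.key -out edgeqm.crt -days 365 \
    -subj "/CN=edgeqm.example.com/O=My Company"
```

Note this only helps if the verifying side actually checks the name in the certificate; copying the key and cert together still works against a peer that accepts any certificate from your CA.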

This all leads to another area we have touched on here, and that is O/S security (perhaps this has been covered in another thread, so I won't say too much here). All your best-laid plans can be undone by one Wintel user with administration access, or by one unsecured channel (especially a SVRCONN). MQ security cannot be looked at in isolation (i.e., a single queue manager); it must be looked at across the enterprise (not only all queue managers, but also their associated environments) and beyond (partner security). You cannot consider only MQ security; you must also consider issues such as O/S security, who has access to the server and what rights they have, and segregation of duties (how many people really consider this?). You don't want a developer to have integration, QC and production access; similarly, do you really want your MQ administrator also setting the security authorisations? (From experience, this segregation may not always be possible.) And what about protecting your Wintel box from viruses? Not doing so could lead to a DoS.

There is a fantastic redbook, 'WebSphere MQ Security in an Enterprise Environment' (sg246814.pdf), that should be read by anyone serious about MQ security.

Cheers,
Sean.
Copyright © MQSeries.net. All rights reserved.