Performance Remote Queues???
klamerus
Posted: Sun Mar 20, 2005 7:19 pm    Post subject: Performance Remote Queues???

Disciple

Joined: 05 Jul 2004
Posts: 199
Location: Detroit, MI

We currently have a very limited form of redundancy: work that comes into our MQ cluster is allocated to one of two servers.

Each of these has a number of queues that are read by programs that perform work on the messages in them. These messages can be rather large (perhaps up to a megabyte or two). They are documents that need work performed on them. There is only one program that can perform the particular type of work needed for each queue.

I've been looking at ways of making the system more reliable and also more scalable. Part of that is redesigning our programs so that several of them can run at a time and perform work on messages from the same queue.

At the same time, I was looking at moving all the queues to a host server and all the work programs to work servers. The theory is that we can add servers (with programs) as necessary to get the volume of work we need performed. We would have a fail-over server for this primary queue server (with the queues stored on a SAN).

This would give us a core place where the work (queues) is stored, with a fail-over server. We would also be able to scale up by running as many of the agent programs that do the work as we need.

The concern is with the read/write of data across the network between these systems. I'm writing to see if anyone has done anything remotely similar to this. Has the network ever been an issue? Has anyone ever run into any issue with several programs reading from a common queue? Has anyone set up queues on a SAN for fail-over like this?
jefflowrey
Posted: Mon Mar 21, 2005 5:32 am

Grand Poobah

Joined: 16 Oct 2002
Posts: 19981

At least a few people have set up QMgrs on SAN, following the normal MQ Failover procedures (using OS level failover, and using the SAN for shared disk).

The question to be aware of when putting programs on different machines than queue managers is "Do my programs need two-phase commit?".

If the answer is "No", then you're fine using an MQClient. The performance of an MQClient connection is somewhat slower than that of a Bindings connections - since there is network traffic invovled... and it will only remove the CPU burden of your business logic (hopefully the large part of the work being done) - since the client agent does all the MQ API processing on the server, not on the client.

If your apps DO need two-phase commit, then you're best to leave them on the same machine as a queue manager.
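
For example (just a sketch using the WebSphere MQ classes for Java - the host, channel and queue manager names are placeholders), the only difference between a bindings and a client connection from the application's point of view is whether you tell MQEnvironment where the queue manager lives:

Code:
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

public class ConnectExample {
    public static void main(String[] args) throws MQException {
        // Bindings connection: the app runs on the same box as the queue
        // manager and attaches locally - no network hop.
        // MQQueueManager qMgr = new MQQueueManager("DOCHUB.QM");

        // Client connection: same API, but every MQI call flows over a
        // server-connection channel to the remote queue manager.
        MQEnvironment.hostname = "queuehost.example.com";   // placeholder host
        MQEnvironment.port     = 1414;                       // listener port
        MQEnvironment.channel  = "DOCHUB.SVRCONN";           // placeholder SVRCONN channel
        MQQueueManager qMgr = new MQQueueManager("DOCHUB.QM");

        qMgr.disconnect();
    }
}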

The factor that should drive how much failover you need is the business cost of a message sitting unprocessed for the amount of time it would take your IT staff to restore a damaged queue server to the point where you can move messages off of it.

Then compare this with the cost of implementing failover at the OS Level. If the first cost is bigger than the second cost... implement failover.

But be aware that implementing OS-level failover using HACMP, MSCS, Veritas, etc. may be a very big job if you've never done it before.
_________________
I am *not* the model of the modern major general.
JLRowe
Posted: Mon Mar 21, 2005 7:57 am

Yatiri

Joined: 25 May 2002
Posts: 664
Location: South East London

The problem with having this middle tier of a host server is that there are 2 hops from the client to your server. This is a lot of extra work, and having your clients submit directly to each server QM is more scalable.

Of course, HA is a major step up. Having each server with its own QM and work programs means you scale your availability and capacity by adding 'n' servers in the MQ cluster. But HA is a big undertaking unless you have prior experience. In the past, I've seen the amount of work and money spent on shared hardware, scripts and troubleshooting that goes into MSCS, HACMP and ServiceGuard implementations. It is getting easier, but cross-vendor issues can still cause difficulty.
klamerus
Posted: Mon Mar 21, 2005 5:03 pm    Post subject: More Info

Disciple

Joined: 05 Jul 2004
Posts: 199
Location: Detroit, MI

Maybe I'm not being as clear as I could be (or maybe I'm misunderstanding the responses a bit). Either way, I'll post some more in the hope that I continue to get good info.

Our system is a document processing "hub". Messages are sent to it with a set of requested tasks, described in an XML header. There might be anywhere from 1 or 2 up to 5 or 6 tasks requested. The system can fax, print, archive, e-mail, convert and do a number of other things with the documents it is sent. These tasks can take from a second (e-mail) to a couple of minutes (fax). Each task has its own queue, and as the tasks are performed by our programs (called agents), they place the message on the queue for the next task. Eventually the work is completed.
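
In rough terms, each agent does something like this (sketched with the WebSphere MQ classes for Java; the queue names and the helper method are made up for illustration, not our actual code):

Code:
import com.ibm.mq.*;

// Sketch of the agent hand-off: take a document off this agent's task
// queue, perform the task, then pass the document to the queue for the
// next requested task.  All names are placeholders.
public class AgentHandoff {
    public static void handleOne(MQQueueManager qMgr) throws MQException {
        MQQueue inQ = qMgr.accessQueue("TASK.FAX",
                MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT;  // get inside a unit of work
        gmo.waitInterval = 30 * 1000;                        // wait up to 30s for work

        MQMessage doc = new MQMessage();
        inQ.get(doc, gmo);

        String nextQueue = faxDocumentAndReadNextTask(doc);  // parse the XML header, do the fax
        if (nextQueue != null) {
            MQQueue outQ = qMgr.accessQueue(nextQueue, MQC.MQOO_OUTPUT);
            MQPutMessageOptions pmo = new MQPutMessageOptions();
            pmo.options = MQC.MQPMO_SYNCPOINT;               // put in the same unit of work
            outQ.put(doc, pmo);
            outQ.close();
        }
        qMgr.commit();   // the move from one task queue to the next is atomic
        inQ.close();
    }

    private static String faxDocumentAndReadNextTask(MQMessage doc) { return "TASK.ARCHIVE"; }
}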

Right now our system consists of 2 servers. Requests get sent to them in "round robin" fashion. Each might receive 5000 requests a day (it depends on the day of the month). The work comes in quickly (in batches) but takes quite a while to process.

Right now, if a server has an issue, it can hold up the processing of work for hours (even days). That's not good, so we're looking for ways to make the system more reliable. The major thing that causes problems is the agents failing. MQ almost never has an issue. So, we were looking to move the queue management to a separate system and use our two servers to just host the agents.

We also want to modify the agents so we can run several of a particular type on each server. That will give us the scalability we need. Our servers are never very busy, because each agent is currently single-threaded and limited by the rate at which it can do work. We have lots of excess server capacity.

At the same time, we were considering hosting the queue files themselves on a SAN and clustering the server. That should be about as reliable as we can get. If this is a problem, we'll settle for the queue server just being separate from our agent servers. As I said, the MQ software almost never has issues (none that any of us can remember).

So, that's what we're trying to accomplish. Scalability number one, reliability number two. A side benefit would be that we only need to license MQ on the server hosting MQ. That's a minor benefit though.
klamerus
Posted: Mon Mar 21, 2005 5:09 pm    Post subject: okay

Disciple

Joined: 05 Jul 2004
Posts: 199
Location: Detroit, MI

Okay, it looks like my fingers didn't keep up with my brain. I see that my third-from-last paragraph is pretty sloppy.

I meant to say that we would like to host our queues on one server and do all the processing on other servers. That way we can scale the system as we need to.

Separating the two would insulate our MQ server from the problems with our agents.

Since we would have several servers running agents, if an agent died it wouldn't hold things up. The other agents (on the same server or the other server) would continue to do work. Nothing would get "stuck" like it does now.
jefflowrey
Posted: Tue Mar 22, 2005 6:17 am

Grand Poobah

Joined: 16 Oct 2002
Posts: 19981

Again, as long as you don't need two-phase commit (for instance, inserting a message into a database before you fax it), then you can use an MQ Client.

But if an application processing a queue gets stuck with a server connection, it's going to get stuck with a client connection. And if you can't monitor for that now, you won't be able to monitor for it later either.

And if you use an MQClient connection, you need to think about what happens if the Qmgr is not available - should the app just stop, or should it try for a different qmgr, or just try again later?
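
For the "try again later" option, a client-attached agent can just catch the connect failure and retry - a minimal sketch with the MQ classes for Java (host, channel and queue manager names are the same placeholders as before):

Code:
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;

// Retry the client connection instead of letting the agent die when the
// queue manager is unreachable.  All names are placeholders.
public class ConnectWithRetry {
    static MQQueueManager connect() throws InterruptedException {
        MQEnvironment.hostname = "queuehost.example.com";
        MQEnvironment.port     = 1414;
        MQEnvironment.channel  = "DOCHUB.SVRCONN";

        while (true) {
            try {
                return new MQQueueManager("DOCHUB.QM");
            } catch (MQException e) {
                // e.g. reason 2059 (MQRC_Q_MGR_NOT_AVAILABLE)
                System.err.println("Connect failed, reason " + e.reasonCode
                        + " - retrying in 30 seconds");
                Thread.sleep(30 * 1000);
            }
        }
    }
}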
_________________
I am *not* the model of the modern major general.
JLRowe
Posted: Tue Mar 22, 2005 12:48 pm

Yatiri

Joined: 25 May 2002
Posts: 664
Location: South East London

If your servers pull the messages off the QM server with the client, then if a server fails the connection can time out and the message can be rolled back onto the queue. I would investigate timeouts with the client and whether it can detect a failed connection. You may have to play with the OS keep-alive setting.

Unless you use the XA client, you will be limited to single-phase commit for MQ, so the servers must read the messages under syncpoint and commit the MQ work as the very last thing.
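
In the MQ classes for Java that pattern looks roughly like this (the queue name is a placeholder) - the get is under syncpoint, the business work happens, and the MQ commit is the very last step, so if the agent or its client connection dies mid-flight the queue manager rolls the message back for another agent to pick up:

Code:
import com.ibm.mq.*;

// Consumer sketch: get under syncpoint, commit only after the work is done,
// back out on failure so the message goes back on the queue.
public class SyncpointConsumer {
    public static void run(MQQueueManager qMgr, String queueName) throws MQException {
        MQQueue q = qMgr.accessQueue(queueName,
                MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = MQC.MQGMO_WAIT | MQC.MQGMO_SYNCPOINT
                    | MQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 60 * 1000;   // poll for up to a minute at a time

        while (true) {
            MQMessage msg = new MQMessage();
            try {
                q.get(msg, gmo);                  // reason 2033 when the wait expires
            } catch (MQException e) {
                if (e.reasonCode == 2033) continue;   // no message yet - keep waiting
                throw e;
            }
            try {
                doTheWork(msg);                   // fax / print / e-mail, etc.
                qMgr.commit();                    // commit MQ as the last thing
            } catch (Exception e) {
                qMgr.backout();                   // message goes back for another agent
            }
        }
    }

    private static void doTheWork(MQMessage msg) { /* business logic placeholder */ }
}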