maintain message through clustering and failing components
jchype
PostPosted: Thu Aug 15, 2002 7:22 am    Post subject: maintain message through clustering and failing components

Newbie

Joined: 12 Jun 2002
Posts: 8

My customer has an application that starts at a web front end, which sends an XML message to a C++ MQ user wrapper program. This wrapper is an MQ client program that puts a message on a queue, where WMQI (running on AIX) takes over and changes the XML message into a COBOL copybook structure. The message is then sent over to a queue on an MVS platform. That queue and message get read by the MQ/CICS bridge. From there the inquiry gets satisfied by a backend system and then sent back to the AIX box, where WMQI changes the message format from the COBOL structure back to XML. Finally, after the flow, the message is placed on a queue where the original wrapper application has already performed a get with wait based on correlation ID.

With the above scenario, it is very easy to maintain the reply queue, reply queue manager, and correlation ID through the process, and the originating app can find the message. But if another AIX box containing a WMQI broker is added to aid availability and reliability, then it is more difficult to manage the process.

The challenge is to make sure the originating web front-end MQ client program, which does an MQ get with wait based on CorrelId, receives the response it expects back. The front end program does not know where the message will be when it is returned. Since the application is pseudo-synchronous, and it is an MQ client doing a get against one of two queues, it is very difficult to know where the message will be.

What is the best way for the originating application to find the message when we are using clustering, and also through failing components such as losing a queue manager?
udaybho
PostPosted: Thu Aug 15, 2002 10:34 am

Voyager

Joined: 09 May 2002
Posts: 94
Location: Chicago

Interesting subject:

In your scenario you can have multiple instances of the input queue on the WMQI AIX boxes, one on each box.

I am assuming that there are two AIX machines clustered and running WMQI, each with one queue manager hosting a broker.

Now when your client drops the message, the cluster workload exit will direct the message to either one of the queue instances when both machines (and queue managers) are up and running. If either queue manager fails, the cluster workload exit will direct traffic to the running queue manager.

Your reply flow should know the destination queue, which you have to send as the reply-to queue, and the mainframe has to retain this information in the reply message.
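
A minimal sketch of that reply leg in C with the MQI, assuming a hypothetical responder helper (the name send_reply and the buffer handling are illustrative, not from this thread) that has just read a request and routes the answer to the ReplyToQ/ReplyToQMgr carried in the request's MQMD:

Code:
#include <string.h>
#include <cmqc.h>

/* Hypothetical helper: send a reply to wherever the request asked for it.
   reqMd is the MQMD of the request message just read; error handling is
   trimmed for brevity. */
void send_reply(MQHCONN hConn, MQMD *reqMd, MQBYTE *body, MQLONG bodyLen)
{
    MQOD   od  = {MQOD_DEFAULT};
    MQMD   md  = {MQMD_DEFAULT};
    MQPMO  pmo = {MQPMO_DEFAULT};
    MQHOBJ hReply;
    MQLONG cc, rc;

    /* Route the reply to the queue/qmgr named in the request. */
    memcpy(od.ObjectName,     reqMd->ReplyToQ,    MQ_Q_NAME_LENGTH);
    memcpy(od.ObjectQMgrName, reqMd->ReplyToQMgr, MQ_Q_MGR_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
           &hReply, &cc, &rc);

    /* Let the requester match on CorrelId: copy the request's MsgId. */
    md.MsgType = MQMT_REPLY;
    memcpy(md.CorrelId, reqMd->MsgId, MQ_CORREL_ID_LENGTH);

    MQPUT(hConn, hReply, &md, &pmo, bodyLen, body, &cc, &rc);
    MQCLOSE(hConn, &hReply, MQCO_NONE, &cc, &rc);
}

In the flow described above, the MQ/CICS bridge plays this role on the mainframe, so the key point is simply that the ReplyToQ/ReplyToQMgr values survive the round trip.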

Uday Bhosle
PeterPotkay
PostPosted: Thu Aug 15, 2002 7:04 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Quote:
The front end program does not know where the message will be when it is returned


Sure it does. It would be on the Reply2Queue/Reply2QueueManager specified on the original request. If the client connected to QM1 to put its request, it would ask for the reply to come back to ReplyQ1 on QM1, and that's where it should do its get with wait.
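
A minimal sketch of that request/reply pattern from the client side, in C with the MQI. The queue manager and queue names (QM1, REQUEST.Q, REPLY.Q1) and the 30-second wait are illustrative assumptions, and error handling is trimmed:

Code:
#include <string.h>
#include <cmqc.h>

int main(void)
{
    MQHCONN hConn;
    MQHOBJ  hReq, hReply;
    MQLONG  cc, rc, dataLen;
    MQOD    odReq   = {MQOD_DEFAULT};
    MQOD    odReply = {MQOD_DEFAULT};
    MQMD    md      = {MQMD_DEFAULT};
    MQPMO   pmo     = {MQPMO_DEFAULT};
    MQGMO   gmo     = {MQGMO_DEFAULT};
    char    request[] = "<inquiry>...</inquiry>";
    char    reply[4096];

    MQCONN("QM1", &hConn, &cc, &rc);

    /* Put the request, naming a reply queue on the qmgr we connected to. */
    strncpy(odReq.ObjectName, "REQUEST.Q", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &odReq, MQOO_OUTPUT, &hReq, &cc, &rc);

    md.MsgType = MQMT_REQUEST;
    md.Report  = MQRO_COPY_MSG_ID_TO_CORREL_ID;   /* responder copies MsgId */
    strncpy(md.ReplyToQ, "REPLY.Q1", MQ_Q_NAME_LENGTH);   /* lives on QM1 */
    MQPUT(hConn, hReq, &md, &pmo, (MQLONG)strlen(request),
          request, &cc, &rc);

    /* Get with wait on that same queue, matching the reply's CorrelId
       against the MsgId the qmgr generated for our request. */
    strncpy(odReply.ObjectName, "REPLY.Q1", MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &odReply, MQOO_INPUT_AS_Q_DEF, &hReply, &cc, &rc);

    gmo.Version      = MQGMO_VERSION_2;           /* MatchOptions needs V2 */
    gmo.Options      = MQGMO_WAIT | MQGMO_CONVERT;
    gmo.WaitInterval = 30000;                     /* 30 seconds */
    gmo.MatchOptions = MQMO_MATCH_CORREL_ID;
    memcpy(md.CorrelId, md.MsgId, MQ_CORREL_ID_LENGTH);

    MQGET(hConn, hReply, &md, &gmo, sizeof(reply), reply,
          &dataLen, &cc, &rc);

    MQDISC(&hConn, &cc, &rc);
    return (rc == MQRC_NONE) ? 0 : 1;
}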
_________________
Peter Potkay
Keep Calm and MQ On
oz1ccg
PostPosted: Thu Aug 15, 2002 10:42 pm

Yatiri

Joined: 10 Feb 2002
Posts: 628
Location: Denmark

Another way to achieve the goal might be creating different cluster queues on each cluster queue manager for the reply.
In the reply queue name (for the client application), embed the name of the connected queue manager, e.g. QMX1.REPLY, and then let the wrapper application inquire the queue manager name and place it in the ReplyToQ.
When using this approach the cluster functions will work, and you would not have to change the wrapper application when you add a new queue manager (QMX2) to the cluster; it simply asks for the queue manager name and forms the reply queue: QMX2.REPLY...
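
A minimal sketch of that inquiry in C with the MQI; the helper name build_reply_q_name and the .REPLY suffix handling are illustrative assumptions:

Code:
#include <stdio.h>
#include <string.h>
#include <cmqc.h>

/* Hypothetical helper: ask the connected qmgr for its name and
   form a per-qmgr reply queue name such as "QMX1.REPLY". */
void build_reply_q_name(MQHCONN hConn, char *out, size_t outLen)
{
    MQOD   od = {MQOD_DEFAULT};
    MQHOBJ hQmgr;
    MQLONG cc, rc;
    MQLONG selector = MQCA_Q_MGR_NAME;
    MQCHAR qmName[MQ_Q_MGR_NAME_LENGTH + 1];

    od.ObjectType = MQOT_Q_MGR;          /* inquire on the qmgr itself */
    MQOPEN(hConn, &od, MQOO_INQUIRE, &hQmgr, &cc, &rc);

    MQINQ(hConn, hQmgr, 1, &selector, 0, NULL,
          MQ_Q_MGR_NAME_LENGTH, qmName, &cc, &rc);
    MQCLOSE(hConn, &hQmgr, MQCO_NONE, &cc, &rc);

    /* The name comes back blank-padded to 48 characters; trim it. */
    qmName[MQ_Q_MGR_NAME_LENGTH] = '\0';
    for (int i = MQ_Q_MGR_NAME_LENGTH - 1; i >= 0 && qmName[i] == ' '; i--)
        qmName[i] = '\0';

    snprintf(out, outLen, "%s.REPLY", qmName);   /* e.g. QMX1.REPLY */
}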

The MQ admin of course has to create the cluster queues correctly....

One of our customers has several such solutions and it works.

Just my $0.02
_________________
Regards, Jørgen
Home of BlockIP2, the last free MQ Security exit ver. 3.00
Cert. on WMQ, WBIMB, SWIFT.
jchype
PostPosted: Sat Aug 17, 2002 12:35 pm

Newbie

Joined: 12 Jun 2002
Posts: 8

Let me simplify this question. I have an MQSeries client application that is expecting an answer back, so it needs to perform a get with wait against the reply queue. The reply queue is clustered and is one of two, so it is possible that the returning message could end up on either one of the queues. If the MQSeries client application knows the CorrelId, what is the best technique to find the queue where the message is located?
oz1ccg
PostPosted: Mon Aug 19, 2002 3:11 am

Yatiri

Joined: 10 Feb 2002
Posts: 628
Location: Denmark

Quote:
The reply queue is clustered and is one of two, so it is possible that the returning message could end up on either one of the queues.


You're right, when the reply queue is clustered (to achieve availability). But why cluster it?
Even if you write a CLWL exit, I can't see how to determine where the client is connected (seen from the CLWL exit's point of view), and where to route the message.

If your client application is requesting data and the connected queue manager goes down, the client connection gets broken, and the application has to reinitiate the communication to another queue manager. On the new queue manager, the client application has to issue the request again, specifying a reply queue on the newly connected queue manager.
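
A minimal sketch of that reconnect-and-reissue logic, under the assumption of a hypothetical issue_request() helper and an illustrative list of candidate queue managers (neither is from this thread):

Code:
#include <stddef.h>
#include <cmqc.h>

/* Hypothetical: issue_request() puts the request with a ReplyToQ on the
   connected qmgr and does the get-with-wait; it returns the MQ reason
   code. How each name below is reached is up to the client channel
   configuration. */
extern MQLONG issue_request(MQHCONN hConn);

int run_with_failover(void)
{
    char  *qmgrs[] = {"QMX1", "QMX2"};     /* illustrative names */
    MQLONG cc, rc;

    for (size_t i = 0; i < sizeof(qmgrs) / sizeof(qmgrs[0]); i++) {
        MQHCONN hConn;

        MQCONN(qmgrs[i], &hConn, &cc, &rc);
        if (cc == MQCC_FAILED)
            continue;                      /* this qmgr is down, try next */

        rc = issue_request(hConn);
        if (rc != MQRC_CONNECTION_BROKEN && rc != MQRC_Q_MGR_QUIESCING) {
            MQDISC(&hConn, &cc, &rc);
            return 0;                      /* got an answer (or a hard error) */
        }
        /* Connection died mid-request: the in-flight reply is lost to us,
           so reissue the request against the next queue manager. */
    }
    return -1;                             /* no queue manager reachable */
}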

In your design scenario it's complicated, due to the use of multiple UOWs on different platforms. I've been into a similar scenario, and the result of the investigation was: a shared queue on z/OS.

Just my $0.02
_________________
Regards, Jørgen
Home of BlockIP2, the last free MQ Security exit ver. 3.00
Cert. on WMQ, WBIMB, SWIFT.
bmccarty
PostPosted: Wed Aug 21, 2002 6:54 pm

Apprentice

Joined: 18 Dec 2001
Posts: 43

We did use Tuxedo one time as the "client" in this scenario. Tuxedo can know the status of all of the virtual reply-to queues for the queue managers in the cluster that it is attached to. In this way Tuxedo would broker the request back to the real user/client, so it is transparent to them which queue the message was actually extracted from.

Of course, if you don't have Tuxedo, there will probably have to be some brokering coded manually between the real client and the queue manager.