Maintaining/deploying many versions of a message flow/set?
Sandman |
Posted: Mon Oct 27, 2008 6:02 am Post subject: Maintaining/deploying many versions of a message flow/set?
Centurion
Joined: 16 Oct 2001 Posts: 134 Location: Lincoln, RI
Hi,
I've searched the forum and the manuals but to no avail. Hoping you might provide some direction on how to manage this when the goal is reuse.
During our first few years of broker development (which focuses on CICS integration), our normal design was to create a new message flow and message set for each new integration. This strategy provided no definition or code reuse though.
So on our last big project, we created a message set in which we defined many "objects" that are reusable across message flows. We also have a message flow project that contains corresponding ESQL files that provide transformation procs between XML and COBOL formats. Many of our message flows then use these defs/procs to build and process messages.
We're now experiencing some of the maintenance and administration pains of this approach as new projects come along and wondering if we're maybe doing something a bit unorthodox.
If we need to make changes to either the common message set or procedure project, if all of the current integrations that use these are not willing/able/ready to digest the changes, we need to be able to support multiple runtime versions of them. This is less of an issue w/ the common procedure message flow project (vs. the message set) - presuming we can keep track of which versions of each ESQL file each integration uses - because the code from that common project is compiled into each flow. [We did this project in v.6.0, and I believe the new versioning features are part of 6.1?]
At any rate, our crude "versioning" tactic under tight timelines was to simply make copies of the procs in the common ESQL project and rename them for use by the new integrations. We knew this was a temporary solution though and now we've gone back and updated this project by renaming the new procs (to the old names) and regression testing the original flows.
We did the same thing for the message SET, in which we created new copies of its definitions and added a suffix to them to make them unique, so as to not disturb existing message flow projects (seeing all message flows share the same defs in the one-and-only running copy of the message set in the exec group).
I'd like to know how others manage this? Even though we might sometimes be able to get all existing users of a message flow or set to agree to absorb changes, we realize that sometimes we will not, and we'll have to maintain/deploy multiple versions of these.
If the broker only allows each like-named message flow or message set to be deployed once per execution group, how do you support the need for multiple runtime versions of these? Do you put different versions in their own execution groups? (To date, we've deployed everything to one big exec group. Is that a bad strategy?) Or do you copy the flow/set under a new name (i.e. include a version # in it)? If so, then you have multiple "logical" copies of the same flow in the repository, but by different "physical" names? (This would seem confusing.)
Lastly (thanks for bearing w/ me on this )... in addition to the runtime concerns above, how do you manage the design time artifact relationships? For instance, version 1 of message flow MF1 uses version 5 of the common procedure message flow project MF2 and version 3 of message set MS1. But version 2 of MF1 uses version 6 of MF2 and version 4 of MS1. How does one manage these relationships - both in the toolkit workspace(s) and in the repository?
Also, do others employ this type of reuse strategy - a common message set that contains the most commonly-used "objects" in our integrations, combined w/ a common message flow project that contains ESQL procs for transforming between XML and COBOL?
Thank you for any guidance you can provide, esp. on the runtime versioning; and regarding the code versioning, even if it's just to point me to the appropriate reading.
marko.pitkanen |
Posted: Mon Oct 27, 2008 12:12 pm Post subject:
Chevalier
Joined: 23 Jul 2008 Posts: 440 Location: Jamsa, Finland
Hi,
Hard questions from you :D
Some ideas that come to mind below.
I think one way to reduce complexity is to declare the "common" resource (whatever it is: message set, function, procedure, or subflow library) as a given: there is only one version of it that can be used, and if it changes, all the other resources that use it must upgrade, test, and deploy to production as soon as possible with the latest official version of the common resource.
With XML messages (and generic XML parsers), changes can perhaps be designed so that consumers who don't need the new pieces of information don't have to know about them or change their behavior at all, even when they receive messages in the new form.
Perhaps you can design your architecture so that, for those who can't accept the new common message format, you have a separate transformation between the application-specific format and the common format (study ESB design patterns for canonical/common message formats).
Use MQSI keywords, a version control system, and the keyword substitution function of the toolkit with your resources, and you have the possibility to check which versions/revisions are in the runtime environment and the BAR files.
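As a rough sketch of this keyword idea: MQSI keywords are name=value pairs embedded in `$MQSI ... MQSI$` markers inside a resource (for example in an ESQL comment), which the runtime and tools can then report. The procedure name and keyword values here are invented for illustration:

```esql
-- Illustrative sketch: tagging a common ESQL file with MQSI keywords so
-- the deployed revision can be queried later. VERSION and AUTHOR values
-- below are placeholders; any name=value pair can be embedded this way.

-- $MQSI VERSION=5 MQSI$
-- $MQSI AUTHOR=integration-team MQSI$

CREATE PROCEDURE CommonTransform(IN inRef REFERENCE, IN outRef REFERENCE)
BEGIN
	-- ... shared transformation logic ...
END;
```

With a version control system doing keyword substitution at check-in, the value can track the file's revision automatically, so the BAR file records exactly which revision of each common resource was compiled in.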
Marko
mqjeff |
Posted: Mon Oct 27, 2008 12:23 pm Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
There's a big difference in the runtime between common ESQL routines and common message sets.
Anything that is compiled into a message flow is not affected by whatever else is compiled into some other message flow.
So where you have a common set of subflows or ESQL procedures, the only version that a given flow will know about is the version that is in the workspace that builds the BAR file that deploys the flow.
Message sets, however... yes. The only way in versions 6.0 and 6.1 to create uniquely deployable versions of MRM message sets is to create a copy of the message set project with a version-specific name. XSDZIP files are a different question - and presumably the different versions of the XSDs in the XSDZIP have version-specific namespaces, so it doesn't really matter.
To run multiple versions of the same message flow simultaneously, you can put the flow into a version-specific broker schema... (or you can rename the .cmf file in the BAR before you deploy it - this is not for the faint of heart).
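A minimal sketch of the broker-schema approach, with invented schema and module names: each version of the flow's ESQL lives in its own schema, so two modules with the same name can coexist in one execution group without clashing.

```esql
-- Version 1 of the flow's ESQL, in its own schema.
-- A second file could declare "BROKER SCHEMA com.example.orders.v2"
-- with a module of the same name, and both versions could be deployed
-- to the same execution group side by side.

BROKER SCHEMA com.example.orders.v1

CREATE COMPUTE MODULE OrderTransform
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- version 1 transformation logic goes here
		RETURN TRUE;
	END;
END MODULE;
```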
Sandman |
Posted: Wed Oct 29, 2008 8:57 am Post subject:
Centurion
Joined: 16 Oct 2001 Posts: 134 Location: Lincoln, RI
Thanks for the replies so far guys.
Marko:
I realize that just mandating that all existing flows get on board immediately w/ any new updates to a common subflow/proc/message set would be the easiest to manage, and that is indeed what we strive for. It's just not always practical.
I also understand what you're saying about adopting a canonical model, as we've implemented one already. That was key to our being able to create ONE message set and ONE corresponding XML/MRM and MRM/XML transformation project (w/ reusable procedures). So ALL of our transformations between XML and MRM are done against just one model. For example, regardless of the message flow/set, if an integration uses an Address, we create an element of type ADDRESS in it. Then we can use the AddressMRMtoXML or AddressXMLtoMRM procs to transform it by simply passing input and output references to the proc.
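To make that concrete, a simplified sketch of what one of those shared procedures might look like (the field names are invented; the real canonical model is not shown in the thread): the caller passes references into the XML source and MRM target trees, so the same procedure works from any flow that uses the canonical ADDRESS type.

```esql
-- Hypothetical sketch of a shared XML-to-MRM transformation procedure.
-- Field names below are illustrative, not the actual canonical model.

CREATE PROCEDURE AddressXMLtoMRM(IN xmlAddr REFERENCE, IN mrmAddr REFERENCE)
BEGIN
	SET mrmAddr.STREET   = xmlAddr.Street;
	SET mrmAddr.CITY     = xmlAddr.City;
	SET mrmAddr.STATE    = xmlAddr.State;
	SET mrmAddr.ZIP_CODE = xmlAddr.PostalCode;
END;
```

Because the procedure only sees the references it is handed, any flow whose message uses the canonical ADDRESS type can call it, which is what makes the single transformation project reusable.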
However, we've also had the luxury of usually only having to integrate w/ our own internal applications. So we've chosen to place the task of converting to/from canonical and app-specific data on the apps themselves (as opposed to adapter flows in the hub). In the flows where the data comes from outside sources, we've done exactly what you suggested - written adapter flows that first convert to our canonical model.
The harsh reality is that we envision a point where we will have to maintain and deploy more than one version of our canonical message set, and maintain more than one version of its transformation project. I haven't done anything to date w/ keywords/substitutions, but will read up on that; thank you.
MQJeff:
I'm aware that ESQL procs and subflows are actually compiled into the message flows that use them. However, it would seem to require some mighty careful workspace manipulation to make sure we pick up the correct versions of these when deploying, if some flows are using the latest while others are still on prior versions. As we update each flow in our master BAR file, we might have to swap out versions of these dependent artifacts, because we can only have one same-named file at a time in the workspace. Or would broker schemas aid in this too?
I'm unfamiliar w/ XSDZIP files. Is this a broker concept or something else?
We've never used anything except the default Broker Schemas. Sounds like it's time to expand our horizons? Thanks for pointing me in that direction.
Thanks again for your suggestions.