zpat
Posted: Fri Jan 18, 2002 4:54 am
Jedi Council
Joined: 19 May 2001   Posts: 5866   Location: UK
When using the new Aggregation nodes in WMQI 2.1, I have quite a few entries in the broker database recording the aggregation state data. Many of these get left behind if the aggregation does not complete - for example, in testing I only run the "fan-out" flow and not the corresponding "fan-in" flow.

My question (to IBM really) is: what is going to clean up these database records? Otherwise they will steadily accumulate in the broker database.
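To put a number on the accumulation, something along the lines of the sketch below could be used to count the state rows. It is purely illustrative: the connection details and the table/column names (AGGR_STATE, CREATE_TIME) are assumptions rather than the documented WMQI 2.1 schema, so check the broker database catalogue for the real names.

Code:
# Rough sketch only: count aggregation state rows in the broker database.
# The DSN, AGGR_STATE and CREATE_TIME are placeholder names, not the real schema.
import pyodbc

conn = pyodbc.connect("DSN=WMQIBKDB;UID=db2admin;PWD=secret")
cur = conn.cursor()

# Total number of aggregation state rows currently held by the broker.
cur.execute("SELECT COUNT(*) FROM AGGR_STATE")
print("aggregation state rows:", cur.fetchone()[0])

# Rows much older than any sensible timeout are almost certainly orphaned fan-outs.
cur.execute("SELECT COUNT(*) FROM AGGR_STATE "
            "WHERE CREATE_TIME < CURRENT TIMESTAMP - 7 DAYS")  # DB2 date arithmetic
print("rows older than 7 days:", cur.fetchone()[0])

conn.close()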
devans
Posted: Tue Mar 19, 2002 9:06 am
Apprentice
Joined: 18 Mar 2002   Posts: 43
The database will be cleaned up during normal operation if your aggregations complete or time out. During development of flows you may leave stray data lying around when, as you say, you test your fan-out but not the fan-in. These entries will be removed when you delete your broker.
Miriam Kaestner
Posted: Wed Mar 20, 2002 12:34 am
Centurion
Joined: 26 Jun 2001   Posts: 103   Location: IBM IT Education Services, Germany
The broker DB is not cleared immediately of completed or timed-out aggregation messages. It seems some kind of garbage collection is done at a later point in time.
devans
Posted: Wed Mar 20, 2002 1:06 am
Apprentice
Joined: 18 Mar 2002   Posts: 43
The database records relating to an aggregation are removed from the broker database when the aggregated reply message is propagated from the "out" or "timeout" terminal of the AggregateReply node.
zpat
Posted: Wed Mar 20, 2002 1:17 am
Jedi Council
Joined: 19 May 2001   Posts: 5866   Location: UK
With respect - normal completion is not the issue. Over time, entries will accumulate because of aggregations NOT completing, for whatever reason.

My point is that NOTHING is clearing this garbage out.
devans
Posted: Wed Mar 20, 2002 1:35 am
Apprentice
Joined: 18 Mar 2002   Posts: 43
What you perceive as garbage, the broker considers to be important user data. In a production environment, where your flows are correct, you would expect every aggregation to complete or time out, in which case there is no issue with orphaned records. In a development environment where you are tweaking your flows, records may be orphaned, but they will be deleted eventually when you run the mqsideletebroker command.

I know that when I'm developing flows I get my queues filled up with stray messages, and I just clear them out with an MQ command (e.g. CLEAR QLOCAL under runmqsc). Is this the sort of thing you're after for Aggregation? Some way to clear specific records from the database by specifying a broker name and an aggregation name, but leaving the other data intact?
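Something of that shape might look like the sketch below - an illustration of the idea only, not a shipped utility. The pyodbc connection and the AGGR_STATE / BROKER_NAME / AGGR_NAME names are assumptions made for the example; the real aggregation tables in the broker database may be laid out quite differently.

Code:
# Hypothetical clean-up in the spirit of the suggestion above: remove the
# state rows for one named aggregation on one broker, leaving the rest intact.
# Table and column names are invented for illustration.
import sys
import pyodbc

def clear_aggregation(dsn, broker_name, aggregate_name):
    conn = pyodbc.connect("DSN=" + dsn)
    cur = conn.cursor()
    cur.execute("DELETE FROM AGGR_STATE "
                "WHERE BROKER_NAME = ? AND AGGR_NAME = ?",
                (broker_name, aggregate_name))
    print("removed", cur.rowcount, "state rows for", aggregate_name)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    # e.g. python clear_aggregation.py WMQIBKDB MYBROKER ORDER_AGGREGATE
    clear_aggregation(sys.argv[1], sys.argv[2], sys.argv[3])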
zpat
Posted: Wed Mar 20, 2002 9:00 am
Jedi Council
Joined: 19 May 2001   Posts: 5866   Location: UK
Correct - aggregation entries that will never be completed (for whatever reason) are garbage, and nothing is removing them. No interface or command is provided to remove these entries. For example, if the timeout was 60 seconds, it would be reasonable to remove entries older than 7 days - don't you agree?

There is no facility to do this - it seems an oversight by IBM not to consider the production issues over a long period of time. If you had a million messages per day and perhaps 0.01% failed to aggregate successfully, you would accumulate 100 redundant database entries per day.
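A housekeeping job of the sort being asked for could be as small as the sketch below, run nightly from a scheduler. Again it is only an illustration under assumed names - AGGR_STATE and CREATE_TIME are not the documented WMQI 2.1 table and column, and deleting rows directly out of the broker database is not something IBM states is supported - so treat it as a sketch of the missing facility rather than a recommendation.

Code:
# Hypothetical nightly purge: delete aggregation state rows far older than any
# sensible timeout (7 days here, against a 60-second aggregation timeout).
# AGGR_STATE and CREATE_TIME are invented names; the real schema may differ.
import pyodbc

RETENTION_DAYS = 7

conn = pyodbc.connect("DSN=WMQIBKDB;UID=db2admin;PWD=secret")
cur = conn.cursor()
cur.execute("DELETE FROM AGGR_STATE "
            "WHERE CREATE_TIME < CURRENT TIMESTAMP - %d DAYS" % RETENTION_DAYS)  # DB2 date arithmetic
print("purged", cur.rowcount, "orphaned aggregation rows")
conn.commit()
conn.close()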