Esa
Posted: Wed May 30, 2012 12:54 am Post subject: releasing a parser allocated by MQGet
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
If you use an MQGet node with Output data location pointing to a location under InputLocalEnvironment and Message domain set to XMLNSC (or anything other than BLOB), does the parser assigned to the gotten message get released if an upstream node propagates the same LocalEnvironment with DELETE DEFAULT?
Does the message have to be propagated to an output node (for example MQOutput) for the parser to be released, or will a plain Passthrough node do?
Does this apply to V6.1.0.5?
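For illustration, a minimal ESQL sketch of the scenario being asked about; the Output data location, module and terminal names are assumptions, not from the post. The MQGet node is assumed to be configured with Output data location OutputLocalEnvironment.Variables.GotMsg and Message domain XMLNSC, and the upstream Compute node (Compute mode including LocalEnvironment) passes the same LocalEnvironment on:

Code:
-- Upstream Compute node driving the MQGet node (sketch, names assumed)
CREATE COMPUTE MODULE DriveMQGet
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- Pass the incoming LocalEnvironment on unchanged; the MQGet node then
    -- attaches the gotten message (and its XMLNSC parser) under
    -- LocalEnvironment.Variables.GotMsg.
    SET OutputLocalEnvironment = InputLocalEnvironment;
    -- DELETE DEFAULT is also the implicit default; the question is whether
    -- the cleanup performed when this call returns also releases the parser
    -- that the MQGet node allocated.
    PROPAGATE TO TERMINAL 'out' DELETE DEFAULT;
    RETURN FALSE;
  END;
END MODULE;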
Esa
Posted: Wed May 30, 2012 5:43 am
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
Well, my guess is that the parser is not released until the flow instance terminates, because the node that created the parser does not propagate the message.
To be able to control when the parser is released, I will have to fetch the message into InputLocalEnvironment as a BLOB and parse it in the upstream Compute node that drives the MQGet node.
I would like to verify this, but I think user or service traces do not contain information on parsers being released.
Or am I trying to solve a problem that does not exist?
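A minimal sketch of that workaround, assuming the MQGet node's Output data location is OutputLocalEnvironment.Variables.RawMsg with Message domain BLOB; the tree paths, module name and CCSID below are assumptions:

Code:
-- Downstream of the MQGet node: parse the gotten BLOB explicitly so the
-- XMLNSC parser is owned by OutputRoot rather than by the LocalEnvironment
-- (sketch; tree paths and CCSID are assumptions).
CREATE COMPUTE MODULE ParseGotMessage
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    DECLARE rawMsg BLOB InputLocalEnvironment.Variables.RawMsg.BLOB.BLOB;
    -- Carry the headers of the triggering message forward
    SET OutputRoot.Properties = InputRoot.Properties;
    SET OutputRoot.MQMD = InputRoot.MQMD;
    -- Parse where we control the owning tree; assuming UTF-8 payloads here
    CREATE LASTCHILD OF OutputRoot DOMAIN('XMLNSC')
        PARSE(rawMsg CCSID 1208);
    PROPAGATE TO TERMINAL 'out' DELETE DEFAULT;
    RETURN FALSE;
  END;
END MODULE;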
smdavies99
Posted: Wed May 30, 2012 5:52 am
Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
Perhaps you could explain why you want to release the parser from memory?
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions.
Esa
Posted: Wed May 30, 2012 6:02 am
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
smdavies99 wrote:
Perhaps you could explain why you want to release the parser from memory?
I have a situation where a flow may need to MQGet a couple of hundred thousand messages, first in browse mode and then destructively, and experience from a previous adventure gives me a hunch that I may get into trouble with orphaned parsers...
kimbert
Posted: Wed May 30, 2012 6:17 am
Jedi Council
Joined: 29 Jul 2003 Posts: 5542 Location: Southampton
I think I remember the previous scenario. Personally, I would make strenuous efforts to avoid using MQGet in a loop. I would try to use a standard MQInput node that records information for the next flow. I guess you are trying to achieve high performance by doing a non-destructive read first... but is that really a requirement? Is there a single-pass algorithm that would work?
Esa
Posted: Wed May 30, 2012 6:36 am
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
I'm implementing a delayed retry of a web service call. The calls are queued while the endpoint is down and, once the endpoint is available again, should be processed in the same order they were received. I don't like the idea of starting/stopping the flow, because I don't want to give the Configuration Manager such a central role (this is V6.1) and because there may be several endpoints down simultaneously.
What do you think of my idea of parsing the messages in the Compute node and letting the parsers get released by a call to PROPAGATE?
I will give it a test run in the near future.
mqjeff
Posted: Wed May 30, 2012 6:38 am
Grand Master
Joined: 25 Jun 2008 Posts: 17447
I would be tempted to use a DB table instead of a queue for this.
Then you can have the flow that does the resend use an MQInput node, and have the flow that reads the table use the BLOB parser.
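As a rough illustration of that split (the table and column names are invented, not from the post), the receiving flow could store each queued call in a table rather than on a queue, for the resend flow to read back later:

Code:
-- Store side (sketch): stash the queued call in a table instead of a queue.
-- Table, column and element names are assumptions.
INSERT INTO Database.RETRY_CALLS (ENDPOINT, RECEIVED_TS, PAYLOAD)
    VALUES (InputRoot.XMLNSC.Request.Endpoint,
            CURRENT_TIMESTAMP,
            ASBITSTREAM(InputRoot.XMLNSC CCSID 1208));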
Esa
Posted: Wed May 30, 2012 7:08 am
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
kimbert wrote:
I think I remember the previous scenario. Personally, I would make strenuous efforts to avoid using MQGet in a loop. I would try to use a standard MQInput node that records information for the next flow. I guess you are trying to achieve high performance by doing a non-destructive read first... but is that really a requirement? Is there a single-pass algorithm that would work?
Yes, I could modify the flow so that it processes only one call at a time and puts the trigger message back on a queue that the flow itself listens to with an MQInput node as an alternative input, or propagates to a TimeoutControl node if the endpoint goes down again before all the messages have been processed.
But if there are tens of thousands of messages in the process queue, a round trip via a queue for each message will cause a huge overhead (on top of the even bigger overhead caused by the web service calls). So the optimal solution would be to process only a 'safe' number of calls within one loop... which brings us back to the question of releasing the parsers.
As a matter of fact, I think the round trip would nicely throttle the process, so that we won't have to route the web service calls via a throttling gateway to avoid overwhelming the endpoint.
mqjeff wrote:
I would be tempted to use a DB table instead of a queue for this.
I wouldn't.
Last edited by Esa on Wed May 30, 2012 7:17 am; edited 1 time in total
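A sketch of the 'safe batch' variant described above; the limit, module and terminal names, and the looping arrangement are assumptions. The triggering Compute node propagates to the MQGet node a bounded number of times per trigger, and the MQGet node's No message terminal deals with the queue running dry:

Code:
-- Trigger handler (sketch): drive the MQGet node at most batchLimit times
-- per trigger message, then let the flow end so the next trigger starts a
-- fresh invocation with fresh parsers.
CREATE COMPUTE MODULE ProcessBatch
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    DECLARE batchLimit INTEGER 1000;  -- the 'safe' number of calls per loop
    DECLARE i INTEGER 1;
    WHILE i <= batchLimit DO
      -- Each propagation drives the MQGet node and the rest of the flow
      -- once; control returns here when the downstream nodes complete.
      -- DELETE NONE keeps the trees so the same trigger can be sent again.
      PROPAGATE TO TERMINAL 'out' DELETE NONE;
      SET i = i + 1;
    END WHILE;
    RETURN FALSE;
  END;
END MODULE;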
Vitor
Posted: Wed May 30, 2012 7:13 am
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
Esa wrote:
mqjeff wrote:
I would be tempted to use a DB table instead of a queue for this.
I wouldn't.
I would as well. What's your objection?
Someone will be along in a moment to espouse the virtues of solidDB, and in this context it's a very good suggestion. But even a more conventional database would offer value here.
_________________
Honesty is the best policy.
Insanity is the best defence.
lancelotlinc
Posted: Wed May 30, 2012 7:16 am
Jedi Knight
Joined: 22 Mar 2010 Posts: 4941 Location: Bloomington, IL USA
Vitor wrote:
Someone will be along in a moment to espouse the virtues of solidDB, and in this context it's a very good suggestion. But even a more conventional database would offer value here.
Couldn't have said it better...
_________________
http://leanpub.com/IIB_Tips_and_Tricks
Save $20: Coupon Code: MQSERIES_READER
Esa
Posted: Wed May 30, 2012 7:22 am
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
Vitor wrote:
I would as well. What's your objection?
Do you think selecting one row at a time from a database within a loop is better than calling MQGet within a loop? Why would it be?
lancelotlinc
Posted: Wed May 30, 2012 7:30 am
Jedi Knight
Joined: 22 Mar 2010 Posts: 4941 Location: Bloomington, IL USA
Esa wrote:
Vitor wrote:
I would as well. What's your objection?
Do you think selecting one row at a time from a database within a loop is better than calling MQGet within a loop? Why would it be?
With a database, you have more flexibility over which data elements inside the payload are indexed. With larger datasets a database would also be more efficient, since an index tuned for the query is faster than browsing a queue and inspecting each payload.
_________________
http://leanpub.com/IIB_Tips_and_Tricks
Save $20: Coupon Code: MQSERIES_READER
Vitor
Posted: Wed May 30, 2012 7:31 am
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
Esa wrote:
Do you think selecting one row at a time from a database within a loop is better than calling MQGet within a loop?
Yes.
Esa wrote:
Why would it be?
Because database software is designed and written to find specific rows very quickly; aside from any code-level optimizations, it has indexes and other structures for locating items.
WMQ is not a database. It is written to destructively read the first message in a queue. If you browse and read a queue, it is effectively doing a table scan, and that doesn't perform well.
_________________
Honesty is the best policy.
Insanity is the best defence.
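As a rough sketch of the indexed retrieval being described (the table, columns and DB2-style FETCH FIRST clause are assumptions, not anything from the thread), a flow could pull the next pending call with a PASSTHRU statement and let an index over (ENDPOINT, STATUS, RECEIVED_TS) do the locating:

Code:
-- Sketch: fetch the oldest pending call for one endpoint via an index,
-- instead of browsing a queue. Table and column names are invented.
SET Environment.Variables.NextCall[] =
    PASSTHRU('SELECT MSG_ID, PAYLOAD FROM RETRY_CALLS ' ||
             'WHERE ENDPOINT = ? AND STATUS = ''PENDING'' ' ||
             'ORDER BY RECEIVED_TS FETCH FIRST 1 ROWS ONLY',
             Environment.Variables.Endpoint);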
Esa
Posted: Wed May 30, 2012 7:43 am
Grand Master
Joined: 22 May 2008 Posts: 1387 Location: Finland
I asked this from a Message Broker point of view. Have you implemented a flow that fetches thousands of rows from a database one at a time and parses each of them into a message? Can you do it without crashing the EG?
Probably yes. Why should it not be possible with the MQGet node?
I must admit that I (once again) used a real-life scenario as camouflage for a theoretical question that has puzzled me for months (and is not related only to the MQGet node):
When does a parser get released?
My guess is that a parser gets released when the propagation made by the node that created it returns.
Is this a correct assumption?
mqjeff
Posted: Wed May 30, 2012 8:05 am
Grand Master
Joined: 25 Jun 2008 Posts: 17447
You will have more success examining a service trace than asking for an informal opinion.
I would not guess that a parser gets released based on anything to do with a propagate; I'd guess it has to do with when the message it is associated with is freed.
I suspect that you'll run into orphaned parsers if you continually CREATE FIELD ... DOMAIN into a persistent tree (the local or global environment) rather than into a message, because each time you do, you create a new parser and orphan the existing one.
If you do that to OutputRoot instead, it's more likely that OutputRoot will get cleared, particularly if you specify the right options on the PROPAGATE.
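A short sketch of the contrast being described; the field names, CCSID and the assumption of a BLOB input message are mine, not from the post. The first form repeatedly attaches a new parser to a long-lived tree; the second ties the parser to the output message:

Code:
DECLARE rawMsg BLOB InputRoot.BLOB.BLOB;  -- incoming bit stream (assumed BLOB input)

-- Anti-pattern (sketch): each pass through a loop parses into the global
-- Environment, so every pass creates a new parser that stays owned by a
-- tree which lives as long as the flow instance.
CREATE LASTCHILD OF Environment.Variables DOMAIN('XMLNSC')
    PARSE(rawMsg CCSID 1208);

-- Alternative (sketch): parse into OutputRoot instead, so the parser is
-- tied to the output message rather than to a persistent tree.
CREATE LASTCHILD OF OutputRoot DOMAIN('XMLNSC')
    PARSE(rawMsg CCSID 1208);
PROPAGATE TO TERMINAL 'out' DELETE DEFAULT;  -- with appropriate DELETE options, as noted above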