Cogito-Ergo-Sum
Posted: Mon Feb 07, 2011 9:12 pm    Post subject: MQRC_SYNCPOINT_LIMIT_REACHED : How to insert a MQCMIT ?
Master
Joined: 07 Feb 2006    Posts: 293    Location: Bengaluru, India
For wonder's sake, I created a database table with around 20,000 rows to test one particular input message. The message flow uses the input message to select all 20,000 rows and write them out through an MQOutput node. This is where I encounter an MQRC_SYNCPOINT_LIMIT_REACHED error.
The look-up is being done in a Mapping node. How do I ensure that a commit is done after every n rows selected, or after every n MQPUTs? The configurable properties for committing seem to apply only to input messages. Is there such a thing as an ESQL COMMIT that will commit the MQPUT? I have been looking through the ESQL reference but I do not think I have located it.
Edit: Slight grammatical change.
_________________
ALL opinions are welcome.
-----------------------------
Debugging tip: When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
---Sherlock Holmes
Last edited by Cogito-Ergo-Sum on Mon Feb 07, 2011 11:06 pm; edited 1 time in total
fjb_saper
Posted: Mon Feb 07, 2011 10:05 pm    Post subject: Re: MQRC_SYNCPOINT_LIMIT_REACHED : How to insert a MQCMIT ?
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI,NY
Cogito-Ergo-Sum wrote:
For wonder's sake, I created a database table with around 20,000 rows for one particular message. The message flow would use the input message to select all the 20,000 rows and write them out through an MQOutput node. This is where I am encountering an MQRC_SYNCPOINT_LIMIT_REACHED message.
The look-up is being done in a Mapping node. How do I ensure that a commit is done after every n rows selected, or after every n MQPUTs? The configurable properties for committing seem to apply only to input messages. Is there such a thing as an ESQL COMMIT that will commit the MQPUT? I have been looking through the ESQL reference but I do not think I have located it.
Seems bizarre. It looks like each row in the DB is retrieved individually...
You might want to open a PMR on this...
_________________
MQ & Broker admin
Cogito-Ergo-Sum
Posted: Mon Feb 07, 2011 11:05 pm    Post subject:
Master
Joined: 07 Feb 2006    Posts: 293    Location: Bengaluru, India
I changed the Transaction property of the MQInput and MQOutput nodes to 'No', and it seems to work.
_________________
ALL opinions are welcome.
-----------------------------
Debugging tip: When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
---Sherlock Holmes
fatherjack
Posted: Tue Feb 08, 2011 1:57 am    Post subject:
Knight
Joined: 14 Apr 2010    Posts: 522    Location: Craggy Island
Cogito-Ergo-Sum wrote:
I changed the Transaction property of the MQInput and MQOutput nodes to 'No', and it seems to work.

Well, yes, it would. Your flow is no longer transactional: all your PUTs and GETs are outside transaction control, so you won't hit any syncpoint limit issues. But if you are putting a message for every row, you need to be aware that if your flow throws an exception, the messages you have already put will have been committed and presumably consumed by the application they are destined for.
_________________
Never let the facts get in the way of a good theory.
Cogito-Ergo-Sum
Posted: Tue Feb 08, 2011 3:58 am    Post subject:
Master
Joined: 07 Feb 2006    Posts: 293    Location: Bengaluru, India
Quote:
be aware that if your flow throws an exception, the messages you have already put will have been committed and presumably consumed by the application they are destined for.

Yes, I am aware of that pitfall, and that is why I am not too comfortable with this option.
_________________
ALL opinions are welcome.
-----------------------------
Debugging tip: When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.
---Sherlock Holmes
fatherjack
Posted: Tue Feb 08, 2011 4:11 am    Post subject:
Knight
Joined: 14 Apr 2010    Posts: 522    Location: Craggy Island
Cogito-Ergo-Sum wrote:
Quote:
be aware that if your flow throws an exception, the messages you have already put will have been committed and presumably consumed by the application they are destined for.
Yes, I am aware of that pitfall, and that is why I am not too comfortable with this option.

But even if there were an option to commit every 'n' rows, you'd still have the same issue.
You could try increasing MAXUMSGS on the queue manager.
_________________
Never let the facts get in the way of a good theory.
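For reference, MAXUMSGS is a queue manager attribute giving the maximum number of uncommitted messages allowed within a single unit of work (the default is 10000, which matches the symptom here). A sketch of raising it in runmqsc — the value 30000 below is only an example, not a recommendation:

```
ALTER QMGR MAXUMSGS(30000)
DISPLAY QMGR MAXUMSGS
```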
smdavies99
Posted: Tue Feb 08, 2011 4:46 am    Post subject:
Jedi Council
Joined: 10 Feb 2003    Posts: 6076    Location: Somewhere over the Rainbow this side of Never-never land.
And depending upon the WMQ logging method, you may well have to increase the number of log files.
_________________
WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt, think and investigate before you ask silly questions.
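For context, on distributed WMQ the log file settings live in the Log stanza of the queue manager's qm.ini. A sketch — the values below are purely illustrative, and the queue manager must be restarted for changes to take effect:

```
Log:
   LogPrimaryFiles=10
   LogSecondaryFiles=20
   LogFilePages=16384
   LogType=CIRCULAR
```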
mqjeff
Posted: Tue Feb 08, 2011 5:29 am    Post subject:
Grand Master
Joined: 25 Jun 2008    Posts: 17447
If you want to use the input node's transaction and still implement cursor-based output, with a fixed number of records in each transaction, then you need to implement this yourself.
Something like fetching the first N rows from the table, putting them out, then putting a new message holding the start point (N+1, likely) onto a queue (either the same queue that started the flow or a different one). You then read this message to figure out where to resume reading from the database, and a new transaction starts for the next N records.
But if you have any requirement at all for these messages to be received "in order", then it's a more complicated problem.
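The continuation-message approach above could be sketched in a Compute node roughly like this. Everything here is hypothetical: the table MY_TABLE, the ID ordering column, the XMLNSC Batch folder, and the terminal wiring are invented for illustration, and the LIMIT/OFFSET paging clause depends on your database — a real flow would use whatever keyed restart predicate the schema supports:

```
-- Sketch only: one input message processes one batch of rows, then
-- re-queues a continuation message so the next batch runs in a new transaction.
CREATE COMPUTE MODULE BatchedSelect_Compute
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- Start row comes from the continuation message, or 1 on the first pass
		DECLARE startRow INTEGER COALESCE(InputRoot.XMLNSC.Batch.StartRow, 1);
		DECLARE batchSize INTEGER 1000;

		-- Fetch one batch only; paging syntax varies by database
		SET Environment.Variables.Result[] =
			PASSTHRU('SELECT * FROM MY_TABLE ORDER BY ID LIMIT ? OFFSET ?',
			         batchSize, startRow - 1);

		DECLARE i INTEGER 1;
		DECLARE total INTEGER CARDINALITY(Environment.Variables.Result[]);
		WHILE i <= total DO
			SET OutputRoot.Properties = InputRoot.Properties;
			SET OutputRoot.MQMD = InputRoot.MQMD;
			SET OutputRoot.XMLNSC.Record = Environment.Variables.Result[i];
			PROPAGATE TO TERMINAL 'out' DELETE NONE;   -- one MQPUT per row
			SET i = i + 1;
		END WHILE;

		IF total = batchSize THEN
			-- More rows may remain: send a continuation message back to
			-- the input queue via an MQOutput wired to the out1 terminal
			SET OutputRoot.Properties = InputRoot.Properties;
			SET OutputRoot.MQMD = InputRoot.MQMD;
			SET OutputRoot.XMLNSC.Batch.StartRow = startRow + batchSize;
			PROPAGATE TO TERMINAL 'out1' DELETE NONE;
		END IF;
		RETURN FALSE;   -- suppress automatic propagation; everything went via PROPAGATE
	END;
END MODULE;
```

Each flow invocation then commits at most batchSize puts plus one continuation put, which stays well under MAXUMSGS while keeping the MQInput node transactional. As noted, this gives no ordering guarantee across batches if multiple instances run.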