IIB_Intel
Posted: Wed Aug 23, 2017 12:33 pm Post subject: Limit of IIB monitoring events emitted per transaction
Acolyte
Joined: 07 May 2015 Posts: 64
"Is there a limit on the maximum number of monitoring events that can be emitted by a message flow for a transaction?"
I think it is 10k, as we are seeing that one transaction cannot emit more than 10k events when we process 20k records in a loop.
Basically we read a file, read all the records from it, and update a DB. We use monitoring events for logging and auditing. Since the entire file is read under one transaction, all the monitoring events are emitted at the end of the transaction. Of course the emitted events are published to a topic, and we have a subscription defined on that topic to receive those events.
Is it documented anywhere that there is a limit on the maximum number of monitoring events that can be emitted by a message flow for a transaction?
IIB_Intel
Posted: Wed Aug 23, 2017 2:10 pm Post subject:
Acolyte
Joined: 07 May 2015 Posts: 64
Does MAXUMSGS have anything to do with this?
fjb_saper
Posted: Wed Aug 23, 2017 5:53 pm Post subject:
Grand High Poobah
Joined: 18 Nov 2003 Posts: 20756 Location: LI,NY
IIB_Intel wrote:
Does MAXUMSGS have anything to do with this?

I would think it does. What happens if, say, you up this limit to 50000 (20000 plus room to spare)?
_________________
MQ & Broker admin
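If MAXUMSGS turns out to be the culprit, it can be inspected and raised with MQSC. A minimal sketch, assuming a queue manager named QM1 (the queue manager name and the 50000 figure suggested above are illustrative, not the OP's actual values):

```
* Show the current limit on uncommitted messages per unit of work
DISPLAY QMGR MAXUMSGS

* Raise it to cover 20000 records plus their monitoring events, with headroom
ALTER QMGR MAXUMSGS(50000)
```

Note that MAXUMSGS caps the total uncommitted messages within one unit of work, which is why in-syncpoint event publications count against it alongside the business messages.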
timber
Posted: Thu Aug 24, 2017 1:40 am Post subject:
Grand Master
Joined: 25 Aug 2015 Posts: 1292
If you divide the message flow into two flows (a file splitter and a DB writer) then the problem goes away. You can use persistent MQ messages to join the two flows.
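The splitter half of that design might be sketched as an ESQL Compute node that propagates one MQ message per record. This is only a sketch: the DFDL structure (FileRecords/Record) and the terminal name are assumptions, not the OP's actual message model:

```
CREATE COMPUTE MODULE FileSplitter_Compute
	CREATE FUNCTION Main() RETURNS BOOLEAN
	BEGIN
		-- Walk the parsed records and propagate each one as its own MQ message
		DECLARE rec REFERENCE TO InputRoot.DFDL.FileRecords.Record[1];
		WHILE LASTMOVE(rec) DO
			SET OutputRoot.Properties = InputRoot.Properties;
			SET OutputRoot.MQMD = InputRoot.MQMD;
			SET OutputRoot.DFDL.Record = rec;
			-- DELETE NONE keeps OutputRoot so we only rebuild the record each time
			PROPAGATE TO TERMINAL 'out' DELETE NONE;
			MOVE rec NEXTSIBLING REPEAT TYPE NAME;
		END WHILE;
		RETURN FALSE; -- nothing left to propagate at the end
	END;
END MODULE;
```

With one record per message, the downstream DB-writer flow commits (and emits its monitoring events) per record, so no single unit of work ever approaches the limit.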
zpat
Posted: Thu Aug 24, 2017 2:03 am Post subject:
Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
Yes, I prefer the model where a file is converted to messages (one message per record) in an initial flow. It is more granular for any recovery as well.
If you want to process the messages as a group then you can just use the message grouping options, but this then re-introduces the transaction limit.
If you handle all the records in one transaction then there is always going to be a limit that might get breached, plus your MQ logs might get filled.
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
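For completeness, the message grouping option mentioned above means setting the MQMD group fields as the splitter propagates each record. A hypothetical fragment (the choice of GroupId and the `recordNumber` counter are illustrative only, and this assumes the MQ constants are available to ESQL as usual):

```
-- Inside the splitter loop: mark each record as a member of one MQ group
SET OutputRoot.MQMD.GroupId = InputRoot.MQMD.MsgId;  -- reuse the file message's MsgId as the group id (illustrative)
SET OutputRoot.MQMD.MsgSeqNumber = recordNumber;     -- 1-based counter maintained by the loop
SET OutputRoot.MQMD.MsgFlags = MQMF_MSG_IN_GROUP;
-- ...and on the final record of the file:
SET OutputRoot.MQMD.MsgFlags = MQMF_LAST_MSG_IN_GROUP;
```

As noted above, though, a consumer that waits for the complete group before committing re-introduces the same transaction-size limit.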
mqjeff
Posted: Thu Aug 24, 2017 4:11 am Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
zpat wrote:
If you handle all the records in one transaction then there is always going to be a limit that might get breached, plus your MQ logs might get filled.

Yes. The limit that is *most* likely to get breached is the EG memory limit.
_________________
chmod -R ugo-wx /
IIB_Intel
Posted: Thu Aug 24, 2017 5:32 am Post subject:
Acolyte
Joined: 07 May 2015 Posts: 64
The business processing of the file doesn't have any problem; the records are pretty small in size.
It is the monitoring events that are causing the issue, where a flow is not able to emit more than 10k events.
One question I have: a flow always publishes messages to a topic, and each event is an independent event from a publish standpoint, so how come MAXUMSGS imposes this limit on them?
Vitor
Posted: Thu Aug 24, 2017 5:41 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
IIB_Intel wrote:
One question I have: a flow always publishes messages to a topic, and each event is an independent event from a publish standpoint, so how come MAXUMSGS imposes this limit on them?

Because the publications are in the same unit of work as your business process unless you've configured them not to be. It's the same way that one message isn't linked to another and they're put to the queue one at a time, but they're not committed until the UoW closes.
Note that I'm not suggesting you should reconfigure the monitoring events to be outside the UoW; this would mean that in the event of a flow failure, you'd have events for processing that was rolled back.
This is another good reason for using a file splitter to separate records into transactions.
_________________
Honesty is the best policy.
Insanity is the best defence.
mqjeff
Posted: Thu Aug 24, 2017 6:13 am Post subject:
Grand Master
Joined: 25 Jun 2008 Posts: 17447
I'm confused where Pub/Sub comes in.
I would build your flow like this:
FileInput (reads one record at a time, or maybe 10 records, etc.) -> ... -> some output node.
You will use a different DFDL model for the single record / batch of records.
Then, if your business process *really* needs the whole file, you could use a Collector node to gather all the records and then emit them as a single chunk.
But you will still *very likely* run into memory problems *at some point* if you continue to process the whole file as a single message.
_________________
chmod -R ugo-wx /
Vitor
Posted: Thu Aug 24, 2017 6:18 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
mqjeff wrote:
I'm confused where Pub/Sub comes in.

Event messages. The broker publishes them.
No pub/sub in the OP's business flow that I can see.
_________________
Honesty is the best policy.
Insanity is the best defence.
IIB_Intel
Posted: Thu Aug 24, 2017 7:40 am Post subject:
Acolyte
Joined: 07 May 2015 Posts: 64
Yes, pub-sub is only for monitoring events.
zpat
Posted: Thu Aug 24, 2017 9:10 am Post subject:
Jedi Council
Joined: 19 May 2001 Posts: 5866 Location: UK
Vitor wrote:
Because the publications are in the same unit of work as your business process unless you've configured them not to be.

Is there any choice in this matter? I vaguely recall IBM enhancing IIB to allow event messages to be put outside of the syncpoint?
_________________
Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.
Vitor
Posted: Thu Aug 24, 2017 9:29 am Post subject:
Grand High Poobah
Joined: 11 Nov 2005 Posts: 26093 Location: Texas, USA
zpat wrote:
Vitor wrote:
Because the publications are in the same unit of work as your business process unless you've configured them not to be.

Is there any choice in this matter? I vaguely recall IBM enhancing IIB to allow event messages to be put outside of the syncpoint?

Oh yes, it's configurable, but the point I was making is that if you send them out of syncpoint you could get mismatches between the events (which have been sent) and the business processing (which was rolled back).
_________________
Honesty is the best policy.
Insanity is the best defence.
IIB_Intel
Posted: Thu Aug 24, 2017 9:33 am Post subject:
Acolyte
Joined: 07 May 2015 Posts: 64
IBM confirmed that if you are using MQ for the publish/subscribe feature, the maximum number of monitoring events will be limited by the MAXUMSGS property.
As an alternative design, we will be changing the event Unit of Work to "None" instead of "Message Flow" on our events.