mpong
Posted: Tue Jun 05, 2018 8:54 am    Post subject: Last Msg
Hello All. I read a whole file, generate a message for each record, and drop each one onto a queue. A second flow reads from the queue and groups the messages based on some condition (a shared ROW used as a cache) before calling the target REST bulk API.
After all the record messages have been dropped (once the WHILE loop completes), I also drop a custom "last" message so that the second flow knows to process the final group and clear the cache.
Everything works fine when the second flow runs on a single thread.
When I increase the instances to 4 or 5, the flow sometimes picks up the custom last message before the remaining record messages, which leaves the final group sitting in the cache unprocessed until the next file arrives.
I am thinking of introducing a delay before dropping the last message, but that still would not guarantee that all messages have been processed in the second flow. Do you have any thoughts?
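To see why the extra instances break this, here is a toy Java sketch (plain JDK, nothing IIB-specific, all names invented): with several consumer threads draining one queue, nothing stops the sentinel "last" message from finishing before earlier records do.
Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class SentinelRace {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 10; i++) queue.add("record-" + i);
        queue.add("LAST"); // the custom "all records sent" marker

        ExecutorService consumers = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            consumers.submit(() -> {
                String msg;
                while ((msg = queue.poll()) != null) {
                    if ("LAST".equals(msg)) {
                        // Other threads may still be working on earlier records,
                        // so flushing the cache here strands the final group.
                        System.out.println("sentinel handled by " + Thread.currentThread().getName());
                    } else {
                        // Simulate slow record processing.
                        try { Thread.sleep(50); } catch (InterruptedException e) { return; }
                        System.out.println(msg + " finished on " + Thread.currentThread().getName());
                    }
                }
            });
        }
        consumers.shutdown();
        consumers.awaitTermination(10, TimeUnit.SECONDS);
    }
}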
Vitor
Posted: Tue Jun 05, 2018 9:37 am    Post subject: Re: Last Msg
mpong wrote:
Do you have any thoughts?
Yes - you have a form of message affinity, where you need a given message to be processed after all the others.
If you need to read the messages off the queue in groups, why not add them to the queue in groups (by which I mean the MQ construct of a group) so that when you've read off the complete group (which MQ will tell you), you call the API.
No cache, no second flow, no problems.
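A minimal sketch of that construct using the IBM MQ classes for Java rather than message flow nodes (the queue manager, queue names, and method names here are illustrative assumptions): the producer puts the records in logical order and flags the final one, and the consumer asks MQ to offer a group only once every member has arrived, so the last message can never overtake its group.
Code:
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

import java.util.List;

public class GroupedPutAndGet {

    // Producer side: putting in logical order lets the queue manager assign
    // the group id and sequence numbers; every message is flagged as part of
    // the group and the final one as last-in-group.
    static void putGroup(MQQueueManager qmgr, String queueName, List<String> records) throws Exception {
        MQQueue out = qmgr.accessQueue(queueName, CMQC.MQOO_OUTPUT);
        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_LOGICAL_ORDER;
        for (int i = 0; i < records.size(); i++) {
            MQMessage msg = new MQMessage();
            msg.format = CMQC.MQFMT_STRING;
            msg.messageFlags = (i == records.size() - 1)
                    ? CMQC.MQMF_LAST_MSG_IN_GROUP
                    : CMQC.MQMF_MSG_IN_GROUP;
            msg.writeString(records.get(i));
            out.put(msg, pmo);
        }
        out.close();
    }

    // Consumer side: MQGMO_ALL_MSGS_AVAILABLE means a group is only offered
    // once every member has arrived, and MQGMO_LOGICAL_ORDER delivers it in
    // sequence -- this is the "MQ will tell you" part.
    static void getGroupAndCallApi(MQQueueManager qmgr, String queueName) throws Exception {
        MQQueue in = qmgr.accessQueue(queueName, CMQC.MQOO_INPUT_AS_Q_DEF);
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_LOGICAL_ORDER | CMQC.MQGMO_ALL_MSGS_AVAILABLE | CMQC.MQGMO_WAIT;
        gmo.waitInterval = 30_000;
        boolean lastSeen = false;
        while (!lastSeen) {
            MQMessage msg = new MQMessage();
            in.get(msg, gmo);
            String record = msg.readStringOfByteLength(msg.getMessageLength());
            // ... accumulate the record for the bulk API call ...
            lastSeen = (msg.messageFlags & CMQC.MQMF_LAST_MSG_IN_GROUP) != 0;
        }
        in.close();
        // Complete group read: call the bulk REST API here.
    }
}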
Or you could still add the group messages to a cache, trigger the API when the last group message comes through, and set the cache records to auto-expire, removing the need for the 2nd flow.
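A plain-Java sketch of that auto-expiring cache idea (all names invented; this stands in for whatever cache the flow actually uses): complete groups flush to the bulk API immediately, and a background sweep flushes any group whose last message never arrived, so nothing is stranded.
Code:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExpiringGroupCache {

    // One cached group: its accumulated records and when it was created.
    private record Entry(List<String> records, long createdAtMillis) {}

    private final ConcurrentHashMap<String, Entry> groups = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringGroupCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
        // Background sweep: handle groups whose last message never arrived.
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                this::evictExpired, ttlMillis, ttlMillis, TimeUnit.MILLISECONDS);
    }

    public void add(String groupKey, String record, boolean lastInGroup) {
        Entry e = groups.computeIfAbsent(groupKey, k ->
                new Entry(Collections.synchronizedList(new ArrayList<>()), System.currentTimeMillis()));
        e.records().add(record);
        if (lastInGroup && groups.remove(groupKey, e)) {
            callBulkApi(groupKey, e.records()); // group complete: flush now
        }
    }

    private void evictExpired() {
        long now = System.currentTimeMillis();
        groups.forEach((key, e) -> {
            if (now - e.createdAtMillis() > ttlMillis && groups.remove(key, e)) {
                callBulkApi(key, e.records()); // stale group: flush anyway
            }
        });
    }

    private void callBulkApi(String groupKey, List<String> records) {
        System.out.printf("bulk API call: group=%s records=%d%n", groupKey, records.size());
    }
}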
mpong
Posted: Tue Jun 05, 2018 11:08 am
Quote:
If you need to read the messages off the queue in groups, why not add them to the queue in groups
I can't add them as a group because the records don't arrive grouped in the file. Also, each record is validated against the API for existence, and only valid records are sent to the queue.
Quote:
you could still add the group messages to a cache, trigger the API when the last group message comes through
There is no specific last group; that is why I send the custom last message after all the record messages have gone to the MQ output. Rather than auto-expiry, if the flow reads the custom last message after all the record messages, it ends gracefully. Let me think it through.
Vitor
Posted: Tue Jun 05, 2018 11:55 am
mpong wrote:
I can't add them as a group because the records don't arrive grouped in the file.
I'm told they've invented utilities which sort files.
mpong wrote:
Also, each record is validated against the API for existence, and only valid records are sent to the queue.
So your first flow screens them against the API.
mpong wrote:
Rather than auto-expiry, if the flow reads the custom last message after all the record messages, it ends gracefully.
And what I'm saying is that if you pre-group the data, you don't need a specific function to clear the cache and can let it auto-expire.
mpong wrote:
Let me think it through.
Points to ponder then:
- how many records are we talking about here?
- why are they added to the database in groups?
- can they be added as a single call to this "bulk" API?
- what's the impact if they're added individually?
- what does this API do that IIB can't do with a direct database connection?
mpong
Posted: Tue Jun 05, 2018 12:49 pm
Quote:
I'm told they've invented utilities which sort files.
They would be reinventing the wheel.
Quote:
- how many records are we talking about here?
100k.
Quote:
- why are they added to the database in groups?
There is no database set up here.
Quote:
- can they be added as a single call to this "bulk" API?
No; each group goes to a different REST API, and the maximum number of records in a group is 50k.
Quote:
- what's the impact if they're added individually?
The maximum number of bulk API calls per day is 2,000 (Oracle Eloqua), and we do not want to exceed that.
Quote:
- what does this API do that IIB can't do with a direct database connection?
There is no direct DB connection; the services are exposed only through the API.
mpong
Posted: Tue Jun 05, 2018 12:57 pm
Using timer nodes plus an MQGet node (with the No Message terminal wired), I am able to drop the last message after all the data record messages, but it is still worth hunting down the source team to get a sorted file.
Thanks for your response, Vitor.
Vitor
Posted: Tue Jun 05, 2018 1:07 pm
mpong wrote:
Using timer nodes plus an MQGet node (with the No Message terminal wired), I am able to drop the last message
Do not try to read 100K messages with an MQGet node. You will blow the execution group out of memory unless you have an unfeasibly large amount of heap.
mpong
Posted: Tue Jun 05, 2018 1:45 pm
No, no. I am not reading 100K messages with the MQGet node; that would crash the EG. After processing all the messages, I check whether any messages are left on MQ using an MQGet node with the No Message terminal connected.
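Outside IIB, the equivalent of that No Message terminal is a non-destructive browse that raises MQRC_NO_MSG_AVAILABLE (2033) when the queue is empty; a small sketch with the IBM MQ classes for Java (queue manager and queue names assumed):
Code:
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class DrainCheck {

    // Browse (don't consume) the first message; reason code 2033 means the
    // queue is empty, which is when the custom "last" message is safe to
    // send -- modulo the caveat that "read off" is not the same as "processed".
    static boolean queueIsEmpty(MQQueueManager qmgr, String queueName) throws MQException {
        MQQueue q = qmgr.accessQueue(queueName, CMQC.MQOO_BROWSE);
        try {
            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_BROWSE_FIRST | CMQC.MQGMO_NO_WAIT;
            q.get(new MQMessage(), gmo);
            return false; // a message was there to browse
        } catch (MQException e) {
            if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) {
                return true; // nothing left on the queue
            }
            throw e; // any other MQ error is a real failure
        } finally {
            q.close();
        }
    }
}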
Vitor
Posted: Wed Jun 06, 2018 5:16 am
mpong wrote:
No, no. I am not reading 100K messages with the MQGet node; that would crash the EG. After processing all the messages, I check whether any messages are left on MQ using an MQGet node with the No Message terminal connected.
That rather assumes not only that each message has been read off but also that its processing has finished successfully; I imagine that's where the timer node comes in.
Are you concerned about records that fail to process and/or unusually long API response times?