Messageflow Misbehavior For More than 10k row processing
Pratik611
PostPosted: Mon Nov 19, 2018 8:09 am    Post subject: Messageflow Misbehavior For More than 10k row processing

Novice

Joined: 27 Jul 2014
Posts: 17

Hi Guys,

I've got a requirement where I have to process, on average, 1 lakh records from a DB in my message flow.
Approach:
1. Run a SELECT query to fetch all records into a ROW variable
2. Run a WHILE loop to process each record from the ROW variable and propagate it to the downstream flow based on a count.
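
In ESQL it looks roughly like this (only a sketch; the table and column names below are examples, not the real ones):

Code:
-- Simplified sketch of the Compute node (table/column names are examples)
DECLARE resultRow ROW;
DECLARE i INTEGER 1;
DECLARE total INTEGER;

-- Step 1: fetch everything into the ROW variable
SET resultRow.Record[] = SELECT T.* FROM Database.MYSCHEMA.MYTABLE AS T;
SET total = CARDINALITY(resultRow.Record[]);

-- Step 2: loop and propagate each record downstream
WHILE i <= total DO
    SET OutputRoot.XMLNSC.Record = resultRow.Record[i];
    PROPAGATE TO TERMINAL 'out' DELETE NONE;
    SET i = i + 1;
END WHILE;

RETURN FALSE;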

Issue:
The flow only ever processes a maximum of 10,010 records, after which it just stops processing.
No error logs are observed.

Is there some limit here? Either Broker configuration wise or environment wise?

We are running this on IIB V10.0.0.8.

Thanks.
Vitor
PostPosted: Mon Nov 19, 2018 8:34 am    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
Is there some limit here? Either Broker configuration wise or environment wise?


*lakh - Indian term for 100,000


My guess is you're running out of memory; do you detach & delete items from the ROW once they're processed?

Does it always stop in the same place? Does it work if you only have 10,000 records?

What does the user trace say is happening when the flow "stops"?
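
For reference, detaching & deleting as you go looks roughly like this. It's only a sketch and it assumes the rows were fetched under Environment.Variables.Records, so adjust it to wherever yours actually live:

Code:
-- Delete each record once it has been propagated, so the tree (and the memory
-- backing it) shrinks as you go. Assumes the rows sit under
-- Environment.Variables.Records - adjust to your own structure.
DECLARE recRef  REFERENCE TO Environment.Variables.Records.Record[1];
DECLARE doneRef REFERENCE TO Environment.Variables.Records;
WHILE LASTMOVE(recRef) DO
    SET OutputRoot.XMLNSC.Record = recRef;
    PROPAGATE TO TERMINAL 'out' DELETE NONE;
    MOVE doneRef TO recRef;                     -- remember the record just processed
    MOVE recRef NEXTSIBLING REPEAT TYPE NAME;   -- step to the next record first
    DELETE FIELD doneRef;                       -- then free the one we finished with
END WHILE;
RETURN FALSE;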
_________________
Honesty is the best policy.
Insanity is the best defence.
Pratik611
PostPosted: Mon Nov 19, 2018 8:47 am    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Novice

Joined: 27 Jul 2014
Posts: 17

Vitor wrote:
Pratik611 wrote:
Is there some limit here? Either Broker configuration wise or environment wise?


*lakh - Indian term for 100,000


My guess is you're running out of memory; do you detach & delete items from the ROW once they're processed?

Does it always stop in the same place? Does it work if you only have 10,000 records?

What does the user trace say is happening when the flow "stops"?


Hey Vitor,

To answer your questions:

We don't detach and delete the items from the row once processed.
It does always stop at the same place i.e. 10,010 records.

User trace doesn't say anything. It just stops and the flow is back to waiting for the input node to send data.

By Memory you mean heap size for the EG?
Vitor
PostPosted: Mon Nov 19, 2018 8:59 am    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
We don't detach and delete the items from the row once processed.


Why not? It's best practice for large record sets.


Pratik611 wrote:
It does always stop at the same place i.e. 10,010 records.


And does it work correctly for 10,009 records or lower?


Pratik611 wrote:
By Memory you mean heap size for the EG?


Perhaps, depending on what the downstream flow is doing. My first thought is you've got 10,009 parsed message trees sitting in the main memory and it can't parse the 10,010th one.

But my first thought has been known to be wrong.
_________________
Honesty is the best policy.
Insanity is the best defence.
Pratik611
PostPosted: Mon Nov 19, 2018 9:35 am    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Novice

Joined: 27 Jul 2014
Posts: 17

Vitor wrote:
Pratik611 wrote:
We don't detach and delete the items from the row once processed.


Why not? It's best practice for large record sets.


Pratik611 wrote:
It does always stop at the same place i.e. 10,010 records.


And does it work correctly for 10,009 records or lower?


Pratik611 wrote:
By Memory you mean heap size for the EG?


Perhaps, depending on what the downstream flow is doing. My first thought is you've got 10,009 parsed message trees sitting in the main memory and it can't parse the 10,010th one.

But my first thought has been known to be wrong.


>I'm working on deleting each record as we process it. Let me try that and see if it makes a difference.

>It does work perfectly for 10,009 records or fewer.

>I think your first thought may have hit the bull's eye here. Will try increasing the memory if the deletion doesn't work.
Will increase the JVM for that EG.
Vitor
PostPosted: Mon Nov 19, 2018 10:05 am    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
>I think your first thought may have hit the bull's eye here. Will try increasing the memory if the deletion doesn't work.
Will increase the JVM for that EG.


Bear in mind the parser is typically part of the "main" process and doesn't use the JVM; certainly that's the case if you're running the SELECT and WHILE in an ESQL Compute node (which has no Java component).
_________________
Honesty is the best policy.
Insanity is the best defence.
Vitor
PostPosted: Mon Nov 19, 2018 10:05 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Moved to more relevant section
_________________
Honesty is the best policy.
Insanity is the best defence.
Pratik611
PostPosted: Mon Nov 19, 2018 11:05 am    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Novice

Joined: 27 Jul 2014
Posts: 17

Vitor wrote:
Pratik611 wrote:
>I think your first thought may have hit the bull's eye here. Will try increasing the memory if the deletion doesn't work.
Will increase the JVM for that EG.


Bear in mind the parser is typically part of the "main" process and doesn't use the JVM; certainly that's the case if you're running the SELECT and WHILE in an ESQL Compute node (which has no Java component).


Can you tell me what memory needs to be increased here? I've developed a lot but never dug deep into broker-related config for memory issues.
Vitor
PostPosted: Mon Nov 19, 2018 12:08 pm    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
Can you tell me what memory needs to be increased here? I've developed a lot but never dug deep into broker-related config for memory issues.


It's the memory being allocated by your OS to the DataFlowEngine.exe itself. You don't mention your OS but it's not easy to change on Linux and near impossible on Windows.

More importantly, you say in your original post you've got about 100,000 records (a lakh) to process. So you're running out of memory about one tenth of the way through. I don't think adding 10 times the amount of memory to your process is advisable or even feasible. I accept that it's an old programmers' saw that all performance problems can be solved with more resources, but you're well short of the memory you need and fixing your code is the way to go.
_________________
Honesty is the best policy.
Insanity is the best defence.
Vitor
PostPosted: Mon Nov 19, 2018 12:14 pm    Post subject: Re: Messageflow Misbehavior For More than 10k row processing

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
2. Run a WHILE loop to process each record from the ROW variable and propagate it to the downstream flow based on a count.


One thing you might want to consider if cleaning up the memory doesn't appeal or doesn't help is to split out the processing. Loop round the result set as currently, but process the record as a BLOB (i.e. don't parse it) and write the record straight out as an MQ message (or a uniquely named temporary file with something like a timestamp in it). This way, all the processing is in a discrete flow with discrete memory management, and it only has to handle one record at a time, not 100,000.

This also opens up options to multi-thread your processing, with a single flow instance running down the result set and another flow (with the processing cut and pasted from your current one) running multiple instances.

It's what I'd do. Which doesn't mean it's what you should do.
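
A bare-bones sketch of what that splitter compute might look like (names are illustrative, and how you serialise each record depends on what the second flow expects):

Code:
-- Splitter sketch: one small MQ message per row; all the real processing moves to
-- a second flow reading the output queue, which can run additional instances.
-- Assumes the rows sit under Environment.Variables.Records.
DECLARE recRef REFERENCE TO Environment.Variables.Records.Record[1];
SET OutputRoot.Properties = InputRoot.Properties;
SET OutputRoot.MQMD = InputRoot.MQMD;            -- or build a fresh MQMD if the input is not MQ
WHILE LASTMOVE(recRef) DO
    SET OutputRoot.XMLNSC.Record = recRef;       -- or write it out as a BLOB if the second flow wants raw bytes
    PROPAGATE TO TERMINAL 'out' DELETE NONE;     -- 'out' wired to an MQOutput node
    MOVE recRef NEXTSIBLING REPEAT TYPE NAME;
END WHILE;
RETURN FALSE;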
_________________
Honesty is the best policy.
Insanity is the best defence.
abhi_thri
PostPosted: Tue Nov 20, 2018 1:30 am

Knight

Joined: 17 Jul 2017
Posts: 516
Location: UK

hi...just to add to what Vitor has already mentioned, it is never a good idea to do the whole processing in a single go. For example, you might somehow tune the flow to process 100,000 records, but what if the number of database rows increases tenfold in a few years' time?

When we tackled a similar problem we worked closely with the DBAs (who wrote the stored procedures) to return only a configurable number of rows at a time from the database table, e.g. 100, 1,000, etc. The flow then processes those records, and if the number of records returned by the stored proc equals the configured maximum, the flow sends a trigger message to itself via MQ (rather than wiring a Compute node in a loop, which is worse), starting a new transaction.

This means the memory used by the Integration Server always remains the same and is predictable... not saying that you should do this, just adding another perspective.
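
In outline it's something like this (purely illustrative; the fetch was really a stored procedure call, the SQL below is DB2-style, and the re-trigger goes out through an MQOutput node pointing at the flow's own input queue):

Code:
-- Batching/re-trigger sketch, names and SQL are examples only.
DECLARE maxRows INTEGER 1000;    -- configurable batch size
DECLARE batch ROW;

SET batch.Record[] = PASSTHRU('SELECT * FROM MYSCHEMA.MYTABLE WHERE PROCESSED = ''N'' FETCH FIRST ? ROWS ONLY' VALUES(maxRows));

DECLARE fetched INTEGER CARDINALITY(batch.Record[]);
-- ... process (and mark as processed) the fetched records here ...

IF fetched = maxRows THEN
    -- Batch was full, so there may be more rows: drop a trigger message onto the
    -- flow's own input queue via an MQOutput node, starting a fresh transaction.
    SET OutputRoot.Properties = InputRoot.Properties;
    SET OutputRoot.MQMD = InputRoot.MQMD;
    SET OutputRoot.XMLNSC.Trigger.FetchNextBatch = 'Y';
    PROPAGATE TO TERMINAL 'out1' DELETE NONE;   -- 'out1' wired to the trigger MQOutput
END IF;
RETURN FALSE;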
Pratik611
PostPosted: Wed Nov 21, 2018 5:46 am

Novice

Joined: 27 Jul 2014
Posts: 17

@abhi_thri and @Vitor
Agreed with what you both say regarding memory issues and how people don't consider a long-term view with respect to memory.

However, it turned out the issue was something else altogether.
It seems the MQOutput node I was sending the messages to in the loop had the property "Max Commit Count" set to 10k.

Changed the Transaction property to "No" and it's all good now.

Lesson learnt: don't take any node for granted.

And I'm definitely asking infra to increase the cores/RAM for the server, considering future load.

Thanks guys
Vitor
PostPosted: Wed Nov 21, 2018 6:04 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
Changed the Transaction property to "No" and it's all good now.


Except that a) you've lost the ability to control unit of work and b) you've proved your queue manager is so starved of resources it hangs up if you try and commit a large block of messages.
_________________
Honesty is the best policy.
Insanity is the best defence.
Pratik611
PostPosted: Wed Nov 21, 2018 6:46 am

Novice

Joined: 27 Jul 2014
Posts: 17

Vitor wrote:
Pratik611 wrote:
Changed the Transaction property to "No" and it's all good now.


Except that a) you've lost the ability to control unit of work and b) you've proved your queue manager is so starved of resources it hangs up if you try and commit a large block of messages.


For a) --> Not required here; it's an asynchronous flow.
For b) --> We never tried increasing the commit count because, considering the future scope, the volume may go even higher.
Vitor
PostPosted: Wed Nov 21, 2018 6:54 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

Pratik611 wrote:
For b) --> We never tried increasing the commit count because, considering the future scope, the volume may go even higher.


I'm not suggesting that you should, or that it's a good idea to commit a large number of messages (a lakh's worth) at once. Indeed, I would assert that smaller commit counts are better.

My point is that a 10K commit should have worked. That it didn't indicates a resource shortage on the queue manager side.
_________________
Honesty is the best policy.
Insanity is the best defence.