MQSeries.net Forum Index » WebSphere Message Broker (ACE) Support » Best approach for multiple database inserts

Best approach for multiple database inserts
kevinobyrne
PostPosted: Thu May 23, 2013 10:53 am    Post subject: Best approach for multiple database inserts

Voyager

Joined: 17 Jul 2007
Posts: 83
Location: Ireland

I'm on WMB 7.0.0.5 and Oracle 11.2

My flow processes an input message which contains ~10,000 elements. Each element is inserted into a database table using a stored procedure. This is taking a long time and it appears to be the database transaction which is taking up nearly all of the time.

Here's my flow:

Code:
MQInput -> Compute (MRM to XMLNSC) -> Compute (DB insert) -> XMLNSC to MRM (SWIFT) -> MQOutput


(I have multiple different MRM inputs/MQInput nodes coming in, which is why I map to XMLNSC.)

I explored the option of using INSERT ALL with PASSTHRU, sending 500 rows at a time as suggested in this thread:
http://www.mqseries.net/phpBB2/viewtopic.php?t=46360
but building the query seems to take a long time.
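Roughly what I was building is sketched below (BATCH_TABLE, its columns and the Element structure are made-up names for this post; real code would also need to escape the inlined values, or use fixed-size batches with ? markers):

Code:
-- Batch rows into one Oracle INSERT ALL statement via PASSTHRU.
DECLARE stmt CHARACTER 'INSERT ALL';
DECLARE batched INTEGER 0;
DECLARE row REFERENCE TO InputRoot.XMLNSC.Msg.Element[1];
WHILE LASTMOVE(row) DO
    SET stmt = stmt || ' INTO BATCH_TABLE (REF_ID, AMOUNT) VALUES ('''
                    || row.RefId || ''', '
                    || CAST(row.Amount AS CHARACTER) || ')';
    SET batched = batched + 1;
    IF batched = 500 THEN
        -- Oracle's INSERT ALL requires a terminating subquery
        PASSTHRU(stmt || ' SELECT * FROM DUAL');
        SET stmt = 'INSERT ALL';
        SET batched = 0;
    END IF;
    MOVE row NEXTSIBLING REPEAT TYPE NAME;
END WHILE;
IF batched > 0 THEN
    PASSTHRU(stmt || ' SELECT * FROM DUAL');
END IF;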

My DBA asked if I could send an array to a procedure, but this doesn't seem to be possible:
http://www.mqseries.net/phpBB2/viewtopic.php?t=46360

Can anyone suggest a better approach? Has a feature been introduced recently which might be useful?

Thanks for your help.
lancelotlinc
PostPosted: Thu May 23, 2013 11:28 am

Jedi Knight

Joined: 22 Mar 2010
Posts: 4941
Location: Bloomington, IL USA

A. Add a Compute node in front of the database Compute node that you use to interact with the database.

B. Pulse the database Compute node from the new Compute node once for each record, using PROPAGATE (see the sketch after this list).

C. Make sure you set Transaction to Commit on your database Compute node, and on all other nodes in your flow that interact with your database.

D. Have the database Compute node process one record per pulse.

E. If your SQL is written well, you should be able to achieve the same latency I do: INTERVAL '0.009147' SECOND per row, which works out to roughly 109 rows per second.
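A rough sketch of steps B and D (the field and table names here are invented; adapt them to your message set):

Code:
-- Splitter Compute node (step B): one PROPAGATE per record.
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    DECLARE row REFERENCE TO InputRoot.XMLNSC.Msg.Element[1];
    -- Carry the headers along in case the records continue to an MQOutput.
    SET OutputRoot.Properties = InputRoot.Properties;
    WHILE LASTMOVE(row) DO
        SET OutputRoot.XMLNSC.Record = row;      -- copy one subtree
        PROPAGATE TO TERMINAL 'out' DELETE NONE; -- keep OutputRoot for reuse
        MOVE row NEXTSIBLING REPEAT TYPE NAME;
    END WHILE;
    RETURN FALSE;  -- suppress the default final propagation
END;

-- Database Compute node (step D), in its own module: one row per pulse.
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    INSERT INTO Database.MY_TABLE (REF_ID, AMOUNT)
        VALUES (InputRoot.XMLNSC.Record.RefId,
                InputRoot.XMLNSC.Record.Amount);
    RETURN TRUE;
END;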
mqjeff
PostPosted: Thu May 23, 2013 11:28 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

You could look at serializing the XMLNSC message and passing it as a single parameter to the stored procedure call, and having the procedure parse it on the database side.

Otherwise you're pretty much in the land of JCNs.
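Roughly like this; the schema and procedure name are invented here, and the Oracle side would then shred the document itself (with XMLTABLE, for instance):

Code:
-- ESQL declaration mapping to a (made-up) Oracle procedure that
-- accepts the whole document as a single CLOB parameter.
CREATE PROCEDURE insertFromXml ( IN xmlDoc CHARACTER )
    LANGUAGE DATABASE
    EXTERNAL NAME "MYSCHEMA.INSERT_FROM_XML";

-- Compute node (Data Source set to the Oracle DSN):
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
    -- Serialize the parsed XMLNSC tree back to a bitstream...
    DECLARE bits BLOB ASBITSTREAM(InputRoot.XMLNSC OPTIONS FolderBitStream CCSID 1208);
    -- ...and hand the whole document to the database in one call.
    DECLARE doc CHARACTER CAST(bits AS CHARACTER CCSID 1208);
    CALL insertFromXml(doc);
    RETURN TRUE;
END;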
kash3338
PostPosted: Thu May 23, 2013 7:09 pm

Shaman

Joined: 08 Feb 2009
Posts: 709
Location: Chennai, India

Are you going to insert the XML message itself (as an XML type column in Oracle) into the DB? If so, the approach suggested by lancelotlinc would be a good one.

But if you are going to insert the values of the MRM elements, then you can have an Oracle stored procedure which receives the complete message and takes care of the multiple inserts itself using Oracle's INSERT ALL. In that case, the work done in your flow will be minimal.

And if you are not going to insert the XML, why are you converting your MRM message to XML first before doing all this DB work? Why not use the MRM parser directly?
lancelotlinc
PostPosted: Fri May 24, 2013 4:18 am

Jedi Knight

Joined: 22 Mar 2010
Posts: 4941
Location: Bloomington, IL USA

The OP may find the slowness is attributable to disk thrashing by virtual memory on his database server, caused by throwing the whole payload at the database server in one shot.

Going row-by-row allows the OP to avoid that virtual-memory pressure on the database server, thereby increasing the performance of the whole operation. This technique is useful irrespective of the input format, MRM or XML.

Using the steps I outlined, the OP can achieve an insert rate of roughly 100 rows per second. His load of 10,000 rows would therefore take only about 100 seconds.

I think the OP may find that throwing all 10,000 rows at the database in one shot, even using INSERT ALL and even using MRM, will be slower, because of the inefficient way the database server manages memory for such an operation compared with the way it manages memory in a row-by-row context.

The OP may also wish to recognize that code reuse is not as important as performance, and therefore keep one separate flow for each individual input format (one for MRM, one for XML) rather than funnelling all inputs into one flow. Converting MRM data into XML is wasted effort, and therefore wasted performance; skipping this step will decrease latency.

The next option would be solidDB.
mqjeff
PostPosted: Fri May 24, 2013 4:36 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

This does assume that the performance of the message flow itself, outside of the database interactions, has been optimized in the first place.

By, for example, doing the basic and obvious first step: never using [n] to reference elements in ESQL loops (see the sketch below).
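For example (made-up field names), the difference between these two loops:

Code:
-- Slow: each Element[i] lookup re-walks the siblings from the start,
-- so the loop is O(n^2) across 10,000 elements.
DECLARE i INTEGER 1;
DECLARE total INTEGER CARDINALITY(InputRoot.XMLNSC.Msg.Element[]);
WHILE i <= total DO
    SET OutputRoot.XMLNSC.Out.Item[i] = InputRoot.XMLNSC.Msg.Element[i];
    SET i = i + 1;
END WHILE;

-- Fast: a reference variable keeps its place, so the walk is O(n).
DECLARE row REFERENCE TO InputRoot.XMLNSC.Msg.Element[1];
WHILE LASTMOVE(row) DO
    CREATE LASTCHILD OF OutputRoot.XMLNSC.Out AS newItem NAME 'Item';
    SET newItem = row;  -- copies the whole subtree
    MOVE row NEXTSIBLING REPEAT TYPE NAME;
END WHILE;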

I suspect that the relative performance of 10,000 individual transactions versus a smaller set of bulk commits depends heavily on the specific database technology in use, the configuration of the relevant database instances, and the OS and hardware behind them.

Assuming that other performance tuning has already been done, if it has been determined that the next step is to convert from sending individual transactions to using bulk transactions, there are limited functions for doing that in ESQL.

It's not clear that offloading that work from Broker onto some kind of solidDB back-end adapter, just to ensure that all the data ends up in the actual system of record, is a useful suggestion in most places; particularly where solidDB is not already heavily in use and being paid for.
kash3338
PostPosted: Fri May 24, 2013 10:08 pm

Shaman

Joined: 08 Feb 2009
Posts: 709
Location: Chennai, India

lancelotlinc wrote:
I think the OP may find that throwing all 10,000 rows at the database in one shot, even using INSERT ALL and even using MRM, will be slower, because of the inefficient way the database server manages memory for such an operation compared with the way it manages memory in a row-by-row context.


It also depends on how the Oracle stored procedure is written. There are many ways to tune stored procedures that insert large amounts of data. The database server configuration is also a major factor in the performance of the inserts. With a reasonably good database server, it is better to go with stored procedures than with sending individual transactions from ESQL. I agree with the point made by mqjeff here:

Quote:
Assuming that other performance tuning has already been done, if it has been determined that the next step is to convert from sending individual transactions to using bulk transactions, there are limited functions for doing that in ESQL.
lancelotlinc
PostPosted: Tue May 28, 2013 3:21 am

Jedi Knight

Joined: 22 Mar 2010
Posts: 4941
Location: Bloomington, IL USA

kash3338 wrote:
lancelotlinc wrote:
I think the OP may find that throwing all 10,000 rows at the database in one shot, even using INSERT ALL and even using MRM, will be slower, because of the inefficient way the database server manages memory for such an operation compared with the way it manages memory in a row-by-row context.


It also depends on how the Oracle stored procedure is written. There are many ways to tune stored procedures that insert large amounts of data. The database server configuration is also a major factor in the performance of the inserts. With a reasonably good database server, it is better to go with stored procedures than with sending individual transactions from ESQL. I agree with the point made by mqjeff here:

Quote:
Assuming that other performance tuning has already been done, if it has been determined that the next step is to convert from sending individual transactions to using bulk transactions, there are limited functions for doing that in ESQL.


While it seems that your points are valid, both yours and mqjeff's, the facts don't bear that out. I have actually tried it both ways, and found the direct row-by-row insert from ESQL to be 32 percent faster than calling the same logic through a stored procedure as you describe.

This is counter-intuitive, I agree. There are lots of shortcomings in how Message Broker manages database connections, and I would expect the product roadmap to address this.
mqjeff
PostPosted: Tue May 28, 2013 4:36 am

Grand Master

Joined: 25 Jun 2008
Posts: 17447

I think the facts do bear out the points I actually made.

That
  • ESQL does not provide much in the way of a solid mechanism to handle bulk inserts
  • The use of a stored procedure gives one more control over bulk inserts than ESQL does
  • Tuning one's ESQL is the first thing one should do, because in 90% of the cases where this particular symptom is seen - "I'm trying to insert a lot of records and it runs really slow" - it's because someone wrote an ESQL loop that walks 10,000 records using [n], [n+1], which is HORRIBLE for performance.
  • Whether a database is better able to handle bulk inserts versus lots of individual inserts depends quite a bit on the database in question