Author |
Message
|
santy |
Posted: Tue Jan 20, 2009 9:14 am Post subject: Database insertions by Compute node in 6.0 |
|
|
Centurion
Joined: 03 Nov 2006 Posts: 141
|
Hi,
I'm facing a problem with database insertion by a Compute node in 6.0.
The scenario: I have two Compute nodes; one inserts data into the database and the other reads data from the database based on those messages.
If I play 1000 messages at a time, the INSERT statement performs the insertions as fast as it can, but not all at once. When the other node then tries to retrieve data for a message whose row has not yet been inserted, it throws an exception.
I cannot use BEGIN ATOMIC, and I cannot change the Compute node's Transaction property to 'Commit', because I'm using a PROPAGATE statement in that node. Using those options would mean a lot of code changes, so I'm looking for another approach.
Is there any other way to solve this problem?
Thanks. |
|
Back to top |
|
 |
smdavies99 |
Posted: Tue Jan 20, 2009 10:51 am Post subject: |
|
|
 Jedi Council
Joined: 10 Feb 2003 Posts: 6076 Location: Somewhere over the Rainbow this side of Never-never land.
|
If you can't depend on the inserted value actually being in the DB by the time you want to do a SELECT, you could try it this way:
1) Insert as now.
2) SELECT from the DB without using the column that might get changed.
3) Do another SELECT from the in-memory result set, using the column value that was left out in 2).
Can you explain why you need to read the DB in the way you currently do? _________________ WMQ User since 1999
MQSI/WBI/WMB/'Thingy' User since 2002
Linux user since 1995
Every time you reinvent the wheel the more square it gets (anon). If in doubt think and investigate before you ask silly questions. |
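A minimal ESQL sketch of steps 1) to 3) above; the table name (MY_TABLE), column names (MSG_ID, STATUS, PAYLOAD), and input field paths are invented for illustration:

```esql
-- 1) Insert as now (elsewhere in the flow).

-- 2) Fetch candidate rows WITHOUT filtering on the volatile STATUS column.
SET Environment.Variables.Rows[] =
    SELECT T.MSG_ID, T.STATUS, T.PAYLOAD
    FROM Database.MY_TABLE AS T
    WHERE T.MSG_ID = InputRoot.XMLNSC.Request.MsgId;

-- 3) Filter the in-memory result set on the column that was left out in 2).
SET Environment.Variables.Ready[] =
    SELECT R FROM Environment.Variables.Rows[] AS R
    WHERE R.STATUS = 'AVAILABLE';
```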
|
Back to top |
|
 |
santy |
Posted: Wed Jan 21, 2009 4:04 am Post subject: |
|
|
Centurion
Joined: 03 Nov 2006 Posts: 141
|
We insert data from one Compute node and read it from another. We do it this way because the insert writes to two tables (we insert different data depending on the message), and when the other Compute node reads the data, the next flow, which is triggered by that data, throws an exception because the row it reads still has an 'unavailable' status.
Also, we have two INSERT statements in one Compute node; if one message takes time, another message overtakes it.
We cannot go ahead with your solution because most of the processing depends on exactly that column, which is what triggers the next flow. |
|
Back to top |
|
 |
santy |
Posted: Thu Jan 22, 2009 6:48 am Post subject: |
|
|
Centurion
Joined: 03 Nov 2006 Posts: 141
|
Can anybody please guide me on this problem? |
|
Back to top |
|
 |
mqjeff |
Posted: Thu Jan 22, 2009 7:02 am Post subject: |
|
|
Grand Master
Joined: 25 Jun 2008 Posts: 17447
|
There's absolutely no good reason, that I can think of, for one node to read from a database any information that was populated by the immediately previous node.
Everything that you insert into the database in Node 1 can be stored in the output message of Node 1, or in the LocalEnvironment, or in the global Environment tree.
Aside from the transactional issues that you yourself have run into, it is a significant performance and resource problem to go back out to the database and refetch data that you already have. |
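For example, a sketch of carrying the inserted values forward in the trees instead of re-reading them; the table, column, and field names are invented for illustration:

```esql
-- Insert as before (invented table and columns).
INSERT INTO Database.MY_TABLE (MSG_ID, STATUS)
    VALUES (InputRoot.XMLNSC.Request.MsgId, 'AVAILABLE');

-- Stash the same values for downstream nodes, so nothing has to
-- SELECT back what this node already has in memory.
SET Environment.Variables.Inserted.MsgId  = InputRoot.XMLNSC.Request.MsgId;
SET Environment.Variables.Inserted.Status = 'AVAILABLE';

-- Or carry them in the output message itself.
SET OutputRoot.XMLNSC.Request = InputRoot.XMLNSC.Request;
SET OutputRoot.XMLNSC.Request.Status = 'AVAILABLE';
```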
|
Back to top |
|
 |
radha* |
Posted: Thu Jan 22, 2009 7:12 am Post subject: |
|
|
Newbie
Joined: 21 Jan 2009 Posts: 5
|
Your problem is not clear. How many tables are you dealing with?
In Compute node 1 you try to insert records into table A or table B; then in Compute node 2, what validation is done before reading the records?
Are you using the same data sources? What status are you trying to update in the tables?
How many flows are you dealing with? |
|
Back to top |
|
 |
santy |
Posted: Thu Jan 22, 2009 7:41 am Post subject: |
|
|
Centurion
Joined: 03 Nov 2006 Posts: 141
|
I have 2 flows.
In the first flow I have Compute node 1, where I insert data into 2 database tables.
In the second flow I have another Compute node, where I try to read data from those tables.
As of now, no validation is done before reading the data to check whether it has been successfully committed.
My main problem is committing the data to the database in Compute node 1. As there are 2 INSERT statements in Compute node 1, if the first INSERT takes time, subsequent messages overtake it and reach the second flow. |
|
Back to top |
|
 |
jbanoop |
Posted: Tue Jan 27, 2009 1:57 pm Post subject: |
|
|
Chevalier
Joined: 17 Sep 2005 Posts: 401 Location: SC
|
Do you have multiple copies of flow 1 running?
If not, how can a message reach the second flow without the insertions having been successfully committed by the first flow (assuming transactionality is set to Automatic)?
Why don't you post the piece of code in the first flow related to the insertion and propagation, so that we can have a look? |
|
Back to top |
|
 |
santy |
Posted: Wed Jan 28, 2009 2:34 am Post subject: |
|
|
Centurion
Joined: 03 Nov 2006 Posts: 141
|
Yes, we have multiple copies of the flow running. That's why I'm facing the problem. |
|
Back to top |
|
 |
mqjeff |
Posted: Wed Jan 28, 2009 2:41 am Post subject: |
|
|
Grand Master
Joined: 25 Jun 2008 Posts: 17447
|
As someone else has said, using PROPAGATE with ATOMIC blocks only prohibits you from putting the PROPAGATE statement INSIDE the atomic block; it does not prevent you from putting the PROPAGATE statement in the same ESQL module.
Well-laid-out ESQL code should make it very easy to put all of the relevant database code inside an atomic block that is separate from the PROPAGATE statements.
None of this changes the fact that your second flow is not designed with the correct exception handling. If it fails to find the data that it is supposed to find, and you believe there is a reasonable case that the data may be "about to arrive", then you should code the second flow to... retry the SELECT.
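A sketch of a bounded retry on the SELECT; the table, column, field names, and retry limit are invented. Note that ESQL has no sleep primitive, so a real flow would typically back off by routing the message to a retry queue rather than spinning in a loop:

```esql
-- Retry the read a few times before giving up (invented names and limit).
DECLARE attempts INTEGER 0;
DECLARE found BOOLEAN FALSE;
WHILE attempts < 3 AND NOT found DO
    SET Environment.Variables.Row[] =
        SELECT T.STATUS FROM Database.MY_TABLE AS T
        WHERE T.MSG_ID = InputRoot.XMLNSC.Request.MsgId;
    SET found = EXISTS(Environment.Variables.Row[]);
    SET attempts = attempts + 1;
END WHILE;
IF NOT found THEN
    -- Let the flow's error handling (e.g. a retry queue) take over.
    THROW USER EXCEPTION VALUES ('Data not yet available');
END IF;
```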
And if the second flow is never supposed to try to fetch the data until you know for a fact that the data has been inserted, then both flows are incorrect: the first flow should be using a global transaction and sending the message that starts the second flow. |
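A sketch of the layout described above, with the database work in an atomic block and the PROPAGATE outside it in the same module; the module, table, column, and field names are invented for illustration:

```esql
CREATE COMPUTE MODULE Flow1_Insert
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- Keep ALL the database work inside the atomic block...
        BEGIN ATOMIC
            INSERT INTO Database.TABLE_A (MSG_ID, STATUS)
                VALUES (InputRoot.XMLNSC.Request.MsgId, 'AVAILABLE');
            INSERT INTO Database.TABLE_B (MSG_ID, DETAIL)
                VALUES (InputRoot.XMLNSC.Request.MsgId,
                        InputRoot.XMLNSC.Request.Detail);
        END;
        -- ...and PROPAGATE outside it, in the same ESQL module.
        SET OutputRoot = InputRoot;
        PROPAGATE;
        RETURN FALSE;
    END;
END MODULE;
```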
|
Back to top |
|
 |
|