MQ Instead of internal table
sumithar |
Posted: Fri Jun 13, 2008 4:51 am    Post subject: MQ Instead of internal table
Apprentice
Joined: 13 Jun 2008    Posts: 47
COBOL (batch) program running on z/OS.
At the moment we have an internal table (OCCURS clause) that we use to load some reference data from a DB2 table at the start of the run, and then use for processing in the program. This is to eliminate the I/O of having to SELECT data from the table for each input record processed.
Would it make sense to use a non-persistent MQ queue instead of this internal table? We would SELECT all the rows from the DB2 table in the INIT para and put each row on the queue as a message.
The reason is that we recently had a table-overflow situation, and we don't want to be limited by whatever bound the OCCURS clause imposes.
Thanks
kevinf2349 |
Posted: Fri Jun 13, 2008 5:06 am
Grand Master
Joined: 28 Feb 2003    Posts: 1311    Location: USA
I am not sure what that would buy you unless you are likely to add items (messages) to the queue while the batch job is running.
If you are going to load the table at the start of the job, then load it into storage (depending on its size). That will give you the most gain.
I am a little confused as to why you don't just use a SELECT statement for the record you want, though.
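For reference, the per-record lookup kevinf2349 is suggesting would typically be a singleton SELECT in embedded DB2 COBOL. This is only a sketch: the table name REF_VALUES, the column REF_VALUE, and the host-variable names are made up for illustration, and the SQLCA must have been INCLUDEd for SQLCODE to exist.

```cobol
*    Assumes: EXEC SQL INCLUDE SQLCA END-EXEC in WORKING-STORAGE,
*    and hypothetical table REF_VALUES with column REF_VALUE.
     EXEC SQL
         SELECT 1 INTO :WS-FOUND
           FROM REF_VALUES
          WHERE REF_VALUE = :IN-FIELD
     END-EXEC

     EVALUATE SQLCODE
         WHEN 0
             MOVE 'Y' TO WS-VALID-FLAG
         WHEN +100
*            No matching row: the value is invalid
             MOVE 'N' TO WS-VALID-FLAG
         WHEN OTHER
             PERFORM 9000-SQL-ERROR
     END-EVALUATE
```

With a unique index on REF_VALUE, each such call is a single indexed probe, which is the case bruce2359 makes later in the thread.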
sumithar |
Posted: Fri Jun 13, 2008 5:33 am
Apprentice
Joined: 13 Jun 2008    Posts: 47
Kevin,
Thanks for the reply. Let me see if I can do a better job of explaining by being long-winded.
There is a file we receive that needs to be processed. One of the fields in it has to be validated against a set of values. The list of valid values is in a DB2 table maintained by another department.
While processing each record in this file, it is necessary to validate this field. One choice is to SELECT from the DB2 table where the column is equal to the value in the file field. If a matching row is found, the value is valid; otherwise it is invalid.
This incurs I/O for each record processed in the input file, and we want to avoid that.
To eliminate it, we initially read all the rows from the DB2 table at the beginning of the run, store them in an (internal) COBOL array, and validate the file field against this array instead.
When you say "load it into storage", I think you are recommending using a COBOL array, right?
Now, there was a possibility that the number of valid values might increase between runs. And this happened to us: our COBOL program was set for OCCURS 100 TIMES, and the number of rows in the table was (unbeknownst to us) increased to 120. So the initial para that loaded this COBOL array abended.
The simple solution was to increase the OCCURS clause to some large value. But a manager said to look for a solution that would not limit us, and hence the idea of using an MQ queue.
No, we won't be adding anything to the queue during the job, though.
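As a sketch, the in-storage validation described above usually looks something like this. Field names and PIC sizes are hypothetical; the 100-entry OCCURS is the limit mentioned in the post, and the count field guards against searching unloaded slots.

```cobol
WORKING-STORAGE SECTION.
01  WS-VALID-TABLE.
    05  WS-ENTRY-COUNT       PIC S9(4) COMP VALUE ZERO.
    05  WS-VALID-ENTRY       OCCURS 100 TIMES
                             INDEXED BY VAL-IDX.
        10  WS-VALID-VALUE   PIC X(10).

*    Table is loaded once in the INIT para; then, per input record:
     SET VAL-IDX TO 1
     SEARCH WS-VALID-ENTRY
         AT END
             MOVE 'N' TO WS-VALID-FLAG
         WHEN VAL-IDX > WS-ENTRY-COUNT
*            Past the last loaded entry: no match
             MOVE 'N' TO WS-VALID-FLAG
         WHEN WS-VALID-VALUE (VAL-IDX) = IN-FIELD
             MOVE 'Y' TO WS-VALID-FLAG
     END-SEARCH
```

If the values were loaded in sorted order, a SEARCH ALL (binary search) over an OCCURS DEPENDING ON table would scale better than this serial SEARCH.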
fjb_saper |
Posted: Fri Jun 13, 2008 6:48 am
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
sumithar wrote:
(previous post quoted in full)
Expand your COBOL structure to use a qualifier.
You can then use OCCURS DEPENDING ON.
Your SQL statement can return the number of rows to you, so you can set the DEPENDING ON field. That would solve the expansion problem.
Enjoy
_________________
MQ & Broker admin
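fjb_saper's suggestion, sketched below. The table name REF_VALUES and the 100000 ceiling are made up for illustration; note that Enterprise COBOL still requires a compile-time maximum on an OCCURS DEPENDING ON table, so this removes the practical limit rather than the declared one.

```cobol
01  WS-VALID-TABLE.
    05  WS-ROW-COUNT         PIC S9(9) COMP VALUE ZERO.
    05  WS-VALID-ENTRY       OCCURS 1 TO 100000 TIMES
                             DEPENDING ON WS-ROW-COUNT
                             INDEXED BY VAL-IDX.
        10  WS-VALID-VALUE   PIC X(10).

*    In the INIT para, size the table from the catalog row count,
*    then FETCH each row into WS-VALID-VALUE (1 .. WS-ROW-COUNT).
     EXEC SQL
         SELECT COUNT(*) INTO :WS-ROW-COUNT
           FROM REF_VALUES
     END-EXEC
```

A production version should still abend cleanly if COUNT(*) exceeds the declared maximum, since rows could be added between the COUNT and the FETCH loop.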
bruce2359 |
Posted: Fri Jun 13, 2008 7:23 am
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
Quote:
This incurs I/O for each record processed in the input file, and we want to avoid that.
What real problem are you addressing here? Mainframes do I/O spectacularly well.
Have you confirmed (with RMF reports) that I/O is a bottleneck for you, or that it is adversely affecting other applications? I/O avoidance is usually a good idea, but I/O shuffling (moving it from here to there) usually isn't productive.
Unless your DB2 table is gigantic, and/or indices have not been built or have been built incorrectly, I/O should not be a show-stopper issue. You would likely be better off doing some database tuning.
Conceptually, moving I/O from DB2 to MQ doesn't sound like a good performance-tuning solution.
Again, what real problem/issue are you addressing?
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
sumithar |
Posted: Fri Jun 13, 2008 12:52 pm
Apprentice
Joined: 13 Jun 2008    Posts: 47
fjb_saper,
This looks like the approach we will take: an OCCURS ... DEPENDING ON, where we do a COUNT(*) to get the number of rows. It still means we have to specify a possible upper limit, but we can make it absurdly large, and the DEPENDING ON will minimise the amount of memory allocated, I guess.
bruce,
The individual SQL statement itself is fairly trivial, but we would need to execute it 200K times if the data is not stored locally (in memory). That is the number of records in the input file.
The problem is not I/O per se, but really a way to store a certain amount of information 'local' to the program without knowing the amount of that information in advance.
sumithar |
Posted: Fri Jun 13, 2008 12:58 pm
Apprentice
Joined: 13 Jun 2008    Posts: 47
sumithar wrote:
(previous post quoted in full)
I stand corrected. From what I have been able to find out:
"there is no difference in memory allocation for a COBOL internal table; both are allocated the max OCCURS size."
bruce2359 |
Posted: Fri Jun 13, 2008 1:09 pm
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
Quote:
The problem is not I/O per se, but really a way to store a certain amount of information 'local' to the program without knowing the amount of that information in advance.
Color me confused.
Are you concerned about 200K SQL calls to mainframe DB2? A well-crafted database table with indices will be a stellar performer on z/OS.
Does the information you need to store have to be in a database table? That is, does it need to be related to something else that's in another table? If not, maybe it should be stored in something less labor-intensive, like VSAM.
Please define your issue with a little more precision. What are you trying to accomplish?
sumithar |
Posted: Mon Jun 16, 2008 4:49 am
Apprentice
Joined: 13 Jun 2008    Posts: 47
bruce2359 wrote:
(previous post quoted in full)
Bruce,
What you have said is "different". We have been conditioned to believe that we should minimize DB access, and the kind of scenario I laid out is quite common here. Any "reference data" is loaded into COBOL arrays when possible, with the objective of reducing SQL calls. You are telling me that this is not necessary!
Let me see if I can convince TPTB that there is nothing wrong with accessing the tables...
The reason the data is in DB2 is that it is maintained by a different area, and they have online apps to maintain it. VSAM is frowned upon.
Thanks!
bruce2359 |
Posted: Mon Jun 16, 2008 6:32 am
Poobah
Joined: 05 Jan 2008    Posts: 9469    Location: US: west coast, almost. Otherwise, enroute.
Quote:
...objective of reducing SQL Calls
The objective of system design and performance tuning is to meet service levels while accomplishing the business requirement. This applies to SQL calls, file opens/closes, and a host of other things.
Your organization's objective should be to reduce (eliminate) needless SQL calls and needless I/O. If you want to eliminate SQL calls entirely, get rid of relational databases. If you need one, then tune the DB to meet service levels.
If, for this application, a database isn't called for (there is no need to relate the data to anything else), then some other access method (VSAM, PDS, PDS/E, sequential, etc.) avoids all SQL calls. VSAM's access methods are generally faster than SQL. PDS is faster than VSAM. Sequential access is faster than VSAM.
Is your organization also trying to avoid processor time? If so, COBOL path lengths are longer than Assembler coding for the equivalent function.
"No SQL" is one of those ROTs (Rules of Thumb) from the early days that has since become an iROT, an irrational Rule of Thumb. z/OS, DB2 on z/OS, and Workload Manager (WLM) offer lots of tuning knobs to assist in this effort.
Perhaps your organization should take a look at having VLF/DLF manage this data as a data object, keeping it in virtual storage.
sumithar |
Posted: Tue Jun 17, 2008 4:26 am
Apprentice
Joined: 13 Jun 2008    Posts: 47
bruce2359 wrote:
(previous post quoted in full)
Bruce,
Thanks for that reality check! Very educational. I will discuss it with the group, and I hope I can convince them.
I will definitely be back on this forum!
Rgds