harshvajra
Posted: Mon Feb 11, 2008 1:29 am    Post subject: Exclusive Access To Put/Get Message
Apprentice. Joined: 23 Apr 2007, Posts: 46, Location: India
Hello all,
To put/get messages to/from a queue, can we give an application exclusive access to the queue?
I've seen an option to request EXCLUSIVE access only for the MQGET call; when I try the same for the MQPUT call, an MQ exception is raised with
CompletionCode: 2, ReasonCode: 2045.
Any help is appreciated.
Thanks. _________________ Failure is not a defeat, it's just a delay. Walk TALL
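For reference, the error above decodes as follows; a minimal lookup sketch (constant names are taken from IBM's cmqc.h header; note that exclusive access exists only as an MQOPEN input option, MQOO_INPUT_EXCLUSIVE, and there is no equivalent option for output):

```python
# Decode the MQ completion/reason codes mentioned in the post above.
# Constant names and values are from IBM's cmqc.h.
MQCC = {0: "MQCC_OK", 1: "MQCC_WARNING", 2: "MQCC_FAILED"}

MQRC = {
    2035: "MQRC_NOT_AUTHORIZED",
    2042: "MQRC_OBJECT_IN_USE",  # what a 2nd MQOO_INPUT_EXCLUSIVE open gets
    2045: "MQRC_OPTION_NOT_VALID_FOR_TYPE",
}

def decode(completion_code, reason_code):
    """Return human-readable names for an MQ completion/reason code pair."""
    return (MQCC.get(completion_code, "unknown"),
            MQRC.get(reason_code, "unknown"))

print(decode(2, 2045))
# -> ('MQCC_FAILED', 'MQRC_OPTION_NOT_VALID_FOR_TYPE')
```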
Vitor
Posted: Mon Feb 11, 2008 1:52 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
Opening a queue like that for get is normally used where you've got serious message affinity issues and need to ensure a single application is reading messages, and you're not using groups or any of the other mechanisms.
The downside is that it stops your solution from scaling, as you can only ever have one application reading from the queue.
If you want to restrict which applications can read from or write to a queue, set security. _________________ Honesty is the best policy.
Insanity is the best defence.
harshvajra
Posted: Mon Feb 11, 2008 2:03 am
Apprentice. Joined: 23 Apr 2007, Posts: 46, Location: India
Do you mean to say that we cannot give EXCLUSIVE access to a queue for putting messages (the MQPUT call)?
If we can, what is the procedure? Please help me understand.
If we cannot, how do we make sure that only one instance of the application is putting messages to the queue? _________________ Failure is not a defeat, it's just a delay. Walk TALL
Vitor
Posted: Mon Feb 11, 2008 2:31 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
harshvajra wrote:
If we cannot, how do we make sure that only one instance of the application is putting messages to the queue?
Set security.
And ask yourself why you'd only want one application putting to a queue. See my comments above regarding scaling. _________________ Honesty is the best policy.
Insanity is the best defence.
zpat
Posted: Mon Feb 11, 2008 3:04 am
Jedi Council. Joined: 19 May 2001, Posts: 5866, Location: UK
Even with security, you could have the queue opened for output more than once.
It seems a strange design, but it could be done by using MQINQ and looking at the MQIA_OPEN_OUTPUT_COUNT value.
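A sketch of that MQINQ idea follows. With pymqi the real call would be `queue.inquire(pymqi.CMQC.MQIA_OPEN_OUTPUT_COUNT)`; since that needs a live queue manager, the `StubQueue` below is a hypothetical stand-in so the logic can run anywhere. Note the check is inherently racy: another application can open the queue for output between the MQINQ and your MQPUT.

```python
# Sketch: before putting, inquire how many handles already have the
# queue open for output. With pymqi this would be
#   queue.inquire(pymqi.CMQC.MQIA_OPEN_OUTPUT_COUNT)
# StubQueue stands in for a real pymqi.Queue connection.

class StubQueue:
    """Hypothetical stand-in for pymqi.Queue; returns a canned open count."""
    def __init__(self, open_output_count):
        self._count = open_output_count

    def inquire(self, selector):
        # A real queue would return the attribute named by the selector.
        assert selector == "MQIA_OPEN_OUTPUT_COUNT"
        return self._count

def safe_to_put(queue):
    # We are about to open for output ourselves, so any existing opener
    # means we would not be the only writer.
    return queue.inquire("MQIA_OPEN_OUTPUT_COUNT") == 0

print(safe_to_put(StubQueue(0)))  # nobody else has it open for output
print(safe_to_put(StubQueue(1)))  # another putter is already connected
```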
Vitor
Posted: Mon Feb 11, 2008 3:25 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
zpat wrote:
Even with security, you could have the queue opened for output more than once.
Not if you only give one application the user id to run as! Procedural rather than technical, I know.
Your solution is equally procedural, in that it relies on every application that wants to open the queue doing the check first. There's nothing to stop an application just connecting, unless you have some kind of gatekeeper admin program monitoring the open count and forcibly disconnecting the 2nd and subsequent connecting applications.
And that's just wrong on so many levels.
I'm still interested to know why you're restricting putting applications. Message affinity gives the requirement at GET (assuming you don't use any of the "nicer" solutions), but why at PUT? _________________ Honesty is the best policy.
Insanity is the best defence.
zpat
Posted: Mon Feb 11, 2008 3:45 am
Jedi Council. Joined: 19 May 2001, Posts: 5866, Location: UK
The only reason I can think of would be to stop the same application (with the same userid access rights) being started more than once at the same time.
If it was production, most of these use the same operational userid, so it could happen; however, there are other ways to achieve application serialisation on z/OS.
Vitor
Posted: Mon Feb 11, 2008 4:25 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
zpat wrote:
The only reason I can think of would be to stop the same application (with the same userid access rights) being started more than once at the same time.
Clearly, but where's the design imperative? _________________ Honesty is the best policy.
Insanity is the best defence.
harshvajra
Posted: Mon Feb 11, 2008 9:47 pm
Apprentice. Joined: 23 Apr 2007, Posts: 46, Location: India
I've a requirement to transfer a file.
This requires that only one instance of the application is putting messages, to avoid duplicate messages of the file being transferred by another instance of the same app.
Say the application is putting the messages of a file; if I run another instance of it, what about the redundant messages for that file?
What would be the solution for such a case?
I believe that to avoid such issues, exclusive access to PUT messages to the queue is required; at least the messages of the file would then be in sequence order (even though there may be duplicate messages for the same file after the successful transfer of the complete file).
Any suggestions are welcome. _________________ Failure is not a defeat, it's just a delay. Walk TALL
Vitor
Posted: Tue Feb 12, 2008 1:38 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
Firstly, why are you not using one of the commercial file-over-FTP solutions? Why reinvent the wheel?
Secondly, why not take an exclusive lock on the file you're transferring, to prevent a 2nd instance of the same app transferring it?
Thirdly, whatever you do, you'll need a procedure for preventing the same file being transferred twice accidentally, which solves your duplication problem. _________________ Honesty is the best policy.
Insanity is the best defence.
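One way to sketch that "don't transfer the same file twice" procedure: record a content hash for every file already sent, and refuse to resend anything whose hash is on record. This is an illustrative sketch, not anything from the thread; in a real deployment the registry would be persisted (a file or database) so restarts don't forget history.

```python
# Minimal duplicate-transfer guard: a registry of SHA-256 digests of
# files already transferred. File names below are illustrative only.
import hashlib
import os
import tempfile

def file_digest(path):
    """Hash the file's contents so renamed copies are still caught."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class TransferRegistry:
    def __init__(self):
        self._sent = set()  # persist this in real use

    def should_send(self, path):
        return file_digest(path) not in self._sent

    def mark_sent(self, path):
        self._sent.add(file_digest(path))

# Demo: a second attempt at the same content is skipped.
reg = TransferRegistry()
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"payload")
path = tmp.name
print(reg.should_send(path))      # first attempt: transfer it
reg.mark_sent(path)
print(reg.should_send(path))      # second attempt: duplicate, skip
os.unlink(path)
```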
harshvajra
Posted: Tue Feb 12, 2008 2:11 am
Apprentice. Joined: 23 Apr 2007, Posts: 46, Location: India
First: A commercial file-over-FTP solution is not reliable; there are chances of losing data. The app is being developed with resume support (for n/w failures), which the former cannot provide.
Second: An exclusive lock can prevent a 2nd instance of the same app transferring the file at the same time, but later, i.e. when the file lock is released, it will read and send the messages again.
Basically the app uses a request-reply messaging style.
From the response, the app comes to know where to read from.
Thanks. _________________ Failure is not a defeat, it's just a delay. Walk TALL
Vitor
Posted: Tue Feb 12, 2008 2:27 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
harshvajra wrote:
First: A commercial file-over-FTP solution is not reliable; there are chances of losing data. The app is being developed with resume support (for n/w failures), which the former cannot provide.
Endless apologies - I think the vending machine is giving out decaff coffee spiked with Red Bull. I meant file-over-MQ, PM4Data or similar, which eliminates the problems you quite correctly mention with standard FTP.
harshvajra wrote:
Second: An exclusive lock can prevent a 2nd instance of the same app transferring the file at the same time, but later, i.e. when the file lock is released, it will read and send the messages again.
Basically the app uses a request-reply messaging style.
From the response, the app comes to know where to read from.
But even if the app is the only one allowed to put to the queue as you suggest, there's nothing to stop the same file being requested again, so duplicates are a risk unless a mechanism exists to stop that. An exclusive lock would prevent the same file being transmitted at the same time, which I thought is what you were trying to prevent with the "exclusive put". _________________ Honesty is the best policy.
Insanity is the best defence.
jefflowrey
Posted: Tue Feb 12, 2008 4:46 am
Grand Poobah. Joined: 16 Oct 2002, Posts: 19981
So, you can't actually rely on file system locks, even on a local file system. For one, NFS doesn't really support them. For two, you don't know for a fact that the file system is really *local* - it may just look like it.
If you need to read a file transactionally, and are forced, for various reasons, to write the code yourself, the following heuristic will give you a very good chance of being successful in almost all cases.
There are a few different phases to the process.
The first phase is to identify the files that are ready to be processed. In order to do this, you need to:
- Read the directory or directories you are supposed to be watching.
- Get the name AND size of all files that you are interested in (that match your criteria), and save this data.
- WAIT for one scan interval.
- Rescan the directory, collecting the name and size of all files that match.
- Compare with the data from the previous scan.
- Any file that has not changed size is ready to be processed.
Now you have a list of files that are ready to be processed. The next phase is to take control of those files. This is done with a rename operation. The steps in this rename phase are:
- Ignore any file that contains your "in process flag".
- Rename the first file in your list, prepending or appending a piece of data constructed according to the following rules:
  - it contains a unique identifier (pid+tid+time, perhaps);
  - it contains a fixed piece of data that indicates that YOUR code is using the file (an "in process flag");
  - it is easily distinguished from the rest of the file name (a new extension, perhaps).
Notice I say "rename the first file". You're actually going to be iterating over the "take control" phase AND the "process file" phase for each file that you need to process, as a single "unit of work". You will take control of a file, process it (or fail), and then try to take control of the next file.
If the rename operation FAILS, then you know that you don't have control of the file for whatever reason, and so you should IGNORE it (but *log a message somewhere that you did*).
If you take control of all the files first, and then try to process them in a separate loop, you run a larger risk of not processing the files because of bugs in your code.
During the process file phase, you will:
- Do whatever you need to do with the file - read it, copy it to a local working directory, put it on a queue, put it to a database, transform the contents, whatever.
- Delete the file.
In case of RECOVERABLE errors during your processing, you will rename the file back to its original name (which will remove your "in process flag"). In case of UNRECOVERABLE errors, you will rename the file AGAIN to something that indicates an ERROR problem, OR move it to an error directory, etc.
Once the file has been processed, go back to your list of "ready" files, and try to rename and process the next one.
By ensuring that you use a unique id AND a known "in process flag", you ensure that no two instances of *your* application will interfere with each other.
There's nothing you can do to prevent some *OTHER* application, including the sending application, from interfering with your file. But that should be clear to everyone from the get-go. The rename operation will help this quite a bit, mind you.
This looks like quite a messy and complicated process. However, the simple fact is that reading files is a much more messy and complicated process than it seems when they teach it to you on Day 2 of Programming 101.
The one cost that's a bit hidden in this heuristic is that you have to sacrifice one wait/scan interval's period of time before the file is processed. You have to wait at least one scan interval, because there's no guarantee that someone is not still writing to the file when you first try to read it. The delay period may seem expensive, but it goes very far towards ensuring that you won't process a partial file. _________________ I am *not* the model of the modern major general.
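The phases above can be sketched in runnable form. This is a minimal interpretation of the heuristic, not a production implementation: the ".inproc" flag, the pid+uuid identifier format, and the recoverable-error handling are illustrative choices.

```python
# Sketch of the scan / claim-by-rename / process heuristic described above.
import os
import time
import uuid

IN_PROCESS_FLAG = ".inproc"
SCAN_INTERVAL = 1.0  # seconds; tune to how fast senders write files

def stable_files(directory, wait=SCAN_INTERVAL):
    """Phase 1: two scans one interval apart; unchanged size => ready."""
    def scan():
        return {name: os.path.getsize(os.path.join(directory, name))
                for name in os.listdir(directory)
                if IN_PROCESS_FLAG not in name}   # ignore already-claimed files
    first = scan()
    time.sleep(wait)
    second = scan()
    return [name for name, size in second.items() if first.get(name) == size]

def claim(directory, name):
    """Phase 2: take control via rename; None means we lost the race."""
    unique = "%d-%s" % (os.getpid(), uuid.uuid4().hex[:8])
    claimed = os.path.join(directory, name + IN_PROCESS_FLAG + "." + unique)
    try:
        os.rename(os.path.join(directory, name), claimed)
        return claimed
    except OSError:
        return None  # another instance renamed it first: log and ignore

def process_all(directory, handler):
    """Claim and process one file at a time, as a single unit of work."""
    for name in stable_files(directory):
        claimed = claim(directory, name)
        if claimed is None:
            continue
        try:
            handler(claimed)       # read it, put it on a queue, etc.
            os.remove(claimed)     # phase 3: delete on success
        except Exception:
            # recoverable error: give the file back its original name
            os.rename(claimed, os.path.join(directory, name))
```

Because `os.rename` succeeds for exactly one caller when two instances race for the same source name, the claim step gives the mutual exclusion the post describes without any file system locks.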
harshvajra
Posted: Tue Feb 12, 2008 8:52 pm
Apprentice. Joined: 23 Apr 2007, Posts: 46, Location: India
Thanks, guys.
The light shed on this implementation showed me how to go about it.
The heuristic procedure can be implemented, even though it is expensive and complex in my case.
Thanks for the responses. _________________ Failure is not a defeat, it's just a delay. Walk TALL
Vitor
Posted: Wed Feb 13, 2008 2:11 am
Grand High Poobah. Joined: 11 Nov 2005, Posts: 26093, Location: Texas, USA
harshvajra wrote:
The heuristic procedure can be implemented, even though it is expensive and complex in my case.
This is why so many people buy boxed file-over-MQ solutions - the TCO so often works out cheaper. _________________ Honesty is the best policy.
Insanity is the best defence.