svemula2
Posted: Thu Oct 30, 2014 5:38 pm    Post subject: Global Map multi-threading issues
Newbie
Joined: 23 Sep 2013    Posts: 3
Hi,

How can multi-threading be handled when we are working with clustered brokers?

We have two brokers in a clustered environment. When I try to put a set into the global map I get a duplicate ID exception. Here is the code:

    // Check-then-act: only create the entry if it does not exist yet.
    if (!globalMap.containsKey("abc")) {
        Set<String> operationNames = new HashSet<String>();
        operationNames.add(operationName);
        globalMap.put("abc", operationNames);
    }

With this code there should never be a duplicate ID exception, unless broker 2 creates the entry after the containsKey check but before the put statement runs.

In Java we could synchronize on an object and control this. Can the same approach work here?
fjb_saper
Posted: Thu Oct 30, 2014 8:51 pm
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
You'd need to check out the API.

As an aside, what kind of design drives you to update the same key (concurrently) from both brokers? Would tolerating a simple race condition be acceptable, or do you absolutely need synchronization between both broker instances?

Have fun
_________________
MQ & Broker admin
svemula2
Posted: Fri Oct 31, 2014 2:18 pm
Newbie
Joined: 23 Sep 2013    Posts: 3
Since the same code is deployed to all the clustered brokers, the same global map can be accessed at the same time.

The global map has two methods, put and update. put stores an object in the global map for the first time; update is used when the object is already present. In my code I am determining whether I need to call put or update.

The flow enters the if block (i.e. the object is not present in the global map), and by the time it tries to put, an exception is thrown saying the object is already present. That means two threads are entering the if block at the same time (one thread from each broker); one thread puts the object, and when the other tries to put it, a duplicate exception is thrown. I can see no other possible case in this code.

Please help with how thread synchronization can be handled across the brokers.
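To spell out the interleaving, here is a sketch against the MbGlobalMap Java API (assuming the map comes from MbGlobalMap.getGlobalMap(); the key "abc" and operationName are from my first post):

    import java.util.HashSet;
    import java.util.Set;

    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;

    public class RacyPut {
        // containsKey + put is a check-then-act pair, and nothing locks the key
        // across the two calls, so with two brokers the interleaving can be:
        //
        //   broker 1: containsKey("abc") -> false
        //   broker 2: containsKey("abc") -> false
        //   broker 1: put("abc", ...)    -> succeeds
        //   broker 2: put("abc", ...)    -> duplicate ID exception
        public static void racyPut(String operationName) throws MbException {
            MbGlobalMap globalMap = MbGlobalMap.getGlobalMap();
            if (!globalMap.containsKey("abc")) {          // the check can go stale...
                Set<String> operationNames = new HashSet<String>();
                operationNames.add(operationName);
                globalMap.put("abc", operationNames);     // ...before the put runs
            }
        }
    }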
fjb_saper
Posted: Sat Nov 01, 2014 5:15 am
Grand High Poobah
Joined: 18 Nov 2003    Posts: 20756    Location: LI, NY
svemula2 wrote:
    Since the same code is deployed to all the clustered brokers, the same global map can be accessed at the same time.

    The global map has two methods, put and update. put stores an object in the global map for the first time; update is used when the object is already present. In my code I am determining whether I need to call put or update.

    The flow enters the if block (i.e. the object is not present in the global map), and by the time it tries to put, an exception is thrown saying the object is already present. That means two threads are entering the if block at the same time (one thread from each broker); one thread puts the object, and when the other tries to put it, a duplicate exception is thrown. I can see no other possible case in this code.

    Please help with how thread synchronization can be handled across the brokers.
Sorry, but this is still a different version of your primary question ... how to? ...

It does not answer my question about the design ... why? ...

Looking at what you are trying to do, it seems that the global cache is very volatile in your design. The usual purpose of the global cache is to be initialized once and then queried over and over again, in which case there is no concurrency problem.
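In code, that usual pattern looks roughly like this (a sketch only; the map key, value type, and the split into two methods are illustrative, not from your posts):

    import java.util.Set;

    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;

    public class ReadMostlyCache {
        // Write path: runs once, e.g. from an initialization flow, before
        // regular traffic starts. A single writer means no duplicate-key races.
        public static void initialize(Set<String> operationNames) throws MbException {
            MbGlobalMap.getGlobalMap().put("operationNames", operationNames);
        }

        // Read path: runs per message on every broker. Concurrent reads need
        // no coordination, which is why this usage has no concurrency problem.
        @SuppressWarnings("unchecked")
        public static Set<String> lookup() throws MbException {
            return (Set<String>) MbGlobalMap.getGlobalMap().get("operationNames");
        }
    }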
The question we have -- and it relates to your application design -- is: why does your flow have to update the SAME key concurrently?
_________________
MQ & Broker admin
mqjeff
Posted: Mon Nov 03, 2014 6:55 am
Grand Master
Joined: 25 Jun 2008    Posts: 17447
Also, your design is a bit like inquiring on the queue depth and then executing a GET that many times.

It's probably a much better design to simply update the cache, and then catch duplicate or missing key exceptions.
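A sketch of that pattern (assuming the MbGlobalMap Java API and the key from the original post; the broad catch of MbException is an assumption, since the duplicate-key failure surfaces as one):

    import java.util.HashSet;
    import java.util.Set;

    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbGlobalMap;

    public class PutThenCatch {
        // Attempt the put unconditionally and treat a duplicate-key exception
        // as "another thread or broker got there first", falling back to update.
        @SuppressWarnings("unchecked")
        public static void addOperationName(String operationName) throws MbException {
            MbGlobalMap globalMap = MbGlobalMap.getGlobalMap();
            Set<String> operationNames = new HashSet<String>();
            operationNames.add(operationName);
            try {
                globalMap.put("abc", operationNames); // fails if "abc" already exists
            } catch (MbException e) {
                // Someone else created "abc" first: merge into the existing set.
                // Note that get-then-update is itself a read-modify-write, so
                // this narrows the race window rather than eliminating it.
                Set<String> existing = (Set<String>) globalMap.get("abc");
                existing.add(operationName);
                globalMap.update("abc", existing);
            }
        }
    }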