varunraot |
Posted: Wed May 16, 2018 8:00 am    Post subject: Multi Broker Cache Topology
Acolyte
Joined: 01 Jun 2011 Posts: 63
If three integration nodes are configured in a multi-broker cache topology (policy file below), each integration node hosting one catalog server, does stopping any one integration node cause the cache to become unavailable across all three integration nodes?
My understanding is that it should not, unless all three nodes are stopped.
Excerpt from https://www.ibm.com/support/knowledgecenter/en/SSMKHH_9.0.0/com.ibm.etools.mft.doc/bc23789_.htm:
"Two brokers that each host a catalog server; if one catalog server fails, the global cache switches to the catalog server in the other broker."
Please confirm.
<?xml version="1.0" encoding="UTF-8"?>
<cachePolicy xmlns="http://www.ibm.com/xmlns/prod/websphere/messagebroker/globalcache/policy-1.0">
<broker name="IBNODE1" listenerHost="Host1">
<!--
This broker hosts one catalog server.
-->
<catalogs>1</catalogs>
<!--
This broker uses ports between 3500-3519.
-->
<portRange>
<startPort>3500</startPort>
<endPort>3519</endPort>
</portRange>
</broker>
<broker name="IBNODE2" listenerHost="Host2">
<!--
This broker hosts one catalog server.
-->
<catalogs>1</catalogs>
<!--
This broker uses ports between 3520-3539.
-->
<portRange>
<startPort>3520</startPort>
<endPort>3539</endPort>
</portRange>
</broker>
<broker name="IBNODE3" listenerHost="Host3">
<!--
This broker hosts one catalog server.
-->
<catalogs>1</catalogs>
<!--
This broker uses ports between 3540-3559.
-->
<portRange>
<startPort>3540</startPort>
<endPort>3559</endPort>
</portRange>
</broker>
</cachePolicy>
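For reference, where the catalog and container servers actually end up under this policy can be checked with the mqsicacheadmin command (a sketch, assuming the standard IIB syntax; IBNODE1 here could be any of the three nodes):

mqsicacheadmin IBNODE1 -c showPlacement
mqsicacheadmin IBNODE1 -c listHosts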
timber |
Posted: Thu May 17, 2018 1:01 am    Post subject:
Grand Master
Joined: 25 Aug 2015 Posts: 1292
I assume that you have tested this and are not getting the expected result? Please confirm.
varunraot |
Posted: Thu May 17, 2018 2:08 am    Post subject:
Acolyte
Joined: 01 Jun 2011 Posts: 63
timber wrote:
I assume that you have tested this and are not getting the expected result? Please confirm.
Yes, and it is inconsistent most of the time. On several occasions in the past, restarting one of the integration servers in the 3rd integration node (IB3) has caused the cache to become unavailable for more than 15 minutes.
Given that the first message through a flow will wait (in the MbGlobalMap.getGlobalMap() call) for up to 30 seconds or more when an integration server restarts, the restart should not have caused the cache to become unavailable, should it?
Before I proceed further, I would like to confirm the expected behavior of a multi-broker cache topology (each integration node hosting one catalog server): would a restart of any one integration server/node cause the cache to become unavailable across all three integration nodes?
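For context, this is roughly how the flows obtain the map - a minimal sketch of a JavaCompute node against the IIB Java API; the class name, the map name "demoMap", and the key/value are made up for illustration, not our actual flow code:

import com.ibm.broker.javacompute.MbJavaComputeNode;
import com.ibm.broker.plugin.MbException;
import com.ibm.broker.plugin.MbGlobalMap;
import com.ibm.broker.plugin.MbMessageAssembly;

public class CacheProbe extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        // getGlobalMap() is the call that blocks while the embedded client
        // connects to a catalog server; during a catalog failover this is
        // where the first message through the flow waits.
        MbGlobalMap map = MbGlobalMap.getGlobalMap("demoMap");
        if (map.get("someKey") == null) {
            map.put("someKey", "someValue");
        }
        getOutputTerminal("out").propagate(assembly);
    }
}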
varunraot |
Posted: Tue May 22, 2018 7:30 am    Post subject:
Acolyte
Joined: 01 Jun 2011 Posts: 63
Any further comments would be highly appreciated.
timber |
Posted: Tue May 22, 2018 7:42 am    Post subject:
Grand Master
Joined: 25 Aug 2015 Posts: 1292
I would open a PMR at this point. My understanding of the term 'highly available' does not include a 15-minute outage. Maybe there's something in your setup that makes switching to the new catalog server very slow?
varunraot |
Posted: Tue May 22, 2018 8:05 am    Post subject:
Acolyte
Joined: 01 Jun 2011 Posts: 63
Thanks. I will have a Salesforce case / PMR created for this behavior.
To confirm: in the ideal case, in a multi-broker cache topology (each integration node hosting one catalog server), the cache should remain available except for an initial startup window of 15 to 30 seconds during the restart of any one integration server/node, right?
timber |
Posted: Tue May 22, 2018 8:58 am    Post subject:
Grand Master
Joined: 25 Aug 2015 Posts: 1292
I think an IBM PMR might get you a better result.
I'm afraid it's no use asking me to comment on what should happen; I'm just reading the docs, same as you. I agree with your interpretation of the docs, but that's about all I can say.