Federation across networks with private and public DNSes
In our current Active-Active topology we have two clusters (ClusterA and ClusterB), each hosted in a different data centre (DataCentreA and DataCentreB). Cluster communication is via unicast (WKA list) running on a reserved port rather than the default 7574. There's a firewall between the two data centres, but the reserved port is open between the two clusters.
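For context, the WKA setup in each cluster's operational override looks roughly like this (hostnames and the port number below are placeholders, not our real values):

```xml
<!-- tangosol-coherence-override.xml (sketch) -->
<coherence>
  <cluster-config>
    <unicast-listener>
      <well-known-addresses>
        <!-- WKA members listed by DNS name; reserved port instead of 7574 -->
        <socket-address id="1">
          <address>member1.internal.dc-a.example</address>
          <port>9000</port>
        </socket-address>
        <socket-address id="2">
          <address>member2.internal.dc-a.example</address>
          <port>9000</port>
        </socket-address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```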
Point A: In data centre A, the ClusterA members bind to addresses registered in an internal DNS that isn't resolvable from data centre B.
Point B: In data centre B, the ClusterB members bind to addresses registered in a global DNS that is also resolvable from data centre A.
TCP communication between ClusterA and ClusterB members works if the global DNS names are used.
Because of this asymmetry, federation from ClusterA to ClusterB works, but not the other way around.
Looking at the logs of a ClusterB member: when it hits the NameService on ClusterA, it's handed back an IP address/port pair where the IP address is the one registered in the internal DNS. ClusterB then tries to connect to ClusterA using this IP address, which fails, as explained in Point A.
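For reference, the federation participants are declared roughly as follows in the operational override (again, hostnames/ports are placeholders). ClusterB's entry for ClusterA does use a globally resolvable name, but that only covers the initial connection; the NameService lookup still returns the internal-DNS address:

```xml
<!-- sketch of the federation participant config on a ClusterB member -->
<federation-config>
  <participants>
    <participant>
      <name>ClusterA</name>
      <remote-addresses>
        <socket-address>
          <!-- globally resolvable name for the initial contact -->
          <address>clustera-member1.global.example</address>
          <port>9000</port>
        </socket-address>
      </remote-addresses>
    </participant>
    <participant>
      <name>ClusterB</name>
    </participant>
  </participants>
</federation-config>
```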
I gave "Using a Specific Network Interface for Federation Communication" a try, to no avail. Are the instructions there perhaps outdated?