Despite many years of working with LDoms, I've never really had to do much with inter-vnet-links; I just accepted the fact that they worked (when set to the default, auto).
However, I am currently trying to work around an issue (external network) and thought that this would solve the problem.
I have an S7 that already had a private network running on a NIC with no cables attached, so I know the inter-vnet-link is working.
I have 2 LDoms (dom1 & dom2) on the same platform.
Both are on separate VLANs (let's say dom1 = 10 and dom2 = 11).
The platform was recently moved from an "old core switch" with one VLAN per NIC to a "new core switch" using link aggregation, with multiple VLANs trunked down each NIC.
dom1 and dom2 started having big performance issues, as dom2 mounts an NFS share from dom1.
On investigation, the gateway for VLAN 10 (dom1) is still on the "old core switch" and (the kicker) at a separate site. The two sites only have a 100 Mb/s link. (Yeah, I know ...)
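For context, this is roughly how I confirmed where the gateway sits. The addresses and hostnames below are made up for illustration; substitute the real ones.

```shell
# On dom1: check which gateway VLAN 10 traffic uses
netstat -rn | grep default

# From dom2: trace the path to dom1's address to see whether
# packets leave the platform and cross the inter-site link
traceroute 10.0.10.5
```

The traceroute showed hops out through the old core switch rather than staying local.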
Further investigation showed that the inter-site link was being swamped by something to 99% utilisation. (Yeah, I know ...)
I can't say why no performance issues were noticed before the change, but I can understand why we are seeing them now.
As a workaround, at least until the network is fixed, I decided to set the VSWs on the CDom to inter-vnet-link=on.
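Concretely, this is what I ran on the control domain (the vswitch name here is illustrative; yours will differ):

```shell
# List the virtual switches to get the real names
ldm list-services

# Enable inter-vnet LDC channels on the relevant vswitch
ldm set-vsw inter-vnet-link=on primary-vsw0

# Confirm the setting took
ldm list-services -o net
```

Note that changing this property may require the guest vnets to be unbound/rebound (or the guests rebooted) before it takes effect.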
But the traffic is still going out to the GW. My understanding was that the CDom would see the MACs of dom1 and dom2, recognise they were on the same platform, and not go external.
The question is: what am I misunderstanding here, given that the private network on the same platform works fine?
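In case it helps anyone suggest an answer, this is how I verified the traffic is genuinely leaving the box rather than being switched internally (interface and host names are placeholders):

```shell
# On the CDom, watch the physical uplink behind the vswitch.
# If dom1<->dom2 NFS traffic shows up here, it is going out
# to the external network instead of staying on-platform.
snoop -d net0 host dom2-hostname
```

The NFS traffic between the two guests does appear on the physical interface, so the short-circuit I was expecting is not happening.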