
Solaris Cluster 4.1 installation options

Hi All,

I have to implement the following environment for Oracle E-Business environment:

1. Two T4-4 servers for the Oracle Database tier, running Solaris 11.1 with Solaris Cluster 4.1.
2. Two T4-2 servers as application servers.
3. Two T4-2 servers as DMZ servers.
4. One ZFS 7320 storage appliance in cluster mode.

All servers have 10GbE ports in addition to their onboard Ethernet interfaces; there are no Fibre Channel HBA cards. Two 72-port 10GbE switches provide redundancy. Only the DMZ and application servers connect to the external network, while the T4-4 servers and the ZFS 7320 storage appliance communicate with the application and DMZ servers over the 10GbE network.
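For reference, this is how I have been identifying the 10GbE ports versus the onboard interfaces on each node. It is only a quick sketch; net4 is a placeholder for whatever name the 10GbE devices enumerate as on these systems:

    # List physical datalinks with their media, state and speed,
    # which distinguishes the onboard 1GbE ports from the 10GbE ports.
    dladm show-phys

    # Check MTU and state on a candidate 10GbE link.
    # net4 is a placeholder; actual names depend on device enumeration.
    dladm show-linkprop -p mtu,state net4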

Considering the hardware infrastructure, can anyone please provide his/her inputs:

1. Should I use the 10GbE interfaces for the private interconnects on the Oracle Solaris Cluster nodes, or should I configure the onboard network interfaces?
2. Should I use the 10GbE interfaces for the public network on the Oracle Solaris Cluster nodes, or should I use the onboard network interfaces?
3. Is iSCSI preferred over NFS for the Oracle database files if we use ASM? (A sketch of the iSCSI initiator setup I have in mind follows this list.)
4. What are the best practices for Oracle VM Server for SPARC on Solaris Cluster when iSCSI is used for storage connectivity?
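For question 3, the iSCSI initiator configuration I have in mind on each database node looks roughly like this. It is only a sketch: 192.168.10.10 is a placeholder for the 7320's 10GbE data address, and it assumes the appliance already exposes iSCSI LUNs for the database:

    # Enable the Solaris iSCSI initiator service.
    svcadm enable network/iscsi/initiator

    # Point sendtargets discovery at the ZFS 7320 iSCSI data address
    # (192.168.10.10:3260 is a placeholder for the appliance interface).
    iscsiadm add discovery-address 192.168.10.10:3260
    iscsiadm modify discovery --sendtargets enable

    # Create device nodes for the discovered LUNs and list the targets.
    devfsadm -i iscsi
    iscsiadm list target -S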

Regards.
