Tech Article: How to Build Software Defined Networks Using Elastic Virtual Switches - Part 1


    Using Oracle Solaris 11.2

     

    by Orgad Kimchi with contributions from Girish Moodalbail

     

    This article demonstrates how to use the Elastic Virtual Switch (EVS) feature of Oracle Solaris to set up two compute nodes that host four Oracle Solaris Zones and then set up two elastic virtual switches across the two compute nodes to isolate the network traffic for two cloud tenants.

     

    Table of Contents
    About the Elastic Virtual Switch Feature of Oracle Solaris
    EVS Building Blocks
    Architecture We Will Use
    Tasks We Will Perform
    Prerequisites
    Setting Up SSH Authentication
    Setting Up the EVS Controller
    Configuring compute-node1
    Setting Up the First Zone on compute-node1
    Setting Up the Second Zone on compute-node1
    Configuring compute-node2
    Setting Up the Third Zone on compute-node2
    Setting Up the Fourth Zone on compute-node2
    Testing Our Final Configuration
    Summary
    See Also
    Acknowledgment
    About the Authors

     

    Oracle Solaris 11.2 enhances the existing, integrated software-defined networking (SDN) technologies provided by earlier releases of Oracle Solaris to provide much greater application agility without the added overhead of expensive network hardware. It now enables application-driven, multitenant cloud virtual networking across a completely distributed set of systems; decoupling from the physical network infrastructure; and application-level network service-level agreements (SLAs)—all built in as part of the platform. Enhancements and new features include the following:

     

    • Network virtualization with virtual network interface cards (VNICs), elastic virtual switches, virtual local area networks (VLANs), and virtual extensible VLANs (VXLANs)
    • Network resource management and integrated, application-level quality of service (QoS) to enforce bandwidth limits on VNICs and traffic flows
    • Cloud readiness, a core feature of the OpenStack distribution included in Oracle Solaris 11
    • Tight integration with Oracle Solaris Zones

     

    About the Elastic Virtual Switch Feature of Oracle Solaris

     

    The Elastic Virtual Switch (EVS) feature provides a built-in distributed virtual network infrastructure that can be used to deploy and manage virtual switches that are spread across several compute nodes. These compute nodes are the physical machines that host virtual machines (VMs).

     

    An elastic virtual switch is an entity that represents explicitly created virtual switches that belong to the same Layer 2 (L2) segment. An elastic virtual switch provides network connectivity between VMs connected to it from anywhere in the network.

     

    Note: With EVS, all references to the term virtual machines (or VMs) specifically refer to Oracle Solaris Zones and Oracle Solaris Kernel Zones—that is, solaris(5) and solaris-kz(5) branded zones, respectively.

     

    The Benefits of EVS

     

    EVS technology provides better manageability, flexibility, and observability. EVS is tightly integrated with the newly introduced Oracle Solaris Kernel Zones as well as with native zones, allowing a zone's VNIC to easily connect to an elastic virtual switch.

     

    If you are familiar with the Oracle Solaris Zones administration commands, such as zonecfg and zoneadm, adding EVS support will be very easy with a minimal learning curve.

     

    EVS provides centralized management and observability for ease of use, and for the monitoring of resources across all the nodes, in a single view. It provides centralized management of the following:

     

    • MAC addresses and IP addresses for the virtual ports
    • SLAs, on a per-virtual-switch or per-virtual-port basis

     

    It also enables the monitoring of runtime network traffic statistics for the virtual ports. In addition, EVS forms the back end for OpenStack networking, and it facilitates inter-VM communication (on the same compute node or across compute nodes) using either VLANs or VXLANs.
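    For example, once VPorts are in use, you can display their runtime traffic statistics with the evsstat command. The following is a minimal sketch; see the evsstat(1M) man page for the exact output format:

    root@compute-node1:~# evsstat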

     

    Management is performed through easy-to-use administration tools or through OpenStack's Horizon Dashboard using the OpenStack Neutron plugin for EVS.

     

    Fine-grained SLAs, such as network bandwidth enforcement, can be applied per VM or per tenant (a tenant can include many VMs). In addition, if you migrate a VM to a different physical system, the SLA enforcement remains intact.
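    For example, the following is a minimal sketch of capping a VPort's bandwidth with the maxbw SLA property, assuming the HR/vport0 VPort created later in this article and a bare value interpreted in Mbps:

    root@evs-controller:~# evsadm set-vportprop -p maxbw=300 HR/vport0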

     

    EVS can be deployed on multiple network fabrics. Currently EVS supports VXLANs and VLANs for maximum flexibility, and for easy integration into your existing environment. Its architecture is fabric-independent, and can be extended in the future to support additional types of network fabrics.

     

    VLANs and VXLANs

     

    VLANs have been used in networking infrastructures for many years now, and they can be used to enforce L2 isolation. Support for VLANs is now available in most operating systems, NICs, and network equipment (for example, switches, routers, and firewalls), as well as in most virtualization solutions. As virtualized data centers scale and grow, some of the shortcomings of VLAN technology start to emerge, and cloud providers need some extensions to the basic VLAN mechanism.

     

    The first issue is the VLAN namespace itself. The IEEE 802.1Q specification defines a VLAN ID to be 12 bits, which restricts the number of VLANs to 4096. (Usually some VLAN IDs are reserved for "well-known" uses, which restricts the range further.) Cloud-provider environments require accommodating different tenants in the same underlying physical infrastructure. Each tenant may in turn create multiple L2 and layer 3 (L3) networks and other network components, such as firewalls and load balancers, within their own slice of the virtualized data center. This drives the need for a greater number of L2 networks.

     

    The second issue has to do with the operational model for deploying VLANs. Although the VLAN Trunking Protocol (VTP) exists as a protocol for creating, disseminating, and deleting VLANs, as well as for pruning them so that they span only the links where they are needed, most networks disable it.

     

    Manual VLAN setup can be a tedious task, especially in a large network environment when you need to create the VLANs on the global zones in addition to creating them on the network switches. In order to create elastic and isolated virtual networks in the cloud, you can enable the GARP VLAN Registration Protocol (GVRP).

     

    GVRP reduces the chances for errors in VLAN configuration by automatically providing VLAN ID consistency across the network. In addition, GVRP enables a device to dynamically create 802.1Q-compliant VLANs on links with other devices, such as network switches that are running GVRP. The use case for this technology is a cloud environment in which a new VLAN needs to be added automatically on the network, for example, when a new user starts to use the cloud infrastructure and you want to assign a new VLAN in order to segregate the new user's network traffic from other users.

     

    In addition, a VLAN can be removed when all the interfaces that used this VLAN have been removed. In this way, you can reuse the VLAN for another user.

     

    The global zone dynamically sends updates to the network switch when VLANs are configured on the data link. The network switch updates its VLAN database, associating the VLAN with the switch port.

     

    Note: Only the host OS (the global zone) can send GVRP messages to the fabric, so GVRP does not allow a tenant guest to attack the network by sending messages to the fabric. For more information about GVRP, see Managing Network Virtualization and Network Resources in Oracle Solaris 11.2.

     

    The VXLAN specification (RFC 7348) provides a network virtualization scheme that overlays L2 over L3. VXLAN uses L3 multicast to support the transmission of multicast and broadcast traffic in the virtual network, while decoupling the virtual network from the physical infrastructure. VXLAN uses a UDP-based encapsulation to tunnel Ethernet frames. VXLAN can extend the virtual network across a set of Oracle Solaris servers, providing L2 connectivity among the hosted VMs.

     

    The VXLAN ID space is 24 bits. Doubling the field size of the 12-bit VLAN ID increases the namespace to over 16 million unique identifiers, which should provide sufficient room for expansion for years to come. VXLANs use the Internet Protocol (IP)—both unicast and multicast—as the transport medium. The ubiquity of IP networks and equipment allows the end-to-end reach of a VXLAN segment to be extended far beyond the typical reach of VLANs using 802.1Q today. For more information about VXLANs, see "VXLAN in Solaris 11.2."
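    To see the mechanism outside of EVS, you can create a VXLAN link manually with the dladm command. The following is a minimal sketch, assuming an IP interface is already configured on 192.168.1.1 and using an arbitrary segment ID of 500; later in this article, EVS creates equivalent links (such as evs-vxlan200) automatically:

    root@compute-node1:~# dladm create-vxlan -p addr=192.168.1.1,vni=500 vxlan1
    root@compute-node1:~# dladm show-vxlan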

     

    Note: VLANs and VXLANs can coexist on the same network cloud infrastructure.

     

    EVS Building Blocks

     

    EVS building blocks include the EVS manager, the EVS controller, EVS clients, EVS nodes (compute nodes), an IP network (also known as an IPnet), virtual ports (VPorts), and tenants.

     


    Figure 1. Some of the EVS building blocks

     

    EVS Manager

     

    The EVS manager is the entity that communicates with the EVS controller to define the L2 network topologies and the IP addresses that must be used on the L2 networks. The EVS manager communicates with the EVS controller by using the evsadm command. The EVS manager and the EVS controller can also be on the same Oracle Solaris node.

     

    Note: For simplicity, in this article, the EVS manager and the EVS controller will be on the same Oracle Solaris node.

     

    EVS Controller

     

    The EVS controller is the main component of the EVS framework and has a global view of the virtual network infrastructure. The EVS controller provides functionality for the configuration and administration of an elastic virtual switch and all the resources associated with it. It provides an API to the EVS manager and EVS clients in order to configure and monitor the EVS resources.

     

    In OpenStack, the Neutron network virtualization service provides network connectivity for other OpenStack services on multiple OpenStack systems and for VM instances. In Oracle Solaris, network virtualization services are provided through the Elastic Virtual Switch capability, which acts as a single point of control for creating, configuring, and monitoring virtual switches that span multiple physical servers. Applications can drive their own behavior for prioritizing network traffic across the cloud. Neutron provides an API for users to dynamically request and configure virtual networks.

     

    In a multinode OpenStack architecture, the network node is responsible for the network services, and it requires configuring the EVS controller, the Neutron DHCP agent, and, optionally, the Neutron L3 agent. EVS forms the back end for OpenStack networking, and it facilitates communication between VM instances, using either VLANs or VXLANs. For more information about multinode OpenStack architectures, see Installing and Configuring OpenStack in Oracle Solaris 11.2.

     

    Note: Part 2 of this series will cover EVS in the context of OpenStack.

     

    The EVS controller is associated with properties that you can configure by using the evsadm set-controlprop subcommand. To implement the L2 segments across physical machines, you need to configure the properties of an EVS controller with information such as the available VLAN IDs, the available VXLAN segment IDs, or an uplink port for each EVS node.

     

    Note: You must set up only one physical machine as the EVS controller in a data center.

     

    EVS Clients

     

    The dladm and zonecfg commands are EVS clients. You define the L2 network topologies through the evsadm command by creating elastic virtual switches, IPnets, and VPorts. You can then use the dladm command to connect VNICs to the L2 network topologies, or you can use the zonecfg command to connect a zone's VNIC anet resource, thereby connecting the zone to the L2 network topologies.

     

    In OpenStack, the Neutron API service runs on the controller node. The Neutron API service is an EVS client that communicates with the EVS controller installed on the network node.

     

    EVS Nodes (Compute Nodes)

     

    Compute nodes are hosts whose VNICs (or zones whose VNIC anet resources) connect to an elastic virtual switch, as shown in Figure 2. You can use commands such as dladm and zonecfg to specify VNICs that need to be connected to an elastic virtual switch.

     

    In OpenStack, the compute nodes are EVS nodes that connect to the EVS controller on the network node.

     


    Figure 2. EVS nodes connect to an elastic virtual switch

     

    IP Networks

     

    An IPnet represents a block of IPv4 or IPv6 addresses with a default router for the block. This block of IPv4 or IPv6 addresses is also known as the subnet. You can associate only one IPnet with an elastic virtual switch. All VMs that connect to the elastic virtual switch through a VPort are assigned an IP address from the IPnet that is associated with the elastic virtual switch.

     

    You can also manually assign an IP address to a VM by setting the IP address property, ipaddr, for the VPort. This IP address must be within the subnet range of the IPnet.

     

    VPorts

     

    A VPort represents the point of attachment between a VNIC and an elastic virtual switch. When a VNIC connects to a VPort, the VNIC inherits the network configuration parameters that the VPort encapsulates, such as the following:

     

    • SLA parameters, such as maximum bandwidth, class of service, and priority
    • MAC address
    • IP address

     

    When you create a VPort, using the evsadm add-vport subcommand, a randomly generated MAC address and the next available IP address from the associated IPnet are assigned to the VPort. The randomly generated MAC address has a default prefix consisting of a valid IEEE OUI with the local bit set. You can also manually specify the IP address (static IP) and the MAC address while adding a VPort.
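    For example, the following is a minimal sketch of adding a VPort with a static IP address and a fixed MAC address, using the tenant and switch names created later in this article; HR/vport2 and the property values shown are hypothetical:

    root@evs-controller:~# evsadm add-vport -T tenantA -p ipaddr=192.168.100.10,macaddr=02:08:20:aa:bb:cc HR/vport2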

     

    Note: Each elastic virtual switch can have multiple VPorts.

     

    Tenants

     

    The elastic virtual switches and their resources are logically grouped together. Each logical group is called a tenant. The defined resources for the elastic virtual switch within a tenant are not visible outside that tenant's namespace. The tenant acts as a container to hold all the tenant's resources together.

     

    Note: Each tenant can have multiple elastic virtual switches.

     

    For testing or non-production environments, you can install all the EVS components on a single system.

     

    Architecture We Will Use

     

    This article will demonstrate how to set up two compute nodes (compute-node1, compute-node2) that host four Oracle Solaris Zones (z1, z2, z3, z4), as shown in Figure 3 and described in Table 1.

     

    In addition, we will set up two elastic virtual switches (HR, ENG) across the two compute nodes for two cloud tenants (tenantA, tenantB), as shown in Figure 4 and described in Table 1.

     

    Table 1. EVS components

                                 

    Compute Node Name   Zone Name   VNIC Name   IP Address       Elastic Virtual Switch Name   Tenant Name
    compute-node1       z1          z1/net0     192.168.100.2    HR                            tenantA
    compute-node1       z2          z2/net0     192.168.200.3    ENG                           tenantB
    compute-node2       z3          z3/net0     192.168.100.3    HR                            tenantA
    compute-node2       z4          z4/net0     192.168.200.2    ENG                           tenantB

     

     


    Figure 3. Architecture we will use

     


    Figure 4. Elastic virtual switches we will use

     

    Note: Each compute node can be an x86 system, an Oracle VM Server for SPARC root domain, or an I/O domain. If the compute node is a guest domain, see the "Networking Constraint" section of this blog.

     

    Important: In the examples presented in this article, the command prompt indicates which user needs to run each command in addition to indicating the environment where the command should be run. For example, the command prompt root@evs-controller:~# indicates that user root needs to run the command from the EVS controller host, which is named evs-controller.

     

    Tasks We Will Perform

     

    In the next sections, we will perform the following operations in order to build the architecture:

     

    • Verify connectivity between the nodes (prerequisite).
    • Install the EVS packages on the EVS controller and the compute nodes (prerequisite).
    • Enable Secure Shell (SSH) connectivity between the EVS controller and the compute nodes.
    • Set up the EVS controller.
    • Configure the first compute node.
    • Set up the first zone on the first compute node.
    • Set up the second zone on the first compute node.
    • Configure the second compute node.
    • Set up the third zone on the second compute node.
    • Set up the fourth zone on the second compute node.
    • Test our final configuration.

     

    Prerequisites

     

    To perform EVS operations, you need to be superuser or a user with the EVS administration rights profile. You can also create a user and assign the EVS administration rights profile to the user. For more information, see Securing Users and Processes in Oracle Solaris 11.2.
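    For example, the following is a sketch of assigning the rights profile to an existing user named jdoe (a hypothetical user; we assume the profile is named Elastic Virtual Switch Administration, which you can confirm by listing all profiles with profiles -a):

    root@evs-controller:~# usermod -P 'Elastic Virtual Switch Administration' jdoe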

     

    Verify Connectivity Between the Nodes

     

    The EVS controller relies on a fully qualified domain name (FQDN) to connect to each host to modify VPort properties. You should verify that every compute host is able to connect to the EVS controller using the controller's FQDN. You can use the ping command to verify network connectivity.

     

    For example, to verify network connectivity between compute-node1 and the evs-controller host, run the following command:

     

    root@compute-node1:~# ping evs-controller.oracle.com
    evs-controller.oracle.com is alive

     

    To verify network connectivity between compute-node2 and the evs-controller host, run the following:

     

    root@compute-node2:~# ping evs-controller.oracle.com
    evs-controller.oracle.com is alive

     

    Install Packages on the EVS Controller

     

    You must use only one controller to manage all the elastic virtual switches in a data center. You must install the pkg:/service/network/evs package and the pkg:/system/management/rad/module/rad-evs-controller package on the system that acts as the EVS controller.

     

    To install the packages, open the evs-controller terminal and use the following commands:

     

    root@evs-controller:~# pkg install evs

               Packages to install:  1

                Services to change:  1

           Create boot environment: No

    Create backup boot environment: No

    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

    Completed                                1/1         15/15      0.1/0.1  2.3M/s

     

    PHASE                                          ITEMS

    Installing new actions                         40/40

    Updating package state database                 Done

    Updating package cache                           0/0

    Updating image state                            Done

    Creating fast lookup database                working

    Creating fast lookup database                   Done

    Updating package cache                           1/1

     

    root@evs-controller:~# pkg install rad-evs-controller

               Packages to install:  1

                Services to change:  1

           Create boot environment: No

    Create backup boot environment: No

    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

    Completed                                1/1           7/7      0.1/0.1  2.5M/s

     

    PHASE                                          ITEMS

    Installing new actions                         32/32

    Updating package state database                 Done

    Updating package cache                           0/0

    Updating image state                            Done

    Creating fast lookup database                working

    Creating fast lookup database                   Done

    Updating package cache                           1/1

     

    After you install the rad-evs-controller package, you need to use the following command to restart the rad:local service to load the EVS controller:

     

    root@evs-controller:~# svcadm restart rad:local

     

    Then verify that the rad:local service is online:

     

    root@evs-controller:~# svcs rad:local

    STATE          STIME    FMRI

    online         23:31:51 svc:/system/rad:local

     

    Note: If the EVS manager is on a different node than the EVS controller, you need to install the evs package on the EVS manager.

     

    Install the evs Package on the Compute Nodes

     

    Open the compute-node1 terminal and install the evs package:

     

    root@compute-node1:~# pkg install evs

     

    Verify that the evs service is online:

     

    root@compute-node1:~# svcs evs

    STATE          STIME    FMRI

    online         13:02:55 svc:/network/evs:default

     

    Open the compute-node2 terminal and install the evs package:

     

    root@compute-node2:~# pkg install evs

     

    Verify that the evs service is online:

     

    root@compute-node2:~# svcs evs

    STATE          STIME    FMRI

    online         13:02:55 svc:/network/evs:default

     

    Note: You need to repeat these steps for each compute node that you add in the future.

     

    Setting Up SSH Authentication

     

    EVS uses the SSH protocol as a secure transport mechanism between the components.

     

    About the evsuser User

     

    To simplify configuration, a user called evsuser, who has all the authorizations and privileges to perform EVS operations, is created when you install the evs package using the pkg install evs command.

     

    You need SSH authentication with the preshared public key for the evsadm command to communicate with the EVS controller noninteractively and securely. You need to set up the SSH authentication with the preshared public key for evsuser between the EVS controller and the EVS nodes, as follows:

     

    • For communication between the EVS nodes and the EVS controller—In the /var/user/evsuser/.ssh/authorized_keys file on the EVS controller, for each EVS node, append the public key for the root user. You need to append these public keys because the zoneadmd daemon runs as root. This daemon connects to the EVS controller and retrieves configuration information for the VNIC anet resource. For more information, see the zoneadmd(1M) man page.
    • For communication between the EVS controller and the EVS nodes—In the /var/user/evsuser/.ssh/authorized_keys file on each EVS node, append the public key for evsuser. You need to do this because the EVS controller communicates with each of the EVS nodes for setting VPort properties.

     

    Set Up SSH Authentication Between the EVS Nodes and the EVS Controller

     

    Figure 5 shows SSH authentication between the EVS nodes and the EVS controller, and the subsequent procedure describes how to set up the SSH authentication.

     


    Figure 5. Authentication between the EVS nodes and the EVS controller

     

    1. Generate an RSA key pair on compute-node1.

     

    root@compute-node1:~# ssh-keygen -t rsa

    Generating public/private rsa key pair.

    Enter file in which to save the key (/root/.ssh/id_rsa):

    Enter passphrase (empty for no passphrase):

    Enter same passphrase again:

    Your identification has been saved in /root/.ssh/id_rsa.

    Your public key has been saved in /root/.ssh/id_rsa.pub.

    The key fingerprint is:

    c0:e8:6d:61:67:13:1b:22:91:e9:5c:f9:af:2f:70:65 root@compute-node1

     

    2. Copy the public key from the /root/.ssh/id_rsa.pub file on compute-node1 to the /var/user/evsuser/.ssh/authorized_keys file on the EVS controller.

     

     

    root@compute-node1:~# scp /root/.ssh/id_rsa.pub root@evs-controller.oracle.com:/var/user/evsuser/.ssh/authorized_keys

    The authenticity of host 'evs-controller' can't be established.

    RSA key fingerprint is e8:87:92:a6:30:38:d7:bf:af:13:99:b7:33:1a:4e:58.

    Are you sure you want to continue connecting (yes/no)? yes

    Warning: Permanently added 'evs-controller' (RSA) to the list of known hosts.

    Password:

    id_rsa.pub           100% |*****************************|   400       00:00

     

    3. Log in to the EVS controller as evsuser from compute-node1 to verify whether the SSH authentication is set up.

     

    root@compute-node1:~# ssh evsuser@evs-controller.oracle.com

    Oracle Corporation      SunOS 5.11      11.2    June 2014

    evsuser@evs-controller:~$

     

    The output shows that you can log in to the EVS controller as evsuser from compute-node1 without a password.

     

    Enter the exit command in order to log out from the evs-controller host and return to compute-node1.

     

    evsuser@evs-controller:~$ exit

    Connection to evs-controller.oracle.com closed.

    root@compute-node1:/#

     

    4. Generate an RSA key pair on compute-node2.

     

    root@compute-node2:~# ssh-keygen -t rsa

     

    5. Copy the public key from the /root/.ssh/id_rsa.pub file on compute-node2 and append it to the /var/user/evsuser/.ssh/authorized_keys file on the EVS controller.

     

    root@compute-node2:~# cat /root/.ssh/id_rsa.pub | ssh root@evs-controller.oracle.com 'cat >> /var/user/evsuser/.ssh/authorized_keys'

    The authenticity of host 'evs-controller.oracle.com' can't be established.

    RSA key fingerprint is e8:87:92:a6:30:38:d7:bf:af:13:99:b7:33:1a:4e:58.

    Are you sure you want to continue connecting (yes/no)? yes

    Warning: Permanently added 'evs-controller.oracle.com' (RSA) to the list of known hosts.

    Password:

     

    6. Log in to the EVS controller as evsuser from compute-node2 to verify whether the SSH authentication is set up.

     

    root@compute-node2:~# ssh evsuser@evs-controller.oracle.com

    Last login: Sun Sep 21 07:33:18 2014 from p4259-04.us.ora

    Oracle Corporation      SunOS 5.11      11.2    June 2014

    evsuser@evs-controller:~$

     

    The output shows that you can log in to the EVS controller as evsuser from compute-node2 without a password.

     

    Enter the exit command in order to log out from the evs-controller host and return to compute-node2.

     

    evsuser@evs-controller:~$ exit

    Connection to evs-controller.oracle.com closed.

    root@compute-node2:~#

     

    7. Generate an RSA key pair on the EVS controller.

     

    root@evs-controller:~# ssh-keygen -t rsa

     

    8. Copy the public key from the /root/.ssh/id_rsa.pub file and append it to the /var/user/evsuser/.ssh/authorized_keys file on the EVS controller.

     

    root@evs-controller:~# cat /root/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys

     

    9. From the EVS controller, log in to the EVS controller as evsuser to verify whether the SSH authentication is set up.

     

    root@evs-controller:~# ssh evsuser@evs-controller.oracle.com

    Last login: Tue Oct 21 02:34:51 2014

    Oracle Corporation      SunOS 5.11      11.2    June 2014

    evsuser@evs-controller:~$

     

    Enter the exit command in order to log out from the evsuser account:

     

    evsuser@evs-controller:~$ exit
    Connection to evs-controller.oracle.com closed.

     

     

    Set Up SSH Authentication Between the EVS Controller and the EVS Nodes

     

    Figure 6 shows SSH authentication between the EVS controller and the EVS nodes, and the subsequent procedure describes how to set up the SSH authentication.

     


    Figure 6. Authentication between the EVS controller and the EVS nodes

     

    1. Become the evsuser user on the EVS controller.

     

    root@evs-controller# su - evsuser

     

    2. Generate an RSA key pair on the EVS controller for evsuser.

     

    evsuser@evs-controller$ ssh-keygen -t rsa

     

    3. Copy the public key from the /var/user/evsuser/.ssh/id_rsa.pub file on the EVS controller to the /var/user/evsuser/.ssh/authorized_keys file on compute-node1.

     

    Note: You need to provide the root password of compute-node1.

     

    evsuser@evs-controller:~$ scp /var/user/evsuser/.ssh/id_rsa.pub root@compute-node1:/var/user/evsuser/.ssh/authorized_keys

    The authenticity of host 'compute-node1 (10.129.195.32)' can't be established.

    RSA key fingerprint is 2f:f6:c9:1d:30:5f:f5:8c:19:70:5a:27:83:ab:f6:24.

    Are you sure you want to continue connecting (yes/no)? yes

    Warning: Permanently added 'compute-node1,10.129.195.32' (RSA) to the list of known hosts.

    Password:

    id_rsa.pub           100% |*****************************|   404       00:00

     

    4. Repeat the equivalent of Step 3 for compute-node2:

     

    Note: You need to provide the root password of compute-node2.

     

    evsuser@evs-controller:~$ scp /var/user/evsuser/.ssh/id_rsa.pub root@compute-node2:/var/user/evsuser/.ssh/authorized_keys

    The authenticity of host 'compute-node2 (10.129.195.22)' can't be established.

    RSA key fingerprint is 83:3a:63:ea:bb:38:bd:5f:e8:10:f1:81:da:55:09:9e.

    Are you sure you want to continue connecting (yes/no)? yes

    Warning: Permanently added 'compute-node2,10.129.195.22' (RSA) to the list of known hosts.

    Password:

    id_rsa.pub           100% |*****************************|   404       00:00

     

    5. Repeat the equivalent of Step 3 for the EVS controller:

     

    evsuser@evs-controller:~$ cat /var/user/evsuser/.ssh/id_rsa.pub >> /var/user/evsuser/.ssh/authorized_keys

     

    6. Verify the SSH setup:

     

    Log in to compute-node1 as evsuser from the EVS controller to verify whether the SSH authentication is set up:

     

    evsuser@evs-controller:~$ ssh evsuser@compute-node1

    Oracle Corporation      SunOS 5.11      11.2    June 2014

    evsuser@compute-node1:~$

     

    The output shows that you can log in to the node as evsuser from the EVS controller without a password.

     

    Enter the exit command to log out from compute-node1 and return to the EVS controller.

     

    evsuser@compute-node1:~$ exit
    Connection to compute-node1 closed.

     

    Log in to compute-node2 as evsuser from the EVS controller to verify whether the SSH authentication is set up:

     

    evsuser@evs-controller:~$ ssh evsuser@compute-node2

    Oracle Corporation      SunOS 5.11      11.2    June 2014

    evsuser@compute-node2:~$

     

    Enter the exit command to log out from compute-node2 and return to the EVS controller.

     

    evsuser@compute-node2:~$ exit
    Connection to compute-node2 closed.

     

    Log in to the EVS controller as evsuser from the EVS controller to verify whether the SSH authentication is set up:

     

    evsuser@evs-controller:~$ ssh evsuser@evs-controller
    Oracle Corporation      SunOS 5.11      11.2    June 2014

     

     

    Setting Up the EVS Controller

     

    After you set up the SSH authentication, you need to specify the EVS controller and set the EVS controller's properties. The assumption is that the controller property is set to ssh://evsuser@evs-controller.oracle.com on the EVS nodes, the EVS manager, and the EVS controller.

     

    1. Specify the EVS controller:

     

    root@evs-controller:~# evsadm set-prop -p controller=ssh://evsuser@evs-controller.oracle.com

     

    Then verify the modification:

     

    root@evs-controller:~# evsadm show-prop

    PROPERTY            PERM VALUE                         DEFAULT

    controller          rw   ssh://evsuser@evs-controller.oracle.com -

     

    You can see from the command output that the evs-controller.oracle.com host is the EVS controller.

     

    2. Set the EVS controller properties:

     

    First, determine whether you want to implement the elastic virtual switch using a VLAN, a VXLAN, or both. Then set the properties for the corresponding type of L2 topology that must be used for the elastic virtual switch. This example uses a VXLAN.

     

    If you use a VXLAN to implement the elastic virtual switch, you need to set the vxlan-range and uplink-port properties or the vxlan-addr property.

     

    Optionally, you can also set the vxlan-mgroup property, which specifies the multicast address that needs to be used. The VXLAN link will use this address to discover other VXLAN links on the same VXLAN segment. If this property is not set, the VXLAN link will use the default all-hosts address (224.0.0.1).
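    For example, the following minimal sketch overrides the default multicast group (225.0.0.1 is an arbitrary example address):

    root@evs-controller:~# evsadm set-controlprop -p vxlan-mgroup=225.0.0.1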

     

    Optionally, you can set the vxlan-ipvers property if you want to set up IPv6 addresses. The default is to set up IPv4 addresses:

     

    root@evs-controller:~# evsadm set-controlprop -p l2-type=vxlan

     

    Set the VXLAN range:

     

    root@evs-controller:~# evsadm set-controlprop -p vxlan-range=200-300

     

    Set the uplink-port property to specify the data link that is used for the VXLAN. On top of this data link, the EVS nodes' VNICs will be created by the EVS controller. In the following example, all the EVS nodes will use net2 as the uplink port:

     

    root@evs-controller:~# evsadm set-controlprop -p uplink-port=net2

     

    Note: If you are using VXLAN as the layer 2 topology, you need to create an IP interface on data link net2 on all the EVS nodes. (A VXLAN link is a virtual link that is created over an IP interface that will be used for receiving and transmitting VXLAN packets.) We will do this in a later step.

     

    If the EVS nodes do not have the same data link, then for every EVS node, you need to specify the data link for the uplink-port property. For example, consider two compute nodes, compute-node1 with the data link net2 and compute-node2 with the data link net3. You need to specify the data links of both the hosts when you set the uplink-port property, as follows:

     

    root@evs-controller:~# evsadm set-controlprop -h compute-node1.oracle.com -p uplink-port=net2
    root@evs-controller:~# evsadm set-controlprop -h compute-node2.oracle.com -p uplink-port=net3

     

    (Optional) In order to add high availability to the EVS nodes, you can set up the uplink port on top of link aggregation, as shown in Figure 7.

     


    Figure 7. Setting up link aggregation

     

    For example, run the following command to set up the uplink port for a link aggregation named aggr0 that has been configured on compute-node1:

     

    root@compute-node1:~# dladm show-link aggr0

    LINK                CLASS     MTU    STATE    OVER

    aggr0               aggr      1500   up       net0 net1 net2 net3

    root@evs-controller:~# evsadm set-controlprop -h compute-node1 -p uplink-port=aggr0

     

     

    For more information, see "Using Datalink Multipathing to Add High Availability to Your Network."
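    For reference, the aggr0 aggregation shown above could have been created on the compute node with a command like the following sketch (we assume net0 through net3 are unused physical data links; -m dlmp selects datalink multipathing mode):

    root@compute-node1:~# dladm create-aggr -m dlmp -l net0 -l net1 -l net2 -l net3 aggr0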

     

    Once you have completed the EVS configuration, you can verify the EVS controller properties:

     

    root@evs-controller:~# evsadm show-controlprop -p l2-type,vxlan-range,uplink-port

    PROPERTY            PERM VALUE               DEFAULT             HOST

    l2-type             rw   vxlan               vlan                --

    uplink-port         rw   net2                --                  --

    vxlan-range         rw   200-300             --                  --

     

    You can see that we are using VXLAN for the L2 topology and the uplink port is net2. In addition, VXLAN IDs 200 through 300 have been set aside for elastic virtual switches.

     

    3. Create the elastic virtual switch HR for the tenant tenantA.

     

    root@evs-controller:~# evsadm create-evs -T tenantA HR

     

    Then verify the switch creation:

     

    root@evs-controller:~# evsadm show-evs HR

    EVS           TENANT        STATUS NVPORTS IPNETS      HOST

    HR            tenantA       idle   0       --          --

     

    The following columns are shown in the output:

     

    EVS: The name of the elastic virtual switch.
    TENANT: The name of the tenant that owns the switch.
    STATUS: Whether the switch is idle or busy. It is busy if it has at least one VPort that has a VNIC connected to it.
    NVPORTS: The number of VPorts associated with the switch.
    IPNETS: The list of IP networks associated with the switch. Currently only one IP network can be associated with an elastic virtual switch.
    HOST: The list of hosts that the switch spans across.

     

    Note: We haven't created any VPorts or IP networks yet, which is why 0 is shown in the NVPORTS column and the IPNETS column is empty.

     

    4. Create the elastic virtual switch ENG for the tenant tenantB.

     

    root@evs-controller:~# evsadm create-evs -T tenantB ENG

     

    Then verify the switch creation:

     

    root@evs-controller:~# evsadm show-evs ENG

    EVS           TENANT        STATUS NVPORTS IPNETS      HOST

    ENG           tenantB       idle   0       --          --

     

    5. Add the hr_ipnet IP network to the HR elastic virtual switch.

     

    root@evs-controller:~# evsadm add-ipnet -T tenantA -p subnet=192.168.100.0/24 HR/hr_ipnet

     

    6. Add the eng_ipnet IP network to the ENG elastic virtual switch.

     

    root@evs-controller:~# evsadm add-ipnet -T tenantB -p subnet=192.168.200.0/24 ENG/eng_ipnet

     

    7. Verify the IP network creation:

     

    root@evs-controller:~# evsadm show-ipnet

    NAME                TENANT        SUBNET            DEFROUTER         AVAILRANGE

    HR/hr_ipnet         tenantA       192.168.100.0/24  192.168.100.1     192.168.100.2-192.168.100.254

    ENG/eng_ipnet       tenantB       192.168.200.0/24  192.168.200.1     192.168.200.2-192.168.200.254

     

    The following columns are shown in the output:

     

    NAME: The name of the IP network along with name of the elastic virtual switch with which it is associated. It's in the form <evsname/ipnetname>.
    TENANT: The name of the tenant that owns the switch.
    SUBNET: The IPv4 or IPv6 subnet for the IP network.
    DEFROUTER: The IP address of the default router for the given IP network.
    AVAILRANGE: A comma-separated list of available IP addresses that can be assigned to VPorts.

     

    8. Add the VPort vport0 to the elastic virtual switch HR.

     

    root@evs-controller:~# evsadm add-vport -T tenantA HR/vport0

     

    9. Add the VPort vport1 to the elastic virtual switch ENG.

     

    root@evs-controller:~# evsadm add-vport -T tenantB ENG/vport1

     

    10. Verify the creation of the VPorts.

     

    root@evs-controller:~# evsadm show-vport

    NAME                TENANT        STATUS VNIC         HOST

    HR/vport0           tenantA       free   --           --

    ENG/vport1          tenantB       free   --           --

     

    The following columns are shown in the output:

     

    NAME: The name of the VPort along with name of the elastic virtual switch with which it is associated. It's of the form <evsname/vportname>.
    TENANT: The name of the tenant that owns the switch.
    STATUS: Whether the VPort is used or free. A VPort is used if it has a VNIC associated with it. Otherwise, it's free.
    VNIC: The name of the VNIC associated with the VPort.
    HOST: The host that has the VNIC associated with the VPort.

     

    11. Verify the elastic virtual switches that were created for tenantA and tenantB:

     

    root@evs-controller:~# evsadm

    NAME          TENANT        STATUS VNIC         IP                HOST

    HR            tenantA       idle   --           hr_ipnet          --

       vport0     --            free   --           192.168.100.2/24  --

    ENG           tenantB       idle   --           eng_ipnet         --

       vport1     --            free   --           192.168.200.2/24  --

     

    12. Check the MAC address and the IP address associated with HR/vport0.

     

    root@evs-controller:~# evsadm show-vportprop -p macaddr,ipaddr HR/vport0

    NAME              TENANT      PROPERTY  PERM VALUE         DEFAULT   POSSIBLE

    HR/vport0         tenantA     ipaddr    r-   192.168.100.2/24 --     --

    HR/vport0         tenantA     macaddr   r-   2:8:20:6c:9b:af --      --

     

    ipaddr represents the IP address associated with the VPort. When a VNIC connects to a VPort, this address will be applied to the VNIC. By default, the EVS controller will automatically select an IP address from the IP network associated with the elastic virtual switch. If a zone or VNIC needs to be assigned a particular IP address, that can be achieved by manually setting the ipaddr property to the desired IP address when the VPort is added to the elastic virtual switch.

     

    Note: Once a VPort is created, its IP address cannot be changed through the evsadm set-vportprop command.

     

    macaddr represents the MAC address associated with the VPort. The VNIC that connects to this VPort basically inherits the MAC address from the VPort. By default, the EVS controller will generate a random MAC address for the VPort. If a VNIC needs to be assigned a particular MAC address, that can be achieved by manually setting the macaddr property to the desired MAC address when the VPort is added to the elastic virtual switch.

     

    Note: Once a VPort is created, its MAC address cannot be changed through the evsadm set-vportprop command.

     

    13. Check the VXLAN segment ID associated with the elastic virtual switches HR and ENG.

     

    root@evs-controller:~# evsadm show-evs -L

    EVS           TENANT        VID  VNI

    HR            tenantA       --   200

    ENG           tenantB       --   201

     

    14. Create an IP interface on data link net2. This interface will be used to encapsulate the VXLAN packets on all the EVS nodes.

     

    Create the IP interface on top of data link net2:

     

    root@evs-controller:~# ipadm create-ip net2

     

    Create a static IPv4 address on the net2 interface:

     

    root@evs-controller:~# ipadm create-addr -T static -a local=192.168.1.3 net2/addr

     

    Verify the configuration:

     

    root@evs-controller:~# ipadm show-addr net2

        ADDROBJ           TYPE     STATE        ADDR

        net2/addr         static   ok           192.168.1.3/24

     

    Configuring compute-node1

     

    The next step is the configuration of the first EVS node, compute-node1.

     

    1. Specify the EVS controller.

     

     

    root@compute-node1:~# evsadm set-prop -p controller=ssh://evsuser@evs-controller.oracle.com

     

    Note: All the compute nodes should have an FQDN set up. EVS relies on this to connect to the host to modify VPort properties.

     

    Then verify that the evs-controller host is the EVS controller:

     

    root@compute-node1:~# evsadm show-prop

    PROPERTY            PERM VALUE                         DEFAULT

    controller          rw   ssh://evsuser@evs-controller.oracle.com -

     

     

    2. Create an IP interface on data link net2. This interface will be used to encapsulate the VXLAN packets on all the EVS nodes.

     

     

    Create the IP interface on top of data link net2:

     

    root@compute-node1:~# ipadm create-ip net2

     

    Create a static IPv4 address on the net2 interface:

     

    root@compute-node1:~# ipadm create-addr -T static -a local=192.168.1.1 net2/addr

     

    Verify the configuration:

     

    root@compute-node1:~# ipadm show-addr net2

    ADDROBJ           TYPE     STATE        ADDR

    net2/addr         static   ok           192.168.1.1/24

     

    Verify network connectivity between the net2 network interfaces on evs-controller and compute-node1:

     

    root@compute-node1:~# ping 192.168.1.3
    192.168.1.3 is alive

     

    Note: 192.168.1.3 is the IP address of net2 on the EVS controller.

     

     

     

    Setting Up the First Zone on compute-node1

     

    1. Now, let's set up the first Oracle Solaris Zone, z1, on compute-node1. During the zone setup, we will configure the VNIC anet, set the zone's tenant to be tenantA, connect the zone to the elastic virtual switch HR, and set the VPort to vport0.

     

     

    root@compute-node1:~# zonecfg -z z1

    Use 'create' to begin configuring a new zone.

    zonecfg:z1> create

    create: Using system default template 'SYSdefault'

    zonecfg:z1> set tenant=tenantA

    zonecfg:z1> select anet linkname=net0

    zonecfg:z1:anet> set evs=HR

    zonecfg:z1:anet> set vport=vport0

    zonecfg:z1:anet> end

    zonecfg:z1> commit

    zonecfg:z1> exit

     

     

    2. Install zone z1.

     

     

    root@compute-node1:~# zoneadm -z z1 install

    The following ZFS file system(s) have been created:

        rpool/export/zones

        rpool/export/zones/z1

    [truncated output]

     

     

    3. Boot the zone:

     

     

    root@compute-node1:~# zoneadm -z z1 boot 

     

     

    4. Use the zlogin utility to configure z1. zlogin is a utility that is used to enter a non-global zone from the global zone. zlogin has three modes: interactive, noninteractive, and console. For our first login to the newly created zone, we will use the console (-C) mode.

     

     

    root@compute-node1:~# zlogin -C z1

     

    You will see the progress of the initial boot, and then after about two minutes, you will get the System Configuration Tool window shown in Figure 8. Subsequent boots will be much faster.

     

     


    Figure 8. System Configuration Tool window

     

    Press F2 to continue. Then specify the following information in the interactive screens of the System Configuration tool:

     

    For the computer name, specify z1.

     

    Note: Since the network configuration is managed by the EVS controller, you will get the following message on the network screen: No configurable network interfaces found. They are all controlled from global zone. Press F2 to continue.

     

    For Time Zone Regions, select Americas.
    For Time Zone Locations, select United States.
    For Time Zone, select Pacific Time.
    For Locale: Language, select English.
    For Locale: Territory, select United States (en_US.UTF-8).
    For Keyboard, select US-English.
    Enter your root password, but do not enter anything for the optional user account.
    On the last screen, review the settings you have specified before pressing F2 to apply the settings.

     

     

    5. After a few seconds, you will get the zone's console prompt. Enter the root password that you set up:

     

     

    z1 console login: root

    Password:

    Sep 22 11:18:56 z1 login: ROOT LOGIN /dev/console

    Oracle Corporation      SunOS 5.11      11.2    June 2014

     

     

    6. Wait one minute, and then verify that all the services are up and running:

     

     

    root@z1:~# svcs -xv

     

    If all the services are up and running without any issues, the command will return to the system prompt without any error message.

     

     

    7. Run the ipadm command to see the IP address that has been assigned to the VNIC:

     

     

    root@z1:~# ipadm

    NAME              CLASS/TYPE STATE        UNDER      ADDR

    lo0               loopback   ok           --         --

       lo0/v4         static     ok           --         127.0.0.1/8

       lo0/v6         static     ok           --         ::1/128

    net0              ip         ok           --         --

       net0/v4        inherited  ok           --         192.168.100.2/24

     

    You can see that the VNIC's IP address is 192.168.100.2. This is the IP address associated with HR/vport0.

     

    inherited indicates that the address was configured through the allowed-address zonecfg property, which is available for both solaris and solaris-kz branded zones.

     

     

    8. The next step is to install the iperf tool inside zone z1. This tool provides the ability to measure the performance of a network; it can observe TCP or UDP throughput and provide real-time statistics. We will use this tool to measure the maximum network bandwidth between the Oracle Solaris Zones that will be managed by EVS.

     

     

    root@z1:~# pkg install iperf

               Packages to install:  1

           Create boot environment: No

    Create backup boot environment: No

    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED

    Completed                                1/1           6/6      0.1/0.1  2.5M/s

     

    PHASE                                          ITEMS

    Installing new actions                         24/24

    Updating package state database                 Done

    Updating package cache                           0/0

    Updating image state                            Done

    Creating fast lookup database                   Done

    Updating package cache                           1/1
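
    Once the remaining zones are up, we will use iperf between zones that share an elastic virtual switch. As a preview, here is a minimal sketch using the addresses from Table 1: run the server in z1, and connect to it from z3.

    root@z1:~# iperf -s
    root@z3:~# iperf -c 192.168.100.2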

     

     

     

    9. Disconnect from the zone's virtual console by entering a tilde (~) followed by a period:

     

     

    root@z1:~# ~.
    [Connection to zone 'z1' console closed]

     

     

     

    Checking What Happens Behind the Scenes

     

    1. Using the following command, we can see the VNIC that has been created on compute-node1:

     

     

    root@compute-node1:~# dladm show-vnic

    LINK                OVER              SPEED  MACADDRESS        MACADDRTYPE VIDS

    z1/net0             evs-vxlan200      0      2:8:20:15:95:d0   fixed       0

     

     

    2. By running the following command, we can see that the status of the HR elastic virtual switch is busy, it now has one VPort (indicated by the 1 in the NVPORTS column), and its IP network is hr_ipnet. In addition, we can see that compute-node1 is using the HR elastic virtual switch.

     

     

    root@compute-node1:~# evsadm show-evs

    EVS           TENANT        STATUS NVPORTS IPNETS      HOST

    HR            tenantA       busy   1       hr_ipnet    compute-node1

    ENG           tenantB       idle   1       eng_ipnet   --

     

     

    3. Using the following command, we can see that the free IP address range for HR/hr_ipnet is now 192.168.100.3 through 192.168.100.254, because IP address 192.168.100.2 has been assigned to vport0. We can also see that the free IP address range for ENG/eng_ipnet is 192.168.200.3 through 192.168.200.254, because IP address 192.168.200.2 has been assigned to vport1.

     

     

    root@compute-node1:~# evsadm show-ipnet

    NAME                TENANT        SUBNET            DEFROUTER         AVAILRANGE

    HR/hr_ipnet         tenantA       192.168.100.0/24  192.168.100.1     192.168.100.3-192.168.100.254

    ENG/eng_ipnet       tenantB       192.168.200.0/24  192.168.200.1     192.168.200.3-192.168.200.254

     

     

    4. And the following command shows that VNIC net0 in the z1 zone is using vport0 on host compute-node1:

     

     

    root@compute-node1:~# evsadm show-vport

    NAME                TENANT        STATUS VNIC         HOST

    HR/vport0           tenantA       used   z1/net0      compute-node1

    ENG/vport1          tenantB       free   --           --

     

     

    5. Now run the following command:

     

     

    root@compute-node1:~# dladm show-vxlan

    LINK                ADDR                     VNI   MGROUP

    evs-vxlan200        192.168.1.1              200   224.0.0.1

     

    The following columns are shown in the output:

     

    LINK: The name of the VXLAN link.
    ADDR: The address of the IP interface associated with the VXLAN link.
    VNI: The VXLAN segment number that the VXLAN link belongs to.
    MGROUP: The multicast group associated with the VXLAN link.

     

     

     

    Setting Up the Second Zone on compute-node1

     

    1. Let's set up the second zone, z2, assign it the VNIC anet, and connect it to the elastic virtual switch ENG.

     

     

    Note: We don't associate the anet VNIC with a VPort; a new VPort called sys-vport0 will be created automatically and connected to the VNIC.

     

    root@compute-node1:~# zonecfg -z z2

    Use 'create' to begin configuring a new zone.

    zonecfg:z2> create

    create: Using system default template 'SYSdefault'

    zonecfg:z2> set tenant=tenantB

    zonecfg:z2> select anet linkname=net0

    zonecfg:z2:anet> set evs=ENG

    zonecfg:z2:anet> end

    zonecfg:z2> commit

    zonecfg:z2> exit

     

     

    2. Next, install and boot zone z2. To do this, we will clone z1.

     

     

    First, shut down z1, because we can't clone a running zone:

     

    root@compute-node1:~# zoneadm -z z1 shutdown

     

    Clone z1 to create z2 and then boot z2:

     

    root@compute-node1:~# zoneadm -z z2 clone z1
    root@compute-node1:~# zoneadm -z z2 boot

     

     

    3. Use the zlogin utility to log in to zone z2 and complete the zone configuration.

     

     

    root@compute-node1:~# zlogin -C z2
    [Connected to zone 'z2' console]

     

    Wait about one minute until the System Configuration Tool window appears. Press F2 to continue, and then specify the following information in the interactive screens of the System Configuration tool:

     

    For the host name, specify z2.

     

    Note: On the network screen, you will receive the following message: No configurable network interfaces found. They are all controlled from global zone. Press F2 to continue.

     

    For Time Zone Regions, select Americas.
    For Time Zone Locations, select United States.
    For Time Zone, select Pacific Time.
    For Locale: Language, select English.
    For Locale: Territory, select United States (en_US.UTF-8).
    For Keyboard, select US-English.
    Enter your root password, but do not enter anything for the optional user account.
    On the last screen, review the settings before pressing F2 to apply the settings.

     

     

    4. After a few seconds, you will get the zone's console prompt. Enter the root password that you set up:

     

     

    z2 console login: root

    Password:

    Sep 22 07:58:17 z2 login: ROOT LOGIN /dev/console

    Oracle Corporation      SunOS 5.11      11.2    June 2014

     

     

    5. Wait one minute, and then verify that all the services are up and running:

     

     

    root@z2:~# svcs -xv

     

    If all the services are up and running without any issues, the command will return to the system prompt without any error message.

     

     

    6. Run the ipadm command to see the IP address that has been assigned to the VNIC:

     

     

    root@z2:~# ipadm

    NAME              CLASS/TYPE STATE        UNDER      ADDR

    lo0               loopback   ok           --         --

       lo0/v4         static     ok           --         127.0.0.1/8

       lo0/v6         static     ok           --         ::1/128

    net0              ip         ok           --         --

       net0/v4        inherited  ok           --         192.168.200.3/24

     

    You can see that the net0 interface has IP address 192.168.200.3, and its configuration has been inherited from the host.

     

     

    7. Disconnect from the zone's virtual console by entering a tilde (~) followed by a period:

     

     

    root@z2:~# ~.
    [Connection to zone 'z2' console closed]

     

     

    8. You can now reboot z1:

     

     

    root@compute-node1:~# zoneadm -z z1 boot

     

     

     

    Configuring compute-node2

     

    The next step is to configure the second EVS node, compute-node2.

     

    1. Specify the EVS controller.

     

     

    root@compute-node2:~# evsadm set-prop -p controller=ssh://evsuser@evs-controller.oracle.com

     

    Note: All the compute nodes should have an FQDN set up. EVS relies on this to connect to the host to modify VPort properties.

     

    Then verify that the evs-controller host is the EVS controller:

     

    root@compute-node2:~# evsadm show-prop

    PROPERTY            PERM VALUE                         DEFAULT

    controller          rw   ssh://evsuser@evs-controller.oracle.com -

     

     

    2. Create an IP interface on data link net2. This interface will be used to encapsulate the VXLAN packets on all the EVS nodes.

     

     

    Create the IP interface on top of data link net2:

     

    root@compute-node2:~# ipadm create-ip net2

     

    Create a static IPv4 address on the net2 interface:

     

    root@compute-node2:~# ipadm create-addr -T static -a local=192.168.1.2 net2/addr

     

    Verify the configuration:

     

    root@compute-node2:~# ipadm show-addr net2
    ADDROBJ           TYPE     STATE        ADDR
    net2/addr         static   ok           192.168.1.2/24

     

     

    3. Verify network connectivity between the net2 network interfaces on evs-controller and compute-node2.

     

     

    root@compute-node2:~# ping 192.168.1.3
    192.168.1.3 is alive

     

    Note: 192.168.1.3 is the IP address of net2 on the EVS controller.
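
    If this ping fails, verify on the controller that the VXLAN underlay was configured over this subnet. A quick check is sketched below; the l2-type and vxlan-addr property names are assumed from the earlier controller setup, so adjust them to match your configuration:

    root@evs-controller:~# evsadm show-controlprop -p l2-type,vxlan-addr

    The vxlan-addr value should cover the 192.168.1.0/24 subnet used by the net2 interfaces.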

     

     

     

    Setting Up the Third Zone on compute-node2

     

    1. Now, let's set up the third zone, z3, on compute-node2. During the zone setup, we will set its tenant to tenantA and connect its anet VNIC to elastic virtual switch HR.

     

     

    root@compute-node2:~# zonecfg -z z3
    Use 'create' to begin configuring a new zone.
    zonecfg:z3> create
    zonecfg:z3> set tenant=tenantA
    zonecfg:z3> select anet linkname=net0
    zonecfg:z3:anet> set evs=HR
    zonecfg:z3:anet> end
    zonecfg:z3> commit
    zonecfg:z3> exit
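
    Before installing the zone, you can verify its configuration with zonecfg's info subcommand (a quick sanity check):

    root@compute-node2:~# zonecfg -z z3 info tenant
    root@compute-node2:~# zonecfg -z z3 info anet

    The anet output should show evs set to HR.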

     

     

     

    2. Install and boot zone z3.

     

     

    root@compute-node2:~# zoneadm -z z3 install
    root@compute-node2:~# zoneadm -z z3 boot

     

     

    3. Use the zlogin utility to configure z3.

     

     

    root@compute-node2:~# zlogin -C z3
    [Connected to zone 'z3' console]

     

    Wait one minute until the System Configuration Tool window appears. Press F2 to continue, and then specify the following information in the interactive screens of the System Configuration Tool:

     

    For the host name, specify z3.

     

    Note: On the network screen, you will receive the following message: No configurable network interfaces found. They are all controlled from global zone. Press F2 to continue.

     

    For Time Zone Regions, select Americas.
    For Time Zone Locations, select United States.
    For Time Zone, select Pacific Time.
    For Locale: Language, select English.
    For Locale: Territory, select United States (en_US.UTF-8).
    For Keyboard, select US-English.
    Enter your root password, but do not enter anything for the optional user account.
    On the last screen, review the settings before pressing F2 to apply the settings.

     

     

    4. After about one minute, you will get the zone's console prompt. Enter the root password that you set up:

     

     

    z3 console login: root
    Password:

     

     

    5. Wait one minute, and then verify that all the services are up and running:

    root@z3:~# svcs -xv

    If all the services are up and running without any issues, the command will return to the system prompt without any error message.

    6. Run the ipadm command to see the IP address that has been assigned to the VNIC:

    root@z3:~# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    lo0               loopback   ok           --         --
       lo0/v4         static     ok           --         127.0.0.1/8
       lo0/v6         static     ok           --         ::1/128
    net0              ip         ok           --         --
       net0/v4        inherited  ok           --         192.168.100.3/24

    You can see that the address is within the range of 192.168.100.2 to 192.168.100.254 on IP network hr_ipnet.

     

     

    7. The next step is to install the iperf network measurement tool inside z3 using the following command:

     

     

    root@z3:~# pkg install iperf
               Packages to install:  1
           Create boot environment: No
    Create backup boot environment: No

    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                                1/1           6/6      0.1/0.1  2.5M/s

    PHASE                                          ITEMS
    Installing new actions                         24/24
    Updating package state database                 Done
    Updating package cache                           0/0
    Updating image state                            Done
    Creating fast lookup database                   Done
    Updating package cache                           1/1
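
    As a preview of how iperf can be used to measure throughput between zones on the same elastic virtual switch, here is a sketch of a test within tenantA; it assumes iperf has also been installed in z1:

    root@z3:~# iperf -s                        # start an iperf server in z3 (listens on TCP port 5001)
    root@z1:~# iperf -c 192.168.100.3 -t 10    # from z1, run a 10-second throughput test against z3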

     

     

    8. Disconnect from the zone's virtual console by entering a tilde (~) followed by a period:

     

    root@z3:~# ~.
    [Connection to zone 'z3' console closed]

     

     

    Setting Up the Fourth Zone on compute-node2

     

    1. Let's create the fourth zone, z4. During the zone setup, we will set its tenant to tenantB, connect its anet VNIC to elastic virtual switch ENG, and set its VPort to vport1.

     

    root@compute-node2:~# zonecfg -z z4
    Use 'create' to begin configuring a new zone.
    zonecfg:z4> create
    create: Using system default template 'SYSdefault'
    zonecfg:z4> set tenant=tenantB
    zonecfg:z4> select anet linkname=net0
    zonecfg:z4:anet> set vport=vport1
    zonecfg:z4:anet> set evs=ENG
    zonecfg:z4:anet> end
    zonecfg:z4> commit
    zonecfg:z4> exit

     

    2. Next, install and then boot z4. To do this, we will clone z3.

     

    First, shut down z3:

     

    root@compute-node2:~# zoneadm -z z3 shutdown

     

    Clone z3 to create z4:

     

    root@compute-node2:~# zoneadm -z z4 clone z3

     

    Then boot z4:

     

    root@compute-node2:~# zoneadm -z z4 boot

     

    3. Use the zlogin utility to log in to zone z4 and complete the zone configuration.

     

    root@compute-node2:~# zlogin -C z4
    [Connected to zone 'z4' console]

     

    Wait one minute until the System Configuration Tool window appears. Press F2 to continue, and then specify the following information in the interactive screens of the System Configuration Tool:

     

    For the host name, specify z4.

     

    Note: On the network screen, you will receive the following message: No configurable network interfaces found. They are all controlled from global zone. Press F2 to continue.

     

    For Time Zone Regions, select Americas.
    For Time Zone Locations, select United States.
    For Time Zone, select Pacific Time.
    For Locale: Language, select English.
    For Locale: Territory, select United States (en_US.UTF-8).
    For Keyboard, select US-English.
    Enter your root password, but do not enter anything for the optional user account.
    On the last screen, review the settings before pressing F2 to apply the settings.

     

    4. After about one minute, you will get the zone's console prompt. Enter the root password that you set up:

     

    z4 console login: root
    Password:

     

    5. Wait one minute, and then verify that all the services are up and running:

     

    root@z4:~# svcs -xv

     

    If all the services are up and running without any issues, the command will return to the system prompt without any error message.

     

    6. Run the ipadm command to see the IP address that has been assigned to the VNIC:

     

    root@z4:~# ipadm
    NAME              CLASS/TYPE STATE        UNDER      ADDR
    lo0               loopback   ok           --         --
       lo0/v4         static     ok           --         127.0.0.1/8
       lo0/v6         static     ok           --         ::1/128
    net0              ip         ok           --         --
       net0/v4        inherited  ok           --         192.168.200.2/24

    You can see that it's within the range of 192.168.200.2 to 192.168.200.254 on IP network eng_ipnet.

     

    7. Disconnect from the zone's virtual console by entering a tilde (~) followed by a period:

     

    root@z4:~# ~.
    [Connection to zone 'z4' console closed]

     

    8. You can now boot z3 again:

     

    root@compute-node2:~# zoneadm -z z3 boot

     

     

    Testing Our Final Configuration

     

    1. Verify the VNIC anet resources that were created.

     

    root@compute-node2:~# dladm show-vnic -c
    LINK            TENANT        EVS       VPORT       OVER            MACADDRESS       VIDS
    z4/net0         tenantB       ENG       vport1      evs-vxlan201    2:8:20:20:d7:9e  0
    z3/net0         tenantA       HR        sys-vport1  evs-vxlan200    2:8:20:70:59:bf  0

     

    2. Display the information related to the VPorts.

     

    root@compute-node2:~# evsadm show-vport -o all
    NAME                VPORT         EVS     TENANT    STATUS  VNIC      HOST           IPADDR            MACADDR
    HR/vport0           vport0        HR      tenantA   used    z1/net0   compute-node1  192.168.100.2/24  2:8:20:d6:7b:78
    ENG/vport1          vport1        ENG     tenantB   used    z4/net0   compute-node2  192.168.200.2/24  2:8:20:35:39:d
    ENG/sys-vport0      sys-vport0    ENG     tenantB   used    z2/net0   compute-node1  192.168.200.3/24  2:8:20:d4:38:4c
    HR/sys-vport0       sys-vport0    HR      tenantA   used    z3/net0   compute-node2  192.168.100.3/24  2:8:20:66:52:9

     

     

    You can see that all four anet VNICs are in use, and each has been assigned the IP address of its VPort.
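
    You can also list the elastic virtual switches themselves, and the IP networks associated with them, from any of the nodes:

    root@compute-node2:~# evsadm show-evs
    root@compute-node2:~# evsadm show-ipnet

    The output should include the HR and ENG switches created earlier, together with the hr_ipnet (192.168.100.0/24) and eng_ipnet (192.168.200.0/24) IP networks.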

     

    3. Now, test the network connectivity between the zones.

     

    First, from zone z1, try to ping the IP address of z3 (192.168.100.3):

     

    root@z1:~# ping 192.168.100.3
    192.168.100.3 is alive

     

    Then, from zone z3, try to ping the IP address of z1 (192.168.100.2):

     

    root@z3:~# ping 192.168.100.2
    192.168.100.2 is alive

     

    4. Test the isolation between the tenants.

     

    Zone z1 is associated with tenantA and zone z4 is associated with tenantB.

     

    From z1, try to ping the IP address of z4 (192.168.200.2).

     

    root@z1:~# ping 192.168.200.2
    ^C

     

    We can see that the ping hangs (we interrupt it with Ctrl-C): zone z4 cannot be reached from zone z1, because each tenant's network is isolated from the other's.
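
    You can confirm the isolation in the other direction as well. On Oracle Solaris, ping accepts a timeout in seconds as its final operand, so the following command gives up after five seconds instead of waiting for an interrupt, and should report that there is no answer from 192.168.100.2:

    root@z4:~# ping 192.168.100.2 5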

     

     

    Summary

     

    If you are managing virtual or cloud environments on Oracle Solaris 11, the EVS feature of Oracle Solaris provides a number of benefits, including centralized network management and observability, as well as tight integration with operating system virtualization technologies such as Oracle Solaris Zones and Oracle Solaris Kernel Zones.

     

    The evsadm tool, which we used in this article to set the controller property and to inspect VPorts, is covered in more depth in Part 2 of this series. It lets you create and manage elastic virtual switches, and it can also be used to define network SLA profiles that provide network customization across the Oracle Solaris systems installed in your data center.

     


    Acknowledgment

     

    The authors would like to thank Nicolas Droux for his contributions to this article.

     

    About the Authors

     

    Orgad Kimchi is a principal software engineer on the ISV Engineering team at Oracle (formerly Sun Microsystems). For seven years he has specialized in virtualization, big data, and cloud computing technologies.

     

    Girish Moodalbail is a principal software engineer on the Oracle Solaris Core team at Oracle (formerly Sun Microsystems). For seven years he has worked on all layers of the Oracle Solaris networking stack: network applications, network configuration, TCP/IP, and network virtualization at the MAC layer. He is the technical lead for EVS and developed the OpenStack Neutron plugin for EVS.

     

     

    Revision 1.0, 02/12/2015

     
