How to Easily Deploy and Manage Oracle OpenStack with Oracle Linux Hands-On Lab Instructions

Version 2

    Introduction

     

    Oracle OpenStack for Oracle Linux is a fully integrated enterprise OpenStack cloud solution with complete, end-to-end support from Oracle. This five-node Oracle VirtualBox appliance is capable of demonstrating a multi-node Oracle OpenStack for Oracle Linux Release 2.0.2 setup using only 16 GB of RAM. The appliance requires a 64-bit host. The appliance consists of the following virtual machines (VMs):

    • ctrl1: Management (master) controller node and registry server

    • ctrl2: Controller node   

    • net1: Network node   
    • compute1: Compute node   
    • compute2: Compute node
         

     

    Importing the VirtualBox Appliance

     

    Before you import the appliance, configure VirtualBox networking to support static IP addressing, which is required by the appliance virtual machines.

    Select File > Preferences > Network, then select the Host-only Networks tab. Configure the vboxnet0 and vboxnet1 interfaces as follows:

    • vboxnet0: 172.16.1.1, Netmask 255.255.255.0   
    • vboxnet1: 172.18.1.1, Netmask 255.255.255.0   

    Make sure you leave IPv6 configuration empty, and do not enable DHCP.
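    If you prefer the command line, the same host-only networks can be created with the VBoxManage utility. This is a sketch only, assuming VBoxManage is on your PATH and that VirtualBox names newly created interfaces vboxnet0 and vboxnet1 in creation order:

    ```shell
    # Create two host-only interfaces and assign the static addresses used by the lab.
    VBoxManage hostonlyif create
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.16.1.1 --netmask 255.255.255.0
    VBoxManage hostonlyif create
    VBoxManage hostonlyif ipconfig vboxnet1 --ip 172.18.1.1 --netmask 255.255.255.0
    # If VirtualBox attached a DHCP server to either interface, remove it,
    # since the appliance VMs use static addressing:
    VBoxManage dhcpserver remove --ifname vboxnet0
    VBoxManage dhcpserver remove --ifname vboxnet1
    ```

    You can confirm the result in the GUI under File > Preferences > Network, or with `VBoxManage list hostonlyifs`.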

    If you see an informational message in VirtualBox about display video memory being set too low, you can either increase the display video memory, or safely ignore the message.

    Import the appliance by selecting File > Import Appliance, then navigating to the file downloaded from OTN. Accept the license agreement for each virtual machine, and wait for the import to complete.

     

    Setting up the VMs

     

    1. Using VirtualBox, start each VM.
    2. Using the VirtualBox console, for each VM (ctrl1, ctrl2, net1, compute1, and compute2):
      1. Log in as the root user and set a password. There is no default password for the root user, so simply enter "root" at the login prompt and then set a password with the passwd command, for example:
        Oracle Linux Server 7.2
        Kernel ...
        ctrl1 login: root
        Last login: ...
        [root@ctrl1 ~]# passwd
      2. Set a password for the user labuser:
        # passwd labuser
      3. If a web proxy is required to access http://yum.oracle.com, configure the proxy variables in the /etc/environment file using a text editor. Make sure you exclude the 172.16.0.0/14 range and the "local" top-level domain using the no_proxy variable. For example:

        http_proxy=http://proxy.example.com:8080
        https_proxy=http://proxy.example.com:8080
        no_proxy=127.0.0.1,localhost,172.16.0.0/14,local
      4. Log out of all VirtualBox VM consoles.

    3. Cutting and pasting is much simpler in a terminal window than in the VirtualBox console. Start a terminal window on the host operating system, or an SSH client such as PuTTY if the host operating system is Microsoft Windows. Log in to ctrl1 as labuser:
      $ ssh labuser@172.16.1.10

      Start four more terminal windows, or additional tabs in the existing window. In each window (or tab), log into each of the VMs:

      For ctrl2:

      $ ssh labuser@172.16.1.11

      For net1:

      $ ssh labuser@172.16.1.4

      For compute1:

      $ ssh labuser@172.16.1.16

      For compute2:

      $ ssh labuser@172.16.1.17
    4. (Optional) If you would like to be able to resolve DNS names external to the virtual environment, edit the file /etc/resolv.upstream.conf on the ctrl1 host to point to your local nameserver. DNS names for the other VMs will resolve from within the VMs in any case, and the deployment should still complete if you skip this step.
    5. (Optional) On ctrl1, examine the file /etc/hosts. If you would like to be able to log in to VMs by name from the host OS, copy the entries for ctrl1, ctrl2, net1, compute1, and compute2 into the static hosts file on the host OS. If you omit this step, you will need to reference VMs by IP address from the host OS, as above, but the deployment should still complete.
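    For reference, the static entries copied in step 5 would look something like the following. This is a sketch: the addresses are taken from the ssh steps above and the .local names match those used with kollacli later in the lab, but you should copy the exact entries from ctrl1's /etc/hosts rather than retyping them.

    ```
    172.16.1.10  ctrl1.local     ctrl1
    172.16.1.11  ctrl2.local     ctrl2
    172.16.1.4   net1.local      net1
    172.16.1.16  compute1.local  compute1
    172.16.1.17  compute2.local  compute2
    ```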

    Installing Oracle OpenStack for Oracle Linux Packages

    1. On the ctrl1 node, install the openstack-kollacli package:
      $ sudo yum -y install openstack-kollacli
    2. On all other nodes, install the openstack-kolla-preinstall package:
      $ sudo yum -y install openstack-kolla-preinstall
    3. On ctrl1, add labuser to the kolla group. For the purpose of this lab, we will also add labuser to the docker group:
      $ sudo usermod -a -G kolla,docker labuser
    4. On each node, enable unprivileged users to allocate pseudo-terminals:
      $ sudo chmod 666 /dev/pts/ptmx
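      To confirm the permission change took effect, you can inspect the device node with the standard stat utility (this check is our suggestion, not part of the original lab):

      ```shell
      # The mode should now report 666 (read/write for all users).
      stat -c '%a %n' /dev/pts/ptmx
      ```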
    5. The default KVM hypervisor has some known issues running under VirtualBox, so set the hypervisor to be qemu for Nova compute nodes. On ctrl1, use a text editor to insert the following text into the file /etc/kolla/config/nova/nova-compute.conf:
      [libvirt]
      virt_type=qemu
    6. Log out from ctrl1 and then immediately log back in so that labuser's supplemental groups are updated:
      $ logout
      $ ssh labuser@172.16.1.10
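      After logging back in, you can verify that the new group memberships are active for the session. The group names come from step 3; the id utility is standard:

      ```shell
      # Print the supplemental groups of the current session;
      # kolla and docker should now appear in the list for labuser.
      id -nG
      ```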

    Configuring the Deployment

    All of these commands should be run on the management control node, ctrl1.

    1. Add all the nodes in this deployment to the deployment tool, kollacli:
      kollacli host add ctrl1.local
      kollacli host add ctrl2.local
      kollacli host add net1.local
      kollacli host add compute1.local
      kollacli host add compute2.local
    2. Set up the hosts:
      kollacli host setup ctrl1.local
      root password for ctrl1.local:
      Starting setup of host (ctrl1.local)
      Host (ctrl1.local) setup succeeded
      kollacli host setup ctrl2.local
      root password for ctrl2.local:
      Starting setup of host (ctrl2.local)
      Host (ctrl2.local) setup succeeded
      kollacli host setup net1.local
      root password for net1.local:
      Starting setup of host (net1.local)
      Host (net1.local) setup succeeded
      kollacli host setup compute1.local
      root password for compute1.local:
      Starting setup of host (compute1.local)
      Host (compute1.local) setup succeeded
      kollacli host setup compute2.local
      root password for compute2.local:
      Starting setup of host (compute2.local)
      Host (compute2.local) setup succeeded
    3. Add the control nodes to the control, storage, and database groups.
      kollacli group addhost control ctrl1.local
      kollacli group addhost storage ctrl1.local
      kollacli group addhost database ctrl1.local
      kollacli group addhost control ctrl2.local
      kollacli group addhost storage ctrl2.local
      kollacli group addhost database ctrl2.local
    4. Add the network node to the network group.
      kollacli group addhost network net1.local
    5. Add the compute nodes to the compute group:
      kollacli group addhost compute compute1.local
      kollacli group addhost compute compute2.local
    6. (Optional) Confirm the deployment groups are correctly set up.
      kollacli host list
      +----------------+------------------------------------+
      | Host           | Groups                             |
      +----------------+------------------------------------+
      | compute1.local | ['compute']                        |
      | compute2.local | ['compute']                        |
      | ctrl1.local    | ['control', 'storage', 'database'] |
      | ctrl2.local    | ['control', 'storage', 'database'] |
      | net1.local     | ['network']                        |
      +----------------+------------------------------------+
    7. Set the kollacli deployment properties:
      kollacli property set openstack_release 2.0.2
      kollacli property set docker_registry ctrl1.local:8443
      kollacli property set kolla_internal_address 172.16.1.9
      kollacli property set network_interface eth1
      kollacli property set tunnel_interface eth2
      kollacli property set neutron_external_interface eth3
      kollacli property set enable_murano no
    8. (Optional) Confirm the deployment properties are correctly set.
      kollacli property list
      +----------------------------------------+----------------------------+
      | Property Name                          | Property Value             |
      +----------------------------------------+----------------------------+
      | ansible_ssh_user                       | kolla                      |
      | cinder_api_port                        | 8776                       |
      | cinder_backup_driver                   | nfs                        |
      | cinder_backup_share                    |                            |
      | cinder_backup_swift_user               | swift                      |
      | cinder_database_name                   | cinder                     |
      | cinder_database_user                   | cinder                     |
      | cinder_keystone_user                   | cinder                     |
      | cinder_volume_driver                   | lvm                        |
      | config_strategy                        | COPY_ALWAYS                |
      | database_cluster_name                  | openstack                  |
      | database_port                          | 3306                       |
      | database_user                          | root                       |
      | docker_api_version                     | 1.18                       |
      | docker_insecure_registry               | False                      |
      | docker_namespace                       | oracle                     |
      | docker_pull_policy                     | always                     |
      | docker_registry                        | ctrl1.local:8443           |
      | docker_restart_policy                  | always                     |
      | docker_restart_policy_retry            | 10                         |
      | enable_cinder                          | yes                        |
      | enable_glance                          | yes                        |
      | enable_haproxy                         | yes                        |
      | enable_heat                            | yes                        |
      | enable_horizon                         | yes                        |
      | enable_keystone                        | yes                        |
      | enable_mariadb                         | no                         |
      | enable_murano                          | no                         |
      | enable_mysqlcluster                    | yes                        |
      | enable_neutron                         | yes                        |
      | enable_nova                            | yes                        |
      | enable_rabbitmq                        | yes                        |
      | enable_swift                           | no                         |
      | glance_api_port                        | 9292                       |
      | glance_database_name                   | glance                     |
      | glance_database_user                   | glance                     |
      | glance_keystone_user                   | glance                     |
      | glance_registry_port                   | 9191                       |
      | heat_api_cfn_port                      | 8000                       |
      | heat_api_port                          | 8004                       |
      | heat_database_name                     | heat                       |
      | heat_database_user                     | heat                       |
      | heat_keystone_user                     | heat                       |
      | horizon_database_name                  | horizon                    |
      | horizon_database_user                  | horizon                    |
      | keystone_admin_port                    | 35357                      |
      | keystone_database_name                 | keystone                   |
      | keystone_database_user                 | keystone                   |
      | keystone_public_port                   | 5000                       |
      | kolla_base_distro                      | ol                         |
      | kolla_install_type                     | openstack                  |
      | kolla_internal_address                 | 172.16.1.9                 |
      | mariadb_ist_port                       | 4568                       |
      | mariadb_port                           | 3306                       |
      | mariadb_sst_port                       | 4444                       |
      | mariadb_wsrep_port                     | 4567                       |
      | memcached_port                         | 11211                      |
      | murano_api_port                        | 8082                       |
      | murano_database_name                   | murano                     |
      | murano_database_user                   | murano                     |
      | murano_keystone_user                   | murano                     |
      | mysqlcluster_data_memory               | 1G                         |
      | mysqlcluster_index_memory              | 768M                       |
      | mysqlcluster_number_of_attributes      | 20000                      |
      | mysqlcluster_number_of_ordered_indexes | 2000                       |
      | mysqlcluster_number_of_tables          | 1024                       |
      | mysqlcluster_number_of_triggers        | 3000                       |
      | mysqlcluster_server_port               | 40200                      |
      | network_interface                      | eth1                       |
      | neutron_bridge_name                    | br-ex                      |
      | neutron_database_name                  | neutron                    |
      | neutron_database_user                  | neutron                    |
      | neutron_external_interface             | eth3                       |
      | neutron_keystone_user                  | neutron                    |
      | neutron_plugin_agent                   | openvswitch                |
      | neutron_server_port                    | 9696                       |
      | neutron_tenant_type                    | vxlan                      |
      | neutron_vlan_bridge                    | br-vlan                    |
      | neutron_vlan_interface                 | -                          |
      | neutron_vlan_physnet                   | physnet1                   |
      | neutron_vlan_range                     | 1:1000                     |
      | node_config_directory                  | /etc/kolla                 |
      | node_templates_directory               | /usr/share/kolla/templates |
      | nova_api_ec2_port                      | 8773                       |
      | nova_api_port                          | 8774                       |
      | nova_database_name                     | nova                       |
      | nova_database_user                     | nova                       |
      | nova_keystone_user                     | nova                       |
      | nova_metadata_port                     | 8775                       |
      | openstack_logging_debug                | False                      |
      | openstack_logging_verbose              | True                       |
      | openstack_region_name                  | RegionOne                  |
      | openstack_release                      | 2.0.2                      |
      | project_name                           | neutron                    |
      | rabbitmq_cluster_name                  | openstack                  |
      | rabbitmq_cluster_port                  | 25672                      |
      | rabbitmq_epmd_port                     | 4369                       |
      | rabbitmq_management_port               | 15672                      |
      | rabbitmq_port                          | 5672                       |
      | rabbitmq_user                          | openstack                  |
      | swift_account_server_port              | 6001                       |
      | swift_admin_tenant_name                | admin                      |
      | swift_container_server_port            | 6002                       |
      | swift_devices_mount_point              | /srv/node                  |
      | swift_keystone_user                    | swift                      |
      | swift_object_server_port               | 6000                       |
      | swift_proxy_server_port                | 8080                       |
      | tunnel_interface                       | eth2                       |
      +----------------------------------------+----------------------------+

      To narrow down the list, you could also use:

      kollacli property list |grep -e openstack_release -e docker_registry -e kolla_internal_address \
      -e network_interface -e tunnel_interface -e neutron_external_interface -e enable_murano
      | docker_registry                        | ctrl1.local:8443           |
      | enable_murano                          | no                         |
      | kolla_internal_address                 | 172.16.1.9                 |
      | network_interface                      | eth1                       |
      | neutron_external_interface             | eth3                       |
      | openstack_release                      | 2.0.2                      |
      | tunnel_interface                       | eth2                       |
    9. Kollacli is now configured. All that remains is to run the deployment:
      kollacli deploy

    Initial OpenStack Configuration

    All of these commands should be run on the management control node, ctrl1.

    1. Download the admin-openrc.sh credential file as described in the video, or write one by hand using a text editor:
      #!/bin/sh
      OS_USERNAME=admin
      OS_PASSWORD=password
      OS_TENANT_NAME=admin
      OS_PROJECT_NAME=admin
      OS_AUTH_URL=http://172.16.1.9:5000/v2.0/
      export OS_USERNAME OS_PASSWORD OS_TENANT_NAME OS_PROJECT_NAME OS_AUTH_URL
    2. Source the credential file:
      $ source $HOME/admin-openrc.sh
    3. Install the OpenStack client utilities.
      $ sudo yum -y install openstack-kolla-utils
    4. Make the m1.tiny flavor even tinier. Remove the existing m1.tiny flavor and create a smaller one using:
      $ docker-ostk nova flavor-delete m1.tiny
      +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
      | ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
      +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
      | 1  | m1.tiny | 512       | 1    | 0         |      | 1     | 1.0         | True      |
      +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
      $ docker-ostk nova flavor-create --is-public True m1.tiny 1 384 1 1
      +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
      | ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
      +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
      | 1  | m1.tiny | 384       | 1    | 0         |      | 1     | 1.0         | True      |
      +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
    5. Load Cirros test image into Glance.
      $ load_cirros
      +--------------------------------------+------------+-------------+------------------+------+--------+
      | ID                                   | Name       | Disk Format | Container Format | Size | Status |
      +--------------------------------------+------------+-------------+------------------+------+--------+
      | 2ca3db6c-eb4e-4e1f-b77b-4628108e57c4 | cirros_aki | aki         | aki              |      | active |
      | bb6041b2-6c4e-4df8-8a45-68c9b702d77d | cirros_ami | ami         | ami              |      | active |
      | 3c83df9c-77d5-46e9-add2-871d06a8ed4f | cirros_ari | ari         | ari              |      | active |
      +--------------------------------------+------------+-------------+------------------+------+--------+
    6. Set up the public network.
      $ docker-ostk neutron net-create \
      --router:external \
      --provider:network_type flat \
      --provider:physical_network physnet1 \
      public-net
      Created a new network:
      +---------------------------+--------------------------------------+
      | Field                     | Value                                |
      +---------------------------+--------------------------------------+
      | admin_state_up            | True                                 |
      | id                        | c3f9a703-f9b9-4b57-bca3-c89b1f35210f |
      | mtu                       | 0                                    |
      | name                      | public-net                           |
      | provider:network_type     | flat                                 |
      | provider:physical_network | physnet1                             |
      | provider:segmentation_id  |                                      |
      | router:external           | True                                 |
      | shared                    | False                                |
      | status                    | ACTIVE                               |
      | subnets                   |                                      |
      | tenant_id                 | f073796ec4544e5aa77b0f67ee2a1c7e     |
      +---------------------------+--------------------------------------+
      $ docker-ostk neutron subnet-create \
      --name public-subnet \
      --gateway 172.18.1.1 \
      --allocation-pool start=172.18.1.10,end=172.18.1.199 \
      --disable-dhcp \
      --ip-version 4 \
      public-net 172.18.1.0/24
      Created a new subnet:
      +-------------------+-------------------------------------------------+
      | Field             | Value                                           |
      +-------------------+-------------------------------------------------+
      | allocation_pools  | {"start": "172.18.1.10", "end": "172.18.1.199"} |
      | cidr              | 172.18.1.0/24                                   |
      | dns_nameservers   |                                                 |
      | enable_dhcp       | False                                           |
      | gateway_ip        | 172.18.1.1                                      |
      | host_routes       |                                                 |
      | id                | 56722ee0-0d04-4dd2-9425-221bf9497697            |
      | ip_version        | 4                                               |
      | ipv6_address_mode |                                                 |
      | ipv6_ra_mode      |                                                 |
      | name              | public-subnet                                   |
      | network_id        | c3f9a703-f9b9-4b57-bca3-c89b1f35210f            |
      | subnetpool_id     |                                                 |
      | tenant_id         | f073796ec4544e5aa77b0f67ee2a1c7e                |
      +-------------------+-------------------------------------------------+
    7. Finally, exercise the newly deployed OpenStack environment using a sample Heat template provided in the appliance:
      $ docker-ostk heat stack-create -f /data/vts.yaml -P $(./vts.params) vts_test1
      +--------------------------------------+------------+--------------------+----------------------+
      | id                                   | stack_name | stack_status       | creation_time        |
      +--------------------------------------+------------+--------------------+----------------------+
      | 3b6aea97-efa0-4890-9df9-103d24e95877 | vts_test1  | CREATE_IN_PROGRESS | 2016-03-03T16:11:20Z |
      +--------------------------------------+------------+--------------------+----------------------+
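      Stack creation is asynchronous, so the command returns while the stack is still in CREATE_IN_PROGRESS. A way to follow its progress, assuming the same docker-ostk wrapper used above:

      ```shell
      # Re-run until stack_status shows CREATE_COMPLETE (or CREATE_FAILED).
      docker-ostk heat stack-list
      # For troubleshooting, the event log shows each resource as it is created:
      docker-ostk heat event-list vts_test1
      ```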
    8. You may also want to explore Horizon, the OpenStack dashboard, which is available at:

      http://172.16.1.9/