
Introduction
We will discuss the use of infrastructure as code software to deploy Oracle Linux within an Oracle OpenStack 4.0 private cloud. Infrastructure as code is a process whereby datacenter computing can be provisioned using machine-readable definition files, replacing traditional tools and techniques such as manual systems administration and interactive UI and command-line work. Terraform is open-source software that allows users to define complex datacenter infrastructure in a high-level configuration language and deploy it to a supported private or public cloud using APIs.
Oracle Linux provisioning with Oracle OpenStack 4.0 and Terraform
Using Oracle Linux 7, installing Terraform is easy: simply enable the ol7_developer yum channel, then run yum install terraform. Once Terraform is installed, users can create configuration files to suit their desired end configuration using the examples and documentation here. Once the files are completed and checked via the Terraform utilities, the software talks to Oracle OpenStack 4.0 via its APIs and can test or dry-run the desired configuration before building it from the configuration files. Terraform keeps track of what it deploys, so users can re-create entities that have, for example, been removed by hand. Terraform can also remove the end configuration. For this example, I have an Oracle OpenStack 4.0 private cloud set up with a project, users and Oracle Linux images. I have a separate server running Oracle Linux 7 where I will install Terraform, create the configuration files and drive the building of entities within the Oracle OpenStack 4.0 cloud. The setup of the Oracle OpenStack 4.0 private cloud is outside the scope of this paper. Oracle OpenStack 4.0 documentation is available here.
Example Oracle Linux provisioning flow with Oracle OpenStack 4.0 and Terraform
Terraform Client Installation
For Oracle Linux 7 we simply perform the following to install Terraform:
As the root user, or using sudo, edit /etc/yum.repos.d/public-yum-ol7.repo and, if not present, add the following entry to the file:
[ol7_developer]
name=Oracle Linux $releasever Development Packages ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL7/developer/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
If the entry already exists, ensure enabled=1 is set as in the example above.
Run: sudo yum install terraform
There are frequent updates to the Terraform packages; therefore, we recommend regular yum updates to pick up any new features. Terraform uses the concept of providers and maintains a list of them here. A provider is the method by which the terraform utilities interact with the end platform's API. The OpenStack provider, which contains the provider components, is here.
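As an illustration, a minimal sketch of declaring the OpenStack provider in a Terraform configuration follows. With no arguments, the provider reads its credentials from the OS_* environment variables; it can alternatively take explicit arguments such as user_name, password and auth_url.

```hcl
# Minimal provider declaration: with no arguments, the OpenStack provider
# reads its credentials (auth URL, project, user, password) from the OS_*
# environment variables set by an RC script.
provider "openstack" {
}
```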
Create an SSH key to access the Oracle Linux instances
We will create and use a private key in our Terraform configuration files, allowing us to access our Oracle Linux instances running within the Oracle OpenStack 4.0 cloud. For further information, refer to the Oracle Linux 7 documentation. Using a passphrase encrypts the key, so it cannot be used directly even if someone obtains the private key file.
Firstly, we need to create a directory to store the keys:
mkdir ~/.key
cd ~/.key
Next, we create a key using ssh-keygen:
ssh-keygen -t rsa -f openstack.key
This will generate a private and public key pair:
ls
openstack.key openstack.key.pub
Creating the Terraform configuration files
We recommend that you create a directory within your home directory for each instance, or group of instances, you wish to create.
mkdir ~/terraform
Terraform has the concept of a variables file where you can store common items. For this exercise I will not use a variables file but will instead use an RC script (OracleOpenStackRC.sh) which, when sourced, provides all the access details needed to drive the Oracle OpenStack cloud via the API.
Within this example file, we have the following:
#!/usr/bin/env bash
# To use an OpenStack cloud you need to authenticate against the Identity
# service named keystone, which returns a **Token** and **Service Catalog**.
# The catalog contains the endpoints for all services the user/tenant has
# access to - such as Compute, Image Service, Identity, Object Storage, Block
# Storage, and Networking (code-named nova, glance, keystone, swift,
# cinder, and neutron).
#
# *NOTE*: Using version 3 of the *Identity API* does not necessarily mean any other
# OpenStack API is version 3. For example, your cloud provider may implement
# Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
# only for the Identity API served through keystone.
export OS_AUTH_URL=http://10.3.12.30:5000/v3
# With the addition of Keystone we have standardized on the term **project**
# as the entity that owns the resources.
export OS_PROJECT_ID=57cb144888e74657911e20eee11b235b
export OS_PROJECT_NAME="My_Project"
export OS_USER_DOMAIN_NAME="Default"
if [ -z "$OS_USER_DOMAIN_NAME" ]; then unset OS_USER_DOMAIN_NAME; fi
# unset v2.0 items in case set
unset OS_TENANT_ID
unset OS_TENANT_NAME
# In addition to the owning entity (tenant), OpenStack stores the entity
# performing the action as the **user**.
export OS_USERNAME="myuser"
# With Keystone you pass the keystone password.
echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
# If your configuration has multiple regions, we set that information here.
# OS_REGION_NAME is optional and only valid in certain environments.
export OS_REGION_NAME="RegionOne"
# Don't leave a blank variable, unset it if it was empty
if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
export OS_INTERFACE=public
export OS_IDENTITY_API_VERSION=3
This file, when downloaded from the Oracle OpenStack UI, provides all the access details for the private cloud; when sourced, it also captures the password for the Oracle OpenStack user so that Terraform can drive the API. Further details on the RC scripts are available from the OpenStack documentation.
We now need to create some *.tf files, which contain the code that will build our infrastructure within the Oracle OpenStack private cloud. These files are specific to the entity or entities required. For example, you could have one to create the Virtual Cloud Network and another to create the Oracle Linux 7 instance. You can number them for ease of reference; for example, I have the following *.tf files:
01_key_pair.tf 02_create_sec_group.tf 03_create_network.tf create_instance.tf
If we look at 01_key_pair.tf:
#
# Resource - KeyPair
# Creates a new keypair in our openstack tenant.
# Will show up in OpenStack as "tf-keypair-1"
# Can be referenced elsewhere in terraform configuration as "keypair1"
#
resource "openstack_compute_keypair_v2" "keypair1" {
  name       = "tf-keypair-1"
  public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+4Rq9aCHyZs+y+soDd8jAcGwFlT+pLgvuYmlp4qRBvJfWXXuJQ9s6YPPJozkBqFTXQ1L0pWLnY7DMy7LAiJLvmpW/1gP2B4pPCAe4lrTFvK+sIL9Yulazv+S2GniG8lBvLgjezJppaouL9GiAhZA9nltLYrGh/vLJC6xiCaLv2+ydiHM3sWoVh5P6kXRZh5h3ZWAz232vbLvWiaa1aDFTonASwbARhiwSeIm/AQTyhd3+zpDtwIBr3yydUmC1cDj+z7Dpy8p0U3WeCilV0aL4k1YHzpoxLEGhC8rcKN8Sp7bsrRliZvRjR1e8lTT2lsOQYq1p/51RpQO8iRGdWLgp simon@ol7"
}
The 01_key_pair.tf file creates a key pair for the Oracle OpenStack private cloud tenant. This key pair is used to access any instances we create. Further details on this resource are available here. The required fields are name and public_key: name must be a unique name for the key pair, and public_key must be a pre-generated OpenSSH-formatted public key. We need to copy the public key we created earlier and insert it into the file.
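As an alternative to pasting the key inline, Terraform's built-in file() function can read the public key from disk. This is a sketch assuming the key pair was created in ~/.key as shown earlier:

```hcl
# Read the OpenSSH-formatted public key created earlier with ssh-keygen
# directly from disk, avoiding a long inline string in the configuration.
resource "openstack_compute_keypair_v2" "keypair1" {
  name       = "tf-keypair-1"
  public_key = "${file("~/.key/openstack.key.pub")}"
}
```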
The 02_create_sec_group.tf file creates an example security group, which allows access for SSH and SSL. Add or remove rules as required to fit your local security requirements.
#
# Create a security group
#
resource "openstack_compute_secgroup_v2" "tf_sec_1" {
  region      = ""
  name        = "tf_sec_1"
  description = "Security Group Via Terraform"

  rule {
    from_port   = 22
    to_port     = 22
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "tcp"
    self        = true
  }

  rule {
    from_port   = 443
    to_port     = 443
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }
}
The 03_create_network.tf file creates the Virtual Cloud Network. The key parts are as follows:
• Create a new Virtual Cloud Network with a CIDR and bring it to an UP state
• Create a new Internet Gateway using the external_network_id field, which can be discovered using the OpenStack UI or CLI tools
• Create a new router and attach it to the Virtual Cloud Network
• Create a subnet
• Create two floating IPs from the external pool (already configured; discoverable using the OpenStack UI or CLI)
#
# Create a Network
#
resource "openstack_networking_network_v2" "tf_network" {
  name           = "tf_network"
  admin_state_up = "true"
}

#
# Create a subnet in our new network
# Notice here we use a Terraform reference to the network created above.
#
resource "openstack_networking_subnet_v2" "tf_net_sub1" {
  name       = "tf_net_sub1"
  network_id = "${openstack_networking_network_v2.tf_network.id}"
  cidr       = "192.168.1.0/24"
  ip_version = 4
}

#
# Create a router for our network
#
resource "openstack_networking_router_v2" "tf_router1" {
  name                = "tf_router1"
  admin_state_up      = "true"
  external_network_id = "eaed9ac5-aace-464f-969c-881bccf75544"
}

#
# Attach the router to our network via an interface
#
resource "openstack_networking_router_interface_v2" "tf_rtr_if_1" {
  router_id = "${openstack_networking_router_v2.tf_router1.id}"
  subnet_id = "${openstack_networking_subnet_v2.tf_net_sub1.id}"
}

#
# Create some OpenStack floating IPs for our VMs
#
resource "openstack_compute_floatingip_v2" "fip_1" {
  pool = "external"
}

resource "openstack_compute_floatingip_v2" "fip_2" {
  pool = "external"
}
The create_instance.tf file creates an Oracle Linux 7.4 instance. The key parts are as follows:
• Create a 1GB block volume
• Create an OL7.4 instance using the ol74 image
• Create a medium flavor instance (already configured)
• Attach our key_pair to the instance to allow SSH access
• Assign our security group to the instance
• Attach our instance to the new Virtual Cloud Network
• Attach the 1GB volume to the instance
• Attach a floating IP to the instance
#
# Create a VM instance on ol74 with an attached volume
#
resource "openstack_blockstorage_volume_v2" "tf_vol" {
  name = "tf_vol"
  size = 1
}

resource "openstack_compute_instance_v2" "tf_ol74" {
  name            = "tf_ol74"
  image_name      = "ol74"
  flavor_name     = "medium"
  key_pair        = "tf-keypair-1"
  security_groups = ["tf_sec_1"]

  metadata {
    demo = "metadata"
  }

  network {
    name = "tf_network"
  }
}

resource "openstack_compute_volume_attach_v2" "attached" {
  instance_id = "${openstack_compute_instance_v2.tf_ol74.id}"
  volume_id   = "${openstack_blockstorage_volume_v2.tf_vol.id}"
}

resource "openstack_networking_floatingip_v2" "fip_1" {
  pool = "external"
}

resource "openstack_compute_floatingip_associate_v2" "fip_1" {
  floating_ip = "${openstack_networking_floatingip_v2.fip_1.address}"
  instance_id = "${openstack_compute_instance_v2.tf_ol74.id}"
}
There are multiple options for the creation and manipulation of components such as:
• Obtaining information regarding data sources such as DNS, images, flavors and networks
• Creating and attaching block storage
• Creating and configuring compute instances
• Creating and configuring databases
• Creating and configuring DNS
• Creating and configuring Identity services
• Creating and configuring images
• Creating and configuring Networking
• Creating and configuring load balancing services
• Creating and configuring firewalls
• Creating and configuring object storage
Examples of usage for these components are available within the Terraform OpenStack provider documentation.
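For instance, rather than hard-coding names, data sources can look up existing items such as images and flavors so that other resources can reference their IDs. This is a sketch; the image and flavor names are assumptions matching this paper's environment:

```hcl
# Look up the ol74 image and the pre-configured medium flavor so that
# resources can reference their IDs instead of hard-coded values.
data "openstack_images_image_v2" "ol74" {
  name        = "ol74"
  most_recent = true
}

data "openstack_compute_flavor_v2" "medium" {
  name = "medium"
}
```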
Build process
Once we have created our files, we need to run a series of steps to create our entities. For this example, I have two directories to separate my network build from my instance build, each containing numbered *.tf files.
Firstly, we need to source the RC file described earlier to set the environment variables used to access our Oracle OpenStack 4.0 cloud. The script is run as . ~/OracleOpenStackRC.sh, which will also ask for and store the password for the Oracle OpenStack user.
Next, terraform init must be run in each directory containing *.tf files. This step only needs to be run once per directory structure: it prepares the working directory for use with Terraform and creates the .terraform directory. The command is safe to run multiple times and can be used to bring the current working directory up to date with any configuration changes. If you are using a proxy, export it before running terraform init, for example: export HTTP_PROXY=http://my-proxy.com:80
Next, we run terraform plan, which creates an execution plan and reports which actions will be applied; this is a dry run of the end build. You can use the -out switch to save the plan to a file, which can then be used as input to the terraform apply command. If the terraform plan command hangs and you are using a proxy, unset the proxy: unset HTTP_PROXY
Finally, we run terraform apply which will execute the actions and will provision our infrastructure. As explained above you can use this command with a file created by terraform plan, which can be useful for automation purposes.
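Putting the steps above together, a typical build sequence looks like the following sketch; the directory and file names follow this paper's examples:

```shell
# Source the RC file to set the OS_* variables and capture the password
. ~/OracleOpenStackRC.sh

# Change to the directory containing the *.tf files
cd ~/terraform

terraform init                  # one-time: prepare the working directory
terraform plan -out=build.plan  # dry run; save the execution plan to a file
terraform apply build.plan      # build the infrastructure from the saved plan
```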
When the instance is created, you can access its Oracle Linux OS via the floating IP using ssh and the private key. For example:
ssh -i ~/.key/openstack.key -l cloud-user <floating_IP>
Rebuild and destroy process
If you either accidentally or purposefully change the Terraform-built configuration via the Oracle OpenStack UI/CLI, you can recover using Terraform. For example, if you delete a block volume or an instance created by Terraform, simply re-run the terraform plan command. The command will report what needs to be rebuilt or changed; you then run terraform apply and the configuration previously described in the configuration files is recreated. Any changes to the configuration files will apply to the newly created entities.
The terraform destroy command will simply destroy all entities created by the terraform apply command based upon the Terraform execution plan. The destroy command will display what is to be destroyed and prompt for confirmation to proceed.