How to use Terraform with Oracle Linux and Oracle Cloud Infrastructure (OCI)

Version 6

    This document was created with help and input from Christophe Pauliat from the EMEA Oracle Solutions Center.

     

    Introduction

    We will discuss the use of infrastructure as code software to deploy Oracle Linux within Oracle Cloud Infrastructure. Infrastructure as code is a process whereby data center resources are provisioned using machine-readable definition files, replacing traditional tools and techniques such as manual systems administration and interactive UI or command-line work. Terraform is open-source software that allows users to define complex data center infrastructure in a high-level configuration language and deploy it to a supported public cloud via its APIs.

     

    Oracle Linux provisioning with Oracle Cloud Infrastructure and Terraform

    For Oracle Linux 7, installing Terraform is easy: simply enable the ol7_developer yum channel, then run yum install terraform. On Oracle Linux there is no need to install the terraform-provider-oci RPM, as terraform will pull in the provider automatically when terraform init is run and the provider is referenced in a *.tf file. For other operating systems, download the Terraform binary and the Terraform provider for Oracle Cloud Infrastructure from here. Next, create a .terraformrc or terraform.rc file to tell Terraform where the terraform-provider-oci binary is. Once you have installed Terraform, you can create configuration files to suit your end configuration using the examples and documentation here.
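    For the non-Oracle-Linux route, the override file is a small fragment of Terraform CLI configuration. A minimal sketch for the Terraform 0.11-era format (the binary path is an assumption; point it at wherever you placed the provider):

```
# ~/.terraformrc -- tell Terraform where to find the OCI provider binary
providers {
  oci = "/usr/local/bin/terraform-provider-oci"   # assumed install path
}
```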

     

    Users need to generate an RSA key pair and upload the public key as an API key via the Oracle Cloud Infrastructure console, as explained here. Terraform then uses the private key to connect to the Oracle Cloud Infrastructure tenancy. Once the configuration files are complete and checked with the Terraform utilities, the software talks to Oracle Cloud Infrastructure via its APIs and can dry-run the desired configuration before building it from the configuration files. Terraform keeps track of what it deploys, so users can re-create entities that were, for example, removed by hand. Terraform can also remove the end configuration entirely.

     

    Example Oracle Linux provisioning flow with Oracle Cloud Infrastructure and Terraform

    Terraform Client Installation

    For Oracle Linux 7 we simply perform the following to install Terraform:

    Edit /etc/yum.repos.d/public-yum-ol7.repo and, if the following entry is not present, add it to the file:

    [ol7_developer]

        name=Oracle Linux $releasever Development Packages ($basearch)

        baseurl=http://yum.oracle.com/repo/OracleLinux/OL7/developer/$basearch/

        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

        gpgcheck=1

        enabled=1

        If the entry already exists, ensure enabled=1 is set as in the example above.

    Run: sudo yum install terraform

    As the Terraform packages are frequently updated, we recommend running yum update regularly to pick up new features.

     

    Create an API key for your user

    Before we can use Terraform, the Oracle Cloud Infrastructure CLI, or the Oracle Cloud Infrastructure REST APIs in general, we need to create an API key pair with openssl. The public key is then imported using the Oracle Cloud Infrastructure console as explained here. We then use the private key in our Terraform configuration files, along with the fingerprint shown in the Oracle Cloud Infrastructure console for our API key.

     

    Firstly, we need to create a directory to store the keys:

    mkdir ~/.oci

     

    Next, we create a key using openssl:

    openssl genrsa -out ~/.oci/oci_api_key.pem 2048

     

    Next, we should ensure only we can read the keys:

    chmod go-rwx ~/.oci/oci_api_key.pem

     

    Finally, we generate the public key:

    openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem

     

    We can run the following command to view the fingerprint. The fingerprint of a key is a unique sequence of letters and numbers used to identify the key, much as no two people share the same fingerprint.

    openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c

     

    We need to cat the public key file and copy the key output, as we need this to upload the API key to Oracle Cloud Infrastructure.

    cat ~/.oci/oci_api_key_public.pem
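    The four key steps above can also be run as one short script. This is just a consolidation of the commands already shown, using a scratch directory so it can be tried safely; substitute ~/.oci in practice:

```shell
# consolidate the API-key steps; KEYDIR stands in for ~/.oci
KEYDIR=$(mktemp -d)

# 2048-bit RSA private key, readable only by its owner
openssl genrsa -out "$KEYDIR/oci_api_key.pem" 2048
chmod go-rwx "$KEYDIR/oci_api_key.pem"

# matching public key, to be uploaded to the console as an API key
openssl rsa -pubout -in "$KEYDIR/oci_api_key.pem" -out "$KEYDIR/oci_api_key_public.pem"

# fingerprint; must match the value shown in the console after upload
openssl rsa -pubout -outform DER -in "$KEYDIR/oci_api_key.pem" | openssl md5 -c
```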

     

    Upload the Public Key

    We now need to upload the public key created in the previous step to Oracle Cloud Infrastructure. Log into Oracle Cloud Infrastructure and follow the steps detailed here. Once you upload the key, you will see its fingerprint displayed; you can check it against the output of the command referenced above. It is possible to create and upload a maximum of three keys.

     

    Obtain the Tenancy and User OCIDs

    We need to capture these IDs to use in our configuration files. You can find the Tenancy OCID at the bottom of the console UI. To find the User OCID, click your username in the top right-hand corner and select User Settings; under the Create / Reset Password box is the truncated User OCID, which you can choose either to show in full or to copy. If your user is part of a compartment, we should obtain the compartment OCID too. For further information on compartments, refer to the documentation.

     

    Create a Public Key for created Instances

    We should create an SSH key pair to use when creating either bare metal or virtual machine instances. Reference the Oracle Linux 7 documentation.

     

    Run the following command:

    ssh-keygen
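    To generate the key pair non-interactively, with the file name used later in this guide's terraform.tfvars (the path is this example's convention, and -N "" creates the key without a passphrase, which you may not want in production):

```shell
# create ~/.ssh if needed, then generate the instance SSH key pair
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 2048 -f "$HOME/.ssh/id_rsa_ol7" -N ""
```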

     

    Creating the Terraform configuration files

    We recommend that you create a directory within your home directory for each instance, or group of instances, you wish to create.

    mkdir -p OCI_myinstance_ol7/userdata

     

    Within the main directory (OCI_myinstance_ol7) create a file to be used for the variables; in my example this will be terraform.tfvars.

    Within this file, we have the following:

         # -- Tenant Information

         tenancy_ocid = "ocid1.tenancy.oc1..aaaaaaaaw7e6nkszrry6d5h7l6yxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

         user_ocid = "ocid1.user.oc1..aaaaaaaayblfepjieokyssotansaki2u4xxxxxxxxxxxxxxxxxxxxxxxxxxx"

         fingerprint = "19:1d:7b:3a:17:04:17:e0:89:xx:xx:xx:xx:xx:xx:xx"

         private_key_path = "/home/simon/.oci/oci_api_key.pem"

         compartment_ocid = "ocid1.compartment.oc1..aaaaaaaakqmkvukdc2k7rmrhudxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

         region = "eu-frankfurt-1"

     

         # ---- availability domain (1, 2 or 3)

         AD = "1"

     

         # ---- Authorized public IPs ingress (0.0.0.0/0 means all Internet)

         #authorized_ips="90.119.77.177/32" # a specific public IP on Internet

         #authorized_ips="129.156.0.0/16" # a specific Class B network on Internet

         authorized_ips="0.0.0.0/0" # all Internet

     

         # -- variables for BM/VM creation

         BootStrapFile_ol7 = "userdata/bootstrap_ol7"

         ssh_public_key_file_ol7 = "/home/simon/.ssh/id_rsa_ol7.pub"

     

    Populate the Tenant Information using the data captured in earlier steps. For details on Regions and Availability Domains, refer to the documentation.

     

    The authorized_ips variable is set to authorized_ips="0.0.0.0/0" to allow ingress to the instance from the whole public internet. Note the two commented-out examples: for a single internet address use authorized_ips="90.119.77.177/32", and for a specific internet subnet use authorized_ips="129.156.0.0/16". For further options, refer to the Oracle Cloud Infrastructure Security Lists documentation.

     

    With respect to the variables for either a Bare Metal or Virtual Machine instance, you can create a bootstrap file, which is useful for running commands that only need to be run once when the instance is created. As noted above, the bootstrap file should be contained within the userdata directory; the public key file enables us to log into the new instance.
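    As a concrete illustration, a bootstrap file is ordinary shell script content passed to the instance as cloud-init user data. A hypothetical userdata/bootstrap_ol7 might look like this (the package name is only an example):

```shell
#!/bin/bash
# userdata/bootstrap_ol7 -- run once, at first boot, by cloud-init
# install an example package; "|| true" keeps a repo hiccup from aborting boot
yum -y install nfs-utils || true
# leave a marker file so we can later confirm the bootstrap ran
touch /var/tmp/bootstrap_done
```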

     

    We now need to create some *.tf files containing the code that will build our infrastructure within Oracle Cloud Infrastructure. These files are specific to the entity or entities required; for example, you could have one to create the Virtual Cloud Network and another to create the Oracle Linux 7 instance. You can number them for ease of reference; for example, I have the following three *.tf files:

    01_auth.tf 02_vcn.tf 03_instance_ol7.tf

     

    If we look at 01_auth.tf:

         # ---- use variables defined in terraform.tfvars file

         variable "tenancy_ocid" {}

         variable "user_ocid" {}

         variable "fingerprint" {}

         variable "private_key_path" {}

         variable "compartment_ocid" {}

         variable "region" {}

         variable "AD" {}

         variable "BootStrapFile_ol7" {}

         variable "ssh_public_key_file_ol7" {}

         variable "authorized_ips" {}

     

     

         # ---- provider

         provider "oci" {

         region = "${var.region}"

         tenancy_ocid = "${var.tenancy_ocid}"

         user_ocid = "${var.user_ocid}"

         fingerprint = "${var.fingerprint}"

         private_key_path = "${var.private_key_path}"

         }

     

    The first section references the variables defined in the terraform.tfvars file. These variables enable us to connect to the correct part of Oracle Cloud Infrastructure and to create any entities in the correct Tenancy and Compartment, as the correct user.

    The 02_vcn.tf file creates the Virtual Cloud Network. The key parts are as follows:

    • Create a new Virtual Cloud Network
    • Create a new Internet Gateway
    • Create a new Route Table
    • Create a new Security List
    • Create a Public Subnet

     

    # -------- get the list of available ADs

    data "oci_identity_availability_domains" "ADs" {

      compartment_id = "${var.tenancy_ocid}"

    }

     

    # ------ Create a new VCN

    variable "VCN-CIDR" { default = "10.0.0.0/16" }

     

    resource "oci_core_virtual_network" "tf-demo01-vcn" {

      cidr_block = "${var.VCN-CIDR}"

      compartment_id = "${var.compartment_ocid}"

      display_name = "tf-demo01-vcn"

      dns_label = "tfdemovcn"

    }

     

    # ------ Create a new Internet Gateway

    resource "oci_core_internet_gateway" "tf-demo01-ig" {

      compartment_id = "${var.compartment_ocid}"

      display_name = "tf-demo01-internet-gateway"

      vcn_id = "${oci_core_virtual_network.tf-demo01-vcn.id}"

    }

     

    # ------ Create a new Route Table

    resource "oci_core_route_table" "tf-demo01-rt" {

      compartment_id = "${var.compartment_ocid}"

      vcn_id = "${oci_core_virtual_network.tf-demo01-vcn.id}"

      display_name = "tf-demo01-route-table"

      route_rules {

        cidr_block = "0.0.0.0/0"

        network_entity_id = "${oci_core_internet_gateway.tf-demo01-ig.id}"

      }

    }

     

    # ------ Create a new security list to be used in the new subnet

    resource "oci_core_security_list" "tf-demo01-subnet1-sl" {

      compartment_id = "${var.compartment_ocid}"

      display_name = "tf-demo01-subnet1-security-list"

      vcn_id = "${oci_core_virtual_network.tf-demo01-vcn.id}"

      egress_security_rules = [{

        protocol = "all"

        destination = "0.0.0.0/0"

      }]

     

      ingress_security_rules = [{

        protocol = "6" # tcp

        source = "${var.VCN-CIDR}"

        },

        {

        protocol = "6" # tcp

        source = "${var.authorized_ips}"

        tcp_options {

          "min" = 22

          "max" = 22

        }

      }]

    }

     

    # ------ Create a public subnet 1 in AD1 in the new VCN

    resource "oci_core_subnet" "tf-demo01-public-subnet1" {

      availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"

      cidr_block = "10.0.1.0/24"

      display_name = "tf-demo01-public-subnet1"

      dns_label = "subnet1"

      compartment_id = "${var.compartment_ocid}"

      vcn_id = "${oci_core_virtual_network.tf-demo01-vcn.id}"

      route_table_id = "${oci_core_route_table.tf-demo01-rt.id}"

      security_list_ids = ["${oci_core_security_list.tf-demo01-subnet1-sl.id}"]

      dhcp_options_id = "${oci_core_virtual_network.tf-demo01-vcn.default_dhcp_options_id}"

    }

    The 03_instance_ol7.tf file creates the Oracle Linux 7.4 instance. The key parts are as follows:

    • Create an OL7.4 instance from the compartment using the bootstrap file and public key
    • Display the public IP address for the instance

    # --------- Get the OCID of the most recent Oracle Linux 7.4 disk image

    data "oci_core_images" "OLImageOCID-ol7" {

      compartment_id = "${var.compartment_ocid}"

      operating_system = "Oracle Linux"

      operating_system_version = "7.4"

    }

     

    # ------ Create a compute instance from the most recent Oracle Linux 7.4 image

    resource "oci_core_instance" "tf-demo01-ol7" {

      availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"

      compartment_id = "${var.compartment_ocid}"

      display_name = "tf-demo01-ol7"

      hostname_label = "tf-demo01-ol7"

      image = "${lookup(data.oci_core_images.OLImageOCID-ol7.images[0], "id")}"

      shape = "VM.Standard1.1"

      subnet_id = "${oci_core_subnet.tf-demo01-public-subnet1.id}"

      metadata {

        ssh_authorized_keys = "${file(var.ssh_public_key_file_ol7)}"

        user_data = "${base64encode(file(var.BootStrapFile_ol7))}"

      }

     

      timeouts {

        create = "30m"

      }

    }

     

    # ------ Display the public IP of instance

    output "instance_public_ip" {

      value = ["${oci_core_instance.tf-demo01-ol7.public_ip}"]

    }

     

    It is possible to create and attach a new block volume to an instance using examples as follows:

    # ------ Create a 500GB block volume

    resource "oci_core_volume" "tf-demo01-ol7-vol1" {

      availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"

      compartment_id = "${var.compartment_ocid}"

      display_name = "tf-demo01-ol7-vol1"

      size_in_gbs = "500"

    }

     

    # ------ Attach the new block volume to the ol7 compute instance after it is created

    resource "oci_core_volume_attachment" "tf-demo01-ol7-vol1-attach" {

      attachment_type = "iscsi"

      compartment_id = "${var.compartment_ocid}"

      instance_id = "${oci_core_instance.tf-demo01-ol7.id}"

      volume_id = "${oci_core_volume.tf-demo01-ol7-vol1.id}"

    }

     

     

    The block volume is attached via iSCSI. To connect it to the instance we need to enhance the bootstrap script to do the following:

    • Download the iscsiattach.sh script via wget
    • Run fdisk and partition the new volume
    • Create a file system and mount point, and update /etc/fstab to mount the volume at boot

    Examples and the iSCSI attach script are available within the examples section of the documentation.
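    A rough, untested sketch of those additions to the bootstrap file follows. The download URL is a placeholder (use the one from the examples section of the documentation), and the device name /dev/sdb will vary by instance:

```
# appended to userdata/bootstrap_ol7 -- attach and mount the iSCSI volume
wget -q https://example.com/iscsiattach.sh -O /root/iscsiattach.sh   # placeholder URL
chmod +x /root/iscsiattach.sh
/root/iscsiattach.sh                       # discover and attach iSCSI volumes

# partition the new device, then make a file system and mount it at boot
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdb  # single primary partition
mkfs.ext4 /dev/sdb1
mkdir -p /u01
echo "/dev/sdb1 /u01 ext4 defaults,_netdev,nofail 0 2" >> /etc/fstab
mount /u01
```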

    It is also possible to create a second instance as part of the same Terraform plan; for example, you could have two separate instance files building separate instances as part of one plan.
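    For instance, a hypothetical fourth file, 04_instance2_ol7.tf, could build a second Oracle Linux 7 instance in the same subnet by reusing the data sources and variables already defined (the resource and display names here are illustrative):

```
# ------ Create a second compute instance in the same subnet
resource "oci_core_instance" "tf-demo01-ol7-b" {
  availability_domain = "${lookup(data.oci_identity_availability_domains.ADs.availability_domains[var.AD - 1],"name")}"
  compartment_id = "${var.compartment_ocid}"
  display_name = "tf-demo01-ol7-b"
  hostname_label = "tf-demo01-ol7-b"
  image = "${lookup(data.oci_core_images.OLImageOCID-ol7.images[0], "id")}"
  shape = "VM.Standard1.1"
  subnet_id = "${oci_core_subnet.tf-demo01-public-subnet1.id}"
  metadata {
    ssh_authorized_keys = "${file(var.ssh_public_key_file_ol7)}"
    user_data = "${base64encode(file(var.BootStrapFile_ol7))}"
  }
}
```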

    There are examples of various configuration files within the documentation, which can be used as templates to build upon.

     

    Build process

    Once we create our files, we need to run a series of steps to create our entities.

     

    Firstly, we need to run terraform init within the directory containing our *.tf files. This command prepares the working directory for use with Terraform and creates the .terraform directory. It normally only needs to be run once, but it is safe to run multiple times and can be used to bring the working directory up to date with any configuration changes.

     

    Next, we run terraform plan, which creates an execution plan and advises which actions will be applied: a dry run of the end build. You can use the -out switch to save the plan to a file, which can then be used as input to the terraform apply command.
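    For example (a sketch of the sequence; myplan is an arbitrary file name):

```
terraform plan -out=myplan   # dry run; save the execution plan to a file
terraform apply myplan       # later, apply exactly what the saved plan showed
```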

     

    Finally, we run terraform apply, which executes the actions and provisions our infrastructure. As explained above, you can use this command with a file created by terraform plan, which can be useful for automation purposes. At the end of the apply process, an output is given including any public IP details. These IPs can be used with ssh and a private key to access the instance OS. For example:

    ssh -i /home/simon/.ssh/id_rsa -l <user> <OCI_Public_IP>

     

    Rebuild and destroy process

    If you either accidentally or purposely change the Terraform-built configuration via the Oracle Cloud Infrastructure console, you can recover using Terraform. For example, if you delete a block volume or an instance created by Terraform, simply re-run the terraform plan command: it will advise what needs to be rebuilt or changed. You then run terraform apply, and the configuration previously described in the configuration files is recreated. Any changes made to the configuration files will be applied to the newly created entities.

     

    The terraform destroy command will simply destroy all entities created by the terraform apply command based upon the Terraform execution plan. The destroy command will display what is to be destroyed and prompt for confirmation to proceed.
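    A typical teardown therefore looks like this (a sketch; destroy prompts before acting):

```
terraform plan -destroy   # preview what would be removed
terraform destroy         # confirm with "yes" to delete all managed entities
```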