How-to: Installing and Deploying Oracle Linux Cloud Native Environment

Version 6

    Before You Begin

    This tutorial shows you how to install and set up Oracle Linux Cloud Native Environment on Oracle Linux instances in Oracle Cloud Infrastructure. When deploying a multi-master Kubernetes cluster, you need to set up a load balancer to enable high availability of the cluster. This tutorial configures a single-master Kubernetes cluster, so the steps to set up a load balancer are not included. You also configure X.509 private CA certificates to secure communication between the nodes. Other methods to manage and deploy the certificates, such as using the HashiCorp Vault secrets manager or your own certificates signed by a trusted Certificate Authority (CA), are not covered in this tutorial.

     

     

    Background

    Oracle Linux Cloud Native Environment is a fully integrated suite for the development and management of cloud-native applications. The Kubernetes module is the core module. It is used to deploy and manage containers, and it also automatically installs and configures CRI-O, runC, and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster; the runtime may be either runC or Kata Containers. The Kubernetes module also includes Flannel, the default overlay network for a Kubernetes cluster, and CoreDNS, the DNS server for a Kubernetes cluster.

     

    The architecture consists of the Platform API Server, the Platform Agent, and the Platform CLI. The Platform API Server manages all entities, from hosts to microservices, and also manages the state of the environment, including the deployment and configuration of modules to one or more nodes in a cluster. The Platform Agent runs on each host to proxy requests from the Platform API Server to small worker applications. The Platform CLI is a simple application (the olcnectl command) used to communicate with the Platform API Server; it converts its input into Platform API Server calls. The Platform CLI also configures the software required by modules, such as CRI-O, runC, Kata Containers, CoreDNS, and Flannel.

     

     

    What Do You Need?

    • 3 Oracle Linux instances on the Oracle Cloud Infrastructure: operator node, Kubernetes master node, and Kubernetes worker node
    • Instances have a minimum of Oracle Linux 7 Update 5 (x86_64) installed and are running the Unbreakable Enterprise Kernel Release 5 (UEK R5)
    • Instances have the oracle-linux-release-el7 RPM installed and the oracle-olcne-release-el7 RPM installed
    • Instances have access to the following yum repositories: ol7_olcne, ol7_kvm_utils, ol7_addons, ol7_latest, and ol7_UEKR5, or access to related ULN channels
    • Network Time Protocol (NTP) service is running on the Kubernetes master and worker nodes
    • Swap is disabled on the Kubernetes master and worker nodes
    • SELinux is disabled or in permissive mode on the Kubernetes master and worker nodes (example commands to check the NTP, swap, and SELinux requirements follow this list)
    • Instances are configured with the necessary firewall rules (refer to “Oracle Linux Cloud Native Environment: Getting Started Guide” for the list of firewall rules)
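
    For example, the following commands, run as root on each Kubernetes master and worker node, show whether the NTP service is active, whether any swap is in use, and the current SELinux mode. The chronyd service name is an assumption; substitute ntpd if that is the NTP implementation you use. The swapon --show command should produce no output, and getenforce should report Permissive or Disabled.

    # systemctl is-active chronyd.service
    # swapon --show
    # getenforce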

     

     

    Steps

     

    1. Set up the Operator Node

    The operator node performs and manages the deployment of environments, including deploying the Kubernetes cluster. An operator node may be a node in the Kubernetes cluster, or a separate host. In this tutorial, the operator node is a separate host. On the operator node, install the Platform CLI, Platform API Server, and utilities. Enable the olcne-api-server service, but do not start it.

     

    # yum install olcnectl olcne-api-server olcne-utils

    # systemctl enable olcne-api-server.service
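
    Optionally, confirm that the service is enabled but not yet started. Both commands below are standard systemd checks and, at this point, should report enabled and inactive, respectively.

    # systemctl is-enabled olcne-api-server.service
    # systemctl is-active olcne-api-server.service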

     

     

    2. Set up Kubernetes Nodes

    Perform these steps on both Kubernetes master and worker nodes. Install the Platform Agent package and utilities. Enable the olcne-agent service, but do not start it. Install the Kubernetes packages. Enable the kubelet service, but do not start it.

     

    # yum install olcne-agent olcne-utils

    # systemctl enable olcne-agent.service

    # yum install kubeadm kubelet kubectl

    # systemctl enable kubelet.service

     

    If you use a proxy server, configure CRI-O to use it. On each Kubernetes node, create a CRI-O systemd configuration directory, then create a file named crio-proxy.conf in that directory and add the proxy server information. The example below uses placeholder proxy values and a specific IP address range in the NO_PROXY setting; substitute values appropriate for your environment. Finally, enable and start the crio service.

     

    # mkdir /etc/systemd/system/crio.service.d

    # vi /etc/systemd/system/crio.service.d/crio-proxy.conf

    [Service]
    Environment="HTTP_PROXY=http://<insert your proxy info>"
    Environment="HTTPS_PROXY=http://<insert your proxy info>"
    Environment="NO_PROXY=.<your proxy info>,100.102.*"

    # systemctl enable --now crio.service
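
    To confirm that systemd picked up the proxy drop-in, display the environment assigned to the crio service with the generic systemd command below. If the crio service was already loaded before you created the file, run systemctl daemon-reload and restart the service first.

    # systemctl show crio.service --property=Environment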

     

    If the docker or containerd services are running, stop and disable them.

     

    # systemctl disable --now docker.service

    # systemctl disable --now containerd.service

     

     

    3. Set up X.509 Private CA Certificates

    Use the /etc/olcne/gen-certs-helper.sh script to generate a private CA and certificates for the nodes. Run the script from the /etc/olcne directory; the script saves the certificate files in the current directory. Use the --nodes option followed by the nodes for which you want to create certificates. Create a certificate for each node that runs the Platform API Server or Platform Agent, that is, for the operator node and each Kubernetes node. Provide the private CA information using the --cert-request* options. Some of these options are shown in the example; you can get a list of all command options using the gen-certs-helper.sh --help command.

     

    # cd /etc/olcne

     

    This example uses a specific common name and specific nodes. Substitute the appropriate operator, master, and worker node names for your environment. In this example, the script is run from the operator node as the root user.

     

    # ./gen-certs-helper.sh \
      --cert-request-organization-unit "My Company Unit" \
      --cert-request-organization "My Company" \
      --cert-request-locality "My Town" \
      --cert-request-state "My State" \
      --cert-request-country US \
      --cert-request-common-name linuxandvirtiad.oraclevcn.com \
      --nodes crm-operator.webad2iad.linuxandvirtiad.oraclevcn.com,crm-master-5313.webad2iad.linuxandvirtiad.oraclevcn.com,crm-worker-3919.webad2iad.linuxandvirtiad.oraclevcn.com
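
    When the script completes, you can list the generated certificate files on the operator node. The exact directory layout depends on the script version, but the certificates used in the later bootstrap steps reside under /etc/olcne/configs/certificates/.

    # ls -R /etc/olcne/configs/certificates/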

     

     

    4. Transfer Certificates

    The /etc/olcne/gen-certs-helper.sh script used to generate a private CA and certificates for the nodes was run from the operator node. Ensure that the operator node has ssh access to the Kubernetes master and worker nodes (not shown in this tutorial), then run the following command as the non-root (opc) user on the operator node to transfer the certificates from the operator node to the other nodes.

     

    $ bash -ex /etc/olcne/configs/certificates/olcne-tranfer-certs.sh
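
    After the transfer completes, each Kubernetes node should have its certificate, key, and the CA certificate in the directory used by the bootstrap steps that follow. A quick check on each node should list ca.cert, node.cert, and node.key.

    # ls /etc/olcne/configs/certificates/production/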

     

     

    5. Configure the Platform API Server to Use the Certificates

    As the root user on the operator node, run the /etc/olcne/bootstrap-olcne.sh script as shown to configure the Platform API Server to use the certificates. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

     

    # /etc/olcne/bootstrap-olcne.sh \
        --secret-manager-type file \
        --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
        --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
        --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
        --olcne-component api-server
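
    The bootstrap script starts the olcne-api-server service as part of the configuration, so a standard systemd status check should now show the Platform API Server as active.

    # systemctl status olcne-api-server.service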

     

     

    6. Configure the Platform Agent to Use the Certificates

    As the root user on each Kubernetes node, run the /etc/olcne/bootstrap-olcne.sh script as shown to configure the Platform Agent to use the certificates. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

     

    # /etc/olcne/bootstrap-olcne.sh \
        --secret-manager-type file \
        --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
        --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
        --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
        --olcne-component agent

     

    Repeat step 6 as needed to ensure this script is run on each Kubernetes node.
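
    Likewise, a standard systemd status check on each Kubernetes node should now show the Platform Agent as active after the bootstrap script has run.

    # systemctl status olcne-agent.service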

     

     

    7. Create the Environment

    As the non-root (opc) user on the operator node, create the environment using the olcnectl environment create command as shown. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

     

    $ olcnectl --api-server 127.0.0.1:8091 environment create --environment-name myenvironment \
        --update-config \
        --secret-manager-type file \
        --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
        --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
        --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key

     

     

    8. Add Kubernetes to the Environment

    Listing the available modules is not a required step, but you can do so with the following command, run as the non-root (opc) user on the operator node.

     

    $ olcnectl --api-server 127.0.0.1:8091 module list --environment-name myenvironment
    Available Modules:
            kubernetes

     

    As the non-root (opc) user on the operator node, use the following command to add the Kubernetes module to the environment created in step 7. This example includes specific master and worker nodes; substitute nodes as necessary for your environment. The --apiserver-advertise-address option specifies the IP address of the interface on the master node. If you have more than one worker node, separate the node names with a comma, for example: --worker-nodes worker1.example.com:8090,worker2.example.com:8090.

     

    $ olcnectl --api-server 127.0.0.1:8091 module create --environment-name myenvironment \
      --module kubernetes --name mycluster \
      --container-registry container-registry.oracle.com/olcne \
      --apiserver-advertise-address 100.102.107.21 \
      --master-nodes crm-master-5313.webad2iad.linuxandvirtiad.oraclevcn.com:8090 \
      --worker-nodes crm-worker-3919.webad2iad.linuxandvirtiad.oraclevcn.com:8090
    Modules created successfully.

     

     

    9. Validate the Kubernetes module

    As the non-root (opc) user on the operator node, use the following command to validate that the nodes are configured correctly to deploy the Kubernetes module. In this example, there are no validation errors. If there are any errors, the commands required to fix the nodes are provided as output of this command.

     

    $ olcnectl --api-server 127.0.0.1:8091 module validate --environment-name myenvironment \
      --name mycluster
    Validation of module mycluster succeeded.

     

     

    10. Deploy the Kubernetes Module

    As the non-root (opc) user on the operator node, use the following command to deploy the Kubernetes module to the environment.

     

    $ olcnectl --api-server 127.0.0.1:8091 module install --environment-name myenvironment \
      --name mycluster
    Modules installed successfully.
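
    As an optional check, and assuming your version of olcnectl provides the module instances subcommand, you can list the deployed module and its nodes as the non-root (opc) user on the operator node.

    $ olcnectl --api-server 127.0.0.1:8091 module instances --environment-name myenvironment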

     

     

    Want to Learn More?