Hands-On Lab: Use Vagrant and VirtualBox on Oracle Linux to set up Oracle Container Services for use with Kubernetes

Version 12

    Introduction

     

    This hands-on lab demonstrates how to create a three-node Kubernetes cluster, consisting of one master node and two worker nodes, using HashiCorp Vagrant and Oracle VM VirtualBox.

     

    Getting Started

    The Hands-On Lab laptops provided at Oracle OpenWorld HOL have been preinstalled with the required software:

    • Oracle Linux 7 Update 5
    • Git
    • Oracle VM VirtualBox
    • HashiCorp Vagrant

     

    In the instructions below, the commands you will enter are shown in bold font.

    After each instruction, the expected feedback from the system is shown. In some examples, the feedback is redacted to keep these instructions short.

     

     

    Oracle Single Sign On (SSO) and Oracle Container Registry

     

    NOTE: You will be prompted for your Oracle Single Sign On (SSO) Username and Password; one will not be provided for you. SSO is required to log in to the Oracle Container Registry.

    The first time you use a repository from the Oracle Container Registry, you will be asked to accept its Terms and Conditions.
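
    This browser step only records your acceptance of the terms; the actual registry authentication happens later, inside each VM, when the setup scripts log in to the registry for you. For reference, the underlying command is the standard Docker login:

    # docker login container-registry.oracle.com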

     

    Using a browser on your laptop, go to container-registry.oracle.com

    Sign in by clicking "Sign in" at the upper right of the window:

     

     

    Enter your SSO Username and password as prompted:

     

     

    After logging in, click "Container Services" to browse that section:

     

     

    Select the language from the pull-down menu:

     

     

    Click "Continue" to read and accept the Oracle Standard Terms and Restrictions.

     

     

    Clone the Oracle Vagrant GitHub repository

     

    In a Terminal session on the demo laptop, enter the following command to clone the Oracle vagrant-boxes repository to your local machine; the expected output is also shown below.

     

    [demo@desktop1 ~]$ git clone https://github.com/oracle/vagrant-boxes

    Cloning into 'vagrant-boxes'...

    remote: Enumerating objects: 31, done.

    remote: Counting objects: 100% (31/31), done.

    remote: Compressing objects: 100% (25/25), done.

    remote: Total 615 (delta 10), reused 23 (delta 5), pack-reused 584

    Receiving objects: 100% (615/615), 136.77 KiB | 0 bytes/s, done.

    Resolving deltas: 100% (327/327), done.

    Set Up the Master Node

     

    Change directory to the Kubernetes subfolder as shown:

    [demo@desktop1 ~]$ cd vagrant-boxes/Kubernetes/

     

    Before we start creating the environment, let's install some Vagrant plugins that will be useful for this lab:

     

    [demo@desktop1 Kubernetes]$ vagrant plugin install vagrant-hosts vagrant-env

    This will take a few minutes to complete. The vagrant-hosts plugin automatically synchronizes the /etc/hosts file across the VMs created by Vagrant, and the vagrant-env plugin copies environment variables set on the host into the VMs.
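
    You can confirm that both plugins installed correctly:

    [demo@desktop1 Kubernetes]$ vagrant plugin list

    The output should list vagrant-env and vagrant-hosts along with their version numbers.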

     

     

    With the single command entered as the next step, Vagrant will do all of the following (the note after this list shows where these settings are defined):

    • Create the virtual machine in VirtualBox
    • Configure two VirtualBox virtual network interfaces, including port forwarding
    • Import a base Oracle Linux 7 VirtualBox VM image
    • Boot the image
    • Install required prerequisite packages
    • Install docker-engine
    • Install Kubernetes
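
    All of this behavior is driven by the Vagrantfile in the current directory. If you want to see where, for example, the box name and the forwarded ports are defined, a quick search works (the keyword pattern here is just a guess at the relevant directives):

    [demo@desktop1 Kubernetes]$ grep -nE 'vm.box|forwarded_port|private_network' Vagrantfile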

     

    Issue the command as shown:

    [demo@desktop1 Kubernetes]$ vagrant up master

    Bringing machine 'master' up with 'virtualbox' provider...

    ==> master: Importing base box 'ol7-latest'...

    ==> master: Matching MAC address for NAT networking...

    ==> master: Setting the name of the VM: Kubernetes_master_1538599449038_71077

    ==> master: Clearing any previously set network interfaces...

    ==> master: Preparing network interfaces based on configuration...

        master: Adapter 1: nat

        master: Adapter 2: hostonly

    ==> master: Forwarding ports...

        master: 8001 (guest) => 8001 (host) (adapter 1)

        master: 22 (guest) => 2222 (host) (adapter 1)

    ==> master: Running 'pre-boot' VM customizations...

    ==> master: Booting VM...

    ==> master: Waiting for machine to boot. This may take a few minutes...

        master: SSH address: 127.0.0.1:2222

        master: SSH username: vagrant

        master: SSH auth method: private key

        master:

        master: Vagrant insecure key detected. Vagrant will automatically replace

        master: this with a newly generated keypair for better security.

        master:

        master: Inserting generated public key within guest...

        master: Removing insecure key from the guest if it's present...

        master: Key inserted! Disconnecting and reconnecting using new SSH key...

    ==> master: Machine booted and ready!

     

    <<< 300 lines redacted >>>

     

        master: Complete!

        master: net.bridge.bridge-nf-call-ip6tables = 1

        master: net.bridge.bridge-nf-call-iptables = 1

        master: Your Kubernetes VM is ready to use!

    ==> master: Running provisioner: shell...

        master: Running: inline script

    ==> master: Running provisioner: shell...

        master: Running: inline script

     

    A great thing about using Vagrant is that it automatically injects SSH keys into all the guests it provisions. This makes it much easier to SSH into the master and worker nodes.
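
    Under the hood, vagrant ssh is ordinary SSH using the generated key and the forwarded port. If you ever want to connect with a regular ssh client instead, you can print the exact connection settings Vagrant uses:

    [demo@desktop1 Kubernetes]$ vagrant ssh-config master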

    Run the following command to SSH to the master node:

     

    [demo@desktop1 Kubernetes]$ vagrant ssh master

     

    Welcome to Oracle Linux Server release 7.5 (GNU/Linux 4.1.12-124.14.1.el7uek.x86_64)

     

    The Oracle Linux End-User License Agreement can be viewed here:

     

        * /usr/share/eula/eula.en_US

     

    For additional packages, updates, documentation and community help, see:

     

        * http://yum.oracle.com/

     

    You will need to be 'root' to have proper privileges to set up the Master node in Kubernetes.

    Run the following command to become root:

     

    [vagrant@master ~]$ su root
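
    If su prompts for a root password you don't know, an equivalent way to get a root shell is the vagrant user's sudo rights (passwordless sudo is standard in Vagrant base boxes, though that is an assumption about this particular image):

    [vagrant@master ~]$ sudo -s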

     

    Oracle provides scripts that ease deployment of the master and worker nodes.

    NOTE: You will be prompted for your Oracle Single Sign On Username and Password; one will not be provided for you. SSO is required to log in to the Oracle Container Registry.

     

    Enter the command as shown. When this step completes, don't forget to exit from the root shell and then from the Vagrant SSH session.

     

    [root@master vagrant]# /vagrant/scripts/kubeadm-setup-master.sh

    /vagrant/scripts/kubeadm-setup-master.sh: Login to container-registry.oracle.com

    Username: xxxxx.xxxxx@oracle.com

    Password:

    Login Succeeded

    /vagrant/scripts/kubeadm-setup-master.sh: Setup Master node -- be patient!

    /vagrant/scripts/kubeadm-setup-master.sh: Copying admin.conf for vagrant user

    /vagrant/scripts/kubeadm-setup-master.sh: Copying admin.conf into host directory

    /vagrant/scripts/kubeadm-setup-master.sh: Saving token for worker nodes

    /vagrant/scripts/kubeadm-setup-master.sh: Master node ready, run

        /vagrant/scripts/kubeadm-setup-worker.sh

    on the worker nodes
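
    While you are still root on the master, you can optionally list the bootstrap token the script just saved; this is what the worker script presents when joining the cluster (kubeadm token list is a standard kubeadm subcommand):

    [root@master vagrant]# kubeadm token list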

     

    [root@master vagrant]# exit

    [vagrant@master ~]$ exit

     

    Set Up Worker1

     

    The following procedure is very similar to setting up the master; configuring a worker is a bit quicker.

     

    Enter the command to configure and bring up worker1:

    [demo@desktop1 Kubernetes]$ vagrant up worker1

    Bringing machine 'worker1' up with 'virtualbox' provider...

    ==> worker1: Importing base box 'ol7-latest'...

    ==> worker1: Matching MAC address for NAT networking...

    ==> worker1: Setting the name of the VM: Kubernetes_worker1_1538600100785_82769

    ==> worker1: Fixed port collision for 22 => 2222. Now on port 2200.

    ==> worker1: Clearing any previously set network interfaces...

    ==> worker1: Preparing network interfaces based on configuration...

        worker1: Adapter 1: nat

        worker1: Adapter 2: hostonly

    ==> worker1: Forwarding ports...

        worker1: 22 (guest) => 2200 (host) (adapter 1)

    ==> worker1: Running 'pre-boot' VM customizations...

    ==> worker1: Booting VM...

    ==> worker1: Waiting for machine to boot. This may take a few minutes...

        worker1: SSH address: 127.0.0.1:2200

        worker1: SSH username: vagrant

        worker1: SSH auth method: private key

        worker1:

        worker1: Vagrant insecure key detected. Vagrant will automatically replace

        worker1: this with a newly generated keypair for better security.

        worker1:

        worker1: Inserting generated public key within guest...

        worker1: Removing insecure key from the guest if it's present...

        worker1: Key inserted! Disconnecting and reconnecting using new SSH key...

    ==> worker1: Machine booted and ready!

     

    <<< 301 lines redacted >>>

     

        worker1: Complete!

        worker1: net.bridge.bridge-nf-call-ip6tables = 1

        worker1: net.bridge.bridge-nf-call-iptables = 1

        worker1: Your Kubernetes VM is ready to use!

     

     

    Run the following command to SSH to worker1:

     

    [demo@desktop1 Kubernetes]$ vagrant ssh worker1

     

    Welcome to Oracle Linux Server release 7.5 (GNU/Linux 4.1.12-124.14.1.el7uek.x86_64)

     

    The Oracle Linux End-User License Agreement can be viewed here:

     

        * /usr/share/eula/eula.en_US

     

    For additional packages, updates, documentation and community help, see:

     

        * http://yum.oracle.com/

     

     

    You will need to be 'root' to have proper privileges to set up the Worker nodes in Kubernetes.

    Run the following command to become root:

     

    [vagrant@worker1 ~]$ su root

     

    Execute the script as shown; you will be prompted for your Oracle Single Sign On Username and Password, which are required to log in to the Oracle Container Registry.

    When the script completes, remember to exit from the root shell and then from the Vagrant SSH session.

     

    [root@worker1 vagrant]# /vagrant/scripts/kubeadm-setup-worker.sh

    /vagrant/scripts/kubeadm-setup-worker.sh: Login to container-registry.oracle.com

    Username: xxxxx.xxxxx@oracle.com

    Password:

    Login Succeeded

    /vagrant/scripts/kubeadm-setup-worker.sh: Setup Worker node

    Starting to initialize worker node ...

    Checking if env is ready ...

    Checking whether docker can pull busybox image ...

    Checking access to container-registry.oracle.com/kubernetes ...

    Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy-amd64 ...

    v1.9.1-1: Pulling from container-registry.oracle.com/kubernetes/kube-proxy-amd64

    Digest: sha256:f525d06eebf7f21c55550b1da8cee4720e36b9ffee8976db357f49eddd04c6d0

    Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.9.1-1

    Checking whether docker can run container ...

    Checking iptables default rule ...

    Checking br_netfilter module ...

    Checking sysctl variables ...

    Enabling kubelet ...

    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

    Check successful, ready to run 'join' command ...

    [preflight] Running pre-flight checks.

    [validation] WARNING: kubeadm doesn't fully support multiple API Servers yet

    [discovery] Trying to connect to API Server "192.168.99.100:6443"

    [discovery] Trying to connect to API Server "192.168.99.100:6443"

    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"

    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"

    [discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key

    [discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key

    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"

    [discovery] Successfully established connection with API Server "192.168.99.100:6443"

    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"

    [discovery] Successfully established connection with API Server "192.168.99.100:6443"

     

    This node has joined the cluster:

    * Certificate signing request was sent to master and a response

      was received.

    * The Kubelet was informed of the new secure connection details.

     

    Run 'kubectl get nodes' on the master to see this node join the cluster.

    /vagrant/scripts/kubeadm-setup-worker.sh: Worker node ready

     

    [root@worker1 vagrant]# exit

    [vagrant@worker1 ~]$ exit
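
    Before moving on, you can optionally confirm from the host that worker1 has joined the cluster; vagrant ssh can run a one-off command without opening an interactive session:

    [demo@desktop1 Kubernetes]$ vagrant ssh master -c "kubectl get nodes"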

    Set Up Worker2

     

    The following procedure is almost identical to setting up worker1.

     

    Enter the command to configure and bring up worker2:

    [demo@desktop1 Kubernetes]$ vagrant up worker2

    Bringing machine 'worker2' up with 'virtualbox' provider...

    ==> worker2: Importing base box 'ol7-latest'...

    ==> worker2: Matching MAC address for NAT networking...

    ==> worker2: Setting the name of the VM: Kubernetes_worker2_1538600483114_57537

    ==> worker2: Fixed port collision for 22 => 2222. Now on port 2201.

    ==> worker2: Clearing any previously set network interfaces...

    ==> worker2: Preparing network interfaces based on configuration...

        worker2: Adapter 1: nat

        worker2: Adapter 2: hostonly

    ==> worker2: Forwarding ports...

        worker2: 22 (guest) => 2201 (host) (adapter 1)

    ==> worker2: Running 'pre-boot' VM customizations...

    ==> worker2: Booting VM...

    ==> worker2: Waiting for machine to boot. This may take a few minutes...

        worker2: SSH address: 127.0.0.1:2201

        worker2: SSH username: vagrant

        worker2: SSH auth method: private key

        worker2:

        worker2: Vagrant insecure key detected. Vagrant will automatically replace

        worker2: this with a newly generated keypair for better security.

        worker2:

        worker2: Inserting generated public key within guest...

        worker2: Removing insecure key from the guest if it's present...

        worker2: Key inserted! Disconnecting and reconnecting using new SSH key...

    ==> worker2: Machine booted and ready!

     

    <<< 301 lines redacted >>>

     

        worker2: Complete!

        worker2: net.bridge.bridge-nf-call-ip6tables = 1

        worker2: net.bridge.bridge-nf-call-iptables = 1

        worker2: Your Kubernetes VM is ready to use!

     

     

    Run the following command to SSH to worker2:

     

    [demo@desktop1 Kubernetes]$ vagrant ssh worker2

     

    Welcome to Oracle Linux Server release 7.5 (GNU/Linux 4.1.12-124.14.1.el7uek.x86_64)

     

    The Oracle Linux End-User License Agreement can be viewed here:

     

        * /usr/share/eula/eula.en_US

     

    For additional packages, updates, documentation and community help, see:

     

        * http://yum.oracle.com/

     

     

    Run the following command to become root:

     

    [vagrant@worker2 ~]$ su root

     

    Execute the script as shown; you will be prompted for your Oracle Single Sign On Username and Password, which are required to log in to the Oracle Container Registry.

    When the script completes, remember to exit from the root shell and then from the Vagrant SSH session.

     

    [root@worker2 vagrant]# /vagrant/scripts/kubeadm-setup-worker.sh

    /vagrant/scripts/kubeadm-setup-worker.sh: Login to container-registry.oracle.com

    Username: xxxxx.xxxxx@oracle.com

    Password:

    Login Succeeded

    /vagrant/scripts/kubeadm-setup-worker.sh: Setup Worker node

    Starting to initialize worker node ...

    Checking if env is ready ...

    Checking whether docker can pull busybox image ...

    Checking access to container-registry.oracle.com/kubernetes ...

    Trying to pull repository container-registry.oracle.com/kubernetes/kube-proxy-amd64 ...

    v1.9.1-1: Pulling from container-registry.oracle.com/kubernetes/kube-proxy-amd64

    Digest: sha256:f525d06eebf7f21c55550b1da8cee4720e36b9ffee8976db357f49eddd04c6d0

    Status: Image is up to date for container-registry.oracle.com/kubernetes/kube-proxy-amd64:v1.9.1-1

    Checking whether docker can run container ...

    Checking iptables default rule ...

    Checking br_netfilter module ...

    Checking sysctl variables ...

    Enabling kubelet ...

    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

    Check successful, ready to run 'join' command ...

    [preflight] Running pre-flight checks.

    [validation] WARNING: kubeadm doesn't fully support multiple API Servers yet

    [discovery] Trying to connect to API Server "192.168.99.100:6443"

    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"

    [discovery] Trying to connect to API Server "192.168.99.100:6443"

    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:6443"

    [discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key

    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"

    [discovery] Successfully established connection with API Server "192.168.99.100:6443"

    [discovery] Requesting info from "https://192.168.99.100:6443" again to validate TLS against the pinned public key

    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.99.100:6443"

    [discovery] Successfully established connection with API Server "192.168.99.100:6443"

     

    This node has joined the cluster:

    * Certificate signing request was sent to master and a response

      was received.

    * The Kubelet was informed of the new secure connection details.

     

    Run 'kubectl get nodes' on the master to see this node join the cluster.

    /vagrant/scripts/kubeadm-setup-worker.sh: Worker node ready

     

    [root@worker2 vagrant]# exit

    [vagrant@worker2 ~]$ exit

    Validate the Kubernetes Cluster Configuration

     

    Your cluster is ready. Log in to the master node to verify the cluster setup:

     

    [demo@desktop1 Kubernetes]$ vagrant ssh master

     

    Welcome to Oracle Linux Server release 7.5 (GNU/Linux 4.1.12-124.14.1.el7uek.x86_64)

     

    The Oracle Linux End-User License Agreement can be viewed here:

     

        * /usr/share/eula/eula.en_US

     

    For additional packages, updates, documentation and community help, see:

     

        * http://yum.oracle.com/

     

    Try the following command to get basic cluster information:

     

    [vagrant@master ~]$ kubectl cluster-info

    Kubernetes master is running at https://192.168.99.100:6443

    KubeDNS is running at https://192.168.99.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

     

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

     

    To get more information about each node, try this command:

     

    [vagrant@master ~]$ kubectl get nodes

    NAME                 STATUS    ROLES     AGE       VERSION

    master.vagrant.vm    Ready     master    1h        v1.9.1+2.1.8.el7

    worker1.vagrant.vm   Ready     <none>    1h        v1.9.1+2.1.8.el7

    worker2.vagrant.vm   Ready     <none>    1h        v1.9.1+2.1.8.el7

     

    Notice that all of the nodes report a "Ready" status.
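
    For a more detailed view of any single node, including its capacity, conditions, and the pods scheduled on it, kubectl describe is handy:

    [vagrant@master ~]$ kubectl describe node worker1.vagrant.vm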

     

    And try this command to get information about the Kubernetes pods:

     

    [vagrant@master ~]$ kubectl get pods --namespace=kube-system

    NAME                                        READY     STATUS    RESTARTS   AGE

    etcd-master.vagrant.vm                      1/1       Running   0          1h

    kube-apiserver-master.vagrant.vm            1/1       Running   0          1h

    kube-controller-manager-master.vagrant.vm   1/1       Running   0          1h

    kube-dns-855949bbf-27k8k                    3/3       Running   0          1h

    kube-flannel-ds-bsjxw                       1/1       Running   0          1h

    kube-flannel-ds-f4k72                       1/1       Running   0          1h

    kube-flannel-ds-hgjq7                       1/1       Running   0          1h

    kube-proxy-fmq29                            1/1       Running   0          1h

    kube-proxy-kwj62                            1/1       Running   0          1h

    kube-proxy-v82dm                            1/1       Running   0          1h

    kube-scheduler-master.vagrant.vm            1/1       Running   0          1h

    kubernetes-dashboard-7c966ddf6d-25csv       1/1       Running   0          1h
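
    To see which node each of these pods landed on, add wide output to the same command:

    [vagrant@master ~]$ kubectl get pods --namespace=kube-system -o wide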

     

    Display the Kubernetes Management Console GUI

     

    You should still be in a Vagrant SSH session on the master node. If not, run the following command from a Terminal session on the host to SSH to the master node:

    [demo@desktop1 Kubernetes]$ vagrant ssh master

     

    Welcome to Oracle Linux Server release 7.5 (GNU/Linux 4.1.12-124.14.1.el7uek.x86_64)

     

    The Oracle Linux End-User License Agreement can be viewed here:

     

        * /usr/share/eula/eula.en_US

     

    For additional packages, updates, documentation and community help, see:

     

        * http://yum.oracle.com/

     

    An easy and quick way to display the Kubernetes GUI is to use a service account token.

    Enter this long command to display your token (hint: you might want to copy and paste it!):

     

    Don't try to use this example token; it won't work on your cluster.

    [vagrant@master ~]$ kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep namespace) | grep token

    Name:         namespace-controller-token-8bvwb

    Type:  kubernetes.io/service-account-token

    token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi04YnZ3YiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBkZTU1NGEwLWM3NGUtMTFlOC04MWNjLTA4MDAyNzI0MTllZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.B_bEUA99O19h_xglDJdg3WRcWu2vnPfx-R0iqyJPCMKdNlD3DBSChRBhYB1o7L01rUFKTp4s5JHkb7caj9nUKzy1YUdqkyxZeUUzAe_zBUEIk85252AsGcm96ZveZdC9QMCenuUMsLdpB8VgdPEpCezvpogWNnetq-kh5Qz33V-HwuY6qktIkOYV_Y7ouHDppK1exKd8x4jaP-R9WNMKMJQg3M-CpD1znCDUDRhQIkmSBJkaa01ykpv8sDGE-m-N_AgRyrn1RmQzGxnKWd_N_UiDBDmQMyXMLlCD4ckv8fVjpN9X8e3bUz9KmHRwLI2uD-grYN5x9t5SVDQstCPUdw

    From your Terminal window, carefully select the characters in the token, then right-click and select Copy.

     

    If you have these instructions open in a Firefox browser on your Oracle HOL laptop, RIGHT CLICK the link below and select "Open Link in New Window" (otherwise these instructions would go away!)

    If all else fails, enter the link manually into a new tab or browser.

     

    http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
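
    This URL works because guest port 8001 on the master is forwarded to the host (you saw this under "Forwarding ports" in the vagrant up output), and it assumes a kubectl proxy is serving the Kubernetes API on that port. If the page does not load, you can start the proxy yourself from your master SSH session (a sketch; the flags let the proxy accept the forwarded connection):

    [vagrant@master ~]$ kubectl proxy --address=0.0.0.0 --accept-hosts='.*' &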

     

    Make sure the Token option is selected, right-click and paste the token into the field, then click "SIGN IN" as shown in this screenshot:

     

    The Kubernetes GUI dashboard will appear!

     

    BONUS LAB (if you have time...)

     

    This section will step you through creating a Kubernetes pod that runs MySQL Server.

     

    You should still be in a Vagrant SSH session on the master node, but not logged in as root.

     

    The next step is to create a YAML file for the deployment. You will be copying text from the web browser and pasting it into your favorite Linux text editor (vi, for example).

    The file should be named mysql-db.yaml

     

    In another browser or browser tab, go to this link: https://docs.oracle.com/cd/E52668_01/E88884/html/kubectl-pod-yaml-deployments.html

     

    In the section MySQL Server Deployment you will find four example YAML definitions (Persistent Volume, Persistent Volume Claim, Service, and MySQL Server Instance).

     

    Select each YAML definition from that document and paste it into your text editor. Double-check the file! YAML syntax is indentation-sensitive: make sure every line is indented exactly as shown in the manual.
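
    All four definitions go into the single mysql-db.yaml file, separated by --- document markers. As a sketch of the overall shape only (copy the actual contents from the Oracle document):

    apiVersion: v1
    kind: PersistentVolume
    # ... definition from the Oracle document ...
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    # ... definition from the Oracle document ...
    ---
    apiVersion: v1
    kind: Service
    # ... definition from the Oracle document ...
    ---
    apiVersion: v1
    kind: Pod
    # ... definition from the Oracle document ...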

     

    When you have completed the creation of the YAML file, run the following command:

     

    [vagrant@master ~]$ kubectl create -f mysql-db.yaml

    persistentvolume "mysql-pv-volume" created

    persistentvolumeclaim "mysql-pv-claim" created

    service "mysql-service" created

    pod "mysql" created

     

    Try this command to get information about the Kubernetes pods. You should find that your MySQL pod is running.

     

    [vagrant@master ~]$ kubectl get pods

    NAME      READY     STATUS    RESTARTS   AGE

    mysql     1/1       Running   0          12m
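
    If the pod shows a status other than Running, two standard kubectl commands are the first stops for diagnosis:

    [vagrant@master ~]$ kubectl describe pod mysql
    [vagrant@master ~]$ kubectl logs mysql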