Oracle Container Services for use with Kubernetes - Certificates (and how to Update them!)

Version 7

    Introduction

     

    There are quite a few certificates involved with Kubernetes, and unfortunately they are not all in one place. Information about managing these keys is not included in our current documentation, so we will add this information to future versions of the "Oracle Container Services for use with Kubernetes User's Guide".

     

    For now, this Oracle Linux Community Document should provide a good deal of information about these certificates, plus some instructions for managing them and keeping them from expiring!

     

    Big thanks to Ritesh Kumar and River Zhang for contributing research and information to make this Document possible!

     

    Important Note: The commands listed below are valid only for "Oracle Container Services for use with Kubernetes 1.1.9". Newer Kubernetes releases change the syntax of many of these commands! Oracle will document those changes in the new releases when ready.

     

    Certificate Usage

     

    Certificate usage in Kubernetes is important to understand, but it is a bit complicated. The TLS certificates are used broadly for two different purposes:

     

    • For running the apiserver and kubelet in https mode, which we will call "Server Side Certificates". This is similar to running an https website.
    • For authenticating Kubernetes end users/clients (both internal and external), which we will call "Client Certificates".

     

    In general, on a Master Node most of the certificates live in /etc/kubernetes/pki, and client certificates are usually embedded in the client configuration files (kubelet.conf, controller-manager.conf, scheduler.conf, admin.conf, etc.).

     

    The Master Node is also usually configured as a Worker Node within the cluster, so it is important to understand the usage and location of all of the keys in both Node types. Let's explore these in a bit more detail.

     

    Master Node

     

    In a typical setup the following files are found in /etc/kubernetes/pki on the Master node:

    • ca.key, ca.crt

    This is an auto-generated CA pair which has a validity of 10 years from the date generated and is not a concern or candidate while rotating certs. Since all client certificates are signed by this CA, avoid recreating it, as doing so would require making changes across all worker nodes for kubelet. The kubeadm command can manage this key.

    • apiserver.key, apiserver.crt 

    This is the server-side key pair for running the API Server in https mode. It is valid for 1 year from the date generated. The kubeadm command can manage this key.

    • apiserver-kubelet-client.key, apiserver-kubelet-client.crt

    This is the client certificate pair used by the apiserver to authenticate to the kubelet on each worker node running in https mode. It is valid for 1 year from the date generated. The kubeadm command can manage this key.

    • sa.pub, sa.key

    This is the key pair used for the Service Account and it does not need any rotation.

    • front-proxy-ca.key, front-proxy-ca.crt

    This is the CA pair for the API proxy server (front proxy). It is valid for 10 years and is not a concern or candidate while rotating certs.

    • front-proxy-client.key, front-proxy-client.crt

    This is the key pair used by the apiserver to authenticate to the front proxy server. It is valid for 1 year from the date generated. The kubeadm command can manage this key.
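
    To see the remaining lifetime of each certificate in /etc/kubernetes/pki at a glance, a small shell loop like the one below can help. This is just a quick check, assuming openssl is installed and the default pki directory is in use; adjust the path if your installation differs:

    # for c in /etc/kubernetes/pki/*.crt; do echo "$c:"; openssl x509 -in "$c" -noout -enddate; done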

     

    The Master node also contains many client certificates which are stored within their respective .conf files rather than as separate key files, which is a bit unusual. These configuration files are contained in /etc/kubernetes. All of the following .conf files can be managed by using the command # kubeadm alpha phase kubeconfig. An example of inspecting one of these embedded certificates is shown after the list below.

     

    • admin.conf

    The client/user is kubernetes-admin, and the key pair is stored as base64-encoded values in the client-certificate-data and client-key-data fields. It is valid for 1 year from the date generated.

    • kubelet.conf

    The client/user is system:node:<master node name>, and the key pair is stored as base64-encoded values in the client-certificate-data and client-key-data fields. It is valid for 1 year from the date generated.

    • controller-manager.conf

    The client/user is system:kube-controller-manager, and the key pair is stored as base64-encoded values in the client-certificate-data and client-key-data fields. It is valid for 1 year from the date generated.

    • scheduler.conf

    The client/user is system:kube-scheduler, and the key pair is stored as base64-encoded values in the client-certificate-data and client-key-data fields. It is valid for 1 year from the date generated.
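
    Because these client certificates are embedded in the .conf files rather than stored as separate files, checking their expiry takes an extra step: decode the base64 value and hand it to openssl. The pipeline below is a sketch only, assuming the default file locations and that base64 and openssl are available (shown here for admin.conf; the same command works for the other .conf files):

    # grep 'client-certificate-data' /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate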

     

    Worker Nodes

     

    In a typical setup the following files are found in /var/lib/kubelet/pki on each Worker node:

    • kubelet.key, kubelet.crt

    This is the server key pair which is generated and self-signed by kubelet when it is set up, and is used to run kubelet in https mode. It is valid for 1 year from the date generated. Important Note: kubeadm does NOT manage this key pair! To refresh this key pair, remove the files and restart kubelet. Doing this will generate a new self-signed pair, valid for 1 year from the date generated (see the example after this list).

    • kubelet-client.key, kubelet-client.crt

    This is the client key pair of kubelet, which has been signed by the default CA on the Master server, and is used by kubelet to authenticate itself to the Master. It is valid for 1 year from the date generated. This key pair is of little concern, as it is auto-renewed by kubelet when it gets near its expiry date. The parameter for this is --rotate-certificates=true, which is set on each kubelet by default.
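
    As mentioned above, the kubelet server pair can be refreshed manually. Here is what that might look like on a Worker node; this is a sketch only, assuming the default /var/lib/kubelet/pki location, and it moves the old pair aside rather than deleting it so you can roll back if needed:

    # cd /var/lib/kubelet/pki
    # mv kubelet.crt kubelet.crt.old
    # mv kubelet.key kubelet.key.old
    # systemctl restart kubelet

    Give kubelet a moment to start, then confirm the new pair is in place and note the fresh dates:

    # openssl x509 -in kubelet.crt -noout -dates

    To confirm that client certificate rotation is enabled, one quick check is to look for the --rotate-certificates flag on the running kubelet, for example with # ps -ef | grep kubelet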

     

    Suggested Workflow to renew Certificates

    Note: In Oracle Container Services, Master Nodes are also configured as Worker Nodes. Please refresh the kubelet.key and kubelet.crt pair on each Master Node as well, by running the procedure found in the "Worker Nodes" section above.

     

    1. IMPORTANT! Run the following commands as the root user on each Master Node to back up your cluster BEFORE proceeding!

     

    If desired, create a directory where the backup files should be stored:

    # mkdir /etc/kubernetes/backups

     

    Stop the cluster.

    # kubeadm-setup.sh stop

     

    Run kubeadm-setup.sh backup and specify the directory where the backup files should be stored:

    # kubeadm-setup.sh backup /etc/kubernetes/backups
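
    The backup is written into the directory you specified, so a quick listing is an easy sanity check before moving on (the exact file name of the backup archive will vary):

    # ls -l /etc/kubernetes/backups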

     

    2. Run the following commands as a non-root user on the Master node:

    $ kubeadm version

    $ kubectl cluster-info

     

    Here is an example of the results of the two commands:

    [river@ol101 ~]$ kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1+2.0.2.el7", GitCommit:"71224132401864a0ae9bb0743a7a59280xxxxxx", GitTreeState:"archive", BuildDate:"2018-03-02T21:07:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

    [river@ol101 ~]$ kubectl cluster-info
    Kubernetes master is running at https://192.168.1.113:6443
    CoreDNS is running at https://192.168.1.113:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

    We will use that output to create a YAML configuration file for the kubeadm command - we get the version (in this example 1.9.1) from the first command, and the IP address (in this example 192.168.1.113) from the second command.

     

    To create the file:

    • Log into the Master node and get root privileges (either su to root or modify the commands below to use sudo)
    • Enter # cd /etc/kubernetes
    • Use your favorite editor to create a file. The commands here will use kubeadm-config.yml as the file name. Please use the version and IP address from your environment. Your kubeadm-config.yml file should end up similar to the one below:
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: 192.168.1.113
    kubernetesVersion: 1.9.1
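
    Before regenerating anything, it is also worth confirming that the IP address in kubeadm-config.yml matches what the existing apiserver certificate was issued for, since the refreshed certificate needs to carry the same names and addresses. A quick way to view the current Subject Alternative Names (assuming the default pki directory):

    # openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
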
    3. Archive the old configuration files

     

    Enter the following commands in order while logged into the Kubernetes Master Node as root:

    # cd /etc/kubernetes

    # mkdir conf-archive

    # for f in `ls *.conf`; do mv $f conf-archive/$f.old; done

    If you run the command

    # ls conf-archive/

    you should see admin.conf.old, controller-manager.conf.old, kubelet.conf.old, and scheduler.conf.old

     

    4. Archive the old cert files

     

    You should still be in /etc/kubernetes.

    Enter the following commands on the Master Node while logged in as root:

    # mkdir pki-archive

    # cd pki

    # for f in `ls apiserver* front-*client*`; do mv $f ../pki-archive/$f.old; done

    If you run the command

     

    # ls ../pki-archive/

     

    you should see apiserver.crt.old, apiserver.key.old, apiserver-kubelet-client.crt.old, apiserver-kubelet-client.key.old, front-proxy-client.crt.old, and front-proxy-client.key.old

     

    5. Generate refreshed certs

     

    Run the following kubeadm command on the Master node while logged in as root:

    # cd /etc/kubernetes
    # kubeadm alpha phase certs apiserver --config ./kubeadm-config.yml

    You should see results similar to the following:

    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [ol101.us.oracle.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [192.168.0.1 192.168.1.113]

    Run another kubeadm command on the Master node:

    # kubeadm alpha phase certs apiserver-kubelet-client --config ./kubeadm-config.yml

    You should see results similar to the following:

    [certificates] Generated apiserver-kubelet-client certificate and key.

    Run another kubeadm command on the Master node:

     

    # kubeadm alpha phase certs front-proxy-client --config ./kubeadm-config.yml

    You should see results similar to the following:

    [certificates] Generated front-proxy-client certificate and key.

    Finally, run one more kubeadm command on the Master node:

    # kubeadm alpha phase kubeconfig all --config ./kubeadm-config.yml

    You should see results similar to the following:

    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"

     

    6. Reboot the Master node and check the status

     

    Reboot the Master node. After it comes back up, you should see that the dates on the keys have changed. Here is an example:

     

    [river@ol101 ~]$ reboot
    [river@ol101 ~]$ kubectl get nodes

    NAME                  STATUS    ROLES     AGE       VERSION
    ol101.us.oracle.com   Ready     master    306d      v1.9.1+2.0.2.el7
    ol111.us.oracle.com   Ready     <none>    306d      v1.9.1+2.0.2.el7
    ol112.us.oracle.com   Ready     <none>    306d      v1.9.1+2.0.2.el7
    [river@ol101 ~]$ ls -lt /etc/kubernetes/pki/
    total 48
    -rw-r--r-- 1 root root 1050 Jan 17 00:16 front-proxy-client.crt
    -rw------- 1 root root 1675 Jan 17 00:16 front-proxy-client.key
    -rw-r--r-- 1 root root 1099 Jan 17 00:16 apiserver-kubelet-client.crt
    -rw------- 1 root root 1679 Jan 17 00:16 apiserver-kubelet-client.key
    -rw-r--r-- 1 root root 1237 Jan 17 00:15 apiserver.crt
    -rw------- 1 root root 1679 Jan 17 00:15 apiserver.key
    -rw-r--r-- 1 root root 1025 Mar 16  2018 front-proxy-ca.crt
    -rw------- 1 root root 1675 Mar 16  2018 front-proxy-ca.key
    -rw------- 1 root root 1679 Mar 16  2018 sa.key
    -rw------- 1 root root  451 Mar 16  2018 sa.pub
    -rw-r--r-- 1 root root 1025 Mar 16  2018 ca.crt
    -rw------- 1 root root 1679 Mar 16  2018 ca.key
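
    The ls timestamps only show when the files were rewritten. For an explicit pass/fail on the new validity, openssl can also test whether a certificate will still be valid after a given number of seconds (2592000 seconds is roughly 30 days); this is just a convenience check, run as root or with sudo:

    # openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -checkend 2592000 && echo "OK for at least 30 days" || echo "Expires within 30 days"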

    Additional Resources

     

    Visit the resources below to take advantage of Oracle Linux to help you build your cloud infrastructure: