Kubernetes cluster deployment using Kubespray

Introduction

Container technology has become an essential component of "as a service" offerings, both in the cloud and on-premises. There are numerous blog posts that cover the benefits of containers and of Kubernetes in particular, so rather than preach to the choir, I will walk you through how to quickly set up Kubernetes on-premises.

There are multiple ways to deploy a Kubernetes cluster, such as bootstrapping it with kubeadm or using deployment tools like Kubespray or kops. I have found Kubespray to be the simplest option, and it spared me a lot of time troubleshooting the Kubernetes network setup.

Why Kubespray?

Kubespray uses Ansible playbooks to deploy a Kubernetes cluster across the configured nodes.

As such, Kubespray automates the deployment of all the Kubernetes components: the prerequisites, the core setup (Kubernetes, etcd, Docker), the network plugin (Calico by default, but you can switch to your preferred plugin), and applications (CoreDNS, cert-manager, ingress-nginx).
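For example, the optional applications are switched on and off through variables in the inventory's group_vars. The snippet below is only a sketch of what those toggles look like; the file path and variable names reflect recent Kubespray sample inventories and may differ in your checkout:

# inventory/sample/group_vars/k8s-cluster/addons.yml (path and names vary by Kubespray version)
ingress_nginx_enabled: true     # deploy the ingress-nginx controller
cert_manager_enabled: true      # deploy cert-manager
metrics_server_enabled: false   # leave metrics-server out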

Kubespray supports most popular Linux distributions and can be used to deploy a production-ready, highly available cluster in the cloud or on-premises.

For the official documentation, please check kubespray.io.

Test Environment

For this setup, I configured five virtual machines: one for the master node and four for the worker nodes. The base image I used for all these VMs was Ubuntu 18.04. For a production-grade cluster, please see the sizing guide at kubernetes.io.

As part of the Ansible setup, configure passwordless SSH connectivity from the master node to itself and to all the worker nodes; Ansible requires this to manage the nodes.

Note: Alternatively, you can set up a separate management VM, install Ansible on it, and configure passwordless SSH from it into all the K8s nodes.

root@kub-master01:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:uAp7asKtr2506B2P4qKz1YRnwAFTMtohp9BnBGI5taY root@kub-master01
The key's randomart image is:
+---[RSA 2048]----+
|O=O+.            |
|+&.o+            |
|o ==             |
|  oo   .         |
| Eo + . S        |
| o B   .         |
|+ * = .          |
|+*.*.o           |
|OXB+.            |
+----[SHA256]-----+
root@kub-master01:~#

Copy the public key from the Master node to itself and all the other nodes using ssh-copy-id.

root@kub-master01:~# ssh-copy-id -i ~/.ssh/id_rsa.pub kub-master01

root@kub-master01:~# for node in 01 02 03 04; do
   echo $node
   ssh-copy-id -i ~/.ssh/id_rsa.pub kub-node${node}
done
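Before moving on, it is worth confirming that passwordless SSH actually works from the master to each node; every host should print its name without prompting for a password (a quick check using the host names above):

root@kub-master01:~# for node in 01 02 03 04; do ssh -o BatchMode=yes kub-node${node} hostname; done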

Install the Python pip3 package.

root@kub-master01:~# apt install python3-pip

Now clone the official Kubespray repository onto the master node.

root@kub-master01:~# git clone https://github.com/kubernetes-sigs/kubespray

Change to the kubespray directory and install the required Python packages by running the following commands.

root@kub-master01:~# cd kubespray
root@kub-master01:~/kubespray# pip3 install -r requirements.txt
root@kub-master01:~/kubespray# pip3 install -r contrib/inventory_builder/requirements.txt
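The requirements file pins a specific Ansible release, so a quick sanity check that the expected tools are now on the PATH does not hurt (the versions reported will depend on your Kubespray checkout):

root@kub-master01:~/kubespray# ansible --version
root@kub-master01:~/kubespray# python3 --version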

Now copy inventory/sample to inventory/mycluster, where you will update the configuration for the cluster you are about to set up.

root@kub-master01:~/kubespray# cp -pr inventory/sample inventory/mycluster

To populate the Ansible hosts file with the Kubernetes cluster nodes, pass the IP addresses of all the nodes, including the master node, to the inventory builder as follows.

root@kub-master01:~/kubespray# declare -a IPS=(10.21.214.70 10.21.214.71 10.21.214.72 10.21.214.73 10.21.214.74)
root@kub-master01:~/kubespray# CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

You can edit hosts.yaml and change the node names as you deem appropriate. By default, Kubespray names the nodes node1, node2, and so on; I changed them to my preferred host names.

The following is how my hosts.yaml looks after running the inventory builder and changing the host names.

root@kub-master01:~/kubespray/inventory/mycluster# more hosts.yaml
all:
  hosts:
    kub-master01:
      ansible_host: 10.21.214.70
      ip: 10.21.214.70
      access_ip: 10.21.214.70
    kub-node01:
      ansible_host: 10.21.214.71
      ip: 10.21.214.71
      access_ip: 10.21.214.71
    kub-node02:
      ansible_host: 10.21.214.72
      ip: 10.21.214.72
      access_ip: 10.21.214.72
    kub-node03:
      ansible_host: 10.21.214.73
      ip: 10.21.214.73
      access_ip: 10.21.214.73
    kub-node04:
      ansible_host: 10.21.214.74
      ip: 10.21.214.74
      access_ip: 10.21.214.74
  children:
    kube-master:
      hosts:
        kub-master01:
        kub-node01:
    kube-node:
      hosts:
        kub-master01:
        kub-node01:
        kub-node02:
        kub-node03:
        kub-node04:
    etcd:
      hosts:
        kub-master01:
        kub-node01:
        kub-node02:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
root@kub-master01:~/kubespray/inventory/mycluster#

By default, Kubespray installs the K8s cluster with the Calico network plugin (a layer 3 network, as opposed to the layer 2 network provided by Flannel). If you do not want to use Calico, edit the file inventory/mycluster/group_vars/k8s-cluster.yml and update the kube_network_plugin entry.

# Choose network plugin (cilium, calico, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico
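For example, to switch the cluster to Flannel you could change that one value, either in an editor or with a quick sed one-liner like the sketch below (the file path follows the one mentioned above and may differ slightly between Kubespray versions):

root@kub-master01:~/kubespray# sed -i 's/^kube_network_plugin: calico/kube_network_plugin: flannel/' inventory/mycluster/group_vars/k8s-cluster.yml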

Install the Kubernetes cluster with the ansible-playbook command.

Note: The --become option is required; the command will fail without it.

root@kub-master01:~/kubespray# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
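Kubespray's playbooks are designed to be re-run, so if a task fails (for example, a transient package download error) you can usually execute the same command again. When troubleshooting, adding verbosity and capturing the output to a log file can help (the log file name below is arbitrary):

root@kub-master01:~/kubespray# ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml -v 2>&1 | tee cluster-install.log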

In my environment, the setup for five nodes took almost 10 minutes to complete. Here is an excerpt from the Ansible playbook output.

PLAY RECAP ***********************************************************************************************
kub-master01  : ok=519  changed=111  unreachable=0    failed=0    skipped=970  rescued=0    ignored=0
 kub-node01   : ok=593  changed=122  unreachable=0    failed=0    skipped=1115 rescued=0    ignored=1
 kub-node02   : ok=443  changed=95   unreachable=0    failed=0    skipped=649  rescued=0    ignored=0
 kub-node03   : ok=370  changed=79   unreachable=0    failed=0    skipped=595  rescued=0    ignored=0
 kub-node04   : ok=370  changed=79   unreachable=0    failed=0    skipped=595  rescued=0    ignored=0
 localhost    : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
 Thursday 11 March 2021  02:26:46 +0000 (0:00:00.071)       0:09:14.999 
 kubernetes/control-plane : kubeadm | Initialize first master ------------------ 59.73s
 kubernetes/kubeadm : Join to cluster ------------------------------------------ 32.25s
 kubernetes/control-plane : Joining control plane node to the cluster. --------- 24.13s
 container-engine/docker : ensure docker packages are installed ---------------- 23.46s
 reload etcd ------------------------------------------------------------------- 10.78s
 Gen_certs | Write etcd member and admin certs to other etcd nodes -------------- 7.98s
 Gen_certs | Write etcd member and admin certs to other etcd nodes -------------- 7.74s
 kubernetes/control-plane : Master | wait for kube-scheduler -------------------- 7.58s
 kubernetes/preinstall : Install packages requirements -------------------------- 6.63s
 Gen_certs | Write node certs to other etcd nodes ------------------------------- 5.78s
 wait for etcd up --------------------------------------------------------------- 5.74s
 kubernetes-apps/ansible : Kubernetes Apps | Start Resources -------------------- 5.60s
 Gen_certs | Write node certs to other etcd nodes ------------------------------- 5.59s
 Configure | Check if etcd cluster is healthy ----------------------------------- 5.32s
 kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS Template ---------- 5.31s
 container-engine/docker : ensure docker-ce repository is enabled --------------- 5.20s
 download_container | Download image if required -------------------------------- 5.00s
 kubernetes/preinstall : Update package management cache (APT) ------------------ 4.78s
 download_container | Download image if required -------------------------------- 4.50s
 download_container | Download image if required -------------------------------- 4.46s
 root@kub-master01:~/kubespray#

Run the following command to verify the cluster deployment.

root@kub-master01:~/kubespray# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
kub-master01   Ready    control-plane,master   11m   v1.20.4
kub-node01     Ready    control-plane,master   11m   v1.20.4
kub-node02     Ready    <none>                 10m   v1.20.4
kub-node03     Ready    <none>                 10m   v1.20.4
kub-node04     Ready    <none>                 10m   v1.20.4

As you can see, the control plane is set up on two nodes, kub-master01 and kub-node01, based on the kube-master group in the hosts.yaml inventory file.

If you want to see all the events logged by K8s, sorted by timestamp, you can run the following command.

kubectl get events --sort-by='.metadata.creationTimestamp' --all-namespaces
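It is also worth confirming that all the system pods (Calico, CoreDNS, kube-proxy, and so on) are in the Running state:

kubectl get pods --all-namespaces -o wide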

Other useful kubespray commands:

Scale the cluster

Add the new nodes to hosts.yaml and then run the following command.

# ansible-playbook -i inventory/mycluster/hosts.yaml --user=root scale.yml
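For instance, to add a hypothetical fifth worker (the host name kub-node05 and its IP address below are placeholders), you would add an entry like this to the hosts section of hosts.yaml, list the node under the kube-node group, and then run scale.yml:

    kub-node05:
      ansible_host: 10.21.214.75
      ip: 10.21.214.75
      access_ip: 10.21.214.75

Remember that the new node needs the same passwordless SSH access as the original nodes before Ansible can reach it.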

Removing a worker node

# ansible-playbook -i inventory/mycluster/hosts.yaml --user=root remove-node.yml -e "node=<node1>,<node2>"
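For example, to remove the fourth worker from this test cluster (using the node name from the inventory above):

# ansible-playbook -i inventory/mycluster/hosts.yaml --user=root remove-node.yml -e "node=kub-node04"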

Uninstall / Remove the cluster

(Use caution as this will remove the cluster)

# ansible-playbook -i inventory/mycluster/hosts.yaml --user=root reset.yml 

Accessing Kubernetes cluster from a workstation

If you would like to access the Kubernetes cluster with the kubectl command from your laptop or any other server, do the following (a consolidated sketch follows the list).

  1. Install kubectl tool on the workstation.
  2. Copy the admin.conf file from the Kubernetes master node /etc/kubernetes directory to the workstation.
  3. Set the environment variable to point to the absolute path of the admin.conf file as follows.
    export KUBECONFIG=/home/kub-ansible/admin.conf
  4. Get the Master server’s IP address using the kubectl command at the Master server.
    root@kub-master01:~# kubectl get nodes -o wide |grep master |awk ' { print $6 }'
    10.21.124.80
  5. Edit admin.conf and, in the following line, replace the localhost IP address (127.0.0.1) with the IP address of the master server.
    server: https://127.0.0.1:6443
  6. Execute any command like the following to get the list of all namespaces in your cluster.
    kubectl get namespaces
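Putting those steps together, the workflow from a Linux workstation might look like the following sketch (the /home/kub-ansible path and the master node's IP address are taken from the hosts.yaml inventory and examples above; substitute your own):

# on the workstation: copy the kubeconfig from the master node
scp root@kub-master01:/etc/kubernetes/admin.conf /home/kub-ansible/admin.conf
# point kubectl at the copied config
export KUBECONFIG=/home/kub-ansible/admin.conf
# replace the localhost endpoint with the master node's IP address
sed -i 's#https://127.0.0.1:6443#https://10.21.214.70:6443#' /home/kub-ansible/admin.conf
# verify access
kubectl get namespaces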

Now you are all set to work with your Kubernetes cluster from your workstation, laptop, or any other server that is not part of the cluster.
