K8s Hands-On Lab -> First Steps
Kubernetes Learning Path for Cloud and DevOps Engineers
Table of contents
- 📝Introduction
- 📝Log in to the AWS Management Console
- 📝GitHub repository
- 📝Prerequisites to Installing Kubeadm
- Disabling swap
- Loading the required kernel modules
- Configuring system parameters
- 📝 Installing Kubernetes packages
- 📝Installing containerd
- 📝Configuring containerd
- 📝Enabling the kubelet service
- 📝Initializing the cluster
- 📝Adding the other node to the cluster
- 📝Installing Weave Net
- 📝Deploy to test Pods communication
📝Introduction
This post walks through the first steps of configuring and using Kubernetes.
For this lab, I created two Linux VMs in AWS, one as the Control Plane Node and the other as a Worker Node, and allowed the default ports required by Kubernetes in a Security Group, following the instructions from the GitHub repo shared below.
📝Log in to the AWS Management Console
Using your credentials, make sure you're using the right Region. In my case, I chose us-east-1.
Note: You must create the AWS Access Key and AWS Secret Access Key and configure the AWS CLI in the terminal to use it.
You can use link1 and link2 for it.
📝GitHub repository
The files used in this lab are available in the GitHub repo on this link.
📝Prerequisites to Installing Kubeadm
Disable swap on the system. This is required because Kubernetes doesn't work well with swap enabled:
sudo swapoff -a
Note 1: If you are using AWS to create your resources, you usually don't need to worry about this step, since the standard EC2 AMIs come with swap disabled.
Note 2: The steps from Prerequisites up to and including Enabling the kubelet service must be applied to both VMs (Control Plane Node and Worker Node).
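Keep in mind that `swapoff -a` only disables swap until the next reboot. A minimal sketch of making the change persistent, assuming swap is configured through /etc/fstab on your distribution:

```shell
# swapoff -a lasts only until reboot; comment out any swap entries in
# /etc/fstab so swap stays off after a restart (assumes swap is set up there):
sudo sed -i '/ swap / s/^\([^#]\)/#\1/' /etc/fstab
```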
Load the kernel modules required for Kubernetes to work:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
Configure some system parameters. This will ensure our cluster works correctly:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
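A quick sanity check after running `sudo sysctl --system`, to confirm the modules loaded and the values from the file above took effect:

```shell
# Verify the two kernel modules are loaded:
lsmod | grep -E 'overlay|br_netfilter'
# Verify the sysctl values are active (all three should print 1):
sysctl -n net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```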
📝 Installing Kubernetes packages
Install the Kubernetes packages:
Note: the legacy packages.cloud.google.com apt repository has been deprecated and frozen, so the commands below use the community-owned pkgs.k8s.io repository instead (v1.30 is used here as an example; pick the minor version you want):
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
📝Installing containerd
Install containerd, which is essential for our Kubernetes environment:
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update && sudo apt-get install -y containerd.io
📝Configuring containerd
Configure containerd so that it works properly with our cluster:
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl status containerd
📝Enabling the kubelet service
Enable the kubelet service so that it starts automatically with the system:
sudo systemctl enable --now kubelet
📝Initializing the cluster
Starting the cluster:
sudo kubeadm init --pod-network-cidr=10.10.0.0/16 --apiserver-advertise-address=<CONTROL PLANE IP TO COMMUNICATE TO THE NODES>
If the step above executed successfully, a message will be displayed stating that the cluster has been initialized.
You will see a list of commands to configure cluster access with kubectl. Copy and paste this command into your terminal:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This configuration is required so that kubectl can communicate with the cluster. The admin.conf file is owned by root, so after copying it into the user's .kube directory we run sudo chown $(id -u):$(id -g) $HOME/.kube/config to hand ownership of the copy to the user running the command.
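As an aside, the kubeadm output also offers an alternative for the root user: instead of copying the file, point kubectl at the admin kubeconfig directly.

```shell
# Alternative for the root user: reference admin.conf in place instead of
# copying it (this only lasts for the current shell session):
export KUBECONFIG=/etc/kubernetes/admin.conf
```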
To check if the Control Plane Node was created, run one of the following commands:
kubectl get no #Or
kubectl get nodes
📝Adding the other node to the cluster
To do this, we will again use the kubeadm command, but this time we run it directly on the node we want to add to the cluster, the Worker Node, rather than on the Control Plane Node.
When we initialized our cluster, kubeadm printed the command we need to run on new nodes so that they can be added to the cluster as workers.
sudo kubeadm join 172.31.57.89:6443 --token if9hn9.xhxo6s89byj9rsmd \
--discovery-token-ca-cert-hash sha256:ad583497a4171d1fc7d21e2ca2ea7b32bdc8450a1a4ca4cfa2022748a99fa477
On the Worker Node, run the command above using the token and hash generated during the initialization of the cluster on the Control Plane Node.
To check if the Worker Node was also added, run one of the following commands from the Control Plane Node:
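If you no longer have the join command at hand (the bootstrap token expires after 24 hours by default), you can print a fresh one on the Control Plane Node:

```shell
# Run on the Control Plane Node: creates a new token and prints a
# ready-to-use kubeadm join command for the workers:
sudo kubeadm token create --print-join-command
```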
kubectl get no #Or
kubectl get nodes
The two nodes are now part of the cluster, but they still show the status NotReady, since we have not yet installed a network plugin to make communication between the Pods possible. Let's do that now 😊.
📝Installing Weave Net
Install Weave Net by running the following command on the Control Plane Node:
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
Wait a few minutes until all cluster components are up and running. You can check the status of the cluster components with the following command on the Control Plane Node:
kubectl get pods -n kube-system #Or
kubectl get pods -A
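Instead of polling manually, you can also let kubectl block until the nodes report Ready (a sketch; the 5-minute timeout is an arbitrary choice):

```shell
# Wait until every node in the cluster reports the Ready condition,
# failing after 5 minutes if they don't:
kubectl wait --for=condition=Ready node --all --timeout=300s
```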
To check if the status of the 2 nodes is Ready, run one of the following commands on the Control Plane Node:
kubectl get no #Or
kubectl get nodes
Now that we have our cluster initialized and Weave Net installed, let's create a Deployment to test communication between Pods.
📝Deploy to test Pods communication
The command to create the Deployment must be run from the Control Plane Node:
kubectl create deployment nginx --image=nginx --replicas 3
kubectl get pods -o wide
root@ip-172-31-53-44:~# kubectl create deployment nginx --image=nginx --replicas 3
deployment.apps/nginx created
root@ip-172-31-53-44:~#
root@ip-172-31-53-44:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-g57gn 1/1 Running 0 7s 10.32.0.2 ip-172-31-56-3 <none> <none>
nginx-7854ff8877-lfth7 1/1 Running 0 7s 10.32.0.3 ip-172-31-56-3 <none> <none>
nginx-7854ff8877-vnz7b 1/1 Running 0 7s 10.32.0.4 ip-172-31-56-3 <none> <none>
root@ip-172-31-53-44:~#
From the Worker Node, we can check whether any of the NGINX Pods is serving the default HTML page:
curl http://<IP_of_any_NGINX_Pods>
Congratulations, you have completed this hands-on lab covering the basics of installing Kubernetes and initializing a cluster, getting it up and running with Pods scheduled on its nodes.
Thank you for reading. I hope you understood and learned something helpful from my blog.
Please follow me on CloudDevOpsToLearn and LinkedIn franciscojblsouza