AWS Hands-On Lab Launching EKS Cluster for Cloud and DevOps Engineers

AWS Learning Path for Cloud and DevOps Engineers

📝Introduction

This post walks through working with the AWS Command Line Interface (CLI) and console, using the command-line utilities eksctl and kubectl to launch an EKS cluster, provision a Kubernetes deployment and pods running Nginx, and create a LoadBalancer service to expose your application over the internet.

📝Log in to the AWS Management Console

Log in with your credentials and make sure you're using the right Region. In my case, I chose us-east-1.

Note: We will need to use the files from the Launching EKS Cluster repository for this lab.

📝Create IAM User

  1. Navigate to IAM > Users and click Add Users.

  2. In the User name field, enter k8-admin and click Next.

  3. Select Attach policies directly, then select the AdministratorAccess policy.

  4. Click Next, and click Create user.

  5. Select the newly created user k8-admin.

  6. Select the Security credentials tab.

  7. Scroll down to Access keys and select Create access key.

  8. Select Command Line Interface (CLI) and check the acknowledgement box at the bottom of the page.

  9. Click Next, and click Create access key.

  10. Either copy both the access key and the secret access key and paste them into a local text file or click Download .csv file. We will use the credentials when setting up the AWS CLI.

  11. Click Done.
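
If you prefer scripting, the same user can be created with the AWS CLI from any machine where it's already configured. A minimal sketch, assuming the user name k8-admin from the console steps above:

    # Create the user, attach the AdministratorAccess managed policy, and issue keys
    aws iam create-user --user-name k8-admin
    aws iam attach-user-policy --user-name k8-admin \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
    aws iam create-access-key --user-name k8-admin

The create-access-key output includes the AccessKeyId and SecretAccessKey; record them just as you would the console-generated pair.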

📝Create EC2 Instance and Configure AWS CLI

  1. Navigate to EC2 > Instances and click Launch Instance.

  2. At the Amazon Machine Image (AMI) dropdown, select the Amazon Linux 2 AMI.

  3. Leave t2.micro selected under Instance type.

  4. In the Key pair (login) box, select Create new key pair.

  5. Give it a Key pair name of <your-keypair-name>.

  6. Click Create new key pair. This will download the key pair for later use.

  7. Expand Network settings and click on Edit.

  8. In the Network settings box:

    • Network: Leave as default.

    • Subnet: Leave as default.

    • Auto-assign Public IP: Select Enable.

  9. Click Launch instance.

  10. Click on the instance ID link (which looks like i-xxxxxxxxx), and give the new instance a few minutes to enter the running state.

  11. Once the instance is fully created, check the checkbox next to it and click Connect at the top of the window.

  12. In the Connect to your instance dialog, select EC2 Instance Connect (browser-based SSH connection).

    Note: I preferred to use VS Code to connect to the EC2 instance.

  13. Click Connect.

  14. In the command line window, check the AWS CLI version: aws --version

    It should be an older version.

  15. Download v2:

    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

  16. Unzip the file: unzip awscliv2.zip

  17. See where the current AWS CLI is installed: which aws

    It should be /usr/bin/aws.

  18. Update it: sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update

  19. Check the version of AWS CLI: aws --version

    It should now be updated.

  20. Configure the CLI: aws configure

  21. For AWS Access Key ID, paste in the access key ID you copied earlier.

  22. For AWS Secret Access Key, paste in the secret access key you copied earlier.

  23. For Default region name, enter <your-region>.

  24. For Default output format, enter json.

  25. Download kubectl:

    curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/linux/amd64/kubectl

  26. Apply execute permissions to the binary: chmod +x ./kubectl

  27. Copy the binary to a directory in your path:

    mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin

  28. Ensure kubectl is installed: kubectl version --short --client

  29. Download eksctl:

    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

  30. Move the extracted binary to /usr/bin: sudo mv /tmp/eksctl /usr/bin

  31. Get the version of eksctl: eksctl version

  32. See the options with eksctl: eksctl help
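
Before moving on, it's worth a quick sanity check that the CLI, kubectl, and eksctl are all wired up. A minimal verification sketch using standard commands:

    # Confirm the credentials configured with `aws configure` are valid
    aws sts get-caller-identity
    # Confirm kubectl and eksctl are on the PATH
    kubectl version --client
    eksctl version

The first command should return your account ID and the ARN of the k8-admin user; if it errors, re-run aws configure before provisioning anything.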

📝Provision an EKS Cluster

  1. Provision an EKS cluster in us-east-1 with a managed node group of two t3.medium worker nodes, scaling between one and three (a declarative config-file equivalent is sketched after this list):

    eksctl create cluster --name dev --region us-east-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 3 --managed

    If your EKS resources can't be deployed due to AWS capacity issues, delete your eksctl-dev-cluster CloudFormation stack and retry the command using the --zones parameter and suggested availability zones from the CREATE_FAILED message.

    It will take 10–15 minutes since it's provisioning the control plane and worker nodes, attaching the worker nodes to the control plane, and creating the VPC, security group, and Auto Scaling group.

  2. In the AWS Management Console, navigate to CloudFormation and take a look at what’s going on there.

  3. Select the eksctl-dev-cluster stack (this is our control plane).

  4. Click Events, so you can see all the resources that are being created.

  5. We should then see another new stack being created — this one is our node group.

  6. Once both stacks are complete, navigate to Elastic Kubernetes Service > Clusters.

  7. Click the listed cluster.

  8. If you see a Your current user or role does not have access to Kubernetes objects on this EKS cluster message, just ignore it, as it won't impact the next steps of the activity.

  9. Click the Compute tab (under Configuration), and then click the listed node group. There, we'll see the Kubernetes version, instance type, status, etc.

  10. Click dev in the breadcrumb navigation link at the top of the screen.

  11. Click the Networking tab (under Configuration), where we'll see the VPC, subnets, etc.

  12. Click the Logging tab (under Configuration), where we'll see the control plane logging info.

  13. Navigate to EC2 > Instances, where you should see the instances have been launched.

  14. Close out of the existing CLI window, if you still have it open.

  15. Select the original t2.micro instance, and click Connect at the top of the window.

  16. In the Connect to your instance dialog, select EC2 Instance Connect (browser-based SSH connection).

    Note: I preferred to use VS Code to connect to the EC2 instance.

  17. Click Connect.

  18. In the CLI, check the cluster: eksctl get cluster

  19. Update your kubeconfig so kubectl can connect to the cluster: aws eks update-kubeconfig --name dev --region <your-region>
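
As referenced above, eksctl also accepts a declarative config file instead of command-line flags, which is handy for version control. A sketch of an equivalent cluster definition, assuming a file name of cluster.yaml (my choice, not part of the lab files):

    # cluster.yaml -- mirrors the flags passed to eksctl create cluster above
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: dev
      region: us-east-1
    managedNodeGroups:
      - name: standard-workers
        instanceType: t3.medium
        desiredCapacity: 2
        minSize: 1
        maxSize: 3

With this file, eksctl create cluster -f cluster.yaml creates the same cluster, and eksctl delete cluster -f cluster.yaml tears it down.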

📝Create a Deployment on Your EKS Cluster

  1. Install Git: sudo yum install -y git

  2. Download the course files:

    git clone https://github.com/fjblsouza/AWS-HandsOn-Labs.git

  3. Change directory: cd Lauching_EKS_Cluster

  4. Take a look at the deployment file: cat nginx-deployment.yaml

  5. Take a look at the service file: cat svc-nginx.yaml (a sketch of the typical shape of both manifests appears after this list)

  6. Create the service: kubectl apply -f ./svc-nginx.yaml

  7. Check its status: kubectl get service

    Copy the external DNS hostname of the load balancer, and paste it into a text file, as we'll need it in a minute.

  8. Create the deployment: kubectl apply -f ./nginx-deployment.yaml

  9. Check its status: kubectl get deployment

  10. View the pods: kubectl get pod

  11. View the ReplicaSets: kubectl get rs

  12. View the nodes: kubectl get node

  13. Access the application using the load balancer, replacing <LOAD_BALANCER_DNS_HOSTNAME> with the DNS hostname you copied earlier (it might take a couple of minutes to update): curl "<LOAD_BALANCER_DNS_HOSTNAME>"

    The output should be the HTML for a default Nginx web page.

  14. In a new browser tab, navigate to the same hostname, where we should then see the same Nginx web page.
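
For reference, the two manifests follow the standard Kubernetes Deployment-plus-Service pattern. Below is a minimal sketch of what they might contain; the actual files in the repository may differ in names, labels, replica counts, and image tags:

    # Hypothetical nginx-deployment.yaml: Nginx replicas behind an app=nginx label
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:latest
              ports:
                - containerPort: 80
    ---
    # Hypothetical svc-nginx.yaml: a LoadBalancer Service that provisions an AWS
    # load balancer and forwards port 80 to pods matching app=nginx
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80

The Service's selector must match the pod labels in the Deployment template; that label match, not the file order, is what ties the load balancer to the Nginx pods.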

📝Test the High Availability Features of Your EKS Cluster

  1. In the AWS console, on the EC2 Instances page, select the worker node instances, and click Instance state.

  2. Select Stop instance, then click Stop in the dialog box.

  3. After a few minutes, we should see EKS launching new instances to keep our service running.

  4. In the CLI, check the status of our nodes: kubectl get node

    All the nodes should be down (i.e., display a NotReady status). A watch variant that streams these status changes is sketched after this list.

  5. Check the pods: kubectl get pod

    We'll see a few different statuses — Terminating, Running, and Pending — because, as the instances shut down, EKS is trying to restart the pods.

  6. Check the nodes again: kubectl get node

    We should see a new node, which we can identify by its age.

  7. Wait a few minutes, and then check the nodes again: kubectl get node

    We should have one in a Ready state.

  8. Check the pods again: kubectl get pod

    We should see a couple of pods are now running as well.

  9. Check the service status: kubectl get svc

  10. Copy the external DNS Hostname listed in the output.

  11. Access the application using the load balancer, replacing <LOAD_BALANCER_DNS_HOSTNAME> with the DNS hostname you just copied:

    curl "<LOAD_BALANCER_DNS_HOSTNAME>"

    We should see the Nginx web page HTML again. (If you don't, wait a few more minutes.)

  12. In a new browser tab, navigate to the same hostname, where we should again see the Nginx web page.

  13. In the CLI, delete everything: eksctl delete cluster dev --region <your-region>
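
As promised above, if you re-run this failure test, two stock kubectl options make the recovery easier to follow than re-running kubectl get node by hand:

    # Stream node status changes (NotReady -> Ready) as they happen; Ctrl+C to stop
    kubectl get node --watch
    # Show which node each replacement pod was scheduled onto
    kubectl get pod -o wide

Both options are built into kubectl, so no extra setup is needed.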

📝Video Step-by-Step of Hands-On Lab

Congratulations — you have completed this hands-on lab, covering a basic introduction to launching an EKS cluster, provisioning a Kubernetes deployment and pods running Nginx, and creating a highly available LoadBalancer service to expose your application over the internet.

Thank you for reading. I hope you were able to understand and learn something helpful from my blog.

Please follow me on CloudDevOpsToLearn and LinkedIn franciscojblsouza