Kubernetes Exercise

Opened: Monday, 14 October 2019, 23:18
Due: Tuesday, 15 October 2019, 11:15


  • Become familiar with Kubernetes and its command-line tool kubectl
  • Define, deploy, run and scale a web-application using the Kubernetes abstractions of Pods, Services and Deployments
  • Verify elasticity and resilience of a Kubernetes-managed application


In this lab you will perform a number of tasks and document your progress in a lab report. Each task specifies one or more deliverables to be produced. Collect all the deliverables in your lab report. Give the lab report a structure that mimics the structure of this document.

You will first gain access to a Kubernetes cluster we set up on the AWS public cloud through the kubectl command line tool. Then you will deploy a Redis key-value store service provided as example. The goal of the first part is to deploy a complete version of a three-tier application (Frontend + API Server + Redis) using Pods. The second part of the lab will require you to make the application resilient to failures. You will deploy multiple Pods with a Deployment for the Frontend and API services.

The following resources and tools are required for this laboratory session:

  • Any modern web browser
  • The Kubernetes command line tool kubectl (instructions to install below in Task 1)

Each student group has access to a Kubernetes cluster with a specific user and private token. For isolation each group is also required to use a specific namespace on the cluster.

  • To get access you need to download the cluster certificate ca.crt which you will find at the end of this document. You will also need a group token for your team. 

The code for this lab is provided in three files at the end of this document: api-pod.yaml, redis-pod.yaml and redis-svc.yaml.

  • Create a local directory as your working directory for this lab and download the three files.


The entire lab session takes 90 minutes.


In this task you will setup the environment and deploy an example application to the Kubernetes cluster.

The goal of the provided example application is to store and edit a To Do list. The application consists of a simple Web UI (Frontend) that uses a REST API (API service) to access a Key Value storage service (Redis). Ports to be used for Pods and Services are shown in the figure.

The required Docker images are provided on Docker Hub:

  • Frontend: icclabcna/ccp2-k8s-todo-frontend
  • API backend: icclabcna/ccp2-k8s-todo-api
  • Redis: redis:3.2.10-alpine


Use the commands below to download and install the kubectl client binary for your Operating System.


Download the binary using:

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.8/2019-08-14/bin/darwin/amd64/kubectl

When the download is finished run the following commands:

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl


Download the binary using:

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.8/2019-08-14/bin/linux/amd64/kubectl

When the download is finished run the following commands:

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl


Download the exe from the following URL and save it to your project directory:


(you may also copy it to a common location and add that location to your PATH variable).

Use kubectl.exe instead of just kubectl when executing a command.

Detailed info on the available kubectl commands, syntax and parameters can be found in the interactive command reference: https://kubernetes.io/docs/user-guide/kubectl/v1.10/ 

To setup the configuration of kubectl you need the provided cluster certificate (crt) and the group token file for your team. 

  • Download the ca.crt and your group-<GroupNo>.token file to the work directory.

To setup the connection and authenticate with the provided Kubernetes cluster execute the following commands. This will write information into your personal kubectl configuration file ~/.kube/config.

Create a new cluster entry named MSE-Cluster that points to the cluster in AWS (run this in the work directory containing ca.crt):

kubectl config set-cluster MSE-Cluster \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://30BC2FCFD5661FFFCA232E38EA659D68.sk1.eu-west-1.eks.amazonaws.com

Create a new user entry named group-<GroupNo> with an authentication token (replace <GroupNo> with the number of your group):

kubectl config set-credentials group-<GroupNo> \
--token=<paste content of token file of group account here>

Create a new context entry named k8s-lab that combines the cluster entry with the user entry and a namespace:

kubectl config set-context k8s-lab --cluster=MSE-Cluster \
--user=group-<GroupNo> --namespace=group-<GroupNo>-ns

Make the new context the current context:

kubectl config use-context k8s-lab

If the configuration is valid, running the command

kubectl version

should show the version of the client (kubectl) and of the server (cluster). Running

kubectl get all

should return the message

No resources found


Next you create the configuration and deploy the three tiers of the application to the Kubernetes cluster.


Use the following commands to deploy Redis using the provided configuration files:

kubectl create -f redis-svc.yaml

kubectl create -f redis-pod.yaml

and verify that it is up and running, and on which ports, using the command kubectl get all.

To zoom in on a Kubernetes object and see much more detail try kubectl describe po/redis for the Pod and kubectl describe svc/redis-svc for the Service.


Using the redis-svc.yaml file as an example, together with information from api-pod.yaml, create the api-svc.yaml configuration file for the API Service. The Service has to expose port 8081 and connect to the port of the API Pod.

Be careful with the indentation of the YAML files. If your code editor has a YAML mode, enable it.

  • Deploy and verify the API-Service and Pod (similar to the Redis ones) and verify that they are up and running on the correct ports.
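As a starting point, a minimal api-svc.yaml could look like the following sketch. The selector label and targetPort here are assumptions; check api-pod.yaml and use the label and container port it actually declares:

```yaml
# api-svc.yaml -- sketch only; verify names and labels against api-pod.yaml
apiVersion: v1
kind: Service
metadata:
  name: api-svc
spec:
  selector:
    app: api            # assumption: must match the label in api-pod.yaml
  ports:
    - port: 8081        # port the Service exposes inside the cluster
      targetPort: 8081  # assumption: the port the API container listens on
```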


Using the redis-svc.yaml file as an example, create the frontend-svc.yaml configuration file for the Frontend Service.

Note that unlike the Redis and API Services the Frontend needs to be accessible from outside the Kubernetes cluster as a regular web server on port 80.

This will trigger the creation of an Elastic Load Balancer (ELB) on AWS. This might take a few minutes; in the meantime, proceed to the next step and deploy the Pod.
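A Service becomes externally reachable by setting its type to LoadBalancer. The sketch below shows the structure; the name and selector label are assumptions to be matched against your frontend Pod definition:

```yaml
# frontend-svc.yaml -- sketch; name and selector label are assumptions
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  type: LoadBalancer    # exposes the Service outside the cluster (AWS ELB)
  selector:
    app: frontend       # assumption: must match the label of the frontend Pod
  ports:
    - port: 80          # external web-server port
      targetPort: 8080  # port the frontend container listens on
```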


Using the api-pod.yaml file as an example, create the frontend-pod.yaml configuration file that starts the UI Docker container in a Pod.

  • Docker image for frontend container on Docker Hub is icclabcna/ccp2-k8s-todo-frontend

Note that the container listens on port 8080.

It also needs to be initialized with the following environment variables (check how api-pod.yaml defines environment variables):

  • API_ENDPOINT_URL: URL where the API can be accessed e.g., http://localhost:9000
    • What value must be set for this URL?

Hint: remember that anything you define as a Service will be assigned a DOMAIN that is visible via DNS everywhere in the cluster and a PORT.

  • Deploy the Pod using kubectl.
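The Pod definition can be sketched as follows. The API_ENDPOINT_URL value is deliberately left as a placeholder; working out the correct value is part of the task (use the hint above about Service DNS names):

```yaml
# frontend-pod.yaml -- sketch; replace the placeholder URL yourself
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: icclabcna/ccp2-k8s-todo-frontend
      ports:
        - containerPort: 8080
      env:
        - name: API_ENDPOINT_URL
          value: "http://<api-service-name>:<api-service-port>"  # fill in
```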


Now you can verify if the ToDo application is working correctly.

  • Find out the public URL of the Frontend Service load balancer using kubectl describe.
  • Access the public URL of the Service with a browser. You should be able to access the complete application and create a new ToDo.


Several things can be misconfigured. Remember that there are two Service dependencies:

  • the Frontend forwarding requests to the (not externally accessible) API Service;
  • the API Service accessing the Redis Service (also only accessible from within the cluster).

You can look at the Pod logs to find out where the problem is. You can see the logs of a Pod using:

kubectl logs -f pod_name

You may want to test whether a Pod is responding correctly on its port. Normally, accessing a Pod from outside requires an exposed Service, but kubectl has support for temporary port-forwarding, which comes in very handy. When you run

kubectl port-forward pod_name local_port:pod_port

a secure tunnel is created from your local machine to the instance of the Pod running on one of the cluster nodes. The command blocks and keeps running to keep the tunnel open. Using another terminal window or your browser you can now access the Pod on http://localhost:local_port.

A handy way to debug is to login to a container and see whether the required services are addressable.

You can run a command on a (container in a) Pod using:

kubectl exec -it pod_name <command>

E.g. use kubectl exec -it pod_name bash to start a Bash shell inside the container.

Container images tend to contain only the strict minimum to run the application. If you are missing a tool you can install it (assuming a Debian-family distribution):

apt-get update
apt-get install curl


By now you should have understood the general principle of configuring, running and accessing applications in Kubernetes. However, the above application has no support for resilience: if a container (and thus its Pod) dies, the application stops working. Next, we add some resilience to the application.


In this task you will create Deployments that will spawn Replica Sets as health-management components.

Converting a Pod to be managed by a Deployment is quite simple.

  • Have a look at an example of a Deployment described here: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
  • Note that the linked doc is for Kubernetes 1.10, the version we are using. Your file needs to start with: apiVersion: apps/v1
  • Create Deployment versions of your application configurations (e.g. redis-deploy.yaml instead of redis-pod.yaml) and modify/extend them to contain the required Deployment parameters.
  • Again, be careful with the YAML indentation!
  • Make sure to have always 2 instances of the API and Frontend running.
  • Use only 1 instance for the Redis-Server. Why?
  • Delete all application Pods (using kubectl delete pod ..., see https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete) and replace them with deployment versions.
  • Verify that the application is still working and the Replica Sets are in place. (kubectl get all, kubectl get pods, kubectl describe ...)
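The Pod-to-Deployment conversion can be sketched as follows for the API tier. The labels and container port below are assumptions; carry them over from your api-pod.yaml, and note that the Deployment's matchLabels must match the Pod template's labels:

```yaml
# api-deploy.yaml -- sketch only; adapt labels and port from api-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deploy
spec:
  replicas: 2               # always keep 2 API instances running
  selector:
    matchLabels:
      app: api              # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: icclabcna/ccp2-k8s-todo-api
          ports:
            - containerPort: 8081   # assumption: API container port
```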


In this subtask you will intentionally kill (delete) Pods and verify that the application keeps working and the Replica Set is doing its task.

Hint: You can monitor the status of a resource by adding the --watch option to the get command. To watch a single resource:

kubectl get <resource-type>/<resource-name> --watch

To watch all resources of a certain type, for example all Pods:

kubectl get pods --watch

You may also use kubectl get all repeatedly to see a list of all resources. You should also verify if the application stays available by continuously reloading your browser window.

  • What happens if you delete a Frontend or API Pod? How long does it take for the system to react?
  • What happens when you delete the Redis Pod?
  • How can you change the number of instances temporarily to 3? Hint: look for scaling in the deployment documentation
  • What autoscaling features are available? Which metrics are used?
  • How can you update a component? (see update in the deployment documentation)


At the end of the lab session:

  • Delete all the Pods, Deployments and Services of your group.


Kubernetes documentation can be found on the following pages: