This is a series of blog posts and video tutorials on managing microservices with Kubernetes. I aim to cover the following topics, using a microservices application for illustration:
In this first blog of the series, let me make sure that we are all clear on what Docker and Kubernetes are. I am often asked the question: "What is the difference between Docker and Kubernetes?" In this blog, I will answer by showing you what Docker and Kubernetes can each do.
Watch the video
Here is an application, called the Voting App, that I will use to illustrate the difference between Docker and Kubernetes. The code is publicly available from Docker Inc. on GitHub: https://github.com/dockersamples/example-voting-app
In this tutorial, I am going to run the voting application using Docker and then Kubernetes, and explain the main differences between the two. Here is the architecture of the application. It is a microservices application consisting of 5 microservices:

- vote: a Python web front end where users cast their vote
- redis: a Redis queue that collects incoming votes
- worker: a .NET worker that consumes votes from Redis and stores them in the database
- db: a PostgreSQL database that persists the votes
- result: a Node.js web app that displays the voting results
In a nutshell, users vote at the voting-app endpoint and view the results at the result-app endpoint. The voting-app and result-app run on different ports.
Now that we understand how this application works, let us deploy the application using Docker and Kubernetes. I already have a Kubernetes cluster consisting of 1 master node and 4 worker nodes. I also have a single node dedicated to Docker.
First, I will SSH into the Docker node and bring up the microservices application with a Docker utility called docker-compose. In a nutshell, docker-compose lets you run a multi-tier application with a single command, docker-compose up. Before running the command, you need a docker-compose file that describes the microservices application. You can get more information about docker-compose here.
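To give a sense of what such a file contains, here is a simplified sketch of a docker-compose file for the voting app. This is illustrative only; the actual file in the repository is more detailed, and the image tags shown here are assumptions:

```yaml
version: "3"
services:
  vote:                # Python voting front end
    build: ./vote
    ports:
      - "5000:80"      # host port 5000 -> container port 80
  result:              # Node.js results front end
    build: ./result
    ports:
      - "5001:80"      # host port 5001 -> container port 80
  worker:              # .NET worker moving votes from Redis to Postgres
    build: ./worker
  redis:               # queue that collects incoming votes
    image: redis:alpine
  db:                  # persistent vote storage
    image: postgres:9.4
```

Each top-level entry under services becomes one container, and docker-compose up starts them all together on the same host.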
So, let’s create the application from the provided docker-compose file. To do that, clone the repository and run docker-compose up:
cloudexperts@dockerlab:~$ git clone https://github.com/dockersamples/example-voting-app
cloudexperts@dockerlab:~$ cd example-voting-app
cloudexperts@dockerlab:~$ docker-compose up
cloudexperts@dockerlab:~/example-voting-app/tmp/example-voting-app$ docker-compose ps
            Name                          Command               State                        Ports
-------------------------------------------------------------------------------------------------------------------
db                            docker-entrypoint.sh postgres    Up      5432/tcp
example-voting-app_result_1   docker-entrypoint.sh nodem ...   Up      0.0.0.0:5858->5858/tcp, 0.0.0.0:5001->80/tcp
example-voting-app_vote_1     python app.py                    Up      0.0.0.0:5000->80/tcp
example-voting-app_worker_1   /bin/sh -c dotnet src/Work ...   Up
redis                         docker-entrypoint.sh redis ...   Up      0.0.0.0:32770->6379/tcp
As you can see when I run docker-compose ps, several containers are running: the 5 microservices shown in the architecture diagram above.
To view this application from the browser, I need to know which port the voting-app microservice is running on. You can see from the docker-compose ps output above that example-voting-app_vote_1 is exposed on port 5000, while example-voting-app_result_1 is exposed on port 5001. In Docker terminology, these microservices are published on ports 5000 and 5001 respectively on the Docker host, even though each application listens on port 80 inside its container.
To access the voting application and the voting results from the browser, use the IP address of the Docker host where the containers are running, plus the port (remember to change the IP address to that of your Docker node):
http://192.168.0.13:5000
http://192.168.0.13:5001
In lab 2 below, we are going to see how Kubernetes manages the same application that we just deployed with docker-compose.
Let’s create the microservices from the provided manifest files. Here I will log into the master node of the Kubernetes cluster and create the voting app by running the commands below. On Kubernetes, YAML manifest files describe the deployment of an application, and the manifests for this application are provided in the k8s-specifications directory of the same GitHub repository.
cloudexperts@master1:~$ git clone https://github.com/dockersamples/example-voting-app
cloudexperts@master1:~$ cd example-voting-app/k8s-specifications
cloudexperts@master1:~/example-voting-app/k8s-specifications$ ls -al
total 44
drwxrwxr-x 2 cloudexperts cloudexperts 4096 Jun  4 19:39 .
drwxrwxr-x 8 cloudexperts cloudexperts 4096 Jun  8 14:52 ..
-rw-rw-r-- 1 cloudexperts cloudexperts  646 Jun  4 17:57 db-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  209 Jun  4 17:57 db-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  510 Jun  4 17:57 redis-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  221 Jun  4 17:57 redis-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  408 Jun  4 17:57 result-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  239 Jun  4 17:57 result-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  394 Jun  4 17:57 vote-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  234 Jun  4 18:06 vote-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  335 Jun  4 17:57 worker-deployment.yaml
cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   18d   v1.18.3
node01    Ready    <none>   18d   v1.18.3
node02    Ready    <none>   18d   v1.18.3
node03    Ready    <none>   18d   v1.18.3
node04    Ready    <none>   18d   v1.18.3
cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl create ns vote
cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl create -f . -n vote
deployment.apps/db created
service/db created
deployment.apps/redis created
service/redis created
deployment.apps/result created
service/result created
deployment.apps/vote created
service/vote created
deployment.apps/worker created
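To give a sense of what these manifests contain, here is a simplified sketch of a Deployment like vote-deployment.yaml. The field values and image name are illustrative assumptions, not copied from the repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote
spec:
  replicas: 2                  # run two copies of the vote pod
  selector:
    matchLabels:
      app: vote                # manage pods carrying this label
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
        - name: vote
          image: dockersamples/examplevotingapp_vote   # prebuilt vote image
          ports:
            - containerPort: 80                        # port the app listens on
```

A Deployment like this tells Kubernetes the desired state (two copies of the vote container), and Kubernetes continuously works to keep the cluster in that state.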
This creates all the Kubernetes objects needed to run the microservices. Let me mention here that Kubernetes has its own way of doing things, and it can often look more complicated than Docker. Here, four kinds of Kubernetes objects are created: Pods, ReplicaSets, Deployments, and Services. I will explain the functions of these objects in the follow-up tutorials. To access the application running on Kubernetes, you need the voting-app microservice and its port, which means checking the Service object. Before doing that, let me make sure that all the pods/containers are up and running.
cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl get pods -n vote
NAME                      READY   STATUS    RESTARTS   AGE
db-6789fcc76c-cm5c9       1/1     Running   0          5h35m
redis-554668f9bf-9wbfs    1/1     Running   0          5h35m
result-79bf6bc748-qtjlw   1/1     Running   19         5h35m
vote-7478984bfb-pq46s     1/1     Running   0          5h35m
vote-7478984bfb-vrbkr     1/1     Running   0          5h26m
worker-dd46d7584-7btn7    1/1     Running   0          5h35m
cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl get svc -n vote
NAME     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
db       ClusterIP   10.107.212.13    <none>        5432/TCP         5h36m
redis    ClusterIP   10.99.135.125    <none>        6379/TCP         5h36m
result   NodePort    10.105.160.130   <none>        5001:31001/TCP   5h36m
vote     NodePort    10.108.79.199    <none>        5000:31000/TCP   5h36m
Here I run the command to view the vote service in the vote namespace. There are different types of Service objects; this one is a NodePort. To access an application through a NodePort, I need the IP address of any of the cluster nodes and the second port number shown in the PORT(S) column (NodePorts are always allocated between 30000 and 32767). For the vote service, the first port, 5000, is the ClusterIP port, while the second port, 31000, is the NodePort.
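As a sketch, a NodePort Service like vote-service.yaml ties these ports together roughly as follows. The values match the kubectl output above, but the exact file in the repository may differ slightly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vote
spec:
  type: NodePort
  selector:
    app: vote            # route traffic to pods labeled app=vote
  ports:
    - port: 5000         # ClusterIP port, reachable inside the cluster
      targetPort: 80     # port the container actually listens on
      nodePort: 31000    # port opened on every node (must be 30000-32767)
```

Because the nodePort is opened on every node in the cluster, you can reach the service through any node's IP address, regardless of which node the pod is actually running on.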
Here I can access the voting app on port 31000 and the results on port 31001. You can check this blog to understand Kubernetes Service objects and their different types. To access the voting and result microservices from the browser, use the IP address of any node in the Kubernetes cluster and the corresponding ports (remember to change the IP address to one of the IP addresses of your Kubernetes cluster):
http://192.168.0.7:31000
http://192.168.0.7:31001
Now that we have seen the same application deployed on both Docker and Kubernetes, let me answer the question that we started with.
Both the Docker runtime and the Kubernetes orchestrator help us deploy and run a microservices application. You can also run your application in a VM or even on a physical bare-metal machine, but compared to those options, Docker trumps them all because Docker is:

- lightweight: containers share the host kernel instead of each carrying a full operating system
- fast: containers start in seconds rather than the minutes a VM can take to boot
- portable: an image built once runs the same way on any host with a container runtime
- efficient: far more containers than VMs fit on the same hardware
Rarely do you get something faster for cheaper, but this is what Docker provides. Docker enables you to build, ship, and run your application: you package the application into a container image along with all the required libraries (build), upload the image to a registry (ship), and then run it as a container (run), as I just did.
Great! Now that we know why we use Docker and Kubernetes, let us look at some of the differences between them.
The voting app may run fine on a single node, but imagine that you want to expose it to an entire country of several million people. To handle that many users, you need to scale the application, and ideally to scale it dynamically as the number of people accessing the website grows. As good as Docker is, it operates on a single node; once you want to run your application across more than one node and scale it, Docker alone can no longer help, because serving that many users requires more resources than a single node can provide. For more than one node, you generally need an orchestrator, and this is where Kubernetes comes in. Kubernetes is an orchestrator (cluster manager) that helps with several things, such as:

- scheduling containers across all the nodes in the cluster
- scaling the number of replicas up and down, manually or automatically
- self-healing: restarting or replacing containers that fail
- load balancing and service discovery between microservices
- rolling updates and rollbacks without downtime
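For example, scaling with Kubernetes is a one-liner. This is an illustrative sketch only: it requires a running cluster, and the replica counts and autoscaling thresholds are hypothetical values I chose for the example:

```shell
# Scale the vote deployment from 2 to 5 replicas by hand
kubectl scale deployment vote --replicas=5 -n vote

# Or let Kubernetes add and remove replicas based on CPU usage
kubectl autoscale deployment vote --min=2 --max=10 --cpu-percent=80 -n vote
```

Kubernetes then schedules the new replicas onto whichever nodes have capacity, something Docker on a single node cannot do.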
So, to clarify: what is the difference between Docker and Kubernetes? Docker is a container runtime, while Kubernetes is a cluster manager that handles the microservice requirements listed above. Docker mostly handles building and shipping images and running a microservice on an individual node, while Kubernetes runs the entire set of microservices across all the nodes in a production environment. When running Kubernetes, you still need a container engine such as Docker; in fact, Kubernetes is useless without one.
The workflow to create and run a microservice typically looks like this: build a container image with Docker, push the image to a registry, write Kubernetes manifests that reference the image, and apply those manifests to the cluster.
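A minimal sketch of this build-ship-run workflow, assuming a hypothetical registry name myregistry and a running cluster:

```shell
# Build: package the app and its libraries into an image
docker build -t myregistry/vote:v1 ./vote

# Ship: upload the image to a registry
docker push myregistry/vote:v1

# Run: have Kubernetes pull the image and run it across the cluster
kubectl apply -f vote-deployment.yaml -n vote
kubectl apply -f vote-service.yaml -n vote
```

Docker handles the first two steps; Kubernetes takes over for the last one.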
This is how I help my students understand the differences between Docker and Kubernetes when teaching microservices. Here, I have not only explained the difference but also shown you how to run a microservices application on both platforms. Used properly, Docker and Kubernetes are as powerful a combination as IT infrastructure management has ever seen. Stay tuned for the rest of the series, and thank you for reading/watching.