Let’s start by defining Kubernetes as the open-source platform for automating the deployment, scaling, and management of application containers. The emphasis here is on application containers: without containers, Kubernetes has nothing to manage.
So what are application containers, and why are they relevant? Docker, Inc. popularized containers with the introduction of Docker images. An image is a self-contained template holding an application and all the dependencies required to run it: binary or source files, configuration files, shared libraries, environment variables, and anything else the application might need. When a Docker runtime engine instantiates an image, it creates a running copy of that image called a Docker container. A running container behaves as if it were the only process on the entire system, which is good because it cannot conflict with any other running process.
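To make the idea of an image concrete, here is a minimal sketch of a Dockerfile, the template from which an image is built. The application name, base image, and files are illustrative, not taken from the text above.

```dockerfile
# Start from a base image that already contains the language runtime.
FROM python:3.12-slim

# Copy the application and its dependency list into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Configuration can be baked in as environment variables.
ENV APP_ENV=production

# The command the container runs when instantiated from this image.
CMD ["python", "app.py"]
```

Building this file (for example with `docker build`) produces an image; running that image instantiates a container with everything the application needs already inside.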
Containers are so relevant that they changed the way IT looks at both software development and operations management. They matter particularly to software developers, who can now do things that were difficult to do before containers, such as:
Containers and hypervisors are conceptually similar, but there are two major differences, illustrated in the diagrams below.
Fig 1: Hypervisor manager running 3 virtual machines with applications
Fig 2: Container manager running 3 containerized applications
The overall goal in both cases is the isolation of processes, but the container manager achieves it more efficiently. Not only do you get a better return on investment, since you can pack more containers than virtual machines onto equivalent hardware, but containers also start much faster: typically in milliseconds to seconds, versus minutes for a virtual machine.
Both virtual machines and containers use images, but container images are lightweight and far more portable between computing environments than virtual machine images. So the new way of developing an application is to package the code along with its dependencies into a Docker image, push the image to a registry, and use it to create containers. Putting the image in a central Docker repository also makes it easy to share between teams.
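As an illustrative transcript of that workflow (the image and registry names are made up, and the commands assume a running Docker daemon and registry credentials), the build, push, and run cycle looks like:

```
# Build an image from the Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Tag it for a registry and push it so other teams and clusters can pull it.
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0

# Any machine with access to the registry can now instantiate a container.
docker run --rm registry.example.com/myapp:1.0
```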
So containers are revolutionary and very helpful to developers, but the next question is how to manage large clusters of containers. On a single node, containers are great and don’t require much management, but with the introduction of containers, developers have also found a new and better way of building cloud applications. Cloud applications, or cloud-native applications, are designed in the form of microservices, in contrast to traditional monolithic applications. To develop a cloud-native microservice application, developers split the work into various functions and develop each function as a standalone application that communicates with the other standalone applications through API calls. Each standalone application, called a microservice, is then containerized and deployed on the cloud platform.
i.e. one microservice = one containerized application
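A minimal sketch of one such microservice, assuming Python and its standard library only: a standalone app that exposes a small HTTP API that other services can call. The service name, port, and endpoint are illustrative, not from the original text.

```python
# One microservice = one small standalone app with an HTTP API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to GET /greet with a small JSON payload.
        if self.path == "/greet":
            body = json.dumps({"message": "hello from the greeting service"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def start_service(port=0):
    # port=0 lets the OS pick a free port; the caller can read the real
    # port from server.server_address and shut the server down later.
    server = HTTPServer(("127.0.0.1", port), GreetingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_service()
    port = server.server_address[1]
    # Another microservice would call this endpoint over HTTP:
    with urlopen(f"http://127.0.0.1:{port}/greet") as resp:
        print(json.loads(resp.read())["message"])
    server.shutdown()
```

In a real deployment, this app would be packaged into its own container image, and its peers would reach it by service name rather than localhost.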
A typical cloud-native application consists of hundreds, if not thousands, of running containerized microservice apps. In such a dynamic environment, we need a system in charge of coordinating the services and managing the overall health of the application. At a minimum, we need that system to:
- schedule containers onto the machines in the cluster
- restart or replace containers that fail
- scale the number of container replicas up or down with demand
- balance traffic across replicas and let services discover one another
- roll out updates, and roll them back, without downtime
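As a concrete sketch of what asking for this kind of management looks like in practice, here is a hypothetical Kubernetes Deployment manifest declaring a desired replica count, a health check, and a rolling-update policy; the names and image are made up for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting-service          # hypothetical microservice name
spec:
  replicas: 3                     # keep three copies running at all times
  selector:
    matchLabels:
      app: greeting-service
  strategy:
    type: RollingUpdate           # replace pods gradually during updates
  template:
    metadata:
      labels:
        app: greeting-service
    spec:
      containers:
        - name: greeting-service
          image: registry.example.com/greeting-service:1.0  # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:          # restart the container if this check fails
            httpGet:
              path: /greet
              port: 8080
```

The key idea is declarative: you state the desired state, and Kubernetes continuously works to make the cluster match it.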
The helmsman in charge of all these activities is called Kubernetes (the name is Greek for helmsman): the captain of the ship. Needless to say, without this container helmsman, containers would not occupy the prominent status they enjoy today, since they would be very difficult to manage at scale. We can illustrate this with the ship on the sea below.
Here, if each container represents a microservice application, Kubernetes is the captain of the ship, and we cannot overemphasize the importance of a competent captain. At the same time, the comparison is an oversimplification, since in reality Kubernetes, as noted earlier, also coordinates many activities between the microservices/containers.
Now that we understand the critical role of Kubernetes in containerized applications, let’s mention two other features that make Kubernetes the darling of the cloud it is today: portability and extensibility.
A major advantage of containers and Kubernetes, and this is a big one, is that they are abstracted from the underlying infrastructure, so any cloud-native containerized app can be deployed and managed the same way in on-premise environments and on any public cloud provider, even concurrently. Running across several providers at once is commonly referred to as multi-cloud.
Kubernetes is also designed to be extensible: you can add features to suit your cloud applications, either by developing them yourself or by obtaining a plugin from the community.
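One common extension mechanism is the CustomResourceDefinition (CRD), which teaches the Kubernetes API about a new resource type without changing Kubernetes itself. A minimal sketch, with a made-up group and resource name:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>.
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g. a cron-style expression
```

Once such a definition is installed, users can create `Backup` objects with the same tooling they use for built-in resources, and a custom controller can act on them.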
In conclusion, Kubernetes is one of the most relevant open-source projects today, and it is arguably the glue that ties all the cloud providers together. With the size and momentum of the community behind it, and with the new focus on cloud-native applications, it looks like the journey has just begun. It will be interesting to see how Docker and Kubernetes continue to change the cloud computing landscape.