Kubernetes (aka K8s) is open-source software for orchestrating the deployment and management of containers. It has quickly grown to become the de facto container orchestration framework, and its relevance seems to have only just begun and will continue to grow. Understanding the architecture of Kubernetes is the first step to getting started with the framework. This tutorial will guide you through the architecture of Kubernetes.
Kubernetes' architecture is broadly divided into master and worker roles, and each role has different functions to play. The master node is the control plane for the entire cluster: it exposes the application programming interface (API), schedules containers, runs the controllers, and manages the desired state of the cluster. Nodes are the workhorses of a Kubernetes cluster. They expose compute, networking, and storage resources to applications. Each node runs a container runtime (such as Docker or rkt), an agent that communicates with the master, additional components for logging, monitoring, and service discovery, and optional add-ons. Nodes can be virtual machines (VMs) running in a cloud or bare-metal servers running within the data center.
Kubernetes is used for managing cloud-native microservices applications as well as for CI/CD, and hence its features are designed to handle the requirements of application development. The most common features of Kubernetes are:
- Service discovery
- Replication of services
- Load balancing
- Rolling updates
- Logging across services
- Monitoring/health checks
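Several of these features come together in a single Deployment manifest. The sketch below (names, labels, and the image are illustrative assumptions, not from any particular cluster) shows replication, rolling updates, and health checks declared side by side:

```yaml
# Minimal Deployment sketch; "web-app" and the nginx image are hypothetical choices
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3              # replication of the service: keep 3 pods running
  strategy:
    type: RollingUpdate    # rolling updates: replace pods gradually on change
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        livenessProbe:     # monitoring/health checks: restart the container if this fails
          httpGet:
            path: /
            port: 80
```

Applied with `kubectl apply -f deployment.yaml`, the controllers on the master continuously reconcile the cluster toward this declared state.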
A high-level picture of the architecture (courtesy of the Kubernetes project) is shown below:
Description of the components
The Master Node
The master node is usually set up for high availability (HA), providing failover to reduce downtime.
- kubectl is used to interact with the cluster. Apart from using the cluster through the Kubernetes dashboard or the API, kubectl is the main utility for interacting with Kubernetes from the command line.
- API Server: provides the RESTful Kubernetes API for managing cluster configuration, backed by the etcd datastore.
- etcd: is a key-value store designed for strong consistency and high-availability. Kubernetes uses etcd to reliably store master state and configuration. The various master components ‘watch’ this data and act accordingly – for example, starting a new container to maintain a desired number of replicas.
- Scheduler: places unscheduled pods on nodes according to rules (e.g. labels). At this stage, the scheduler is simple, but it is, like most components in Kubernetes, pluggable.
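One simple way to see the scheduler's label rules in action is a `nodeSelector`. In this sketch (the pod name, image, and the `disktype: ssd` label are assumptions for illustration), the scheduler will only place the pod on nodes carrying the matching label:

```yaml
# Pod sketch: the scheduler only considers nodes labeled disktype=ssd (hypothetical label)
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-task
spec:
  nodeSelector:
    disktype: ssd          # constraint evaluated by the scheduler
  containers:
  - name: main
    image: busybox:1.36
    command: ["sleep", "3600"]
```

A node would be given that label with something like `kubectl label nodes <node-name> disktype=ssd`; if no node matches, the pod stays unscheduled (Pending).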
- Controller Manager: manages all cluster-level functions, including creating and updating endpoints, node discovery, management, and monitoring, and management of pods.
- Authentication/Authorization: Kubernetes API endpoints are secured with TLS (Transport Layer Security), and requests must be authenticated and authorized before they can act on the cluster.
The Worker Nodes
The number of worker nodes depends on the size of the cluster. It can be a single node, but it's not uncommon to have a Kubernetes cluster with over 2,000 nodes. The size depends on your budget and use case. Each worker node runs the following services:
- Kubelet: an agent that receives pod specifications from the master and interfaces with the container runtime (e.g. Docker) to start, stop, and monitor container instances on its node.
- Kube-proxy: provides simple network proxying and load balancing. kube-proxy enables services to be exposed at a stable network address and name.
- Docker: the container engine responsible for running the containers that make up pods on each worker node. To run the containers, it has to download the images from a container registry. It is not the only container engine; alternatives such as rkt and CRI-O are also options.
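The stable address and name that kube-proxy provides come from a Service object. In this sketch (the service name, label, and ports are illustrative assumptions), traffic to the service's cluster IP is load-balanced by kube-proxy across all pods matching the selector:

```yaml
# Service sketch: a stable virtual IP and DNS name in front of matching pods
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical name; becomes a DNS entry inside the cluster
spec:
  selector:
    app: web-app           # routes to pods carrying this label
  ports:
  - port: 80               # stable port exposed by the service
    targetPort: 8080       # port the containers actually listen on
```

Pods come and go, but clients keep addressing `web-service:80`; kube-proxy on every node rewrites that traffic to whichever healthy pods currently match the selector.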
This tutorial introduced the Kubernetes architecture, which is the starting point for exploring Kubernetes. Subsequent tutorials will explore other aspects of Kubernetes in more detail. If you liked this post, please like and share it.