Kubernetes is an open source platform for managing containerised workloads and services that facilitates declarative configuration and automation.
The figure below shows the Kubernetes master and worker node components.
The control plane governs how Kubernetes communicates with your cluster and is responsible for managing it. It runs the Scheduler, Controller Manager, API Server, and etcd components. The control plane is in constant contact with your worker nodes, and the master node acts as the control plane for the cluster.
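On a kubeadm-provisioned cluster (an assumption; managed offerings hide these pods) the control-plane components run as static pods in the kube-system namespace, so you can list them directly:

```shell
# List the control-plane components (kubeadm-style cluster assumed,
# where they run as static pods on the master node)
kubectl get pods -n kube-system
# Expect entries such as kube-apiserver-<node>, kube-scheduler-<node>,
# kube-controller-manager-<node>, and etcd-<node>
```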
kube-controller-manager runs controller processes. It manages different controllers that regulate the state of the cluster, manage workload life cycles, and perform routine tasks.
cloud-controller-manager runs controllers that interact with the underlying cloud providers to manage resources.
This is important because Kubernetes needs information about the nodes running in the cluster, and must provision and configure cloud resources such as load balancers and persistent volumes. Even though all cloud platforms offer broadly similar features, each implements these operations differently.
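For instance, creating a Service of type LoadBalancer is what prompts the cloud-controller-manager to provision a load balancer with the provider. A minimal sketch (the name `web-lb` and the `app: web` label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb          # hypothetical Service name
spec:
  type: LoadBalancer    # asks the cloud provider for an external load balancer
  selector:
    app: web            # forwards traffic to pods carrying this label
  ports:
  - port: 80            # port exposed on the load balancer
    targetPort: 8080    # container port the traffic is sent to
```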
etcd is used to store configuration data that can be accessed by each of the nodes in the cluster. Only the kube-apiserver talks to the etcd database. If the kube-scheduler needs to record that a pod has been stopped or allocated to another node, it informs the API server, and the API server stores the current state in the etcd database.
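That persisted state lives under etcd's `/registry` prefix. On a cluster where you have access to etcd's client certificates (an assumption; the paths below match a kubeadm install and managed clusters do not expose them), you could inspect it with etcdctl:

```shell
# List the keys for all Pod objects the API server has persisted in etcd
# (certificate paths are illustrative for a kubeadm cluster)
ETCDCTL_API=3 etcdctl \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert   /etc/kubernetes/pki/etcd/server.crt \
  --key    /etc/kubernetes/pki/etcd/server.key \
  get /registry/pods --prefix --keys-only
```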
kube-apiserver is the component of the control plane that exposes the Kubernetes API. The API server is the front end for the control plane and the main management point of the entire cluster. In short, it processes REST operations, validates them, and updates the corresponding objects in etcd. When you interact with your Kubernetes cluster using the kubectl command-line interface, you are actually communicating with the API server.
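You can see this in action by raising kubectl's log verbosity, which prints the REST calls it sends to the API server:

```shell
# -v=6 logs each HTTP request kubectl makes to the API server
kubectl get pods -v=6
# The output includes lines like:
# GET https://<api-server>:6443/api/v1/namespaces/default/pods?limit=500 200 OK
```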
kube-scheduler watches for newly created Pods with no assigned node, and selects a node for them to run on.
The kube-scheduler also reads the nodes' affinity and anti-affinity rules to determine whether a node can run that pod.
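Those rules are declared on the pod itself. A minimal sketch of a required node-affinity rule (the `gpu` label key, pod name, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod                   # hypothetical pod name
spec:
  affinity:
    nodeAffinity:
      # Hard requirement: the scheduler will only place this pod
      # on nodes labelled gpu=true
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
  containers:
  - name: app
    image: nginx
```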
Kubernetes worker nodes are the only place you should run workloads; it is not recommended to run containers or other loads on the master nodes, as they need to stay available to manage the entire cluster. A Kubernetes cluster needs at least one worker node, but will normally have many. To scale up the capacity of the cluster, add more worker nodes.
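On a kubeadm cluster the control-plane node carries a taint that keeps ordinary workloads off it, which you can confirm with kubectl (the node name below is a placeholder):

```shell
# Show all nodes and their roles
kubectl get nodes
# Inspect the taint that repels ordinary workloads from the control plane
kubectl describe node <control-plane-node> | grep Taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
```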
kubelet communicates with the master node. When the master node needs something to happen in a worker node, the kubelet executes the action. The kubelet makes sure containers are running in a pod.
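On most installations the kubelet runs as a systemd service on each node (an assumption; some distributions differ), so you can check it directly from the node itself:

```shell
# On a worker node: check the kubelet service and tail its recent logs
systemctl status kubelet
journalctl -u kubelet -n 50
```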
kube-proxy makes communication between pods, containers, and nodes possible. It is responsible for routing traffic to the appropriate container based on the IP address and port number of the incoming request.
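In its default iptables mode, kube-proxy implements this routing as NAT rules on every node, which you can inspect directly (run on a node, with root access assumed):

```shell
# Dump the NAT table and show the per-Service chains kube-proxy maintains
sudo iptables-save -t nat | grep KUBE-SVC | head
```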
The container runtime is the software responsible for creating and running the containers on each node's kernel for our pods. The kubelet talks to this runtime and spins up or stops our containers on demand.
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
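Which runtime each node is actually using is visible from the node status:

```shell
# The CONTAINER-RUNTIME column reports each node's runtime and version
kubectl get nodes -o wide
# e.g. containerd://1.7.2 or cri-o://1.28.1
```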