Kubernetes (K8s) is not just a tool for running containers; it is a platform for managing declarative infrastructure. Unlike traditional imperative systems, where you tell the server exactly which steps to perform, in Kubernetes you tell the cluster what you want the final state to look like, and the controllers work tirelessly to maintain that state.
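
A quick contrast, using standard kubectl commands (the deployment name and manifest file are placeholders):

# Imperative: you issue each step yourself
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Declarative: you describe the end state in a file and let the controllers converge on it
kubectl apply -f web-deployment.yaml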


1. The Control Plane: The Brain of the Operation 🧠

To master K8s, you must understand the flow of data. When you run kubectl apply, you aren't talking to the containers; you are talking to the API Server.

API Server (kube-apiserver)

The gatekeeper. It is the only component that communicates directly with the database (etcd). It authenticates and authorizes requests, validates the submitted objects, and persists the updated state.
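
Every kubectl command is simply an authenticated HTTPS request to this API. Two quick ways to see that (assuming your kubeconfig already points at a cluster):

kubectl cluster-info                               # shows the API server endpoint kubectl is talking to
kubectl get --raw /api/v1/namespaces/default/pods  # hits the REST path directly and returns the raw JSON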

etcd

The source of truth. A consistent, highly available key-value store that holds the entire cluster state. Pro Tip: If you lose your etcd data and have no backup, your cluster is effectively dead, even if the nodes are still running.
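
Because etcd is that important, production clusters snapshot it regularly. A minimal sketch with etcdctl, assuming stacked etcd on a kubeadm control-plane node with the default certificate paths:

ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key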

kube-scheduler

The matchmaker. It watches for newly created Pods with no assigned node and selects the best node based on resource requirements, affinity rules, and taints/tolerations.
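
All of those inputs live in the Pod spec itself. The manifest below is a hypothetical example combining the three most common scheduling hints: resource requests, a nodeSelector, and a toleration (the disktype label and dedicated taint are invented for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    resources:
      requests:             # the scheduler only places the Pod on a node with this much free capacity
        cpu: "250m"
        memory: "128Mi"
  nodeSelector:
    disktype: ssd           # only nodes labeled disktype=ssd are considered
  tolerations:
  - key: "dedicated"        # allows this Pod onto nodes tainted dedicated=frontend:NoSchedule
    operator: "Equal"
    value: "frontend"
    effect: "NoSchedule"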

kube-controller-manager

The infinite loop. It bundles the control loops for Nodes, Deployments, Endpoints, and more, each of which constantly compares Current State vs. Desired State and acts to close the gap.
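
You can watch a control loop do its job: delete a Pod that a Deployment owns, and the controller spins up a replacement within seconds to restore the desired replica count (the Pod name below is a made-up placeholder):

kubectl get pods -w                        # watch Pod events in one terminal
kubectl delete pod web-7c9d8f6b5d-abcde    # in another terminal, delete a Deployment-owned Pod; a new one appears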


2. The Worker Nodes: Where Work Happens 🏗️

The worker nodes are the muscle. They accept instructions from the Control Plane and execute them; a couple of inspection commands follow the list below.

  • Kubelet: The primary "agent" running on every node. It registers the node with the API server and ensures that containers described in PodSpecs are running and healthy.
  • Kube-proxy: Maintains network rules. It uses the OS packet filtering layer (like iptables or IPVS) to allow network communication to your Pods from inside or outside the cluster.
  • Container runtime (via CRI): Kubernetes doesn't actually run containers itself; the kubelet instructs a runtime (like containerd or CRI-O) to do it through the Container Runtime Interface.
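
To see what the kubelets have registered and how a node is doing, two standard kubectl commands cover most day-to-day inspection (the node name is a placeholder):

kubectl get nodes -o wide            # every registered node, with IPs, OS image, and container runtime version
kubectl describe node <node-name>    # capacity, conditions, allocated resources, and the Pods scheduled there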

3. K8s Objects: Defining State

Everything in Kubernetes is an object. Here is how the core objects relate to one another in a real-world deployment.

The Abstraction Layer: We rarely create individual Pods manually. Instead, we create a Deployment, which creates a ReplicaSet, which manages the Pods.
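
As a sketch of that chain, the Deployment below (the name is illustrative) creates a ReplicaSet, which in turn keeps three copies of the nginx Pod defined in the next example running at all times:

deployment-definition.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webserver        # the ReplicaSet manages any Pod carrying this label
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80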

The Pod (The Atom)

A Pod encapsulates one or more application containers (for example, a main container plus sidecars), storage resources, and a unique network IP. All containers in a Pod share the same network namespace, so they can talk to each other over localhost.

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: webserver          # Services and ReplicaSets select Pods by labels like this one
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2     # pin the image tag so rollouts are predictable
    ports:
    - containerPort: 80     # the port the container listens on
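
You would create this Pod and verify it with two standard commands:

kubectl apply -f pod-definition.yaml
kubectl get pods -l app=webserver    # filter by the label declared in the manifest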

The Service (The Network Glue)

Since Pods are ephemeral (they die and get new IPs), we need a stable address. A Service defines a logical set of Pods and a policy by which to access them.

  • ClusterIP: Exposes the Service on a cluster-internal IP. (Default)
  • NodePort: Exposes the Service on each Node's IP at a static port.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
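
Tying it together, a minimal ClusterIP Service that fronts the app: webserver Pods from the earlier example could look like this (the Service name is illustrative):

service-definition.yaml
apiVersion: v1
kind: Service
metadata:
  name: webserver-svc
spec:
  type: ClusterIP
  selector:
    app: webserver          # routes traffic to any Pod carrying this label
  ports:
  - port: 80                # the Service's stable port
    targetPort: 80          # the containerPort on the Pods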