Container Orchestration with Kubernetes (K8s)
- Nishant Nath
- Sep 6, 2023
- 13 min read
Updated: Jul 13
1. Kubernetes Theory:
Kubernetes is an orchestration platform for managing containerized applications. It abstracts the underlying infrastructure and provides a consistent way to deploy, scale, and manage containerized workloads.
Kubernetes helps ensure those containerized applications run where and when you want, and helps them find the resources and tools they need to work.

2. Kubernetes Features:
Orchestration
Autoscaling
Auto-Healing
Load Balancing
Platform Independent
Fault Tolerance (detects Pod failure and creates a replacement Pod)
Rollback
Health Monitoring of Containers
3. Kubernetes Architecture:
Master Node:
The Master Node is the brain of the Kubernetes cluster. It manages the overall state and control plane of the cluster. It consists of several components:
API Server: This is the entry point for all administrative tasks and serves as the control plane's frontend. Users and other parts of the cluster interact with the API server to manage the cluster's state.
etcd: A distributed key-value store that stores the configuration data of the cluster. It is the source of truth for the cluster's state.
Controller Manager: Watches the state of the cluster through the API server and ensures that the desired state matches the actual state. It includes controllers for Replication Controllers, ReplicaSets, and more.
Scheduler: Assigns work (Pods) to nodes based on resource requirements, constraints, and other policies. It aims to maintain a balanced distribution of workloads across the cluster.

4. Worker Nodes (Minions):
Nodes are the worker machines in the cluster. They are responsible for running containers and providing the runtime environment. Each Node has the following components:
Kubelet: An agent that runs on each Node and communicates with the Master Node. It ensures that containers are running in a Pod as expected.
Container Runtime: The software responsible for running containers (e.g., Docker, containerd).
Kube Proxy: Maintains network rules on the Node. It routes traffic to the appropriate container or Pod based on IP addresses and port numbers.
Pod: The smallest deployable unit in Kubernetes. It can contain one or more containers that share the same network namespace, storage, and context. Pods are scheduled to run on Nodes.
5. More on K8s Architecture:
We create a manifest (.yml file).
We apply it to the Master to bring the cluster into the desired state.
Pods run on Nodes, which are controlled by the Master.
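The manifest-and-apply flow above can be sketched with a minimal manifest; the name and image below are illustrative:

```yaml
# pod.yml - a minimal manifest declaring the desired state
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25 # illustrative image
```

Applying it with `kubectl apply -f pod.yml` hands the desired state to the Master, which then schedules the Pod onto a Node.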
Slides - Kubernetes Introduction Slides
6. Microservices:
Microservices are a software architecture pattern where an application is composed of small, independent, and loosely coupled services. Each service focuses on a specific business capability or function.
Small Services: Instead of one large application, you have many small services, each handling a specific task.
Independence: These services can be developed, updated, and scaled independently. Changes in one service don't disrupt others.
Loose Coupling: Services communicate through well-defined interfaces, often over the network. They don't depend heavily on each other's internal workings.
Scalability: You can scale individual services to handle more load, making the application flexible and efficient.
Easier Maintenance: Smaller services are easier to understand, maintain, and replace if needed.

7. Minikube/Kubeadm Setup:
Install kubectl --> kubectl - Kubernetes Command-Line Tool
Official Doc Minikube --> Hello Minikube Tutorial
Create cluster using Minikube --> Creating a Cluster - Kubernetes Basics
EKS Cluster Setup --> Creating a Cluster - EKS
Kubeadm Setup for Multinode cluster via Kubeadm --> Kubeadm-Setup-Multi-Node
8. POD
The smallest deployable unit in Kubernetes.
A Pod is a group of one or more containers deployed together on the same host; a cluster is a group of Nodes.
A cluster has at least one Master and one Node.
In K8s the unit of control is the Pod, not the container.
Pods run on Nodes, which are controlled by the Master.
One Pod usually contains one container but can hold multiple containers.
Single Atomic Unit: A Pod is a single atomic unit that can host one or more containers. All containers within the same Pod share the same network namespace, making it easy for them to communicate with each other via localhost.
Shared Resources: Containers within the same Pod share the same storage volumes, IP address, and port space. This allows them to interact and share data more easily.
Primary Abstraction: While Pods are an essential part of Kubernetes, you typically interact with higher-level abstractions like Deployments, StatefulSets, or ReplicaSets to manage your applications. These controllers create and manage Pods on your behalf, ensuring high availability, rolling updates, scaling, etc.
GitHub page for LAB --> Kubernetes Lab - Pod
Slides --> Pods 101
9. Multi-Container POD
In Kubernetes, a multi-container Pod is a Pod that contains more than one container, with all containers in the Pod sharing the same network namespace, storage resources, and IP address.
Multi-container Pods are often used to encapsulate closely related processes that need to work together and communicate with each other while running in the same environment.
Key characteristics of multi-container Pods include:
Shared Resources: Containers within a multi-container Pod share the same storage volumes and network namespace, making it easier for them to communicate with each other.
Co-location: Multi-container Pods are often used when two or more containers need to work together on the same node. This enables them to efficiently communicate with each other through localhost.
Synchronization: Containers in a multi-container Pod can synchronize and coordinate tasks and actions more easily than if they were in separate Pods.
Resource Sharing: Containers can share data or execute tasks that depend on each other, simplifying orchestration and resource management.
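The characteristics above can be sketched as a sidecar-style Pod; the names, images, and log path below are assumptions for illustration:

```yaml
# Two containers sharing one network namespace and one volume
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}              # shared scratch space for both containers
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-shipper         # sidecar reading what the web container writes
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
```

Because both containers share the Pod's network namespace, the sidecar could also reach the web server at `localhost:80`.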
GitHub page for LAB --> Kubernetes Lab - Multi-Container Pod
10. Pod environment variables:
Environment variables are a fundamental concept in containerization and Kubernetes. They are key-value pairs that are used to pass configuration and runtime information to containers and applications running within them.
Environment variables can be set within containers to provide dynamic configuration, influence application behavior, or interact with the operating system or other services.
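As a small sketch, environment variables are set per container under the `env` field; the key and value here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo $APP_MODE && sleep 3600"]
      env:
        - name: APP_MODE        # illustrative key-value pair
          value: "production"
```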
GitHub page for LAB --> Kubernetes Lab - Pod Environment Variables
11. Pod Ports:
In Kubernetes, Pods can run one or more containers, and each container can listen on specific ports for incoming network traffic.
Ports are fundamental for communication between containers and external entities. Here are key aspects of ports in Pods:
Container Ports: Containers running in a Pod can specify which ports they will listen on. These ports are defined using the ports field in the Pod's container specification.
Pod Networking: Containers within a Pod share the same network namespace, allowing them to communicate with each other using localhost. This means that they can directly connect to each other over the specified container ports.
Service Discovery: Pods can communicate with other Pods or services by using their IP addresses and the specified container ports. These ports are important for service discovery and inter-container communication within a Pod.
Container-to-Container Communication: If multiple containers run in a Pod, they can use the specified container ports to communicate with each other. This is particularly useful when two containers within the same Pod need to interact.
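The `ports` field described above looks like this in a container spec (name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: port-demo
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80     # port this container listens on
          name: http            # optional name, referencable by Services
```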
GitHub page for LAB --> Kubernetes Lab - Pod Port
12. Kubernetes Objects
Kubernetes uses objects to represent the state of our cluster.
Objects are represented as JSON or YAML files.
They can describe which containerized applications are running, and on which Nodes.
They also capture the policies around how those applications behave, such as restart policies, upgrades, and fault tolerance.
Once we create an object, the k8s system constantly works to ensure that the object exists and that the cluster maintains its desired state.
Every k8s object has two nested fields that govern its configuration:
1. Object spec --> describes the desired state (the features we want the object to have).
2. Object status --> describes the actual state (updated by k8s).
All objects are identified by a unique name and a UID.
Some basic objects are - Pod, Service, Volume, Namespace, Replicasets, Secrets, Configmaps, Deployments, Jobs, Daemonset.
13. Labels & Selectors:
Labels are the mechanism we use to organize k8s objects.
Label is a key-value pair without any pre-defined meaning that can be attached to objects.
Multiple labels can be added to a single object.
Selectors are used to filter and target resources based on their labels.
Resources can be selected for various operations, such as scaling, load balancing, and scheduling, by specifying label selectors.
There are two main types of selectors: equality-based selectors and set-based selectors.
Equality-based selectors allow us to select resources where the label key-value pair matches exactly.
Set-based selectors allow you to select resources based on a set of label conditions.
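A sketch of labels on an object (the names and labels are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod
  labels:               # multiple labels attached to a single object
    app: web
    env: prod
spec:
  containers:
    - name: web
      image: nginx:1.25
```

With kubectl, an equality-based selection is `kubectl get pods -l app=web`, while a set-based selection is `kubectl get pods -l 'env in (prod,staging)'`.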
GitHub page for code --> Kubernetes Lab - Labels and Selector
14. Scaling & Replication:
A ReplicationController in Kubernetes is an object used to ensure that a specified number of replicas of a Pod are running at all times.
If a Pod fails or gets deleted, the ReplicationController automatically replaces it to maintain the desired number of replicas.
15. ReplicaSet:
A ReplicaSet in Kubernetes is a resource that ensures a specified number of replica Pods are running at all times.
It is primarily used for maintaining high availability and reliability of applications by automatically replacing failed Pods and scaling the number of replicas up or down based on the desired configuration.
Replicas: We define the number of desired replica Pods in the spec.replicas field. The ReplicaSet continuously monitors the actual number of Pods and takes action to match the desired count.
Selectors: ReplicaSets use label selectors (specified in spec.selector) to identify the Pods it manages. All Pods controlled by a ReplicaSet must have labels that match the selector.
Template: We define a Pod template in the spec.template field. This template is used to create new Pods when needed, ensuring that they have the desired configuration.
Automatic Healing: If a Pod fails or is deleted, the ReplicaSet automatically replaces it to maintain the specified replica count.
Scalability: We can easily scale up or down by adjusting the spec.replicas field, and the ReplicaSet will take care of the rest.
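The fields described above fit together like this; the name, labels, and image are illustrative:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                 # desired replica count
  selector:
    matchLabels:
      app: web                # must match the template's labels
  template:                   # Pod template used to create new Pods
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```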
Slides: ReplicaSet 101
GitHub page for code --> Kubernetes Lab - ReplicaSet
16. Deployment:
Deployments are used to manage the lifecycle of application instances, ensuring they are available, reliable, and can be scaled up or down as needed. Key features of Deployments include:
Replica Management: Deployments manage a set of identical replica Pods and ensure the desired number of replicas are running.
Rolling Updates and Rollbacks: Deployments support rolling updates by creating a new version of your application and gradually replacing the old Pods with the new ones. If issues arise during the update, you can easily roll back to the previous version.
Scalability: You can scale the number of replicas up or down, and the Deployment controller takes care of managing the Pods.
Self-healing: If a Pod fails, the Deployment controller replaces it to maintain the desired replica count.
Declarative Configuration: You specify the desired state of your application in a Deployment manifest, and Kubernetes reconciles the actual state with the desired state.
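A minimal Deployment manifest showing these features (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3                 # replica management
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate       # gradually replace old Pods with new ones
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If an update goes wrong, `kubectl rollout undo deployment/web-deploy` rolls back to the previous revision.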
Slides: Deployment 101
GitHub page for code --> Kubernetes Lab - Deployment
17. Services:
Kubernetes networking addresses four concerns - 1. Containers within a Pod use networking to communicate via loopback. 2. Cluster networking provides communication between different Pods. 3. Services let you expose an application running in Pods so it is reachable from outside the cluster. 4. Services can also publish applications for consumption only inside the cluster.
Here are several reasons why services are essential in the context of pods:
Stable Network Endpoint: Pods in Kubernetes have dynamic IP addresses, and they can come and go due to scaling, updates, or failures. Services provide a stable, virtual IP address (ClusterIP) or DNS name that acts as a consistent endpoint for communication. This abstraction allows other pods or external entities to reliably connect to the service without needing to know the IP addresses of individual pods.
Load Balancing: When a service exposes multiple pods, it automatically load-balances the incoming traffic among those pods. This ensures that the workload is distributed evenly, optimizing performance and preventing overload on specific pods.
Pod Discovery: Services simplify the process of discovering and connecting to other pods within the same or different namespaces. Instead of dealing with dynamic IP addresses and changes in the pod lifecycle, applications can refer to services by their stable DNS names.
Cross-Node Communication: Pods may be scheduled on different nodes in a Kubernetes cluster. Services provide a unified entry point, allowing communication between pods running on different nodes. This is particularly crucial for applications that span multiple nodes.
Service Types: Kubernetes supports various service types, each serving a specific purpose:
ClusterIP: Internal service accessible only within the cluster.
NodePort: Exposes the service on a static port on each node's IP. Useful for accessing the service externally.
LoadBalancer: Provides an externally accessible IP and automatically configures a cloud provider's load balancer to distribute traffic.
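A sketch of a Service manifest tying these ideas together; the names, labels, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort              # one of ClusterIP | NodePort | LoadBalancer
  selector:
    app: web                  # load-balances across Pods with this label
  ports:
    - port: 80                # stable Service port (the virtual endpoint)
      targetPort: 80          # container port on the backing Pods
      nodePort: 30080         # static port on every node (NodePort only)
```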
Slides: Services 101
GitHub page for code --> Kubernetes Lab - Pod Networking
18. Volumes
In Kubernetes, volumes in pods provide a way to persist and share data among containers within the same pod. Some key points about volumes in Kubernetes:
Data Sharing: Volumes allow multiple containers within the same pod to share and access the same data, facilitating communication and collaboration between containers.
Data Persistence: Volumes outlive the lifespan of individual containers. Even if a container crashes or is restarted, the data stored in the volume remains intact.
Types of Volumes: Kubernetes supports various types of volumes, such as emptyDir (temporary storage), hostPath (host machine's file system), Persistent Volumes (networked or local storage), and more.
Flexibility: Volumes can be mounted into one or more containers within a pod, enabling containers to read and write data to the shared storage.
Volume Mounts: Containers access volumes through volume mounts, specifying the desired volume and mount path in the pod's configuration.
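An emptyDir volume and its mount can be sketched as follows (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
    - name: cache
      emptyDir: {}            # temporary storage, lives as long as the Pod
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache   # where the volume appears inside the container
```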
GitHub page for code --> Kubernetes Lab - Volumes
19. Persistent Volumes & Liveness Probes
A Persistent Volume (PV) is a piece of storage in the cluster that has been provisioned by an administrator.
Purpose: It is a way for cluster administrators to provide durable storage resources that can be consumed by applications (Pods) in a cluster.
Usage: Pods can request a specific amount of storage, and if available, a Persistent Volume Claim (PVC) can be bound to a Persistent Volume, providing the pod with the requested storage.
PV is like a pre-allocated hard drive that you, as an administrator, set up and offer to users.
PVC is like a user saying, "I need a piece of storage, please allocate some for me." It's a way for users to request storage resources without needing to know the details of how and where that storage is provisioned.
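The administrator/user split above can be sketched as a PV and a matching PVC; the names, capacity, and hostPath are illustrative:

```yaml
# The administrator's "pre-allocated hard drive"
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data           # illustrative backing store
---
# The user's "please allocate some storage for me"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # bound to a PV that can satisfy this request
```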
GitHub page for code --> Kubernetes-Lab Pv-and-PVC
Definition: In Kubernetes, a Liveness Probe is a diagnostic mechanism used to determine if a container within a Pod is still running and responsive.
Purpose: It helps to ensure the health and availability of the application running inside a container. If the liveness probe fails, the container is restarted.
Configuration: The liveness probe is configured with parameters such as the probe type (HTTP, TCP, or Command), the path or command to probe, and thresholds for success and failure.
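As a sketch, an HTTP liveness probe on a container looks like this (the image, path, and thresholds are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:
        httpGet:                # probe type: HTTP
          path: /
          port: 80
        initialDelaySeconds: 5  # wait before the first probe
        periodSeconds: 10       # probe interval
        failureThreshold: 3     # restart the container after 3 failures
```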
GitHub page for code --> Kubernetes-Lab LivenessProbes
20. Configmaps & Secrets
ConfigMap is an API resource that provides a way to inject configuration data into applications. It decouples configuration artifacts from image content to keep containerized applications portable.
A ConfigMap allows us to:
Decouple Configuration: It separates configuration details from application code, making it easier to manage and update configurations without modifying the application code or container images.
Key-Value Pairs: ConfigMaps store configuration data as key-value pairs, where each key corresponds to a configuration item.
Pod Environment Variables: You can use ConfigMaps to populate environment variables in your Pod containers based on the key-value pairs defined in the ConfigMap.
Configuration Files: ConfigMaps can be used to store configuration files, which can then be mounted into Pods as volumes.
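A ConfigMap holding both simple keys and a whole file can be sketched like this (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"           # simple key-value pair
  app.properties: |           # an entire configuration file as one value
    color=blue
    mode=fast
```

A container can consume it via `envFrom` (as environment variables) or by mounting the ConfigMap as a volume.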
A Secret is an API resource used to store and manage sensitive information, such as passwords, API keys, and other confidential data. Secrets provide a way to keep this sensitive data secure, separate from the application code and configuration.
Here are key characteristics and use cases for Secrets in Kubernetes:
Key-Value Pairs: Secrets store data as key-value pairs, similar to ConfigMaps. Each key corresponds to a specific piece of sensitive information.
Base64 Encoding: Secret data is base64 encoded, but it's important to note that base64 encoding is not encryption. While it obscures the data, it is not a secure encryption mechanism.
Pod Environment Variables: Secrets can be used to provide sensitive data as environment variables in Pods, allowing applications to access this information securely.
Volume Mounts: Secrets can be mounted as files or volumes in Pods, allowing applications to read sensitive data from files.
Service Account Tokens: Kubernetes uses Secrets to store service account tokens, allowing Pods to authenticate with the API server.
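A minimal Secret manifest illustrating the points above (the name and values are placeholders, not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                   # plain values; Kubernetes base64-encodes them on storage
  DB_USER: admin
  DB_PASSWORD: s3cr3t         # remember: base64 obscures, it does not encrypt
```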
GitHub page for ConfigMap code --> Kubernetes-Lab ConfigMap
GitHub page for Secrets code --> Kubernetes-Lab Secret
21. Namespace
Namespaces are a powerful feature of Kubernetes, enabling better organization, isolation, and management of resources within a cluster.
Key points about namespaces in Kubernetes:
Isolation: Namespaces are primarily used to provide a scope for resources, helping to avoid naming conflicts between different parts of the system.
Logical Partitioning: Namespaces allow you to logically partition resources, making it easier to manage and organize your applications and services.
Resource Sharing: Resources within a namespace are accessible to other resources within the same namespace but are isolated from resources in other namespaces.
Default Namespace: When you create resources without specifying a namespace, they are placed in the default namespace. You can also create custom namespaces.
Multi-Tenancy: Namespaces are often used to support multi-tenancy, where different teams or projects can use the same cluster without interfering with each other.
kubectl Context: The kubectl command-line tool has a concept of "contexts," which includes both the cluster and the namespace. It allows you to easily switch between different clusters and namespaces.
To work within a specific namespace, you can set the namespace in the kubectl context:
kubectl config set-context --current --namespace=mynamespace
GitHub page for code --> Kubernetes-Lab Namespace
22. Kubernetes Ingress & Controllers
Kubernetes Ingress is a resource that manages external access to services in a cluster, typically HTTP and HTTPS traffic, instead of exposing each service with its own NodePort or LoadBalancer.
One entry point serves multiple services using different URL paths or subdomains.
Only one LoadBalancer is required instead of one per service.
There is one type of Ingress resource in Kubernetes, but Ingress Controllers (the actual software that implements ingress logic) come in different types:
| Ingress Controller | Description |
| --- | --- |
| NGINX | Most popular; supports custom annotations, SSL, path routing. |
| Traefik | Dynamic; supports metrics, service mesh integration. |
| HAProxy | High performance, more configurable. |
| Istio Gateway | For service meshes with advanced routing. |
| AWS ALB Ingress Controller | Integrates with AWS Application Load Balancers. |
| Contour | Powered by Envoy proxy, performant and scalable. |
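A sketch of an Ingress resource routing one path to a backend Service; the hostname, Service name, and ingress class are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx     # which Ingress Controller handles this resource
  rules:
    - host: example.com       # illustrative hostname
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: web-svc # routes example.com/app to this Service
                port:
                  number: 80
```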
GitHub page for code --> Kubernetes-Lab Ingress
23. Helm Charts
Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. Helm uses packages called "charts" to define, install, and upgrade even the most complex Kubernetes applications.
A Helm chart is a collection of pre-configured Kubernetes resource files (YAML files) that define a set of resources needed to run an application within a Kubernetes cluster. These resources can include deployments, services, config maps, secrets, and more.
A Helm chart typically has the following structure:
my-chart/
├── Chart.yaml # Metadata about the chart
├── values.yaml # Default configuration values
├── templates/ # Kubernetes resource templates
│ ├── deployment.yaml
│ ├── service.yaml
├── charts/ # Sub-charts dependencies
├── ...
Chart.yaml file contains metadata about the Helm chart, including the chart name, version, description, and dependencies on other charts.
values.yaml defines default configuration values for the chart. Users can override these values when installing the chart, allowing for customization without modifying the chart's templates.
templates/ directory contains Kubernetes resource templates. These templates use Go templating to insert values from the values.yaml file and can be customized to suit your application's needs. Common resource types include Deployments, Services, ConfigMaps, and Secrets.
charts/ directory is used for including subcharts, which are separate Helm charts that can be used as dependencies in your main chart. Subcharts help modularize complex applications.
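The values-to-template relationship can be sketched as follows; the values and template fragment are illustrative, not from a real chart:

```yaml
# values.yaml (illustrative defaults, overridable at install time)
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"

# templates/deployment.yaml (fragment) - Go templating pulls in the values above
#
# spec:
#   replicas: {{ .Values.replicaCount }}
#   template:
#     spec:
#       containers:
#         - name: {{ .Chart.Name }}
#           image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

A user could override a default at install time, e.g. `helm install myapp ./my-chart --set replicaCount=5`.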
GitHub page for code --> Kubernetes-Lab HELM
Install a Chart:
helm install <release-name> <chart-name>
Upgrade a Chart:
helm upgrade <release-name> <chart-name>
List Releases:
helm list
Uninstall a Release:
helm uninstall <release-name>