Understanding Docker and Containers
- Nishant Nath
- Sep 5, 2023
- 4 min read
Updated: Jul 20
1. What is Docker?
Docker is like a lunchbox that holds software and everything it needs to run, and it makes it easy to carry, share, and run software on different computers without making a mess.
Docker simplifies the DevOps methodology by allowing developers to create templates called “images,” from which we can launch lightweight, isolated environments called “containers.”
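As a first taste, and assuming Docker is installed locally, the image-to-container workflow looks like this (the `nginx` image and the names here are just illustrative):

```shell
# Download an image (a template) from Docker Hub
docker pull nginx:latest

# Start a container (a running instance of that image)
docker run -d --name web -p 8080:80 nginx:latest

# List running containers, then stop and remove ours
docker ps
docker stop web && docker rm web
```

These commands require a running Docker daemon; the rest of this post unpacks what each piece is doing.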

2. What is Virtualization?
Virtualization refers to running a guest operating system on top of a host operating system. It allows developers to run multiple OSes on separate virtual machines (VMs), all on the same physical host, thereby avoiding the need for dedicated hardware for each environment.

Consider a common scenario: in the development stage, the team builds an application and the code works correctly. However, it fails in production because the production environment is missing some necessary dependencies. Those dependencies exist on the development team's systems, but they were never installed in production.
One solution is for the DEV team to provide both the application and the required dependencies to the PROD team, so the application can run on the production team's virtual machine or server. However, in many real-world situations it is not feasible to replicate or share the entire operating system (OS) between teams.
A hypervisor is used to virtualize the hardware. When creating a virtual machine (VM), it allocates resources from the underlying hardware, including CPU, RAM, and storage. Each VM has its own operating system (OS) installed, matching the development team's environment.
The application is developed within these VMs, and the entire VM, including the necessary dependencies, is shared with the testing team. This allows the testing team to run and test the application successfully since they have the same environment as the development team. Once testing is complete, the application can be pushed to the operations team for production use.
3. What is a Container?
In containerization, we virtualize OS resources. It is more efficient because there is no guest OS consuming host resources; instead, containers run directly on the host OS and share its libraries and resources only when required.

The required binaries and libraries of containers run on the host kernel leading to faster processing and execution.
A container is a lightweight virtualization technology that acts as an alternative to hypervisor-based virtualization. Bundle any application in a container and run it without worrying about dependencies, libraries, and binaries.
Containers are small and lightweight as they share the same OS kernel.
They boot up in seconds rather than minutes.
They exhibit high performance with low resource utilization.
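One way to see the shared kernel in practice (again assuming Docker is installed on a Linux host): a container reports the host's kernel version, because it has no guest OS of its own.

```shell
# The kernel version reported inside an Alpine container...
docker run --rm alpine uname -r

# ...matches the kernel version of the host itself
uname -r
```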

4. Docker Architecture
Docker uses a client-server architecture. The Docker client issues commands such as docker build, docker pull, and docker run.
The client approaches the Docker daemon that further helps in building, running, and distributing Docker containers.
Docker client and Docker daemon can be operated on the same system; otherwise, we can connect the Docker client to the remote Docker daemon. Both communicate with each other by using the REST API, over UNIX sockets or a network.
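That REST API is easy to poke at directly. On a Linux host with Docker running, the daemon listens on a UNIX socket by default; this probe is a sketch of the same conversation the CLI has under the hood:

```shell
# Talk to the Docker daemon's REST API directly, bypassing the CLI
curl --unix-socket /var/run/docker.sock http://localhost/version

# The docker CLI does the equivalent; -H points it at a specific
# daemon, which is how one client can manage several daemons
docker -H unix:///var/run/docker.sock version
```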

The Docker client is the primary way most users interact with Docker. It is a command-line utility (or any other tool that uses the Docker API) that communicates with the Docker daemon. A single Docker client can communicate with more than one Docker daemon.
The Docker daemon listens for Docker API requests and manages Docker objects such as images, containers, volumes, etc. For example, on a user's build request, the daemon builds an image and can then push it to a registry.
Docker registry is a repository for Docker images that are used for creating Docker containers.
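A typical round trip through a registry looks like this (Docker Hub here; `myuser` is a placeholder account name, not a real one):

```shell
# Pull a public image from the registry
docker pull alpine:3.19

# Re-tag it under your own namespace, then push it back up
docker tag alpine:3.19 myuser/alpine:3.19
docker login
docker push myuser/alpine:3.19
```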
Docker Image:
A Docker image is like a blueprint for an application.
It contains all the files, code, and dependencies needed to run your application.
Images are read-only and provide a consistent starting point for containers.
Docker Container:
A Docker container is a running instance of a Docker image.
It's like a lightweight, isolated environment where your application runs.
Containers are portable, efficient, and can be easily started, stopped, and moved between different systems.
Image vs Container:
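The distinction shows up directly in the CLI: `docker images` lists the read-only blueprints on disk, while `docker ps` lists the running instances. A sketch, assuming Docker is installed:

```shell
# Images on disk (the blueprints)
docker images

# One image can back many independent containers
docker run -d --name app1 nginx:latest
docker run -d --name app2 nginx:latest

# Running instances of those blueprints
docker ps
```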

5. Creating Image from Dockerfile:
A Dockerfile is used to create a Docker image. It typically includes various components and instructions to define the environment for our application.
Here's an overview of the main components in a Dockerfile:
Base Image: Specifies the base image from which our Docker image is built. For example, FROM ubuntu:20.04 selects the Ubuntu 20.04 base image.
Environment Setup: Install any necessary packages, libraries, or dependencies using package managers like apt (for Ubuntu), yum (for CentOS), or language-specific package managers.
Working Directory: Set the working directory within the container where your application will be placed and executed, using the WORKDIR instruction. Example: WORKDIR /app.
Copy Files: Copy your application code and resources into the container using the COPY instruction (local files) or ADD (which can additionally fetch remote URLs and auto-extract local tar archives).
Run Commands: Execute commands within the container to configure your application or set up the environment. Use the RUN instruction.
Expose Ports: Specify which ports the container should listen on using the EXPOSE instruction. For example, EXPOSE 8080.
Environment Variables: Define environment variables that your application might require using the ENV instruction.
Entrypoint/CMD: Specify the command that should be executed when the container starts. CMD provides defaults that can be overridden at docker run time, while ENTRYPOINT fixes the main executable; when both are present, CMD supplies default arguments to ENTRYPOINT.
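Putting these components together, here is a minimal sketch of a Dockerfile for a hypothetical Node.js app (the file names, port, and entry script are assumptions, not from a real project):

```dockerfile
# Base image
FROM node:18-alpine

# Working directory inside the container
WORKDIR /app

# Copy dependency manifests first so this layer is cached
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Environment variables and the port the app listens on
ENV NODE_ENV=production
EXPOSE 8080

# Default command when the container starts
CMD ["node", "server.js"]
```

Building and running it would then be `docker build -t my-app .` followed by `docker run -p 8080:8080 my-app`.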
6. Multi-Stage Dockerfile:
Instead of doing everything in one big image, a multi-stage Dockerfile breaks it into stages like this:
Builder Stage – Installs tools, builds code, installs dependencies.
Final Stage – Takes only the needed files from the builder, leaving all the extra stuff behind.
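The two stages above can be sketched as a multi-stage Dockerfile for a hypothetical Go service (the paths and names are illustrative):

```dockerfile
# --- Builder stage: compilers and build tools live here ---
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/server .

# --- Final stage: only the compiled binary is carried over ---
FROM alpine:3.19
COPY --from=builder /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

The Go toolchain, source code, and intermediate build artifacts never reach the final image; only the binary does.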
Benefits of multi-stage image:
| Benefit | Explanation |
| --- | --- |
| Smaller image size | Unused tools (like pip, gcc, etc.) are left out of the final image. |
| Cleaner separation | Build-time tools and files stay separate from the runtime. |
| More secure | The final image has fewer packages, so less attack surface. |
| Faster deployments | Smaller images download and start faster. |
| Easier maintenance | Changes in the build step don’t affect your runtime environment. |
7. Hands-On-Lab: Docker Concepts - NodeJS App
8. Hands-On-Lab: Multi-Stage Dockerfile application - Python Quiz App
9. Hands-On-Lab: Java Application without Docker - Java-App-without-Docker
10. Hands-On-Lab: Java Application with Docker - Java-App-with-Docker