Evolution of Docker from Linux Containers


Docker is a powerful tool that allows developers to easily build, deploy and run containerized applications. Containers are a lightweight and portable form of virtualization that packages an application and its dependencies, making it easy to move between different environments. The evolution of Docker began with Linux Containers (LXC) and has since revolutionized the way we think about software development and distribution. In this article, we'll explore the evolution of Docker from Linux containers, the benefits of using containers, and how Docker enhances the LXC concept.

What are Containers?

Containers are a logical packaging mechanism that abstracts applications from the environment in which they run. The benefit of this abstraction is the ability to deploy applications across any environment easily and consistently. Developers can build an app on their local desktop, containerize it, and securely deploy it in a public cloud. Both virtual machines and containers virtualize access to underlying hardware, such as CPU, memory, storage, and networking. However, virtual machines are expensive to create and maintain compared to containers, because each virtual machine requires its own full copy of an operating system.

Docker Evolution from Linux Containers

Docker was first released in 2013, but its roots go back to Linux Containers (LXC), first introduced in 2008. LXC is a userspace interface for the containerization features of the Linux kernel, such as namespaces and control groups ("cgroups"). It allows multiple isolated Linux systems, or containers, to run on a single host, sharing the host's kernel. Docker built on the LXC concept and created a new platform for developing, shipping, and running containerized applications. One major difference between Docker and LXC is that Docker uses a layered union file system, which allows multiple containers to share the same underlying image layers. This makes Docker more efficient and lightweight than traditional virtualization.

Docker also introduced the concept of container images, which are pre-built, pre-configured images of an application and its dependencies. These images can be easily shared and reused across different teams and environments, making it easier to deploy applications consistently and reliably. Docker also added a command-line interface (CLI) and REST API for managing containers, making it easier for developers to integrate Docker into their workflow. Additionally, Docker introduced a centralized hub for sharing and discovering images, called Docker Hub, which made it easier for developers to find and reuse existing images.
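As an illustration, an image is typically described by a small Dockerfile. The sketch below builds a hypothetical shell-script application on top of a public base image (the base image tag and file names are placeholders, not from this article):

FROM alpine:3.19
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]

Each instruction adds a layer to the image, and every container started from images built on alpine:3.19 shares that same base layer on disk, which is what makes the layered approach efficient.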

Understanding Linux Containers

Linux Containers, often referred to as LXC, was perhaps the first implementation of a full container manager. It provides operating-system-level virtualization, offering a mechanism for limiting and prioritizing resources such as CPU and memory across multiple applications. It also allows for complete isolation of an application's process tree, network, and file system. All processes share the same kernel, making containers quite lightweight compared to virtual machines.

Arrival of Docker

While LXC provides a tidy and powerful userspace interface, it is not especially user-friendly and never gained mass appeal. Docker changed the game by abstracting away most of the complexity of handling kernel features and providing a simple format for packaging an application and its dependencies into a container. It also comes with support for automatically building, versioning, and reusing containers. By moving to its own runtime libraries, Docker also insulated itself from the side effects of different LXC releases and distributions.

Advantages of Docker over LXC

Docker provides OS-level virtualization for sandboxing, just like LXC. However, Docker offers several advantages over LXC, such as a simpler, easier-to-use interface, support for automatic container creation and versioning, and a centralized hub for sharing and discovering images. Additionally, Docker has a modular architecture built from key components, such as the Docker daemon (dockerd), containerd, and runc, which allows each core component to evolve and standardize independently.

Docker Workflow

A typical Docker workflow includes packaging an application as an image, publishing it to the registry, and running it as containers, possibly with persistence. Docker's command line interface and REST API make it easy to integrate this workflow into your development process.
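The workflow above can be sketched as a short command sequence. The image name, registry address, and volume name below are hypothetical placeholders:

$ docker build -t registry.example.com/myapp:1.0 .
$ docker push registry.example.com/myapp:1.0
$ docker run -d -v mydata:/var/lib/myapp registry.example.com/myapp:1.0

Here, build packages the application as an image, push publishes it to a registry, and run starts it as a container; the -v flag mounts a named volume so data persists across container restarts.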

Docker Commands

To start using Docker, you first need to install it on your system. Once Docker is installed, you can start using it to run containers. The following command will execute a simple "Hello, World" container:

$ docker run hello-world

To list all running containers, use the command:

$ docker ps

To stop a running container, use the command:

$ docker stop <CONTAINER ID>
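A few other commands commonly used alongside these (the container ID is a placeholder, as above):

To list the images stored locally:

$ docker images

To remove a stopped container:

$ docker rm <CONTAINER ID>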

Docker Architecture

Docker has a modular architecture that relies on key components, such as the Docker daemon (dockerd), containerd, and runc, to deliver its services. This architecture allows the core components to evolve and standardize independently. The Docker daemon (dockerd) is the core of Docker; it listens for API requests and manages Docker objects, exposing a REST API and a command-line interface for interacting with it. containerd is another service daemon that performs tasks such as downloading images and running them as containers. It follows a standard API that clients like dockerd can connect to. runc is the component that interacts with kernel features and provides a standard mechanism for creating namespaces and control groups. It is a repackaging of libcontainer to comply with the OCI specification.
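On a running installation you can see these components for yourself. On recent Docker releases, the following commands report, among other details, the versions of containerd and runc in use:

$ docker version
$ docker info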


Docker has come a long way since its origins in Linux Containers. It has become the de facto standard for containerization and has greatly simplified the process of developing, deploying, and running applications.