If you are new to Docker, you may find it challenging to understand all the terms. It can seem like everyone has a different idea of what Docker terms mean, and sometimes, terms get used interchangeably.
For instance, like many others learning about Docker, you may wonder how a Docker image differs from a Docker container. The differences between the two are subtle but significant.
This article will explore the differences between Docker images and containers, helping you understand how and when to use each.
Solutions like Docker, Flatpak, and Snaps all have the same goal of packaging an app into a single bundle to install in any Linux distribution.
Solomon Hykes introduced Docker as an open-source project in 2013. The first commercial, production-ready version followed in 2014.
The Docker containerization platform helps create, deploy, and run applications easily by using containers. Think of a container as a shipping container for software — it holds important content like files and programs so that an application can be delivered efficiently from producer to consumer.
One of the biggest benefits of containerization is that it enables developers to package their apps with all dependencies needed to run on any Linux distribution. This removes the need to install each dependency manually.
Multiple containers can run simultaneously, each based on the same or different images. Docker resembles virtual machines in that it provides isolated environments for applications, but Docker containers share the host operating system's kernel instead of each running a full operating system. As a result, a given piece of hardware can run far more containers than virtual machines.
Docker containers can even run within virtual machines. Docker provides an additional layer of abstraction and automation versus creating a virtual machine, making it easier to use.
The containerization platform’s popularity has increased among developers and system administrators because it encompasses the application’s complete filesystem with all its dependencies. This setup enables immutable infrastructure and guarantees that deployments are idempotent — they will stay exactly the same no matter how many times you repeat the operation.
A Docker daemon runs in the background to manage images, containers, and more. A client and the daemon communicate using sockets or through a RESTful API.
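As a sketch of that client-daemon communication, you can query the daemon's REST API directly over its local Unix socket, the same channel the docker CLI client uses. The socket path below is the common Linux default, and the block simply reports what it would do when no daemon socket (or curl) is available:

```shell
#!/bin/sh
# Talk to the Docker daemon over its Unix socket, bypassing the CLI client.
# /var/run/docker.sock is the default socket path on most Linux installs.
SOCK=/var/run/docker.sock

if command -v curl >/dev/null 2>&1 && [ -S "$SOCK" ]; then
  # Ask the daemon for its version via the REST API.
  curl --silent --unix-socket "$SOCK" http://localhost/version
else
  # No daemon socket here; describe the request instead of sending it.
  echo "no docker socket at $SOCK; would GET /version from the REST API"
fi > daemon_check.out

cat daemon_check.out
```

Either branch illustrates the same point: the CLI is only a client, and anything that can speak to the socket can drive the daemon.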
What is a Docker image?
Images are read-only templates containing instructions for creating a container. A Docker image serves as the basis for the containers that run on the Docker platform.
Think of an image like a blueprint or snapshot of what will be in a container when it runs.
An image is composed of multiple stacked layers, like layers in a photo editor, each changing something in the environment. Images contain the code or binary, runtimes, dependencies, and other filesystem objects to run an application. The image relies on the host operating system (OS) kernel.
For example, to build a web server image, start with an image that includes Ubuntu Linux (a base OS). Then, add packages like Apache and PHP on top.
You can build images manually using a Dockerfile, a text document containing all the commands needed to create a Docker image. You can also pull existing images from a central repository called a registry, such as Docker Hub, using the command
docker pull [name]
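A minimal Dockerfile for the Ubuntu-based web server described above might look like the following sketch. The base image tag, package names, and the my-web-server image name are illustrative; the block writes the Dockerfile and only attempts a build when a Docker daemon is reachable:

```shell
#!/bin/sh
# Write a Dockerfile that layers Apache and PHP on top of an Ubuntu base.
cat > Dockerfile <<'EOF'
# Base layer: Ubuntu Linux
FROM ubuntu:22.04

# Next layers: install the Apache web server and PHP
RUN apt-get update && \
    apt-get install -y apache2 php libapache2-mod-php && \
    rm -rf /var/lib/apt/lists/*

# Serve on port 80 and keep Apache in the foreground
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
EOF

# Build the image only when a Docker daemon is available.
if docker info >/dev/null 2>&1; then
  docker build -t my-web-server .
else
  echo "Docker daemon not available; Dockerfile written but not built"
fi
```

Each Dockerfile instruction (FROM, RUN, EXPOSE, CMD) produces one of the stacked layers described earlier.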
When a Docker user runs an image, it becomes one or multiple container instances. The container’s initial state can be whatever the developer wants — it might have an installed and configured web server, or nothing but a bash shell running as root. In practice, though, most images include some preconfigured software and configuration files.
Docker images are immutable, so you cannot change them once they are created. If you need to change something, create another container with your changes, then save those as another image. Or, just run your new container using an existing image as a base and change that one.
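That change-then-save workflow can be sketched with docker commit. The container and image names here are hypothetical, and the Docker commands only run when a daemon is reachable:

```shell
#!/bin/sh
# Derive a new image from a changed container; the base image stays untouched.
{
  if docker info >/dev/null 2>&1; then
    # Start a container from an existing base image.
    docker run --name tweaked -d ubuntu:22.04 sleep 60

    # Change something inside the running container.
    docker exec tweaked sh -c 'echo hello > /greeting.txt'

    # Save the container's current state as a new image.
    docker commit tweaked my-ubuntu:with-greeting

    docker rm -f tweaked
  else
    echo "Docker daemon not available; skipping commit demo"
  fi
  echo "commit demo finished"
} > commit_demo.out 2>&1
cat commit_demo.out
```

After the commit, ubuntu:22.04 is exactly as it was; my-ubuntu:with-greeting is a separate, equally immutable image.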
After you build an application image, you can derive further images from it. Images derived from one another are usually called parent and child images.
An image may have multiple tags, but each tag points to exactly one image. Tags distinguish image versions, such as ubuntu:latest or ubuntu:14.04.
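Tagging can be sketched as follows; the my-base name is hypothetical, and the commands only run when a Docker daemon is present:

```shell
#!/bin/sh
# One image can carry several tags; each tag resolves to one image ID.
{
  if docker info >/dev/null 2>&1; then
    docker pull ubuntu:22.04
    # Add a second, more descriptive tag to the same image.
    docker tag ubuntu:22.04 my-base:stable
    # Both listings show the same IMAGE ID.
    docker images ubuntu:22.04
    docker images my-base:stable
  else
    echo "Docker daemon not available; skipping tag demo"
  fi
  echo "tag demo finished"
} > tag_demo.out 2>&1
cat tag_demo.out
```

docker tag creates no new data; it only adds another name pointing at the existing layers.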
Images themselves do not run, but you can create and run containers from a Docker image.
What is a container?
A container is an isolated place where an application runs without affecting the rest of the system and without the system impacting the application. Because they are isolated, containers are well-suited for securely running software like databases or web applications that need access to sensitive resources without giving access to every user on the system.
Since a container runs natively on Linux and shares the host machine's kernel, it is lightweight, consuming no more memory than any other executable. If you stop a container, it will not automatically restart unless you configure it to. Because containers do not carry the overhead of a full operating system, they can be much more efficient than virtual machines: they share a single kernel with other containers and start in seconds instead of minutes.
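You can observe the shared kernel directly: uname -r inside a container reports the host's kernel release. This sketch only runs the comparison when a Docker daemon is available (on Docker Desktop the "host" is a Linux VM, so compare against that VM's kernel):

```shell
#!/bin/sh
# Containers share the host kernel instead of booting their own OS.
{
  if docker info >/dev/null 2>&1; then
    echo "host kernel:      $(uname -r)"
    echo "container kernel: $(docker run --rm alpine uname -r)"
    # On a Linux host, both lines show the same kernel release.
  else
    echo "Docker daemon not available; skipping kernel comparison"
  fi
} | tee kernel_demo.out
```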
You can use containers for packaging an application with all the components it needs, then ship it all out as one unit. This approach is popular because it eliminates the friction between development, QA, and production environments, enabling faster software shipping. Building and deploying applications inside software containers eliminates “works on my machine” problems when collaborating on code with fellow developers.
The applications can also run on any infrastructure and in any cloud. You can isolate applications and their underlying infrastructure from other applications.
Docker images vs. containers
A Docker image supplies the code that executes inside a Docker container. To create a running container, Docker adds a writable layer on top of the image's read-only layers.
Think of a Docker container as a running image instance. You can create many containers from the same image, each with its own unique data and state.
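Running several containers from one image might look like this; the container names are illustrative, and the commands only run when a daemon is reachable:

```shell
#!/bin/sh
# Two independent containers created from the same image.
{
  if docker info >/dev/null 2>&1; then
    docker run -d --name web1 nginx:alpine
    docker run -d --name web2 nginx:alpine

    # Each container has its own writable layer, so a change made
    # in one is invisible to the other.
    docker exec web1 sh -c 'echo one > /state.txt'
    docker exec web2 sh -c 'echo two > /state.txt'
    docker exec web1 cat /state.txt
    docker exec web2 cat /state.txt

    docker rm -f web1 web2
  else
    echo "Docker daemon not available; skipping container demo"
  fi
  echo "demo finished"
} > multi_container_demo.out 2>&1
cat multi_container_demo.out
```

The shared nginx:alpine image is stored once; only each container's writable layer differs.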
Although images are not the only way to create containers, they are a common method.
A key benefit of adopting containers is the standardization and simplification of development, operations, and testing. However, for teams to take full advantage of containers, they need to make sure that developers, operations engineers, and testers create consistent environments.
A continuous integration and continuous deployment (CI/CD) pipeline can build, test, and package containers. Deployment then distributes that container to a runtime environment where it can execute as part of an application.
A continuous integration solution like CircleCI enables developers to automate builds, tests, and deployments. CircleCI can use Docker containers to make deploying your applications to multiple environments easier.
For example, CircleCI can build Docker images and push them to a container image registry like Docker Hub. From there, it can instantiate the images into containers in Kubernetes, OpenShift, or elsewhere.
The flow works like this:
- You commit changes to your Git repo.
- This commit triggers a CircleCI build job that checks out the source code from Git and runs unit tests on the code.
- If the unit tests pass, CircleCI pushes the built image to Docker Hub.
- If the unit tests fail, CircleCI alerts the developer and stops the workflow.
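The flow above might be expressed as a CircleCI configuration along these lines. The job name, the cimg/base executor, the image name, and the DOCKERHUB_* environment variables are assumptions for illustration, not the exact setup your project needs; the sketch writes the file so you can inspect it:

```shell
#!/bin/sh
# Sketch a CircleCI pipeline: build the image, run unit tests,
# and push to Docker Hub only when the tests pass.
mkdir -p .circleci
cat > .circleci/config.yml <<'EOF'
version: 2.1
jobs:
  build-test-push:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker
      - run: docker build -t mydockerhubuser/my-app:$CIRCLE_SHA1 .
      # A failing test step stops the job, so the push never runs.
      - run: docker run --rm mydockerhubuser/my-app:$CIRCLE_SHA1 ./run-unit-tests.sh
      - run: |
          echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
          docker push mydockerhubuser/my-app:$CIRCLE_SHA1
workflows:
  main:
    jobs:
      - build-test-push
EOF
echo "wrote .circleci/config.yml"
```

Because steps run in order and any failure halts the job, the push step doubles as the "only if tests pass" gate from the flow above.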
Features like Docker layer caching and test splitting can also help you build and test your images faster, shortening your time to deployment. For more on the benefits of using a continuous integration platform to orchestrate your Docker builds, tests, and deploys, visit How to build a CI/CD pipeline with Docker.
Although Docker images and containers have a similar purpose, they have different uses. An image is a snapshot of an environment, and a container runs the software.
Both containers and images allow users to specify application dependencies and configurations and to describe everything necessary for a machine to run that application. However, containers and images have different lifecycles. For example, you can use containers, but not images, on container-based platforms like Pivotal Cloud Foundry. Likewise, you can use images, but not containers, in non-container systems like Heroku.
It is not a matter of choosing containers or images. They rely on each other, and you need both to work with Docker.
Now that you understand the nuances between Docker images and containers, you can make the most of the Docker platform. When working with Docker, automation helps you to integrate quickly and free up developer time to create new application features. Learn more about how CircleCI’s Docker and Kubernetes integrations can improve the efficiency of your software development process.