In computing, virtualization is the creation of virtual, rather than physical, versions of computer hardware platforms, storage devices, and network resources. Virtualization is the technology on which many popular development tools, including containers and virtual machines (VMs), are built. A container is a process or group of processes isolated from other programs; it has no awareness of other processes running on the host OS and has only limited access to hardware. VMs, in contrast, are built on top of a software layer called a hypervisor and run a whole, separate OS. From building to testing to deployment to vulnerability management, containers and VMs help keep your CI/CD operations running smoothly.

In this article, you will learn about the differences between containers and VMs, the important role each plays in software development, and how you can incorporate containers and VMs into your development pipelines.

What is a container?

A container is a process or group of processes isolated from other programs on a shared kernel. A kernel is the program at the core of a computer's operating system (OS), and it generally has complete control over everything in the system. A container acts as a guest OS with virtualized resources: it has no awareness of other processes running on the host OS. And unlike the host OS, which has direct access to hardware, a guest OS has only limited access to hardware, just like any other application running on the host.

There are two types of containers: application containers and system containers. System containers are much less common now, but were popular before Docker made containers commonplace in 2015. When discussing containers, people most often mean application containers.

Containers are built from read-only templates called images that are pulled from a central repository to run on a host machine.
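For example, with Docker you can pull an image from Docker Hub (the default central repository) and start a container from it in two commands. This is a minimal sketch that assumes Docker is installed and uses the public nginx image:

    # Pull the read-only nginx image from Docker Hub
    docker pull nginx:1.25

    # Start a container from that image, mapping host port 8080 to container port 80
    docker run --detach --name web --publish 8080:80 nginx:1.25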

Most containers run a single process or application, although they can handle more than one. Containers usually run on Linux machines, where each container gets its own namespace. These containers neither depend on nor directly interact with other processes on the machine running them. (Implementations on other operating systems follow similar architectural principles.) Any communication between the container and other processes or devices goes through an additional software interface.
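You can see this namespace isolation directly on a Linux machine with the unshare utility from util-linux. In this sketch, a process started in a new PID namespace sees only itself, numbered from PID 1:

    # Start a new PID namespace with its own /proc and list the processes in it
    # (requires root; --fork is needed so the child lands in the new namespace)
    sudo unshare --pid --fork --mount-proc ps -ef

The output lists only the ps process itself, because nothing else exists inside that namespace.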

Although containers encapsulate the files and binaries they need to run, they require a container engine to run on. Several essential tools allow users to create and manage containers quickly.

Docker, the most popular container system, uses a daemon to create and manage containers. Other container systems like Buildah and Kaniko offer daemon-less architecture. Daemon-less container building may offer greater security because Buildah and Kaniko do not need root access for their full functionality. However, these tools alone cannot run or manage containers. Developers may prefer tools like Docker and Podman because they let users build images and run and configure containers.
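To make the difference concrete, here is the same image built with Docker's daemon-backed CLI and then with Podman's daemon-less one. This is a sketch assuming both tools are installed and a Dockerfile sits in the current directory, with myapp as a placeholder name:

    # Docker sends the build context to the dockerd daemon, which builds the image
    docker build --tag myapp:latest .

    # Podman builds the same image directly, with no daemon in between
    podman build --tag myapp:latest .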

Containers are useful because they are highly portable. Most common container engines run in multiple environments and are lightweight in their resource use. Because containers package all the dependencies they need, they run consistently on any compatible system. That means that once you build an application from one or more containers, you can deploy it on many different systems.

This portability makes containers perfect for large organizations that require computer networks to run identical software. Containers are also used to conduct parallel tests as a part of a continuous integration and continuous deployment (CI/CD) pipeline. Containers can also run repeated tasks simultaneously and divide individual processes of a larger application into individual microservices.
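In CircleCI, for example, splitting a test job across identical containers takes a single parallelism key. This is a minimal sketch; the Node image and test commands are placeholders for your own project:

    version: 2.1
    jobs:
      test:
        docker:
          - image: cimg/node:20.11
        parallelism: 4        # run this job in 4 identical containers at once
        steps:
          - checkout
          - run: npm ci
          - run: npm test     # split the suite with the CircleCI CLI if needed
    workflows:
      main:
        jobs:
          - test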

What is a VM?

Like a container, a VM is a piece of software used for virtualization. But unlike containers, VMs must be built on top of a software layer called a hypervisor. VMs run a whole, separate OS, meaning that a single VM can handle a wide range of tasks and execute them all simultaneously. As a result, VMs suit a broader set of use cases.

Because VMs create a complete computing environment, you can install new software on them and change their code down to the OS level. You can even snapshot the VM in a given state to roll it back to that configuration should there be issues later. Like containers, VMs are separate from other software on the same piece of hardware, making them perfect environments for software testing.
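With VirtualBox, for instance, snapshotting and rolling back is a two-command affair. This sketch assumes a VM named test-env already exists:

    # Take a named snapshot of the VM in its current state
    VBoxManage snapshot test-env take clean-install

    # Later, roll the VM back to exactly that configuration
    VBoxManage snapshot test-env restore clean-install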

Because resources are allocated to VMs below the level of the guest OS, a malicious application that compromises one VM is unlikely to affect the host OS or access the machine's firmware. That keeps other VMs running on the same machine safe. And because a VM can use a different OS than its host device, you can use VMs to test software in different environments.

There are many tools available to build and manage VMs. The most essential are hypervisors, which govern access to the underlying resources for one or more VMs. Other tools help users create and manage many VMs simultaneously. Some developers use pre-configured VMs to ensure they are set up correctly and have all the basic programs they need to start.
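Vagrant is one popular tool for scripting that setup. A minimal Vagrantfile like this sketch pulls a pre-configured Ubuntu box from Vagrant Cloud and boots it with vagrant up; the box name and memory size are assumptions you would adjust:

    # Vagrantfile: defines a VM built from a pre-configured base box
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/jammy64"    # pre-built Ubuntu 22.04 image
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 2048                  # give the guest 2 GB of RAM
      end
    end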

Containers vs VMs: Which should you use?

VMs and containers are both powerful technologies with specific use cases. Both provide isolated environments to run processes securely, but they differ in their specific purposes.

A containerized application has more direct access to hardware than an application running in a VM, making containers well suited to lightweight use cases. If you are looking to run a single process in multiple separate instances, or to run many different processes in isolation from each other, it makes sense to go with containers. Their small resource footprints let them start up quickly and run at scale.

Organizations concerned about security can inspect a container image to see exactly what it will do before creating the container itself. That transparency makes containers easy to scan, but image reuse comes at a price: you must scan shared images for vulnerabilities, or you risk replicating those vulnerabilities in every system where they are used.
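Open source scanners such as Trivy make that check easy to automate. In this sketch, the image name is a placeholder:

    # Scan an image for known vulnerabilities before running containers from it
    trivy image myapp:latest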

It is also easy to update every instance of a containerized application. You make an updated version of the container image and create new containers from that updated image. Then you can delete out-of-date and less secure containers without affecting other processes. You can even automate the update process and rely on containers’ fast start-up times to ensure that updates happen quickly every time.
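In its simplest form, that rolling replacement looks like the sketch below, assuming Docker and a running container named myapp:

    # Build the updated image, then swap the old container for a new one
    docker build --tag myapp:1.1 .
    docker stop myapp && docker rm myapp
    docker run --detach --name myapp myapp:1.1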

Containers can make some complex tasks simpler. For example, you can build a CI/CD pipeline easily with a Docker image and a CircleCI config file. The pipeline makes it easy to quickly test an image and then push it to Docker Hub or another container registry. That lets you move quickly through the CI/CD pipeline from building to deployment.
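A minimal version of that pipeline might look like the following sketch, which builds an image and pushes it to Docker Hub. DOCKER_USER and DOCKER_PASS are assumed to be set as project environment variables:

    version: 2.1
    jobs:
      build-and-push:
        docker:
          - image: cimg/base:2024.01
        steps:
          - checkout
          - setup_remote_docker    # provides an environment for Docker commands
          - run: docker build --tag $DOCKER_USER/myapp:$CIRCLE_SHA1 .
          - run: echo $DOCKER_PASS | docker login --username $DOCKER_USER --password-stdin
          - run: docker push $DOCKER_USER/myapp:$CIRCLE_SHA1
    workflows:
      main:
        jobs:
          - build-and-push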

Containers also facilitate the use of microservices. Microservices split larger applications into bite-sized processes, giving users greater flexibility and keeping those processes securely separated. Running microservices on VMs would mean either spinning up a separate VM for every microservice, an inefficient allocation of resources, or running multiple services on the same VM, which forfeits the benefit of isolation.

Despite their popularity, containers have not replaced VMs completely. In many cases, containers complement the use of VMs. If you want to test an application that may compromise your entire OS or need to share hardware between services running on different operating systems, you need a VM. Because all your containers on a given machine share the same kernel, it is easier for malicious code to compromise the entire machine.

Because containers depend on the kernel they run on, it is challenging to make OS-level changes from within a container. For example, if you want to modify a kernel parameter with a sysctl command from within a container, you must give the container a level of privileges that would negate any security benefits gained from isolation.
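For instance, a kernel parameter that is read-only inside an ordinary container can only be written by granting sweeping privileges. The parameter below is just an illustration:

    # Fails in a default container: the kernel's /proc/sys is read-only
    docker run --rm alpine sysctl -w vm.max_map_count=262144

    # Succeeds, but only by giving the container near-host-level access
    docker run --rm --privileged alpine sysctl -w vm.max_map_count=262144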

Instead, you can use a machine executor to do it from within a VM. That is possible because a VM’s OS is installed within the VM, whereas the kernel that a container depends on is outside the container.
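In CircleCI terms, that means running the job on the machine executor, which gives your steps a dedicated VM with full sudo access. This sketch uses one of CircleCI's published Ubuntu machine images:

    version: 2.1
    jobs:
      kernel-tweak:
        machine:
          image: ubuntu-2204:current   # a full VM, so kernel changes are allowed
        steps:
          - checkout
          - run: sudo sysctl -w vm.max_map_count=262144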

Using containers and VMs in CI/CD pipelines

As previously discussed, you can use containers to build a CI/CD pipeline for your code.

One of the benefits of using containers and VMs in CI/CD is standardization. When multiple developers contribute code to a program, they are less likely to face integration issues if they all work in identical environments.

Teams can also deploy containers inside VMs, letting them build in identical environments while retaining control of resources below the OS level. Cloned build environments also make it possible to include greater automation, helping your CI/CD pipeline work as smoothly as possible.

Containers can also facilitate CI/CD automation by making it easy to move and deploy code between machines. You can package a piece of code as a container image and push it to a registry, where any machine can pull and run it. Companies like Wit.ai integrate containers into their CI/CD processes to automate their testing and deployment pipelines for maximum efficiency.

CI/CD and containers also work well together for vulnerability management with Docker and for automating the building of patched containers. For example, instead of releasing extensive updates with less well understood effects on all parts of your system, you can use containers in an automated, ongoing process of frequent, incremental patch testing. That makes managing vulnerabilities fast, smooth, and easy to distribute among team members.
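One way to automate that cadence is a scheduled workflow that rebuilds and rescans your images on a timer. This sketch reuses the hypothetical build-and-push job from earlier and assumes a separate scan job exists:

    workflows:
      nightly-patch-check:
        triggers:
          - schedule:
              cron: "0 2 * * *"      # rebuild and rescan every night at 02:00 UTC
              filters:
                branches:
                  only: main
        jobs:
          - build-and-push
          - scan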

VMs and containers contribute to a CI/CD pipeline differently. One specific way they can work together is by using a VM as a machine executor to run more demanding containers. People already using a Docker executor may need to migrate from Docker to a machine executor. However, the machine executor’s more isolated environment and greater access to system resources can make this a worthwhile effort.

Conclusion

Containers and VMs are two critical tools for virtualizing your programs. Which you use depends on what you need to do, and both make significant contributions to the management of your CI/CD pipeline.

By combining the security of virtual machines with the efficiency of containers, you can take advantage of all the benefits of virtualization. Using VMs to secure your applications and containers to move and deploy code between machines lets you draw on the strengths of both technologies.