Choosing an Executor Type
This document describes the available executor types (docker, machine, macos, and windows) in the following sections:
- Using Docker
- Using machine
- Using macOS
- Using the Windows executor
- Using GPUs
- See also
An executor type defines the underlying technology or environment in which to run a job. CircleCI enables you to run jobs in one of four environments:
- Within Docker images (`docker`)
- Within a Linux virtual machine (VM) image (`machine`)
- Within a macOS VM image (`macos`)
- Within a Windows VM image (`windows`)
It is possible to specify a different executor type for every job in your .circleci/config.yml by specifying the executor type and an appropriate image. An image is a packaged system that has the instructions for creating a running environment. A container or virtual machine is the term used for a running instance of an image. For example:
- Jobs that require Docker images (`docker`) may use an image for Node.js or Python, for example. The pre-built CircleCI Docker images on Docker Hub help you get started quickly without learning all about Docker. These images are not a full operating system, so they will generally make building your software more efficient.
- Jobs that require a complete Linux virtual machine (VM) image (`machine`) may use an Ubuntu version from the list of available machine images.
- Jobs that require a macOS VM image (`macos`) may use an Xcode version such as 12.5.1.
Using Docker
The `docker` key defines Docker as the underlying technology to run your jobs in Docker containers. Containers are instances of the Docker image you specify, and the first image listed in your configuration is the primary container image in which all steps run. If you are new to Docker, see the Docker Overview documentation for concepts.
Docker increases performance by building only what is required for your application. Specify a Docker image in your `.circleci/config.yml` file to generate the primary container where all steps run:
```yaml
jobs:
  build:
    docker:
      - image: cimg/node:lts
```
In this example, all steps run in the container created by the first image listed under the `docker` key.
To make the transition easy, CircleCI maintains convenience images on Docker Hub for popular languages. See Using Pre-Built CircleCI Docker Images for the complete list of names and tags. If you need a Docker image that installs Docker and has Git, consider one of the CircleCI convenience images.
Docker image best practices
If you encounter problems with rate limits imposed by your registry provider, using authenticated Docker pulls may grant higher limits.
CircleCI has partnered with Docker to ensure that our users can continue to access Docker Hub without rate limits. As of November 1st 2020, with few exceptions, you should not be impacted by any rate limits when pulling images from Docker Hub through CircleCI. However, these rate limits may go into effect for CircleCI users in the future. We encourage you to add Docker Hub authentication to your CircleCI configuration and consider upgrading your Docker Hub plan, as appropriate, to prevent any impact from rate limits in the future.
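As a sketch, authenticated pulls are configured by adding an `auth` block under the image; the username and the environment-variable name shown here are placeholders:

```yaml
jobs:
  build:
    docker:
      - image: cimg/node:lts
        auth:
          username: mydockerhub-user # placeholder Docker Hub username
          password: $DOCKERHUB_PASSWORD # env var set in a context or in project settings
```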
Avoid using mutable tags like `latest` or `1` as the image version in your `config.yml` file. It is best practice to use precise image versions or digests, like `redis@sha256:95f0c9434f37db0a4f...` as shown in the examples. Mutable tags often lead to unexpected changes in your job environment. CircleCI cannot guarantee that mutable tags will return an up-to-date version of an image. You could specify `alpine:latest` and actually get a stale cache from a month ago.
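For example, pinning a precise tag keeps the job environment stable; the exact version below is illustrative:

```yaml
jobs:
  build:
    docker:
      # Pin a precise version instead of a mutable tag like alpine:latest
      - image: alpine:3.16
```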
If you experience increases in your run times due to installing additional tools during execution, consider creating and using a custom-built image that comes with those tools pre-installed. See the Using Custom-Built Docker Images page for more information.
When you use AWS ECR images, it is best practice to use the `us-east-1` region. Our job execution infrastructure is in the `us-east-1` region, so hosting your image in the same region reduces the image download time.
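An ECR image reference in `us-east-1` might look like the following (the account ID and repository name are hypothetical):

```yaml
jobs:
  build:
    docker:
      # Image hosted in ECR in us-east-1, the same region as CircleCI's job infrastructure
      - image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:1.0.0
```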
In the event that your pipelines are failing despite there being little to no changes in your project, you may need to investigate upstream issues with the Docker images being used.
More details on the Docker executor are available in the Configuring CircleCI document.
Using multiple Docker images
It is possible to specify multiple images for your job. Specify multiple images if, for example, you need to use a database for your tests or some other required service. In a multi-image configuration job, all steps are executed in the container created by the first image listed. All containers run in a common network, and every exposed port is available on `localhost` from the primary container.
```yaml
jobs:
  build:
    docker:
      # Primary container image where all steps run.
      - image: cimg/base:current
      # Secondary container image on common network.
      - image: cimg/mariadb:10.6
    steps:
      # command will execute in an Ubuntu-based container
      # and can access MariaDB on localhost
      - run: sleep 5 && nc -vz localhost 3306
```
Docker images may be specified in a few ways:
- By the image name and version tag on Docker Hub, or
- By using the URL to an image in a registry.
The following examples show how you can use public images from various sources:
- CircleCI’s public convenience images on Docker Hub
- Public images on Docker Hub
- Public images on Docker registries
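Sketching these three sources in config form (the registry host and the image names other than the convenience image are illustrative):

```yaml
docker:
  # CircleCI convenience image on Docker Hub
  - image: cimg/base:current
  # Public image on Docker Hub, by name and tag
  - image: ubuntu:20.04
  # Public image on another registry, by URL
  - image: registry.example.com/myorg/myimage:1.2.3
```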
Nearly all of the public images on Docker Hub and other Docker registries are supported by default when you specify the `docker:` key in your `config.yml` file. If you want to work with private images/registries, please refer to Using Docker Authenticated Pulls.
RAM disks
A RAM disk is available at `/mnt/ramdisk` that offers temporary file storage, similar to using `/dev/shm`. Using the RAM disk can help speed up your build, provided that the `resource_class` you are using has enough memory to fit the entire contents of your project (all files checked out from Git, dependencies, generated assets, and so on).
The simplest way to use this RAM disk is to configure the `working_directory` of a job to be `/mnt/ramdisk`:
```yaml
jobs:
  build:
    docker:
      - image: alpine
    working_directory: /mnt/ramdisk
    steps:
      - run: |
          echo '#!/bin/sh' > run.sh
          echo 'echo Hello world!' >> run.sh
          chmod +x run.sh
      - run: ./run.sh
```
Docker benefits and limitations
Docker also has built-in image caching and enables you to build, run, and publish Docker images via Remote Docker. Consider the requirements of your application as well. If the following are true for your application, Docker may be the right choice:
- Your application is self-sufficient.
- Your application requires additional services to be tested.
- Your application is distributed as a Docker image (requires using Remote Docker).
- You want to use `docker-compose` (requires using Remote Docker).
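As an illustrative sketch, using `docker-compose` from a Docker job requires the `setup_remote_docker` step; the compose file is assumed to exist in the repository:

```yaml
jobs:
  build:
    docker:
      - image: cimg/base:current
    steps:
      - checkout
      # Provisions a separate remote Docker engine for docker/docker-compose commands
      - setup_remote_docker
      - run: docker-compose up -d # assumes a docker-compose.yml at the project root
```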
Choosing Docker limits your runs to what is possible from within a Docker container (including our Remote Docker feature). For instance, if you require low-level access to the network or need to mount external volumes, consider using `machine`.
There are tradeoffs to using a `docker` image versus an Ubuntu-based `machine` image as the environment for the container, as follows:
| Capability | `docker` | `machine` |
|---|---|---|
| Start time | Instant | 30-60 sec |
| Custom images | Yes (1) | No |
| Build Docker images | Yes (2) | Yes |
| Full control over job environment | No | Yes |
| Full root access | No | Yes |
| Run multiple databases | Yes (3) | Yes |
| Run multiple versions of the same software | No | Yes |
| Docker layer caching | Yes | Yes |
| Run privileged containers | No | Yes |
| Use docker compose with volumes | No | Yes |
| Configurable resources (CPU/RAM) | Yes | Yes |
(1) See Using Custom Docker Images.
(2) Requires using Remote Docker.
(3) While you can run multiple databases with Docker, all images (primary and secondary) share the underlying resource limits. Performance in this regard will be dictated by the compute capacities of your container plan.
For more information on `machine`, see the next section below.
Caching Docker images
This section discusses caching the Docker images used to spin up a Docker execution environment. It does not apply to Docker layer caching, which is a feature used to speed up building new Docker images in your projects.
The time it takes to spin up a Docker container to run a job can vary based on several different factors, such as the size of the image and if some, or all, of the layers are already cached on the underlying Docker host machine.
If you are using a more popular image, such as CircleCI convenience images, then cache hits are more likely for a larger number of layers. Most of the popular CircleCI images use the same base image. The majority of the base layers are the same between images, so you have a greater chance of having a cache hit.
The environment has to spin up for every new job, regardless of whether it is in the same workflow or if it is a re-run/subsequent run. (CircleCI never reuses containers, for security reasons.) Once the job is finished, the container is destroyed. There is no guarantee that jobs, even in the same workflow, will run on the same Docker host machine. This implies that the cache status may differ.
In all cases, cache hits are not guaranteed, but are a bonus convenience when available. With this in mind, a worst-case scenario of a full image pull should be accounted for in all jobs.
In summary, the availability of caching is not something that can be controlled via settings or configuration, but by choosing a popular image, such as CircleCI convenience images, you will have more chances of hitting cached layers in the “Spin Up Environment” step.
Available Docker resource classes
The `resource_class` key allows you to configure CPU and RAM resources for each job. In Docker, the following resource classes are available:
Where example usage looks like the following:
```yaml
jobs:
  build:
    docker:
      - image: cimg/base:current
    resource_class: xlarge
    steps:
      # ... other config
```
Using machine
Ubuntu 14.04 and 16.04 machine images are deprecated and will be removed permanently on May 31, 2022. These images will be temporarily unavailable on March 29 and April 26, 2022. Migrate from 14.04 or 16.04.
The `machine` option runs your jobs in a dedicated, ephemeral VM that has the following specifications:
The `machine` executor gives your application full access to OS resources and provides you with full control over the job environment. This control can be useful in situations where you need full access to the network stack, for example to listen on a network interface, or to modify the system with `sysctl` commands. To find out about migrating a project from the Docker executor to `machine`, see the Executor Migration from Docker to Machine document.
Using the `machine` executor also means that you get full access to the Docker process. This allows you to run privileged Docker containers and build new Docker images.
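For illustration, a `machine` job can modify kernel settings and drive Docker directly; the image name and values below are examples only:

```yaml
jobs:
  build:
    machine:
      image: ubuntu-2004:current
    steps:
      - checkout
      # Full root access: tune kernel parameters with sysctl
      - run: sudo sysctl -w net.core.somaxconn=1024
      # Full access to the Docker process: build an image without Remote Docker
      - run: docker build -t my-app:ci . # image name is hypothetical
```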
To use the machine executor, set the `machine` key in your `.circleci/config.yml`:
```yaml
version: 2.1
jobs:
  build:
    machine:
      image: ubuntu-2004:current
    resource_class: large
```
```yaml
version: 2.1
jobs:
  build:
    machine: true
```
You can view the list of available images here.
The following example uses an image and enables Docker layer caching (DLC) which is useful when you are building Docker images during your job or workflow.
```yaml
machine:
  image: ubuntu-2004:202104-01
  docker_layer_caching: true # default - false
```
The IP range `192.168.53.0/24` is reserved by CircleCI for internal use on the machine executor. This range should not be used in your jobs.
Using macOS
The `macos` executor allows you to run your job in a macOS environment on a VM. In macOS, the following resource classes are available:
| Class | vCPU | RAM |
|---|---|---|
| `medium` | 4 @ 2.7 GHz | 8 GB |
| `macos.x86.medium.gen2` | 4 @ 3.2 GHz | 8 GB |
| `large` | 8 @ 2.7 GHz | 16 GB |
| `macos.x86.metal.gen1` (1) | 12 @ 3.2 GHz | 32 GB |
You can also specify which version of Xcode should be used. See the Supported Xcode Versions section of the Testing iOS document for the complete list of version numbers and information about technical specifications for the VMs running each particular version of Xcode.
```yaml
jobs:
  build:
    macos:
      xcode: 12.5.1
    steps:
      # Commands will execute in macOS container
      # with Xcode 12.5.1 installed
      - run: xcodebuild -version
```
macOS VM Storage
The amount of available storage on our macOS VMs depends on the resource class and Xcode image being used. The size of the Xcode images varies based on which tools are pre-installed.
| Xcode Version | Class | Minimum Available Storage |
|---|---|---|
(1) This resource requires a minimum 24-hour lease. See the Dedicated Host for macOS page to learn more about this resource class.
(2) Exception: Xcode 12.0.1, 12.4.0, and 12.5.1 have a minimum of 100 GB of available storage.
Using the Windows executor
The `windows` executor allows you to run your job in a Windows environment. The following is an example configuration that will run a simple Windows job. The syntax for using the Windows executor in your config differs depending on whether you are using:
- CircleCI Cloud – config version 2.1.
- Self-hosted installation of CircleCI server with config version 2.0 – this option is an instance of using the `machine` executor with a Windows image – introduced in CircleCI server v2.18.3.
```yaml
version: 2.1 # Use version 2.1 to enable orb usage.

orbs:
  win: circleci/windows@2.2.0 # The Windows orb gives you everything you need to start using the Windows executor.

jobs:
  build: # name of your job
    executor: win/default # executor type
    steps:
      # Commands are run in a Windows virtual machine environment
      - checkout
      - run: Write-Host 'Hello, Windows'
```
```yaml
version: 2

jobs:
  build: # name of your job
    machine:
      image: windows-default
    resource_class: windows.medium
    steps:
      # Commands are run in a Windows virtual machine environment
      - checkout
      - run: Write-Host 'Hello, Windows'
```
Cloud users will notice the Windows orb is used to set up the Windows executor to simplify the configuration. See the Windows orb details page for more details.
CircleCI server users should contact their system administrator for specific information about the image used for Windows jobs. The Windows image is configured by the system administrator, and in the CircleCI config it is always available as the `windows-default` image name.
Using GPUs
CircleCI Cloud has execution environments with Nvidia GPUs for specialized workloads. The hardware is the Nvidia Tesla T4 Tensor Core GPU, and our GPU executors come in both Linux and Windows VMs.
```yaml
version: 2.1

jobs:
  build:
    machine:
      resource_class: gpu.nvidia.small
      image: ubuntu-1604-cuda-10.1:201909-23
    steps:
      - run: nvidia-smi
```
```yaml
version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/gpu-nvidia
    steps:
      - run: '&"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"'
```
Customers using CircleCI server can configure their VM service to use GPU-enabled machine executors. See Running GPU Executors in Server.
Help make this document better
This guide, as well as the rest of our docs, is open source and available on GitHub. We welcome your contributions.
- Suggest an edit to this page (please read the contributing guide first).
- To report a problem in the documentation, or to submit feedback and comments, please open an issue on GitHub.
- CircleCI is always seeking ways to improve your experience with our platform. If you would like to share feedback, please join our research community.
Our support engineers are available to help with service issues, billing, or account related questions, and can help troubleshoot build configurations. Contact our support engineers by opening a ticket.
You can also visit our support site to find support articles, community forums, and training resources.
CircleCI Documentation by CircleCI is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.