Choosing an Executor Type
This document describes the available executor types (`docker`, `machine`, `macos`, and `windows`) in the following sections:
- Using Docker
- Using machine
- Using macOS
- Using the Windows executor
- Using GPUs
- See also
An executor type defines the underlying technology or environment in which to run a job. CircleCI enables you to run jobs in one of four environments:
- Within Docker images (`docker`)
- Within a Linux virtual machine (VM) image (`machine`)
- Within a macOS VM image (`macos`)
- Within a Windows VM image (`windows`)
It is possible to specify a different executor type for every job in your `.circleci/config.yml` by specifying the executor type and an appropriate image. An image is a packaged system that has the instructions for creating a running environment. A container or virtual machine is the term used for a running instance of an image. For example, you could specify an executor type and an image for every job:
- Jobs that require Docker images (`docker`) may use an image for Node.js or Python. The pre-built CircleCI Docker images on Docker Hub help you get started quickly without learning all about Docker. These images are not a full operating system, so they will generally make building your software more efficient.
- Jobs that require a complete Linux virtual machine (VM) image (`machine`) may use an Ubuntu version such as 16.04.
- Jobs that require a macOS VM image (`macos`) may use an Xcode version such as 10.0.0.
The `docker` key defines Docker as the underlying technology to run your jobs in Docker containers. Containers are an instance of the Docker image you specify, and the first image listed in your configuration is the primary container image in which all steps run. If you are new to Docker, see the Docker Overview documentation for concepts.
Docker increases performance by building only what is required for your application. Specify a Docker image in your `.circleci/config.yml` file to generate the primary container where all steps run:
```yaml
jobs:
  build:
    docker:
      - image: buildpack-deps:trusty
```
In this example, all steps run in the container created by the first image listed under the `build` job. To make the transition easy, CircleCI maintains convenience images on Docker Hub for popular languages. See Using Pre-Built CircleCI Docker Images for the complete list of names and tags. If you need a Docker image that installs Docker and has Git, consider using `docker:stable-git`, which is an official Docker image.
Docker image best practices
If you encounter problems with rate limits imposed by your registry provider, using authenticated Docker pulls may grant higher limits.
CircleCI has partnered with Docker to ensure that our users can continue to access Docker Hub without rate limits. As of November 1st 2020, with few exceptions, you should not be impacted by any rate limits when pulling images from Docker Hub through CircleCI. However, these rate limits may go into effect for CircleCI users in the future. That’s why we’re encouraging you and your team to add Docker Hub authentication to your CircleCI configuration and consider upgrading your Docker Hub plan, as appropriate, to prevent any impact from rate limits in the future.
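As a sketch, adding Docker Hub authentication is a small addition to the `docker` section of your config; the environment variable names here are placeholders you would set in project settings or a context:

```yaml
jobs:
  build:
    docker:
      - image: buildpack-deps:trusty
        auth:
          username: $DOCKERHUB_USERNAME # placeholder environment variable
          password: $DOCKERHUB_PASSWORD # placeholder environment variable
```

Store the credentials as environment variables rather than committing them to the repository.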
Avoid using mutable tags like `latest` or `1` as the image version in your `config.yml` file. It is best practice to use precise image versions or digests, like `redis@sha256:95f0c9434f37db0a4f...` as shown in the examples. Mutable tags often lead to unexpected changes in your job environment. CircleCI cannot guarantee that mutable tags will return an up-to-date version of an image. You could specify `alpine:latest` and actually get a stale cache from a month ago.
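The contrast can be sketched in config; the pinned version below is illustrative, and the digest is truncated as in the example above:

```yaml
jobs:
  build:
    docker:
      # Avoid: a mutable tag that may resolve to different images over time
      # - image: alpine:latest
      # Better: a precise version tag (illustrative version number)
      - image: redis:3.2.7
      # Best: an immutable digest (truncated here)
      # - image: redis@sha256:95f0c9434f37db0a4f...
```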
If you experience increased run times due to installing additional tools during execution, it is best practice to follow the Building Custom Docker Images documentation to create a custom image with the required tools pre-loaded in the container to meet the job requirements.
When you use AWS ECR images, it is best practice to use the `us-east-1` region. Our job execution infrastructure is in the `us-east-1` region, so hosting your image in the same region reduces the image download time.
If your pipelines are failing despite little to no change in your project, you may need to investigate upstream issues with the Docker images being used.
More details on the Docker Executor are available in the Configuring CircleCI document.
Using multiple Docker images
It is possible to specify multiple images for your job. Specify multiple images if, for example, you need to use a database for your tests or for some other required service. In a multi-image configuration job, all steps are executed in the container created by the first image listed. All containers run in a common network, and every exposed port will be available on `localhost` from the primary container.
```yaml
jobs:
  build:
    docker:
      # Primary container image where all steps run.
      - image: buildpack-deps:trusty
      # Secondary container image on common network.
      - image: mongo:2.6.8-jessie
        command: [mongod, --smallfiles]
    working_directory: ~/
    steps:
      # command will execute in trusty container
      # and can access mongo on localhost
      - run: sleep 5 && nc -vz localhost 27017
```
Docker images may be specified in three ways: by the image name and version tag on Docker Hub, or by the URL to an image in a registry:

- Public convenience images on Docker Hub
- Public images on Docker Hub
- Public Docker registries
Nearly all of the public images on Docker Hub and other Docker registries are supported by default when you specify the `docker:` key in your `config.yml` file. If you want to work with private images/registries, please refer to Using Docker Authenticated Pulls.
A RAM disk is available at `/mnt/ramdisk`, offering temporary file storage similar to `/dev/shm`. Using the RAM disk can help speed up your build, provided that the `resource_class` you are using has enough memory to fit the entire contents of your project (all files checked out from git, dependencies, generated assets, etc.).
The simplest way to use this RAM disk is to configure the `working_directory` of a job to be `/mnt/ramdisk`:
```yaml
jobs:
  build:
    docker:
      - image: alpine
    working_directory: /mnt/ramdisk
    steps:
      - run: |
          echo '#!/bin/sh' > run.sh
          echo 'echo Hello world!' >> run.sh
          chmod +x run.sh
      - run: ./run.sh
```
Docker benefits and limitations
Docker also has built-in image caching and enables you to build, run, and publish Docker images via Remote Docker. Consider the requirements of your application as well. If the following are true for your application, Docker may be the right choice:
- Your application is self-sufficient
- Your application requires additional services to be tested
- Your application is distributed as a Docker Image (requires using Remote Docker)
- You want to use `docker-compose` (requires using Remote Docker)
Choosing Docker limits your runs to what is possible from within a Docker container (including our Remote Docker feature). For instance, if you require low-level access to the network or need to mount external volumes, consider using `machine`.
There are tradeoffs to using a `docker` image versus an Ubuntu-based `machine` image as the environment for the container, as follows:
| Capability | `docker` | `machine` |
| --- | --- | --- |
| Start time | Instant | 30-60 sec |
| Custom images | Yes (1) | No |
| Build Docker images | Yes (2) | Yes |
| Full control over job environment | No | Yes |
| Full root access | No | Yes |
| Run multiple databases | Yes (3) | Yes |
| Run multiple versions of the same software | No | Yes |
| Docker Layer Caching | Yes | Yes |
| Run privileged containers | No | Yes |
| Use docker compose with volumes | No | Yes |
| Configurable resources (CPU/RAM) | Yes | Yes |
(1) See Using Custom Docker Images.
(2) Requires using Remote Docker.
(3) While you can run multiple databases with Docker, all images (primary and secondary) share the underlying resource limits. Performance in this regard will be dictated by the compute capacities of your container plan.
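As a sketch of note (3), running multiple databases just means listing multiple secondary images; all of them share the job's resource limits. The database versions below are illustrative:

```yaml
jobs:
  build:
    docker:
      - image: buildpack-deps:trusty # primary container, where steps run
      - image: postgres:9.6          # reachable on localhost:5432
      - image: mongo:2.6.8-jessie    # reachable on localhost:27017
```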
For more information on `machine`, see the next section below.
Available Docker resource classes
The `resource_class` key allows you to configure CPU and RAM resources for each job. In Docker, the following resource classes are available:
Where example usage looks like the following:
```yaml
jobs:
  build:
    docker:
      - image: buildpack-deps:trusty
    resource_class: xlarge
    steps:
      # ... other config
```
Using machine

The `machine` option runs your jobs in a dedicated, ephemeral VM that has the following specifications:
Using the `machine` executor gives your application full access to OS resources and provides you with full control over the job environment. This control can be useful in situations where you need full access to the network stack, for example to listen on a network interface, or to modify the system with `sysctl` commands. To find out about migrating a project from the Docker executor to the `machine` executor, see the Executor Migration from Docker to Machine document.
Using the `machine` executor also means that you get full access to the Docker process. This allows you to run privileged Docker containers and build new Docker images.
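For example, building an image with the VM's own Docker daemon might look like the following sketch; the image name and tag are placeholders:

```yaml
version: 2.1
jobs:
  build:
    machine:
      image: ubuntu-1604:202007-01
    steps:
      - checkout
      # Docker is available directly in the VM; no Remote Docker setup needed
      - run: docker build -t example/myapp:latest . # placeholder image name
```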
Note: Using `machine` may require additional fees in a future pricing update.
To use the machine executor, set the `machine` key in your `.circleci/config.yml`:
```yaml
version: 2.1
jobs:
  build:
    machine:
      image: ubuntu-1604:202007-01
```
You can view the list of available images here.
The following example uses an image and enables Docker Layer Caching (DLC) which is useful when you are building Docker images during your job or Workflow. Note: Check our pricing page to see which plans include the use of Docker Layer Caching.
```yaml
version: 2.1
jobs:
  build:
    machine:
      image: ubuntu-1604:202007-01
      docker_layer_caching: true # default - false
```
Note: The `image` key is not supported on private installations of CircleCI.
See the VM Service documentation for more information.
Using macOS

Available on CircleCI Cloud; not currently available on self-hosted installations.

The `macos` executor allows you to run your job in a macOS environment on a VM. You can also specify which version of Xcode should be used. See the Supported Xcode Versions section of the Testing iOS document for the complete list of version numbers and information about technical specifications for the VMs running each particular version of Xcode.
```yaml
jobs:
  build:
    macos:
      xcode: 11.3.0
    steps:
      # Commands will execute in macOS container
      # with Xcode 11.3 installed
      - run: xcodebuild -version
```
Using the Windows executor
The `windows` executor allows you to run your job in a Windows environment. The following is an example configuration that will run a simple Windows job. The syntax for using the Windows executor in your config differs depending on whether you are using:
- CircleCI Cloud – config version 2.1.
- Self-hosted installation of CircleCI Server with config version 2.0. This option is an instance of using the `machine` executor with a Windows image, introduced in CircleCI Server v2.18.3.
```yaml
version: 2.1 # Use version 2.1 to enable orb usage.

orbs:
  win: circleci/windows@2.2.0 # The Windows orb gives you everything you need to start using the Windows executor.

jobs:
  build: # name of your job
    executor: win/default # executor type
    steps:
      # Commands are run in a Windows virtual machine environment
      - checkout
      - run: Write-Host 'Hello, Windows'
```
```yaml
version: 2

jobs:
  build: # name of your job
    machine:
      image: windows-default # Windows machine image
    resource_class: windows.medium
    steps:
      # Commands are run in a Windows virtual machine environment
      - checkout
      - run: Write-Host 'Hello, Windows'
```
Cloud users will notice the Windows orb is used to simplify the configuration of the Windows executor. See the Windows orb details page for more details.
CircleCI Server users should contact their system administrator for specific information about the image used for Windows jobs. The Windows image is configured by the system administrator, and in the CircleCI config it is always available as the `windows-default` image name.
Using GPUs

CircleCI Cloud has execution environments with Nvidia GPUs for specialized workloads. The hardware is the Nvidia Tesla T4 Tensor Core GPU, and our GPU executors come in both Linux and Windows VMs.
```yaml
version: 2.1

jobs:
  build:
    machine:
      resource_class: gpu.nvidia.small
      image: ubuntu-1604-cuda-10.1:201909-23
    steps:
      - run: nvidia-smi
```
```yaml
version: 2.1

orbs:
  win: circleci/windows@2.2.0

jobs:
  build:
    executor: win/gpu-nvidia
    steps:
      - run: '&"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"'
```
Customers using CircleCI server can configure their VM service to use GPU-enabled machine executors. See Running GPU Executors in Server.