Configuring CircleCI
This document is a reference for the CircleCI 2.x configuration keys that are used in the config.yml file. The presence of a .circleci/config.yml file in your CircleCI-authorized repository branch indicates that you want to use the 2.x infrastructure.
You can see a complete config.yml in our full example.
Note: If you already have a CircleCI 1.0 configuration, the config.yml file allows you to test 2.x builds on a separate branch, leaving any existing configuration in the old circle.yml style unaffected and running on the CircleCI 1.0 infrastructure in branches that do not contain .circleci/config.yml.
Table of contents
- version
- orbs (requires version: 2.1)
- commands (requires version: 2.1)
- parameters (requires version: 2.1)
- executors (requires version: 2.1)
- jobs
  - <job_name>
    - environment
    - parallelism
    - parameters
    - docker / machine / macos / windows (executor)
    - branches – DEPRECATED
    - resource_class
  - steps
- workflows
- Logic statements
- Example full configuration
version
Key | Required | Type | Description |
---|---|---|---|
version | Y | String | 2, 2.0, or 2.1. See the Reusing Config doc for an overview of new 2.1 keys available to simplify your .circleci/config.yml file, reuse, and parameterized jobs. |
The version field is used to issue warnings about deprecations or breaking changes.
orbs
(requires version: 2.1)
Key | Required | Type | Description |
---|---|---|---|
orbs | N | Map | A map of user-selected names to either: orb references (strings) or orb definitions (maps). Orb definitions must be the orb-relevant subset of 2.1 config. See the Creating Orbs documentation for details. |
executors | N | Map | A map of strings to executor definitions. See the Executors section below. |
commands | N | Map | A map of command names to command definitions. See the Commands section below. |
The following example calls an orb named hello-build that exists in the certified circleci namespace.
version: 2.1
orbs:
hello: circleci/hello-build@0.0.5
workflows:
"Hello Workflow":
jobs:
- hello/hello-build
In the above example, hello is considered the orb reference, whereas circleci/hello-build@0.0.5 is the fully-qualified orb reference.
commands
(requires version: 2.1)
A command definition defines a sequence of steps as a map to be executed in a job, enabling you to reuse a single command definition across multiple jobs.
Key | Required | Type | Description |
---|---|---|---|
steps | Y | Sequence | A sequence of steps run inside the calling job of the command. |
parameters | N | Map | A map of parameter keys. See the Parameter Syntax section of the Reusing Config document for details. |
description | N | String | A string that describes the purpose of the command. |
Example:
commands:
sayhello:
description: "A very simple command for demonstration purposes"
parameters:
to:
type: string
default: "Hello World"
steps:
- run: echo << parameters.to >>
parameters
(requires version: 2.1)
Pipeline parameters declared for use in the configuration. See Pipeline Variables for usage details.
Key | Required | Type | Description |
---|---|---|---|
parameters | N | Map | A map of parameter keys. Supports string , boolean , integer and enum types. See Parameter Syntax for details. |
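As a minimal sketch, a pipeline parameter is declared at the top level and referenced via pipeline.parameters (the parameter name deploy-branch and its default below are illustrative, not from this document):

```yaml
version: 2.1

# Hypothetical pipeline parameter; "deploy-branch" is an illustrative name.
parameters:
  deploy-branch:
    type: string
    default: "main"

jobs:
  build:
    docker:
      - image: circleci/node:9.6.1
    steps:
      - run: echo "Building from << pipeline.parameters.deploy-branch >>"
```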
executors
(requires version: 2.1)
Executors define the environment in which the steps of a job will be run, allowing you to reuse a single executor definition across multiple jobs.
Key | Required | Type | Description |
---|---|---|---|
docker | Y (1) | List | Options for docker executor |
resource_class | N | String | Amount of CPU and RAM allocated to each container in a job. Note: A performance plan is required to access this feature. |
machine | Y (1) | Map | Options for machine executor |
macos | Y (1) | Map | Options for macOS executor |
windows | Y (1) | Map | Options for the windows executor, currently available via orbs. Check out the orb. |
shell | N | String | Shell to use for execution command in all steps. Can be overridden by shell in each step (default: See Default Shell Options) |
working_directory | N | String | In which directory to run the steps. Will be interpreted as an absolute path. |
environment | N | Map | A map of environment variable names and values. |
(1) One executor type should be specified per job. If more than one is set you will receive an error.
Example:
version: 2.1
executors:
my-executor:
docker:
- image: circleci/ruby:2.5.1-node-browsers
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
jobs:
my-job:
executor: my-executor
steps:
- run: echo outside the executor
See the Using Parameters in Executors section of the Reusing Config document for examples of parameterized executors.
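For instance, a parameterized executor might look like the following sketch (the executor name and tag parameter are hypothetical, chosen for illustration):

```yaml
version: 2.1

executors:
  # Hypothetical parameterized executor; "python" and "tag" are illustrative names.
  python:
    parameters:
      tag:
        type: string
        default: "3.8"
    docker:
      - image: circleci/python:<< parameters.tag >>

jobs:
  test:
    executor:
      name: python
      tag: "3.9"   # overrides the default parameter value
    steps:
      - run: python --version
```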
jobs
A Workflow comprises one or more uniquely named jobs. Jobs are specified in the jobs map; see Sample 2.0 config.yml for two examples of a job map. The name of the job is the key in the map, and the value is a map describing the job.
Note: Jobs have a maximum runtime of 5 hours. If your jobs are timing out, consider running some of them concurrently using workflows.
<job_name>
Each job consists of the job's name as a key and a map as a value. The name should be unique within the current jobs list. The value map has the following attributes:
Key | Required | Type | Description |
---|---|---|---|
docker | Y (1) | List | Options for docker executor |
machine | Y (1) | Map | Options for machine executor |
macos | Y (1) | Map | Options for macOS executor |
shell | N | String | Shell to use for execution command in all steps. Can be overridden by shell in each step (default: See Default Shell Options) |
parameters | N | Map | Parameters for making a job explicitly configurable in a workflow. |
steps | Y | List | A list of steps to be performed |
working_directory | N | String | In which directory to run the steps. Will be interpreted as an absolute path. Default: ~/project (where project is a literal string, not the name of your specific project). Processes run during the job can use the $CIRCLE_WORKING_DIRECTORY environment variable to refer to this directory. Note: Paths written in your YAML configuration file will not be expanded; if your store_test_results.path is $CIRCLE_WORKING_DIRECTORY/tests , then CircleCI will attempt to store the test subdirectory of the directory literally named $CIRCLE_WORKING_DIRECTORY , dollar sign $ and all. |
parallelism | N | Integer | Number of parallel instances of this job to run (default: 1) |
environment | N | Map | A map of environment variable names and values. |
branches | N | Map | A map defining rules to allow/block execution of specific branches for a single job that is not in a workflow or a 2.1 config (default: all allowed). See Workflows for configuring branch execution for jobs in a workflow or 2.1 config. |
resource_class | N | String | Amount of CPU and RAM allocated to each container in a job. Note: A performance plan is required to access this feature. |
(1) One executor type should be specified per job. If more than one is set you will receive an error.
environment
A map of environment variable names and values. These will override any environment variables you set in the CircleCI application.
parallelism
If parallelism is set to N > 1, then N independent executors will be set up and each will run the steps of that job in parallel. This can help optimize your test steps; you can split your test suite, using the CircleCI CLI, across parallel containers so the job will complete in a shorter time. Certain parallelism-aware steps can opt out of the parallelism and only run on a single executor (for example the deploy step). Learn more about parallel jobs.
working_directory
The directory specified by working_directory will be created automatically if it doesn't exist.
Example:
jobs:
build:
docker:
- image: buildpack-deps:trusty
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
environment:
FOO: bar
parallelism: 3
resource_class: large
working_directory: ~/my-app
steps:
- run: go test -v $(go list ./... | circleci tests split)
parameters
The parameters can be used when calling that job in a workflow.
Reserved parameter names:
- name
- requires
- context
- type
- filters
- matrix
See Parameter Syntax for definition details.
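A sketch of a parameterized job invoked from a workflow (the job name and target parameter below are illustrative):

```yaml
version: 2.1

jobs:
  deploy:
    parameters:
      # Hypothetical parameter; "target" is an illustrative name.
      target:
        type: string
        default: "staging"
    machine: true
    steps:
      - run: echo "Deploying to << parameters.target >>"

workflows:
  main:
    jobs:
      - deploy:
          target: "production"   # overrides the default
```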
docker / machine / macos / windows (executor)
An “executor” is roughly “a place where steps occur”. CircleCI 2.0 can build the necessary environment by launching as many docker containers as needed at once, or it can use a full virtual machine. Learn more about different executors.
docker
Configured by the docker key, which takes a list of maps:
Key | Required | Type | Description |
---|---|---|---|
image | Y | String | The name of a custom docker image to use |
name | N | String | The name the container is reachable by. By default, container services are accessible through localhost |
entrypoint | N | String or List | The command used as executable when launching the container |
command | N | String or List | The command used as pid 1 (or args for entrypoint) when launching the container |
user | N | String | Which user to run commands as within the Docker container |
environment | N | Map | A map of environment variable names and values |
auth | N | Map | Authentication for registries using standard docker login credentials |
aws_auth | N | Map | Authentication for AWS Elastic Container Registry (ECR) |
The first image listed in the file defines the primary container image where all steps will run.
entrypoint overrides the image's ENTRYPOINT.
command overrides the image's COMMAND; it will be used as arguments to the image ENTRYPOINT if it has one, or as the executable if the image has no ENTRYPOINT.
For a primary container (the first container in the list), if neither command nor entrypoint is specified in the config, then any ENTRYPOINT and COMMAND in the image are ignored. This is because the primary container is typically used only for running the steps and not for its ENTRYPOINT, and an ENTRYPOINT may consume significant resources or exit prematurely. (A custom image may disable this behavior and force the ENTRYPOINT to run.) The job steps run in the primary container only.
name defines the name for reaching the secondary service containers. By default, all services are exposed directly on localhost. The field is appropriate if you would rather have a different host name instead of localhost, for example, if you are starting multiple versions of the same service.
The environment settings apply to the entrypoint/command run by the docker container, not the job steps.
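For example, the name field might be used as in this sketch, where two versions of the same service run side by side (the image tags and host names are illustrative):

```yaml
jobs:
  build:
    docker:
      - image: buildpack-deps:trusty   # primary container
      - image: postgres:9.6
        name: postgres-old             # reachable at postgres-old instead of localhost
      - image: postgres:11
        name: postgres-new             # reachable at postgres-new
    steps:
      - run: psql -h postgres-old -U postgres -c 'SELECT 1;'
```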
You can specify image versions using tags or digest. You can use any public images from any public Docker registry (defaults to Docker Hub). Learn more about specifying images.
Some registries, Docker Hub for example, may rate limit anonymous docker pulls. It's recommended you authenticate in such cases to pull private and public images. The username and password can be specified in the auth field. See Using Docker Authenticated Pulls for details.
Example:
jobs:
build:
docker:
- image: buildpack-deps:trusty # primary container
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
environment:
ENV: CI
- image: mongo:2.6.8
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
command: [--smallfiles]
- image: postgres:9.4.1
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
environment:
POSTGRES_USER: root
- image: redis@sha256:54057dd7e125ca41afe526a877e8bd35ec2cdd33b9217e022ed37bdcf7d09673
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
- image: acme-private/private-image:321
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
Using an image hosted on AWS ECR requires authentication using AWS credentials. By default, CircleCI uses the AWS credentials that you add to the Project > Settings > AWS Permissions page in the CircleCI application or by setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY project environment variables. It is also possible to set the credentials by using the aws_auth field as in the following example:
jobs:
build:
docker:
- image: account-id.dkr.ecr.us-east-1.amazonaws.com/org/repo:0.1
aws_auth:
aws_access_key_id: AKIAQWERVA # can specify string literal values
aws_secret_access_key: $ECR_AWS_SECRET_ACCESS_KEY # or project UI envar reference
It is possible to reuse declared commands in a job when using version 2.1. The following example invokes the sayhello command.
jobs:
myjob:
docker:
- image: "circleci/node:9.6.1"
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
steps:
- sayhello:
to: "Lev"
machine
The machine executor is configured by using the machine key, which takes a map:
Key | Required | Type | Description |
---|---|---|---|
image | Y | String | The VM image to use. View available images. Note: This key is not supported on the installable CircleCI. For information about customizing machine executor images on CircleCI installed on your servers, see our VM Service documentation. |
docker_layer_caching | N | Boolean | Set to true to enable Docker Layer Caching. Note: You must open a support ticket to have a CircleCI Sales representative contact you about enabling this feature on your account for an additional fee. |
Example:
version: 2.1
jobs:
build:
machine:
image: ubuntu-2004:202010-01
steps:
- checkout
- run:
name: "Testing"
command: echo "Hi"
Available machine images
CircleCI supports multiple machine images that can be specified in the image field:
- ubuntu-2004:202101-01 - Ubuntu 20.04, Docker v20.10.2, Docker Compose v1.28.2
- ubuntu-2004:202010-01 - Ubuntu 20.04, Docker v19.03.13, Docker Compose v1.27.4 (ubuntu-2004:202008-01 is an alias)
- ubuntu-1604:202101-01 - Ubuntu 16.04, Docker v19.03.14, Docker Compose v1.28.2, 2nd to last release
- ubuntu-1604:202010-01 - Ubuntu 16.04, Docker v19.03.13, Docker Compose v1.27.4
- ubuntu-1604:202007-01 - Ubuntu 16.04, Docker v19.03.12, Docker Compose v1.26.1
- ubuntu-1604:202004-01 - Ubuntu 16.04, Docker v19.03.8, Docker Compose v1.25.5
- ubuntu-1604:201903-01 - Ubuntu 16.04, Docker v18.09.3, Docker Compose v1.23.1
Note: Ubuntu 16.04 reaches the end of its LTS window at the end of April 2021 and will no longer be supported by Canonical. As a result, our final 16.04 machine image release will take place in April 2021 and will include the most recent security patches. We suggest upgrading to the Ubuntu 20.04 image for continued releases past April 2021.
The machine executor supports Docker Layer Caching which is useful when you are building Docker images during your job or Workflow.
Available Linux GPU images
When using the Linux GPU executor, the available images are:
- ubuntu-1604-cuda-11.1:202012-01 - CUDA v11.1, Docker v19.03.13, nvidia-container-toolkit v1.4.0-1
- ubuntu-1604-cuda-10.2:202012-01 - CUDA v10.2, Docker v19.03.13, nvidia-container-toolkit v1.3.0-1
- ubuntu-1604-cuda-10.1:201909-23 - CUDA v10.1, Docker v19.03.0-ce, nvidia-docker v2.2.2
- ubuntu-1604-cuda-9.2:201909-23 - CUDA v9.2, Docker v19.03.0-ce, nvidia-docker v2.2.2
Available Windows GPU image
When using the Windows GPU executor, the available image is:
- windows-server-2019-nvidia:stable - Windows Server 2019, CUDA 10.1. This image is the default.
Example
version: 2.1
workflows:
main:
jobs:
- build
jobs:
build:
machine:
image: windows-server-2019-nvidia:stable
docker_layer_caching: true # default - false
macos
CircleCI supports running jobs on macOS, to allow you to build, test, and deploy apps for macOS, iOS, tvOS, and watchOS. To run a job in a macOS virtual machine, you must add the macos key to the top-level configuration for the job and specify the version of Xcode you would like to use.
Key | Required | Type | Description |
---|---|---|---|
xcode | Y | String | The version of Xcode that is installed on the virtual machine, see the Supported Xcode Versions section of the Testing iOS document for the complete list. |
Example: Use a macOS virtual machine with Xcode version 11.3:
jobs:
build:
macos:
xcode: "11.3.0"
windows
CircleCI supports running jobs on Windows. To run a job on a Windows machine, you must add the windows key to the top-level configuration for the job. Orbs also provide easy access to setting up a Windows job. To learn more about prerequisites to running Windows jobs and what Windows machines can offer, consult the Hello World on Windows document.
Example: Use a windows executor to run a simple job.
version: 2.1
orbs:
win: circleci/windows@2.3.0
jobs:
build:
executor: win/default
steps:
- checkout
- run: echo 'Hello, Windows'
branches – DEPRECATED
This key is deprecated. Use workflows filtering to control which jobs run for which branches.
Defines rules for allowing/blocking execution of some branches if Workflows are not configured and you are using 2.0 (not 2.1) config. If you are using Workflows, job-level branches will be ignored and must be configured in the Workflows section of your config.yml file. If you are using 2.1 config, you will need to add a workflow in order to use filtering. See the workflows section for details. The job-level branches key takes a map:
Key | Required | Type | Description |
---|---|---|---|
only | N | List | List of branches that only will be executed |
ignore | N | List | List of branches to ignore |
Both only and ignore lists can have full names and regular expressions. Regular expressions must match the entire string. For example:
jobs:
build:
branches:
only:
- master
- /rc-.*/
In this case, only the “master” branch and branches matching the regex “rc-.*” will be executed.
jobs:
build:
branches:
ignore:
- develop
- /feature-.*/
In this example, all branches will be executed except “develop” and branches matching the regex “feature-.*”.
If both ignore and only are present in the config, only ignore will be taken into account.
A job that was not executed due to configured rules will show up in the list of jobs in the UI, but will be marked as skipped.
To ensure the job runs for all branches, either don't use the branches key, or use the only key along with the regular expression /.*/ to catch all branches.
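The catch-all form described above looks like:

```yaml
jobs:
  build:
    branches:
      only:
        - /.*/   # regular expression matching every branch
```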
resource_class
The resource_class feature allows configuring CPU and RAM resources for each job. Different resource classes are available for different executors, as described in the tables below.
We implement soft concurrency limits for each resource class to ensure our system remains stable for all customers. If you are on a Performance or custom plan and experience queuing for certain resource classes, it’s possible you are hitting these limits. Contact CircleCI support to request a raise on these limits for your account.
Note: This feature is automatically enabled on free and Performance plans. Available resource classes are restricted for customers on the free plan to small/medium for Linux, and medium for Windows. macOS is not yet available on the free plan.
For self-hosted installations of CircleCI Server contact your system administrator for a list of available resource classes. See Server Administration documents for further information: Nomad Client System Requirements and Server Resource Classes.
Docker executor
Class | vCPUs | RAM |
---|---|---|
small | 1 | 2GB |
medium (default) | 2 | 4GB |
medium+ | 3 | 6GB |
large | 4 | 8GB |
xlarge | 8 | 16GB |
2xlarge(2) | 16 | 32GB |
2xlarge+(2) | 20 | 40GB |
Example usage
jobs:
build:
docker:
- image: buildpack-deps:trusty
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
resource_class: xlarge
steps:
... # other config
You may also use the resource_class to configure a runner instance.
For example:
jobs:
job_name:
machine: true
resource_class: my-namespace/my-runner
Machine executor (Linux)
Class | vCPUs | RAM |
---|---|---|
medium (default) | 2 | 7.5GB |
large | 4 | 15GB |
xlarge | 8 | 32GB |
2xlarge | 16 | 64GB |
Example usage
jobs:
build:
machine:
image: ubuntu-2004:202010-01 # recommended linux image
resource_class: large
steps:
... # other config
You may also use the resource_class to configure a runner instance.
For example:
jobs:
job_name:
machine: true
resource_class: my-namespace/my-runner
macOS executor
Class | vCPUs | RAM |
---|---|---|
medium (default) | 4 | 8GB |
large(3) | 8 | 16GB |
Example usage
jobs:
build:
macos:
xcode: "11.3.0"
resource_class: large
steps:
... # other config
Windows executor
Class | vCPUs | RAM |
---|---|---|
medium (default) | 4 | 15GB |
large | 8 | 30GB |
xlarge | 16 | 60GB |
2xlarge | 32 | 128GB |
Example usage
version: 2.1
orbs:
win: circleci/windows@2.3.0
jobs:
build:
executor:
name: win/default
size: "medium" # can be "medium", "large", "xlarge", "2xlarge"
steps:
- run: Write-Host 'Hello, Windows'
Note that the way the resource class is set is different for windows because the executor is defined within the windows orb.
See the Windows Getting Started document for more details and examples of using the Windows executor.
GPU executor (Linux)
Class | vCPUs | RAM | GPUs | GPU model | GPU Memory (GiB) |
---|---|---|---|---|---|
gpu.nvidia.small(2) | 4 | 15 | 1 | Nvidia Tesla P4 | 8 |
gpu.nvidia.medium(2) | 8 | 30 | 1 | Nvidia Tesla T4 | 16 |
Example usage
version: 2.1
jobs:
build:
machine:
resource_class: gpu.nvidia.small
image: ubuntu-1604-cuda-10.1:201909-23
steps:
- run: nvidia-smi
- run: docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
See the Available Linux GPU images section for the full list of available images.
GPU executor (Windows)
Class | vCPUs | RAM | GPUs | GPU model | GPU Memory (GiB) |
---|---|---|---|---|---|
windows.gpu.nvidia.medium(2) | 16 | 60 | 1 | Nvidia Tesla T4 | 16 |
Example usage
version: 2.1
orbs:
win: circleci/windows@2.3.0
jobs:
build:
executor: win/gpu-nvidia
steps:
- checkout
- run: '&"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"'
(2) This resource requires review by our support team. Open a support ticket if you would like to request access.
(3) This resource is available only for customers with an annual contract. Open a support ticket if you would like to learn more about our annual plans.
Note: Java, Erlang, and any other languages that introspect the /proc directory for information about CPU count may require additional configuration to prevent them from slowing down when using the CircleCI 2.0 resource class feature. Programs with this issue may request 32 CPU cores and run slower than they would when requesting one core. Users of languages with this issue should pin their CPU count to their guaranteed CPU resources.
Note: If you want to confirm how much memory you have been allocated, you can check the cgroup memory hierarchy limit with grep hierarchical_memory_limit /sys/fs/cgroup/memory/memory.stat.
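As a sketch, that check could be run as a job step (the step name is illustrative):

```yaml
- run:
    name: Show cgroup memory limit   # illustrative step name
    command: grep hierarchical_memory_limit /sys/fs/cgroup/memory/memory.stat
```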
steps
The steps setting in a job should be a list of single key/value pairs, the key of which indicates the step type. The value may be either a configuration map or a string (depending on what that type of step requires). For example, using a map:
jobs:
build:
working_directory: ~/canary-python
environment:
FOO: bar
steps:
- run:
name: Running tests
command: make test
Here run is a step type. The name attribute is used by the UI for display purposes. The command attribute is specific to the run step and defines the command to execute.
Some steps may implement a shorthand semantic. For example, run may also be called like this:
jobs:
build:
steps:
- run: make test
In its short form, the run step allows us to directly specify which command to execute as a string value. In this case, the step itself provides suitable default values for the other attributes (name, for example, will have the same value as command).
Another shorthand, which is possible for some steps, is to simply use the step name as a string instead of a key/value pair:
jobs:
build:
steps:
- checkout
In this case, the checkout step will check out the project source code into the job's working_directory.
In general all steps can be described as:
Key | Required | Type | Description |
---|---|---|---|
<step_type> | Y | Map or String | A configuration map for the step or some string whose semantics are defined by the step. |
Each built-in step is described in detail below.
run
Used for invoking all command-line programs, taking either a map of configuration values, or, when called in its short form, a string that will be used as both the command and name. Run commands are executed using non-login shells by default, so you must explicitly source any dotfiles as part of the command.
Key | Required | Type | Description |
---|---|---|---|
command | Y | String | Command to run via the shell |
name | N | String | Title of the step to be shown in the CircleCI UI (default: full command ) |
shell | N | String | Shell to use for execution command (default: See Default Shell Options) |
environment | N | Map | Additional environmental variables, locally scoped to command |
background | N | Boolean | Whether or not this step should run in the background (default: false) |
working_directory | N | String | In which directory to run this step. Will be interpreted relative to the working_directory of the job). (default: . ) |
no_output_timeout | N | String | Elapsed time the command can run without output. The string is a decimal with unit suffix, such as “20m”, “1.25h”, “5s” (default: 10 minutes) |
when | N | String | Specify when to enable or disable the step. Takes the following values: always, on_success, on_fail (default: on_success) |
Each run declaration represents a new shell. It is possible to specify a multi-line command, each line of which will be run in the same shell:
- run:
command: |
echo Running test
mkdir -p /tmp/test-results
make test
You can also configure commands to run in the background if you don’t want to wait for the step to complete before moving on to subsequent run steps.
Default shell options
For jobs that run on Linux, the default value of the shell option is /bin/bash -eo pipefail if /bin/bash is present in the build container. Otherwise it is /bin/sh -eo pipefail. The default shell is not a login shell (--login or -l are not specified). Hence, the shell will not source your ~/.bash_profile, ~/.bash_login, or ~/.profile files.
For jobs that run on macOS, the default shell is /bin/bash --login -eo pipefail. The shell is a non-interactive login shell. The shell will execute /etc/profile followed by ~/.bash_profile before every step.
For more information about which files are executed when bash is invoked, see the INVOCATION section of the bash manpage.
Descriptions of the -eo pipefail options are provided below.
-e
Exit immediately if a pipeline (which may consist of a single simple command), a subshell command enclosed in parentheses, or one of the commands executed as part of a command list enclosed by braces exits with a non-zero status.
So if, in the previous example, mkdir failed to create a directory and returned a non-zero status, then command execution would be terminated, and the whole step would be marked as failed. If you desire the opposite behaviour, you need to add set +e in your command or override the default shell in your configuration map of run. For example:
- run:
command: |
echo Running test
set +e
mkdir -p /tmp/test-results
make test
- run:
shell: /bin/sh
command: |
echo Running test
mkdir -p /tmp/test-results
make test
-o pipefail
If pipefail is enabled, the pipeline’s return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. The shell waits for all commands in the pipeline to terminate before returning a value.
For example:
- run: make test | tee test-output.log
If make test fails, the -o pipefail option will cause the whole step to fail. Without -o pipefail, the step will always run successfully because the result of the whole pipeline is determined by the last command (tee test-output.log), which will always return a zero status.
Note that even if make test fails, the rest of the pipeline will be executed.
If you want to avoid this behaviour, you can specify set +o pipefail in the command or override the whole shell (see example above).
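For instance, disabling pipefail for a single step might look like this sketch:

```yaml
- run:
    command: |
      set +o pipefail
      make test | tee test-output.log   # step succeeds even if make test fails
```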
In general, we recommend using the default options (-eo pipefail) because they show errors in intermediate commands and simplify debugging job failures. For convenience, the UI displays the used shell and all active options for each run step.
For more information, see the Using Shell Scripts document.
Background commands
The background attribute enables you to configure commands to run in the background. Job execution will immediately proceed to the next step rather than waiting for return of a command with the background attribute set to true. The following example shows the config for running the X virtual framebuffer in the background, which is commonly required to run Selenium tests:
- run:
name: Running X virtual framebuffer
command: Xvfb :99 -screen 0 1280x1024x24
background: true
- run: make test
Shorthand syntax
run has a very convenient shorthand syntax:
- run: make test
# shorthanded command can also have multiple lines
- run: |
mkdir -p /tmp/test-results
make test
In this case, command and name become the string value of run, and the rest of the config map for that run takes its default values.
The when Attribute
By default, CircleCI will execute job steps one at a time, in the order that they are defined in config.yml, until a step fails (returns a non-zero exit code). After a command fails, no further job steps will be executed.
Adding the when attribute to a job step allows you to override this default behaviour, and selectively run or skip steps depending on the status of the job.
The default value of on_success means that the step will run only if all of the previous steps have been successful (returned exit code 0).
A value of always means that the step will run regardless of the exit status of previous steps. This is useful if you have a task that you want to run regardless of whether the previous steps are successful or not. For example, you might have a job step that needs to upload logs or code-coverage data somewhere.
A value of on_fail means that the step will run only if one of the preceding steps has failed (returned a non-zero exit code). It is common to use on_fail if you want to store some diagnostic data to help debug test failures, or to run custom notifications about the failure, such as sending emails or triggering alerts in chatrooms.
Note: Some steps, such as store_artifacts and store_test_results, will always run, even if a previous step has failed (returned a non-zero exit code). The when attribute, store_artifacts, and store_test_results are not run if the job has been killed by a cancel request or has reached the global 5 hour timeout.
- run:
name: Upload CodeCov.io Data
command: bash <(curl -s https://codecov.io/bash) -F unittests
when: always # Uploads code coverage results, pass or fail
Ending a job from within a step
A job can exit without failing by using run: circleci-agent step halt. This can be useful in situations where jobs need to conditionally execute.
Here is an example where halt is used to avoid running a job on the develop branch:
run: |
if [ "$CIRCLE_BRANCH" = "develop" ]; then
circleci-agent step halt
fi
Example
steps:
- run:
name: Testing application
command: make test
shell: /bin/bash
working_directory: ~/my-app
no_output_timeout: 30m
environment:
FOO: bar
- run: echo 127.0.0.1 devhost | sudo tee -a /etc/hosts
- run: |
sudo -u root createuser -h localhost --superuser ubuntu &&
sudo createdb -h localhost test_db
- run:
name: Upload Failed Tests
command: curl --data fail_tests.log http://example.com/error_logs
when: on_fail
The when Step (requires version: 2.1)
A conditional step consists of a step with the key when or unless. Under the when key are the subkeys condition and steps. The purpose of the when step is customizing commands and job configuration to run on custom conditions (determined at config-compile time) that are checked before a workflow runs. See the Conditional Steps section of the Reusing Config document for more details.
Key | Required | Type | Description |
---|---|---|---|
condition | Y | Logic | A logic statement |
steps | Y | Sequence | A list of steps to execute when the condition is true |
Example
version: 2.1
jobs: # conditional steps may also be defined in `commands:`
job_with_optional_custom_checkout:
parameters:
custom_checkout:
type: string
default: ""
machine: true
steps:
- when:
condition: <<parameters.custom_checkout>>
steps:
- run: echo "my custom checkout"
- unless:
condition: <<parameters.custom_checkout>>
steps:
- checkout
workflows:
build-test-deploy:
jobs:
- job_with_optional_custom_checkout:
custom_checkout: "any non-empty string is truthy"
- job_with_optional_custom_checkout
checkout
A special step used to check out source code to the configured path (defaults to the `working_directory`). This is a special step because it is a helper designed to make checking out code easy for you. If you need to interact with git over HTTPS, do not use this step, because it configures git to check out over SSH.
Key | Required | Type | Description |
---|---|---|---|
path | N | String | Checkout directory. Will be interpreted relative to the working_directory of the job. (default: . ) |
If `path` already exists and is:
- a git repo - the step will not clone the whole repo; instead it will fetch origin
- NOT a git repo - the step will fail.
In the case of `checkout`, the step type is just a string with no additional attributes:
- checkout
Note: CircleCI does not check out submodules. If your project requires submodules, add run
steps with appropriate commands as shown in the following example:
- checkout
- run: git submodule sync
- run: git submodule update --init
This command automatically adds the required authenticity keys for interacting with GitHub and Bitbucket over SSH, which is detailed further in our integration guide – that guide will also be helpful if you wish to implement a custom checkout command.
Note: The `checkout` step will configure Git to skip automatic garbage collection. If you are caching your `.git` directory with `restore_cache` and would like to use garbage collection to reduce its size, you may wish to use a `run` step with the command `git gc` before doing so.
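For instance, such a sequence might be sketched as follows (the cache key names here are illustrative, not prescriptive):

```yaml
steps:
  - restore_cache:
      keys:
        - v1-source-{{ .Branch }}-   # illustrative key for a cached .git directory
  - checkout
  - run:
      name: Compress the git repository
      command: git gc                # reduce .git size before re-caching it
  - save_cache:
      key: v1-source-{{ .Branch }}-{{ .Revision }}
      paths:
        - .git
```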
setup_remote_docker
Creates a remote Docker environment configured to execute Docker commands. See Running Docker Commands for details.
Key | Required | Type | Description |
---|---|---|---|
docker_layer_caching | N | boolean | set this to true to enable Docker Layer Caching in the Remote Docker Environment (default: false ) |
version | N | String | Version string of Docker you would like to use (default: 17.09.0-ce ). View the list of supported docker versions here. |
Notes:
- A paid account on a Performance or Custom Plan is required to access Docker Layer Caching.
- `setup_remote_docker` is not compatible with the `machine` executor. See Docker Layer Caching in Machine Executor for information on how to enable DLC with the `machine` executor.
- The `version` key is not currently supported on CircleCI installed in your private cloud or datacenter. Contact your system administrator for information about the Docker version installed in your remote Docker environment.
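As a minimal sketch, a job using this step might look like the following (the version shown is an assumption for illustration; pick one from the supported-versions list for your installation):

```yaml
steps:
  - checkout
  - setup_remote_docker:
      version: 19.03.13           # illustrative; must be a supported Docker version
      docker_layer_caching: true  # requires a Performance or Custom Plan
  - run: docker build -t myrepo/myapp:latest .  # runs against the remote Docker engine
```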
save_cache
Generates and stores a cache of a file or directory of files such as dependencies or source code in our object storage. Later jobs can restore this cache. Learn more in the caching documentation.
Key | Required | Type | Description |
---|---|---|---|
paths | Y | List | List of directories which should be added to the cache |
key | Y | String | Unique identifier for this cache |
name | N | String | Title of the step to be shown in the CircleCI UI (default: “Saving Cache”) |
when | N | String |
Specify when to enable or disable the step. Takes the following values: always , on_success , on_fail (default: on_success ) |
The cache for a specific `key` is immutable and cannot be changed once written.
Note: If the cache for the given `key` already exists it won't be modified, and job execution will proceed to the next step.
When storing a new cache, the `key` value may contain special templated values for your convenience:
Template | Description |
---|---|
{{ .Branch }} |
The VCS branch currently being built. |
{{ .BuildNum }} |
The CircleCI build number for this build. |
{{ .Revision }} |
The VCS revision currently being built. |
{{ .CheckoutKey }} |
The SSH key used to checkout the repo. |
{{ .Environment.variableName }} |
The environment variable variableName (supports any environment variable exported by CircleCI or added to a specific Context—not any arbitrary environment variable). |
{{ checksum "filename" }} |
A base64 encoded SHA256 hash of the given filename’s contents. This should be a file committed in your repo and may also be referenced as a path that is absolute or relative from the current working directory. Good candidates are dependency manifests, such as package-lock.json , pom.xml or project.clj . It’s important that this file does not change between restore_cache and save_cache , otherwise the cache will be saved under a cache key different than the one used at restore_cache time. |
{{ epoch }} |
The current time in seconds since the unix epoch. |
{{ arch }} |
The OS and CPU information. Useful when caching compiled binaries that depend on OS and CPU architecture, for example, darwin amd64 versus linux i386/32-bit . |
During step execution, the templates above will be replaced by runtime values, and the resultant string will be used as the `key`.
Template examples:
- `myapp-{{ checksum "package-lock.json" }}` - the cache will be regenerated every time something changes in the `package-lock.json` file; different branches of this project will generate the same cache key.
- `myapp-{{ .Branch }}-{{ checksum "package-lock.json" }}` - same as the previous one, but each branch will generate a separate cache.
- `myapp-{{ epoch }}` - every run of a job will generate a separate cache.
While choosing suitable templates for your cache `key`, keep in mind that cache saving is not a free operation, because it will take some time to upload the cache to our storage. So it makes sense to have a `key` that generates a new cache only if something actually changed, and to avoid generating a new one on every run of a job.
Tip: Given the immutability of caches, it might be helpful to start all your cache keys with a version prefix, for example `v1-...`. That way you will be able to regenerate all your caches just by incrementing the version in this prefix.
Example
- save_cache:
key: v1-myapp-{{ arch }}-{{ checksum "project.clj" }}
paths:
- /home/ubuntu/.m2
restore_cache
Restores a previously saved cache based on a key
. Cache needs to have been saved first for this key using save_cache
step. Learn more in the caching documentation.
Key | Required | Type | Description |
---|---|---|---|
key | Y (1) | String | Single cache key to restore |
keys | Y (1) | List | List of cache keys to lookup for a cache to restore. Only first existing key will be restored. |
name | N | String | Title of the step to be shown in the CircleCI UI (default: “Restoring Cache”) |
(1) at least one attribute has to be present. If `key` and `keys` are both given, `key` will be checked first, and then `keys`.
A key is searched against existing keys as a prefix.
Note: When there are multiple matches, the most recent match will be used, even if there is a more precise match.
For example:
steps:
- save_cache:
key: v1-myapp-cache
paths:
- ~/d1
- save_cache:
key: v1-myapp-cache-new
paths:
- ~/d2
- run: rm -f ~/d1 ~/d2
- restore_cache:
key: v1-myapp-cache
In this case the cache `v1-myapp-cache-new` will be restored because it is the most recent match with the `v1-myapp-cache` prefix, even though the first key (`v1-myapp-cache`) is an exact match.
For more information on key formatting, see the `key` section of the `save_cache` step.
When CircleCI encounters a list of `keys`, the cache will be restored from the first one that matches an existing cache. Most likely you will want a more specific key to be first (for example, a cache for the exact version of the `package-lock.json` file) and more generic keys after (for example, any cache for this project). If no key has a cache that exists, the step will be skipped with a warning.
A path is not required here because the cache will be restored to the location from which it was originally saved.
Example
- restore_cache:
keys:
- v1-myapp-{{ arch }}-{{ checksum "project.clj" }}
# if cache for exact version of `project.clj` is not present then load any most recent one
- v1-myapp-
# ... Steps building and testing your application ...
# cache will be saved only once for each version of `project.clj`
- save_cache:
key: v1-myapp-{{ arch }}-{{ checksum "project.clj" }}
paths:
- /foo
deploy – DEPRECATED
This key is deprecated. For improved control over your deployments, use workflows plus associated filtering and scheduling keys.
Special step for deploying artifacts. `deploy` uses the same configuration map and semantics as the `run` step. Jobs may have more than one `deploy` step.
In general the `deploy` step behaves just like `run`, with two exceptions:
- In a job with `parallelism`, the `deploy` step will only be executed by node #0, and only if all nodes succeed. Nodes other than #0 will skip this step.
- In a job that runs with SSH, the `deploy` step will not execute, and the following message will show instead: `skipping deploy Running in SSH mode. Avoid deploying.`
When using the `deploy` step, it is also helpful to understand how you can use workflows to orchestrate and trigger jobs. For more information, refer to the Workflows documentation.
Example
- deploy:
command: |
if [ "${CIRCLE_BRANCH}" == "master" ]; then
ansible-playbook site.yml
fi
Note: The `run` step allows you to use a shortcut like `run: my command`; however, if you try to use a similar shortcut for the `deploy` step, like `deploy: my command`, you will receive the following error message in CircleCI:
In step 3 definition: This type of step does not support compressed syntax
store_artifacts
Step to store artifacts (for example logs, binaries, etc) to be available in the web app or through the API. See the Uploading Artifacts document for more information.
Key | Required | Type | Description |
---|---|---|---|
path | Y | String | Directory in the primary container to save as job artifacts |
destination | N | String | Prefix added to the artifact paths in the artifacts API (default: the directory of the file specified in path ) |
There can be multiple `store_artifacts` steps in a job. Using a unique prefix for each step prevents them from overwriting files.
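For example, two `store_artifacts` steps with distinct `destination` prefixes could be sketched like this (the paths are illustrative):

```yaml
- store_artifacts:
    path: /tmp/build-logs
    destination: logs      # artifacts appear under the "logs/" prefix
- store_artifacts:
    path: /tmp/coverage
    destination: coverage  # a different prefix avoids overwriting "logs/"
```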
Example
- run:
name: Build the Jekyll site
command: bundle exec jekyll build --source jekyll --destination jekyll/_site/docs/
- store_artifacts:
path: jekyll/_site/docs/
destination: circleci-docs
store_test_results
Special step used to upload and store test results for a build. Test results are visible on the CircleCI web application, under each build’s “Test Summary” section. Storing test results is useful for timing analysis of your test suites.
It is also possible to store test results as a build artifact; to do so, please refer to the store_artifacts step.
Key | Required | Type | Description |
---|---|---|---|
path | Y | String | Path (absolute, or relative to your working_directory ) to directory containing subdirectories of JUnit XML or Cucumber JSON test metadata files |
Note: Write your tests to subdirectories of your `store_test_results` path, ideally named to match the names of your particular test suites, so that CircleCI can correctly infer the names of your reports. If you do not write your reports to subdirectories, you will see reports in your “Test Summary” section such as `Your build ran 71 tests in unknown` instead of, for example, `Your build ran 71 tests in rspec`.
Example
Directory structure:
test-results
├── jest
│ └── results.xml
├── mocha
│ └── results.xml
└── rspec
└── results.xml
config.yml
syntax:
- store_test_results:
path: test-results
persist_to_workspace
Special step used to persist a temporary file to be used by another job in the workflow.
Note: Workspaces are stored for up to 15 days after being created. All jobs that try to use a Workspace older than 15 days, including partial reruns of a Workflow and SSH reruns of individual jobs, will fail.
Key | Required | Type | Description |
---|---|---|---|
root | Y | String | Either an absolute path or a path relative to working_directory |
paths | Y | List | Glob identifying file(s), or a non-glob path to a directory to add to the shared workspace. Interpreted as relative to the workspace root. Must not be the workspace root itself. |
The root key is a directory on the container which is taken to be the root directory of the workspace. The paths values are all relative to the root.
Example for root Key
For example, the following step syntax persists the specified paths from `/tmp/dir` into the workspace, relative to the directory `/tmp/dir`.
- persist_to_workspace:
root: /tmp/dir
paths:
- foo/bar
- baz
After this step completes, the following directories are added to the workspace:
/tmp/dir/foo/bar
/tmp/dir/baz
Example for paths Key
- persist_to_workspace:
root: /tmp/workspace
paths:
- target/application.jar
- build/*
The `paths` list uses `Glob` from Go, and the pattern matches filepath.Match.
pattern:
{ term }
term:
'*' matches any sequence of non-Separator characters
'?' matches any single non-Separator character
'[' [ '^' ] { character-range }
']' character class (must be non-empty)
c matches character c (c != '*', '?', '\\', '[')
'\\' c matches character c
character-range:
c matches character c (c != '\\', '-', ']')
'\\' c matches character c
lo '-' hi matches character c for lo <= c <= hi
The Go documentation states that the pattern may describe hierarchical names such as `/usr/*/bin/ed` (assuming the Separator is ‘/’). Note: Everything must be relative to the workspace root directory.
attach_workspace
Special step used to attach the workflow’s workspace to the current container. The full contents of the workspace are downloaded and copied into the directory the workspace is being attached at.
Key | Required | Type | Description |
---|---|---|---|
at | Y | String | Directory to attach the workspace to. |
Example
- attach_workspace:
at: /tmp/workspace
Each workflow has a temporary workspace associated with it. The workspace can be used to pass along unique data built during a job to other jobs in the same workflow.
Jobs can add files into the workspace using the persist_to_workspace
step and download the workspace content into their file system using the attach_workspace
step.
The workspace is additive only, jobs may add files to the workspace but cannot delete files from the workspace. Each job can only see content added to the workspace by the jobs that are upstream of it.
When attaching a workspace the “layer” from each upstream job is applied in the order the upstream jobs appear in the workflow graph. When two jobs run concurrently the order in which their layers are applied is undefined. If multiple concurrent jobs persist the same filename then attaching the workspace will error.
If a workflow is re-run it inherits the same workspace as the original workflow. When re-running failed jobs only the re-run jobs will see the same workspace content as the jobs in the original workflow.
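Putting `persist_to_workspace` and `attach_workspace` together, a minimal two-job workflow could be sketched as follows (job names, image, and paths are illustrative):

```yaml
version: 2
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: mkdir -p workspace && echo "artifact" > workspace/output.txt
      - persist_to_workspace:
          root: workspace        # directory treated as the workspace root
          paths:
            - output.txt
  downstream:
    docker:
      - image: cimg/base:stable
    steps:
      - attach_workspace:
          at: /tmp/workspace
      - run: cat /tmp/workspace/output.txt  # sees the file persisted by build
workflows:
  version: 2
  btd:
    jobs:
      - build
      - downstream:
          requires:
            - build
```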
Note the following distinctions between Artifacts, Workspaces, and Caches:
Type | Lifetime | Use | Example |
---|---|---|---|
Artifacts | 1 Month | Preserve long-term artifacts. | Available in the Artifacts tab of the Job page under the tmp/circle-artifacts.<hash>/container or similar directory. |
Workspaces | Duration of workflow (up to 15 days) | Attach the workspace in a downstream container with the attach_workspace: step. | The attach_workspace copies and re-creates the entire workspace content when it runs. |
Caches | 15 Days | Store non-vital data that may help the job run faster, for example npm or Gem packages. | The save_cache job step with a path to a list of directories to add and a key to uniquely identify the cache (for example, the branch, build number, or revision). Restore the cache with restore_cache and the appropriate key. |
Refer to Persisting Data in Workflows: When to Use Caching, Artifacts, and Workspaces for additional conceptual information about using workspaces, caching, and artifacts.
add_ssh_keys
Special step that adds SSH keys from a project’s settings to a container. Also configures SSH to use these keys.
Key | Required | Type | Description |
---|---|---|---|
fingerprints | N | List | List of fingerprints corresponding to the keys to be added (default: all keys added) |
steps:
- add_ssh_keys:
fingerprints:
- "b7:35:a6:4e:9b:0d:6d:d4:78:1e:9a:97:2a:66:6b:be"
Note: Even though CircleCI uses `ssh-agent` to sign all added SSH keys, you must use the `add_ssh_keys` key to actually add keys to a container.
Using pipeline Values
Pipeline values are available to all pipeline configurations and can be used without previous declaration. The available pipeline values are as follows:
Value | Description |
---|---|
pipeline.id | A globally unique id representing the pipeline |
pipeline.number | A project unique integer id for the pipeline |
pipeline.project.git_url | E.g. https://github.com/circleci/circleci-docs |
pipeline.project.type | E.g. “github” |
pipeline.git.tag | The tag triggering the pipeline |
pipeline.git.branch | The branch triggering the pipeline |
pipeline.git.revision | The current git revision |
pipeline.git.base_revision | The previous git revision |
For example:
version: 2.1
jobs:
build:
docker:
- image: circleci/node:latest
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
environment:
IMAGETAG: latest
working_directory: ~/main
steps:
- run: echo "This is pipeline ID << pipeline.id >>"
workflows
Used for orchestrating all jobs. Each workflow consists of the workflow name as a key and a map as a value. A name should be unique within the current `config.yml`. The top-level keys for the Workflows configuration are `version` and `jobs`.
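A minimal workflows stanza, sketched with illustrative job names, has this shape:

```yaml
workflows:
  version: 2
  build-and-test:      # <workflow_name>
    jobs:
      - build
      - test:
          requires:
            - build    # test starts only after build succeeds
```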
version
The Workflows `version` field is used to issue warnings for deprecation or breaking changes during Beta.
Key | Required | Type | Description |
---|---|---|---|
version | Y | String | Should currently be 2 |
<workflow_name>
A unique name for your workflow.
triggers
Specifies which triggers will cause this workflow to be executed. Default behavior is to trigger the workflow when pushing to a branch.
Key | Required | Type | Description |
---|---|---|---|
triggers | N | Array | Should currently be schedule . |
schedule
A workflow may have a `schedule` indicating it runs at a certain time, for example a nightly build that runs every day at 12am UTC:
workflows:
version: 2
nightly:
triggers:
- schedule:
cron: "0 0 * * *"
filters:
branches:
only:
- master
- beta
jobs:
- test
cron
The `cron` key is defined using POSIX `crontab` syntax.
Key | Required | Type | Description |
---|---|---|---|
cron | Y | String | See the crontab man page. |
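For example, standard crontab expressions such as ranges are accepted; the sketch below (schedule values are illustrative) triggers a workflow at 08:00 UTC on weekdays only:

```yaml
workflows:
  version: 2
  weekday-morning:
    triggers:
      - schedule:
          cron: "0 8 * * 1-5"   # minute hour day-of-month month day-of-week
          filters:
            branches:
              only:
                - master
    jobs:
      - build
```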
filters
Filters can have the key branches
.
Key | Required | Type | Description |
---|---|---|---|
filters | Y | Map | A map defining rules for execution on specific branches |
branches
The `branches` key controls whether the current branch should have a schedule trigger created for it, where the current branch is the branch containing the `config.yml` file with the trigger stanza. That is, a push on the `master` branch will only schedule a workflow for the `master` branch.
Branches can have the keys `only` and `ignore`, which map to either a single string naming a branch or a list of such strings. You may also use regular expressions to match against branches by enclosing them in slashes (`/`). Regular expressions must match the entire string.
- Any branches that match `only` will run the job.
- Any branches that match `ignore` will not run the job.
- If neither `only` nor `ignore` are specified then all branches will run the job.
- If both `only` and `ignore` are specified, then `only` is considered before `ignore`.
Key | Required | Type | Description |
---|---|---|---|
branches | Y | Map | A map defining rules for execution on specific branches |
only | Y | String, or List of Strings | Either a single branch specifier, or a list of branch specifiers |
ignore | N | String, or List of Strings | Either a single branch specifier, or a list of branch specifiers |
jobs
A job can have the keys `requires`, `context`, `type`, and `filters`.
Key | Required | Type | Description |
---|---|---|---|
jobs | Y | List | A list of jobs to run with their dependencies |
<job_name>
A job name that exists in your `config.yml`.
requires
Jobs are run in parallel by default, so you must explicitly require any dependencies by their job name.
Key | Required | Type | Description |
---|---|---|---|
requires | N | List | A list of jobs that must succeed for the job to start. Note: When jobs in the current workflow that are listed as dependencies are not executed (due to a filter function for example), their requirement as a dependency for other jobs will be ignored by the requires option. However, if all dependencies of a job are filtered, then that job will not be executed either. |
name | N | String | A replacement for the job name. Useful when calling a job multiple times. If you want to invoke the same job multiple times and a job requires one of the duplicate jobs, this is required. (2.1 only) |
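As a sketch of the `name` key, the following invokes the same job twice and makes a third job depend on both copies (all job names are illustrative):

```yaml
workflows:
  build:
    jobs:
      - test:
          name: test-unit         # same job, first invocation
      - test:
          name: test-integration  # same job, second invocation
      - deploy:
          requires:
            - test-unit           # requires must reference the replacement names
            - test-integration
```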
context
Jobs may be configured to use global environment variables set for an organization. See the Contexts document for instructions on adding a context in the application settings.
Key | Required | Type | Description |
---|---|---|---|
context | N | String/List | The name of the context(s). The initial default name is org-global . Each context name must be unique. If using CircleCI Server, only a single Context per workflow is supported. |
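For example, a job can be given one or more contexts in the workflow (the context names here are illustrative; the list form requires 2.1 config):

```yaml
workflows:
  build:
    jobs:
      - deploy:
          context:
            - org-global       # env vars from this context become available to the job
            - deploy-secrets   # illustrative second context
```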
type
A job may have a `type` of `approval`, indicating it must be manually approved before downstream jobs may proceed. Jobs run in dependency order until the workflow processes a job with the `type: approval` key, followed by a job on which it depends, for example:
- hold:
type: approval
requires:
- test1
- test2
- deploy:
requires:
- hold
Note: The hold
job name must not exist in the main configuration.
filters
Filters can have the key `branches` or `tags`. Note: Workflows will ignore job-level branching. If you use job-level branching and later add workflows, you must remove the branching at the job level and instead declare it in the workflows section of your `config.yml`, as follows:
Key | Required | Type | Description |
---|---|---|---|
filters | N | Map | A map defining rules for execution on specific branches |
The following is an example of how the CircleCI documentation uses a regex to filter running a workflow for building PDF documentation:
# ...
workflows:
build-deploy:
jobs:
- js_build
- build_server_pdfs: # << the job to conditionally run based on the filter-by-branch-name.
filters:
branches:
only: /server\/.*/
The above snippet causes the job `build_server_pdfs` to run only when the branch being built starts with “server/”.
You can read more about using regex in your config in the Workflows document.
branches
Branches can have the keys `only` and `ignore`, which map to either a single string naming a branch or a list of such strings. You may also use regular expressions to match against branches by enclosing them in slashes. Regular expressions must match the entire string.
- Any branches that match `only` will run the job.
- Any branches that match `ignore` will not run the job.
- If neither `only` nor `ignore` are specified then all branches will run the job.
- If both `only` and `ignore` are specified, then `only` is considered before `ignore`.
Key | Required | Type | Description |
---|---|---|---|
branches | N | Map | A map defining rules for execution on specific branches |
only | N | String, or List of Strings | Either a single branch specifier, or a list of branch specifiers |
ignore | N | String, or List of Strings | Either a single branch specifier, or a list of branch specifiers |
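For instance, an `ignore` filter with a regular expression could be sketched like this (the branch pattern is illustrative):

```yaml
workflows:
  build:
    jobs:
      - build:
          filters:
            branches:
              ignore: /feature-.*/   # skip any branch starting with "feature-"
```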
tags
CircleCI does not run workflows for tags unless you explicitly specify tag filters. Additionally, if a job requires any other jobs (directly or indirectly), you must specify tag filters for those jobs.
Tags can have the keys `only` and `ignore`, which map to either a single string naming a tag or a list of such strings. You may also use regular expressions to match against tags by enclosing them in slashes. Regular expressions must match the entire string. Both lightweight and annotated tags are supported.
- Any tags that match `only` will run the job.
- Any tags that match `ignore` will not run the job.
- If neither `only` nor `ignore` are specified then the job is skipped for all tags.
- If both `only` and `ignore` are specified, then `only` is considered before `ignore`.
Key | Required | Type | Description |
---|---|---|---|
tags | N | Map | A map defining rules for execution on specific tags |
only | N | String, or List of Strings | Either a single tag specifier, or a list of tag specifiers |
ignore | N | String, or List of Strings | Either a single tag specifier, or a list of tag specifiers |
For more information, see the Executing Workflows For a Git Tag section of the Workflows document.
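A common pattern, sketched below, runs a deploy job only for version tags and never for branch pushes (the tag regex and job names are illustrative; note that build also needs tag filters because deploy requires it):

```yaml
workflows:
  release:
    jobs:
      - build:
          filters:
            tags:
              only: /^v.*/    # build must also run on tags, since deploy requires it
      - deploy:
          requires:
            - build
          filters:
            tags:
              only: /^v.*/    # run for tags like v1.2.3
            branches:
              ignore: /.*/    # never run on branch pushes
```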
matrix (requires version: 2.1)
The `matrix` stanza allows you to run a parameterized job multiple times with different arguments.
Key | Required | Type | Description |
---|---|---|---|
parameters | Y | Map | A map of parameter names to every value the job should be called with |
exclude | N | List | A list of argument maps that should be excluded from the matrix |
alias | N | String | An alias for the matrix, usable from another job’s requires stanza. Defaults to the name of the job being executed |
The following is a basic example of using matrix jobs.
workflows:
workflow:
jobs:
- build:
matrix:
parameters:
version: ["0.1", "0.2", "0.3"]
platform: ["macos", "windows", "linux"]
This expands to 9 different `build` jobs, and could be equivalently written as:
workflows:
workflow:
jobs:
- build:
name: build-macos-0.1
version: 0.1
platform: macos
- build:
name: build-macos-0.2
version: 0.2
platform: macos
- build:
name: build-macos-0.3
version: 0.3
platform: macos
- build:
name: build-windows-0.1
version: 0.1
platform: windows
- ...
Excluding sets of parameters from a matrix
Sometimes you may wish to run a job with every combination of arguments except some value or values. You can use an `exclude` stanza to achieve this:
workflows:
workflow:
jobs:
- build:
matrix:
parameters:
a: [1, 2, 3]
b: [4, 5, 6]
exclude:
- a: 3
b: 5
The matrix above would expand into 8 jobs: every combination of the parameters `a` and `b`, excluding `{a: 3, b: 5}`.
Dependencies and matrix jobs
To `require` an entire matrix (every job within the matrix), use its `alias`. The `alias` defaults to the name of the job being invoked.
workflows:
workflow:
jobs:
- deploy:
matrix:
parameters:
version: ["0.1", "0.2"]
- another-job:
requires:
- deploy
This means that `another-job` will require both deploy jobs in the matrix to finish before it runs.
Additionally, matrix jobs expose their parameter values via `<< matrix.* >>`, which can be used to generate more complex workflows. For example, here is a `deploy` matrix where each job waits for its respective `build` job in another matrix.
workflows:
workflow:
jobs:
- build:
name: build-v<< matrix.version >>
matrix:
parameters:
version: ["0.1", "0.2"]
- deploy:
name: deploy-v<< matrix.version >>
matrix:
parameters:
version: ["0.1", "0.2"]
requires:
- build-v<< matrix.version >>
This workflow will expand to:
workflows:
workflow:
jobs:
- build:
name: build-v0.1
version: 0.1
- build:
name: build-v0.2
version: 0.2
- deploy:
name: deploy-v0.1
version: 0.1
requires:
- build-v0.1
- deploy:
name: deploy-v0.2
version: 0.2
requires:
- build-v0.2
pre-steps and post-steps (requires version: 2.1)
Every job invocation in a workflow may optionally accept two special arguments: `pre-steps` and `post-steps`.
Steps under `pre-steps` are executed before any of the other steps in the job. Steps under `post-steps` are executed after all of the other steps.
Pre and post steps allow you to execute steps in a given job without modifying the job. This is useful, for example, to run custom setup steps before job execution.
version: 2.1
jobs:
bar:
machine: true
steps:
- checkout
- run:
command: echo "building"
- run:
command: echo "testing"
workflows:
build:
jobs:
- bar:
pre-steps: # steps to run before steps defined in the job bar
- run:
command: echo "install custom dependency"
post-steps: # steps to run after steps defined in the job bar
- run:
command: echo "upload artifact to s3"
Using when in Workflows
With CircleCI v2.1 configuration, you may use a `when` clause (the inverse clause `unless` is also supported) under a workflow declaration, with a logic statement to determine whether or not to run that workflow.
The example configuration below uses a pipeline parameter, `run_integration_tests`, to drive the `integration_tests` workflow.
version: 2.1
parameters:
run_integration_tests:
type: boolean
default: false
workflows:
integration_tests:
when: << pipeline.parameters.run_integration_tests >>
jobs:
- mytestjob
jobs:
...
This example prevents the workflow `integration_tests` from running unless the tests are invoked explicitly when the pipeline is triggered with the following in the `POST` body:
{
"parameters": {
"run_integration_tests": true
}
}
Refer to the Orchestrating Workflows document for more examples and conceptual information.
Logic statements
Certain dynamic configuration features accept logic statements as arguments. Logic statements are evaluated to boolean values at configuration compilation time, that is - before the workflow is run. The group of logic statements includes:
Type | Arguments | `true` if | Example |
---|---|---|---|
YAML literal | None | is truthy | true / 42 / "a string" |
YAML alias | None | resolves to a truthy value | *my-alias |
Pipeline Value | None | resolves to a truthy value | << pipeline.git.branch >> |
Pipeline Parameter | None | resolves to a truthy value | << pipeline.parameters.my-parameter >> |
and | N logic statements | all arguments are truthy | and: [ true, true, false ] |
or | N logic statements | any argument is truthy | or: [ false, true, false ] |
not | 1 logic statement | the argument is not truthy | not: true |
equal | N values | all arguments evaluate to equal values | equal: [ 42, << pipeline.number >> ] |
The following logic values are considered falsy:
- false
- null
- 0
- NaN
- empty strings (“”)
- statements with no arguments
All other values are truthy. Note also that using logic with an empty list will cause a validation error.
Logic statements always evaluate to a boolean value at the top level, and coerce as necessary. They can be nested in an arbitrary fashion, according to their argument specifications, and to a maximum depth of 100 levels.
Note: When using logic statements at the workflow level, do not include the `condition:` key (the `condition` key is only needed for job-level logic statements).
Logic statement examples
workflows:
my-workflow:
when:
or:
- equal: [ master, << pipeline.git.branch >> ]
- equal: [ staging, << pipeline.git.branch >> ]
workflows:
my-workflow:
when:
and:
- not:
equal: [ master, << pipeline.git.branch >> ]
- or:
- equal: [ canary, << pipeline.git.tag >> ]
- << pipeline.parameters.deploy-canary >>
version: 2.1
executors:
linux-13:
docker:
- image: cimg/node:13.13
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
macos: &macos-executor
macos:
xcode: 11.4
jobs:
test:
parameters:
os:
type: executor
node-version:
type: string
executor: << parameters.os >>
steps:
- checkout
- when:
condition:
equal: [ *macos-executor, << parameters.os >> ]
steps:
- run: echo << parameters.node-version >>
- run: echo 0
workflows:
all-tests:
jobs:
- test:
os: macos
node-version: "13.13.0"
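The same parameterized job could equally be invoked with the `linux-13` executor defined above; in that case the `equal` condition is false and the conditional step is skipped:

```yaml
workflows:
  all-tests:
    jobs:
      - test:
          os: linux-13
          node-version: "13.13.0"
```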
Example full configuration
version: 2
jobs:
build:
docker:
- image: ubuntu:14.04
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
- image: mongo:2.6.8
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
command: [mongod, --smallfiles]
- image: postgres:9.4.1
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
# some containers require setting environment variables
environment:
POSTGRES_USER: root
- image: redis@sha256:54057dd7e125ca41afe526a877e8bd35ec2cdd33b9217e022ed37bdcf7d09673
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
- image: rabbitmq:3.5.4
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
environment:
TEST_REPORTS: /tmp/test-reports
working_directory: ~/my-project
steps:
- checkout
- run:
command: echo 127.0.0.1 devhost | sudo tee -a /etc/hosts
# Create Postgres users and database
      # Note the YAML literal block scalar '|' for nicer formatting
- run: |
sudo -u root createuser -h localhost --superuser ubuntu &&
sudo createdb -h localhost test_db
- restore_cache:
keys:
- v1-my-project-{{ checksum "project.clj" }}
- v1-my-project-
- run:
environment:
SSH_TARGET: "localhost"
TEST_ENV: "linux"
command: |
set -xu
mkdir -p ${TEST_REPORTS}
run-tests.sh
cp out/tests/*.xml ${TEST_REPORTS}
- run: |
set -xu
mkdir -p /tmp/artifacts
create_jars.sh ${CIRCLE_BUILD_NUM}
cp *.jar /tmp/artifacts
- save_cache:
key: v1-my-project-{{ checksum "project.clj" }}
paths:
- ~/.m2
# Save artifacts
- store_artifacts:
path: /tmp/artifacts
destination: build
# Upload test results
- store_test_results:
path: /tmp/test-reports
deploy-stage:
docker:
- image: ubuntu:14.04
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
working_directory: /tmp/my-project
steps:
- run:
name: Deploy if tests pass and branch is Staging
command: ansible-playbook site.yml -i staging
deploy-prod:
docker:
- image: ubuntu:14.04
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD # context / project UI env-var reference
working_directory: /tmp/my-project
steps:
- run:
name: Deploy if tests pass and branch is Master
command: ansible-playbook site.yml -i production
workflows:
version: 2
build-deploy:
jobs:
- build:
filters:
branches:
ignore:
- develop
- /feature-.*/
- deploy-stage:
requires:
- build
filters:
branches:
only: staging
- deploy-prod:
requires:
- build
filters:
branches:
only: master