Runner Configuration reference

Tags: Cloud, Server v3.x


A YAML file is used to configure the launch agent: how it communicates with CircleCI, and how it launches the task agent.

The configuration file uses the following format, with the various parameters explained in more detail below:

  auth_token: AUTH_TOKEN
The launch agent can also be configured via environment variables. If set, these take precedence over the corresponding values in the YAML file.
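For orientation, a fuller configuration might look like the following sketch. The token, name, URL, and paths are placeholders, and you would include only the parameters you need:

```yaml
api:
  auth_token: AUTH_TOKEN
  # url is only needed for server v3.2 installations
  url: https://circleci.example.com

runner:
  name: RUNNER_NAME
  mode: continuous
  working_directory: /var/opt/circleci/workdir/%s
  cleanup_working_directory: true
  max_run_time: 5h
  idle_timeout: 10m

logging:
  file: /var/log/circleci-runner.log
```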



api.auth_token

This is a token used to identify the launch agent to CircleCI, and it can be generated with the CircleCI CLI. An existing token may be shared among many installations, but a given token only allows one particular resource_class to be specified.



api.url

This is the URL the runner uses to communicate with CircleCI. This parameter is only required for runners installed on CircleCI server v3.2.


runner.name

This is a unique name assigned to this particular launch agent. CircleCI recommends using the hostname of the machine so the agent can be identified when viewing statuses and job results in the CircleCI UI.


logging.file

This controls the file to which the launch agent and task agent logs are written. See CircleCI runner operation for more information on the launch agent and the task agent.


  logging:
    file: /Library/Logs/com.circleci.runner.log



runner.command_prefix

This prefix enables you to customize how the task agent process is launched. Supplying a custom script here allows you to execute arbitrary commands before and after the task agent runs. Take care to ensure the supplied arguments are executed, and that the correct exit code is returned from the script upon completion.
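For example, a common pattern is to run the task agent as a dedicated, less-privileged user; the circleci user below is an assumption about your installation:

```yaml
runner:
  command_prefix: ["sudo", "-niHu", "circleci", "--"]
```

If you point the prefix at a wrapper script instead, the script must run the arguments it is given (for example via exec "$@" in a shell script) and exit with the task agent's exit code.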



runner.working_directory

This parameter allows you to control the default working directory used by each job. If the directory already exists, the task agent will need permission to write to it. If the directory does not exist, the task agent will need permission to create it. If %s is present in the value, it will be replaced with a different value for each job. Note that these directories are not automatically removed.
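For example, with %s in the path each job gets its own directory; the path prefix below is illustrative:

```yaml
runner:
  working_directory: /var/opt/circleci/workdir/%s
```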



runner.cleanup_working_directory

This boolean parameter controls whether the working directory is cleaned up after each job. The default value is false.



runner.mode

This parameter allows you to specify whether the runner instance terminates upon completion of a job (single-task) or continuously polls for new available jobs.

The possible values are:

  • continuous

  • single-task

The default value is continuous.
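For example, to have the runner exit after completing one job, which can be useful for ephemeral or autoscaled runner instances:

```yaml
runner:
  mode: single-task
```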



runner.max_run_time

This value can be used to override the default maximum duration the task agent will run each job. The value is a string using the unit identifiers h, m, or s for hours, minutes, and seconds respectively.

Here are a few valid examples:

  • 72h - 3 days

  • 1h30m - 1 hour 30 minutes

  • 30s - 30 seconds

  • 50m - 50 minutes

  • 1h30m20s - An overly specific (yet still valid) duration

The default value is 5 hours.
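For example, to cap each job at 1 hour and 30 minutes:

```yaml
runner:
  max_run_time: 1h30m
```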

Customizing job timeouts and drain timeouts

If you need to stop a runner before a job’s timeout is reached, you can “drain” it by sending the launch agent a termination (TERM) signal, which causes the launch agent to attempt a graceful shutdown. When the TERM signal is received, the launch agent enters “draining” mode: it stops accepting new jobs, but still allows any currently active job to complete. At the end of “draining,” the launch agent signals the task agent to cancel any active job by sending it a TERM signal.

If the task agent does not exit within a brief period after receiving the TERM signal, the launch agent will forcibly terminate it by sending a KILL signal.

Draining can end in one of two ways:

  • The task has been in the draining state for longer than the configured max_run_time

  • An additional TERM signal is received by the launch agent during “draining”



runner.idle_timeout

This timeout causes the launch agent to terminate if no task has been claimed within the given time period. The value is a string with the unit identifiers h, m, or s for hours, minutes, and seconds respectively (for example, 5m is 5 minutes).

The default behavior is to never time out due to inactivity.
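For example, to have the launch agent shut down after five minutes with no claimed task:

```yaml
runner:
  idle_timeout: 5m
```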



runner.ssh.advertise_addr

This parameter enables the “Rerun job with SSH” feature. Before enabling this feature, review the important considerations below.

The address is of the form host:port and is displayed in the “Enable SSH” and “Wait for SSH” sections of a rerun job.

While the presence of the runner.ssh.advertise_addr parameter enables the “Rerun job with SSH” feature, its value is used only for display in the web UI. The address does not need to match the actual host and port of the machine the runner is installed on; it can, for example, point to a proxy.
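For example, to advertise a public hostname (a placeholder here) together with the fixed SSH host port:

```yaml
runner:
  ssh:
    advertise_addr: runner.example.com:54782
```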

Considerations before enabling SSH debugging

The task agent runs an embedded SSH server and agent on a dedicated port when the “Rerun job with SSH” option is activated. This feature does not affect any other SSH servers or agents on the system where the runner is installed.

  • The host port used by the SSH server is currently fixed to 54782. Ensure this port is unblocked and available for SSH connections. A port conflict can occur if multiple launch agents are installed on the same host.

  • The SSH server will inherit the same user privileges and associated access authorizations as the task agent, defined by the runner.command_prefix parameter.

  • The SSH server is configured for public key authentication. Anyone with permission to initiate a job can rerun it with SSH. However, only the user who initiated the rerun will have their SSH public keys added to the server for the duration of the SSH session.

  • Rerunning a job with SSH will hold the job open for two hours if a connection is made to the SSH server, or ten minutes if no connection is made, unless cancelled. While in this state, the job is counted against an organization’s concurrency limit, and the task agent will be unavailable to handle other jobs. Therefore, it is recommended to cancel an SSH rerun job explicitly (through the web UI or CLI) when finished debugging.

Help make this document better

This guide, as well as the rest of our docs, is open source and available on GitHub. We welcome your contributions.