CircleCI’s self-hosted runner FAQs
This page answers frequently asked questions for CircleCI’s self-hosted container and machine runners.
FAQs
What is a CircleCI task vs. a job?
A task is the smallest unit of work on CircleCI. A job with a parallelism of one is a single task. A job with a parallelism of n, where n > 1, creates n tasks to execute.
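For example, the following configuration is a minimal sketch (the job name and resource class are illustrative); with parallelism set to 4, each run of the job creates four tasks, each of which can be picked up by an available self-hosted runner:
version: 2.1
jobs:
  test:
    machine: true
    resource_class: my-namespace/my-runner # illustrative self-hosted runner resource class
    parallelism: 4 # this single job fans out into 4 tasks
    steps:
      - checkout
      - run: echo "these steps run in each of the 4 tasks"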
What is a runner resource class? What is a resource class token?
A resource class is a label used to match your CircleCI job with a type of runner (or container runner) that is identified to process that job. The first part of the resource class is your organization's namespace. For example, a CircleCI resource class could be circleci/documentation.
Resource classes help you identify a pool of self-hosted runners, which allows you to set up your configuration to send jobs to specific resources. For example, if you have multiple machines running macOS and multiple machines running Linux, you could create a resource class for each of these, orgname/macOS and orgname/linux, respectively. At the job level in your .circleci/config.yml, you can associate which self-hosted runner resources to send a job to based on the resource class.
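For example, a job in .circleci/config.yml can target the Linux pool described above by its resource class; this is a minimal sketch rather than a complete configuration:
jobs:
  build:
    machine: true
    resource_class: orgname/linux # send this job to the self-hosted runners in the orgname/linux pool
    steps:
      - checkout
      - run: make test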
Every time you create a resource class, a resource class token is generated that is associated with the given resource class. This token is the method by which CircleCI authenticates that the resource class is valid. The resource class token only has access to claim tasks.
What is the security model for CircleCI’s self-hosted runners?
Machine runners let you choose the user that executes jobs when the self-hosted runner is installed. It is up to you to ensure this user only has permissions you are comfortable letting jobs use. The container runner installation page describes the security model for container runners.
Allowing jobs to access a Docker daemon is equivalent to providing root access to the machine.
How do I install dependencies needed for my jobs that use machine runners?
Install dependencies in one of two ways:
- Allow the jobs to install their own dependencies. This approach is the most flexible, but it requires giving jobs sufficient privileges to install tools, or installing the tools in a non-overlapping manner, for example into the working directory (see the sketch after this list).
- Pre-install dependencies on the machine where the machine runner is installed. This approach is the most secure; however, it means that if the job's dependencies change, the self-hosted runner machine must be reconfigured.
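As a minimal sketch of the first approach, a job can download a tool into its own working directory instead of installing it system-wide (the resource class, tool name, and URL are placeholders):
jobs:
  build:
    machine: true
    resource_class: my-namespace/my-runner # illustrative
    steps:
      - checkout
      - run:
          name: Install a tool into the working directory
          command: |
            mkdir -p ./bin
            # placeholder URL; substitute the real download location for your tool
            curl -fsSL https://example.com/mytool.tar.gz | tar -xz -C ./bin
            ./bin/mytool --version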
What connectivity is required?
In order to connect back to CircleCI to receive and execute jobs, outbound HTTPS connections to runner.circleci.com and circleci-binary-releases.s3.amazonaws.com are required.
No inbound connectivity is required by a self-hosted runner. Any other required connectivity is dependent on the content of the jobs themselves.
Using the checkout step will require access to your VCS provider. Using the cache, workspace, or artifact features will require outbound HTTPS connections to circleci-tasks-prod.s3.us-east-1.amazonaws.com.
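As a quick sanity check before installing a self-hosted runner (a sketch, not an official diagnostic), you can confirm outbound HTTPS connectivity from the host:
# each command should print an HTTP status code rather than a connection error
curl -s -o /dev/null -w "%{http_code}\n" https://runner.circleci.com
curl -s -o /dev/null -w "%{http_code}\n" https://circleci-binary-releases.s3.amazonaws.com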
How do caching, workspaces, and artifacts work with CircleCI’s self-hosted runners?
Caches, workspaces, and artifacts are methods you can implement to help persist data between jobs, and speed up builds. All three features are compatible with self-hosted runners.
Find out more about these concepts on the Persisting data page.
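The built-in steps work the same way on self-hosted runners as they do on CircleCI cloud. The following is a minimal sketch of the relevant steps (cache keys, paths, and commands are illustrative):
steps:
  - restore_cache:
      keys:
        - deps-v1-{{ checksum "package-lock.json" }}
  - run: npm install
  - save_cache:
      key: deps-v1-{{ checksum "package-lock.json" }}
      paths:
        - ./node_modules
  - persist_to_workspace: # share files with downstream jobs in the workflow
      root: .
      paths:
        - build
  - store_artifacts: # upload files to CircleCI artifact storage
      path: test-results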
If you would prefer to take complete control of artifact storage, CircleCI recommends you avoid the built-in steps and upload the artifacts directly to your chosen storage backend.
What are the best practices for managing state between jobs?
Machine runners are not opinionated about state between jobs. Machine runners can be configured to give each job a unique working directory and to clean it up afterwards, but this is optional. By default, nothing restricts a job from placing files outside of its working directory.
In general, CircleCI recommends that jobs rely on as little state as possible, to improve their reproducibility. An effective way to accomplish this is to put cleanup steps at the start of a job, so they are guaranteed to run regardless of what happened in a previous job.
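As a sketch of that pattern, a cleanup step can run before anything else in the job (the paths shown are illustrative and depend on what your builds leave behind):
steps:
  - run:
      name: Clean up state left by any previous job
      command: rm -rf ./build ./tmp ./node_modules
  - checkout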
It may be possible to reduce build times by making use of caches that persist on the host between jobs. However, this is a trade-off against reproducibility, and it may also lead to disks filling up over time. As a result, this trade-off could lead to higher billable usage.
How long do inactive self-hosted runners persist in the self-hosted runner inventory page?
If a self-hosted runner has not contacted CircleCI in the last 12 hours, it will no longer show up on the inventory page in the CircleCI web app.
Can I delete self-hosted runner resource classes?
Yes, self-hosted runner resource classes can be deleted through the CLI. Be sure you want to permanently delete the resource class and its associated tokens, as this action cannot be undone.
circleci runner resource-class delete <resource-class> --force
Who can create, delete, and view self-hosted runner resource classes?
Organization admins in your VCS provider can create and delete self-hosted runner resource classes. Any user in the VCS organization that the resource class is associated with can view the resource class list through the CLI.
Can I delete runner resource class tokens?
Yes, runner resource class tokens can be deleted through the CLI. Be sure you want to permanently delete the token as this action cannot be undone. Note this will not delete the resource class itself, only the token.
To get the list of tokens and their identifiers:
circleci runner token list <resource-class name>
To delete the token itself:
circleci runner token delete <token identifier>
Can I create additional runner resource class tokens?
Yes, additional runner resource class tokens for a specific runner resource class can be created through the CLI.
To create the token:
circleci runner token create <resource-class-name> <nickname>
Can jobs on forks of my OSS project use my organization’s self-hosted runners if the fork is not a part of my organization?
No, runner resource classes cannot be used by jobs that are not associated with the organization that owns the runner resource classes. Only forks of your OSS project that are a part of your organization may use the organization’s self-hosted runners.
Why did my test splitting job step error with circleci: command not found?
On self-hosted runners, circleci-agent is used for all commands for which you may use either circleci-agent or circleci on CircleCI cloud (such as test splitting and step halt commands). Note that this circleci is not to be confused with the local CircleCI CLI; it is simply an alias of circleci-agent.
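For example, a test-splitting step on a self-hosted runner can call circleci-agent directly. This is a sketch: the glob pattern and test command are illustrative, and it assumes circleci-agent exposes the same tests subcommands as the circleci alias does on cloud:
- run:
    name: Run split tests
    command: |
      TESTFILES=$(circleci-agent tests glob "tests/**/*_test.go" | circleci-agent tests split --split-by=timings)
      go test $TESTFILES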
If you would like to use the local CircleCI CLI in your self-hosted runner jobs, which can proxy test commands to circleci-agent, you can install the CLI via a job step. Install the CLI as a dependency on your machine for machine runner, or include it in a Docker image for container runner.
Container runner specific FAQs
This section answers frequently asked questions for CircleCI’s container runner.
Is there only one resource class allowed per container runner deployment?
No, you can use as many resource classes as you desire with your container runner deployment. At least one resource class is required in order to run a job successfully with container runner.
Does container runner use a pull or push based model?
Container runner uses a pull-based model.
Does container runner scale my Kubernetes cluster for me?
Container runner itself runs as its own deployment with a single replica set and does not currently require scaling. Container runner will not scale the Kubernetes cluster itself. It schedules work if there are available resources in the cluster.
You can use the queue depth API as a signal for cluster scaling.
Is there a limit for the number of concurrent tasks that container runner can handle?
Container runner will claim and schedule work up to your runner concurrency limit. Additionally, by default, container runner is configured with a limit of 20 tasks that it will allow to be concurrently scheduled and running. This can be configured via Helm to a different value if your runner concurrency allows for a value greater than 20. See the agent.maxConcurrentTasks parameter on the Container runner page.
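For example, the limit can be raised in your Helm values file (a sketch; the value shown assumes your organization's runner concurrency allows it):
agent:
  maxConcurrentTasks: 30
Apply the change with helm upgrade, as described in the Helm chart question below.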
An organization's runner concurrency limit is shared with any existing machine self-hosted runners. If you do not know what your organization's runner concurrency limit is, ask your point of contact at CircleCI, or submit a support ticket.
Can I build Docker images with container runner either via Remote Docker or Docker in Docker (DIND)?
See building container images for details.
Can I use something other than Kubernetes with container runner?
At this time, no. Kubernetes and Helm are required.
Does container runner require specific Kubernetes providers?
No, any Kubernetes provider can be used.
Does container runner need to sit within the cluster that it deploys pods to?
At this time, yes.
What platforms can you install container runner on?
amd64 and arm64 Linux, for both container runner and the pods that execute tasks.
Does container runner support arm64 Docker images?
Yes, container runner supports jobs that use either amd64 or arm64 Docker images, as well as Kubernetes clusters that use a mixture of amd64 and arm64 nodes. When using images built for a specific architecture, resource classes need to be configured to target a node with that CPU architecture. Kubernetes automatically provides several node labels that are helpful for configuring the resource class pod specifications so that a job is deployed on the correct node. An example resource class configuration is shown below. More information about these labels can be found in the Kubernetes documentation.
agent:
  resourceClasses:
    <amd64 image resource class>:
      token: <amd64 resource class token>
      spec:
        nodeSelector: # nodeSelector will cause this resource class to only create pods on nodes with the specified labels and values
          kubernetes.io/arch: amd64
    <arm64 image resource class>:
      token: <arm64 resource class token>
      spec:
        nodeSelector:
          kubernetes.io/arch: arm64
    <multiarchitecture image resource class>: # note no nodeSelector is defined for the multiarchitecture image resource class
      token: <multiarchitecture resource class token>
How do I uninstall container runner?
To uninstall the container-agent deployment, run:
$ helm uninstall container-agent
The command removes all the Kubernetes objects associated with the chart and deletes the release.
Does container runner replace the existing self-hosted runner from CircleCI?
No, container runner is meant to complement machine runners. With container runner and machine runners, CircleCI users have the flexibility to choose the execution environment they desire (Container vs. Machine) just like they are afforded on CircleCI’s cloud platform.
If there are two container runners deployed to a single Kubernetes cluster, how does the agent.maxConcurrentTasks parameter work?
The agent.maxConcurrentTasks parameter applies to each agent individually. However, multiple container runner deployments per Kubernetes cluster are not recommended at this time.
How do I upgrade to the latest Helm chart?
Updates to the Helm chart can be applied via:
$ helm repo update
$ helm upgrade container-agent
How is container runner versioned?
Container runner uses semantic versioning for both the container runner application and the Helm chart used for installation. The container runner image provides a floating tag for each major and minor version, which points to the most recent release of each, as well as a fully qualified tag that points to a specific patch release of a minor version.
How is a version of container runner supported?
The container runner application promises backwards compatibility for releases within the same major version, as well as vulnerability and bug support for the most recent minor version. The Helm chart for container runner promises backwards compatibility with the values file within the same major version.
What are the security considerations for container runner?
Just like a machine runner, a container runner allows users to run arbitrary code in the infrastructure where container runner is hosted, meaning a bad actor could potentially use it as a method to gain knowledge of internal systems. Ensure you are following all best practices for security to mitigate this risk.
How can an IAM role be used to authorize pulling images from ECR?
An IAM role can be associated with the service account used for the container runner by following the AWS documentation. If an image in a job configuration specifies AWS credentials, those credentials will be used instead of the IAM role attached to the container runner service account. See the Container runner documentation for more details about the container runner service account.
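On Amazon EKS, a common pattern is IAM Roles for Service Accounts (IRSA), where the IAM role is attached by annotating the service account that container runner uses. The values below are a sketch only: the serviceAccount keys are an assumption about the chart layout, and the role ARN is a placeholder, so check the chart's values file and the AWS documentation before applying:
serviceAccount:
  create: true
  annotations:
    # placeholder ARN; substitute an IAM role with permission to pull from your ECR registries
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/container-agent-ecr-pull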
What if I want to run my CI job within a container, but do not want to use Kubernetes?
If you would like to run your CI job within a container, but do not want to use Kubernetes, you can use a machine runner with Docker installed.
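As a sketch of that approach (the resource class, image, and commands are illustrative), a machine runner job can run its work inside a container that it starts with Docker:
jobs:
  containerized-tests:
    machine: true
    resource_class: my-namespace/docker-host # a machine runner host with Docker installed
    steps:
      - checkout
      - run: docker run --rm -v "$PWD":/src -w /src node:20 npm test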
Machine runner specific FAQs
This section answers frequently asked questions for CircleCI’s machine runner.
How can I tell whether a host with a self-hosted runner installed is executing a job?
The recommended approach at this time is to query the host with the following command:
ps aux | grep circleci-launch-agent
If the command above returns more than two processes, you can assume that the machine runner is executing a task.
Note that you must check for more than two processes because the grep process itself counts as one process, and the launch-agent process counts as a separate process.