Fixed compatibility issues between logging collector updates and older Helm chart versions. The container-agent-instance label in the Pod informer is now optional, preventing scheduling conflicts when container agent instances run with mismatched component versions.
Fixed a bug where an error downloading the task agent binary was not correctly propagated, causing the runner to get stuck without retrying the download and requiring a manual restart to unblock.
Set resource requirements (requests and limits) on the orchestrator init container (https://github.com/circleci/runner-init). This change helps ensure the Pod is schedulable when resource quotas are applied.
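Requests and limits on an init container follow the standard Kubernetes form; the figures below are illustrative only, not the chart's actual defaults:

```yaml
# Illustrative sketch: actual values are set by the Helm chart.
initContainers:
  - name: orchestrator-init
    image: circleci/runner-init   # image per the linked repository
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi
```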
Added options to configure the image name for the orchestrator container. This can be used for hosting the image in a private registry or within an air-gapped environment on CircleCI server. Note this change requires version v101.1.3 of the Helm chart.
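In practice this is a Helm values override; the key names below are a hypothetical sketch, since the exact structure depends on the chart version (v101.1.3 or later is required):

```yaml
# Hypothetical values.yaml sketch for pointing the orchestrator image
# at a private or air-gapped registry; check the chart's own
# values.yaml for the exact keys.
agent:
  orchestrator:
    image:
      registry: registry.internal.example.com
      repository: mirror/runner-init
```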
Fixed the volume for GOAT and task-agent binaries to be read-write. It was unnecessarily read-only and caused issues with the circleci-cli orb, which overwrites the circleci symlink with the CLI.
The Self-hosted Runners section of the CircleCI web app showed the self-hosted runner’s hostname instead of the self-hosted runner’s name. This has been fixed to show the name.
Support for all 1.x variants of runner launch agent has ended. It has been replaced with Machine Runner 3.0. Any remaining access will be removed September 17th, 2024. If you have not yet migrated, your builds on runner will fail.
You can learn more about machine runner and migration here.
Fix a bug that broke OIDC token injection for Cloud Runner users
NOTE: Release 3.0.23 has been removed from our package repository. It contained a bug that broke OIDC token injection for Cloud Runner users. Please use release 3.0.24 instead.
We will end support for all 1.x variants of runner launch agent on July 31st 2024. It has been replaced with Machine Runner 3.0. You can learn more about machine runner and migration here.
Add lifecycle pre-stop command to gracefully terminate task agent on shutdown
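A Kubernetes preStop hook of this kind generally takes the shape sketched below; the container name, command, and timing are illustrative assumptions, not the chart's actual configuration:

```yaml
# Illustrative sketch: signal task-agent and allow it time to drain
# before the container is killed.
containers:
  - name: task-agent
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "pkill -TERM task-agent && sleep 10"]
```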
Ensure all Task lifecycle step events can be sent, even when the agent is shutting down and the context is cancelled
Don’t delete the task pod if task agent is still running
This is related to issue https://github.com/kubernetes/client-go/issues/724. The remotecommand package may not always handle network disconnects on the exec command, returning a nil error. Therefore, we don’t delete the task pod in this case, giving the task a chance to complete in an “orphaned” state.
Prevent Task lifecycle step output from being lost during long-running jobs
Improve task cleanup liveness check robustness and add extra logging
Fix draining and cleanup of a task on shutdown of the runner agent
Use stricter task pod config parsing with more helpful error messaging
Log errors from calls to Kubernetes API when loading token secrets on Kubernetes driver startup
Set reasonable resource requests and limits for logging containers
Add a liveness probe to check the health of the task-agent process
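A liveness probe that checks a process is still running can be sketched as follows; the exact command and timings the chart ships may differ:

```yaml
# Illustrative sketch: restart the container if the task-agent
# process disappears. Command and timings are assumptions.
livenessProbe:
  exec:
    command: ["/bin/sh", "-c", "pgrep task-agent"]
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3
```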
Respect node RBAC permission setting in K8s GC
Fix for custom Docker registry names
Fix for broken pipes while copying task agent when using Alpine-based images on MicroK8s
Fix an issue where the task token could be exposed in a job
Support Bash parameter expansion syntax in the job config file
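Bash parameter expansion lets a value in the job config fall back to a default when a variable is unset; a minimal illustration (the variable names here are hypothetical):

```shell
# ${VAR:-default} expands to $VAR, or "default" if VAR is unset or empty.
unset RUNNER_LOG_LEVEL
echo "${RUNNER_LOG_LEVEL:-info}"    # prints "info"

RUNNER_LOG_LEVEL=debug
echo "${RUNNER_LOG_LEVEL:-info}"    # prints "debug"
```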
Improve logging container configuration:
Only mount the service account token if there are service containers and therefore a logging container
Make the logging container respect the image pull policy of the task pod config, if set. This matches the behaviour of the primary and secondary containers in the task pod.
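Concretely, a setting like the one below in the task pod config (the field is standard Kubernetes; the surrounding structure is an assumption) now applies to the logging container as well:

```yaml
# Sketch: with this set, the logging container uses the same pull
# policy as the primary and secondary containers.
spec:
  containers:
    - name: primary
      image: registry.example.com/ci-image:latest
      imagePullPolicy: IfNotPresent
```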
Improve task pod event logging in the Task lifecycle step