Phase 3 GCP - execution environments
Before you begin with the CircleCI server v4.8 execution environment installation phase, ensure you have run through the following:
In the following sections, replace any sections indicated by < > with your details.
Introduction
Execution environments in CircleCI server are resourced in the following ways:
- HashiCorp Nomad is used for scheduling jobs and running Docker jobs:
  - Nomad Servers are used for scheduling jobs. They can be installed and configured either within your Kubernetes cluster, or outside of your cluster within their own VMs.
  - Nomad Clients are installed outside your Kubernetes cluster and are used for running jobs that are configured to use the Docker execution environment.
  - CircleCI provides sample Terraform modules for you to reference and use to streamline the installation process if you are installing within AWS or GCP. Installing in this way also enables the use of the Nomad Autoscaler.
- Machine provisioner is used in AWS/GCP installations to provision resources and run jobs that are configured to use the machine execution environment, including Linux and Windows VMs, as well as Arm resources if your server installation is within AWS. If you install outside AWS or GCP (that is, a local installation) you will use self-hosted runners to run VM-configured jobs.
- Runner can be used for all execution environments, including macOS.
1. Nomad clients
Nomad is a workload orchestration tool that CircleCI uses to schedule (through Nomad server) and run (through Nomad clients) CircleCI jobs.
Nomad clients are installed outside of the Kubernetes cluster, while their control plane (Nomad Server) can be installed within the cluster or externally. Communication between your Nomad Clients and the Nomad control plane is secured with mTLS. The mTLS certificate, private key, and certificate authority will be output after you complete installation of the Nomad Clients.
If you are installing CircleCI server locally, rather than in AWS or GCP, see the Air-gapped docs on setting up Nomad clients.
a. Create your cluster with Terraform
CircleCI curates Terraform modules to help install Nomad clients in your chosen cloud provider. You can browse the modules in our public repository, including example Terraform config files for both AWS and GCP.
GCP cluster
You need the following information:
- The Domain name of the CircleCI application
- The GCP Project you want to run Nomad clients in
- The GCP Zone you want to run Nomad clients in
- The GCP Region you want to run Nomad clients in
- The GCP Network you want to run Nomad clients in
- The GCP Subnetwork you want to run Nomad clients in
A full example, as well as a full list of variables, can be found in the example GCP Terraform configuration.
Once you have filled in the appropriate information, you can deploy your Nomad clients by running the following commands:
terraform init
terraform plan
terraform apply
After Terraform is done spinning up the Nomad client(s), it outputs the certificates and key needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe.
b. Nomad Autoscaler configuration
Nomad can automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider’s auto scaling resource. With Nomad Autoscaler, you need to provide permission for the utility to manage your auto scaling resource and specify where it is located. CircleCI’s Nomad Terraform module can provision the permissions resources, or it can be done manually.
GCP autoscaler service account
Create a service account for Nomad Autoscaler. You may take one of the following approaches:
Option 1: CircleCI creates the Kubernetes Secret.
The CircleCI Nomad module can create a service account and output a file with the JSON key. For this option, set the variable nomad_auto_scaler = true. You may reference the examples in the link for more details. The created service account key will be available in a file named nomad-as-key.json.
Option 2: Use Workload Identity.
The CircleCI Nomad module can create a service account bound to Workload Identity and output the service account email address. Set the variables nomad_auto_scaler = true and enable_workload_identity = true.
Option 3: Create the Kubernetes Secret yourself.
When creating the Kubernetes Secret manually, an additional field is required, as outlined below.
# Base64 encoded additional configuration field
ADDITIONAL_CONFIG=dGFyZ2V0ICJnY2UtbWlnIiB7CiAgZHJpdmVyID0gImdjZS1taWciCiAgY29uZmlnID0gewogICAgY3JlZGVudGlhbHMgPSAiL2V0Yy9ub21hZC1hdXRvc2NhbGVyL2NyZWRzL2djcF9zYS5qc29uIgogIH0KfQo=
kubectl create secret generic nomad-autoscaler-secret \
--from-literal=gcp_sa.json=<service-account> \
--from-literal=secret.hcl=$ADDITIONAL_CONFIG
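If you want to confirm what the ADDITIONAL_CONFIG above contains before creating the Secret, you can decode it. It holds the Nomad Autoscaler target block for the gce-mig driver:

```shell
# Decode the additional configuration to inspect the autoscaler target block it contains
ADDITIONAL_CONFIG=dGFyZ2V0ICJnY2UtbWlnIiB7CiAgZHJpdmVyID0gImdjZS1taWciCiAgY29uZmlnID0gewogICAgY3JlZGVudGlhbHMgPSAiL2V0Yy9ub21hZC1hdXRvc2NhbGVyL2NyZWRzL2djcF9zYS5qc29uIgogIH0KfQo=
echo "$ADDITIONAL_CONFIG" | base64 -d
```

Note that the credentials path in the decoded output refers to the gcp_sa.json key name used in the kubectl command above.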
When creating a Nomad GCP service account manually, the service account will need the role compute.admin. It will also need the role iam.workloadIdentityUser if using Workload Identity. This step is only required if you choose not to create the service account using Terraform.
2. Nomad servers
In the previous section you deployed your Nomad clients and provisioned the IAM resources. Now you can deploy and configure the Nomad Servers.
a. Where to deploy Nomad servers
Nomad Servers are by default deployed within your CircleCI server cluster. However, Nomad Servers may be deployed externally. As with the Nomad clients, you can use the Terraform module CircleCI provides to deploy your Nomad Servers or as a guide for how such a deployment might look.
- If you wish to deploy your Nomad servers inside your CircleCI server cluster, continue to b. Nomad gossip encryption key below.
- If you wish to deploy your Nomad servers externally, follow these steps:
  1. Update your Terraform by setting the variable nomad_server_enabled to true.
  2. Run terraform apply to apply your changes. This deploys Nomad Servers with the default configuration to the existing infrastructure you created when setting up the Nomad clients earlier. For a full list of the Nomad Server options, refer to the variables.tf in our public repository.
  3. Update the nomad block in your Helm values.yaml with the following:

     nomad:
       server:
         internal: false
         serverHostname: <your nomad server domain or load-balancer>

  4. Once the above is complete, you can skip ahead to d. Nomad Autoscaler.
b. Nomad gossip encryption key
Nomad requires a key to encrypt communications. This key must be exactly 32 bytes long. CircleCI will not be able to recover the values if lost. Depending on how you prefer to manage Kubernetes Secrets, there are two options:
Option 1: Create the Kubernetes Secret yourself.
kubectl -n <namespace> create secret generic nomad-gossip-encryption-key \
--from-literal=gossip-key=<secret-key-32-chars>
kubectl -n <namespace> annotate secret/nomad-gossip-encryption-key \
meta.helm.sh/release-name=<helm-release-name> \
meta.helm.sh/release-namespace=<namespace> \
helm.sh/resource-policy=keep --overwrite
kubectl -n <namespace> label secret/nomad-gossip-encryption-key \
app.kubernetes.io/managed-by=Helm --overwrite
Once the Kubernetes Secret exists, no change to values.yaml is required. The Kubernetes Secret will be referenced by default.
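The <secret-key-32-chars> value for Option 1 can be any 32-character string. One way to generate it, a sketch assuming openssl is available, is to hex-encode 16 random bytes:

```shell
# 16 random bytes, hex-encoded, yield exactly 32 characters
GOSSIP_KEY=$(openssl rand -hex 16)
echo "${#GOSSIP_KEY}"    # prints 32
echo "$GOSSIP_KEY"
```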
Option 2: CircleCI creates the Kubernetes Secret.
CircleCI will create the Kubernetes Secret nomad-gossip-encryption-key automatically.
c. Nomad mTLS
The CACertificate, certificate, and privateKey can be found in the output of the Terraform module. You may either add the mTLS cert values to your values.yaml, or you may create a secret named nomad-mtls in the same namespace as your CircleCI server installation. The values must be base64 encoded.
Option 1: Generate a secret with the mTLS cert values
kubectl -n <namespace> create secret generic nomad-mtls \
--from-literal=ca.pem="<base64-encoded-ca-certificate>" \
--from-literal=cert.pem="<base64-encoded-certificate>" \
--from-literal=key.pem="<base64-encoded-private-key>"
kubectl -n <namespace> annotate secret/nomad-mtls \
meta.helm.sh/release-name=<helm-release-name> \
meta.helm.sh/release-namespace=<namespace> \
helm.sh/resource-policy=keep --overwrite
kubectl -n <namespace> label secret/nomad-mtls \
app.kubernetes.io/managed-by=Helm --overwrite
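The base64-encoded values come from the certificate, key, and CA files output by Terraform. A sketch of producing a single-line value follows; the example-cert.pem file here is a stand-in, so substitute the paths where you saved the Terraform outputs:

```shell
# Stand-in for one of the PEM files output by the Terraform module
printf -- '-----BEGIN CERTIFICATE-----\nMIIB0DCCAXagAwIB\n-----END CERTIFICATE-----\n' > example-cert.pem

# -w0 (GNU coreutils) keeps the base64 output on a single line
ENCODED=$(base64 -w0 example-cert.pem)
echo "$ENCODED"

# Round-trip check: decoding must return the original file contents
echo "$ENCODED" | base64 -d | diff - example-cert.pem && echo "round-trip OK"
```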
Option 2: Add the base64 encoded values to your values.yaml
nomad:
server:
...
rpc:
mTLS:
enabled: true
certificate: "<base64-encoded-certificate>"
privateKey: "<base64-encoded-private-key>"
CACertificate: "<base64-encoded-ca-certificate>"
d. Nomad Autoscaler
If you have enabled Nomad Autoscaler, also include the following section under nomad:
GCP
You created these values in the Nomad Autoscaler Configuration section.
nomad:
...
auto_scaler:
enabled: true
scaling:
max: <max-node-limit>
min: <min-node-limit>
gcp:
enabled: true
project_id: "<project-id>"
mig_name: "<instance-group-name>"
region: "<region>"
# or
zone: "<zone>"
workloadIdentity: "<service-account-email>"
# or
service_account: "<service-account-json>"
e. Helm upgrade
Apply the changes made to your values.yaml file:
namespace=<your-namespace>
helm upgrade circleci-server oci://cciserver.azurecr.io/circleci-server -n $namespace --version 4.7.0 -f <path-to-values.yaml>
f. Nomad clients validation
CircleCI has created a project called Reality Check which allows you to test your server installation. We are going to follow the project so we can verify that the system is working as expected. As you continue through the next phase, sections of Reality Check will move from red (fail) to green (pass).
Before running Reality Check, check that the Nomad servers can communicate with the Nomad clients by executing the following command:
kubectl -n <namespace> exec -it $(kubectl -n <namespace> get pods -l app=nomad-server -o name | tail -1) -- nomad node status
You should be able to see output like this:
ID DC Name Class Drain Eligibility Status
132ed55b default ip-192-168-44-29 linux-64bit false eligible ready
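Rather than eyeballing the table, you can count the clients reporting ready. A small sketch applied to the sample output above; in practice, pipe the output of the kubectl exec command into the same filter:

```shell
# Sample `nomad node status` output, as shown above
STATUS='ID        DC       Name              Class        Drain  Eligibility  Status
132ed55b  default  ip-192-168-44-29  linux-64bit  false  eligible     ready'

# Count data rows whose last column is "ready", skipping the header line
READY=$(printf '%s\n' "$STATUS" | awk 'NR > 1 && $NF == "ready" { n++ } END { print n + 0 }')
echo "ready clients: $READY"    # prints: ready clients: 1
```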
To run Reality Check, you need to clone the repository. Depending on your GitHub setup, you can use one of the following commands:
GitHub Enterprise
git clone https://github.com/circleci/realitycheck.git
git remote set-url origin <YOUR_GH_REPO_URL>
git push
Once you have successfully cloned the repository, you can follow it from within your CircleCI server installation. You need to set the following variables. For full instructions refer to the repository README.
Name | Value |
---|---|
CIRCLE_HOSTNAME | <YOUR_CIRCLECI_INSTALLATION_URL> |
CIRCLE_TOKEN | <YOUR_CIRCLECI_API_TOKEN> |
CIRCLE_CLOUD_PROVIDER | < |
You also need to create the following contexts and environment variables:

Name | Environmental Variable Key | Environmental Variable Value |
---|---|---|
org-global | CONTEXT_END_TO_END_TEST_VAR | Leave blank |
individual-local | MULTI_CONTEXT_END_TO_END_VAR | Leave blank |
Once you have configured the environmental variables and contexts, rerun the Reality Check tests. You should see the features and resource jobs complete successfully. Your test results should look something like the following:

3. Machine provisioner
Overriding scaling options is currently not supported, but will be supported in the future.
Machine provisioner is used to configure virtual machines for jobs that run in Linux VM, Windows and Arm VM execution environments, and those that are configured to use remote Docker. Machine provisioner is unique to AWS and GCP installations because it relies on specific features of these cloud providers.
Once you have completed the server installation process you can further configure machine provisioner, including the following:
- Building and specifying a Windows image to give developers access to the Windows execution environment.
- Specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times.
For more information, see the Manage Virtual Machines with machine provisioner page.
Before moving on to platform-specific steps, create your firewall rules. External VMs need the networking rules described in Hardening your Cluster.
GCP
You need additional information about your cluster to complete the next section. Run the following command:
gcloud container clusters describe <cluster-name>
This command returns something like the following, which includes network, region, and other details that you need to complete the next section:
addonsConfig:
gcePersistentDiskCsiDriverConfig:
enabled: true
kubernetesDashboard:
disabled: true
networkPolicyConfig:
disabled: true
clusterIpv4Cidr: 10.100.0.0/14
createTime: '2021-08-20T21:46:18+00:00'
currentMasterVersion: 1.20.8-gke.900
currentNodeCount: 3
currentNodeVersion: 1.20.8-gke.900
databaseEncryption:
…
- Create user

  We recommend you create a unique service account to be used exclusively by machine provisioner. The Compute Instance Admin (Beta) role is broad enough to allow machine provisioner to operate. If you wish to make permissions more granular, you can use the Compute Instance Admin (beta) role documentation as reference.

  gcloud iam service-accounts create circleci-server-vm --display-name "circleci-server-vm service account"

  If you are deploying CircleCI server in a shared VPC, you should create this user in the project in which you intend to run your VM jobs.

- Get the service account email address

  gcloud iam service-accounts list --filter="displayName:circleci-server-vm service account" --format 'value(email)'

- Apply role to service account

  Apply the Compute Instance Admin (beta) role to the service account:

  gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member serviceAccount:circleci-server-vm@<PROJECT_ID>.iam.gserviceaccount.com --role roles/compute.instanceAdmin --condition=None

  And:

  gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member serviceAccount:circleci-server-vm@<PROJECT_ID>.iam.gserviceaccount.com --role roles/iam.serviceAccountUser --condition=None
- Enable Workload Identity for the service account or get a JSON key file

  Choose one of the following options, depending on whether you are using Workload Identity.

  Option 1: Enable Workload Identity for the service account.

  This step is required only if you are using Workload Identities for GKE. Steps to enable Workload Identities are provided in Phase 1 - Prerequisites.

  gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member serviceAccount:circleci-server-vm@<PROJECT_ID>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --condition=None

  gcloud iam service-accounts add-iam-policy-binding circleci-server-vm@<PROJECT_ID>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<GCP_PROJECT_ID>.svc.id.goog[circleci-server/machine-provisioner]"

  Option 2: Get the service account JSON key file.

  If you are using Workload Identities for GKE, this step is not required. After running the following command, you should have a file named circleci-server-vm-keyfile in your local working directory. You will need this when you configure your server installation.

  gcloud iam service-accounts keys create circleci-server-vm-keyfile --iam-account <YOUR_SERVICE_ACCOUNT_EMAIL>
- Configure CircleCI server

  When using service account keys for configuring access for the machine provisioner, there are two options.

  Option 1: CircleCI creates the Kubernetes Secret.

  Add the machine provisioner configuration to values.yaml:

  machine_provisioner:
    enabled: true
    replicas: 1
    providers:
      gcp:
        enabled: true
        project_id: <project-id>
        network_tags:
          - circleci-vm
          - <your-network>
        zones:
          - <zone1>
          - <zone2>
        network: <network>
        subnetwork: <subnetwork>
        service_account: <service-account-json>
        # OR
        workloadIdentity: "" # Leave blank if using a JSON service account key, otherwise set to the service account email address

  Option 2: Create the Kubernetes Secret yourself.

  Instead of providing the service account in your values.yaml file, you may create the Kubernetes Secret yourself:

  kubectl create secret generic machine-provisioner-secret \
    --from-literal=gcp_sa.json=<access-key>
- Machine provisioner validation

  Apply the changes made to your values.yaml file.
namespace=<your-namespace>
helm upgrade circleci-server oci://cciserver.azurecr.io/circleci-server -n $namespace --version 4.7.0 -f <path-to-values.yaml>
Once you have configured and deployed CircleCI server, you should validate that machine provisioner is operational. You can rerun the Reality Check project within your CircleCI installation and you should see the machine provisioner jobs complete. At this point, all tests should pass.
4. Runner
Overview
CircleCI runner does not require any additional server configuration. CircleCI server ships ready to work with runner. However, you need to create a runner and configure the runner agent to be aware of your server installation. For complete instructions for setting up runner, see the runner documentation.
Runner requires a namespace per organization. CircleCI server can have many organizations. If your company has multiple organizations within your CircleCI installation, you need to set up a runner namespace for each organization within your server installation.