# Phase 3 - execution environments

Before you begin with the CircleCI server v4.4 execution environment installation phase, ensure you have run through [Phase 1 – Prerequisites](https://circleci.com/docs/server-admin/server-4.4/installation/phase-1-prerequisites/) and [Phase 2 - Core Services Installation](https://circleci.com/docs/server-admin/server-4.4/installation/phase-2-core-services/).

In the following sections, replace any values indicated by `< >` with your details.

## Introduction

Execution environments in CircleCI server are resourced in the following ways:

*   HashiCorp Nomad is used for scheduling jobs and running Docker jobs
    
    *   Nomad Server is installed and configured within your Kubernetes cluster and is used for scheduling jobs
        
    *   Nomad Clients are installed outside your Kubernetes cluster and are used for running jobs that are configured to use the Docker execution environment. CircleCI provides sample Terraform modules for you to reference and use to streamline the installation process if you are installing within AWS or GCP. Installing in this way also enables the use of the Nomad Autoscaler.
        
    
*   [Machine provisioner](#machine-provisioner) is used in AWS/GCP installations to provision resources and run jobs that are configured to use the `machine` execution environment, including Linux and Windows VMs, as well as Arm resources if your server installation is within AWS. If you install outside AWS or GCP (that is, a local installation), you will use self-hosted runners to run VM-configured jobs.
    
*   [Runner](https://circleci.com/docs/guides/execution-runner/runner-overview/) can be used for all execution environments, including macOS.
    

## 1\. Nomad clients

Nomad is a workload orchestration tool that CircleCI uses to schedule (through Nomad server) and run (through Nomad clients) CircleCI jobs.

Nomad clients are installed outside of the Kubernetes cluster, while their control plane (Nomad Server) is installed within the cluster. Communication between your Nomad Clients and the Nomad control plane is secured with mTLS. The mTLS certificate, private key, and certificate authority will be output after you complete installation of the Nomad Clients.

If you are installing CircleCI server locally, rather than in AWS or GCP, see the [Air-gapped](https://circleci.com/docs/server-admin/server-4.4/air-gapped-installation/phase-4-configure-nomad-clients/) docs on setting up Nomad clients.

### a. Create your cluster with Terraform

CircleCI curates Terraform modules to help install Nomad clients in your chosen cloud provider. You can browse the modules in our [public repository](https://github.com/CircleCI-Public/server-terraform), including example Terraform config files for both AWS and GCP.

#### AWS cluster

You need some information about your cluster and server installation to populate the required variables for the Terraform module. A full example, as well as a full list of variables, can be found in the [example AWS Terraform configuration](https://github.com/CircleCI-Public/server-terraform/tree/main/nomad-aws).

*   `server_endpoint` - This is the domain name of the CircleCI application.
    
*   **Subnet ID (`subnet`)**, **VPC ID (`vpc_id`)**, and **DNS server (`dns_server`)** of your cluster. Run the following commands to get the cluster VPC ID (`vpcId`), and subnets (`subnetIds`):
    
    `````````
    # Fetch VPC ID
    aws eks describe-cluster --name=<cluster-name> --query "cluster.resourcesVpcConfig.vpcId" --region=<region> --output text | xargs
    
    # Fetch Subnet IDs
    aws eks describe-cluster --name=<cluster-name> --query "cluster.resourcesVpcConfig.subnetIds" --region=<region> --output text | xargs
    `````````
    
    This returns something similar to the following:
    
    `````````
    # VPC Id
    vpc-02fdfff4ca
    
    # Subnet Ids
    subnet-08922063f12541f93 subnet-03b94b6fb1e5c2a1d subnet-0540dd7b2b2ddb57e subnet-01833e1fa70aa4488
    `````````
    
    Then, using the VPC ID you just found, run the following command to get the CIDR block for your cluster. For AWS, the DNS server is the third IP in your CIDR block (`CidrBlock`). For example, if your CIDR block is `10.100.0.0/16`, the third IP is `10.100.0.2`.
    
    `````````
    aws ec2 describe-vpcs --filters Name=vpc-id,Values=<vpc-id> --query "Vpcs[].CidrBlock" --region=<region> --output text | xargs
    `````````
    
    This returns something like the following:
    
    `````````
    192.168.0.0/16
    `````````
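    The third-IP arithmetic can be scripted. A minimal sketch using the example CIDR block above (it assumes the block is wide enough that the third address stays in the final octet, which holds for typical `/16` or `/24` VPCs):

    ```shell
    # Derive the AWS DNS server address (the third IP) from a CIDR block.
    cidr="192.168.0.0/16"        # example value returned by the command above
    base="${cidr%/*}"            # drop the prefix length -> 192.168.0.0
    prefix="${base%.*}"          # first three octets     -> 192.168.0
    last="${base##*.}"           # final octet            -> 0
    dns_server="$prefix.$((last + 2))"
    echo "$dns_server"           # -> 192.168.0.2
    ```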
    

Once you have filled in the appropriate information, you can deploy your Nomad clients by running the following commands:

`````````
terraform init
`````````

`````````
terraform plan
`````````

`````````
terraform apply
`````````

After Terraform is done spinning up the Nomad client(s), it outputs the certificates and keys needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe. The apply process usually only takes a minute.

#### GCP cluster

You need the following information:

*   The Domain name of the CircleCI application
    
*   The GCP Project you want to run Nomad clients in
    
*   The GCP Zone you want to run Nomad clients in
    
*   The GCP Region you want to run Nomad clients in
    
*   The GCP Network you want to run Nomad clients in
    
*   The GCP Subnetwork you want to run Nomad clients in
    

A full example, as well as a full list of variables, can be found in the [example GCP Terraform configuration](https://github.com/CircleCI-Public/server-terraform/tree/main/nomad-gcp).

Once you have filled in the appropriate information, you can deploy your Nomad clients by running the following commands:

`````````
terraform init
`````````

`````````
terraform plan
`````````

`````````
terraform apply
`````````

After Terraform is done spinning up the Nomad client(s), it outputs the certificates and key needed for configuring the Nomad control plane in CircleCI server. Copy them somewhere safe.

### b. Nomad Autoscaler configuration

Nomad can automatically scale up or down your Nomad clients, provided your clients are managed by a cloud provider’s auto scaling resource. With Nomad Autoscaler, you need to provide permission for the utility to manage your auto scaling resource and specify where it is located. CircleCI’s Nomad Terraform module can provision the permissions resources, or it can be done manually.

#### AWS autoscaler IAM/role

Create an IAM user or role and policy for Nomad Autoscaler. You may take **one** of the following approaches:

*   The CircleCI [Nomad module](https://github.com/CircleCI-Public/server-terraform/tree/main/nomad-aws) creates an IAM user and outputs the keys if you set variable `nomad_auto_scaler = true`. You may reference the example in the link for more details. If you have already created the clients, you can update the variable and run `terraform apply`. The created user’s access and secret key will be available in Terraform’s output.
    
*   Create a Nomad Autoscaler IAM user manually with the [IAM policy below](#iam-policy-for-nomad-autoscaler). Then, generate an access and secret key for this user.
    
*   You may create a [Role for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) for Nomad Autoscaler and attach the [IAM policy below](#iam-policy-for-nomad-autoscaler):
    

When using access and secret keys, you have two options for configuration:

**Option 1:** CircleCI creates the Kubernetes Secret for you.

Add the access key and secret to your `values.yaml` file with any additional required configuration:

`````````
nomad:
...
  auto_scaler:
    aws:
      accessKey: "<access-key>"
      secretKey: "<secret-key>"
`````````

**Option 2:** Create the Kubernetes Secret yourself.

Instead of storing the access key and secret in your `values.yaml` file, you may create the Kubernetes Secret yourself.

When using this method, an additional field is required for this secret, as outlined below.

First, add your access key, secret, and region to the following text, and encode it all with base64.

`````````
ADDITIONAL_CONFIG=`cat << EOF | base64
target "aws-asg" {
  driver = "aws-asg"
  config = {
    aws_region = "<aws-region>"
    aws_access_key_id = "<access-key>"
    aws_secret_access_key = "<secret-key>"
  }
}
EOF`
`````````

Then, using that additional base64 encoded config, create the Kubernetes Secret.

`````````
# With the base64-encoded additional config from above
kubectl create secret generic nomad-autoscaler-secret \
  --from-literal=secret.hcl=$ADDITIONAL_CONFIG
`````````

#### IAM policy for Nomad Autoscaler

`````````
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "autoscaling:CreateOrUpdateTags",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "<<Your Autoscaling Group ARN>>"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeScalingActivities",
                "autoscaling:DescribeAutoScalingGroups"
            ],
            "Resource": "*"
        }
    ]
}
`````````
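Before attaching this policy, you can sanity-check that your policy document parses as valid JSON. A small sketch using an inline fragment (point `json.tool` at your saved policy file instead):

```shell
# Validate a JSON policy document with Python's stdlib json.tool.
echo '{"Version": "2012-10-17", "Statement": []}' | python3 -m json.tool > /dev/null \
  && echo "valid JSON"
```

`aws iam create-policy` rejects malformed JSON anyway, but checking locally gives a faster failure.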

#### GCP autoscaler service account

Create a service account for Nomad Autoscaler. You may take **one** of the following approaches:

**Option 1:** CircleCI creates the Kubernetes Secret.

The CircleCI [Nomad module](https://github.com/CircleCI-Public/server-terraform/tree/main/nomad-gcp) can create a service account and output a file with the JSON key. For this option, set the variable `nomad_auto_scaler = true`. You may reference the examples in the link for more details. The created service account key will be available in a file named `nomad-as-key.json`.

**Option 2:** Use Workload Identity.

The CircleCI [Nomad module](https://github.com/CircleCI-Public/server-terraform/tree/main/nomad-gcp) can create a service account using [Workload Identity](https://circleci.com/docs/server-admin/server-4.4/installation/phase-1-prerequisites/#enable-workload-identity-in-gke) and output the service account email address. Set the variables `nomad_auto_scaler = true` and `enable_workload_identity = true`.

**Option 3:** Create the Kubernetes Secret yourself.

When creating the Kubernetes Secret manually, an additional field is required, as outlined below.

`````````
# Base64 encoded additional configuration field
ADDITIONAL_CONFIG=dGFyZ2V0ICJnY2UtbWlnIiB7CiAgZHJpdmVyID0gImdjZS1taWciCiAgY29uZmlnID0gewogICAgY3JlZGVudGlhbHMgPSAiL2V0Yy9ub21hZC1hdXRvc2NhbGVyL2NyZWRzL2djcF9zYS5qc29uIgogIH0KfQo=
kubectl create secret generic nomad-autoscaler-secret \
  --from-literal=gcp_sa.json=<service-account> \
  --from-literal=secret.hcl=$ADDITIONAL_CONFIG
`````````
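The `ADDITIONAL_CONFIG` value above is simply a base64-encoded Nomad Autoscaler target block. You can confirm what you are applying by decoding it:

```shell
# Decode the additional configuration to inspect the gce-mig target block.
echo "dGFyZ2V0ICJnY2UtbWlnIiB7CiAgZHJpdmVyID0gImdjZS1taWciCiAgY29uZmlnID0gewogICAgY3JlZGVudGlhbHMgPSAiL2V0Yy9ub21hZC1hdXRvc2NhbGVyL2NyZWRzL2djcF9zYS5qc29uIgogIH0KfQo=" | base64 -d
```

It decodes to a `gce-mig` target whose `credentials` field points at the mounted `gcp_sa.json` key, which is why both `--from-literal` entries are required.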

When creating a Nomad GCP service account manually, the service account will need the role `compute.admin`. It will also need the role `iam.workloadIdentityUser` if using Workload Identity. This step is only required if you choose not to create the service account using Terraform.

## 2\. Nomad servers

Now that you have successfully deployed your Nomad clients and have the permission resources, you can configure the Nomad Servers.

### a. Nomad gossip encryption key

Nomad requires a key to encrypt communications. This key must be exactly 32 bytes long. CircleCI cannot recover the key if it is lost. Depending on how you prefer to manage Kubernetes Secrets, there are two options:
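A quick way to generate a key of the right shape (a sketch; any sufficiently random 32-character string matching the `<secret-key-32-chars>` placeholder works):

```shell
# Generate a 32-character gossip key: 24 random bytes base64-encode
# to exactly 32 characters with no padding.
GOSSIP_KEY=$(head -c 24 /dev/urandom | base64)
echo "${#GOSSIP_KEY}"   # -> 32
```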

**Option 1:** Create the Kubernetes Secret yourself.

`````````
kubectl -n <namespace> create secret generic nomad-gossip-encryption-key \
--from-literal=gossip-key=<secret-key-32-chars>

kubectl -n <namespace> annotate secret/nomad-gossip-encryption-key \
  meta.helm.sh/release-name=<helm-release-name> \
  meta.helm.sh/release-namespace=<namespace> \
  helm.sh/resource-policy=keep --overwrite

kubectl -n <namespace> label secret/nomad-gossip-encryption-key \
  app.kubernetes.io/managed-by=Helm --overwrite
`````````

Once the Kubernetes Secret exists, no change to `values.yaml` is required. The Kubernetes Secret will be referenced by default.

**Option 2:** CircleCI creates the Kubernetes Secret.

CircleCI will create the Kubernetes Secret `nomad-gossip-encryption-key` automatically.

### b. Nomad mTLS

The `CACertificate`, `certificate`, and `privateKey` values can be found in the output of the Terraform module. You may either add the mTLS certificate values to your `values.yaml`, or create a secret named `nomad-mtls` in the same namespace as your CircleCI server installation. The values must be base64 encoded.
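For example, to produce a single-line base64 value from a PEM file (the file name here is a stand-in for wherever you saved the Terraform output; on macOS omit `-w0`, since its `base64` does not wrap by default):

```shell
# Base64-encode a PEM file onto one line (GNU coreutils; -w0 disables wrapping).
printf 'example' > /tmp/nomad-ca.pem   # stand-in for the real CA certificate
CA_B64=$(base64 -w0 < /tmp/nomad-ca.pem)
echo "$CA_B64"   # -> ZXhhbXBsZQ==
```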

**Option 1:** Generate a secret with the mTLS certificate values.

`````````
kubectl -n <namespace> create secret generic nomad-mtls \
  --from-literal=ca.pem="<base64-encoded-ca-certificate>" \
  --from-literal=key.pem="<base64-encoded-private-key>" \
  --from-literal=cert.pem="<base64-encoded-certificate>"

kubectl -n <namespace> annotate secret/nomad-mtls \
  meta.helm.sh/release-name=<helm-release-name> \
  meta.helm.sh/release-namespace=<namespace> \
  helm.sh/resource-policy=keep --overwrite

kubectl -n <namespace> label secret/nomad-mtls \
  app.kubernetes.io/managed-by=Helm --overwrite
`````````

**Option 2:** Add the base64 encoded values to your `values.yaml`.

`````````
nomad:
  server:
    ...
    rpc:
      mTLS:
        enabled: true
        certificate: "<base64-encoded-certificate>"
        privateKey: "<base64-encoded-private-key>"
        CACertificate: "<base64-encoded-ca-certificate>"
`````````

### c. Nomad Autoscaler

If you have enabled Nomad Autoscaler, also include the following section under `nomad`:

#### AWS

You created these values in the [AWS autoscaler IAM/role section](#aws-autoscaler-iamrole).

`````````
nomad:
  ...
  auto_scaler:
    enabled: true
    scaling:
      max: <max-node-limit>
      min: <min-node-limit>

    aws:
      enabled: true
      region: "<region>"
      autoScalingGroup: "<asg-name>"

      accessKey: "<access-key>"
      secretKey: "<secret-key>"
      # or
      irsaRole: "<role-arn>"
`````````

#### GCP

You created these values in the [GCP autoscaler service account section](#gcp-autoscaler-service-account).

`````````
nomad:
  ...
  auto_scaler:
    enabled: true
    scaling:
      max: <max-node-limit>
      min: <min-node-limit>

    gcp:
      enabled: true
      project_id: "<project-id>"
      mig_name: "<instance-group-name>"

      region: "<region>"
      # or
      zone: "<zone>"

      workloadIdentity: "<service-account-email>"
      # or
      service_account: "<service-account-json>"
`````````

### d. Helm upgrade

Apply the changes made to your `values.yaml` file:

`````````
namespace=<your-namespace>
helm upgrade circleci-server oci://cciserver.azurecr.io/circleci-server -n $namespace --version 4.4.7 -f <path-to-values.yaml>
`````````

### e. Nomad clients validation

CircleCI provides a project called [Reality Check](https://github.com/circleci/realitycheck) that allows you to test your server installation. Follow the project to verify that the system is working as expected. As you continue through the next phase, sections of Reality Check will move from red (fail) to green (pass).

Before running Reality Check, verify that the Nomad servers can communicate with the Nomad clients by running the following command:

`````````
kubectl -n <namespace> exec -it $(kubectl -n <namespace> get pods -l app=nomad-server -o name | tail -1) -- nomad node status
`````````

You should be able to see output like this:

`````````
ID        DC       Name              Class        Drain  Eligibility  Status
132ed55b  default  ip-192-168-44-29  linux-64bit  false  eligible     ready
`````````
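If you want to script this check, the status output can be parsed directly. A sketch using the sample output above (pipe the real `nomad node status` output instead):

```shell
# Fail if any client row reports a status other than "ready".
status='ID        DC       Name              Class        Drain  Eligibility  Status
132ed55b  default  ip-192-168-44-29  linux-64bit  false  eligible     ready'
echo "$status" | tail -n +2 | awk '$NF != "ready" {bad=1} END {exit bad}' \
  && echo "all clients ready"
```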

To run Reality Check, you need to clone the repository. Depending on your GitHub setup, you can use one of the following commands:

#### GitHub cloud

`````````
git clone https://github.com/circleci/realitycheck.git
`````````

#### GitHub Enterprise

`````````
git clone https://github.com/circleci/realitycheck.git
git remote set-url origin <YOUR_GH_REPO_URL>
git push
`````````

Once you have successfully cloned the repository, you can follow it from within your CircleCI server installation. You need to set the following variables. For full instructions see the [repository README](https://github.com/circleci/realitycheck#prerequisites-1).

Table 1. Environmental Variables

| Name | Value |
| --- | --- |
| `CIRCLE_HOSTNAME` | `<YOUR_CIRCLECI_INSTALLATION_URL>` |
| `CIRCLE_TOKEN` | `<YOUR_CIRCLECI_API_TOKEN>` |
| `CIRCLE_CLOUD_PROVIDER` | `aws`, `gcp`, or `other` |

Table 2. Contexts

| Name | Environmental Variable Key | Environmental Variable Value |
| --- | --- | --- |
| `org-global` | `CONTEXT_END_TO_END_TEST_VAR` | Leave blank |
| `individual-local` | `MULTI_CONTEXT_END_TO_END_VAR` | Leave blank |

Once you have configured the environmental variables and contexts, rerun the Reality Check tests. You should see the features and resource jobs complete successfully.

## 3\. Machine provisioner

Overriding scaling options is currently not supported, but will be supported in the future.

Machine provisioner is used to configure virtual machines for jobs that run in Linux VM, Windows and Arm VM execution environments, and those that are configured to use [remote Docker](https://circleci.com/docs/reference/configuration-reference/#setupremotedocker). Machine provisioner is unique to AWS and GCP installations because it relies on specific features of these cloud providers.

Once you have completed the server installation process you can further configure machine provisioner, including building and specifying a Windows image to give developers access to the Windows execution environment, specifying an alternative Linux machine image, and specifying a number of preallocated instances to remain spun up at all times. For more information, see the [Manage Virtual Machines with machine provisioner](https://circleci.com/docs/server-admin/server-4.4/operator/manage-virtual-machines-with-machine-provisioner/) page.

Before moving on to platform-specific steps, create your firewall rules. External VMs need the networking rules described in [Hardening your Cluster](https://circleci.com/docs/server-admin/server-4.4/installation/hardening-your-cluster/#external-vms).

### AWS

#### Set up security group

1.  **Get the information needed to create security groups**
    
    The following command returns your VPC ID (`vpcId`) and CIDR Block (`serviceIpv4Cidr`) which you need throughout this section:
    
    `````````
    # Fetch VPC Id
    aws eks describe-cluster --name=<cluster-name> --query "cluster.resourcesVpcConfig.vpcId" --region=<region> --output text | xargs
    
    # Fetch CIDR Block
    aws eks describe-cluster --name=<cluster-name> --query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" --region=<region> --output text | xargs
    `````````
    
2.  **Create a security group**
    
    Run the following commands to create a security group for machine provisioner:
    
    `````````
    aws ec2 create-security-group --vpc-id "<VPC_ID>" --description "CircleCI machine provisioner security group" --group-name "circleci-machine-provisioner-sg"
    `````````
    
    This outputs a GroupID to be used in the next steps:
    
    `````````
    {
        "GroupId": "<VM_SECURITY_GROUP_ID>"
    }
    `````````
    
3.  **Apply the security group for SSH (If using public IP addresses for machines)**
    
    If using public IP addresses for your machine provisioner instances, run the following command to apply the security group rules so users can SSH into their jobs:
    
    `````````
    aws ec2 authorize-security-group-ingress --group-id "<VM_SECURITY_GROUP_ID>" --protocol tcp --port 54782 --cidr "0.0.0.0/0"
    `````````
    

#### Set up authentication

Authenticate CircleCI with your cloud provider in one of two ways:

*   IAM Roles for Service Accounts (IRSA) - recommended.
    
*   IAM access keys.
    

**IRSA:**

The following is a summary of [AWS’s Documentation on IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) that is sufficient for configuring machine provisioner in CircleCI.

1.  **Create Identity Provider**
    
    Create an IAM OIDC Identity Provider for your EKS Cluster:
    
    `````````
    eksctl utils associate-iam-oidc-provider --cluster <CLUSTER_NAME> --approve
    `````````
    
2.  **Get ARN**
    
    Get the OIDC provider ARN with the following command; you will need it in later steps:
    
    `````````
    aws iam list-open-id-connect-providers | grep $(aws eks describe-cluster --name <CLUSTER_NAME> --query "cluster.identity.oidc.issuer" --output text | awk -F'/' '{print $NF}')
    `````````
    
3.  **Get URL**
    
    Get your OIDC provider URL; you will need it in later steps:
    
    `````````
    aws eks describe-cluster --name <CLUSTER_NAME> --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///"
    `````````
    
4.  **Create role**
    
    Create the role using the command and trust policy template below; you will need the role ARN and name in later steps:
    
    `````````
    aws iam create-role --role-name circleci-vm --assume-role-policy-document file://<TRUST_POLICY_FILE>
    `````````
    
    `````````
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "<OIDC_PROVIDER_ARN>"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "<OIDC_PROVIDER_URL>:sub": "system:serviceaccount:<K8S_NAMESPACE>:machine-provisioner"
            }
          }
        }
    
      ]
    }
    `````````
    
5.  **Create policy**
    
    Create the policy using the command and template below. Fill in the security group ID and the VPC ID:
    
    `````````
    aws iam create-policy --policy-name circleci-vm --policy-document file://<POLICY_FILE>
    `````````
    
    `````````
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "ec2:RunInstances",
          "Effect": "Allow",
          "Resource": [
            "arn:aws:ec2:*::image/*",
            "arn:aws:ec2:*::snapshot/*",
            "arn:aws:ec2:*:*:key-pair/*",
            "arn:aws:ec2:*:*:launch-template/*",
            "arn:aws:ec2:*:*:network-interface/*",
            "arn:aws:ec2:*:*:placement-group/*",
            "arn:aws:ec2:*:*:subnet/*",
            "arn:aws:ec2:*:*:security-group/<SECURITY_GROUP_ID>",
            "arn:aws:ec2:*:*:volume/*"
          ]
        },
        {
          "Action": "ec2:RunInstances",
          "Effect": "Allow",
          "Resource": "arn:aws:ec2:*:*:instance/*",
          "Condition": {
            "StringEquals": {
              "aws:RequestTag/ManagedBy": "circleci-machine-provisioner"
            }
          }
        },
        {
          "Action": [
            "ec2:Describe*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "ec2:CreateTags"
          ],
          "Resource": "arn:aws:ec2:*:*:*/*",
          "Condition": {
            "StringEquals": {
              "ec2:CreateAction" : "RunInstances"
            }
          }
        },
        {
          "Action": [
            "ec2:CreateTags",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:ec2:*:*:*/*",
          "Condition": {
            "StringEquals": {
              "ec2:ResourceTag/ManagedBy": "circleci-machine-provisioner"
            }
          }
        },
        {
          "Action": [
            "ec2:RunInstances",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:ec2:*:*:subnet/*",
          "Condition": {
            "StringEquals": {
              "ec2:Vpc": "<VPC_ID>"
            }
          }
        }
      ]
    }
    `````````
    
6.  **Attach policy**
    
    Attach the policy to the role:
    
    `````````
    aws iam attach-role-policy --role-name <VM_ROLE_NAME> --policy-arn=<VM_POLICY_ARN>
    `````````
    
7.  **Configure machine provisioner**
    
    Configure machine provisioner by adding the following to `values.yaml`:
    
    `````````
    machine_provisioner:
      providers:
        ec2:
          enabled: true
          region: "<REGION>"
          assignPublicIP: true
          irsaRole: "<IRSA_ROLE_ARN>"
          subnets:
            - "<SUBNET_ID>"
          securityGroupId: "<SECURITY_GROUP_ID>"
    `````````
    

**IAM Access Keys:**

1.  **Create user**
    
    Create a new user with programmatic access:
    
    `````````
    aws iam create-user --user-name circleci-machine-provisioner
    `````````
    
    Optionally, machine provisioner supports the use of a [service account role](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) in place of AWS keys. If you would prefer to use a role, follow these [instructions](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) using the policy in step 2 below. Once done, you may skip to step 5 to configure machine provisioner.
    
2.  **Create policy**
    
    Create a `policy.json` file with the following content. Fill in the ID of the machine provisioner security group you created earlier (`<SECURITY_GROUP_ID>`) and your VPC ID (`<VPC_ID>`).
    
    `````````
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "ec2:RunInstances",
          "Effect": "Allow",
          "Resource": [
            "arn:aws:ec2:*::image/*",
            "arn:aws:ec2:*::snapshot/*",
            "arn:aws:ec2:*:*:key-pair/*",
            "arn:aws:ec2:*:*:launch-template/*",
            "arn:aws:ec2:*:*:network-interface/*",
            "arn:aws:ec2:*:*:placement-group/*",
            "arn:aws:ec2:*:*:subnet/*",
            "arn:aws:ec2:*:*:security-group/<SECURITY_GROUP_ID>",
            "arn:aws:ec2:*:*:volume/*"
          ]
        },
        {
          "Action": "ec2:RunInstances",
          "Effect": "Allow",
          "Resource": "arn:aws:ec2:*:*:instance/*",
          "Condition": {
            "StringEquals": {
              "aws:RequestTag/ManagedBy": "circleci-machine-provisioner"
            }
          }
        },
        {
          "Action": [
            "ec2:Describe*"
          ],
          "Effect": "Allow",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "ec2:CreateTags"
          ],
          "Resource": "arn:aws:ec2:*:*:*/*",
          "Condition": {
            "StringEquals": {
              "ec2:CreateAction" : "RunInstances"
            }
          }
        },
        {
          "Action": [
            "ec2:CreateTags",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:ec2:*:*:*/*",
          "Condition": {
            "StringEquals": {
              "ec2:ResourceTag/ManagedBy": "circleci-machine-provisioner"
            }
          }
        },
        {
          "Action": [
            "ec2:RunInstances",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:ec2:*:*:subnet/*",
          "Condition": {
            "StringEquals": {
              "ec2:Vpc": "<VPC_ID>"
            }
          }
        }
      ]
    }
    `````````
    
3.  **Attach policy to user**
    
    Once you have created the `policy.json` file, create an IAM policy from it and attach it to the user:
    
    `````````
    aws iam put-user-policy --user-name circleci-machine-provisioner --policy-name circleci-machine-provisioner --policy-document file://policy.json
    `````````
    
4.  **Create an access key and secret for the user**
    
    If you have not already created them, you will need an access key and secret for the `circleci-machine-provisioner` user. You can create those by running the following command:
    
    `````````
    aws iam create-access-key --user-name circleci-machine-provisioner
    `````````
    
5.  **Configure server (there are two options)**
    
    *   **Option 1 - Add the keys to `values.yaml`** Add the machine provisioner configuration to `values.yaml`.
        
        `````````
        machine_provisioner:
          providers:
            ec2:
              enabled: true
              region: "<REGION>"
              assignPublicIP: true
              accessKey: "<ACCESS-KEY>"
              secretKey: "<SECRET-KEY>"
              subnets:
                - "<SUBNET_ID>"
              securityGroupId: "<SECURITY_GROUP_ID>"
        `````````
        
    *   **Option 2 - Create the Kubernetes Secret yourself** Instead of providing the access key and secret in your `values.yaml` file, you may create the Kubernetes Secret yourself.
        
        `````````
        kubectl create secret generic machine-provisioner-secret \
          --from-literal=accessKey=<access-key> \
          --from-literal=secretKey=<secret-key>
        `````````
        
    

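As a convenience, the `<SECURITY_GROUP_ID>` and `<VPC_ID>` placeholders in the policy template above can be filled in with `sed` instead of hand-editing. A sketch on a single fragment, with an example group ID:

```shell
# Substitute a placeholder security group ID into a policy fragment.
echo '"Resource": "arn:aws:ec2:*:*:security-group/<SECURITY_GROUP_ID>"' \
  | sed 's|<SECURITY_GROUP_ID>|sg-0123456789abcdef0|'
```

The same pattern applied to the full template file produces a ready-to-use `policy.json`.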
### GCP

You need additional information about your cluster to complete the next section. Run the following command:

`````````
gcloud container clusters describe <cluster-name> --region <region>
`````````

This command returns something like the following, which includes network, region and other details that you need to complete the next section:

`````````
addonsConfig:
  gcePersistentDiskCsiDriverConfig:
    enabled: true
  kubernetesDashboard:
    disabled: true
  networkPolicyConfig:
    disabled: true
clusterIpv4Cidr: 10.100.0.0/14
createTime: '2021-08-20T21:46:18+00:00'
currentMasterVersion: 1.20.8-gke.900
currentNodeCount: 3
currentNodeVersion: 1.20.8-gke.900
databaseEncryption:
…
`````````

1.  **Create user**
    
    We recommend you create a unique service account used exclusively by machine provisioner. The Compute Instance Admin (beta) role is broad enough to allow machine provisioner to operate. If you wish to make permissions more granular, you can use the [Compute Instance Admin (beta) role documentation](https://cloud.google.com/compute/docs/access/iam#compute.instanceAdmin) as a reference.
    
    `````````
    gcloud iam service-accounts create circleci-server-vm --display-name "circleci-server-vm service account"
    `````````
    
    If you are deploying CircleCI server in a shared VPC, you should create this user in the project in which you intend to run your VM jobs.
    
2.  **Get the service account email address**
    
    `````````
    gcloud iam service-accounts list --filter="displayName:circleci-server-vm service account" --format 'value(email)'
    `````````
    
3.  **Apply role to service account**
    
    Apply the Compute Instance Admin (beta) role to the service account:
    
    `````````
    gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member serviceAccount:<YOUR_SERVICE_ACCOUNT_EMAIL> --role roles/compute.instanceAdmin --condition=None
    `````````
    
    And:
    
    `````````
    gcloud projects add-iam-policy-binding <YOUR_PROJECT_ID> --member serviceAccount:<YOUR_SERVICE_ACCOUNT_EMAIL> --role roles/iam.serviceAccountUser --condition=None
    `````````
    
4.  **Enable Workload Identity for Service Account or get JSON key file**
    
    Choose one of the following options, depending on whether you are using Workload Identity.
    
    **Enable Workload Identity for Service Account:**
    
    This step is required only if you are using [Workload Identities](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) for GKE. Steps to enable Workload Identities are provided in [Phase 1 - Prerequisites](https://circleci.com/docs/server-admin/server-4.4/installation/phase-1-prerequisites/#enable-workload-identity-in-gke).
    
    `````````
    gcloud iam service-accounts add-iam-policy-binding <YOUR_SERVICE_ACCOUNT_EMAIL> \
        --role roles/iam.workloadIdentityUser \
        --member "serviceAccount:<GCP_PROJECT_ID>.svc.id.goog[circleci-server/machine-provisioner]"
    `````````
    
    **Get Service Account JSON key file:**
    
    If you are using [Workload Identities](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity) for GKE, this step is not required.
    
    After running the following command, you should have a file named `circleci-server-vm-keyfile` in your local working directory. You will need this when you configure your server installation.
    
    `````````
    gcloud iam service-accounts keys create circleci-server-vm-keyfile --iam-account <YOUR_SERVICE_ACCOUNT_EMAIL>
    `````````
    
5.  **Configure CircleCI server**
    
    When using service account keys to configure access for the machine provisioner, there are two options.
    
    **CircleCI creates Secret:**
    
    **Option 1:** CircleCI creates the Kubernetes Secret.
    
    Add the machine provisioner configuration to `values.yaml`.
    
    `````````
    machine_provisioner:
      enabled: true
      replicas: 1
      providers:
        gcp:
          enabled: true
          project_id: <project-id>
          network_tags:
            - circleci-vm
            - <your-network>
          zones:
            - <zone1>
            - <zone2>
          network: <network>
          subnetwork: <subnetwork>
    
          service_account: <service-account-json>
          # OR
          workloadIdentity: ""  # leave blank if using a service account JSON key; otherwise set to the service account email address
    `````````
    
    **You create Secret:**
    
    **Option 2:** Create the Kubernetes Secret yourself.
    
    Instead of providing the service account in your `values.yaml` file, you may create the Kubernetes Secret yourself.
    
    `````````
    kubectl create secret generic machine-provisioner-secret \
      --from-literal=gcp_sa.json=<access-key>
    `````````
    

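Taken together, the steps above can be sketched as a single script. This is a sketch only: it mirrors the commands above, the project ID is a placeholder, and you would run either the Workload Identity binding or the key file creation, not both.

```shell
#!/usr/bin/env bash
set -euo pipefail

PROJECT_ID=<YOUR_PROJECT_ID>   # replace with your GCP project ID

# 1. Create the service account used exclusively by machine provisioner
gcloud iam service-accounts create circleci-server-vm \
    --display-name "circleci-server-vm service account"

# 2. Look up its email address
SA_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:circleci-server-vm service account" \
    --format 'value(email)')

# 3. Grant the Compute Instance Admin (beta) and Service Account User roles
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" \
    --role roles/compute.instanceAdmin --condition=None
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member "serviceAccount:${SA_EMAIL}" \
    --role roles/iam.serviceAccountUser --condition=None

# 4. EITHER bind Workload Identity (uncomment if using GKE Workload Identity) ...
# gcloud iam service-accounts add-iam-policy-binding "$SA_EMAIL" \
#     --role roles/iam.workloadIdentityUser \
#     --member "serviceAccount:${PROJECT_ID}.svc.id.goog[circleci-server/machine-provisioner]"

# ... OR create a JSON key file for use in your server configuration
gcloud iam service-accounts keys create circleci-server-vm-keyfile \
    --iam-account "$SA_EMAIL"
```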
### Machine provisioner validation

Apply the changes made to your `values.yaml` file:

`````````
namespace=<your-namespace>
helm upgrade circleci-server oci://cciserver.azurecr.io/circleci-server -n $namespace --version 4.4.7 -f <path-to-values.yaml>
`````````
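Before rerunning any projects, you can confirm the upgrade rolled out cleanly. The workload name `machine-provisioner` below is an assumption; adjust it to match the deployments in your installation (`kubectl get deployments -n $namespace`):

```shell
# Wait for the rollout to finish, then check pod status and recent logs.
kubectl rollout status deployment/machine-provisioner -n "$namespace"
kubectl get pods -n "$namespace" -l app=machine-provisioner
kubectl logs -n "$namespace" deployment/machine-provisioner --tail=50
```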

Once you have configured and deployed CircleCI server, validate that machine provisioner is operational. Rerun the Reality Check project within your CircleCI installation; the machine provisioner jobs should complete, and at this point all tests should pass.

## 4\. Runner

### Overview

CircleCI runner does not require any additional server configuration. CircleCI server ships ready to work with runner. However, you need to create a runner and configure the runner agent to be aware of your server installation. For complete instructions for setting up runner, see the [runner documentation](https://circleci.com/docs/guides/execution-runner/runner-overview/).

Runner requires one namespace per organization. A CircleCI server installation can host many organizations, so if your company has multiple organizations within your CircleCI installation, you need to set up a runner namespace for each of them.
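As a sketch, a runner namespace and resource class can be created with the CircleCI CLI pointed at your server installation. The host URL, namespace, and resource-class names below are placeholders, and the exact flags may vary by CLI version; see the runner documentation linked above for the authoritative steps:

```shell
# Create a resource class (and, implicitly, its namespace) for one organization.
# --host points the CLI at your server installation rather than circleci.com;
# --generate-token also creates the runner token the agent will use.
circleci runner resource-class create \
    <your-namespace>/<resource-class-name> \
    "Self-hosted runner for <your-org>" \
    --host https://<your-server-domain> \
    --generate-token
```

Repeat this for each organization in your installation, using a distinct namespace per organization.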

## Next steps

*   [Phase 4: Post Installation](https://circleci.com/docs/server-admin/server-4.4/installation/phase-4-post-installation/)