CircleCI Server v3.x Installation Phase 2

Before you begin with the CircleCI server v3.x core services installation phase, ensure all prerequisites are met.

Flow chart showing the installation flow for server 3.x with phase 2 highlighted
Figure 1. Installation Experience Flow Chart Phase 2

Phase 2: Core services installation

  1. Ensure you are running the minimum KOTS version (1.65.0) by running the command:

    kubectl kots version
  2. From the terminal, run (if you are installing behind a proxy see Installing behind HTTP Proxy):

    kubectl kots install circleci-server

    You will be prompted for:

    • namespace for the deployment

    • password for the KOTS Admin Console

  3. When complete, you should be provided with a URL to access the KOTS admin console, usually http://localhost:8800.
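
    For scripted installs, the interactive prompts can be supplied as flags instead. A minimal sketch; the namespace and password values are placeholders you must replace:

    ```shell
    # Non-interactive variant of the install command above.
    # <YOUR_NAMESPACE> and <YOUR_KOTS_ADMIN_PASSWORD> are placeholders.
    kubectl kots install circleci-server \
      --namespace <YOUR_NAMESPACE> \
      --shared-password <YOUR_KOTS_ADMIN_PASSWORD>
    ```

    The `--shared-password` value becomes the KOTS admin console password you would otherwise be prompted for.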

Installing behind an HTTP Proxy (optional)

If you wish to install CircleCI server behind a proxy, use the following command structure for step 2 above (for more information, see the KOTS docs):

kubectl kots install circleci-server --http-proxy <YOUR_HTTP_PROXY_URI> --https-proxy <YOUR_HTTPS_PROXY_URI> --no-proxy <YOUR_NO_PROXY_LIST>

The load balancer endpoints must be added to the no-proxy list for the following services: output processor and vm-service. This is because the no-proxy list is shared between the application and build-agent. The application and build-agent are assumed to be behind the same firewall and therefore cannot have a proxy between them.

For further information see the Configuring a Proxy guide.
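
A filled-in sketch of the proxied install. All values here are hypothetical; substitute your own proxy URI and the real load balancer endpoints for output processor and vm-service:

```shell
# Hypothetical values: the no-proxy list includes localhost, cluster-internal
# ranges, and the two load balancer endpoints mentioned above.
kubectl kots install circleci-server \
  --http-proxy  http://proxy.example.internal:3128 \
  --https-proxy http://proxy.example.internal:3128 \
  --no-proxy    "localhost,127.0.0.1,10.0.0.0/8,<OUTPUT_PROCESSOR_LB>,<VM_SERVICE_LB>"
```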

Frontend Settings

Frontend settings control the web-application-specific aspects of the CircleCI system.

Screenshot showing frontend settings
Figure 2. Frontend Settings

Complete the fields described below.

  • CircleCI Domain Name (required) - Enter the domain name you specified when creating your Frontend TLS key and certificate.

  • Frontend Replicas - Used to increase the amount of traffic that can be handled by the frontend.

  • Frontend TLS Private Key (required) - You created this during your prerequisite steps. You can retrieve this value with the following command:

    cat /etc/letsencrypt/live/<CIRCLECI_SERVER_DOMAIN>/privkey.pem
  • Frontend TLS Certificate (required) - You created this during your prerequisite steps. You can retrieve this value with the following command:

    cat /etc/letsencrypt/live/<CIRCLECI_SERVER_DOMAIN>/fullchain.pem
  • Private Load Balancer (optional) - Select this option to make the load balancer private, so it does not generate external IP addresses.
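
Before pasting the key and certificate, it can be worth confirming they actually belong together. A sketch of a helper that compares the public key derived from each (the Let's Encrypt paths shown in the commented usage are the ones from the steps above):

```shell
# Sketch: confirm a TLS private key and certificate are a matching pair
# before entering them in the KOTS admin console.
check_tls_pair() {
  local key_digest cert_digest
  # Public key extracted from the private key.
  key_digest=$(openssl pkey -in "$1" -pubout -outform pem | sha256sum)
  # Public key embedded in the (first) certificate.
  cert_digest=$(openssl x509 -in "$2" -pubkey -noout | sha256sum)
  if [ "$key_digest" = "$cert_digest" ]; then echo MATCH; else echo MISMATCH; fi
}

# Usage with the Let's Encrypt paths from the steps above:
# check_tls_pair /etc/letsencrypt/live/<CIRCLECI_SERVER_DOMAIN>/privkey.pem \
#                /etc/letsencrypt/live/<CIRCLECI_SERVER_DOMAIN>/fullchain.pem
```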

For the Frontend TLS private key and certificate you have four options:

  • Supply a private key and certificate yourself.

  • Check the box that allows Let’s Encrypt to automatically request and manage certificates for you.

  • Check the box that allows AWS Certificate Manager (ACM) to automatically request and manage certificates for you. For more information about using ACM, see the Using ACM TLS certificates section below.

  • Disable TLS termination at this point; the system will still need to be accessed over HTTPS.

Using ACM TLS certificates

If you would like to use AWS Certificate Manager (ACM) to manage your TLS certificates, follow the ACM documentation for instructions on how to generate ACM certificates.

Once you have generated your certificates, enable ACM from the KOTS admin console under the Frontend section. Check the ACM box and provide your ACM ARN (Amazon Resource Name).


Encryption

Encryption and artifact signing keys were created during the prerequisites phase. Enter them here now.

Screenshot showing encryption settings
Figure 3. Encryption Settings

Complete the following fields:

  • Artifact Signing Key (required)

  • Encryption Signing Key (required)


GitHub

You created your GitHub OAuth application in the prerequisite phase. Use that data to complete the following settings:

Screenshot showing GitHub settings
Figure 4. GitHub Settings
  • GitHub Type (required) - Select Cloud or Enterprise (on premises).

  • OAuth Client ID (required) - The OAuth Client ID provided by GitHub.

  • OAuth Client Secret (required) - The OAuth Client Secret provided by GitHub.

  • GitHub Enterprise Fingerprint - Required when using a proxy. Include the output of ssh-keyscan in the text field.
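
The fingerprint field above takes the raw output of ssh-keyscan against your GitHub Enterprise host. A sketch, where the hostname is a placeholder for your on-prem instance:

```shell
# Capture the SSH host key of your GitHub Enterprise instance; paste the
# output into the GitHub Enterprise Fingerprint field.
# <YOUR_GHE_HOSTNAME> is a placeholder.
ssh-keyscan -t rsa <YOUR_GHE_HOSTNAME>
```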

Object storage

You created your Object Storage Bucket and Keys in the prerequisite steps. Use the data to complete the following settings depending on your platform.

Screenshot showing object storage settings
Figure 5. Object Storage Settings

S3 compatible

  • Storage Bucket Name (required) - The bucket used for server.

  • AWS S3 Region (optional) - AWS region of bucket if your provider is AWS. S3 Endpoint is ignored if this option is set.

  • S3 Endpoint (optional) - API endpoint of S3 storage provider. Required if your provider is not AWS. AWS S3 Region is ignored if this option is set.

  • Storage Object Expiry (required) - Number of days to retain your test results and artifacts. Set to 0 to disable and retain objects indefinitely.


One of the following is required. Either select IAM keys and provide:

  • Access Key ID (required) - Access Key ID for S3 bucket access.

  • Secret Key (required) - Secret Key for S3 bucket access.

  • policy.json (required) - As outlined in the server AWS Installation Guide, on page 11.

Or select IAM role and provide:

Google Cloud Storage

You should have created your Google Cloud Storage bucket and service account during the prerequisite steps.

  • Storage Bucket Name (required) - The bucket used for server.

  • Storage Object Expiry (required) - Number of days to retain your test results and artifacts. Set to 0 to disable and retain objects indefinitely.

  • You can choose one of the following:

    • Service Account JSON (required) - A JSON format key of the Service Account to use for bucket access.

    • Service Account Email (required) - Service Account Email id if using Google Workload Identity.
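
If you still need to produce the Service Account JSON key for the first option, a sketch using the gcloud CLI. The service account name and project ID are placeholders; use the service account you created during the prerequisite steps:

```shell
# Hypothetical names: download a JSON key for the bucket-access service
# account, then paste the file contents into the "Service Account JSON" field.
gcloud iam service-accounts keys create circleci-server-key.json \
  --iam-account=<YOUR_SERVICE_ACCOUNT>@<YOUR_PROJECT_ID>.iam.gserviceaccount.com
```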

Skip over the next few sections - Output Processor, Nomad and VM Service. We will set these up in the next phase of the installation.

Postgres, MongoDB, Vault settings

You can skip these sections unless you plan on using an existing Postgres, MongoDB or Vault instance, in which case, see the Externalizing Services doc. By default, CircleCI server v3.x will create its own Postgres, MongoDB and Vault instances within the CircleCI namespace. The instances inside the CircleCI namespace will be included in the CircleCI backup and restore process.

Save and deploy

Once you have completed the fields detailed above, you can deploy. The deployment installs the core services and provides you with an IP address for the Kong load balancer. That IP address is critical for setting up a DNS record and completing this phase of the installation.

Create DNS entry

Create a DNS entry for your Kong load balancer. The DNS entry should align with the DNS names used when creating your TLS certificate and GitHub OAuth app during the prerequisite steps. All traffic will be routed through this DNS record.

You need the IP address or, if using AWS, the DNS name of the Kong load balancer. You can find this information with the following command:

kubectl get service circleci-server-traefik --namespace=<YOUR_CIRCLECI_NAMESPACE>
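
To pull out just the address for your DNS record, a sketch using jsonpath (on AWS the `hostname` field is populated; on other providers it is the `ip` field):

```shell
# Print only the load balancer address from the service status.
# <YOUR_CIRCLECI_NAMESPACE> is the namespace chosen during installation.
kubectl get service circleci-server-traefik \
  --namespace=<YOUR_CIRCLECI_NAMESPACE> \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'
```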

For more information on adding a new DNS record, see your DNS provider's documentation.


You should now be able to navigate to your CircleCI server installation and log in to the application successfully.

Now we will move on to build services. It may take a while for all your services to be up. You can periodically check by running the following command (you are looking for the frontend pod to show a status of Running, with Ready showing 1/1):

kubectl get pods -n <YOUR_CIRCLECI_NAMESPACE>
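
Instead of polling manually, you can block until the frontend reports ready. A sketch, assuming the deployment is named `frontend` (adjust if your release names differ):

```shell
# Wait up to 10 minutes for the frontend deployment to become available.
# <YOUR_CIRCLECI_NAMESPACE> is the namespace chosen during installation.
kubectl wait --namespace <YOUR_CIRCLECI_NAMESPACE> \
  --for=condition=available deployment/frontend --timeout=600s
```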
