
Phase 4 - Post installation

Server v4.x
Server Admin

Before you begin with the CircleCI server v4.x post installation phase, ensure you have run through Phase 1 - Prerequisites, Phase 2 - Core Services Installation, and Phase 3 - Execution Environments Installation.

Backup and restore

Backups of CircleCI server can be created through Velero. You installed Velero in your cluster during the prerequisites installation phase.

Set up backup and restore on AWS

These instructions were sourced from the Velero docs.

  1. Create an AWS S3 bucket

    BUCKET=<YOUR_BUCKET>
    REGION=<YOUR_REGION>
    aws s3api create-bucket \
        --bucket $BUCKET \
        --region $REGION \
        --create-bucket-configuration LocationConstraint=$REGION
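
    If your bucket is in us-east-1, omit the --create-bucket-configuration flag, as that region does not accept a LocationConstraint:

    aws s3api create-bucket \
        --bucket $BUCKET \
        --region us-east-1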
  2. Set up permissions for Velero

    Create an IAM user:

    aws iam create-user --user-name velero

    Attach policies to give user velero the necessary permissions:

    cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
    aws iam put-user-policy \
      --user-name velero \
      --policy-name velero \
      --policy-document file://velero-policy.json

    Create an access key for user velero:

    aws iam create-access-key --user-name velero

    The result should look like this:

    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
            "AccessKeyId": <AWS_ACCESS_KEY_ID>
      }
    }

    Create a Velero-specific credentials file (for example: ./credentials-velero) in your local directory, with the following contents:

    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  3. Install and start Velero

    Run the following velero install command. This creates a namespace called velero and installs all the necessary resources to run Velero. Make sure that you pass the correct file name containing the AWS credentials that you created in step two.

    velero install \
        --provider aws \
        --plugins velero/velero-plugin-for-aws:v1.2.0 \
        --bucket $BUCKET \
        --backup-location-config region=$REGION \
        --snapshot-location-config region=$REGION \
        --secret-file ./credentials-velero \
        --use-restic \
        --wait
  4. Verify Velero

    Once Velero is installed on your cluster, check the new velero namespace. You should have a Velero deployment and a restic daemonset, for example:

    $ kubectl get pods --namespace velero
    NAME                      READY   STATUS    RESTARTS   AGE
    restic-5vlww              1/1     Running   0          2m
    restic-94ptv              1/1     Running   0          2m
    restic-ch6m9              1/1     Running   0          2m
    restic-mknws              1/1     Running   0          2m
    velero-68788b675c-dm2s7   1/1     Running   0          2m
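
    You can also confirm that Velero can reach your new bucket by checking the backup storage location. Depending on your Velero version, the output includes a phase that should show Available once Velero can write to the bucket:

    $ velero backup-location get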

Set up backup and restore on GCP

These instructions were sourced from the documentation for the Velero GCP plugin.

  1. Create a GCP bucket

    To reduce the risk of typos, you can set some of the parameters as shell variables. If you cannot complete all the steps in a single session, remember to reset the variables as needed before proceeding. In the step below, for example, you define a variable for your bucket name. Replace the <YOUR_BUCKET> placeholder with the name of the bucket you want to create for your backups.

    BUCKET=<YOUR_BUCKET>

    gsutil mb gs://$BUCKET/
  2. Set up permissions for Velero

    1. Set a shell variable for your project ID. Make sure that your gcloud CLI points to the correct project by looking at the current configuration:

      gcloud config list

      If the project is correct, set the variable:

      PROJECT_ID=$(gcloud config get-value project)
    2. Create a service account:

      gcloud iam service-accounts create velero \
          --display-name "Velero service account"

      You can check if the service account has been created successfully by running the following command:

      gcloud iam service-accounts list
    3. Next, store the email address for the Service Account in a variable. Modify the command as needed to match the display name you have chosen for your Service Account:

      SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:Velero service account" \
        --format 'value(email)')

      Grant the necessary permissions to the Service Account:

      ROLE_PERMISSIONS=(
          compute.disks.get
          compute.disks.create
          compute.disks.createSnapshot
          compute.snapshots.get
          compute.snapshots.create
          compute.snapshots.useReadOnly
          compute.snapshots.delete
          compute.zones.get
      )
      
      gcloud iam roles create velero.server \
          --project $PROJECT_ID \
          --title "Velero Server" \
          --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
      
      gcloud projects add-iam-policy-binding $PROJECT_ID \
          --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
          --role projects/$PROJECT_ID/roles/velero.server
      
      gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
    4. Next, ensure that Velero can use this Service Account.

      • Option 1: JSON key file

        You can simply pass a JSON credentials file to Velero to authorize it to perform actions as the Service Account. To do this, you first need to create a key:

        gcloud iam service-accounts keys create credentials-velero \
            --iam-account $SERVICE_ACCOUNT_EMAIL

        After running this command, you should see a file named credentials-velero in your local working directory.

      • Option 2: Workload Identities

        If you are already using Workload Identity in your cluster, you can bind the GCP Service Account you just created to Velero’s Kubernetes service account. In this case, the GCP Service Account needs the iam.serviceAccounts.signBlob role in addition to the permissions already specified above.
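
        A minimal sketch of that binding, assuming your cluster has Workload Identity enabled and Velero will run as the velero Kubernetes service account in the velero namespace created by velero install:

        gcloud iam service-accounts add-iam-policy-binding $SERVICE_ACCOUNT_EMAIL \
            --role roles/iam.workloadIdentityUser \
            --member "serviceAccount:${PROJECT_ID}.svc.id.goog[velero/velero]"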

  3. Install and start Velero

    Run one of the following velero install commands, depending on how you authorized the service account. This creates a namespace called velero and installs all the necessary resources to run Velero.

    • using a JSON key file

      velero install \
          --provider gcp \
          --plugins velero/velero-plugin-for-gcp:v1.2.0 \
          --bucket $BUCKET \
          --secret-file ./credentials-velero \
          --use-restic \
          --wait
    • using Workload Identities

      velero install \
          --provider gcp \
          --plugins velero/velero-plugin-for-gcp:v1.2.0 \
          --bucket $BUCKET \
          --no-secret \
          --sa-annotations iam.gke.io/gcp-service-account=$SERVICE_ACCOUNT_EMAIL \
          --backup-location-config serviceAccount=$SERVICE_ACCOUNT_EMAIL \
          --use-restic \
          --wait

      For more options on customizing your installation, refer to the Velero documentation.

  4. Verify Velero

    Once Velero is installed on your cluster, check the new velero namespace. You should have a Velero deployment and a restic daemonset, for example:

    $ kubectl get pods --namespace velero
    NAME                      READY   STATUS    RESTARTS   AGE
    restic-5vlww              1/1     Running   0          2m
    restic-94ptv              1/1     Running   0          2m
    restic-ch6m9              1/1     Running   0          2m
    restic-mknws              1/1     Running   0          2m
    velero-68788b675c-dm2s7   1/1     Running   0          2m

Set up backup and restore with S3-compatible storage

The following steps assume you are using S3-compatible object storage, but not necessarily AWS S3, for your backups.

These instructions were sourced from the Velero docs.

  1. Configure mc client

    To start, configure mc (the MinIO Client) to connect to your storage provider:

    # Alias can be any name as long as you use the same value in subsequent commands
    export ALIAS=my-provider
    mc alias set $ALIAS <YOUR_MINIO_ENDPOINT> <YOUR_MINIO_ACCESS_KEY_ID> <YOUR_MINIO_SECRET_ACCESS_KEY>

    You can verify that your client is correctly configured by running mc ls $ALIAS; the buckets in your provider should be listed in the output.

  2. Create a bucket

    Create a bucket for your backups. It is important that a new bucket is used, as Velero cannot use a preexisting bucket that contains other content.

    mc mb ${ALIAS}/<YOUR_BUCKET>
  3. Create a user and policy

    Create a user and policy for Velero to access your bucket.

    # Create a new user for Velero (choose a new access key ID and secret access key for it)
    mc admin user add $ALIAS <YOUR_VELERO_ACCESS_KEY_ID> <YOUR_VELERO_SECRET_ACCESS_KEY>
    
    # Create policy
    cat > velero-policy.json << EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:*"
          ],
          "Resource": [
            "arn:aws:s3:::<YOUR_BUCKET>",
            "arn:aws:s3:::<YOUR_BUCKET>/*"
          ]
        }
      ]
    }
    EOF
    
    mc admin policy add $ALIAS velero-policy velero-policy.json
    
    # Bind user to policy
    mc admin policy set $ALIAS velero-policy user=<YOUR_VELERO_ACCESS_KEY_ID>

    Finally, you add your new user’s credentials to a file (./credentials-velero in this example) with the following contents:

    [default]
    aws_access_key_id=<YOUR_VELERO_ACCESS_KEY_ID>
    aws_secret_access_key=<YOUR_VELERO_SECRET_ACCESS_KEY>
  4. Install and start Velero

    Run the following velero install command. This creates a namespace called velero and installs all the necessary resources to run Velero.

    velero install --provider aws \
      --plugins velero/velero-plugin-for-aws:v1.2.0 \
      --bucket <YOUR_BUCKET> \
      --secret-file ./credentials-velero \
      --use-volume-snapshots=false \
      --use-restic \
      --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=<YOUR_ENDPOINT> \
      --wait
  5. Verify Velero

    Once Velero is installed on your cluster, check the new velero namespace. You should have a Velero deployment and a restic daemonset, for example:

    $ kubectl get pods --namespace velero
    NAME                      READY   STATUS    RESTARTS   AGE
    restic-5vlww              1/1     Running   0          2m
    restic-94ptv              1/1     Running   0          2m
    restic-ch6m9              1/1     Running   0          2m
    restic-mknws              1/1     Running   0          2m
    velero-68788b675c-dm2s7   1/1     Running   0          2m

Take a backup

Now that Velero is installed on your cluster, you are ready to create your first backup. If you encounter problems, please refer to the troubleshooting section.

  • To create the backup, run:

    # Derive the namespace, chart, and revision of your CircleCI server release from Helm
    K8S_NS=$(helm list -o yaml | yq '.[].namespace')
    CHART=$(helm list -o yaml | yq '.[].chart')
    REV=$(helm list -o yaml | yq '.[].revision')
    # Generate a random suffix so each backup gets a unique name
    RANDOM_STR=$(cat /dev/urandom | env LC_ALL=C tr -dc 'a-z0-9' | head -c 8)
    
    velero backup create "${K8S_NS}-${RANDOM_STR}" --include-namespaces "${K8S_NS}" --labels "chart--rev=${CHART}--${REV}"
  • To restore from a backup, run:

    # List all existing backups
    velero backup get --show-labels
    
    # Restore the specific backup
    velero restore create --include-namespaces <circleci-namespace> --from-backup <backup-name>
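  • To check the progress and status of a backup or restore, you can use Velero's describe commands, for example:

    # Show details for a specific backup
    velero backup describe <backup-name> --details

    # Show details for a specific restore
    velero restore describe <restore-name>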

See the Velero documentation for more details.

Email notifications

Add email notification support by adding the following to values.yaml:

smtp:
  host: <hostname-of-submission-server>
  user: <username-for-submission-server>
  password: <password-for-submission-server>
  port: <mail-port>
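
For example, with placeholder values (hostname, user, and password below are hypothetical; 587 is the usual SMTP submission port):

smtp:
  host: smtp.example.com
  user: circleci-smtp-user
  password: <smtp-password>
  port: 587

After updating values.yaml, apply the change with the helm upgrade command you use for your CircleCI server chart.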

Managing orbs

Server installations include their own local orb registry, which is private to the installation. All orbs referenced in project configs are resolved against this local registry, and you are responsible for maintaining these orbs. This includes:

  • Copying orbs from the public registry.

  • Updating orbs that may have been copied previously.

  • Registering your company’s private orbs, if you have any.

For more information and steps to complete these tasks, see the Orbs on server guide.
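
As an illustration, here is a rough sketch of copying one public orb into your server registry with the CircleCI CLI. The orb name, version, host, and token are placeholders, the target namespace and orb must already exist in your server registry, and the Orbs on server guide remains the authoritative reference:

# Fetch the orb definition from the public registry (the CLI queries circleci.com by default)
circleci orb source circleci/node@5.1.0 > node-orb.yml

# Publish that definition to your server installation's registry
circleci orb publish node-orb.yml circleci/node@5.1.0 \
  --host https://circleci.example.com \
  --token <server-api-token>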


Help make this document better

This guide, as well as the rest of our docs, is open source and available on GitHub. We welcome your contributions.

Need support?

Our support engineers are available to help with service issues, billing, or account-related questions, and can help troubleshoot build configurations. Contact our support engineers by opening a ticket.

You can also visit our support site to find support articles, community forums, and training resources.