Configuring external services
This page describes how to configure external services for use with a new CircleCI server v4.5 installation, or how to migrate internal PostgreSQL and MongoDB data from an existing CircleCI server v4.5 installation to your externalized datastores.
PostgreSQL
Best practices for PostgreSQL
Your externalized PostgreSQL instance needs to be version 12.1 or higher.
Consider running at least two PostgreSQL replicas to allow recovery from primary failure and for backups. The table below shows the recommended specifications for PostgreSQL machines:
| # of Daily Active Users | # of PostgreSQL Replicas | CPU | RAM | Disk | NIC Speed |
|---|---|---|---|---|---|
| <50 | 2 | 8 cores | 16 GB | 100 GB | 1 Gbps |
| 50 - 250 | 2 | 8 cores | 16 GB | 200 GB | 1 Gbps |
| 250 - 1000 | 3 | 8 cores | 32 GB | 500 GB | 10 Gbps |
| 1000 - 5000 | 3 | 8 cores | 32 GB | 1 TB | 10 Gbps |
| 5000+ | 3 | 8 cores | 32 GB | 1 TB | 10 Gbps |
Migrating from an internal PostgreSQL to an externalized source
If you are doing a fresh install of CircleCI server, then you can skip this section and head to Connecting your external PostgreSQL instance to CircleCI server.
When a CircleCI server instance is deployed, Postgres is deployed internally by default via its helm chart. However, as an operator, you may wish to externalize this database to have better control over scalability and availability. Once you have configured your external Postgres, you may use the guide below to migrate your Postgres data to your external database.
This process requires downtime.
1. Disable the application
Disable the CircleCI server application by scaling down the application layer pods. No data is lost in this process, but the application will be unreachable.
Scale down your application layer pods:
```
namespace=<your-server-namespace>
kubectl -n "$namespace" scale deploy -l "layer=application" --replicas="0"
```

Running `kubectl -n "$namespace" get pods` will show most of your pods scaling down, leaving your database pods running, including PostgreSQL.
2. Validate access to your external PostgreSQL from within the cluster (optional)
1. Confirm that pods within your CircleCI server cluster can access your external PostgreSQL. You can do this from within your internal PostgreSQL pod:

   ```
   PG_POD=$(kubectl -n "$namespace" get pods | grep postgresql | tail -1 | awk '{print $1}')
   kubectl exec -it -n "$namespace" "$PG_POD" -- bash
   ```

2. While still connected to the pod, run:

   ```
   psql -h <your-external-postgres-host> -U postgres -p <your-external-postgres-port>
   ```
You should be able to connect to your external Postgres at this point. If not, resolve any issues before proceeding.
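If `psql` cannot connect, a lighter-weight check such as `pg_isready` (shipped with the PostgreSQL client tools) can help separate network problems from authentication problems. A minimal sketch, using the same placeholder host and port:

```
# Check reachability without authenticating.
# <your-external-postgres-host> and <your-external-postgres-port> are placeholders.
pg_isready -h <your-external-postgres-host> -p <your-external-postgres-port>
# Exit code 0 means the server is accepting connections;
# exit code 2 means no response, which usually points to a
# network, firewall, or security-group issue rather than credentials.
```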
You may use `helm upgrade …` to restore your CircleCI server instance to a running state.
3. Generate export of your internal PostgreSQL
1. Retrieve your internal Postgres credentials:

   ```
   PG_PASSWORD=$(kubectl -n "$namespace" get secrets postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
   ```

   The username for your internal Postgres is `postgres`. The password is randomly generated unless it was set directly at installation.

2. Connect to your Postgres pod and perform a Postgres dump:

   ```
   kubectl -n "$namespace" exec -it "$PG_POD" -- bash -c "export PGPASSWORD='$PG_PASSWORD' && pg_dumpall -U postgres -c" > circle.sql
   ```

   This backup is created in the filesystem used by the Postgres pod. If you wish to store it locally, you may use:

   ```
   kubectl cp -n "$namespace" "$PG_POD":circle.sql /local/dir/circle.sql
   ```

3. Clean up the Postgres dump. Your internally deployed Postgres uses the username `postgres`. However, during the restore, the Postgres dump will drop all resources before trying to create new ones, including the `postgres` user. Access the Postgres pod where the dump is stored and run the following commands on the dump file to remove the lines that would delete the `postgres` user:

   ```
   PG_POD=$(kubectl -n "$namespace" get pods | grep postgresql | tail -1 | awk '{print $1}')
   kubectl exec -it -n "$namespace" "$PG_POD" -- bash
   sed -i".bak" '/DROP ROLE postgres/d' circle.sql
   sed -i".bak" '/CREATE ROLE postgres/d' circle.sql
   sed -i".bak" '/ALTER ROLE postgres WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN REPLICATION BYPASSRLS PASSWORD/d' circle.sql
   ```
4. Restore your data in your external PostgreSQL
While still connected to the internally deployed Postgres, restore the dumped data to your external Postgres:

```
psql -h <your-external-postgres-host> -U postgres -p <your-external-postgres-port> < circle.sql
```
Now your external Postgres will have your CircleCI server data. In the next section you will update CircleCI server to point to your external Postgres.
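Before reconnecting the application, it is worth sanity-checking the restore. A quick sketch, using the same placeholder connection details (the exact database names may vary between server releases):

```
# List the restored databases and roles on the external instance.
psql -h <your-external-postgres-host> -U postgres -p <your-external-postgres-port> -c '\l'
psql -h <your-external-postgres-host> -U postgres -p <your-external-postgres-port> -c '\du'
```

You should see the CircleCI databases and the roles that were present in the dump.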
Connecting your external PostgreSQL instance to CircleCI server
Once you have set up your external PostgreSQL instance, add the following to your `values.yaml` file so that your CircleCI server instance can access it:

```yaml
postgresql:
  internal: false
  postgresqlHost: <domain> # The domain or IP address of your PostgreSQL instance
  postgresqlPort: <port> # The port of your PostgreSQL instance
```
Setting `postgresql.internal: false` will remove any previously deployed internal PostgreSQL instance.
The changes will take effect upon running `helm install/upgrade`. If you are completing a migration to an externalized PostgreSQL instance, then when you perform `helm upgrade`, the scaled-down pods will be scaled back up to the replica counts defined in your `values.yaml`.
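The upgrade step might look like the following sketch; the release and chart names are placeholders, so substitute the values from your own installation instructions:

```
# <release-name> and <circleci-server-chart> are placeholders for your deployment.
helm upgrade <release-name> <circleci-server-chart> -n "$namespace" -f values.yaml
```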
Back up PostgreSQL
PostgreSQL provides official documentation for backing up and restoring a PostgreSQL 12 installation.
We strongly recommend the following:
- Take daily backups
- Keep at least 30 days of backups
- Use encrypted storage for backups, as databases might contain sensitive information
- Perform a backup before each upgrade of CircleCI server
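One way to implement the daily-backup recommendation is a cron entry on a backup host that runs `pg_dumpall` against the external instance and prunes dumps older than 30 days. A sketch, assuming the PostgreSQL client tools are installed, credentials are supplied via `~/.pgpass`, and all hostnames and paths are placeholders:

```
# /etc/cron.d/postgres-backup (illustrative)
# Daily at 02:00: dump all databases, compress, then prune backups older than 30 days.
0 2 * * * backup pg_dumpall -h <your-external-postgres-host> -U postgres | gzip > /backups/circle-$(date +\%F).sql.gz && find /backups -name 'circle-*.sql.gz' -mtime +30 -delete
```

Pair this with encrypted storage for the `/backups` volume, per the recommendations above.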
MongoDB
If using your own MongoDB instance, it needs to be version 3.6 or higher.
Migrating from an internal MongoDB to an externalized source
If you are doing a fresh install of CircleCI server, then you can skip this section and head to Connecting your external MongoDB instance to CircleCI server.
When a CircleCI server instance is deployed, MongoDB is deployed internally by default via its helm chart. However, as an operator, you may wish to externalize this database to have better control over scalability and availability. Once you have configured your external MongoDB, you may use the guide below to migrate your Mongo data to your external database.
This process requires downtime.
1. Disable the application
Disable the CircleCI server application by scaling down the application layer pods. No data is lost in this process, but the application will be unreachable.
Scale down your application layer pods:
```
namespace=<your-server-namespace>
kubectl -n "$namespace" scale deploy -l "layer=application" --replicas="0"
```

Running `kubectl -n "$namespace" get pods` will show most of your pods scaling down, leaving your database pods running, including MongoDB.
2. Validate access to your external MongoDB from within the cluster (optional)
1. Confirm that pods within your CircleCI server cluster can access your external MongoDB. You can do this from within your internal MongoDB pod:

   ```
   MONGO_POD="mongodb-0"
   kubectl exec -it -n "$namespace" "$MONGO_POD" -- bash
   ```

2. While still connected to the pod, run the following:

   ```
   mongo --username <username> --password --authenticationDatabase admin --host <external-mongodb-host> --port <external-mongodb-port>
   ```
You should be able to connect to your external MongoDB at this point. If not, resolve any issues before proceeding.
You may use `helm upgrade …` to restore your CircleCI server instance to a running state.
3. Generate export of your internal MongoDB
1. Retrieve your internal MongoDB credentials:

   ```
   MONGO_POD="mongodb-0"
   MONGODB_USERNAME="root"
   MONGODB_PASSWORD=$(kubectl -n "$namespace" get secrets mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
   ```

2. Create a backup directory in your MongoDB pod:

   ```
   kubectl -n "$namespace" exec "$MONGO_POD" -- mkdir -p /tmp/backups/
   ```

3. Generate a MongoDB database dump to the backup directory you just created:

   ```
   kubectl -n "$namespace" exec -it "$MONGO_POD" -- bash -c "mongodump -u '$MONGODB_USERNAME' -p '$MONGODB_PASSWORD' --authenticationDatabase admin --db=circle_ghe --out=/tmp/backups/"
   ```
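You can confirm the dump completed by listing its contents. `mongodump` writes one directory per database containing a `.bson` file and a `.metadata.json` file for each collection:

```
# The circle_ghe directory should contain .bson and .metadata.json
# files for each dumped collection.
kubectl -n "$namespace" exec "$MONGO_POD" -- ls -R /tmp/backups/circle_ghe
```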
4. Restore your data in your external MongoDB
Use the generated MongoDB backup to restore the data to your external MongoDB:

```
kubectl -n "$namespace" exec "$MONGO_POD" -- mongorestore --drop -u "$MONGODB_USERNAME" -p "$MONGODB_PASSWORD" --host <external-mongodb-host> --port <external-mongodb-port> --authenticationDatabase admin /tmp/backups/circle_ghe
```
Now your external MongoDB will have your CircleCI server data. In the next section you will update CircleCI server to point to your external MongoDB.
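A quick sanity check against the external instance, using the same placeholder connection details, is to list the collections that were restored:

```
# List collections in the restored circle_ghe database.
mongo --username <username> --password --authenticationDatabase admin \
  --host <external-mongodb-host> --port <external-mongodb-port> \
  --eval 'db.getSiblingDB("circle_ghe").getCollectionNames()'
```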
Connecting your external MongoDB instance to CircleCI server
Once you have configured your external MongoDB instance, add the following to your `values.yaml` file to connect your CircleCI server instance:

```yaml
mongodb:
  internal: false
  hosts: <hostname:port> # this can be a comma-separated list of multiple hosts for sharded instances
  ssl: <ssl-enabled>
  # If using an SSL connection with custom CA or self-signed certs, set this
  # to true
  tlsInsecure: false
  # Any other options you'd like to append to the MongoDB connection string.
  # Format as a query string (key=value pairs, separated by &; special
  # characters need to be URL encoded)
  options: <additional-options>
  auth:
    database: <authentication-source-database>
    mechanism: SCRAM-SHA-1
```
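As a concrete illustration, a filled-in fragment for a hypothetical three-member replica set with TLS enabled might look like this (the hostnames and replica set name are invented for the example):

```yaml
mongodb:
  internal: false
  hosts: mongo-0.example.com:27017,mongo-1.example.com:27017,mongo-2.example.com:27017
  ssl: true
  tlsInsecure: false # true only if using a custom CA or self-signed certs
  options: replicaSet=rs0&retryWrites=true
  auth:
    database: admin
    mechanism: SCRAM-SHA-1
```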
The changes will take effect upon running `helm install/upgrade`. If you are completing a migration to an externalized MongoDB instance, then when you perform `helm upgrade`, the scaled-down pods will be scaled back up to the replica counts defined in your `values.yaml`.