Building CI/CD pipelines using dynamic config
Developer Advocate, CircleCI
Creating robust, manageable, and reusable functionality is a big part of my job as a CI/CD engineer. Recently, I wrote about managing reusable pipeline configuration by adopting and implementing pipeline variables within pipeline configuration files. As I showed in that tutorial, pipeline variables and orbs add some flexibility to this process, but they are still a bit limited. The nature of pipeline configuration files sometimes restricts developers who want a solution that fits their specific build processes. Those restrictions can lead developers to create workarounds, like executing scripts in pre-commit hooks to generate config before a commit, or using jobs to trigger pipeline runs via the API with specific pipeline parameters set. Some of these solutions achieve their desired effect, but they can be inefficient, overly complex, or leave edge cases that are not easily solved.
To address this need, CircleCI has released dynamic configuration. Dynamic config gives you the ability to natively inject dynamism into pipeline configurations; for example, you can use scripts to generate and execute a separate config file. It is a big step forward in flexibility, and it means you can customize which sections of the config you want to test and validate. Dynamic config also lets you maintain multiple config.yml files in a single code repository and selectively identify and execute your primary config.yml file. This feature offers a wide range of powerful capabilities for specifying and executing a variety of dynamic pipeline workloads.
In this post, I will walk you through how to implement dynamic configuration by creating a config file that is not in the root configuration folder.
Getting started with dynamic config using setup workflows
I will be using this code repo as the example for this post. You can either fork the project or import it to create your own version and follow along. After forking or importing the example, add the project to CircleCI and enable dynamic config using setup workflows. This is easily accomplished by completing the following steps:
- Go to the Projects dashboard in the CircleCI application.
- Select the project you want to use.
- Select Project Settings in the upper-right corner.
- On the left-hand panel, select Advanced.
- Towards the bottom, toggle the switch for Run Setup Workflows to the “on” position.
Your project now has the ability to run dynamic config using a setup workflow.
To get started, you will need to create a new `config.yml` file that lives in the `.circleci/` directory. This file sets the `setup:` key to a value of `true`, which is the entry point for executing or triggering a variety of commands defined within it. Commands can include things like passing parameter values and triggering a separate `config.yml` pipeline that exists outside of the default `.circleci/` directory.
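Stripped of everything else, the bare minimum for a setup config looks something like this sketch. It mirrors the pattern used in this post's example; the `forward` job name is illustrative, and the target file at `configuration_path` must already exist in the repo (or be generated by an earlier step):

```yaml
# .circleci/config.yml -- minimal setup-workflow skeleton (illustrative)
version: 2.1
setup: true                             # marks this config as a dynamic setup workflow
orbs:
  continuation: circleci/continuation@0.1.0
jobs:
  forward:
    executor: continuation/default
    steps:
      - checkout
      - continuation/continue:          # hand off to a config outside .circleci/
          parameters: '{}'
          configuration_path: configs/generated_config.yml
workflows:
  setup-workflow:
    jobs:
      - forward
```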
Generating a config.yml file from a shell script
Next, I will show you how to use dynamic configuration to execute a separate config file using scripts.
This pattern triggers the `.circleci/config.yml` file, which executes the `scripts/generate-pipeline-config` script. That script generates a new config file in the `configs/` directory, which is then processed in subsequent steps.
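Concretely, the repository layout for this pattern looks like this:

```
.
├── .circleci/
│   └── config.yml                 # setup config (setup: true)
├── scripts/
│   └── generate-pipeline-config   # Bash script that writes the next config
└── configs/
    └── generated_config.yml       # generated at runtime, then executed
```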
```yaml
# This file demonstrates how to leverage dynamic configuration to execute a separate config file using scripts.
version: 2.1
setup: true
orbs:
  continuation: circleci/continuation@0.1.0
jobs:
  generate-config:
    executor: continuation/default
    steps:
      - checkout
      - run:
          name: Generate Pipeline generated_config.yml file
          command: |
            # The generate script has 2 arguments: 1) Terraform Version 2) DigitalOcean CLI Version
            ./scripts/generate-pipeline-config "0.14.5" "1.59.0"
      - continuation/continue:
          parameters: '{}'
          configuration_path: configs/generated_config.yml
workflows:
  setup-workflow:
    jobs:
      - generate-config
```
In the code example above, the `setup:` key is set to `true`, which makes this configuration a dynamic config file that uses setup workflows. The `continuation: circleci/continuation@0.1.0` orb enables the ability to orchestrate the execution of your primary configurations. The `generate-config:` job's steps list executes this command:

./scripts/generate-pipeline-config "0.14.5" "1.59.0"
This command executes the `generate-pipeline-config` Bash script, which generates a new pipeline config file and requires two arguments:
- Argument 1 is the version of Terraform to install
- Argument 2 is the version of the DigitalOcean CLI to install
These arguments exist because the config being generated has jobs that create and deploy images on DigitalOcean Kubernetes clusters using Terraform and the DigitalOcean CLI tools. Once the `generate-pipeline-config` script executes and creates the `configs/` directory and the new `generated_config.yml` file within it, the script uses the arguments to dynamically inject the version values into the appropriate places. We will delve into the `generate-pipeline-config` file in a moment, but for now I want to move on to the `continuation/continue:` element of the setup workflow config.
The `parameters:` key specifies pipeline parameters that can be passed to the target config file defined in the `configuration_path:` element. In this case, the generated config file does not have any pipeline variables defined, so the `parameters:` element is assigned a blank value using `{}`. Finally, the `configuration_path:` element specifies the config file the setup workflow pipeline will execute next, which in this case is `configs/generated_config.yml`. This file was previously generated in the Generate Pipeline generated_config.yml file run step.
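For reference, if the generated config did declare pipeline parameters, the continuation step could pass values for them as a JSON string. This fragment is hypothetical; the `image-tag` parameter name is invented for illustration:

```yaml
# Hypothetical: passing a value for a pipeline parameter named image-tag
- continuation/continue:
    parameters: '{"image-tag": "0.1.42"}'
    configuration_path: configs/generated_config.yml
```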
Next, we should explore what is inside the `generate-pipeline-config` script to better understand how it works.
Breaking down the generate-pipeline-config file
The `generate-pipeline-config` script was executed in the dynamic configuration using setup workflows. It produced the `configs/generated_config.yml` file, which was then executed as a follow-up pipeline.
```bash
#!/bin/bash
set -o pipefail

TF_VERSION=$1     # Terraform CLI version to install
DOCTL_VERSION=$2  # DigitalOcean CLI version to install

mkdir configs/
cat << EOF > configs/generated_config.yml
version: 2.1
orbs:
  docker: circleci/docker@1.5.0
  node: circleci/node@4.2.0
  snyk: snyk/snyk@0.0.12
  terraform: circleci/terraform@2.0.0
jobs:
  scan_app:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - node/install-packages:
          override-ci-command: npm install
          cache-path: ~/project/node_modules
      - snyk/scan:
          fail-on-issues: false
          monitor-on-build: false
  scan_push_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker
      - docker/check
      - docker/build:
          image: \$DOCKER_LOGIN/\$CIRCLE_PROJECT_REPONAME
          tag: 0.1.<< pipeline.number >>
      - snyk/scan:
          fail-on-issues: false
          monitor-on-build: false
          target-file: "Dockerfile"
          docker-image-name: \$DOCKER_LOGIN/\$IMAGE_NAME:0.1.<< pipeline.number >>
          project: \${CIRCLE_PROJECT_REPONAME}/\${CIRCLE_BRANCH}-app
      - docker/push:
          image: \$DOCKER_LOGIN/\$CIRCLE_PROJECT_REPONAME
          tag: 0.1.<< pipeline.number >>
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - node/install-packages:
          override-ci-command: npm install
          cache-path: ~/project/node_modules
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
  create_do_k8s_cluster:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"\$TERRAFORM_TOKEN\"}" > \$HOME/.terraformrc
      - terraform/install:
          terraform_version: $TF_VERSION
          arch: "amd64"
          os: "linux"
      - terraform/init:
          path: ./terraform/do_create_k8s
      - run:
          name: Create K8s Cluster on DigitalOcean
          command: |
            export CLUSTER_NAME=\${CIRCLE_PROJECT_REPONAME}
            export TAG=0.1.<< pipeline.number >>
            curl -sL https://github.com/digitalocean/doctl/releases/download/v$DOCTL_VERSION/doctl-$DOCTL_VERSION-linux-amd64.tar.gz | tar -xzv
            sudo mv doctl /usr/local/bin
            export DO_K8S_SLUG_VER="\$(doctl kubernetes options versions -o json -t \$DIGITAL_OCEAN_TOKEN | jq -r '.[0] | .slug')"
            cd terraform/do_create_k8s
            terraform init
            terraform apply -var do_token=\$DIGITAL_OCEAN_TOKEN -var cluster_name=\$CLUSTER_NAME -var do_k8s_slug_ver=\$DO_K8S_SLUG_VER -auto-approve
  deploy_to_k8s:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"\$TERRAFORM_TOKEN\"}" > \$HOME/.terraformrc
      - terraform/install:
          terraform_version: $TF_VERSION
          arch: "amd64"
          os: "linux"
      - run:
          name: Deploy Application to K8s on DigitalOcean
          command: |
            export CLUSTER_NAME=\${CIRCLE_PROJECT_REPONAME}
            export TAG=0.1.<< pipeline.number >>
            export DOCKER_IMAGE="\${DOCKER_LOGIN}/\${CIRCLE_PROJECT_REPONAME}:\$TAG"
            curl -sL https://github.com/digitalocean/doctl/releases/download/v$DOCTL_VERSION/doctl-$DOCTL_VERSION-linux-amd64.tar.gz | tar -xzv
            sudo mv doctl /usr/local/bin
            cd terraform/do_k8s_deploy_app
            doctl auth init -t \$DIGITAL_OCEAN_TOKEN
            doctl kubernetes cluster kubeconfig save \$CLUSTER_NAME
            terraform init
            terraform apply -var do_token=\$DIGITAL_OCEAN_TOKEN -var cluster_name=\$CLUSTER_NAME -var docker_image=\$DOCKER_IMAGE -auto-approve
            # Save the Load Balancer Public IP Address
            export ENDPOINT="\$(terraform output lb_public_ip)"
            mkdir -p /tmp/do_k8s/
            echo 'export ENDPOINT='\${ENDPOINT} > /tmp/do_k8s/dok8s-endpoint
      - persist_to_workspace:
          root: /tmp/do_k8s
          paths:
            - "*"
  smoketest_k8s_deployment:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/do_k8s/
      - run:
          name: Smoke Test K8s App Deployment
          command: |
            source /tmp/do_k8s/dok8s-endpoint
            ./test/smoke_test \$ENDPOINT
  destroy_k8s_cluster:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Create .terraformrc file locally
          command: echo "credentials \"app.terraform.io\" {token = \"\$TERRAFORM_TOKEN\"}" > \$HOME/.terraformrc && cat \$HOME/.terraformrc
      - terraform/install:
          terraform_version: $TF_VERSION
          arch: "amd64"
          os: "linux"
      - terraform/init:
          path: ./terraform/do_k8s_deploy_app
      - run:
          name: Destroy App Deployment
          command: |
            export CLUSTER_NAME=\${CIRCLE_PROJECT_REPONAME}
            export TAG=0.1.<< pipeline.number >>
            export DOCKER_IMAGE="\${DOCKER_LOGIN}/\${CIRCLE_PROJECT_REPONAME}:\$TAG"
            curl -sL https://github.com/digitalocean/doctl/releases/download/v$DOCTL_VERSION/doctl-$DOCTL_VERSION-linux-amd64.tar.gz | tar -xzv
            sudo mv doctl /usr/local/bin
            cd terraform/do_k8s_deploy_app/
            doctl auth init -t \$DIGITAL_OCEAN_TOKEN
            doctl kubernetes cluster kubeconfig save \$CLUSTER_NAME
            terraform init
            terraform destroy -var do_token=\$DIGITAL_OCEAN_TOKEN -var cluster_name=\$CLUSTER_NAME -var docker_image=\$DOCKER_IMAGE -auto-approve
      - terraform/init:
          path: ./terraform/do_create_k8s/
      - run:
          name: Destroy K8s Cluster
          command: |
            export CLUSTER_NAME=\${CIRCLE_PROJECT_REPONAME}
            export TAG=0.1.<< pipeline.number >>
            cd terraform/do_create_k8s/
            terraform init
            terraform destroy -var do_token=\$DIGITAL_OCEAN_TOKEN -var cluster_name=\$CLUSTER_NAME -auto-approve
workflows:
  scan_deploy:
    jobs:
      - scan_app
      - scan_push_docker_image
      - run_tests
      - create_do_k8s_cluster
      - deploy_to_k8s:
          requires:
            - create_do_k8s_cluster
            - scan_push_docker_image
      - smoketest_k8s_deployment:
          requires:
            - deploy_to_k8s
      - approve_destroy:
          type: approval
          requires:
            - smoketest_k8s_deployment
      - destroy_k8s_cluster:
          requires:
            - approve_destroy
EOF
```
This example is a Bash script that uses a heredoc, which is a method of defining a file within a file. The script has pipeline config syntax defined within it and uses the argument variables `TF_VERSION` and `DOCTL_VERSION` to specify the respective version numbers of the CLI tools to install. These variables are interpolated within the heredoc config syntax when the script is executed, and the final `configs/generated_config.yml` file is created with the values baked in. This example shows how you can use scripting languages to easily and consistently generate dynamic config that pairs nicely with setup workflows.
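Notice the escaping convention inside the heredoc: unescaped variables like `$TF_VERSION` are expanded at generation time, while escaped ones like `\$HOME` are written out literally so the pipeline expands them later at run time. Here is a tiny standalone sketch of that rule (not part of the project script):

```shell
# Demo: generation-time vs. runtime expansion inside a heredoc
TF_VERSION="0.14.5"            # expanded now, while this script runs
generated=$(cat << EOF
terraform_version: $TF_VERSION
command: echo \$HOME
EOF
)
echo "$generated"
```

The first line of the output has the version substituted in; the second still contains a literal `$HOME` for the consuming pipeline to resolve.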
The dynamic configuration defined in this script:
- Imports and uses orbs
- Runs unit tests
- Performs security scans
- Builds Docker images
- Creates a new Kubernetes cluster on DigitalOcean using Terraform
- Deploys the app to the new Kubernetes cluster using Terraform
- Performs a validation test on the deployment
- Triggers an approval step to destroy the Kubernetes cluster
This pattern can be accomplished with pretty much any language, framework, or stack. I chose Bash for this example, but you could use Python, JavaScript, or any other language.
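To illustrate that language independence, here is a hypothetical sketch of the same config-generation idea in Python instead of Bash. The template content and the `generate_config` helper are invented for this example; only the output path, `configs/generated_config.yml`, matches the project:

```python
import sys
from pathlib import Path

# Hypothetical template: versions are injected at generation time,
# just like $TF_VERSION inside the Bash heredoc.
TEMPLATE = """version: 2.1
jobs:
  install-tools:
    docker:
      - image: cimg/base:stable
    steps:
      - run: echo "Installing Terraform {tf_version} and doctl {doctl_version}"
workflows:
  setup:
    jobs:
      - install-tools
"""

def generate_config(tf_version: str, doctl_version: str, out_dir: str = "configs") -> Path:
    """Render the template and write <out_dir>/generated_config.yml."""
    path = Path(out_dir)
    path.mkdir(parents=True, exist_ok=True)
    target = path / "generated_config.yml"
    target.write_text(TEMPLATE.format(tf_version=tf_version, doctl_version=doctl_version))
    return target

if __name__ == "__main__" and len(sys.argv) == 3:
    # Same calling convention as the Bash script: two version arguments
    generate_config(sys.argv[1], sys.argv[2])
```

The setup workflow would invoke this with `python scripts/generate_config.py "0.14.5" "1.59.0"` and then hand the written file to `continuation/continue` exactly as before.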
Conclusion
Dynamic config gives developers more flexibility to create tailored CI/CD pipelines that execute their unique software development processes. In this post we introduced the concept of dynamic configuration and demonstrated a single use case. Of course, dynamic config has many more use cases and patterns that will help expand software development options and customize workload orchestration.
Keep an eye out for another article on how to conditionally run a workflow based on changes made to a specific file set. This will be of particular interest to developers working on microservices or in a monorepo, where many projects share a single repository. These users can now use the path filter orb to validate and test custom sections of code.
I look forward to learning about some of the use cases and patterns that dynamic config could help you solve. I would love to know your thoughts and opinions, so please join the discussion by tweeting to me @punkdata or @circleci.
Thanks for reading!