Infrastructure as Code, part 3: automate Kubernetes deployments with CI/CD and Terraform
This series shows you how to get started with infrastructure as code (IaC). The goal is to help developers build a strong understanding of IaC through tutorials and code examples.
- Part 01: create a Kubernetes cluster
- Part 02: build Docker images and deploy to Kubernetes
- Part 03: automate deployments with CI/CD
In this post, I will demonstrate how to create continuous integration and deployment (CI/CD) pipelines that automate the Terraform IaC deployments covered in part 1 and part 2 of this series. Here is a quick list of things we will accomplish in this post:
- Build a new CircleCI config.yml file for the project
- Configure new jobs and workflows
- Automate the execution of Terraform code to create Google Kubernetes Engine (GKE) clusters and deploy the application
Note: Before you can go through this part of the tutorial, make sure you have completed all the actions in the prerequisites section of part 1.
We will start with a quick explanation of what CI/CD is and a review of the previous two installments of this tutorial series. Then you can start learning about the CircleCI config.yml file included in this code repo.
Continuous integration and continuous deployment
CI/CD pipelines help developers and teams automate their build and test processes. They create valuable feedback loops that provide a near real-time view of the software development process. CI/CD automation also ensures consistent process execution and accurate results, which helps optimize these processes and contributes to velocity gains. Streamlining development practices with CI/CD is becoming common practice among teams, and understanding how to integrate and automate repeated tasks is critical to building valuable CI/CD pipelines.
In part 1 and part 2, we used Terraform to create a new GKE cluster and related Kubernetes objects that deploy, execute, and serve an application. These Terraform commands were executed manually from our terminal. That works well when you are developing the Terraform code or modifying it, but we want to automate the execution of those commands. There are many ways to automate them, but we are going to focus on how to do it from within CI/CD pipelines.
What are CircleCI pipelines?
CircleCI pipelines are the full set of processes you run when you trigger work on your projects. Pipelines encompass your workflows, which in turn coordinate your jobs. This is all defined in your project configuration file. In the next sections of this tutorial, we will define a CI/CD pipeline to build our project.
Setting up the project on CircleCI
Before we start building a config.yml file for this project, we need to add the project to CircleCI. If you are unfamiliar with the process, you can follow the Setting up CircleCI guide. Once you have completed the Setting up CircleCI section, stop there so we can configure project level environment variables.
Project level environment variables
Some jobs in this pipeline need access to authentication credentials to execute commands against the target services. In this section, we will define the credentials required by those jobs and show how to add them to CircleCI as project level environment variables. For each variable, enter the EnVar Name in the Name field and the credential in the Value field. Here is a list of the credentials our pipeline needs and their values; an example of generating the Base64 encoded values follows the list:
- EnVar Name: TF_CLOUD_TOKEN - Value: The Base64 encoded value of the local .terraformrc file which hosts the Terraform Cloud user token
- EnVar Name: DOCKER_LOGIN - Value: Docker Hub username
- EnVar Name: DOCKER_PWD - Value: Docker Hub password
- EnVar Name: GOOGLE_CLOUD_KEYS - Value: The Base64 encoded value of the GCP credential JSON file
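Both Base64 values can be generated from your terminal before you paste them into CircleCI. Here is a minimal sketch, assuming a Linux shell with GNU coreutils (on macOS, drop the -w 0 flag and pipe the output through tr -d '\n' instead); the service account key filename below is just an example:

# Encode the local .terraformrc file that holds your Terraform Cloud user token,
# then paste the output into the TF_CLOUD_TOKEN variable
base64 -w 0 ~/.terraformrc

# Encode the GCP service account key file (example filename),
# then paste the output into the GOOGLE_CLOUD_KEYS variable
base64 -w 0 ~/gcp-service-account-key.json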
Once all of the environment variables above are in place, we can begin building our pipeline in the config.yml file.
The CircleCI config.yml
The config.yml file is where you define CI/CD related jobs to be processed and executed. In this section, we will define the jobs and workflows for our pipeline.
Open the .circleci/config.yml file in an editor, delete its contents, and paste in this code:
version: 2.1
jobs:
The version: key specifies the platform features to use when running this pipeline. The jobs: key represents the list of individual jobs that we will define for this pipeline. Next, we will create the jobs that our pipeline will execute.
Job - run_tests
I encourage you to familiarize yourself with the special keys, capabilities, and features in this CircleCI reference doc which should help you gain experience with the platform. Here is a general outline and explanation for each of the keys in the job we are about to discuss.
- docker: specifies the executor, which is the runtime our job will execute in
- image: specifies the Docker container image to use for this job
- steps: a list or collection of executable commands which are run during a job
- checkout: a special step used to check out source code to the configured path
- run: used to invoke all command-line programs
- store_test_results: a special step used to upload and store test results for a build
  - path: the path (absolute, or relative to your working_directory) to the directory containing subdirectories of JUnit XML or Cucumber JSON test metadata files
- store_artifacts: a step that stores artifacts (for example logs, binaries, etc.) so they are available in the web app or through the API
  - path: the path to the directory in the primary container used for saving job artifacts
A valuable benefit of CI/CD is the ability to execute automated tests against newly written code. Running the tests every time the code is modified helps identify known and unknown bugs early.
Our next step is to define a new job in the config.yml file. Paste the following into the file:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            npm install --save
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
Here is a breakdown of what we just added:
- The docker: and image: keys specify the executor and the Docker image we are using in this job
- The npm install --save command installs the application dependencies used in the app
- The Run Unit Tests step executes the automated tests and saves the results to a local directory called test-results/
- store_test_results: is a special step that saves and pins the test-results/ directory results to the build in CircleCI
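If you want to reproduce that report locally before committing, something like the following should work from the project root, assuming Node.js is installed and mocha plus mochawesome are listed in the project's devDependencies:

# Install the project dependencies
npm install

# Run the same test command the pipeline uses; mochawesome writes its
# HTML and JSON report files into the test-results/ directory
./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results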
This job serves as the unit testing function for the project and helps identify errors in the code. If any of these tests fail, the entire pipeline build will fail, prompting the developers to fix the errors. The goal is for all the tests and jobs to pass. Next, we will create a job that builds a Docker image and pushes it to the Docker Hub registry.
Job - build_docker_image
In part 2 of this series, we manually created a Docker image and pushed it to the Docker Hub registry. In this job, we will use automation to complete this task instead. Append this code block to the config.yml file:
  build_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build Docker image
          command: |
            export TAG=0.2.<< pipeline.number >>
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME
The build_docker_image job is pretty straightforward. You have already encountered most of the CircleCI YAML keys it uses, so I will jump straight to the Build Docker image command block:
- The export TAG=0.2.<< pipeline.number >> line defines a local environment variable that uses the pipeline.number value to tie the Docker tag to the pipeline number being executed
- The export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME line defines the variable we will use to name the Docker image
- docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG . executes the Docker build command using a combination of the project level variables we set earlier and the local environment variables we just defined
- echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin authenticates our Docker Hub credentials so the job can access the platform
- docker push $DOCKER_LOGIN/$IMAGE_NAME uploads the new Docker image to the Docker Hub registry
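As a concrete, purely hypothetical example: for pipeline number 87, a repository named nodejs-app, and a Docker Hub user named dockeruser, those commands would expand roughly to:

export TAG=0.2.87
export IMAGE_NAME=nodejs-app
# Build the image with both a floating tag and a pipeline-specific tag
docker build -t dockeruser/nodejs-app -t dockeruser/nodejs-app:0.2.87 .
# Log in to Docker Hub and push the image
echo "$DOCKER_PWD" | docker login -u dockeruser --password-stdin
docker push dockeruser/nodejs-app

Note that, depending on your Docker version, docker push without an explicit tag pushes either only the latest tag or all tags, so you may also want to push the versioned tag explicitly.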
This should all look and feel familiar; these are the very same commands you ran manually in part 2, with environment variable naming added. Next, we will build a job that executes Terraform code to create a GKE cluster.
Job - gke_create_cluster
In this job, we will automate the execution of the Terraform code found in the part03/iac_gke_cluster/ directory. Append this code block to the config.yml file, then save it:
  gke_create_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Create GKE Cluster
          command: |
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_gke_cluster/
            terraform init
            terraform plan -var credentials=$HOME/gcloud_keys -out=plan.txt
            terraform apply plan.txt
One thing to take note of in this code block is the executor Docker image, ariv3ra/terraform-gcp:latest. This is an image I built that has both the Google Cloud SDK and the Terraform CLI installed. If we were not using it, we would need to add installation steps to this job to install and configure those tools every time it runs. The environment: key, which sets CLOUDSDK_CORE_PROJECT: cicd-workshops, is also an important element: it defines the environment variable value needed by the gcloud CLI commands we will be executing later.
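For context, if this job used a plain base image instead, every run would need to install those tools first. A rough sketch of what that would add (the Terraform version and download URLs below are illustrative assumptions, not values from this tutorial):

# Install the Terraform CLI (pick a version that matches your Terraform code)
curl -fsSL -o terraform.zip https://releases.hashicorp.com/terraform/0.12.29/terraform_0.12.29_linux_amd64.zip
unzip terraform.zip && mv terraform /usr/local/bin/

# Install the Google Cloud SDK with Google's install script and put gcloud on the PATH
curl -fsSL https://sdk.cloud.google.com | bash -s -- --disable-prompts
export PATH="$HOME/google-cloud-sdk/bin:$PATH"

Baking the tools into a purpose-built image avoids repeating those steps on every pipeline run.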
Other elements used in the code block:
- echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc decodes the $TF_CLOUD_TOKEN value and creates the .terraformrc file that Terraform needs to access the state data in the respective Terraform Cloud workspace
- echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys decodes the $GOOGLE_CLOUD_KEYS value and creates the gcloud_keys file that the gcloud CLI needs to access GCP
- gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys authorizes access to GCP using the gcloud_keys file we just generated
The rest of the commands are Terraform CLI commands with -var parameters that specify and override the default values of variables defined in the respective Terraform variables.tf file. Once terraform apply plan.txt executes, this job will create a new GKE cluster.
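After the job finishes, you can verify the result from your own terminal. A quick check, assuming your local gcloud is authenticated against the same cicd-workshops project and the cluster name used throughout this series:

# The new cluster should appear in this list
gcloud container clusters list --project cicd-workshops

# Fetch kubectl credentials and confirm the nodes are up
gcloud container clusters get-credentials cicd-workshops --zone us-east1-d --project cicd-workshops
kubectl get nodes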
Job - gke_deploy_app
In this job, we will automate the execution of the Terraform code found in the part03/iac_kubernetes_app/ directory. Append this code block to the config.yml file, then save it:
  gke_deploy_app:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Deploy App to GKE
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            cd part03/iac_kubernetes_app
            terraform init
            terraform plan -var $DOCKER_IMAGE -out=plan.txt
            terraform apply plan.txt
            export ENDPOINT="$(terraform output endpoint)"
            mkdir -p /tmp/gke/ && echo 'export ENDPOINT='${ENDPOINT} > /tmp/gke/gke-endpoint
      - persist_to_workspace:
          root: /tmp/gke
          paths:
            - "*"
Here are the important elements of this job's code block, including some new elements we have not previously discussed:
- export CLUSTER_NAME="cicd-workshops" defines a variable that holds the name of the GKE cluster we will be deploying to
- gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d" retrieves the kubeconfig data from the GKE cluster we created in the previous job
- terraform plan -var $DOCKER_IMAGE -out=plan.txt overrides the default value of the corresponding variable defined in the Terraform variables.tf file
- export ENDPOINT="$(terraform output endpoint)" assigns the endpoint output value generated by Terraform to a local environment variable, which is then saved to a file and persisted to a CircleCI workspace so it can be attached and used in follow-up jobs
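A later job in the same workflow could consume that persisted file. Here is a minimal sketch of what such a job's commands might look like, assuming it includes an attach_workspace step with at: /tmp/gke and that the Terraform endpoint output is the application's public address:

# Load the ENDPOINT variable written by the gke_deploy_app job
source /tmp/gke/gke-endpoint

# Simple smoke test against the deployed application
# (adjust the URL or port to match the service definition from part 2)
curl -s "http://$ENDPOINT" || exit 1
echo "App responded at $ENDPOINT"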
Job - gke_destroy_cluster
This job is the last one we will build for this pipeline. It destroys all of the resources and infrastructure created by the previous CI/CD jobs. Ephemeral resources like these are commonly used for smoke testing, integration testing, performance testing, and other types of testing, and a job that executes destroy commands is a convenient way to get rid of them when they are no longer required.
In this job, we will automate the execution of the Terraform code found in the part03/iac_kubernetes_app/ and part03/iac_gke_cluster/ directories. Append this code block to the config.yml file, then save it:
  gke_destroy_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Destroy GKE Cluster
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_kubernetes_app
            terraform init
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            terraform destroy -var $DOCKER_IMAGE --auto-approve
            cd ../iac_gke_cluster/
            terraform init
            terraform destroy -var credentials=$HOME/gcloud_keys --auto-approve
The important elements of this job's code block are the terraform destroy --auto-approve commands. They execute the Terraform commands that destroy all of the resources created by the Terraform code in the part03/iac_kubernetes_app/ and part03/iac_gke_cluster/ directories respectively.
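Once the destroy job completes, you can confirm from a terminal that nothing was left running. Again, this assumes a locally authenticated gcloud pointed at the cicd-workshops project:

# The cicd-workshops cluster should no longer appear in this list
gcloud container clusters list --project cicd-workshops

# After running terraform init in part03/iac_gke_cluster/, the state should list no resources
terraform state list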
Now that we have defined all of the jobs in our pipeline, we are ready to create the CircleCI workflows that orchestrate how those jobs are executed and processed within the pipeline.
Creating CircleCI workflows
Our next step is to create the workflows that define how jobs are executed and processed. Think of a workflow as an ordered list of jobs; it lets you specify when and how those jobs are executed. Append this workflow code block to the config.yml file:
workflows:
  build_test:
    jobs:
      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy
This code block represents the workflows definition of our pipeline. Here is what is going on in this block:
- The workflows: key specifies a workflow element
- build_test: is the name/identifier of this workflow
- The jobs: key holds the list of jobs, defined earlier in the config.yml file, to execute
In this list, you specify the jobs you want to execute in this pipeline. Here is our list of jobs:
- run_tests
- build_docker_image
- gke_create_cluster
- gke_deploy_app:
    requires:
      - run_tests
      - build_docker_image
      - gke_create_cluster
- approve-destroy:
    type: approval
    requires:
      - gke_create_cluster
      - gke_deploy_app
- gke_destroy_cluster:
    requires:
      - approve-destroy
The run_tests, build_docker_image, and gke_create_cluster workflow jobs run in parallel, or concurrently, unlike the gke_deploy_app: item, which has a requires: key. Jobs run in parallel by default, so you must explicitly declare any dependencies by job name using a requires: key with a list of the jobs that must complete before the specified job kicks off. Think of requires: keys as building dependencies on the success of other jobs; they let you segment and control the execution of your pipeline.
The approve-destroy: item specifies a job with a manual approval step. It requires human intervention: someone must approve the execution of the next job in the workflow's jobs list. That next job, gke_destroy_cluster:, depends on the approve-destroy: job being completed before it executes, and it destroys all the resources created by the previously executed jobs in the pipeline.
The complete config.yml file
A complete config.yml based on this post can be found in the .circleci/ directory of the project code repo. It is included here for you to review:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            npm install --save
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
  build_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build Docker image
          command: |
            export TAG=0.2.<< pipeline.number >>
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME
  gke_create_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Create GKE Cluster
          command: |
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_gke_cluster/
            terraform init
            terraform plan -var credentials=$HOME/gcloud_keys -out=plan.txt
            terraform apply plan.txt
  gke_deploy_app:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Deploy App to GKE
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            cd part03/iac_kubernetes_app
            terraform init
            terraform plan -var $DOCKER_IMAGE -out=plan.txt
            terraform apply plan.txt
            export ENDPOINT="$(terraform output endpoint)"
            mkdir -p /tmp/gke/
            echo 'export ENDPOINT='${ENDPOINT} > /tmp/gke/gke-endpoint
      - persist_to_workspace:
          root: /tmp/gke
          paths:
            - "*"
  gke_destroy_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Destroy GKE Cluster
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_kubernetes_app
            terraform init
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            terraform destroy -var $DOCKER_IMAGE --auto-approve
            cd ../iac_gke_cluster/
            terraform init
            terraform destroy -var credentials=$HOME/gcloud_keys --auto-approve
workflows:
  build_test:
    jobs:
      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy
Conclusion
Congratulations! You have just completed part 3 of this series and leveled up your experience by building a new config.yml file that provisions IaC resources using Terraform. This post explained and demonstrated some of the critical elements in the config.yml file and the related concepts of the CircleCI platform.
In this series we covered many concepts and technologies such as Docker, GCP, Kubernetes, Terraform and CircleCI and included some hands-on experience with them. We also covered how to wire up your projects to use CircleCI and leverage the Terraform code to test your application in target deployment environments. This series is intended to increase your knowledge of important DevOps concepts, technologies, and how they all work together.
I encourage you to experiment on your own and with your team; changing code, adding new Terraform providers, and reconfiguring CI/CD jobs and pipelines. Challenge each other to accomplish release goals using a combination of what you learned and other ideas the team comes up with. By experimenting, you will learn more than any blog post could teach you.
Thank you for following along with this series. I hope you found it useful. Please feel free to reach out with feedback on Twitter @punkdata.
The following resources will help you expand your knowledge: