In this series, I’ll demonstrate how to get started with infrastructure as code (IaC). My goal is to help developers build a strong understanding of this concept through tutorials and code examples. Here are the topics this series will cover:

In this post, I will demonstrate how to create continuous integration and deployment (CI/CD) pipelines that automate the Terraform IaC deployments that were covered in part 1 and part 2 of this series. Here is a quick list of things we’ll accomplish in this post:

Before we begin, you must complete all of the actions in the prerequisites section of part 1 of this series. Once you have completed the prerequisites, we’ll start by learning about the CircleCI config.yml file included in this code repo.

Continuous integration and continuous deployment

CI/CD pipelines help developers and teams automate their build and test processes, creating valuable feedback loops that provide near real-time status of their software development processes. CI/CD automation also provides consistent process execution and accurate results, which helps optimize those processes and contributes to velocity gains. Streamlining development practices with CI/CD is becoming common practice among teams, and understanding how to integrate and automate repeated tasks is critical to building valuable CI/CD pipelines.

In part 1 and part 2, we used Terraform to create a new GKE cluster and related Kubernetes objects that deploy, execute, and serve an application. These Terraform commands were executed manually from our terminal. That is fine when you’re developing the Terraform code or modifying it, but we want to automate the execution of those commands. There are many ways to automate them, but we’re going to focus on how to do it from within CI/CD pipelines.

CircleCI pipelines

CircleCI pipelines are the full set of processes you run when you trigger work on your projects. Pipelines encompass your workflows, which in turn coordinate your jobs. This is all defined in your project configuration file, and in the next sections we’re going to define a CI/CD pipeline to build our project.

Setting up the project on CircleCI

Before we start building a config.yml file for this project, we need to add the project to CircleCI. You can use the setting up CircleCI guide to achieve this. Once you’ve completed the Setting up CircleCI section, stop there so we can configure project-level environment variables.

Project-level environment variables

Some jobs in this pipeline will need access to authentication credentials to execute commands on the target service. In this section, we’ll define the required credentials for those jobs and demonstrate how to input them into CircleCI as project-level environment variables. Below is a list of environment variable names we’ll need to create on the CircleCI dashboard. Enter each variable’s name in the Name field and the respective credential in the Value field. Here is a list of credentials that our pipeline will need and their values:

Once all of the environment variables above are in place, we can begin building our pipeline in the config.yml file.

The CircleCI config.yml

The config.yml file is where you define the CI/CD-related jobs to be processed and executed. In this section, we’re going to define the jobs and workflows for our pipeline. Open the .circleci/config.yml file in an editor, delete its contents, and paste the following:

version: 2.1
jobs:

The version: key specifies the platform features to use when running this pipeline. The jobs: key represents the list of individual jobs that we’ll define for this pipeline. Let’s create the jobs that we’ll be executing in our pipeline.

Job - run_tests

I encourage you to familiarize yourself with the special keys, capabilities, and features in this CircleCI reference doc which should help ease your experience with the platform. Below is a general outline and explanation for each of the keys in the job we’re about to discuss.

  • docker: specifies the runtime our job will execute in
    • image: specifies the Docker container image to use for this job
  • steps: a list or collection of executable commands that are run during a job
    • checkout: a special step used to check out source code to the configured path
    • run: used to invoke all command-line programs
      • name: the title of the step, shown in the CircleCI UI
      • command: the command to run via the shell
    • store_test_results: a special step used to upload and store test results for a build
      • path: the path (absolute, or relative to your working_directory) to the directory containing subdirectories of JUnit XML or Cucumber JSON test metadata files
    • store_artifacts: a step to store artifacts (for example logs, binaries, etc.) for availability in the web app or through the API
      • path: the path to the directory in the primary container used for saving job artifacts

A valuable aspect of CI/CD is the ability to run automated tests on newly written code. It helps identify known and unknown bugs by executing tests every time the code is modified. Let’s define a new job in our config.yml file. Paste the following into the file, and then we’ll break down what’s going on in the code block:

  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            npm install --save
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
  • The docker: and image: keys specify the executor and the Docker image we’ll be using in this job
  • The npm install --save command installs the application dependencies used in the app
  • The Run Unit Tests step executes the automated tests and saves the results to a local directory called test-results/
  • store_test_results: is a special step that saves the test-results/ directory and pins the results to the build in CircleCI
  • store_artifacts: stores the same directory as a build artifact, available in the web app or through the API

This job serves as the unit testing function in our pipeline. It helps identify errors in the code. If any test fails, the entire pipeline build fails, prompting the developers to fix the errors. The goal is for all the tests and jobs to pass. Next, we’ll create a job that builds a Docker image and pushes it to the Docker Hub registry.

Job - build_docker_image

In part 2 of this series, we manually created a Docker image and pushed it to the Docker Hub registry. In this job, we’ll achieve the same results using automation. Below is the build_docker_image: job. Append this code block to the config.yml file, then we’ll break down the elements in it:

  build_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build Docker image
          command: |
            export TAG=0.2.<< pipeline.number >>
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME            
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME

The build_docker_image job is pretty straightforward, and you’ve already encountered most of the CircleCI YAML keys shown, so we’ll jump right into the name: Build Docker image command block.

The export TAG=0.2.<< pipeline.number >> line defines a local environment variable that uses the pipeline.number value to associate the Docker tag with the pipeline number being executed. The export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME line defines the variable we’ll use to name the Docker image.
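To see how those two exports combine into the final image reference, here is a minimal sketch you can run locally. The values below are hypothetical stand-ins: CircleCI substitutes << pipeline.number >> at runtime and sets $CIRCLE_PROJECT_REPONAME automatically, while $DOCKER_LOGIN comes from our project-level environment variables.

```shell
# Hypothetical stand-ins for values CircleCI provides at runtime
PIPELINE_NUMBER=42                  # what << pipeline.number >> expands to
CIRCLE_PROJECT_REPONAME=iac-app     # set automatically by CircleCI
DOCKER_LOGIN=mydockeruser           # project-level environment variable

TAG="0.2.${PIPELINE_NUMBER}"
IMAGE_NAME="$CIRCLE_PROJECT_REPONAME"

# The fully qualified, versioned image reference used by docker build/push
FULL_IMAGE="${DOCKER_LOGIN}/${IMAGE_NAME}:${TAG}"
echo "$FULL_IMAGE"   # mydockeruser/iac-app:0.2.42
```

Because the tag tracks the pipeline number, every pipeline run produces a uniquely versioned image alongside the floating default tag.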

docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG . executes the Docker build command using a combination of the project level variables that we set earlier and the local environment variables we specified above.

echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin authenticates to Docker Hub with our credentials, and finally docker push $DOCKER_LOGIN/$IMAGE_NAME uploads the new Docker image to the Docker Hub registry.

This should all look and feel familiar because these are the very same commands you ran manually in part 2, with the addition of the environment variable naming bits. Next, we’ll build a job that executes Terraform code to build a GKE cluster.

Job - gke_create_cluster

In this job, we’ll automate the execution of the Terraform code found in the part03/iac_gke_cluster/ directory. Append the code block below to the config.yml file, then save it:

  gke_create_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Create GKE Cluster
          command: |
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_gke_cluster/
            terraform init
            terraform plan -var credentials=$HOME/gcloud_keys -out=plan.txt
            terraform apply plan.txt

Let’s discuss the important elements of this job’s code block. One thing to note is the executor Docker image, image: ariv3ra/terraform-gcp:latest. This is an image I’ve built that has both the Google SDK and the Terraform CLI installed. Otherwise, we’d need to add installation steps to this job to install and configure those tools on every run. Also notice the environment: CLOUDSDK_CORE_PROJECT: cicd-workshops keys, which set an environment variable value needed for the gcloud CLI commands we’ll be executing later.

echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc decodes the $TF_CLOUD_TOKEN value, creating the .terraformrc file required by Terraform to access the state data in the respective Terraform Cloud workspace.

echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys decodes the $GOOGLE_CLOUD_KEYS value, creating the gcloud_keys file required by the gcloud CLI to access GCP.
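If you’re wondering where those base64-encoded environment variable values come from, this is roughly how you would prepare one before pasting it into the CircleCI dashboard. The file contents below are placeholders, not real credentials; the decode step is the exact inverse the job performs.

```shell
# Create a sample .terraformrc-style file (placeholder token, not a real one)
printf 'credentials "app.terraform.io" {\n  token = "example-token"\n}\n' > /tmp/terraformrc.sample

# Encode it to a single line suitable for a CircleCI env var value
TF_CLOUD_TOKEN=$(base64 < /tmp/terraformrc.sample | tr -d '\n')

# The pipeline job reverses this with `base64 -d`, recovering the original file
echo "$TF_CLOUD_TOKEN" | base64 -d > /tmp/terraformrc.decoded
```

Encoding the file keeps multi-line credentials intact inside a single-line environment variable, which is why the jobs decode them back to files at runtime.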

gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys authorizes access to GCP using the gcloud_keys file we decoded earlier.

The rest of the commands are Terraform CLI commands with -var parameters that override the default values of the corresponding variables defined in the Terraform variables.tf file. Once terraform apply plan.txt executes, this job will create a new GKE cluster.
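To illustrate what -var is overriding, here is a hypothetical excerpt of what a variable declaration in the project’s variables.tf might look like (the exact names and defaults belong to the repo, so treat this as a sketch):

```hcl
# Hypothetical excerpt from part03/iac_gke_cluster/variables.tf
variable "credentials" {
  description = "Path to the GCP service account key file"
  type        = string
  default     = "gcloud_keys.json"
}
```

Passing -var credentials=$HOME/gcloud_keys on the command line replaces that default with the path of the key file the job just decoded.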

Job - gke_deploy_app

In this job, we’ll automate the execution of the Terraform code found in the part03/iac_kubernetes_app/ directory. Append the code block below to the config.yml file then save it.

  gke_deploy_app:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Deploy App to GKE
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            cd part03/iac_kubernetes_app
            terraform init
            terraform plan -var $DOCKER_IMAGE -out=plan.txt
            terraform apply plan.txt
            export ENDPOINT="$(terraform output endpoint)"
            mkdir -p /tmp/gke/ && echo 'export ENDPOINT='${ENDPOINT} > /tmp/gke/gke-endpoint
      - persist_to_workspace:
          root: /tmp/gke
          paths:
            - "*"

Let’s discuss the important elements of this job code block and any new elements we haven’t previously discussed.

export CLUSTER_NAME="cicd-workshops" defines a variable that holds the name of the GKE cluster we’ll be deploying to.

gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d" is a command that retrieves the kubeconfig data from the GKE cluster we created in the previous job.

terraform plan -var $DOCKER_IMAGE -out=plan.txt overrides the default value of the corresponding variable defined in the Terraform variables.tf file and writes the resulting plan to plan.txt.

export ENDPOINT="$(terraform output endpoint)" assigns the endpoint output value generated by Terraform to a local environment variable, which is then saved to a file and persisted to a CircleCI workspace. It can later be retrieved from an attached CircleCI workspace and used in follow-up jobs if needed.
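Here is a minimal sketch of that persist-and-retrieve pattern. The IP address is a hypothetical stand-in for what terraform output endpoint would return, and the attach_workspace step in a downstream job is assumed to mount the workspace at /tmp/gke:

```shell
# --- In gke_deploy_app: write the endpoint file, as the job does ---
mkdir -p /tmp/gke
ENDPOINT="35.190.100.10"   # hypothetical value from `terraform output endpoint`
echo 'export ENDPOINT='${ENDPOINT} > /tmp/gke/gke-endpoint

# --- In a downstream job, after attach_workspace at /tmp/gke ---
unset ENDPOINT                 # simulate a fresh shell in a new job
. /tmp/gke/gke-endpoint        # source the file to restore the variable
echo "$ENDPOINT"
```

Because each job runs in a fresh executor, writing the value to a file and persisting it through a workspace is how state crosses job boundaries.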

Job - gke_destroy_cluster

This is the last job we’ll build for this pipeline, and it destroys all of the resources and infrastructure that we built in previous CI/CD jobs. A job that executes destroy commands is great for standing up ephemeral resources that can be used for various forms of testing, such as smoke testing, integration testing, and performance testing, and then tearing those constructs down when they’re no longer required.

In this job we’ll automate the execution of the Terraform code found in the part03/iac_kubernetes_app/ directory. Append the code block below to the config.yml file then save it.

  gke_destroy_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Destroy GKE Cluster
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"            
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_kubernetes_app
            terraform init
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"            
            terraform destroy -var $DOCKER_IMAGE --auto-approve
            cd ../iac_gke_cluster/
            terraform init
            terraform destroy -var credentials=$HOME/gcloud_keys --auto-approve

Let’s discuss the important elements of this job code block and any new elements we haven’t previously discussed.

The two terraform destroy --auto-approve commands tear down all of the resources created with the Terraform code in the part03/iac_kubernetes_app/ and part03/iac_gke_cluster/ directories, respectively.

Now that we’ve defined all of the jobs in our pipeline, we’re ready to create the CircleCI workflows that will orchestrate how the jobs are executed and processed within the pipeline.

CircleCI workflows

All of the jobs in this pipeline have been created, so now we need to create the workflows that define how the jobs will be executed and processed. Think of a workflow as an ordered list of jobs where you can specify when and how to execute them. Below is the workflow block we’ll use in our pipeline. Append the following workflow code block to the config.yml file:

workflows:
  build_test:
    jobs:
      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy

The above code block represents the workflows definition of our pipeline. Let’s discuss what’s going on in this block. The workflows: key specifies a workflow element, and build_test: is the name/identifier of this workflow.

The jobs: key lists the jobs, defined earlier in the config.yml file, that this pipeline will execute:

      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy

The run_tests, build_docker_image, and gke_create_cluster workflow jobs above all run concurrently, unlike the gke_deploy_app: item, which has a requires: key. Jobs run in parallel by default, so you must explicitly declare any dependencies by job name using a requires: key listing the jobs that must complete successfully before the specified job kicks off. Think of requires: keys as building dependencies on the success of other jobs. This enables you to segment and control the execution of your pipeline.

The approve-destroy: item specifies a job with a manual approval step. It requires human intervention: someone must approve the execution of the next job in the workflow’s jobs list. The next job, gke_destroy_cluster:, is dependent on the approve-destroy: job completing before it executes and destroys all the resources created by previously executed jobs in the pipeline.

The complete config.yml file

Below is an example of a complete config.yml based on this post. It can also be found in the project code repo in the .circleci/ directory.

version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            npm install --save
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
      - store_test_results:
          path: test-results
      - store_artifacts:
          path: test-results
  build_docker_image:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build Docker image
          command: |
            export TAG=0.2.<< pipeline.number >>
            export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME
  gke_create_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Create GKE Cluster
          command: |
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_gke_cluster/
            terraform init
            terraform plan -var credentials=$HOME/gcloud_keys -out=plan.txt
            terraform apply plan.txt
  gke_deploy_app:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Deploy App to GKE
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"
            cd part03/iac_kubernetes_app
            terraform init
            terraform plan -var $DOCKER_IMAGE -out=plan.txt
            terraform apply plan.txt
            export ENDPOINT="$(terraform output endpoint)"
            mkdir -p /tmp/gke/
            echo 'export ENDPOINT='${ENDPOINT} > /tmp/gke/gke-endpoint
      - persist_to_workspace:
          root: /tmp/gke
          paths:
            - "*"
  gke_destroy_cluster:
    docker:
      - image: ariv3ra/terraform-gcp:latest
    environment:
      CLOUDSDK_CORE_PROJECT: cicd-workshops
    steps:
      - checkout
      - run:
          name: Destroy GKE Cluster
          command: |
            export CLUSTER_NAME="cicd-workshops"
            export TAG=0.2.<< pipeline.number >>
            export DOCKER_IMAGE="docker-image=${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}:$TAG"            
            echo $TF_CLOUD_TOKEN | base64 -d > $HOME/.terraformrc
            echo $GOOGLE_CLOUD_KEYS | base64 -d > $HOME/gcloud_keys
            gcloud auth activate-service-account --key-file ${HOME}/gcloud_keys
            cd part03/iac_kubernetes_app
            terraform init
            gcloud container clusters get-credentials $CLUSTER_NAME --zone="us-east1-d"            
            terraform destroy -var $DOCKER_IMAGE --auto-approve
            cd ../iac_gke_cluster/
            terraform init
            terraform destroy -var credentials=$HOME/gcloud_keys --auto-approve
workflows:
  build_test:
    jobs:
      - run_tests
      - build_docker_image
      - gke_create_cluster
      - gke_deploy_app:
          requires:
            - run_tests
            - build_docker_image
            - gke_create_cluster
      - approve-destroy:
          type: approval
          requires:
            - gke_create_cluster
            - gke_deploy_app
      - gke_destroy_cluster:
          requires:
            - approve-destroy

Summary

Congratulations! You’ve just completed part 3 of this series and leveled up your experience by building a new config.yml file that executes IaC resources using Terraform. This post explained and demonstrated some critical elements in the config.yml file and the internal concepts related to the CircleCI platform.

In this series, we covered many concepts and technologies, including Docker, GCP, Kubernetes, Terraform, and CircleCI, and included some hands-on experience with them. We also covered how to wire up your projects to use CircleCI and leverage Terraform code to test your application in target deployment environments. This series is intended to ease your learning of important DevOps concepts and technologies, and how they all tie together.

I encourage you to experiment on your own by changing code, adding new Terraform providers, and reconfiguring CI/CD jobs and pipelines. Challenge yourself to accomplish release goals using a combination of what you learned and ideas of your own. By experimenting, you will learn more than any blog post could teach you.

Thank you for following along with this series. I hope you found it useful. Please feel free to reach out with feedback on Twitter @punkdata.

The following resources will help you expand your knowledge from here: