If you are looking for an Infrastructure as Code (IaC) tool, Terraform probably tops your list. In this tutorial, you will learn how to automate the deployment of changes to your infrastructure using Terraform and CircleCI workflows. The workflows will use Approval Jobs. For this project, we will deploy the infrastructure we build to Google Cloud Platform (GCP). You will be able to adapt and apply what you have learned here even if you use another cloud provider, like DigitalOcean or Amazon Web Services (AWS).

Note: I will describe what Terraform does in more detail a little later in the tutorial.

Prerequisites

To follow this tutorial, a few things are required:

  • A Google Cloud account with billing enabled, and the gcloud CLI installed
  • A GitHub account (we will use the GitHub CLI to create the repository)
  • A CircleCI account linked to your GitHub account
  • Basic familiarity with the command line

About CircleCI workflows

CircleCI workflows are a declarative way to specify how, when, and in what order a list of jobs should run in a pipeline. In this tutorial, we will use a CircleCI workflow to deploy our infrastructure changes on each commit to the master branch of our GitHub repository.

About Terraform

Terraform is a tool that allows us to code infrastructure using configuration files written in a specialized programming language called Hashicorp Configuration Language (HCL). HCL configuration files use the .tf extension.

We will use Terraform to write resources. A Terraform resource is a code representation of an infrastructure resource: a server, a firewall rule, a network address, or any other part of your infrastructure.

To keep resources in sync with what is actually running at the cloud provider, Terraform keeps a state. The state can be stored in different backends: locally, in a remote storage service, or in dedicated state management software.

Installing the Terraform CLI

I have found that the easiest way to install the Terraform CLI is to download the prebuilt binary for your platform from the official download page and move it to a folder that is in your PATH environment variable.

Note: Terraform offers extensive documentation about installation, if you would like to try a different method.

On Linux, for example, installing Terraform is as easy as running:

wget -O terraform.zip https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_linux_amd64.zip
unzip terraform.zip && rm terraform.zip
sudo mv terraform /usr/local/bin/

Confirm that it was installed correctly by running:

terraform version

It should print something similar to Terraform v0.13.5. (Terraform is updated regularly, so do not be surprised if your version is different.)

Now that Terraform is working, we can create our project.

Setting up Google Cloud

Our first task is to create a project on Google Cloud to store everything related to Terraform itself, like the state and service accounts. Using the gcloud CLI, enter:

Note: Please confirm that you are logged in (with the appropriate Google account) by running gcloud auth login which redirects to your default browser for you to log in before running the commands below.

If you are using Linux/Unix:

RANDOM_ID=$(perl -pe 'binmode(STDIN, ":bytes"); tr/a-z0-9//dc;' < /dev/urandom | head -c 8; echo)
gcloud projects create terraform-admin-$RANDOM_ID --name terraform-admin --set-as-default

If you are on Windows (and not using WSL - Windows Subsystem for Linux), run the following commands instead:

set RANDOM_ID=%random%%random%
gcloud projects create terraform-admin-%RANDOM_ID% --name terraform-admin --set-as-default

The first line creates a random ID we can use as a suffix for our project ID on Google Cloud. Using a random suffix is important because project IDs must be globally unique. The second line creates the project on Google Cloud itself. We named it terraform-admin.
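If perl is not available on your Linux/Unix system, the same kind of 8-character suffix can be generated with tr alone; a sketch:

```shell
# Generate an 8-character lowercase alphanumeric suffix from /dev/urandom.
# tr -dc deletes every byte that is not in the a-z0-9 set; LC_ALL=C avoids
# locale-related complaints about invalid byte sequences.
RANDOM_ID=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 8)
echo "$RANDOM_ID"
```

The output is random, but it will always be exactly eight characters drawn from a-z and 0-9.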

If you are using an Organization on Google Cloud and you want this project to be part of it, add the organization to the gcloud command above, like so: --organization [org-id].

The last output for this command should be something similar to:

Updated property [core/project] to [terraform-admin-0bqjep28]

To make the next steps easier we can put our full project identifier in a variable:

export TERRAFORM_PROJECT_IDENTIFIER=$(gcloud config get-value project)

Link the recently created project to your billing account by running:

gcloud beta billing projects \
	link $TERRAFORM_PROJECT_IDENTIFIER \
	--billing-account [billing-account-id]

Remember to replace [billing-account-id] with the actual ID of your billing account. If you do not know what ID to use, the easiest way to get it is to run:

gcloud beta billing accounts list

Your ID is the first column of the output. Use the ID whose OPEN status (the third column) is TRUE.
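If you prefer to capture the ID in a shell variable, you can parse that tabular output with awk. A sketch using sample text in the shape gcloud prints (the account ID below is made up); in practice you would pipe gcloud beta billing accounts list into the awk command instead:

```shell
# Sample output shaped like `gcloud beta billing accounts list`
# (the account ID here is made up for illustration):
sample='ACCOUNT_ID            NAME                OPEN
0A1B2C-3D4E5F-6A7B8C  My Billing Account  True'

# Skip the header row, then print the first column of the first row
# whose line reports True, and stop:
BILLING_ACCOUNT_ID=$(printf '%s\n' "$sample" | awk 'NR > 1 && / True/ { print $1; exit }')
echo "$BILLING_ACCOUNT_ID"
```

Matching on " True" rather than a fixed column number sidesteps billing account names that contain spaces.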

Next, enable the required APIs on the Terraform project:

gcloud services enable \
	cloudresourcemanager.googleapis.com \
	cloudbilling.googleapis.com \
	compute.googleapis.com \
	iam.googleapis.com \
	serviceusage.googleapis.com \
	container.googleapis.com

Creating a service account

Create the service account we will be using with Terraform:

gcloud iam service-accounts create terraform \
  --display-name "Terraform admin account"

It should show something like this:

Created service account [terraform].

Store the service account email in a variable:

export TERRAFORM_SERVICE_ACCOUNT_EMAIL="terraform@$TERRAFORM_PROJECT_IDENTIFIER.iam.gserviceaccount.com"

Creating a service account JSON key

After creating the service account, we need a JSON key for it. The JSON key is the file we will use to authenticate as this service account:

gcloud iam service-accounts keys create \
	--iam-account $TERRAFORM_SERVICE_ACCOUNT_EMAIL \
  ~/gcloud-terraform-admin.json

You should get an output similar to this:

created key [d4a3ef60690cb42faa1a71c5d75c5c04f6535c5a] of type [json] as [~/gcloud-terraform-admin.json] for [terraform@terraform-admin-0bqjep28.iam.gserviceaccount.com]

The output shows that the Terraform service account key was created at ~/gcloud-terraform-admin.json. Remember this path, because we will need it later.

Adding roles

Next, we need to grant roles so Terraform can store its state inside a storage bucket we will create later on. In this step, we will add the viewer and storage.admin roles to the service account inside our Terraform Google Cloud project:

gcloud projects add-iam-policy-binding $TERRAFORM_PROJECT_IDENTIFIER \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL \
  --role roles/viewer

gcloud projects add-iam-policy-binding $TERRAFORM_PROJECT_IDENTIFIER  \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL  \
  --role roles/storage.admin

It should output the updated IAM policy for the project after each command.

The Terraform state can be stored either locally or remotely. If you work as part of a team, or use multiple machines, storing it remotely works best. Because we are using Google Cloud, we are going to store the state directly in a storage bucket. To create the bucket on Google Cloud, run:

gsutil mb -p $TERRAFORM_PROJECT_IDENTIFIER gs://$TERRAFORM_PROJECT_IDENTIFIER
gsutil versioning set on gs://$TERRAFORM_PROJECT_IDENTIFIER

If you skipped the earlier step to link the project to your billing account, bucket creation will fail, and terraform init will later report an error similar to: Error: Failed to get existing workspaces: querying Cloud Storage failed: storage: bucket doesn't exist

The bucket name must be unique, so we used the project identifier for it. We are also enabling versioning in the storage bucket.

There are now only a few steps left to finish the Google Cloud configuration.

Create a separate project to host our Kubernetes cluster:

gcloud projects create circleci-k8s-cluster-$RANDOM_ID --name circleci-k8s-cluster

Note that this is the same command we used for the previous project, but with the name circleci-k8s-cluster. Another difference is that we are not setting this project as the default for the gcloud CLI.

Put the identifier in a variable just like we did before:

export CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER=circleci-k8s-cluster-$RANDOM_ID

Just like the Terraform project, the new project must be linked to your billing account. Run:

gcloud beta billing projects \
	link $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER \
	--billing-account [billing-account-id]

Remember to replace [billing-account-id] with your billing account ID.

To complete the Google Cloud setup, give the Terraform service account full access to this project. Assign it the owner role:

gcloud projects add-iam-policy-binding $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER \
  --member serviceAccount:$TERRAFORM_SERVICE_ACCOUNT_EMAIL  \
  --role roles/owner

Keeping project resources secure

With owner privileges, the Terraform service account has full access to this project and all its resources. It might be wise to be extra careful with the service account key.

If you have an Organization on Google Cloud, you can also grant similar roles to the service account at the organization level. That gives the Terraform service account access to all projects inside the organization. I also recommend granting the resourcemanager.projectCreator and billing.user roles to the service account at the organization level. With those roles, Terraform itself can create new projects on Google Cloud. Because this tutorial does not assume an Organization, we create the projects manually before using them in Terraform.

Now we can move to the Terraform project itself.

Creating our Infrastructure as Code project

For this tutorial, we will deploy a simple Kubernetes cluster to Google Cloud. The first thing we need to do is create our repository on GitHub and initialize a local repository on our machine that points to it. Name your project circleci-terraform-automated-deploy.

With the GitHub CLI this is as easy as running:

gh repo create circleci-terraform-automated-deploy

Choose the visibility you want, and answer Yes to both questions.

Go to the new repository:

cd circleci-terraform-automated-deploy

Create new files:

.
|- backend.tf
|- k8s-cluster.tf
|- main.tf
|- outputs.tf
|- variables.tf

Create each file, leaving them empty:

touch backend.tf k8s-cluster.tf main.tf outputs.tf variables.tf

Here is a description of each file:

  • backend.tf sets up the backend where our state will be stored
  • k8s-cluster.tf is for Kubernetes-related resources
  • main.tf is for resources and configurations, like the provider we are using
  • variables.tf will be empty for this tutorial
  • outputs.tf is also empty for this tutorial

Update backend.tf by adding:

terraform {
  backend "gcs" {
    bucket = "[full-project-identifier]"
    prefix = "terraform/state"
  }
}

Remember to replace [full-project-identifier] with the full identifier of the Google project Terraform is using. It is stored in the $TERRAFORM_PROJECT_IDENTIFIER shell variable. We named our bucket exactly the same as the project identifier, so it is the same value.
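Since the bucket name is already stored in the $TERRAFORM_PROJECT_IDENTIFIER variable, you can also generate backend.tf from it instead of editing by hand; a sketch:

```shell
# Write backend.tf, letting the shell expand the project identifier
# into the bucket name (the heredoc delimiter is unquoted on purpose).
cat > backend.tf <<EOF
terraform {
  backend "gcs" {
    bucket = "$TERRAFORM_PROJECT_IDENTIFIER"
    prefix = "terraform/state"
  }
}
EOF
```

Either way, the result is the same file; pick whichever you find less error-prone.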

Finally, we will export a new shell variable called GOOGLE_APPLICATION_CREDENTIALS. The google provider we are using in Terraform checks for this variable to authenticate. The new variable points to the service account key we created earlier:

export GOOGLE_APPLICATION_CREDENTIALS=~/gcloud-terraform-admin.json

Check that Terraform is able to authenticate with Google Cloud to create the initial state. In your git repository, run:

terraform init

If that fails, try running terraform init -reconfigure

You should see multiple lines of output, including:

Terraform has been successfully initialized!

Terraform has created a .terraform folder inside the current directory. We do not want to commit this to our repository, so add it to .gitignore:

echo ".terraform" >> .gitignore
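If you ever run Terraform with a local backend, it is also common (optional here, since our state lives in the GCS bucket) to ignore state files and crash logs:

```shell
# Append common Terraform ignore patterns (local state files, crash logs).
cat >> .gitignore <<'EOF'
*.tfstate
*.tfstate.backup
crash.log
EOF
```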

Now we can create our Kubernetes cluster using Terraform. First, configure the Google provider. Open main.tf and add:

provider "google" {
  project = "[circleci-project-full-identifier]"
  region  = "us-west1"
}

In the provider block, we are setting the default values for project and region. These values will be used by default when creating resources. Make sure to replace [circleci-project-full-identifier] with the actual value. In our case, this is the value of the $CIRCLECI_K8S_CLUSTER_PROJECT_IDENTIFIER shell variable.
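The provider block does not pin a provider version. If you want reproducible runs, Terraform 0.13+ lets you pin it with a required_providers block; a sketch that writes a hypothetical versions.tf file, where the version constraint is only an example you should adjust:

```shell
# Write an optional versions.tf that pins the Google provider.
# The "~> 3.46" constraint is an example, not a recommendation.
cat > versions.tf <<'EOF'
terraform {
  required_version = ">= 0.13"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 3.46"
    }
  }
}
EOF
```

With a pinned version, terraform init will always install the same provider release on every machine and in CI.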

Because we added a provider, we must reinitialize Terraform so it can download the provider plugin. Run:

terraform init

Your output should include:

Terraform has been successfully initialized!

Terraform resources are based on actual infrastructure resources, so to create a Kubernetes cluster on Google Cloud, we must create two resources:

  • google_container_cluster, the cluster itself
  • google_container_node_pool, a separately managed pool of worker nodes

The Terraform Registry documents each of these resources.

Create a new file called k8s-cluster.tf inside your repository (if it is not already created):

touch k8s-cluster.tf

Open it with your favorite editor, and write:

locals {
  k8s_services = [
    "cloudapis.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "cloudtrace.googleapis.com",
    "compute.googleapis.com",
    "container.googleapis.com",
    "dataflow.googleapis.com",
    "logging.googleapis.com",
    "monitoring.googleapis.com",
    "pubsub.googleapis.com",
    "replicapool.googleapis.com",
    "replicapoolupdater.googleapis.com",
    "servicemanagement.googleapis.com",
    "serviceusage.googleapis.com",
  ]
}

resource "google_project_service" "k8s_cluster" {
  count                      = length(local.k8s_services)
  service                    = local.k8s_services[count.index]
  disable_dependent_services = true
}

resource "google_container_cluster" "circleci_cluster" {
  name     = "circleci-cluster"
  location = "us-west1-a"

  # Kubernetes Version
  min_master_version = "1.17.13-gke.2600"

  # We can't create a cluster with no node pool defined, but we want to use
  # a separately managed node pool. For that, we create the smallest possible
  # default node pool and immediately delete it.
  # This is the recommended way to manage node pools with Terraform.
  remove_default_node_pool = true
  initial_node_count       = 1

  # Setting an empty username and password explicitly disables basic auth
  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }

  depends_on = [
    google_project_service.k8s_cluster
  ]
}

resource "google_container_node_pool" "circleci_cluster_primary" {
  name     = "primary"
  location = google_container_cluster.circleci_cluster.location
  cluster  = google_container_cluster.circleci_cluster.name

  node_count = 1

  node_config {
    machine_type = "e2-standard-2"

    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append",
    ]
  }
}

With our Terraform configuration files ready, we can check what Terraform plans to do with the changes we introduced:

terraform plan

If everything is working correctly, you should see multiple lines of output, showing the diff between what is currently in the state, and what we have in the .tf files. It should also give a summary of the changes:

Plan: 15 to add, 0 to change, 0 to destroy.

Terraform plans to add 15 resources to our state, and by doing so, create them on Google Cloud.

Apply the changes:

terraform apply

Now Terraform will show the same diff, but instead of exiting at the end, it will prompt you to apply these changes:

Plan: 15 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

Answer yes.

After the Terraform CLI exits successfully, the resources are ready on Google Cloud. Confirm this by accessing the Google Cloud Console and going to the Kubernetes Engine page.

Kubernetes Cluster created

If it was not successful, and the output quotes a Google API error claiming that the version is unsupported, look up the latest minor versions available in GKE and update the min_master_version value in k8s-cluster.tf with the default patch version of the Regular GKE release channel; then rerun terraform apply.

Hopefully, everything is working now. Next, destroy the infrastructure (CircleCI will recreate it later) by running:

terraform destroy

The output should include:

Plan: 0 to add, 0 to change, 15 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

Answer yes and the whole infrastructure will be destroyed and removed from the state.

Now is a good time to commit everything to our repository. Before we do, though, there is another Terraform command that is very helpful. At the project root, run:

terraform fmt

This command rewrites our .tf files to a canonical format.

Commit your changes:

git add --all
git commit -m "terraform initial files"
git push -u origin master

Creating our CircleCI config

To make our CircleCI configuration easier, we will use a CircleCI Orb. Orbs are reusable snippets of code that help automate repeated processes, speed up project setup, and make it easy to integrate with third-party tools. The orb we are going to use is the official Terraform orb from CircleCI: circleci/terraform.

Create a new folder called .circleci in the root of your git repository. Add an empty file called config.yml:

mkdir .circleci
touch .circleci/config.yml

Write:

version: 2.1
orbs:
  terraform: "circleci/terraform@1.1.0"
workflows:
  deploy_infrastructure:
    jobs:
      - terraform/fmt:
          checkout: true
          context: terraform
      - terraform/validate:
          checkout: true
          context: terraform
          requires:
            - terraform/fmt
      - terraform/plan:
          checkout: true
          context: terraform
          persist-workspace: true
          requires:
            - terraform/validate
      - terraform/apply:
          attach-workspace: true
          context: terraform
          filters:
            branches:
              only: master
          requires:
            - terraform/plan

We are using multiple jobs from the circleci/terraform orb to do the heavy lifting for us. The deploy_infrastructure workflow will do everything.

For every commit, the workflow verifies that our .tf files are formatted correctly (the terraform/fmt job) and that they are valid (the terraform/validate job). It also plans the changes we made to our infrastructure (the terraform/plan job).

If the commit is in the master branch, we will apply the changes that were planned in the terraform/plan job.

This is already a lot of automation, but we probably do not want to apply changes to our infrastructure without direct approval from someone.

Requiring approval

To require approval before applying the changes, we need to make changes to the .circleci/config.yml file:

version: 2.1
orbs:
  terraform: "circleci/terraform@1.1.0"
workflows:
  deploy_infrastructure:
    jobs:
      - terraform/fmt:
          checkout: true
          context: terraform
      - terraform/validate:
          checkout: true
          context: terraform
          requires:
            - terraform/fmt
      - terraform/plan:
          checkout: true
          context: terraform
          persist-workspace: true
          requires:
            - terraform/validate
      - hold-apply:
          type: approval
          requires:
            - terraform/plan
      - terraform/apply:
          attach-workspace: true
          context: terraform
          filters:
            branches:
              only: master
          requires:
            - hold-apply

We introduced a new approval job to our workflow. This job will hold our workflow until someone approves or cancels the request. Now our terraform/apply job requires that the hold-apply job is approved before it starts.

After it is approved, the terraform/apply job will start running.

Commit all the changes so far to our repository:

git add --all
git commit -m "add circleci config"
git push

Setting up the project in CircleCI

After you have signed into your CircleCI account, go to the Organization Settings page by clicking on it in the sidebar. Now we can add the context we are using in our configuration file, named terraform. Contexts are a way to securely share environment variables across multiple projects/jobs in CircleCI.

On the Contexts page, click Create Context. Name the context terraform and click Create Context again. Select the new context from the list to review its details.

Click Add Environment Variable. Name the environment variable GOOGLE_CREDENTIALS. The value of the variable is the full content of the .json key for our terraform service account. We saved that file to ~/gcloud-terraform-admin.json. To get the contents, run:

cat ~/gcloud-terraform-admin.json

After setting the value of the environment variable to the contents of the JSON key file, click Add Environment Variable.

Now that we have finished creating our context, go back to the projects page in CircleCI (by clicking on the X icon on the top right). Search for the GitHub repository you created for your infrastructure. Click Set Up Project.

Setting Up Project on CircleCI

Click Use Existing Config.

Config file Wizard

And then Start Building.

Config file Wizard confirmation modal

This will run the first build of our repository.

Repository first build running

If everything works correctly, the UI will show a different icon when it gets to the hold-apply job.

First build on hold

Click the terraform/plan job to review the output of the terraform plan step. If it is correct, go back to the workflow page and click the hold-apply job. You can approve or cancel it.

Workflow job approval dialog

Approve it to start the terraform/apply.

Workflow running after approval

This job may take a while to finish. Once it ends, the infrastructure should be available in the Google Cloud Console and our build will be green.

Success!

When you do not need the infrastructure anymore, you can run (locally):

terraform destroy

Conclusion

Using infrastructure as code with Terraform improves not just the speed at which new changes can be applied, but also the tracking and ownership of those changes. Deploying it all automatically with CircleCI makes your infrastructure even more reliable while supporting an efficient software development lifecycle.