Continuous delivery enables developers, teams, and organizations to effortlessly update code and release new features to their customers. This is possible due to recent culture shifts within teams and organizations as they begin to embrace CI/CD and DevOps practices. Implementing CI/CD and DevOps practices enables these teams to better leverage modern tools to build, test, and deploy their software with high confidence and consistency.

Infrastructure as code (IaC) enables teams to easily manage their cloud resources by declaring and defining those resources in code, then deploying and maintaining them through that same code. In this post, I’ll demonstrate how to implement IaC within a CI/CD pipeline. The pipeline integrates our partner Pulumi’s orb, which is used to define and deploy an application to a Google Kubernetes Engine (GKE) cluster. Pulumi provides the IaC component of this post.

Assumptions

This post assumes that you have an existing Git repository hosted on GitHub. The code examples provided here will need to live in a directory within the project’s repo.

Technologies used

This post also assumes a basic understanding of the following:

  • Python
  • Docker
  • Kubernetes
  • Google Cloud Platform
  • CircleCI

Pulumi setup

Pulumi enables developers to write code in their favorite language (e.g., JavaScript, Python, Go, etc.) and deploy cloud apps and infrastructure easily, without the need to learn specialized DSLs or YAML templating solutions. The use of first-class languages enables abstractions and reuse. Pulumi provides high-level cloud packages in addition to low-level resource definitions across all the major cloud service providers (e.g., AWS, Azure, GCP, etc.) so that you can master one system to deliver to all of them.

In an application repository of your choice, create a new directory where your Pulumi app will live.

mkdir -p pulumi/gke
cd pulumi/gke

Next, sign up for Pulumi and Google Cloud accounts, if you haven’t done so previously.
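
With both accounts in place, create the Pulumi project from inside the pulumi/gke directory. Here is a minimal sketch, assuming the Pulumi CLI is installed; the template name and interactive prompts may vary slightly between Pulumi versions, and when prompted for a stack name, enter k8s:

pulumi login
pulumi new gcp-python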

After creating a new Pulumi project, you now have three files in the pulumi/gke directory:

  • Pulumi.yaml - Specifies metadata about your project.
  • Pulumi.<stack name>.yaml - Contains configuration values for the stack we initialized. <stack name> should be replaced with the stack name defined when your new Pulumi project was created. For the purposes of this tutorial, the stack is named k8s, so this file will be Pulumi.k8s.yaml.
  • __main__.py - The Pulumi program that defines our stack resources. This is where the IaC magic happens.

Edit the Pulumi.<stack name>.yaml file and paste the following into it:

config:
  gcp:credentials: ./cicd_demo_gcp_creds.json
  gcp:project: cicd-workshops
  gcp:region: us-east1
  gcp:zone: us-east1-d
  gke:name: k8s
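
If you prefer not to edit the file by hand, the same values can be set with the Pulumi CLI, which writes them into the active stack's configuration file for you:

pulumi config set gcp:credentials ./cicd_demo_gcp_creds.json
pulumi config set gcp:project cicd-workshops
pulumi config set gcp:region us-east1
pulumi config set gcp:zone us-east1-d
pulumi config set gke:name k8s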

Edit the __main__.py file and replace its contents with:

import os
import pulumi
import pulumi_kubernetes
from pulumi import ResourceOptions
from pulumi_kubernetes.apps.v1 import Deployment
from pulumi_kubernetes.core.v1 import Namespace, Pod, Service
from pulumi_gcp import container

conf = pulumi.Config('gke')
gcp_conf = pulumi.Config('gcp')

stack_name = conf.require('name')
gcp_project = gcp_conf.require('project')
gcp_zone = gcp_conf.require('zone')

app_name = 'cicd-app'
app_label = {'appClass':app_name}
cluster_name = app_name

# Tag the Docker image with the CircleCI commit SHA when running in CI; fall back to 'latest' otherwise
image_tag = os.environ.get('CIRCLE_SHA1', 'latest')

docker_image = 'ariv3ra/orb-pulumi-gcp:{0}'.format(image_tag)

machine_type = 'g1-small'

# Provision a three-node GKE cluster
cluster = container.Cluster(
    cluster_name,
    initial_node_count=3,
    min_master_version='latest',
    node_version='latest',
    node_config={
        'machine_type': machine_type,
        'oauth_scopes': [
            "https://www.googleapis.com/auth/compute",
            "https://www.googleapis.com/auth/devstorage.read_only",
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    }
)

# Generate a kubeconfig for the new GKE cluster from its endpoint and credentials
def generate_k8_config(master_auth, endpoint, context):
    config = '''apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {masterAuth}
    server: https://{endpoint}
  name: {context}
contexts:
- context:
    cluster: {context}
    user: {context}
  name: {context}
current-context: {context}
kind: Config
preferences: {prefs}
users:
- name: {context}
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: gcloud
        expiry-key: '{expToken}'
        token-key: '{tokenKey}'
      name: gcp
    '''.format(masterAuth=master_auth, context=context, endpoint=endpoint, 
            prefs='{}', expToken = '{.credential.token_expiry}', tokenKey='{.credential.access_token}')

    return config

gke_masterAuth = cluster.master_auth['clusterCaCertificate']
gke_endpoint = cluster.endpoint
gke_context = gcp_project+'_'+gcp_zone+'_'+cluster_name

k8s_config = pulumi.Output.all(gke_masterAuth,gke_endpoint,gke_context).apply(lambda args: generate_k8_config(*args))

# Create a Kubernetes provider that targets the new GKE cluster, plus a namespace for the app
cluster_provider = pulumi_kubernetes.Provider(cluster_name, kubeconfig=k8s_config)
ns = Namespace(cluster_name, __opts__=ResourceOptions(provider=cluster_provider))

# Deploy three replicas of the application's Docker container into the cluster
gke_deployment = Deployment(
    app_name,
    metadata={
        'namespace': ns,
        'labels': app_label,
    },
    spec={
        'replicas': 3,
        'selector':{'matchLabels': app_label},
        'template':{
            'metadata':{'labels': app_label},
            'spec':{
                'containers':[
                    {
                        'name': app_name,
                        'image': docker_image,
                        'ports':[{'name': 'port-5000', 'container_port': 5000}]
                    }
                ]
            }
        }
    },
    __opts__=ResourceOptions(provider=cluster_provider)
)

deploy_name = gke_deployment

# Expose the deployment to the internet through a load-balanced service
gke_service = Service(
    app_name,
    metadata={
        'namespace': ns,
        'labels': app_label,
    },
    spec={
        'type': "LoadBalancer",
        'ports': [{'port': 80, 'target_port': 5000}],
        'selector': app_label,
    },
    __opts__=ResourceOptions(provider=cluster_provider)
)

pulumi.export("kubeconfig", k8s_config)
pulumi.export("app_endpoint_ip", gke_service.status['load_balancer']['ingress'][0]['ip'])

The content of the __main__.py file specifies the GKE cluster and infrastructure that you will deploy from your pipeline. This Pulumi application creates a three-node Kubernetes cluster that runs the application in pods via Docker containers. The code also creates a load balancer (a Kubernetes Service of type LoadBalancer) that routes traffic evenly to the active Docker containers on the various compute nodes. If you would like more information on Pulumi Python programs, the Pulumi documentation provides detailed explanations.
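
Before wiring this program into the pipeline, you can sanity-check it locally. This is a rough sketch, assuming the Python dependencies are installed (pip install pulumi pulumi-gcp pulumi-kubernetes) and the GCP credentials file from the next section is already in place:

pulumi stack select k8s
pulumi preview

The pulumi preview command shows the resources Pulumi would create without actually provisioning anything on GCP.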

Google Cloud setup

In this section, you’ll create and generate the necessary GCP credentials. These credentials will give our CI/CD pipeline and the Pulumi code access to execute commands on GCP.

Create a GCP project

A default project is created automatically for new accounts. I recommend that you create a new project and keep it separate so that it is easy to tear down later. After creating it, be sure to copy down the project ID, as it is different from the project name.

How to find your project id.
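
If you would rather use the command line, the gcloud CLI can create and select the project for you. A quick sketch; note that project IDs are globally unique, so the cicd-workshops ID used in this post may not be available and you may need to pick your own:

gcloud projects create cicd-workshops --name="cicd-workshops"
gcloud config set project cicd-workshops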

Getting project credentials

Next, set up a service account key which Pulumi will use to create and manage resources in your GCP project. Go to the Create service account key page. Select the default service account or create a new one, select JSON as the key type, and click Create. Save this .json file in the pulumi/gke folder.
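
The same key can also be created with the gcloud CLI. This is only a sketch: the service account name pulumi-deployer is a placeholder I made up for this example, and the broad roles/editor grant is used just to keep the demo simple:

gcloud iam service-accounts create pulumi-deployer
gcloud projects add-iam-policy-binding cicd-workshops \
  --member="serviceAccount:pulumi-deployer@cicd-workshops.iam.gserviceaccount.com" \
  --role="roles/editor"
gcloud iam service-accounts keys create cicd_demo_gcp_creds.json \
  --iam-account="pulumi-deployer@cicd-workshops.iam.gserviceaccount.com"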

Important security note: Rename the file to cicd_demo_gcp_creds.json so that it matches the filename referenced in the Pulumi configuration, and add that .json filename to this project’s .gitignore file to protect your Google Cloud credentials from being published and exposed in a public GitHub repository. You must be very cautious with the data in this file because, if it is exposed, anyone with this information can log into your account, create resources, and run up charges on your Google Cloud account.
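
For example, a single line appended to the repository’s .gitignore keeps the key file out of version control (assuming the filename used in this post):

echo 'cicd_demo_gcp_creds.json' >> .gitignore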

CircleCI setup

Now, we need to configure CircleCI and our pipeline configuration file to integrate Pulumi into our CI/CD pipeline.

Encode the Google Service Account file

The service account file must be encoded into a base64 value in order to store it as an environment variable in CircleCI. Run the following command in a terminal to encode the file and print the result:

base64 cicd_demo_gcp_creds.json

The results of this command will look similar to this:

ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsCiAgInByb2plY3RfaWQiOiAiY2ljZC13b3Jrc2hvcHMiLAogICJwcml2YXRlX2tleV9pZCI6ICJiYTFmZDAwOThkNTE1ZTE0NzE3ZjE4NTVlOTY1NmViMTUwNDM4YTQ4IiwKICAicHJpdmF0ZV9rZXkiOiAiLS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUURjT1JuRXFla3F4WUlTXG5UcHFlYkxUbWdWT3VzQkY5NTE1YkhmYWNCVlcyZ2lYWjNQeFFBMFlhK2RrYjdOTFRXV1ZoRDZzcFFwWDBxY2l6XG5GdjFZekRJbXkxMCtHYnlUNWFNV2RjTWw3ZlI2TmhZay9FeXEwNlc3U0FhV0ZnSlJkZjU4U0xWcC8yS1pBbjZ6XG5BTVdHZjM5RWxSNlhDaENmZUNNWXorQmlZd29ya3Nob3BzLmlhbS5nc2VydmljZWFjY291bnQuY29tIgp9Cg==

Copy the result to your clipboard because we’ll be using it in the next section.

Create project variables

In order for this CI/CD pipeline to execute commands on GCP, we must configure project level environment variables within CircleCI.

Create the following project level environment variables using the CircleCI dashboard:

  • $DOCKER_LOGIN = your Docker Hub username
  • $DOCKER_PWD = your Docker Hub password
  • $GOOGLE_CLOUD_KEYS = the base64 encoded result from the previous section
  • $PULUMI_ACCESS_TOKEN = generate an Access Token from the Pulumi dashboard

CI/CD pipeline with Pulumi integration

You now have all the elements required to integrate the Pulumi app into our CircleCI config.yml file. Edit your config.yml and paste the following configuration into it. The contents of this config.yml are specific to a sample Python project that I use in demonstrations and talks, so the config.yml for your project will be different. To see the full example project, view the repo on GitHub here. I’ll guide you through and explain the important Pulumi integration bits of this example config so that you have a clear understanding of what’s going on.

version: 2.1
orbs:
  pulumi: pulumi/pulumi@1.0.1
jobs:
  build_test:
    docker:
      - image: circleci/python:3.7.2
        environment:
          PIPENV_VENV_IN_PROJECT: 'true'
    steps:
      - checkout
      - run:
          name: Install Python Dependencies
          command: |
            pipenv install --skip-lock
      - run:
          name: Run Tests
          command: |
            pipenv run pytest
  build_push_image:
    docker:
      - image: circleci/python:3.7.2
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build and push Docker image
          command: |       
            pipenv install --skip-lock
            pipenv run pyinstaller -F hello_world.py
            echo 'export TAG=${CIRCLE_SHA1}' >> $BASH_ENV
            echo 'export IMAGE_NAME=orb-pulumi-gcp' >> $BASH_ENV
            source $BASH_ENV
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME
  deploy_to_gcp:
    docker:
      - image: circleci/python:3.7.2
        environment:
          CLOUDSDK_PYTHON: '/usr/bin/python2.7'
          GOOGLE_SDK_PATH: '~/google-cloud-sdk/'
    steps:
      - checkout
      - pulumi/login:
          access-token: ${PULUMI_ACCESS_TOKEN}
      - run:
          name: Install dependencies
          command: |
            cd ~/
            sudo pip install --upgrade pip==18.0 && pip install --user pulumi pulumi-gcp pulumi-kubernetes
            curl -o gcp-cli.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.tar.gz
            tar -xzvf gcp-cli.tar.gz
            echo ${GOOGLE_CLOUD_KEYS} | base64 --decode --ignore-garbage > ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
            ./google-cloud-sdk/install.sh  --quiet 
            echo 'export PATH=$PATH:~/google-cloud-sdk/bin' >> $BASH_ENV
            source $BASH_ENV
            gcloud auth activate-service-account --key-file ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
      - pulumi/update:
          stack: k8s
          working_directory: ${HOME}/project/pulumi/gcp/gke/
workflows:
  build_test_deploy:
    jobs:
      - build_test
      - build_push_image:
          requires:
            - build_test
      - deploy_to_gcp:
          requires:
            - build_push_image

The following code snippet specifies that the Pulumi orb will be utilized in this pipeline.

version: 2.1
orbs:
  pulumi: pulumi/pulumi@1.0.1

The jobs: key in the example pipeline has three individual jobs defined:

  • build_test: This job runs application unit tests

  • build_push_image: This job builds a new Docker image based on a Dockerfile which usually co-exists in the project repo

  • deploy_to_gcp: This job deploys to a GKE cluster via the Pulumi orb

I’ll be focusing on the build_push_image and the deploy_to_gcp jobs.

build_push_image:

  build_push_image:
    docker:
      - image: circleci/python:3.7.2
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build and push Docker image
          command: |       
            pipenv install --skip-lock
            pipenv run pyinstaller -F hello_world.py
            echo 'export TAG=${CIRCLE_SHA1}' >> $BASH_ENV
            echo 'export IMAGE_NAME=orb-pulumi-gcp' >> $BASH_ENV
            source $BASH_ENV
            docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
            echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
            docker push $DOCKER_LOGIN/$IMAGE_NAME

After the application is tested and passes, the build_push_image job packages the application into a single executable binary. It then kicks off the docker build command, which builds a new Docker image based on a Dockerfile that lives in the project’s repo. This job also uses existing environment variables and defines some new ones that are used to specify unique Docker image names. Below is the Dockerfile for this project:

FROM python:3.7.2
RUN mkdir /opt/hello_world/
WORKDIR /opt/hello_world/
COPY dist/hello_world /opt/hello_world/
EXPOSE 80
CMD [ "./hello_world" ]

The docker push command uploads the newly built Docker image to Docker Hub for storage and future retrieval.

deploy_to_gcp:

  deploy_to_gcp:
    docker:
      - image: circleci/python:3.7.2
        environment:
          CLOUDSDK_PYTHON: '/usr/bin/python2.7'
          GOOGLE_SDK_PATH: '~/google-cloud-sdk/'
    steps:
      - checkout
      - pulumi/login:
          access-token: ${PULUMI_ACCESS_TOKEN}
      - run:
          name: Install dependencies
          command: |
            cd ~/
            sudo pip install --upgrade pip==18.0 && pip install --user pulumi pulumi-gcp pulumi-kubernetes
            curl -o gcp-cli.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.tar.gz
            tar -xzvf gcp-cli.tar.gz
            echo ${GOOGLE_CLOUD_KEYS} | base64 --decode --ignore-garbage > ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
            ./google-cloud-sdk/install.sh  --quiet 
            echo 'export PATH=$PATH:~/google-cloud-sdk/bin' >> $BASH_ENV
            source $BASH_ENV
            gcloud auth activate-service-account --key-file ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
      - pulumi/update:
          stack: k8s
          working_directory: ${HOME}/project/pulumi/gcp/gke/

The deploy_to_gcp: job specified above is the portion of the pipeline that utilizes the Pulumi app and orb to actually stand up the new GKE cluster on GCP. Below, I’ll briefly walk through the deploy_to_gcp: job.

      - pulumi/login:
          access-token: ${PULUMI_ACCESS_TOKEN}

The above code shows the Pulumi orb’s login: command being declared and executed. The access-token: parameter is passed the ${PULUMI_ACCESS_TOKEN} environment variable that you set in the CircleCI dashboard.

            curl -o gcp-cli.tar.gz https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.tar.gz
            tar -xzvf gcp-cli.tar.gz
            echo ${GOOGLE_CLOUD_KEYS} | base64 --decode --ignore-garbage > ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json
            ./google-cloud-sdk/install.sh  --quiet 
            echo 'export PATH=$PATH:~/google-cloud-sdk/bin' >> $BASH_ENV
            source $BASH_ENV
            gcloud auth activate-service-account --key-file ${HOME}/project/pulumi/gcp/gke/cicd_demo_gcp_creds.json  

The above commands download and install the Google Cloud SDK, which is required to create and modify GKE clusters on GCP. The first two lines download and unpack the SDK. The echo ${GOOGLE_CLOUD_KEYS} | base64 --decode... command decodes the ${GOOGLE_CLOUD_KEYS} environment variable and writes its contents to the cicd_demo_gcp_creds.json file, which must exist in the Pulumi app project’s directory. The remaining commands in this run: block install the SDK, and the last line authorizes the service account to access GCP via the cicd_demo_gcp_creds.json file.

  - pulumi/update:
      stack: k8s
      working_directory: ${HOME}/project/pulumi/gcp/gke/

The code above utilizes the Pulumi orb’s update: command to kick off the application deployment to a new GKE cluster on GCP. The pulumi/update: command takes the stack: and working_directory: parameters, which specify the name of the Pulumi stack and the file path to the directory that was initialized as a Pulumi project. Your working_directory: will differ from the code example above.
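
For reference, the orb’s update command is roughly equivalent to running the Pulumi CLI yourself from the stack’s directory. This sketch assumes you are already logged in to Pulumi and the GCP credentials file is in place:

cd ${HOME}/project/pulumi/gcp/gke/
pulumi update --stack k8s

When you are finished experimenting, running pulumi destroy --stack k8s from the same directory tears the cluster and its related resources back down so the demo does not keep accruing GCP charges.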

Conclusion

In this post, I’ve demonstrated how to integrate infrastructure as code solutions into CI/CD pipelines. I’ve also demonstrated how to declare and execute CircleCI orbs within your CI/CD pipelines. These examples provide a solid foundation for using CI/CD automation to build, test, and deploy your code with IaC solutions.

Using CircleCI orbs improves productivity by simplifying how we write our CircleCI configuration. Orbs can also be shared, which saves time by letting teams reuse pre-built commands, jobs, and executors in their configuration files. Orbs are not limited to CircleCI + GKE deployments. You can browse the list of available orbs in the Orb Registry to find the ones that align with your choice of cloud platform, programming language, testing tools, and more.

To see the full example project, view the repo on GitHub here.