
HashiCorp recently announced that it is deprecating Atlas and will offer Terraform Enterprise as a standalone product to its customers. In this post, we will outline how to replicate your Atlas pipeline with Packer and CircleCI, with examples of Packer job configuration, AMI generation, using Terraform to manage change, and storing artifacts in S3. At CircleCI we use AMIs as part of our immutable infrastructure. Being able to apply configuration and change management to the AMI at a single point in our continuous deploy pipeline is hugely beneficial. Immutable infrastructure allows us to scale up to meet demand in as fast and provable a manner as possible - if our scale requests had to wait for the provisioning process to finish, we would have to over-provision to handle spikes.

Once your AMIs are generated this way, you can also benefit from moving your network infrastructure into code with tools like Terraform (or Chef, Puppet, etc.). Having your infrastructure exist as code allows it to benefit from change management as part of your continuous integration and deploy pipelines.

At each step of this process, changes are made in an auditable and reproducible manner, generating artifacts that can be tested, verified, and deployed as needed. The pipeline steps are no longer required to be synchronous or tightly coupled. We also gain the ability to swap out any tool in the pipeline, as long as its replacement meets the requirements for the artifacts it generates.

Now that we’ve covered why you would do this, let’s get to the how. In our example, we will be referencing these tools:

  • Docker, which allows you to test and fine-tune each Packer definition
  • Packer, which allows you to create multiple machine images from a single source configuration
  • The AWS command line (or similar), which allows you to deploy an instance of your new AMI
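
Once Packer has produced an AMI, launching an instance of it is a single AWS CLI call. A minimal sketch - the AMI ID and instance type are placeholders, and the DRY_RUN hook (which prints the command instead of running it) is our addition for testing outside AWS:

```shell
#!/bin/sh
# deploy_ami: launch one instance of the given AMI with the AWS CLI.
# With DRY_RUN=1 the command is printed instead of executed, which is
# handy for exercising the pipeline without touching AWS.
deploy_ami() {
  ami_id=$1
  cmd="aws ec2 run-instances --image-id $ami_id --instance-type t2.micro --count 1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}
```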

We will have a base AMI and a snowflake AMI that uses the base AMI as its source. We will also generate manifest JSON files for both and push them to S3.
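
The manifest push can be as simple as extracting the AMI ID from the manifest Packer writes and copying the file to S3. A sketch, assuming the JSON shape produced by Packer's manifest post-processor and a placeholder bucket name (real manifests are more robustly parsed with jq):

```shell
#!/bin/sh
# ami_from_manifest: pull the AMI ID out of a Packer manifest.json,
# whose artifact_id field looks like "us-east-1:ami-0abc1234".
ami_from_manifest() {
  sed -n 's/.*"artifact_id": *"[^:]*:\(ami-[a-z0-9]*\)".*/\1/p' "$1" | head -n 1
}

# push_manifest: store the manifest in S3, keyed by the AMI it
# describes. The bucket name is a placeholder.
push_manifest() {
  aws s3 cp "$1" "s3://example-artifacts/manifests/$(ami_from_manifest "$1").json"
}
```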

We are assuming that you have a passing familiarity with Packer and have at least gone through the HashiCorp Packer Getting Started docs.

The Packer configuration for the base AMI is pretty standard and you can see the complete configuration at

The example below has a few items that make debugging and testing your Packer configs easier. For example, we use an inline provisioner that loops until the cloud-init phase is complete; otherwise we could be installing onto the OS before it has fully initialized.

We also define user variables, including ami_sha, which gets its value from the Git SHA of the Packer configuration. This is used by the tag_exists() shell function to check whether the AMI being generated already exists.
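
A sketch of what tag_exists() might look like - the tag key ("sha") and the AWS_CMD indirection (which makes the function easy to stub in tests) are our assumptions, not necessarily the original implementation:

```shell
#!/bin/sh
# tag_exists: succeed if an AMI tagged with the given Git SHA already
# exists in our account. AWS_CMD defaults to the real AWS CLI but can
# be overridden with a stub for testing.
AWS_CMD=${AWS_CMD:-aws}
tag_exists() {
  sha=$1
  ami=$($AWS_CMD ec2 describe-images --owners self \
    --filters "Name=tag:sha,Values=$sha" \
    --query 'Images[0].ImageId' --output text)
  [ -n "$ami" ] && [ "$ami" != "None" ]
}
```

In the build script this lets us skip the build entirely when nothing changed, e.g. `tag_exists "$SHA" || packer build base.json`.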

"provisioners": [
  {
    "type": "shell",
    "inline": [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"
    ]
  },
  {
    "type": "shell",
    "scripts": [ ... ]
  }
],
"variables": {
  "ami_name": "baseline-ubuntu-1604",
  "ami_base": "ami-aa2ea6d0",
  "ami_sha":  "{{env `SHA`}}"
}

The actual work of provisioning is handled by the three task scripts, which prepare the APT environment for Ubuntu, set the timezone to UTC (the only real timezone), and then perform the ever-present apt-get upgrade. Because these basic tasks happen in our baseline AMI, and our dependent AMIs are regenerated automatically when the baseline AMI changes, the task scripts for our downstream AMIs can be specific to each AMI, which keeps our configurations DRY (Don't Repeat Yourself). Packer also gives us the ability to generate not only the AMI but also a Docker container, by including a Docker builder in our Packer configuration.
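
For example, a builders section that produces both an AMI and a Docker image from the same provisioning steps might look like the fragment below - the builder settings here are illustrative, not our exact configuration:

```json
"builders": [
  {
    "type": "amazon-ebs",
    "source_ami": "{{user `ami_base`}}",
    "instance_type": "t2.micro",
    "region": "us-east-1",
    "ssh_username": "ubuntu",
    "ami_name": "{{user `ami_name`}}-{{timestamp}}",
    "tags": { "sha": "{{user `ami_sha`}}" }
  },
  {
    "type": "docker",
    "image": "ubuntu:16.04",
    "commit": true
  }
]
```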

We could go deeper into the different knobs and levers Packer offers, but its documentation does a pretty good job of that, so I would rather dig into the “glue” code within our script that makes it possible to daisy-chain AMIs.

You may have already figured out that we use the Git SHA as our marker to identify whether a change to an AMI’s Packer configuration already has a corresponding generated AMI. Within that script we have get_base_ami(), which looks for the base AMI in two ways: first it checks the manifest.json file generated by a prior run; failing that, it searches the AWS AMI list for the Git SHA. Once the AMI is found, we can read the CircleCI artifact ID from that AMI’s tags.
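
In outline, get_base_ami() behaves something like this sketch - the manifest parsing, the tag key, and the AWS_CMD stub hook are our assumptions for illustration:

```shell
#!/bin/sh
# get_base_ami: resolve the base AMI ID, preferring the manifest.json
# left by a prior run and falling back to an AWS lookup by Git SHA.
AWS_CMD=${AWS_CMD:-aws}
get_base_ami() {
  sha=$1
  manifest=${2:-manifest.json}
  ami=""
  if [ -f "$manifest" ]; then
    # artifact_id in the manifest looks like "us-east-1:ami-0abc1234"
    ami=$(sed -n 's/.*"artifact_id": *"[^:]*:\(ami-[a-z0-9]*\)".*/\1/p' "$manifest" | head -n 1)
  fi
  if [ -z "$ami" ]; then
    ami=$($AWS_CMD ec2 describe-images --owners self \
      --filters "Name=tag:sha,Values=$sha" \
      --query 'Images[0].ImageId' --output text)
    [ "$ami" = "None" ] && ami=""
  fi
  echo "$ami"
}
```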

With this, when our dependent CircleCI Workflow step is triggered by a change in the base AMI’s CircleCI Workflow, we can retrieve the base AMI that was generated and insert it into our Packer build.
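
Wiring that together is then a short step in the dependent workflow. A sketch - the template filename and helper names are placeholders; ami_base matches the user variable shown earlier, and SHA is read from the environment by the template's ami_sha variable:

```shell
#!/bin/sh
# snowflake_build_cmd: compose the Packer invocation for the dependent
# AMI, injecting the resolved base AMI as the ami_base user variable.
snowflake_build_cmd() {
  base_ami=$1
  echo "packer build -var ami_base=$base_ami snowflake.json"
}

# In the workflow step, something like:
#   BASE_AMI=$(get_base_ami "$BASE_SHA")     # helper from the build script
#   SHA=$CIRCLE_SHA1 packer build -var "ami_base=$BASE_AMI" snowflake.json
```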

Packer is a very flexible tool that can target multiple environments from a single configuration definition. Combined with CircleCI Workflows, it creates a continuous integration and deploy pipeline for our immutable infrastructure. By keeping the “business logic” of our AMI generation in bash scripts that can be managed alongside the continuous integration pipeline, we have satisfied our requirements for an auditable and automatable change management flow for AMI generation.

I hope this admittedly simple example helps show how this tool combination can move an important process from the tedious, manual side of your DevOps task list to the automatic side.