Note from the publisher: You have managed to find some of our old content and it may be outdated and/or incorrect. Try searching in our docs or on the blog for current information.


Some Docker containers are perfect for CircleCI 2.0. PostgreSQL, for instance, spins up everything you need if you just pass in a few environment variables:

version: 2.0
jobs:
  build:
    docker:
      - image: clojure:alpine
      - image: postgres:9.6
        environment:
          POSTGRES_USER: username
          POSTGRES_DB: db
          ...

But sometimes you'll come across a third-party container that doesn't play so nice: the resources you need exist inside that container and nowhere else. The issue you'll run into is that CircleCI uses your primary image, the first one listed, as the execution environment. So in the config above, I have access to the ports Postgres exposes, but psql isn't in my $PATH; only the contents of Clojure's Alpine container are.

HashiCorp's Vault is one such container. On every boot, Vault generates a new root token, and every call to it needs that token. If Vault is your primary image, this is no real problem, since the file holding the token lives at $HOME/.vault-token. But if your config looks like this:

version: 2.0
jobs:
  build:
    docker:
      - image: clojure:alpine
      - image: postgres:9.6
        environment:
          POSTGRES_USER: username
          POSTGRES_DB: db
      - image: vault:0.7.3

You’re going to have a bad time.

A Simple Workaround

While testing against Vault, I decided I wanted to call the real service instead of mocking out calls to it. Spinning things up locally went more or less smoothly, but getting access to the vault-token in CircleCI proved to be a chore.

Python Simple Server to the Rescue

If you need access to resources inside an opaque container, I highly recommend wrapping the container with a python -m SimpleHTTPServer. SimpleHTTPServer serves the directory it's started in over HTTP, which means you can traverse and read the container's filesystem through a port.
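
To make that concrete, here's a minimal sketch of the idea on its own, outside of CircleCI (same port and file used later in this post):

# in one shell, serve the home directory over HTTP
cd $HOME
python -m SimpleHTTPServer 8201

# in another shell, any file under that directory is now reachable, dotfiles included
curl localhost:8201/.vault-token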

In the end I simply did this to get my testing working (all code below can be found here):

Dockerfile


FROM vault:0.7.3

# Python is only needed for SimpleHTTPServer
RUN apk add --update python

ADD ./vault/ /vault
WORKDIR /vault

# point the vault CLI at the local dev server
ENV VAULT_ADDR=http://127.0.0.1:8200
ENV SKIP_SETCAP=skip

# port the Python file server listens on
EXPOSE 8201

ENTRYPOINT ["./docker-entrypoint.sh"]

docker-entrypoint.sh

#!/bin/sh

python_http_server() {
  # we want to be able to serve the .vault-token file for testing
  cd "$HOME"
  python -m SimpleHTTPServer 8201
}

vault_server() {
  vault server -dev
}

# run the Vault dev server in the background, keep the file server in the foreground
vault_server & python_http_server
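
Before pushing this anywhere, it's worth a quick local sanity check. A rough sketch, assuming you build from the directory holding the Dockerfile (the tag is made up; note that vault server -dev binds to 127.0.0.1 inside the container, so only the Python server's port is published here):

docker build -t vault-cci-local .
docker run -d -p 8201:8201 vault-cci-local

# give the dev server a moment to boot, then its freshly generated root token is readable over HTTP
curl localhost:8201/.vault-token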

I published the container to Docker Hub, then included it in my list of images in place of the stock Vault image:

.circleci/config.yml

version: 2.0
jobs:
  build:
    docker:
      - image: clojure:alpine
      - image: postgres:9.6
        environment:
          POSTGRES_USER: contexts
          POSTGRES_DB: contexts_service
      - image: mannimal/vault-cci:latest

Now my .circleci/config.yml has steps like these:

      - run:
          name: Setup Vault
          command: |
            VAULT__CLIENT_TOKEN=`curl localhost:8201/.vault-token`

            curl --fail -v -X POST -H "X-Vault-Token:$VAULT__CLIENT_TOKEN" -d '{"type":"transit"}' localhost:8200/v1/sys/mounts/transit
      - run:
          name: Run Tests
          command: |
            export VAULT__CLIENT_TOKEN=`curl localhost:8201/.vault-token`
            lein test
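
If something misbehaves, a quick sanity check (a debugging aid of mine, not part of the original config) is to confirm the token works against the Vault API before running the tests, for example by listing the mounted secret backends:

VAULT__CLIENT_TOKEN=`curl localhost:8201/.vault-token`
curl --fail -H "X-Vault-Token:$VAULT__CLIENT_TOKEN" localhost:8200/v1/sys/mounts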

All of the above is then continuously deployed to Docker Hub using CircleCI.
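
The publish step itself isn't shown here, but under the docker executor it boils down to building and pushing from a job with access to a Docker daemon (for example via setup_remote_docker). Roughly, with hypothetical credential variables that aren't taken from the original config:

docker build -t mannimal/vault-cci:latest .
# DOCKERHUB_USER / DOCKERHUB_PASS are assumed project environment variables
docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASS"
docker push mannimal/vault-cci:latest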