Storing Build Artifacts

This document describes how to work with Artifacts on CircleCI.

Artifacts overview

Artifacts persist data after a job has completed and can be used to store the outputs of your build process.

For example, when a Java build/test process finishes, the output of the process is saved as a .jar file. CircleCI can store this file as an artifact, keeping it available after the process has finished.

[Image: artifacts data flow]

Another example is a project packaged as an Android app, where the generated .apk file is uploaded to Google Play.

If a job produces persistent artifacts such as screenshots, coverage reports, core files, or deployment tarballs, CircleCI can automatically save and link them for you.

[Image: artifacts tab screenshot]

Find links to the artifacts under the “Artifacts” tab on the Job page. Artifacts are stored on Amazon S3 and, for private projects, are protected by your CircleCI account. There is a 3GB file size limit for uploads via curl.

Artifacts are accessible for thirty days after creation. If you rely on them as a source of documentation or other persistent content, we recommend deploying the output to a dedicated target such as S3, or to GitHub Pages or Netlify for static websites.

Note: Uploaded artifact filenames are encoded using the Java URLEncoder. Keep this in mind if you are expecting to find artifacts at a given path within the application.
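To see what the encoding does to a filename, you can approximate the Java URLEncoder with Python’s urllib.parse.quote_plus (an approximation: the two agree on spaces and most punctuation, but differ on a few characters such as *). The filename below is a made-up example:

```shell
# Approximate Java's URLEncoder with Python's quote_plus:
# spaces become '+', parentheses become %28/%29.
python3 -c 'import urllib.parse; print(urllib.parse.quote_plus("my artifact (v2).txt"))'
# prints: my+artifact+%28v2%29.txt
```

So an artifact stored with spaces or punctuation in its name will appear under an encoded path in the application.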

Uploading artifacts

To upload artifacts created during builds, use the following example:

version: 2
jobs:
  build:
    docker:
      - image: python:3.6.3-jessie
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD  # context / project UI env-var reference

    working_directory: /tmp
    steps:
      - run:
          name: Creating Dummy Artifacts
          command: |
            echo "my artifact file" > /tmp/artifact-1;
            mkdir /tmp/artifacts;
            echo "my artifact files in a dir" > /tmp/artifacts/artifact-2;

      - store_artifacts:
          path: /tmp/artifact-1
          destination: artifact-file

      - store_artifacts:
          path: /tmp/artifacts
The store_artifacts step uploads two build artifacts: a file (/tmp/artifact-1) and a directory (/tmp/artifacts). After the artifacts successfully upload, view them in the Artifacts tab of the Job page in your browser. If you’re uploading hundreds of artifacts, consider compressing them and uploading a single archive to accelerate this step. There is no limit on the number of store_artifacts steps a job can run.
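A minimal sketch of the compression approach, using a hypothetical /tmp/test-results directory as the artifact source:

```shell
# Bundle a directory of many files into a single archive before uploading,
# so store_artifacts handles one file instead of hundreds.
# (/tmp/test-results is a made-up artifact directory.)
mkdir -p /tmp/test-results
echo "sample result" > /tmp/test-results/results-1.xml
tar -czf /tmp/test-results.tar.gz -C /tmp test-results
```

The single /tmp/test-results.tar.gz file can then be passed to store_artifacts as path.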

Currently, store_artifacts has two keys: path and destination.

  • path is a path to the file or directory to be uploaded as artifacts.
  • destination (Optional) is a prefix added to the artifact paths in the artifacts API. The directory of the file specified in path is used as the default.
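For instance, a hypothetical sketch of the effect of destination (the file names here are made up), uploading the same file with and without a prefix:

```yaml
# Hypothetical sketch: how destination changes where an artifact appears
- store_artifacts:
    path: /tmp/output/report.html   # appears under its on-disk path by default
- store_artifacts:
    path: /tmp/output/report.html
    destination: reports            # appears under the reports/ prefix instead
```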

Uploading core files

This section describes how to get core dumps and push them as artifacts for inspection and debugging. The following example creates a short C program that calls abort(3) to crash the program.

  1. Create a Makefile with the following lines (the recipe line must be indented with a tab):

         all:
         	gcc -o dump main.c

  2. Create a main.c file with the following lines:

         #include <stdlib.h>

         int main(int argc, char **argv) {
             abort();
         }

  3. Run make, then run ./dump; the generated program prints Aborted (core dumped).

Following is a full config.yml that compiles the example C abort program, and collects the core dumps as artifacts.

version: 2
jobs:
  build:
    docker:
      - image: gcc:8.1.0
        auth:
          username: mydockerhub-user
          password: $DOCKERHUB_PASSWORD  # context / project UI env-var reference
    working_directory: ~/work
    steps:
      - checkout
      - run: make
      - run: |
          # tell the operating system to remove the file size limit on core dump files
          ulimit -c unlimited
          ./dump
      - run:
          command: |
            mkdir -p /tmp/core_dumps
            cp core.* /tmp/core_dumps
          when: on_fail
      - store_artifacts:
          path: /tmp/core_dumps

The ulimit -c unlimited removes the file size limit on core dump files. With the limit removed, every program crash creates a core dump file in the current working directory. With a naming pattern of core.%p.%E, %p is the process id and %E is the pathname of the executable. The pattern is configured in /proc/sys/kernel/core_pattern; see the core(5) man page for details.
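You can confirm the limit has been lifted from within the same shell. Note that ulimit settings do not persist across separate run steps, which is why the example config raises the limit in the same step that runs the crashing program:

```shell
# Remove the core dump size limit for the current shell, then verify it.
ulimit -c unlimited
ulimit -c   # expected to print: unlimited
```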

Finally, the core dump files are stored to the artifacts service with store_artifacts in the /tmp/core_dumps directory.

[Image: core dump file in the Artifacts page]

When CircleCI runs a job, a link to the core dump file appears in the Artifacts tab of the Job page.

Downloading all artifacts for a build on CircleCI

To download your artifacts with curl, follow the steps below.

  1. Create a personal API token and copy it to your clipboard.

  2. In a Terminal window, cd to a directory where you want to store the artifacts.

  3. Run the commands below. Use the table beneath the commands to substitute actual values for all placeholders that start with a colon (:).

# Set an environment variable for your API token.
export CIRCLE_TOKEN=':your_token'

# `curl` gets all artifact details for a build,
# then the result is piped into `grep` to extract the URLs.
# Finally, `wget` downloads the artifacts to the current directory in your terminal.

curl -H "Circle-Token: $CIRCLE_TOKEN" \
   https://circleci.com/api/v1.1/project/:vcs-type/:username/:project/:build_num/artifacts \
   | grep -o 'https://[^"]*' \
   | wget --verbose --header "Circle-Token: $CIRCLE_TOKEN" --input-file -

Similarly, if you want to download the latest artifacts of a build, replace the build number in the URL with latest:

curl -H "Circle-Token: <circle-token>" https://circleci.com/api/v1.1/project/:vcs-type/:username/:project/latest/artifacts

You can read more about using CircleCI’s API to interact with artifacts in our API reference guide.
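The grep extraction above assumes each artifact’s URL appears somewhere in the JSON response. A sketch of a more targeted extraction uses Python’s json module on the response; the sample JSON below is made up, but mirrors the shape of the v1.1 artifacts endpoint (one url field per artifact):

```shell
# Pull each artifact's URL out of a sample JSON response.
# In practice, pipe the curl output in place of the echo.
echo '[{"path":"artifact-1","url":"https://example.com/artifact-1"}]' \
  | python3 -c 'import json, sys; print("\n".join(a["url"] for a in json.load(sys.stdin)))'
# prints: https://example.com/artifact-1
```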

Placeholder   Meaning
:your_token   The personal API token you created above.
:vcs-type     The version control system (VCS) you are using. Either github or bitbucket.
:username     The VCS project account username or organization name for the target project. Located at the top left of the screen in the CircleCI application.
:project      The name of the target VCS repository.
:build_num    The number of the job (aka build) for which you want to download artifacts.
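As a sketch of how the placeholders fit together, the values below (github/alice/my-repo/42) are made-up examples substituted into the request URL:

```shell
# Build the artifacts request URL from hypothetical placeholder values.
VCS_TYPE=github
ORG=alice        # hypothetical organization
PROJECT=my-repo  # hypothetical repository
BUILD_NUM=42
URL="https://circleci.com/api/v1.1/project/$VCS_TYPE/$ORG/$PROJECT/$BUILD_NUM/artifacts"
echo "$URL"
# prints: https://circleci.com/api/v1.1/project/github/alice/my-repo/42/artifacts
```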

See also

Caching Dependencies

Help make this document better

This guide, as well as the rest of our docs, is open source and available on GitHub. We welcome your contributions.