Smarter Testing Preview

Smarter Testing is available in preview. When the feature is made generally available, there will be a cost associated with access and usage.

Use Smarter Testing to optimize test runs as follows:

  • Run only tests that are impacted by code changes.

  • Evenly distribute tests across parallel execution nodes.

Smarter Testing reduces test execution time while maintaining test confidence.

Is my project a good fit for Smarter Testing?

Smarter Testing is most beneficial for:

  • Tests that exercise code within the same repository.

  • Projects with comprehensive test coverage. The more thorough your tests, the more precisely Smarter Testing can identify which tests are impacted by changes.

  • Test frameworks with built-in coverage support (Jest, pytest, Go test, Vitest) where generating coverage reports is straightforward.

    Smarter Testing uses code coverage data to determine which tests to run when source files are modified. Tests that cover large amounts of the codebase will run more frequently since most source changes will impact those tests.

Limitations

  • Generating code coverage data is essential for determining how tests are related to code. If tests are run in a way that makes generating and accessing code coverage data tricky then Smarter Testing may not be a good fit.

  • Smarter Testing works best when testing a single deployable artifact. For example, a monorepo with integration tests that span multiple packages/services, especially when services run in separate containers, makes coverage generation and consolidation difficult.

  • Smarter Testing needs to be configured with commands to discover all available tests and run a subset of those tests. If you cannot run commands to discover tests and run a subset of tests on the CLI then Smarter Testing may not be a good fit.

Key benefits

  • Faster CI/CD pipelines through intelligent test selection.

  • Optimized resource usage and cost efficiency.

  • Fast feedback loops for development teams.

  • Efficient scaling as test suites grow.

How it works

Smarter Testing runs a test suite in three distinct phases:

  • Discovering tests.

  • Running selected tests.

  • Analyzing test impact.

Smarter Testing optimizes test suite execution through two features:

  • Dynamic test splitting.

  • Test impact analysis.

Each feature is described in more detail below.

Dynamic test splitting

Dynamic test splitting distributes your tests across parallel execution nodes. The system maintains a shared queue that each node pulls from to create a balanced workload.

When you configure parallelism in your job and enable dynamic test splitting, Smarter Testing automatically:

  • Retrieves timing data from previous test runs.

  • Uses the previous test run times as a heuristic to guide the order that tests are enqueued. Longer tests will be run earlier in the process.

  • Tunes the size of the test batches run by each node to ensure the workload is evenly balanced.

This approach prevents a slow node from extending the job runtime while the other nodes sit idle: every node continues to pull tests until the entire test suite has been completed.
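As a sketch, that combination might look like the following. The suite name, parallelism value, and surrounding job definition are placeholders; the dynamic-test-splitting option itself is described in 4. Test suite configuration options.

# .circleci/config.yml (sketch)
jobs:
  test:
    parallelism: 4
    steps:
      - run: circleci run testsuite "ci tests"

# .circleci/test-suites.yml (sketch)
options:
  dynamic-test-splitting: true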

Test impact analysis

Test impact analysis identifies which tests need to run based on the files that changed in your checked out code. Analysis results are stored as impact data for future test runs. The system works in two phases:

Analysis phase

Builds a mapping between your tests and the code they exercise. Each test is run individually with code coverage enabled to determine which files it covers. Analysis runs on your default branch, but you can configure it to run on any branch with any trigger (webhook, API, or scheduled pipeline).

Selection phase

Compares changed files against the test impact data and selects only tests that exercise modified code. By default, test selection is applied on feature branches and all tests are run on your default branch. You can customize this behavior in your CircleCI configuration.
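For example, to force the full suite to run on a feature branch you can override the default selection behavior with the --test-selection flag described in 4. Test suite configuration options:

$ circleci run testsuite "ci tests" --test-selection=all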

Test selection works by comparing the current state of the repository against the most recent test impact analysis data. Any tests that cover modified files are selected to run. Additionally, the entire test suite is selected if any of the full-test-run-paths files are modified. See the 4. Test suite configuration options for details.

The analysis phase typically runs slower than running the selected tests because it executes tests individually with coverage instrumentation. However, this one-time cost enables much faster test execution on branches using selection mode.

[Diagram: ten discovered tests (A through J) flow into test impact analysis, which selects five (A through E); dynamic test splitting then distributes the selected tests across Nodes 1, 2, and 3.]

1. Getting started

By the end of this section you will have completed the following:

  • Installed the Smarter Testing tooling locally.

  • Configured your .circleci/test-suites.yml to discover and run your tests.

1.1 Install the testsuite plugin locally

The testsuite plugin can be used both locally and in CI. We recommend getting started locally for a faster feedback loop.

  1. Install the latest CircleCI CLI. For instructions, see the Install and Configure the CircleCI Local CLI guide.

  2. Install the latest testsuite CLI plugin using one of the methods described in the following section.

The testsuite CLI plugin is already available for use in CircleCI Docker containers so there is no need to install it in your jobs.

Install the testsuite plugin with Homebrew

The testsuite CLI plugin is distributed through a Homebrew tap.

$ brew install circleci
$ brew install circleci/tap/circleci-testsuite

# or

$ brew upgrade circleci circleci/tap/circleci-testsuite

Install the testsuite plugin manually

Prebuilt binaries are available from the CircleCI binary releases public bucket. Download the binary for your platform and verify it against the matching checksum:

  • Mac Arm

    $ sha256sum -c <(echo "04f37e47a12b4677cda35b804357b10c5891ef42081f1674fe74d82f56baf3cb  ./circleci-testsuite.gz")

  • Mac Intel

    $ sha256sum -c <(echo "8c933b3358d8659ea7cc228b2fbd83b4e605c472c0b60355b3b0f97d0952852b  ./circleci-testsuite.gz")

  • Linux AMD64

    $ sha256sum -c <(echo "f189936b32bae5cde22a7522dc58bd4e988c1384f690801c4cc88dfebf90319c  ./circleci-testsuite.gz")

  • Linux Arm

    $ sha256sum -c <(echo "b84ccd76ddb4bad1e8226381e308f9a9d95e2314cd3676fcdaffcf234446fcd5  ./circleci-testsuite.gz")

  • Linux ARM64

    $ sha256sum -c <(echo "a5770f0c964b434df3560358836fa2401f526379e84aac2cd6c7946180ed1ce6  ./circleci-testsuite.gz")

  • Windows AMD64

    $ sha256sum -c <(echo "ca0f395b1ffeca557428a30cc405ad193fe681792822cfae73c7c7c813687c0c  ./circleci-testsuite.exe.gz")

After downloading the plugin and verifying the checksum, complete the following steps:

  1. Decompress the binary.

    $ gunzip circleci-testsuite.gz
  2. Set the execute bit.

    $ chmod u+x circleci-testsuite
  3. Move it to a directory on your shell’s $PATH.

    $ mv circleci-testsuite "${HOME}/bin/"

1.2 Configure the discover command

The circleci run testsuite CLI tooling must be run from your repository root directory. The commands run as part of the test suite will also be run from your repository root directory, and should not use cd to change the working directory.

Smarter Testing operates on "test atoms".

A "test atom" is a single runnable unit that exercises one or more tests, depending on the specific test runner being used. For example:

  • go test pkg/foo

  • jest index.test.ts

  • ./gradlew test --tests "TestMethod"

The discover command finds all test atoms for a given test suite. The stdout of this command is split on whitespace, including newlines, and each resulting token is treated as a test atom.

The discover command should not execute any tests.

Every token printed to stdout is interpreted as a test atom, so any tool that prints extra metadata to stdout must have that output suppressed.

Run the discover command in your shell to examine the output.
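For example, with Jest the output looks like a newline-separated list of test file paths (illustrative paths; yours will differ):

$ jest --listTests
/home/circleci/project/src/pages/dashboard/Dashboard.test.tsx
/home/circleci/project/src/pages/dashboard/CreateProjectButton.test.tsx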

  1. Create a .circleci/test-suites.yml file in the project root and populate the discover command:

    • Vitest

    • Jest

    • Yarn with Jest

    • pytest

    • Go

    • Shell commands

    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: vitest list --filesOnly
    run: |
      echo "running: <<test.atoms>>" | tr ' ' '\n'
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: jest --listTests
    run: |
      echo "running: <<test.atoms>>" | tr ' ' '\n'
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: yarn --silent test --listTests
    run: |
      echo "running: <<test.atoms>>" | tr ' ' '\n'
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: find ./tests -type f -name 'test*.py'
    run: |
      echo "running: <<test.atoms>>" | tr ' ' '\n'
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
    run: |
      echo "running: <<test.atoms>>" | tr ' ' '\n'
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: find test/ -type f -name '*_test.ts'
    run: |
      echo "running: <<test.atoms>>" | tr ' ' '\n'
  2. Run the test suite and confirm that the discover command found the test atoms you expect:

    $ circleci run testsuite "ci tests" --local
    Running test-suite-subcommand version "1.0.14935-630104a" built "2025-11-25T16:15:39Z"
    Testsuite timeout: 4h40m0s
    Running test suite 'ci tests'
    
    Suite Configuration:
    
    name: ci tests
    discover:
        command: vitest list --filesOnly
        shell: /bin/sh
    run:
        command: |-
            echo 'running: <<test.atoms>>' | tr ' ' '
            '
        shell: /bin/sh
    
    
    Discovering...
    Discovered 2 tests in 29ms
    
    Selecting tests...
    Selecting all tests, no impact analysis available
    Selected 2 tests, Skipped 0 tests in 0s
    
    Timing data is not present.
    Sorted tests in 0s
    
    Running 2 tests
       echo 'running: src/pages/dashboard/Dashboard.test.tsx src/pages/dashboard/CreateProjectButton.test.tsx' | tr ' ' '\n'
    running:
    src/pages/dashboard/Dashboard.test.tsx
    src/pages/dashboard/CreateProjectButton.test.tsx
    Ran 2 tests in 34ms
    
    Analysis not configured
    Not updating test impact data, analysis not enabled

1.3 Configure the run command

The run command executes the test atoms discovered by the discover command using a test runner. The run command is the same command you use to launch your test runner locally.

Make sure that the command stores JUnit results, and that it is passed a list of test atoms to run, for example:

$ vitest run --reporter=junit \
             --outputFile="test-reports/tests.xml" \
             --bail 0 \
             src/pages/dashboard/Dashboard.test.tsx src/pages/dashboard/CreateProjectButton.test.tsx

The test runner command needs to be modified to use placeholders for the JUnit output location and for the test atoms that should be run.

"Discovered" test atoms can be specified in one of two ways:

  • Use the template variable << test.atoms >> in the run command. This will be replaced with a space-separated list of test atoms to run.

  • If the template variable is not found in the run command, the command’s stdin will be a newline-separated list of test atoms.
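For example, a run command that contains no template variable can consume the atoms from stdin with xargs, which appends each token as an argument to the test runner. This is a sketch modeled on the pytest example below:

# .circleci/test-suites.yml
run: xargs pytest --junit-xml="<< outputs.junit >>"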

The template variable << outputs.junit >> is used in the run command so Smarter Testing can interpret test results. The location of the test results should be defined in the outputs map so that it can be uploaded in CI with the built-in store_test_results command.

Making these changes to the command above gives:

$ vitest run --reporter=junit \
             --outputFile="<< outputs.junit >>" \
             --bail 0 \
             << test.atoms >>
  1. Update the .circleci/test-suites.yml with the run command.

  2. To upload test results in your CI jobs, the JUnit output file path also needs to be set in the test suite configuration. Set outputs.junit in .circleci/test-suites.yml with your preferred JUnit output file path:

    • Vitest

    • Jest

    • Yarn with Jest

    • pytest

    • Go

    • Go with gotestsum

    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: vitest list --filesOnly
    run: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
    outputs:
      junit: test-reports/tests.xml
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: jest --listTests
    run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
    outputs:
      junit: test-reports/tests.xml
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: yarn --silent test --listTests
    run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" yarn test --runInBand --reporters=jest-junit --bail << test.atoms >>
    outputs:
      junit: test-reports/tests.xml
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: find ./tests -type f -name 'test*.py'
    run: pytest --disable-pytest-warnings --no-header --quiet --tb=short --junit-xml="<< outputs.junit >>" << test.atoms >>
    outputs:
      junit: test-reports/tests.xml
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
    run: go test -race -count=1 << test.atoms >>
    outputs:
      junit: test-reports/tests.xml
    # .circleci/test-suites.yml
    ---
    name: ci tests
    discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
    run: go tool gotestsum --junitfile="<< outputs.junit >>" -- -race -count=1 << test.atoms >>
    outputs:
      junit: test-reports/tests.xml
  3. Run the test suite and confirm that the run command runs the test atoms you expect:

    $ circleci run testsuite "ci tests" --local
    Running test-suite-subcommand version "1.0.14935-630104a" built "2025-11-25T16:15:39Z"
    Testsuite timeout: 4h40m0s
    Running test suite 'ci tests'
    
    Suite Configuration:
    
    name: ci tests
    discover:
        command: vitest list --filesOnly
        shell: /bin/sh
    run:
        command: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
        shell: /bin/sh
    outputs:
        junit: test-reports/tests.xml
    
    
    Discovering...
    Discovered 2 tests in 29ms
    
    Selecting tests...
    Selecting all tests, no impact analysis available
    Selected 2 tests, Skipped 0 tests in 0s
    
    Timing data is not present.
    Sorted tests in 0s
    
    
    Waiting for tests...
    Running 1 tests
       vitest run --reporter=junit --outputFile="test-reports/tests-1.xml" --bail 0 src/pages/dashboard/Dashboard.test.tsx
    JUNIT report written to /home/circleci/project/test-reports/tests-1.xml
    Running 1 tests
       vitest run --reporter=junit --outputFile="test-reports/tests-2.xml" --bail 0 src/pages/dashboard/CreateProjectButton.test.tsx
    JUNIT report written to /home/circleci/project/test-reports/tests-2.xml
    Ran 2 tests in 527ms
    
    Analysis not configured
    Not updating test impact data, analysis not enabled

2. Enable Smarter Testing

We recommend following the steps in 1. Getting started first before enabling the Smarter Testing feature to ensure the discover and run commands are set up correctly.

Steps:

  1. Configure test impact analysis in test-suites.yml.

  2. Analyze test impact.

  3. Validate that only relevant tests are selected when you modify source files.

2.1 Configure the analysis command

The analysis command executes test atoms one at a time using a test runner. The analysis command is the same command you would use to launch your test runner locally with code coverage enabled.

Make sure that the command stores code coverage results, and that it is passed a test atom to run, for example:

$ vitest run --coverage.enabled \
             --coverage.all=false \
             --coverage.reporter=lcov \
             --coverage.provider=v8 \
             --coverage.reportsDirectory="coverage/" \
             --bail 0 \
             src/pages/dashboard/Dashboard.test.tsx

The analysis command needs to be modified to use placeholders for the coverage report output location and for the test atom that should be run. Some test runners (including vitest) place coverage reports in a directory, and there may be multiple reports. Reports need to be joined into a single file so that Smarter Testing can interpret the results.

The test atom to analyze can be specified in one of two ways:

  • Use the template variable << test.atoms >> in the analysis command. This will be replaced with the test atom to analyze.

  • If the template variable is not found in the analysis command, the test atom will be passed on stdin.

Different template variables are available for coverage output, depending on the format of the coverage data:

  • LCOV: << outputs.lcov >>

  • Go's coverage format: << outputs.go-coverage >>

Your analysis command should use the output variable for the coverage format it produces.

Making these changes to the command above gives:

$ vitest run --coverage.enabled \
             --coverage.all=false \
             --coverage.reporter=lcov \
             --coverage.provider=v8 \
             --coverage.reportsDirectory="$(dirname << outputs.lcov >>)" \
             --bail 0 \
             << test.atoms >> \
             && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
We use the directory containing the << outputs.lcov >> file as the coverage report directory, then concatenate all coverage files into << outputs.lcov >>.
  • Vitest

  • Jest

  • Yarn with Jest

  • pytest

  • Go

  • Go with gotestsum

# .circleci/test-suites.yml
---
name: ci tests
discover: vitest list --filesOnly
run: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
analysis: |
  vitest run --coverage.enabled \
             --coverage.all=false \
             --coverage.reporter=lcov \
             --coverage.provider=v8 \
             --coverage.reportsDirectory="$(dirname << outputs.lcov >>)" \
             --silent \
             --bail 0 \
             << test.atoms >> \
             && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  # Enable test impact analysis.
  test-impact-analysis: true
  # Limit analysis to about 1 minute, in order to confirm the command is working correctly
  test-analysis-duration: 1
# .circleci/test-suites.yml
---
name: ci tests
discover: jest --listTests
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: |
  jest --runInBand \
       --silent \
       --coverage \
       --coverageProvider=v8 \
       --coverageReporters=lcovonly \
       --coverage-directory="$(dirname << outputs.lcov >>)" \
       --bail \
       << test.atoms >> \
       && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  # Enable test impact analysis.
  test-impact-analysis: true
  # Limit analysis to about 1 minute, in order to confirm the command is working correctly
  test-analysis-duration: 1
# .circleci/test-suites.yml
---
name: ci tests
discover: yarn --silent test --listTests
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" yarn test --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: |
  yarn test --runInBand \
            --coverage \
            --coverageProvider=v8 \
            --coverageReporters=lcovonly \
            --coverage-directory="$(dirname << outputs.lcov >>)" \
            --bail \
            << test.atoms >> \
            && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  # Enable test impact analysis.
  test-impact-analysis: true
  # Limit analysis to about 1 minute, in order to confirm the command is working correctly
  test-analysis-duration: 1
# .circleci/test-suites.yml
---
name: ci tests
discover: find ./tests -type f -name 'test*.py'
run: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --junit-xml="<< outputs.junit >>" \
         << test.atoms >>
analysis: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --cov \
         --cov-report=lcov:<< outputs.lcov >> \
         << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  # Enable test impact analysis.
  test-impact-analysis: true
  # Limit analysis to about 1 minute, in order to confirm the command is working correctly
  test-analysis-duration: 1
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go test -race -count=1 << test.atoms >>
analysis: go test -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  # Enable test impact analysis.
  test-impact-analysis: true
  # Limit analysis to about 1 minute, in order to confirm the command is working correctly
  test-analysis-duration: 1
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go tool gotestsum --junitfile="<< outputs.junit >>" -- -race -count=1 << test.atoms >>
analysis: go tool gotestsum -- -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  # Enable test impact analysis.
  test-impact-analysis: true
  # Limit analysis to about 1 minute, in order to confirm the command is working correctly
  test-analysis-duration: 1

Run the test suite with test selection disabled and analysis enabled, and confirm that the analysis command analyzes the test atoms you expect. Look for the output lines Found N files impacting TEST, which show analysis discovering the source files covered by each test:

$ circleci run testsuite "ci tests" --local --test-selection=none --test-analysis=impacted
Running test-suite-subcommand version "1.0.14935-630104a" built "2025-11-25T16:15:39Z"
Testsuite timeout: 4h40m0s
Running test suite 'ci tests'

Suite Configuration:

name: ci tests
discover:
    command: vitest list --filesOnly
    shell: /bin/sh
run:
    command: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
    shell: /bin/sh
analysis:
    command: |
        vitest run --coverage.enabled \
                   --coverage.all=false \
                   --coverage.reporter=lcov \
                   --coverage.provider=v8 \
                   --coverage.reportsDirectory="$(dirname << outputs.lcov >>)" \
                   --silent \
                   --bail 0 \
                   << test.atoms >> \
                   && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
    shell: /bin/sh
outputs:
    junit: test-reports/tests.xml
    lcov: /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov
options:
    test-impact-analysis: true
    test-analysis-duration: 1m0s


Discovering...
Discovered 2 tests in 29ms

Selecting tests...
Selecting all tests, no impact analysis available
Selected 2 tests, Skipped 0 tests in 0s

Timing data is not present.
Sorted tests in 0s


Waiting for tests...
Running 1 tests
   vitest run --reporter=junit --outputFile="test-reports/tests-1.xml" --bail 0 src/pages/dashboard/Dashboard.test.tsx
JUNIT report written to /home/circleci/project/test-reports/tests-1.xml
Running 1 tests
   vitest run --reporter=junit --outputFile="test-reports/tests-2.xml" --bail 0 src/pages/dashboard/CreateProjectButton.test.tsx
JUNIT report written to /home/circleci/project/test-reports/tests-2.xml
Ran 2 tests in 527ms

Analyzing 2 tests
Waiting for tests to analyze...
Analysis duration: 1m0s
Running impact analysis for src/pages/dashboard/Dashboard.test.tsx
   vitest run --coverage.enabled \
           --coverage.all=false \
           --coverage.reporter=lcov \
           --coverage.provider=v8 \
           --coverage.reportsDirectory="$(dirname /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov)" \
           --silent \
           --bail 0 \
           src/pages/dashboard/Dashboard.test.tsx \
           && cat "$(dirname /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov)"/*.info > /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov


 RUN  v4.0.8 /home/circleci/project
      Coverage enabled with v8

 ✓ src/pages/dashboard/Dashboard.test.tsx (1 test) 86ms

 Test Files  1 passed (1)
      Tests  1 passed (1)
   Start at  13:48:53
   Duration  20.08s (transform 759ms, setup 522ms, collect 2.22s, tests 86ms, environment 367ms, prepare 44ms)

Found 127 files impacting test src/pages/dashboard/Dashboard.test.tsx

Running impact analysis for src/pages/dashboard/CreateProjectButton.test.tsx
   vitest run --coverage.enabled \
           --coverage.all=false \
           --coverage.reporter=lcov \
           --coverage.provider=v8 \
           --coverage.reportsDirectory="$(dirname /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov)" \
           --silent \
           --bail 0 \
           src/pages/dashboard/CreateProjectButton.test.tsx \
           && cat "$(dirname /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov)"/*.info > /tmp/test-suite-outputs.lcov-3417803461/outputs.lcov


 RUN  v4.0.8 /home/circleci/project
      Coverage enabled with v8

 ✓ src/pages/dashboard/CreateProjectButton.test.tsx (1 test) 43ms

 Test Files  1 passed (1)
      Tests  1 passed (1)
   Start at  13:48:54
   Duration  13.02s (transform 708ms, setup 540ms, collect 1.4s, tests 43ms, environment 356ms, prepare 29ms)

Found 127 files impacting test src/pages/dashboard/CreateProjectButton.test.tsx

Analyzed 2 tests in 23.357s
Updated test impact data in 23.501s

Using multiple test suites in a single repository

By default, Smarter Testing uses per-repository test impact analysis data. If you have multiple test suites in a single repository, their impact analysis data may conflict.

Set options.impact-key in the test suite configuration to group impact analysis data.

# .circleci/test-suites.yml
---
name: service-1 tests
options:
  test-impact-analysis: true
  impact-key: service-1
---
name: service-2 tests
options:
  test-impact-analysis: true
  impact-key: service-2

2.2 Validate test selection locally

Test selection requires the test impact data from the analysis phase. Earlier we validated that the analysis command runs and produces coverage data in the correct location and format, but we limited the time spent running analysis to 1 minute.

Update the .circleci/test-suites.yml to run the full analysis process. We will still set a duration in case the process takes a long time. Since analysis is incremental, any work done is not lost if the duration is exceeded.

  • Vitest

  • Jest

  • Yarn with Jest

  • pytest

  • Go

  • Go with gotestsum

# .circleci/test-suites.yml
---
name: ci tests
discover: vitest list --filesOnly
run: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
analysis: |
  vitest run --coverage.enabled \
             --coverage.all=false \
             --coverage.reporter=lcov \
             --coverage.provider=v8 \
             --coverage.reportsDirectory="$(dirname << outputs.lcov >>)" \
             --silent \
             --bail 0 \
             << test.atoms >> \
             && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: jest --listTests
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: |
  jest --runInBand \
       --silent \
       --coverage \
       --coverageProvider=v8 \
       --coverageReporters=lcovonly \
       --coverage-directory="$(dirname << outputs.lcov >>)" \
       --bail \
       << test.atoms >> \
       && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: yarn --silent test --listTests
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" yarn test --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: |
  yarn test --runInBand \
            --coverage \
            --coverageProvider=v8 \
            --coverageReporters=lcovonly \
            --coverage-directory="$(dirname << outputs.lcov >>)" \
            --bail \
            << test.atoms >> \
            && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: find ./tests -type f -name 'test*.py'
run: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --junit-xml="<< outputs.junit >>" \
         << test.atoms >>
analysis: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --cov \
         --cov-report=lcov:<< outputs.lcov >> \
         << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go test -race -count=1 << test.atoms >>
analysis: go test -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go tool gotestsum --junitfile="<< outputs.junit >>" -- -race -count=1 << test.atoms >>
analysis: go tool gotestsum -- -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15

Run analysis. This will run until all the tests are analyzed, or for approximately the test-analysis-duration timeout you set in your test suite configuration. If the analysis command exits because it runs out of time, you can run the test suite again; the process will pick up from where it left off.

$ circleci run testsuite "ci tests" --local --test-selection=none --test-analysis=impacted

Once the analysis is complete, modify a source file and then run the test suite. Look for the section starting Selecting tests… (some of the output from the command below has been elided):

$ circleci run testsuite "ci tests" --local
Running test-suite-subcommand version "1.0.14935-630104a" built "2025-11-25T16:15:39Z"
Testsuite timeout: 4h40m0s
Running test suite 'ci tests'

Suite Configuration:

name: ci tests
<... TEST SUITE CONFIGURATION ...>

Discovering...
Discovered 34 tests in 504ms

Selecting tests...
Found test impact version: 0
Using `impact-key` `default`
- 0 new tests
- 0 tests impacted by new files
- 4 tests impacted by modified files
Selected 4 tests, Skipped 30 tests in 0s

<... TEST RUN OUTPUT ...>

At this point your test suite is set up correctly:

  1. Test atoms are discovered.

  2. Test selection is driven from test impact analysis data.

  3. The analysis phase is correctly analyzing test impact.

The next step is to run your test suite in CI.

Troubleshooting

The analysis found 0 files impacting tests

Check that the analysis command is creating a correctly formatted coverage file by running the command locally and examining the coverage data.
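If your analysis command produces LCOV data, one quick sanity check is to count the source-file (SF:) records in the report. This sketch assumes the report was written to coverage/lcov.info:

$ grep -c '^SF:' coverage/lcov.info

A count of 0 means no covered files were recorded, which would explain the 0 files impacting tests result.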

If you would like assistance, share the coverage file in the closed beta Slack channel.

Test impact analysis not selecting expected tests

Symptoms: More tests run than expected, or tests you expect to run are skipped.

Solution: Ensure that your analysis phase has completed successfully. Test selection depends on coverage data from previous analysis runs. If analysis data is incomplete or outdated, the system may run more tests than expected or fall back to running all tests.

Debugging steps:

  1. Verify analysis has run successfully.

  2. Check that coverage data is being generated correctly.

  3. Review the full-test-run-paths option: changes to any of these paths trigger a full test run.

  4. Confirm the analysis command is producing valid coverage output, and that you are using the appropriate outputs variable for the coverage format.

Test impact analysis finds many more files impacting a test than it should

Analysis is based on the code covered by running an individual test atom. Sometimes the language runtime or test runner will eagerly load code as part of the test run setup, which can lead to all of the eagerly loaded code being considered covered by the test atom, even though that coverage is only an artifact of running the test suite.

In these cases you can set an analysis-baseline command to account for this extraneous coverage. See 5.2 Use the analysis-baseline when test atoms cover too many files.

3. Run your test suite in your CircleCI job

Now that your test suite configuration is set up correctly, you can run your test suite in a CircleCI job.

  1. Make sure that .circleci/test-suites.yml is checked in to your repository.

  2. Update your .circleci/config.yml to call the circleci run testsuite "ci tests" command instead of your regular test command.

  3. Push the change to your VCS.

For example, if your CircleCI test job was:

version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: vitest run --reporter=junit --outputFile="test-reports/tests.xml" --bail 0
      - store_test_results:
          path: test-reports

You would change it to:

version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          # This directory must match the directory of `outputs.junit` in your
          # test-suites.yml
          path: test-reports

Since there is no test impact analysis data stored in CircleCI for your test suite, all the tests will be selected and run when you push to CI. Confirm that the tests execute as expected.

It can be easier to debug issues with your test suite configuration in CI if you initially run the job without parallelism.

The analysis phase runs on the default branch. You can pre-populate the test impact analysis data before merging by temporarily adding the --test-selection=none and --test-analysis=impacted CLI flags to the test suite invocation:

version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      # override test selection and analysis defaults to perform analysis on a
      # feature branch
      - run: circleci run testsuite "ci tests" --test-selection=none --test-analysis=impacted
      - store_test_results:
          path: test-reports

Analysis will be limited to approximately 15 minutes due to options.test-analysis-duration in the test suite configuration. If analysis isn’t completing within those 15 minutes, you can remove options.test-analysis-duration to allow the process to run for as long as it needs.

Once the analysis has finished, remove the --test-selection=none and --test-analysis=impacted CLI flags. You can also remove or tune options.test-analysis-duration in your .circleci/test-suites.yml.

At this point:

  1. Your test suite is configured.

  2. You have run test impact analysis on your test suite.

  3. Your CircleCI config is updated to run your test suite.

The final step is to prepare and merge a PR with the changes to .circleci/config.yml so that your test suite is run in your CircleCI job.

Troubleshooting

Tests not being split correctly across nodes

Symptoms: Some parallel nodes finish much faster than others, or tests are not distributed evenly.

Solution: Verify that your test suite configuration includes historical timing data and that all test files are being detected. Check the step output for the "Sorted X tests" message to confirm that test atoms are being sorted by timing.

Debugging steps:

  1. Check that all test atoms are discovered with the discover command.

  2. Verify parallelism is set correctly in your .circleci/config.yml.

  3. Ensure test results are being stored with store_test_results.

Test results not appearing in the UI

Symptoms: No test results appear in the CircleCI UI, or tests that were skipped by selection do not appear in the CircleCI UI.

Solution: Confirm that outputs.junit points to the correct location and that the store_test_results step is used in your CI job. The path argument for store_test_results should be the directory that the outputs.junit file is stored in. Output from test batches is written to files in this directory with numeric suffixes. Skipped test results are written to a separate file in this directory with a -skipped suffix.

Example:

# .circleci/test-suites.yml
outputs:
  junit: test-reports/tests.xml
# Skipped tests written to test-reports/tests-skipped.xml
# Batched tests written to incrementing test-reports/tests-1.xml
# .circleci/config.yml
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          path: test-reports

Some CI nodes are taking longer to run tests

Dynamic Test Splitting uses a single queue for all parallel nodes; each node fetches a dynamically sized batch of tests to run.

At the start of the queue, the batch size is large and becomes smaller as the queue empties.

Batching from the queue allows tests to be distributed evenly across nodes, ensuring balanced work even when some nodes start up slowly or take longer than expected to run tests.

Some test runners and language runtimes can have a reasonably large overhead before they can start running tests. That overhead is paid on every batch, so it interacts poorly with queue-based Dynamic Test Splitting; for such runners, leaving the option disabled can be faster (see 4. Test suite configuration options).

Otherwise, add options.dynamic-test-splitting: true to your test suite configuration:

# .circleci/test-suites.yml
---
name: ci tests
discover: vitest list --filesOnly
run: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  dynamic-test-splitting: true

4. Test suite configuration options

The following options can be defined in the options map of your test suite configuration:

  • dynamic-test-splitting (default: false): Whether the tests should be distributed across a shared queue and fetched across multiple dynamic batches. If a test runner has slow start-up time per batch, disabling this can speed up tests.

  • test-impact-analysis (default: false): Enables the Smarter Testing features, such as test impact analysis.

  • timeout (default: 10): The time in minutes a step will wait for tests to become available when running in parallel.

  • full-test-run-paths (default: .circleci/*.yml, go.mod, go.sum, package-lock.json, package.json, project.clj, yarn.lock): A list of paths that might have an indirect impact on tests; the full test suite runs if a change is detected in any of them. To disable this option, provide an empty array: full-test-run-paths: []

  • test-analysis-duration (default: null): The maximum duration test analysis will run for, in minutes. Any remaining tests will be analyzed the next time test analysis is run.

  • impact-key (default: "default"): Groups related impact analysis data together using a matching key within the same project.

The following flags are available on the circleci run testsuite command:

--test-analysis=all|impacted|none

Default: impacted on branch main; none on all other branches.

  • all analyzes all discovered tests, used to override any existing impact data.

  • impacted analyzes only tests impacted by a change, used to refresh impact data.

  • none skips analysis.

--test-selection=all|impacted|none

Default: all on branch main; impacted on all other branches.

  • all selects and runs all discovered tests, used to run the full test suite.

  • impacted selects and runs only the tests impacted by a change.

  • none skips running tests, used to skip straight to analysis.
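For example, to rebuild the impact data from scratch without running the selected tests, the flags can be combined like this:

$ circleci run testsuite "ci tests" --test-selection=none --test-analysis=all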

5. Optional configuration

5.1 Use the file-mapper command when test atoms are not file names

Skip this step if the test atoms discovered by your test suite are file names; it is only necessary when test atoms are something other than file names.
One language that requires a file-mapper command is Go, since a test atom is a Go package, which may comprise several test files.

When your test atoms are something other than file names, Smarter Testing needs a way to determine which files store the implementation of each test atom.

The file-mapper command maps a test atom to the files that implement it. It is used during analysis and test selection to ensure that a test is considered impacted by changes to its own source files.

It may be useful to run the file-mapper command in your shell to verify the output.
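For example, with the Go configuration below you can inspect the mapping data directly (output truncated with head for readability):

$ go list -json="Dir,ImportPath,TestGoFiles,XTestGoFiles" ./... | head -n 20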

The test atom to map can be specified in one of two ways:

  • Use the template variable << test.atoms >> in the file-mapper command. This will be replaced with the test atom to analyze.

  • If the template variable is not found in the file-mapper command, the test atom will be passed on stdin.

  • Go

  • Go with gotestsum

# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go test -race -count=1 << test.atoms >>
analysis: go test -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
file-mapper: go list -json="Dir,ImportPath,TestGoFiles,XTestGoFiles" ./... > << outputs.go-list-json >>
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go tool gotestsum --junitfile="<< outputs.junit >>" -- -race -count=1 << test.atoms >>
analysis: go tool gotestsum -- -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
file-mapper: go list -json="Dir,ImportPath,TestGoFiles,XTestGoFiles" ./... > << outputs.go-list-json >>

5.2 Use the analysis-baseline when test atoms cover too many files

If you see many files impacting each test during analysis (for example, Found 150 files impacting test…), this may be caused by shared setup code, such as global imports or framework initialization, being included in coverage.

This extraneous coverage can be excluded by providing an analysis-baseline command to compute the code covered during startup that isn’t directly exercised by test code. We call this "baseline coverage data".

The analysis-baseline command must produce coverage output written to a coverage template variable. The baseline coverage data can be in any supported coverage format. While it does not need to match your test coverage output format, using the same format (for example, LCOV format for << outputs.lcov >>) is recommended for consistency.

  1. Create a minimal test that only does imports/setup (no test logic); in the vitest example below this is called src/baseline/noop.test.ts (see the sketch after these steps).

  2. Add an analysis-baseline command to your test suite. This command will be broadly similar to your analysis command, except that it should only run the minimal test.
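As a sketch, the minimal vitest baseline test from step 1 could be created as follows. Whether additional imports are needed to trigger your project's shared setup is project-specific:

$ mkdir -p src/baseline
$ cat > src/baseline/noop.test.ts <<'EOF'
// Minimal baseline test: loads only the shared test setup, no app logic.
import { expect, it } from "vitest";

it("baseline", () => {
  expect(true).toBe(true);
});
EOF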

  • Vitest

  • Jest

  • Yarn with Jest

  • pytest

  • Go

  • Go with gotestsum

# .circleci/test-suites.yml
---
name: ci tests
discover: vitest list --filesOnly
run: vitest run --reporter=junit --outputFile="<< outputs.junit >>" --bail 0 << test.atoms >>
analysis: |
  vitest run --coverage.enabled \
             --coverage.all=false \
             --coverage.reporter=lcov \
             --coverage.provider=v8 \
             --coverage.reportsDirectory="$(dirname << outputs.lcov >>)" \
             --bail 0 \
             << test.atoms >> \
             && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
analysis-baseline: |
  vitest run --coverage.enabled \
             --coverage.all=false \
             --coverage.reporter=lcov \
             --coverage.provider=v8 \
             --coverage.reportsDirectory="$(dirname << outputs.lcov >>)" \
             --bail 0 \
             "src/baseline/noop.test.ts" \
             && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: jest --listTests
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: |
  jest --runInBand \
       --silent \
       --coverage \
       --coverageProvider=v8 \
       --coverageReporters=lcovonly \
       --coverage-directory="$(dirname << outputs.lcov >>)" \
       --bail \
       << test.atoms >> \
       && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
analysis-baseline: |
  jest --runInBand \
       --silent \
       --coverage \
       --coverageProvider=v8 \
       --coverageReporters=lcovonly \
       --coverage-directory="$(dirname << outputs.lcov >>)" \
       --bail \
       "src/baseline/noop.test.ts" \
       && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: yarn --silent test --listTests
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" yarn test --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: |
  yarn test --runInBand \
            --coverage \
            --coverageProvider=v8 \
            --coverageReporters=lcovonly \
            --coverage-directory="$(dirname << outputs.lcov >>)" \
            --bail \
            << test.atoms >> \
            && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
analysis-baseline: |
  yarn test --runInBand \
            --coverage \
            --coverageProvider=v8 \
            --coverageReporters=lcovonly \
            --coverage-directory="$(dirname << outputs.lcov >>)" \
            --bail \
            "src/baseline/noop.test.ts" \
            && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: find ./tests -type f -name 'test*.py'
run: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --junit-xml="<< outputs.junit >>" \
         << test.atoms >>
analysis: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --cov \
         --cov-report=lcov:<< outputs.lcov >> \
         << test.atoms >>
analysis-baseline: |
  pytest --disable-pytest-warnings \
         --no-header \
         --quiet \
         --tb=short \
         --cov \
         --cov-report=lcov:<< outputs.lcov >> \
         "test/noop/noop_test.py"
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go test -race -count=1 << test.atoms >>
analysis: go test -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
analysis-baseline: go test -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... ./noop_test
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15
# .circleci/test-suites.yml
---
name: ci tests
discover: go list -f '{{ if or (len .TestGoFiles) (len .XTestGoFiles) }} {{ .ImportPath }} {{end}}' ./...
run: go tool gotestsum --junitfile="<< outputs.junit >>" -- -race -count=1 << test.atoms >>
analysis: go tool gotestsum -- -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
analysis-baseline: go tool gotestsum -- -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... ./noop_test
outputs:
  junit: test-reports/tests.xml
options:
  test-impact-analysis: true
  test-analysis-duration: 15

The analysis-baseline command will be run just before running analysis. The coverage data produced by the analysis-baseline command will be subtracted from each test’s coverage during analysis. Rerun analysis and you should see fewer impacting files per test.
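Because any existing impact data was computed without the baseline, re-analyzing with --test-analysis=all (which overrides existing impact data) will pick up the change:

$ circleci run testsuite "ci tests" --local --test-selection=none --test-analysis=all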

5.3 Built-in commands

Smarter Testing is runner-agnostic and can be configured to support most (if not all) languages/test runners. We provide some built-in configurations which can be used as the basis for your test suite.

  • Vitest

  • Jest

  • Yarn with Jest

  • pytest

  • Go

  • Go with gotestsum

---
name: ci tests
runner: vitest
---
name: ci tests
runner: jest
---
name: ci tests
runner: yarn-jest
---
name: ci tests
runner: pytest
---
name: ci tests
runner: go
---
name: ci tests
runner: gotestsum

You can override the commands provided by the built-in configurations if necessary, for example, to use the go defaults with a different discover command:

---
name: ci tests
runner: go
discover: find . -type f -name '*_test.go' -exec dirname {} \; | sort -u

Examples

Run analysis on your default branch and selection on all other branches

No changes are required; this is the default setting.

Run analysis on a non-default branch and selection on all other branches

CircleCI configuration for running analysis on a branch named develop and selection on all other branches
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: 4
    steps:
      - setup
      - run: circleci run testsuite "ci tests" --test-analysis=<< pipeline.git.branch == "develop" and "impacted" or "none" >>
      - store_test_results:
          path: test-reports

Run higher parallelism on the analysis branch

CircleCI configuration for running parallelism of 10 on the main branch and 2 on all other branches
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: << pipeline.git.branch == "main" and 10 or 2 >>
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          path: test-reports

Run full analysis on a scheduled pipeline, and timeboxed analysis on main

CircleCI configuration for running analysis only on scheduled pipelines
# .circleci/config.yml
version: 2.1
parameters:
  run-scheduled-analysis:
    type: boolean
    default: false
jobs:
  analysis:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "scheduled tests"
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "main tests"
      - store_test_results:
          path: test-reports
workflows:
  scheduled-analysis:
    when: pipeline.parameters.run-scheduled-analysis
    jobs:
      - analysis
  main:
    when: not pipeline.parameters.run-scheduled-analysis
    jobs:
      - test
Test suite config: set a time limit of 10 minutes for the analysis on the main branch
# .circleci/test-suites.yml
---
name: "main tests"
# rest of test suite config.
options:
  test-impact-analysis: true
  test-analysis-duration: 10 # Analyze the slowest tests first for a max of 10 minutes.
---
name: "scheduled tests"
# rest of test suite config.
options:
  test-impact-analysis: true

Frequently asked questions

How often should I run the analysis phase?

The frequency depends on your test execution speed and development pace:

For fast test suites (coverage analysis runs quickly):

Run analysis on every default branch build. This keeps impact data continuously up to date and ensures the most accurate test selection on other branches.

For slower test suites (coverage analysis is expensive):

Balance the freshness of impact data against CI/CD resource costs:

  • Run analysis on a scheduled pipeline targeting your default branch. Use a frequency based on your development pace (for example: nightly or after significant changes).

  • Timebox analysis on every default branch build, for example, allowing 10 minutes of analysis. This helps keep the analysis data up to date for smaller incremental changes.

You can customize which branches run analysis in your CircleCI configuration; analysis does not have to be limited to the default branch.

What happens if no tests are impacted by a change?

If test selection determines that no tests are affected by your changes then it won’t run anything.

This typically happens when:

  • You modify files that are not covered by any tests.

  • Changes affect configuration files not tracked by impact analysis.

Include relevant paths in full-test-run-paths to explicitly trigger full test runs when configuration files change.
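For example, assuming the configured list replaces the defaults (an empty array disables the option entirely), list the default paths you still need alongside your own; the config/*.yml glob here is an assumption for illustration:

# .circleci/test-suites.yml
options:
  full-test-run-paths:
    - .circleci/*.yml
    - package.json
    - config/*.yml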

How do I know if Smarter Testing is working?

Look for these indicators in your CircleCI build output.

Historical timing data is being found and used to order test execution:

  • Autodetected filename timings for N tests

  • Autodetected classname timings for N tests

  • Autodetected testname timings for N tests

Impact data is being found and used to select tests:

Found test impact generated by: https://app.circleci.com/pipelines/...
Using `impact-key` `default`
- 2 new tests
- 3 tests impacted by new files
- 5 tests impacted by modified files
Selected 10 tests, Skipped 19 tests in 1ms

Can I run analysis on branches other than the default branch?

Yes, the branch behavior is customizable in your .circleci/config.yml by passing the --test-analysis flag to the circleci run testsuite command.

The argument to --test-analysis can be a CircleCI configuration template expression, allowing you to vary behavior by branch name.

For example:

  • Any specific branch (for example, develop).

  • Feature branches if needed for testing.

See the Run analysis on a non-default branch and selection on all other branches example for details on customizing branch behavior.

Can I control test selection on any branch?

Yes, the branch behavior is fully customizable through your CircleCI configuration. While test selection runs on feature branches by default, you can override this behavior with the --test-selection flag.

The argument to --test-selection can be a CircleCI configuration template expression, allowing you to vary behavior by branch name. For example, you can target:

  • Any specific branch (for example, develop or staging).

  • Feature branches if needed for testing.

  • Scheduled pipelines.
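For example, mirroring the analysis example earlier, this sketch runs selection on a staging branch and the full suite everywhere else:

# .circleci/config.yml
- run: circleci run testsuite "ci tests" --test-selection=<< pipeline.git.branch == "staging" and "impacted" or "all" >>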

See the Run analysis on a non-default branch and selection on all other branches example for details on customizing branch behavior.