Adaptive testing (Preview)
| Adaptive testing is available in closed preview. When the feature is made generally available there will be a cost associated with access and usage. |
| This page is currently in development and will be updated as the feature is developed. |
Use adaptive testing to run only tests that are impacted by code changes and evenly distribute tests across parallel execution nodes. Adaptive testing reduces test execution time while maintaining test confidence.
Key benefits:

- Faster CI/CD pipelines through intelligent test selection.
- Optimized resource usage and cost efficiency.
- Fast feedback loops for development teams.
- Efficient scaling as test suites grow.
How it works
The adaptive testing feature operates through two main components that work together to optimize your test execution:
Dynamic test splitting
Dynamic test splitting distributes your tests across parallel execution nodes. The system maintains a shared queue that each node pulls from to create a balanced workload.
When you configure parallelism in your job, adaptive testing automatically:
- Retrieves timing data from previous test runs.
- Calculates optimal test distribution across your specified number of parallel nodes.
- Ensures each node receives approximately the same amount of work based on execution time.
This approach prevents situations where one node finishes quickly while another node runs a few slow tests, maximizing the efficiency of your parallel execution.
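The balancing described above can be sketched as a greedy, longest-first assignment. This is an illustration only (the function name and data shapes are assumptions, and the real scheduling is handled by CircleCI's shared queue), but it shows why timing data matters:

```python
# Illustrative sketch (not CircleCI's implementation): balance tests
# across parallel nodes using historical timing data, longest first.
import heapq

def split_by_timing(tests, timings, num_nodes):
    """Assign each test to the node with the least accumulated time.

    tests:   list of test identifiers
    timings: dict of test -> historical duration in seconds
    """
    # Place the longest tests first, while nodes are still lightly loaded.
    queue = sorted(tests, key=lambda t: timings.get(t, 0.0), reverse=True)
    nodes = [(0.0, i, []) for i in range(num_nodes)]  # (total_time, node_id, tests)
    heapq.heapify(nodes)
    for test in queue:
        total, node_id, assigned = heapq.heappop(nodes)
        assigned.append(test)
        heapq.heappush(nodes, (total + timings.get(test, 0.0), node_id, assigned))
    return sorted(nodes, key=lambda n: n[1])

timings = {"a.test.ts": 30, "b.test.ts": 10, "c.test.ts": 10, "d.test.ts": 10}
print(split_by_timing(list(timings), timings, 2))
# Both nodes finish in ~30s: one runs the single 30s test, the other runs
# the three 10s tests, instead of 40s/20s with a naive even split.
```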
Test impact analysis
Test impact analysis identifies which tests need to run based on the files that changed in your checked-out code. The results are stored as impact data for future test runs. The system works in two phases:
- Analysis phase: Builds a mapping between your tests and the code they exercise. Each test runs individually with code coverage enabled to determine which files it covers. By default, this runs on your main branch, but you can configure it to run on any branch or on a scheduled basis through your CircleCI configuration.
- Selection phase: Compares changed files against the test impact data and selects only tests that exercise modified code. By default, this runs on feature branches, but you can customize which branches use selection mode through your CircleCI configuration.
The analysis phase typically runs slower because it executes tests individually with coverage instrumentation. However, this one-time cost enables much faster test execution on branches using selection mode.
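The selection phase can be illustrated with a small sketch (illustrative only; the function name and the shape of the impact data are assumptions, not CircleCI's internals):

```python
# Illustrative sketch (not CircleCI's internals) of the selection phase:
# keep only the tests whose covered files intersect the changed files.
def select_tests(impact_data, changed_files, all_tests):
    """impact_data: dict of test -> set of files the test exercises."""
    changed = set(changed_files)
    if not impact_data:
        return list(all_tests)  # safety fallback: no impact data, run everything
    return [t for t in all_tests if impact_data.get(t, set()) & changed]

impact = {
    "src/cart.test.ts": {"src/cart.ts", "src/money.ts"},
    "src/login.test.ts": {"src/login.ts"},
}
print(select_tests(impact, ["src/money.ts"], list(impact)))
# Only src/cart.test.ts exercises src/money.ts, so only it is selected.
```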
1. Getting started
Goals of this section:
- Tests are run with the `testsuite` command.
- Test results are uploaded and visible in the CircleCI web app.
- Test timings match the existing test job.
1.1 Create test-suites.yml
Create a stubbed .circleci/test-suites.yml file in the project root. You will populate the values in the next step.
# .circleci/test-suites.yml
name: ci tests
discover: ##
run: ##
outputs:
  junit: ##
1.2 Locate your existing test job
Your existing test job will be a reference point for populating a new .circleci/test-suites.yml file.
- Find your test job in your CircleCI configuration file (`.circleci/config.yml`).
- If the step calls a custom script (for example, `task run-tests`), find the script.
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: JEST_JUNIT_OUTPUT_FILE="test-reports/tests.xml" jest --runInBand --reporters=jest-junit --bail --testPathPattern=src/
      - store_test_results:
          path: test-reports
1.3 Populate the discover command
The discover command finds all tests for a given test suite. The stdout of this command should be a space-separated or newline-separated list of tests that will be passed into the run command.
The discover command should not execute the tests.
All stdout is treated as tests, so any tool that prints extra metadata to stdout must be configured to omit it.
- Update the `.circleci/test-suites.yml` file with the discover command.
# .circleci/test-suites.yml
name: ci tests
discover: jest --listTests --testPathPattern=src/
run: ##
outputs:
  junit: ##
Examples for the discover command

| Test Framework | Command |
|---|---|
| Jest | `jest --listTests --testPathPattern=src/` |
| Yarn Jest | |
| Vitest | |
| pytest | |
| Go | |
1.4 Populate the run command
The run command executes the tests discovered by the discover command using a test runner.
"Discovered" tests can be run in one of two ways:
- Use the template variable `<< test.atoms >>` in the `run` command. This will be replaced with a space-separated list of tests to run.
- If the template variable is not found in the `run` command, each test will be newline-separated in stdin.
When collecting test results, use the template variable `<< outputs.junit >>` in the run command and define the location of the test results in the outputs map. This ensures that each batch of tests does not overwrite previous batches.
- Update the `.circleci/test-suites.yml` with the run command.
Checklist
- The run command defines `<< test.atoms >>` to pass in tests, or passes them in stdin.
- The run command defines `<< outputs.junit >>` to write test results.
# .circleci/test-suites.yml
name: ci tests
discover: jest --listTests --testPathPattern=src/
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
outputs:
  junit: test-reports/tests.xml
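To make the template substitution concrete, here is an illustrative sketch of how the run command could be expanded for each batch. The `circleci` CLI does this for you; the function below and its exact batch-suffix naming are assumptions based on the batching behavior described in this guide:

```python
# Illustrative sketch of how the run command's template variables could
# be expanded per batch (the circleci CLI handles this for you). The
# junit path gets a batch suffix so batches do not overwrite each other.
def expand_run_command(template, tests, junit_path, batch_number):
    root, ext = junit_path.rsplit(".", 1)
    command = template.replace("<< outputs.junit >>", f"{root}-{batch_number}.{ext}")
    if "<< test.atoms >>" in command:
        # Template variable present: substitute a space-separated list.
        return command.replace("<< test.atoms >>", " ".join(tests)), None
    # No template variable: tests are piped on stdin, newline-separated.
    return command, "\n".join(tests)

template = 'JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest << test.atoms >>'
cmd, stdin = expand_run_command(template, ["src/a.test.ts", "src/b.test.ts"],
                                "test-reports/tests.xml", 1)
print(cmd)
# JEST_JUNIT_OUTPUT_FILE="test-reports/tests-1.xml" jest src/a.test.ts src/b.test.ts
```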
Examples of run commands

| Test Framework | Command |
|---|---|
| Jest | `JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>` |
| Yarn Jest | |
| Vitest | |
| pytest | |
| Go | |
| gotestsum | |
1.5 Update your CircleCI configuration to use the test suite
Your .circleci/test-suites.yml file is now set up to match your existing way of running tests. You now need to update your CircleCI configuration to use your test-suites.yml file.
- Update your CircleCI configuration file (`.circleci/config.yml`) to use the `circleci run testsuite "ci tests"` command.
- Push a change and observe the step output of the test job.
version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          path: test-reports
Checklist
- The step output runs the tests.
- The "Test" tab reports the number of tests passed/failed.
Troubleshooting
My tests run slower using the test-suite
When using parallelism, confirm that the timing data is present for the tests. If the step output contains lines starting with No timing found for, the timing data is missing.
The two most common causes are:

- The tests were run with a different job name. In this case, rerunning the job should find timing data.
- The `<< outputs.junit >>` template variable is not set up correctly. Ensure that the run command uses the template variable and that the `store_test_results` step provides a path to a directory so that all batches of `<< outputs.junit >>` are stored.
If the tests are still slower, the test runner might have start-up overhead each time it is invoked. This can cause a significant slowdown with dynamic batching, because each batch repeats that start-up cost.
Add the `dynamic-batching: false` option to `.circleci/test-suites.yml` to disable dynamic batching.
# .circleci/test-suites.yml
name: ci tests
discover: jest --listTests --testPathPattern=src/
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
outputs:
  junit: test-reports/tests.xml
options:
  dynamic-batching: false
If tests are still slower, share the pipeline link in the closed preview Slack channel.
2. Enable adaptive testing
We recommend following the steps in 1. Getting started before enabling the adaptive testing feature, to ensure the discover and run commands are set up correctly.
The goal of this section is to enable adaptive testing for your test suite.
2.1 Update the test suites file
When using adaptive testing for test impact analysis, the discover command discovers all tests in a test suite, the run command runs only impacted tests, and a new command, the analysis command, analyzes each impacted test.
- Update the `.circleci/test-suites.yml` file to include a stubbed analysis command.
- Update the `.circleci/test-suites.yml` file to include the option `adaptive-testing: true`.
# .circleci/test-suites.yml
name: ci tests
discover: jest --listTests --testPathPattern=src/
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: ##
outputs:
  junit: test-reports/tests.xml
options:
  adaptive-testing: true
2.2 Populate the analysis command
The analysis command runs each impacted test, instrumented with coverage, to find the files that impact each test. This enables the run command to run only tests that are impacted by a change.
"Impacted" tests can be analyzed in one of two ways:
- The template variable `<< test.atoms >>` in the `analysis` command will be replaced with a single test.
- If the template variable is not found in the `analysis` command, the test will be passed in stdin.
When collecting coverage data, use a coverage template variable in the analysis command. This ensures that the coverage data can be parsed for each test analyzed.
Supported coverage template variables:
- `<< outputs.lcov >>`: Coverage data in LCOV format.
- `<< outputs.go-coverage >>`: Coverage data in Go coverage format.
- `<< outputs.gcov >>`: Coverage data in `gcov` coverage format.
The coverage location does not need to be set in the outputs map; a temporary file is created and used during analysis via the template variable in the analysis command.
- Update your `.circleci/test-suites.yml` file with the analysis command.
# .circleci/test-suites.yml
name: ci tests
discover: jest --listTests --testPathPattern=src/
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: jest --runInBand --silent --bail --coverage --coverageProvider=v8 --coverage-directory="$(dirname << outputs.lcov >>)" << test.atoms >> --coverageReporters=lcovonly && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
outputs:
  junit: test-reports/tests.xml
options:
  adaptive-testing: true
Checklist
- The `analysis` command defines `<< test.atoms >>` to pass in the test, or passes it in stdin.
- The `analysis` command defines `<< outputs.lcov|go-coverage|gcov >>` to write coverage data.
Examples of analysis commands

| Test Framework | Command |
|---|---|
| Jest | `jest --runInBand --silent --bail --coverage --coverageProvider=v8 --coverage-directory="$(dirname << outputs.lcov >>)" << test.atoms >> --coverageReporters=lcovonly && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>` |
| Yarn Jest | |
| Vitest | |
| pytest | |
| Go | |
| gotestsum | |
2.3 (Optional) Populate the file-mapper command
If the tests output from the discover command are files (for example, src/foo.test.ts), this section can be skipped.
The file-mapper command maps a test to a file. It is used during analysis and test selection to ensure that a test is impacted by changes to itself.
Tests can be mapped to files in one of two ways:

- The template variable `<< test.atoms >>` in the `file-mapper` command will be replaced with a single test.
- If the template variable is not found in the `file-mapper` command, the test will be passed in stdin.
# .circleci/test-suites.yml
name: ci tests
discover: jest --listTests --testPathPattern=src/
run: JEST_JUNIT_OUTPUT_FILE="<< outputs.junit >>" jest --runInBand --reporters=jest-junit --bail << test.atoms >>
analysis: jest --runInBand --silent --bail --coverage --coverageProvider=v8 --coverage-directory="$(dirname << outputs.lcov >>)" << test.atoms >> --coverageReporters=lcovonly && cat "$(dirname << outputs.lcov >>)"/*.info > << outputs.lcov >>
file-mapper: ##
outputs:
  junit: test-reports/tests.xml
options:
  adaptive-testing: true
Examples of file-mapper commands

| Test Framework | Command |
|---|---|
| Go | `go list -f '{{range .TestGoFiles}}{{$.Dir}}/{{.}}{{"\n"}}{{end}}{{range .XTestGoFiles}}{{$.Dir}}/{{.}}{{"\n"}}{{end}}' << test.atoms >>` |
2.4 Running analysis for the first time
By default, analysis runs for impacted tests on branches named main, and does not run on other branches. The first time analysis runs, all tests are impacted because no tests exist in the impact data.
This section will run analysis on a feature branch to seed the initial impact data.
Action Items
- Update `.circleci/config.yml` to include the `--test-analysis=impacted` CLI flag.
- (Recommended) Use parallelism to speed up the first analysis. Each test is analyzed individually and, depending on the test runner and the number of tests, this can take a long time.
- (Optional) Include `--test-selection=none` to skip running tests in the run command and go straight to the analysis command. This can be useful during initial setup if the tests take a long time to run.
- Push a change and observe the step output.
Checklist
- The step output includes the prefix Running impact analysis.
- The step output finds files impacting a test (for example, found 12 files impacting test src/foo.test.ts).
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: 15 # Set a high parallelism to speed up analysis.
    steps:
      - setup
      # Temporarily add test-analysis and (optional) test-selection flags.
      - run: circleci run testsuite "ci tests" --test-analysis=impacted --test-selection=none
      - store_test_results:
          path: test-reports
Troubleshooting
Analysis is taking too long or my job is timing out
Depending on the test runner and project size, there may be improvements that can be made to speed up coverage collection.
If no further optimizations can be made, the `test-analysis-duration` option can be defined to timebox the analysis to a number of minutes.
# .circleci/test-suites.yml
options:
  adaptive-testing: true
  test-analysis-duration: 60 # 60 minutes.
The analysis found 0 files impacting tests
Check the analysis command is creating a coverage file formatted correctly by running the command locally.
Test suite configuration options
The following options are available to be defined in the options map in config:

| Options Field | Default | Description |
|---|---|---|
| | 60 | The time in minutes a step will wait for tests to become available when running in parallel. |
| `adaptive-testing` | false | Enables the adaptive testing features, such as test impact analysis. |
| `full-test-run-paths` | | A list of paths that might have an indirect impact on tests and should trigger a full test suite run if a change is detected. To disable this option, provide an empty array. |
| `test-analysis-duration` | null | The maximum duration, in minutes, that test analysis will run for. |
| `dynamic-batching` | true | Whether the tests should be distributed across a shared queue and fetched in multiple dynamic batches. |
The following flags are available on the circleci run testsuite command:

| Flag | Default | Description |
|---|---|---|
| `--test-analysis` | On branch main: `impacted`; other branches: `none` | Controls whether the analysis command runs for impacted tests (for example, `impacted` or `none`). |
| `--test-selection` | On branch main: `none`; other branches: selection enabled | Controls whether only impacted tests are selected to run (`none` runs all discovered tests). |
3. Start using adaptive testing
Now the test suite is set up, test selection is working, and the test analysis is up to date with the latest changes from the feature branch that ran the first analysis.
Action Items
- Remove the temporary changes from the "Running analysis for the first time" section.
Checklist
- The `.circleci/config.yml` is set up to run analysis on the default branch.
- The `.circleci/config.yml` is set up to run selection on non-default branches.
- The `.circleci/config.yml` is set up to use high parallelism on the analysis branch.
Examples
Running analysis on a branch named main and selection on all other branches
No changes are required; this is the default setting.
Running analysis on a branch named master and selection on all other branches
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: 4
    steps:
      - setup
      - run: circleci run testsuite "ci tests" --test-analysis=<< pipeline.git.branch == "master" and "impacted" or "none" >>
      - store_test_results:
          path: test-reports
Running higher parallelism on the analysis branch
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: << pipeline.git.branch == "main" and 10 or 2 >>
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          path: test-reports
Running analysis on a scheduled pipeline and timeboxing some analysis on main
# .circleci/config.yml
version: 2.1
parameters:
  run-scheduled-analysis:
    type: boolean
    default: false
jobs:
  analysis:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "scheduled tests"
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "main tests"
      - store_test_results:
          path: test-reports
workflows:
  scheduled-analysis:
    when: << pipeline.parameters.run-scheduled-analysis == true >>
    jobs:
      - analysis
  main:
    when: << pipeline.parameters.run-scheduled-analysis == false >>
    jobs:
      - test
# .circleci/test-suites.yml
name: "main tests"
# rest of test suite config.
options:
  adaptive-testing: true
  test-analysis-duration: 10 # Analyze the slowest tests first for a max of 10 minutes.
---
name: "scheduled tests"
# rest of test suite config.
options:
  adaptive-testing: true
Limitations
The adaptive testing feature has some limitations to consider:
- Initial setup period: Test impact analysis requires an initial analysis run on all tests before intelligent selection can begin. This first analysis run will be slower than normal test execution.
- Analysis phase performance: The analysis phase can be significantly slower than normal test runs because tests execute individually with coverage instrumentation. Plan for this when setting up analysis on your configured branches.
- Coverage instrumentation overhead: Running tests with coverage enabled adds execution time. The trade-off is faster builds through intelligent test selection on branches using selection mode.
- No analysis data fallback: When impact data does not exist or cannot be determined, the system runs all tests as a safety measure. This ensures you never skip tests incorrectly, but may result in longer execution times until impact data is built.
Troubleshooting
Tests not being split correctly across nodes
Symptoms: Some parallel nodes finish much faster than others, or tests are not distributed evenly.
Solution: Verify that your test suite configuration includes historical timing data and that all test files are being detected. Check the step output for the "Sorted X tests" message to confirm sorting by timing.
Debugging steps:

- Check that all test files are discovered with the discover command.
- Verify parallelism is set correctly in your config.yml.
- Look for timing data in previous test runs.
- Ensure test results are being stored with `store_test_results`.
Test impact analysis not selecting expected tests
Symptoms: More tests run than expected, or tests you expect to run are skipped.
Solution: Ensure that your analysis phase has completed successfully on the branch(es) you have configured to run analysis. Test selection depends on coverage data from previous analysis runs. If analysis data is incomplete or outdated, the system may run more tests than expected or fall back to running all tests.
Debugging steps:

- Verify analysis has run successfully on your configured branch(es).
- Check that coverage data is being generated correctly.
- Review the full-test-run-paths configuration; changes to these paths trigger full test runs.
- Confirm the analysis command is producing valid LCOV output.
When all tests run: If no impact data exists or all tests are determined to be affected, the system runs all tests as a safety measure.
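The fallback behavior described throughout this guide can be summarized in a small sketch (illustrative only; the function and parameter names are assumptions, not CircleCI's internals):

```python
# Illustrative sketch of the safety fallbacks described in this guide:
# run the full suite when no impact data exists, when a change touches
# a full-test-run path, or when selection finds no impacted tests.
def choose_tests(all_tests, impacted_tests, impact_data_exists,
                 changed_files, full_run_paths):
    if not impact_data_exists:
        return all_tests  # no data yet: run everything
    if full_run_paths and any(
            f.startswith(tuple(full_run_paths)) for f in changed_files):
        return all_tests  # infrastructure change: run everything
    return impacted_tests or all_tests  # nothing selected: run everything

tests = ["a.test.ts", "b.test.ts"]
print(choose_tests(tests, [], True, ["Dockerfile"], ["Dockerfile"]))
# ['a.test.ts', 'b.test.ts'] -- a full-test-run path changed
```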
Skipped test results not appearing in the UI
Symptoms: Tests that were skipped by selection do not appear in the CircleCI UI.
Solution: Confirm that your outputs.junit configuration points to the correct location and that the store_test_results step is defined, pointing to that directory. Skipped test results are written to a separate file with a -skipped suffix in the same test results directory.
Example:
# .circleci/test-suites.yml
outputs:
  junit: test-reports/tests.xml
# Skipped tests are written to test-reports/tests-skipped.xml
# Batched tests are written to incrementing files: test-reports/tests-1.xml, test-reports/tests-2.xml, ...
# .circleci/config.yml
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          path: test-reports
Frequently asked questions
How often should I run the analysis phase?
The frequency depends on your test execution speed and development pace:
For fast test suites (coverage analysis runs quickly):

- Run analysis on every main branch build.
- This keeps impact data continuously up to date.
- It ensures the most accurate test selection on other branches.

For slower test suites (coverage analysis is expensive):

- Run analysis on a scheduled pipeline targeting your main branch.
- Schedule frequency based on your development pace (for example, nightly or after significant changes).
- Balance freshness of impact data against CI/CD resource costs.

Consider re-running analysis:

- After major refactoring or code restructuring.
- When test selection seems inaccurate or outdated.
- After adding significant new code or tests.
Remember: You can customize which branches run analysis through your CircleCI configuration; it does not have to be limited to the main branch.
Can I customize the test-suites.yml commands?
Yes, you can fully customize commands by defining discover, run, and analysis commands in your test suite configuration. This allows you to:
- Use test runners not included in the defaults.
- Override default runner behavior.
- Add custom flags or options.
- Specify different output formats or locations.

Requirements when customizing:

- Ensure your commands properly handle test execution.
- Generate valid coverage data for the analysis phase.
- Use the correct template variables (`<< test.atoms >>`, `<< outputs.junit >>`, `<< outputs.lcov >>`).
- Output test results in a format CircleCI can parse (typically JUnit XML).
See the "Custom Configuration" section for detailed examples.
What happens if no tests are impacted by a change?
When test selection determines that no existing tests are affected by your changes, the system will run all tests as a safety measure. This ensures:
- You never skip tests that should run.
- Changes without test coverage are still validated.
- New functionality that does not match existing impact data is tested.

This typically happens when:

- You modify files that are not covered by any tests.
- Impact data is outdated or incomplete.
- Changes affect infrastructure or configuration files not tracked by impact analysis.
Best practice: Include relevant paths in full-test-run-paths to explicitly trigger full test runs for infrastructure changes.
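For example (the specific paths below are placeholders; choose paths relevant to your project):

```yaml
# .circleci/test-suites.yml
options:
  adaptive-testing: true
  full-test-run-paths:
    - package.json   # dependency changes can affect any test
    - .circleci/     # CI configuration changes
```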
How do I know if adaptive testing is working?
Look for these indicators in your CircleCI build output:
- "Sorted X tests" message showing test distribution.
- Reduced test execution time on branches using selection mode compared to branches running analysis.
- "Skipped tests" output showing which tests were not selected.
- Test results showing only relevant tests executed.

You can also compare:

- Feature branch test execution time vs. main branch (if main runs analysis).
- Number of tests run on feature branches vs. the full test suite.
- Wall time reduction across parallel nodes.
Can I run analysis on branches other than main?
Yes! The branch behavior is fully customizable through your CircleCI configuration. While analysis typically runs on main by default, you can configure it to run on:
- Any specific branch (for example, `develop` or `staging`).
- Multiple branches simultaneously.
- Feature branches, if needed for testing.
- Scheduled pipelines, independent of branch.
See Scenario 3 in the "Flag Usage Scenarios" section for examples of customizing branch behavior.
What test frameworks are supported?
Adaptive testing is runner-agnostic. We provide default configurations for the following test frameworks:
- Jest (JavaScript/TypeScript)
- gotestsum (Go)
- Go test (Go)
- pytest (Python)
- Mocha (JavaScript)
- Cypress (E2E testing)
- Vitest
The key requirement is that your test runner can generate coverage data in a parsable format (typically LCOV or similar).