Set up test impact analysis Beta
| Smarter Testing is available in beta. This means the product is in early stages and you may encounter bugs, unexpected behavior, or incomplete features. When the feature is made generally available, there will be a cost associated with access and usage. Refer to our Discuss post for more information about our beta launch. |
Test impact analysis runs only the tests impacted by your code changes, reducing test execution time while maintaining code quality.
Is my project a good fit for test impact analysis?
Test impact analysis works well on test frameworks with built-in coverage support (Jest, pytest, Go test, Vitest, RSpec) where generating coverage reports is straightforward.
Code coverage data is used to determine how tests are related to code. This works best when tests directly import and run your source code. It becomes difficult when tests call code running in separate containers or services, because it is not straightforward to collect and consolidate the coverage data.
How it works
Test impact analysis identifies which tests need to run based on the files that changed in your checked out code. The system works in two phases:
- Analysis phase: Builds a mapping between your tests and the code they exercise. Tests are run with code coverage enabled to determine which files they cover. Analysis results are stored as impact data for future test runs.
- Selection phase: Compares the current repository state against the most recent impact data. Tests covering modified files are selected to run. This includes new tests, modified tests, and tests that do not exercise any source code.
| The analysis phase typically runs slower than a normal test run because it executes tests with coverage instrumentation. However, this allows the selection phase to identify only the necessary tests to run, which leads to much faster test execution. |
Prerequisites
Before enabling test impact analysis, ensure you have completed the Getting Started With Smarter Testing guide and have:
- Installed the `testsuite` CLI plugin.
- Configured your `.circleci/test-suites.yml` with `discover` and `run` commands.
- Verified your tests run successfully with the `testsuite` command.
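If you followed the getting started guide, your `.circleci/test-suites.yml` will already contain `discover` and `run` commands, along the lines of this sketch (the Jest commands here are hypothetical placeholders; yours will match your own test runner):

```yaml
# .circleci/test-suites.yml (sketch; commands are illustrative)
---
name: ci tests
discover: jest --listTests
run: jest --runInBand << test.atoms >>
```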
1. Enable the test impact analysis option in your .circleci/test-suites.yml file
Update the test suite from the getting started guide to include an `analysis` command and the `test-impact-analysis` option. Keep the `discover` and `run` commands in the config.
Some starter configs are shown below. Choose the one for your test runner and copy/paste the analysis: … line into .circleci/test-suites.yml.
- Vitest
- Jest
- Mocha
- pytest
- Go with gotestsum
- RSpec
- Other
Follow the instructions to add the Vitest CircleCI Coverage plugin as a dev dependency.
Add the analysis command to your .circleci/test-suites.yml.
---
name: ci tests
# ...
analysis: CIRCLECI_COVERAGE=<< outputs.circleci-coverage >> vitest run --silent --bail 0 << test.atoms >>
options:
  test-impact-analysis: true
Follow the instructions to add the Jest CircleCI Coverage plugin as a dev dependency.
Add the analysis command to your .circleci/test-suites.yml.
---
name: ci tests
# ...
analysis: CIRCLECI_COVERAGE=<< outputs.circleci-coverage >> jest --runInBand --silent --bail << test.atoms >>
options:
  test-impact-analysis: true
Follow the instructions to add the Mocha CircleCI Coverage plugin as a dev dependency.
Add the analysis command to your .circleci/test-suites.yml.
---
name: ci tests
# ...
analysis: CIRCLECI_COVERAGE="<< outputs.circleci-coverage >>" mocha << test.atoms >>
options:
  test-impact-analysis: true
Follow the instructions to add the pytest CircleCI Coverage plugin as a dev dependency.
Add the analysis command to your .circleci/test-suites.yml.
---
name: ci tests
# ...
analysis: pytest --disable-pytest-warnings --no-header --quiet --tb=short --cov=myproj --cov-context=test --circleci-coverage=<< outputs.circleci-coverage >> << test.atoms >>
options:
  test-impact-analysis: true
---
name: ci tests
# ...
file-mapper: go list -json="Dir,ImportPath,TestGoFiles,XTestGoFiles" ./... > << outputs.go-list-json >>
analysis: go tool gotestsum -- -coverprofile="<< outputs.go-coverage >>" -cover -coverpkg ./... << test.atoms >>
options:
  test-impact-analysis: true
Follow the instructions to add the RSpec CircleCI Coverage plugin as a dev dependency.
Add the analysis command to your .circleci/test-suites.yml.
---
name: ci tests
# ...
analysis: CIRCLECI_COVERAGE="<< outputs.circleci-coverage >>" bundle exec rspec << test.atoms >>
options:
  test-impact-analysis: true
Smarter Testing is test runner agnostic. Replace the analysis example below with the equivalent command for your test runner; the next step will verify the commands are set up correctly.
---
name: ci tests
# ...
analysis: my-test-runner --coverage=<< outputs.coverage >> --run << test.atoms >>
options:
  test-impact-analysis: true
Configure the test suite
Code coverage cannot detect every relationship between source files and test atoms. The following options can be used to change how many test atoms are selected and run.
For the full list of available options, see the Options section in the test suite configuration reference.
Full test run paths
Some project files affect the running system without being directly covered by tests. Examples include dependency manifests, database migration files, or CI configuration. Use full-test-run-paths to list files that cause all test atoms to be selected and run.
# .circleci/test-suites.yml
---
name: ci tests
# ...
options:
  test-impact-analysis: true
  full-test-run-paths:
    - package.json
    - go.mod
    - .circleci/*.yml
    - database-migrations/**/*.sql
Test selection rules
Use test-selection-rules to extend test selection to cover non-source files, or to always run specific test atoms. For example, run integration tests when database migrations change, or always run acceptance tests regardless of which files changed.
# .circleci/test-suites.yml
---
name: ci tests
# ...
options:
  test-impact-analysis: true
  test-selection-rules:
    - test-atom: db/integration_test.ts
      include: database-migrations/**/*.sql
    - test-atom: acceptance/test.ts
      include: true
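Both options can be combined in a single `options` block. A sketch merging the examples above (the paths and test atom names are illustrative):

```yaml
# .circleci/test-suites.yml
---
name: ci tests
# ...
options:
  test-impact-analysis: true
  full-test-run-paths:
    - package.json
  test-selection-rules:
    - test-atom: acceptance/test.ts
      include: true
```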
2. Run locally
Use `--doctor` locally to validate that the test-suites.yml is set up correctly.
The CLI runs additional analysis checks when test impact analysis is enabled. If any results look incorrect, an action item is provided to resolve it. Follow the steps until all checks pass.
$ circleci run testsuite "ci tests" --doctor
| The test suite will automatically find the |
Once all checks pass, you can verify test impact analysis locally by inspecting the locally stored impact data in the .circleci/ directory.
To do this:
Remove the --doctor flag and add the following flags: --local --select-tests=none --analyze-tests=impacted.
$ circleci run testsuite "ci tests" --verbose --local --select-tests=none --analyze-tests=impacted
Then inspect the locally stored impact data in .circleci/ci tests-impact.json to see the mapping between tests and files. The impact data is a JSON file with a form like the following example:
{
  "version": 0,
  "files": {
    "1": {
      "path": ".circleci/test-suites.yml",
      "hash": "c9684be83632a628"
    },
    "2": {
      "path": "src/foo.ts",
      "hash": "c9684be83632a628"
    },
    ...
  },
  "edges": {
    "test-atom-one": ["2", "4", "7"],
    ...
  }
}
- `files` is a map of IDs to files, with the `path` relative to the working directory and a `hash` of the contents.
- `edges` is a map of test atoms to arrays of file IDs that impact each test atom.
Try modifying a source file whose ID appears in the edges, then run test selection to verify only impacted test atoms are selected.
$ circleci run testsuite "ci tests" --verbose --local --select-tests=impacted
Look for the "Selecting tests…" section in the output to see why a test was selected.
==> Selecting tests...
--> Selecting 'test-atom-one' due to modified file: 'src/foo.ts'
==> - 0 new test atoms
==> - 0 test atoms impacted by new files
==> - 1 test atoms impacted by modified files
==> - 0 test atoms impacted by removed files
==> - 0 test atoms failed previously
==> - 0 test atoms with no source file mappings in impact data
==> - 0 test atoms impacted by include rule
==> - 0 test atoms impacted by full test run paths
==> Selected 1 test atoms, Skipped 123 test atoms in 168ms
Local flags
| Flag | Default | Description |
|---|---|---|
| `--verbose` | | Provides additional details in output. |
| `--local` | | Run test selection and analysis using locally stored impact data in the `.circleci/` directory. |
| `--select-tests` | | Select which test atoms run, for example `none` or `impacted`. |
| `--analyze-tests` | | Select which test atoms are analyzed, for example `none` or `impacted`. |
3. Run in CI
When adding the testsuite command to your CircleCI jobs, there is no need to install the testsuite plugin as it is already available in CircleCI Docker containers.
| Use the `--select-tests` and `--analyze-tests` flags you used locally to verify analysis on your feature branch before merging into the default branch. |
`testsuite` command running analysis on a feature branch.
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      # Skip running all tests and only run analysis on the impacted tests.
      - run: circleci run testsuite "ci tests" --select-tests=none --analyze-tests=impacted
      - store_test_results:
          # This directory must match the directory of `outputs.junit` in your
          # test-suites.yml
          path: test-reports
| Analysis must only run on the default branch. Once verified, remove the flags to use the defaults or see the Common setup examples section to dynamically change flags. |
CI flags
| Flag | Default | Description |
|---|---|---|
| `--verbose` | | Provides additional details in output. |
| `--select-tests` | | Select which test atoms run, for example `none` or `impacted`. |
| `--analyze-tests` | | Select which test atoms are analyzed, for example `none` or `impacted`. |
Commit both .circleci/test-suites.yml and .circleci/config.yml to your feature branch and push to your VCS.
| Depending on the test runner, the first run of analysis can take a long time as every test needs to run analysis. Consider temporarily increasing the job parallelism on the feature branch. |
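For example, a temporary feature branch config might raise `parallelism` for the first analysis run (the executor name and parallelism value are illustrative):

```yaml
# .circleci/config.yml (temporary, feature branch only)
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: 8  # temporarily increased for the first full analysis run
    steps:
      - setup
      - run: circleci run testsuite "ci tests" --select-tests=none --analyze-tests=impacted
      - store_test_results:
          path: test-reports
```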
Once analysis has completed on the feature branch, restore the .circleci/config.yml back to its original state without the CLI flags and parallelism changes.
Restored config with the default `testsuite` command.
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    steps:
      - setup
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          # This directory must match the directory of `outputs.junit` in your
          # test-suites.yml
          path: test-reports
Commit the restored .circleci/config.yml to your feature branch and push to your VCS. Follow your usual process to merge to your default branch.
| The local impact JSON files generated from running the local CLI must not be checked in to your VCS. |
Verify in CI
After analysis completes on your feature branch, verify test impact analysis is working correctly.
- In the CircleCI web app, navigate to your pipeline and open the test job.
- Check the job was successful and analyzed all tests. A successful analysis run will output "Found n files impacting tests" for each test atom.
- Modify a source file that appears in the impact data, then push. Only the tests that cover the modified file are selected to run. Look for the "Selecting tests…" output in the job to confirm the correct tests are selected and the reason for selection.
Test impact analysis is now set up for your test suite. Feature branches will run the tests impacted by code changes, and your default branch will first run all tests and then analyze the tests impacted by code changes.
Other options are available if these defaults do not suit your project. See the Common setup examples section for more information.
Next steps
- Use Dynamic Test Splitting to evenly split tests across parallel nodes.
- Use Auto Rerun Failed Tests to automatically retry flaky tests.
Common setup examples
Analyze impacted tests and run all tests on your default branch
No changes are required; this is the default setting.
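For reference, the default behavior needs only the plain `testsuite` step with no selection or analysis flags (the executor name is illustrative):

```yaml
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: my-executor
    steps:
      - setup
      # Defaults: run all tests and analyze impacted tests on the default branch,
      # run only impacted tests on feature branches.
      - run: circleci run testsuite "ci tests"
      - store_test_results:
          path: test-reports
```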
Analyze impacted tests as a non-blocking job in the same workflow
This approach runs analysis concurrently with the rest of your workflow jobs. It can be helpful in reducing overall workflow time if analyzing tests takes longer than running tests.
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: my-executor
    parallelism: 4
    steps:
      - setup
      # Disable analysis, run all tests on default branches, impacted tests on feature branches.
      - run: circleci run testsuite "ci tests" --analyze-tests="none"
      - store_test_results:
          path: test-reports
  analysis:
    executor: my-executor
    steps:
      - setup
      # Disable running tests, analyze impacted tests.
      - run: circleci run testsuite "ci tests" --select-tests="none" --analyze-tests="impacted"
  deploy:
    executor: my-executor
    steps:
      - setup
      - run: ./deploy.sh
workflows:
  build-and-deploy:
    jobs:
      - test
      # Only analyze tests on main.
      - analysis:
          filters: pipeline.git.branch == "main"
      - deploy:
          requires:
            - test
          filters: pipeline.git.branch == "main"
Analyze impacted tests in a separate workflow in the same pipeline
This approach runs analysis concurrently with your main workflow, which is useful if you need to avoid any possible additional latency on your main workflow.
Only use this approach if analyzing impacted tests in a non-blocking job is not sufficient.
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: my-executor
    parallelism: 4
    steps:
      - setup
      # Disable analysis.
      # (Default) Run all tests on default branches, run impacted tests on feature branches.
      - run: circleci run testsuite "ci tests" --analyze-tests="none"
      - store_test_results:
          path: test-reports
  analysis:
    executor: my-executor
    steps:
      - setup
      # Disable running tests.
      # Analyze impacted tests.
      - run: circleci run testsuite "ci tests" --select-tests="none" --analyze-tests="impacted"
  deploy:
    executor: my-executor
    steps:
      - setup
      - run: ./deploy.sh
workflows:
  build-and-deploy:
    jobs:
      - test
      - deploy:
          requires:
            - test
          filters: pipeline.git.branch == "main"
  analysis-workflow:
    jobs:
      # Only analyze tests on main.
      - analysis:
          filters: pipeline.git.branch == "main"
Analyze impacted tests on a non-default branch and run tests on all other branches
This approach is useful if you use a non-default branch as the base of development, for example in the "git flow" development model.
Analysis on `develop` and selection on all other branches.
# .circleci/config.yml
version: 2.1
jobs:
  test:
    executor: node-with-service
    parallelism: 4
    steps:
      - setup
      # Analyze impacted tests on the "develop" branch, otherwise disable.
      # (Default) Run all tests on default branches, run impacted tests on feature branches.
      - run: circleci run testsuite "ci tests" --analyze-tests=<< pipeline.git.branch == "develop" and "impacted" or "none" >>
      - store_test_results:
          path: test-reports