Analyze pipelines and job steps during an incident
In an incident situation, use this guide to investigate what happened after you have paused or stopped pipelines. If you have not already done so, start with Pause or Stop Pipelines During an Incident. The tasks outlined on this page supplement your organization’s incident response process with CircleCI-specific API calls, audit log queries, and ready-to-run download scripts.
Introduction
When an incident occurs, you may need to investigate what happened in your pipelines. This guide provides a step-by-step process to help you investigate what ran and what the incident exposed.
Common incident scenarios include the following:
- A secret may have leaked. An API key, token, or credential leaked through job output, artifacts, or a compromised dependency.
- Unexpected code ran in a pipeline. A pull request, merged commit, or modified configuration triggered jobs that ran unauthorized commands.
- A dependency or orb is under suspicion. A third-party Docker image, orb, or package in your pipelines has a known vulnerability.
- A team member’s account is under suspicion. Someone used their access to trigger pipelines, change project settings, or read context secrets.
- A broader operational incident affected CI/CD behavior. Pipelines started, failed, or reran in ways your team did not expect.
The investigation process follows the same steps for each scenario.
The examples on this page use curl and jq. Set and export CIRCLE_TOKEN (a personal API token) and CIRCLE_HOST before running them.
1. Identify what ran
Use the CircleCI v2 API to list the pipelines, workflows, and jobs that executed during the time window you are investigating. All endpoints paginate results; follow next_page_token to retrieve additional pages.
1.1 List pipelines for a project
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/project/{project-slug}/pipeline" \
| jq '.items[] | {id, number, created_at, trigger}'
See List pipelines for a project in the API reference.
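The pagination pattern is the same for every v2 list endpoint. The loop below is a minimal sketch of following next_page_token; fetch_page is a stand-in for the curl call above, returning hypothetical canned pages so the control flow is visible on its own:

```shell
# fetch_page simulates successive API responses (hypothetical data);
# in practice this would be the curl call shown above, passing
# --data-urlencode "page-token=$PAGE_TOKEN" on every request after the first.
fetch_page() {
  case "$1" in
    "")      printf '{"items":[{"id":"p1"}],"next_page_token":"tok-2"}' ;;
    "tok-2") printf '{"items":[{"id":"p2"}],"next_page_token":null}' ;;
  esac
}

PAGE_TOKEN=""
ALL_IDS=""
while true; do
  PAGE=$(fetch_page "$PAGE_TOKEN")
  ALL_IDS="$ALL_IDS $(printf '%s' "$PAGE" | jq -r '.items[].id')"
  # // empty turns a null token into an empty string, ending the loop.
  PAGE_TOKEN=$(printf '%s' "$PAGE" | jq -r '.next_page_token // empty')
  [ -z "$PAGE_TOKEN" ] && break
done
echo "collected:$ALL_IDS"
```

The same loop structure, with curl in place of fetch_page, appears in the full download script later on this page.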
1.2 List workflows in a pipeline
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/pipeline/{pipeline-id}/workflow" \
| jq '.items[] | {id, name, status, created_at, stopped_at}'
1.3 List jobs in a workflow
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/workflow/{workflow-id}/job" \
| jq '.items[] | {job_number, name, status, started_at, stopped_at}'
1.4 Get job details
Returns executor type, resource class, and timing for a single job.
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/project/{project-slug}/job/{job-number}" \
| jq '{name, status, executor, started_at, stopped_at, duration}'
See Get job details.
For more on project slugs and pagination, see API Developers Guide.
Inspect step output
Step output is what each step printed to the console, the same content you see in the job view in the CircleCI web app. The v1.1 API also returns step metadata (the command that ran, exit code, timing) alongside the output.
Get step metadata for a job
The v1.1 build endpoint returns an array of steps and their metadata, including the bash_command that ran:
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v1.1/project/{vcs}/{org}/{project}/{build_num}" \
| jq '[.steps[]?.actions[]? | {step, name, bash_command, exit_code, status, start_time, end_time, run_time_millis}]'
Get step output for a single step
Each action in the build details response includes an output_url, a presigned URL that returns the console output (stdout/stderr) for that step. Follow it with a simple GET:
Presigned URLs expire in minutes but require no authentication. Do not share them.
# 1. Fetch build details to get the output_url for each step
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v1.1/project/{vcs}/{org}/{project}/{build_num}" \
| jq -r '.steps[2].actions[0].output_url'
# 2. Follow the output_url (no auth header needed -- the URL is presigned)
curl -s "<output_url from above>" | jq -r '.[].message'
- Steps are zero-indexed. steps[0] is typically Spin up environment, steps[1] is Checkout code, and so on.
- actions[0] is the first parallel run. For non-parallelized jobs there is only one action per step.
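During an investigation it is often easier to select a step by name than by index. The jq filter below picks out the output_url for a named step; the build JSON here is a hypothetical fragment carrying the same fields the v1.1 response uses:

```shell
# Hypothetical slice of a v1.1 build-details response (real responses
# include many more fields per action).
BUILD='{"steps":[
  {"actions":[{"name":"Spin up environment","step":0,"output_url":"https://example.com/out-0"}]},
  {"actions":[{"name":"Run tests","step":2,"output_url":"https://example.com/out-2"}]}
]}'

# Select by step name instead of hardcoding .steps[2].actions[0].
URL=$(printf '%s' "$BUILD" \
  | jq -r '.steps[].actions[] | select(.name == "Run tests") | .output_url')
echo "$URL"
```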
Download step output for a workflow
The following script downloads step metadata and console output from every job in a workflow. It executes none of the downloaded content, saving step metadata as JSON and each step’s output as plain text.
WORKFLOW_ID="your-workflow-id"
VCS_SLUG="github/your-org/your-repo"
OUT_DIR="circleci-output/$WORKFLOW_ID"
JOBS_JSON=$(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/workflow/$WORKFLOW_ID/job")
for JOB_NUM in $(printf '%s' "$JOBS_JSON" | jq -r '.items[].job_number'); do
JOB_NAME=$(printf '%s' "$JOBS_JSON" | jq -r ".items[] | select(.job_number == $JOB_NUM) | .name")
JOB_DIR="$OUT_DIR/job-$JOB_NUM"
mkdir -p "$JOB_DIR"
printf '%s' "$JOBS_JSON" \
| jq ".items[] | select(.job_number == $JOB_NUM) | {name, status, started_at, stopped_at}" \
> "$JOB_DIR/job-info.json"
# Fetch v1.1 build details: step metadata + output_url per step
BUILD_JSON=$(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v1.1/project/$VCS_SLUG/$JOB_NUM")
printf '%s' "$BUILD_JSON" \
| jq '[.steps[]?.actions[]? | {step, index, name, bash_command, start_time, end_time, exit_code, status, run_time_millis, background, parallel}]' \
> "$JOB_DIR/step-info.json"
# Download console output using output_url from each step action
printf '%s' "$BUILD_JSON" \
| jq -r '.steps[]?.actions[]? | select(.has_output == true) | "\(.step)\t\(.output_url)"' \
| while IFS=$'\t' read -r STEP_IDX OUTPUT_URL; do
[ -z "$OUTPUT_URL" ] && continue
curl -s "$OUTPUT_URL" | jq -r '.[].message' \
> "$JOB_DIR/step-$STEP_IDX.txt"
done
echo "Saved steps for job $JOB_NUM ($JOB_NAME)"
done
find "$OUT_DIR" -type f -exec chmod a-x {} +
echo "Done. Output saved to $OUT_DIR/"
echo "Search with: grep -r 'search-term' $OUT_DIR/"
find … -exec chmod a-x removes execute permissions from files only (directories keep traverse bits). The files are plain text and JSON. Treat them as untrusted data.
Use printf '%s' "$VAR" when piping JSON to jq, not echo "$VAR". On macOS, echo can interpret backslash sequences inside JSON strings and corrupt the stream.
This produces a folder structure you can browse and search locally:
circleci-output/
bd0f7dfb-134c-4aab-93f4-ab23a1f31c15/
job-1/
job-info.json <-- job name, status, timing (from v2 API)
step-info.json <-- per-step name, bash_command, exit_code, timing
step-0.txt <-- console output (stdout/stderr)
step-99.txt
step-101.txt
job-2/
job-info.json
step-info.json
step-103.txt
step-104.txt
The step-info.json file contains an array of step actions. Each entry includes the bash_command that ran, the step name, exit_code, start_time, end_time, run_time_millis, and whether the step ran in background or parallel mode. The step-*.txt filenames use CircleCI’s step IDs, so the numbers may not be sequential.
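Once downloaded, step-info.json lends itself to quick triage with jq. The filter below surfaces only steps that exited non-zero, together with the command that ran; the input here is a small hypothetical sample with the fields described above:

```shell
# Hypothetical step-info.json content (same fields the download scripts save).
STEPS='[
  {"name":"Checkout code","bash_command":null,"exit_code":0},
  {"name":"Run deploy","bash_command":"./deploy.sh","exit_code":1},
  {"name":"Cleanup","bash_command":"rm -rf tmp","exit_code":0}
]'

# Keep only failing steps; print "name: command" for each.
FAILED=$(printf '%s' "$STEPS" \
  | jq -r '.[] | select(.exit_code != null and .exit_code != 0)
           | "\(.name): \(.bash_command)"')
echo "$FAILED"
```

Against real output, point the same filter at each job's step-info.json file instead of the inline sample.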
Download step output for a project in a time range
To download step metadata and output across all recent pipelines for a project:
PROJECT_SLUG="gh/your-org/your-repo"
VCS_SLUG="github/your-org/your-repo"
AFTER="2026-04-20T00:00:00Z"
OUT_DIR="circleci-output"
PAGE_TOKEN=""
while true; do
if [ -n "$PAGE_TOKEN" ]; then
PIPELINES_JSON=$(curl -s -G -H "Circle-Token: $CIRCLE_TOKEN" \
--data-urlencode "page-token=$PAGE_TOKEN" \
"$CIRCLE_HOST/api/v2/project/$PROJECT_SLUG/pipeline")
else
PIPELINES_JSON=$(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/project/$PROJECT_SLUG/pipeline")
fi
for PIPELINE_ID in $(printf '%s' "$PIPELINES_JSON" \
| jq -r --arg after "$AFTER" '.items[] | select(.created_at >= $after) | .id'); do
for WORKFLOW_ID in $(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/pipeline/$PIPELINE_ID/workflow" \
| jq -r '.items[].id'); do
JOBS_JSON=$(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/workflow/$WORKFLOW_ID/job")
for JOB_NUM in $(printf '%s' "$JOBS_JSON" | jq -r '.items[].job_number'); do
JOB_DIR="$OUT_DIR/$WORKFLOW_ID/job-$JOB_NUM"
mkdir -p "$JOB_DIR"
printf '%s' "$JOBS_JSON" \
| jq ".items[] | select(.job_number == $JOB_NUM) | {name, status, started_at, stopped_at}" \
> "$JOB_DIR/job-info.json"
BUILD_JSON=$(curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v1.1/project/$VCS_SLUG/$JOB_NUM")
printf '%s' "$BUILD_JSON" \
| jq '[.steps[]?.actions[]? | {step, index, name, bash_command, start_time, end_time, exit_code, status, run_time_millis, background, parallel}]' \
> "$JOB_DIR/step-info.json"
printf '%s' "$BUILD_JSON" \
| jq -r '.steps[]?.actions[]? | select(.has_output == true) | "\(.step)\t\(.output_url)"' \
| while IFS=$'\t' read -r STEP_IDX OUTPUT_URL; do
[ -z "$OUTPUT_URL" ] && continue
curl -s "$OUTPUT_URL" | jq -r '.[].message' \
> "$JOB_DIR/step-$STEP_IDX.txt"
done
done
done
done
NEXT_TOKEN=$(printf '%s' "$PIPELINES_JSON" | jq -r '.next_page_token // empty')
[ -z "$NEXT_TOKEN" ] && break
# Pipelines are returned newest-first. Stop paging once we are fully older than AFTER.
OLDEST_ON_PAGE=$(printf '%s' "$PIPELINES_JSON" | jq -r '.items[-1].created_at // empty')
if [ -n "$OLDEST_ON_PAGE" ] && [[ "$OLDEST_ON_PAGE" < "$AFTER" ]]; then
break
fi
PAGE_TOKEN="$NEXT_TOKEN"
done
find "$OUT_DIR" -type f -exec chmod a-x {} +
echo "Done. Output saved to $OUT_DIR/"
echo "Search with: grep -r 'search-term' $OUT_DIR/"
This script follows next_page_token for project pipelines so it can process more than the first page of results. Pipelines are returned in reverse chronological order.
These scripts only download text and JSON; nothing downloaded is executed. All content is piped through jq (a data parser) and written to files, never evaluated by the shell. find … chmod a-x strips execute bits from downloaded files only. Treat downloaded files as untrusted data. See Limitations for what step output and audit logs can and cannot tell you.
Download artifacts
Jobs may store files as artifacts. Use Storing Build Artifacts and the Get job artifacts API reference for the latest retrieval options and examples.
Review audit logs
Audit logs record control-plane events: changes to projects, contexts, orbs, webhooks, and settings. They help answer questions like "who changed what" and "which contexts a job accessed."
What audit logs capture
- context.secrets.accessed: a job used secrets from a context.
- context.env_var.created/deleted: someone added or removed a context variable.
- project.settings.update: project settings changed.
- checkout-key.create/delete: deploy key or user key changes.
- config_source.create/update/delete: config source modifications.
- workflow.job.start/finish: job lifecycle events.
- project.stop_building: someone used Stop Building.
See Audit Logs for the full event list and instructions on requesting or streaming logs.
How to access audit logs
Cloud: Request audit logs from Org Settings > Security > Audit Logs in the web app. Available for the past 12 months.
Cloud Scale: Stream audit logs to your own storage (S3) for bulk analysis. See Stream Audit Logs.
Server: The Cloud audit log page and Cloud streaming flow do not apply. Review your Support Bundle, platform logs, and the object storage backend your deployment uses for build output, artifacts, and audit/event logs.
Assess credential exposure
After identifying what ran, determine which secrets and credentials jobs had access to.
Inventory environment variable names
The open source CircleCI Environment Inspector generates an inventory of environment variable names across your projects and contexts. Run it with a Personal API Token and treat the output as sensitive.
List context secrets
circleci context show {vcs-type} {org-name} {context-name}
This returns secret names and creation dates, not values.
List project variables
curl -s -H "Circle-Token: $CIRCLE_TOKEN" \
"$CIRCLE_HOST/api/v2/project/{project-slug}/envvar" \
| jq '.items[] | {name, value}'
The API and CLI return names and masked values (last four characters visible). You cannot export plaintext values from the API.
Identify which contexts jobs used
If you have audit logs, filter for context.secrets.accessed events in your investigation time window. This tells you which contexts CircleCI injected into which jobs.
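If your audit logs arrive as JSON (for example via streaming), a jq filter can pull out the relevant events. The sketch below assumes one event object per line with action, occurred_at, and target.name fields; these field names are assumptions, so verify them against your own export's schema before relying on them:

```shell
# Hypothetical audit-log lines; the field names used here (action,
# occurred_at, target.name) are assumptions -- confirm against your export.
LOG='{"action":"context.secrets.accessed","occurred_at":"2026-04-21T10:00:00Z","target":{"name":"prod-context"}}
{"action":"project.settings.update","occurred_at":"2026-04-21T11:00:00Z","target":{"name":"my-project"}}'

# ISO-8601 UTC timestamps compare correctly as strings, so a plain
# range check bounds the investigation window.
HITS=$(printf '%s' "$LOG" \
  | jq -r --arg from "2026-04-21T09:00:00Z" --arg to "2026-04-21T10:30:00Z" \
      'select(.action == "context.secrets.accessed"
              and .occurred_at >= $from and .occurred_at <= $to)
       | .target.name')
echo "$HITS"
```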
For rotation steps, see the Secrets and Credentials section of Pause or Stop Pipelines During an Incident.
Limitations
Be aware of these boundaries when investigating:
-
Audit logs are control-plane only. They record who changed settings and which contexts a job accessed, not what happened at runtime inside a job.
-
Step output is stdout/stderr only. If malicious code ran without printing, it does not appear in step output. This also means you may not capture on-disk log filenames, paths, or other execution details that never reached the console stream.
-
Server installations keep some evidence outside CircleCI’s API. If you run CircleCI Server, also check your support bundle, platform logs, and any object storage you configured for build output or artifacts.
-
Environment variable values are masked. You cannot diff against known-good plaintext. You can only identify what was reachable and rotate accordingly.