The CircleCI command line interface (CLI) tool enables developers to build, validate, and test their pipeline jobs in their local development environments. This tutorial demonstrates this process and discusses some of the benefits of this development pattern. I’ll discuss the critical pipeline concepts and elements you’ll need to understand when building a pipeline configuration with the CLI tool.
Before you get started, you can read through the CircleCI CLI getting started guide. Then you’ll need to complete a few setup steps in your local development environment:
- Install Docker locally
- Create a CircleCI account
- Create a Snyk account
- Create a new Snyk API token
- Clone this code repository locally
- Add the new repo to CircleCI
- Create a CircleCI personal API token
- Install the CircleCI CLI tool locally
- Run the CLI setup using the API token
Upon completing the prerequisites, you will be ready to begin building pipeline jobs locally. In the next sections I’ll discuss some important CircleCI concepts. Then we’ll start building pipeline jobs locally and validating them with the CLI tool.
Before we start building pipeline jobs locally, I want to break down the major elements of a CircleCI pipeline configuration.
| Element | Description |
| --- | --- |
| Pipelines | CircleCI pipelines encompass the full set of workflows you run when triggering work on your projects using CircleCI. |
| Workflows | Workflows coordinate the jobs defined within your project configuration. |
| Executors | Each job defined within your configuration runs in its own executor/runtime, such as a Docker container or a virtual machine. |
| Jobs | Jobs are the building blocks of your configuration. Jobs are collections of steps, which run commands/scripts as required. |
| Steps | Steps are actions that need to be taken to complete your job. Steps are usually a collection of executable commands. |
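To see how these elements nest together, here is a minimal, hypothetical config skeleton (the job name, workflow name, and image are illustrative, not part of this tutorial’s project):

```yaml
version: 2.1
jobs:                     # jobs: the building blocks
  build:                  # a single job, run in its own executor
    docker:               # executor: a Docker container
      - image: cimg/base:stable
    steps:                # steps: the commands the job runs
      - checkout
      - run: echo "building..."
workflows:                # workflows: orchestrate the jobs above
  main:
    jobs:
      - build
```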
Being familiar with these concepts helps with understanding CI pipeline configurations and how they’re composed.
CircleCI is a platform that provides automation for continuous integration and continuous delivery tasks, which are defined in a CircleCI configuration file. The config file is the main interface between your code and the CircleCI platform. It tells CircleCI when, how, and what actions to perform on your code. The platform is very robust and can execute processes such as running automated tests, compiling artifacts, building Docker images, and deploying code to target environments, along with many other advanced processes.
Before you get started learning how to use the CLI tool, I want to set some expectations regarding its capabilities and limitations. The CLI tool can perform tasks like:
- Debugging and validating your CI config
- Running jobs locally
- Querying CircleCI’s API
- Creating, publishing, viewing, and managing orbs
- Managing contexts
I’ve listed the limitations below:
- Only jobs can be run locally. Pipelines and workflows do not run locally.
- You cannot use the machine executor in local jobs. This is because the machine executor requires an extra VM to run its jobs.
- It is currently not possible to add SSH keys.
- The CLI tool does not support running workflows, because workflows run jobs concurrently on multiple machines to achieve faster, more complex builds. Since the CLI runs only on your machine, it can execute only single jobs.
- Caching is not currently supported in local jobs. When you have either a save_cache or restore_cache step in your config, CircleCI will skip them and display a warning.
- For security reasons, encrypted environment variables configured in the UI will not be imported into local builds. As an alternative, you can specify environment variables for the CLI with the `-e` flag. If you have multiple environment variables, you must use the flag for each one. For example: `circleci build -e VAR1=FOO -e VAR2=BAR`.
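To make the caching limitation above concrete, here is a hypothetical config fragment (cache key and paths are illustrative); when executed with the CLI, both cache steps are skipped with a warning rather than failing the job:

```yaml
steps:
  - checkout
  - restore_cache:          # skipped in local runs, with a warning
      keys:
        - deps-v1-{{ checksum "package.json" }}
  - run: npm install
  - save_cache:             # also skipped locally
      key: deps-v1-{{ checksum "package.json" }}
      paths:
        - node_modules
```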
Using the CircleCI CLI
As previously mentioned, the pipeline configuration file is your interface to the CircleCI automation. It defines and controls your CI/CD processes. The CircleCI platform’s default behavior is to trigger a build based on the respective config file, on every code change/git commit pushed upstream to a shared repository on your version control system (VCS). Once the build is triggered, the platform executes the directives in the config file and produces a pass or fail outcome on the pipeline run. This outcome is great feedback for the developer. It gives them detailed information regarding where their code/pipeline is having issues so that they can quickly understand and resolve them. What I’m describing here is a typical process for an existing pipeline config file that has already passed the initial stages of config development.
Developing that initial pipeline config file for a project can be somewhat time and resource intensive, especially if you’re new to this paradigm. Generally, the pipeline config development process follows this pattern:
- Edit some code in the config file
- Commit those changes locally
- Push those changes to your upstream repo
- CircleCI executes your config file
- The pipeline build passes or fails
- If the build fails, debug the code and repeat these steps until it passes
This pattern is common, and as you can imagine, it requires many commits to develop a meaningful config file. It also consumes platform resources, which can lead to unnecessary automation cycles. Using the CLI tool, you can reduce unnecessary code commits and wasted resource cycles, streamlining the config file development pattern into something like this:
- Edit code locally
- Validate config file syntax
- Execute/run the specific job you modified
- The job passes or fails
- If the job fails, debug the code then repeat these steps until it passes
The main differences here are that you’re reducing the commit pollution, and you’re focusing on specific job elements rather than the entire pipeline configuration. This gives you the ability to quickly make changes, and to test and debug them locally without having to consume valuable platform resources. It also enables you to experiment and optimize your jobs and commands without polluting the versioning history of your VCS.
Let’s learn how to build and run a pipeline job using the CLI tool.
Running jobs using the CLI
In this section, I’ll demonstrate how to build a simple config file locally, validate the syntax, and run a job using the CLI tool.
The code repo from the prerequisites section has a sample application that we’re going to use for building jobs locally. Make sure you are at the root of the project and run this command to create the required directory and an empty config file:
```shell
mkdir .circleci && touch .circleci/config.yml
```
Next, open the `.circleci/config.yml` file with your favorite text editor and add the following:
```yaml
version: 2.1
jobs:
workflows:
```
The above syntax is the beginning of our config file with the elements of jobs and workflows defined. Now, let’s add an actual job to our config. Update your config file with the following code:
```yaml
version: 2.1
jobs:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            sudo npm install -g
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
workflows:
  build_test:
    jobs:
      - run_tests
```
We’ve defined a new job and workflow in the pipeline config file. The job is named `run_tests`. It runs tests on the application and saves the results to files. The `run_tests` job can be modified and tested locally using the CLI tool, without the need to commit changes or use valuable platform compute cycles.
The `build_test` workflow is where we define which jobs to execute in this pipeline. Workflows can be considered job orchestrators: they define how and when individual jobs are run within a pipeline. Since we currently have only one job, we’ll direct our workflow to execute the `run_tests` job when triggered.
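Workflow orchestration becomes more visible once a pipeline has more than one job. A hypothetical sketch (the `deploy` job is illustrative, not part of this tutorial’s config) using the `requires` key to order jobs:

```yaml
workflows:
  build_test:
    jobs:
      - run_tests
      - deploy:             # hypothetical second job
          requires:
            - run_tests     # only runs after run_tests succeeds
```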
Note: Workflows are not executed locally. Only the jobs defined in the config file are executed from the CLI tool.
Validating config file syntax
The CLI tool has a great feature that validates the YAML in the config file to ensure the syntax is compliant and valid. The validation feature should be run after every change to ensure you didn’t introduce formatting issues. I use it every time I make considerable changes to my config files. That way I can catch mistakes early and focus on more pressing issues with my config. Validate your config file using this command:
```shell
circleci config validate
```
Running this produces the following result:
```
Config file at .circleci/config.yml is valid.
```
If you have syntax issues in your config file, the validate process will flag them for you with lots of detail.
Testing jobs locally
Now that you have a valid config file, you can execute and test the `run_tests` job locally. Run the following command in a terminal:
```shell
circleci local execute --job run_tests
```
Next, the CLI will process the `run_tests` job. It will begin by downloading the specified Docker images (the first run takes longer than subsequent ones). Once the Docker executor/container is up and running, the `run` blocks will be processed and their commands executed.
```
====>> Run Unit Tests
  #!/bin/bash -eo pipefail
./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results

Node server is running..

  Welcome to CI/CD Server
    GET /
      ✓ returns status code 200 (39ms)
    welcomeMessage
      ✓ Validate Message

  2 passing (48ms)

[mochawesome] Report JSON saved to /home/circleci/project/test-results/test-results.json
[mochawesome] Report HTML saved to /home/circleci/project/test-results/test-results.html
Success!
```
The `run_tests` job has completed successfully, and you’ve validated that it will function as designed on the CircleCI platform. This is an awesome win! As I mentioned earlier, you were able to build and test a pipeline job locally without having to commit code to your repo or consume valuable compute on the platform. Using the CLI is a great way to maintain common development practices in local environments.
Using environment variables with the CLI tool
So far, we have defined, validated, and tested a job that runs automated unit tests on our app. Since those tests passed, let’s build a new job that executes a vulnerability scan on the application so we can identify any dangerous vulnerabilities in our code. In this section, I’ll demonstrate how to build a new job that uses the Snyk API token you created in the prerequisites section.
Before we get into the code, I want to discuss how to handle environment variables locally. It’s very common to use environment variables in jobs, and the CircleCI platform has features that enable you to securely define, protect, and use sensitive data from within your config file. Since we’re executing jobs locally, we won’t have access to the environment variables and their values stored on the platform. Lucky for us, there is a workaround with minimal impact.
You can specify the same environment variables that you configured in CircleCI in your local development environment, which provides a seamless experience. Depending on your local operating system, defining environment variables may differ from what I’m demonstrating here. In this example, I’m using Linux, and I’ll define the environment variable for this new job in my terminal:

```shell
export SNYK_TOKEN="<Replace this with your Snyk API Token>"
```
Once this environment variable is defined locally, I can create new pipeline jobs that use it, simulating access to it on the platform. Now, let’s build that new vulnerability scanning job.
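If you want to confirm that the variable is actually visible to the CLI (which runs as a child process of your shell), here is a quick sketch using a throwaway value:

```shell
# Throwaway value for illustration only -- never commit a real token.
export SNYK_TOKEN="example-token"

# Child processes, including the CircleCI CLI, inherit exported variables:
sh -c 'echo "SNYK_TOKEN is set: ${SNYK_TOKEN:+yes}"'
```

Any variable exported this way can then be forwarded to a local job with the `-e` flag shown earlier.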
Testing vulnerability scan jobs locally
We’re ready to create a new pipeline job that scans the application code for security vulnerabilities. First, update your `config.yml` file with the following code:
```yaml
version: 2.1
orbs:
  snyk: snyk/snyk@x.y.z
jobs:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            sudo npm install -g
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
  vulnerability_scan:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            sudo npm install -g
      - snyk/scan
workflows:
  build_test:
    jobs:
      - run_tests
      - vulnerability_scan
```
Before we execute the new job, let’s address some of the new syntax and features in this config. At the top of the file are the new lines:

```yaml
orbs:
  snyk: snyk/snyk@x.y.z
```

These lines implement a CircleCI orb. Orbs are reusable snippets of code that help automate repeated processes, speed up project setup, and make it easy to integrate with third-party tools. Since we’re using the Snyk scan tool in our pipeline job, implementing the Snyk orb is perfect for our pipeline.
The `- snyk/scan` line in the `vulnerability_scan` job demonstrates how to invoke the Snyk orb’s `scan` command. This triggers the scan and returns results that identify any vulnerabilities and, if any exist, suggest mitigation steps. Let’s execute this new job using the CircleCI CLI. Run this command in a terminal:
```shell
circleci local execute -e SNYK_TOKEN=$SNYK_TOKEN --job vulnerability_scan
```
The `-e` flag in the above command specifies an environment variable named `SNYK_TOKEN`, which is assigned the value of the `$SNYK_TOKEN` environment variable we defined in the previous section. The value of this environment variable is very sensitive and must be protected whenever it is used in a pipeline.
After I executed this job using the CLI, it failed because the security scan found issues, with the following output:
```
====>> Run Snyk test to scan app for vulnerabilities
  #!/bin/bash -eo pipefail
snyk test --severity-threshold=low

Testing /home/circleci/project...

Tested 168 dependencies for known issues, found 2 issues, 2 vulnerable paths.

Issues to fix by upgrading:
  Upgrade mocha@… to mocha@… to fix
  ✗ Prototype Pollution [Medium Severity][https://snyk.io/vuln/SNYK-JS-MINIMIST-559764] in minimist@…
    introduced by mocha@… > …
  ✗ Regular Expression Denial of Service (ReDoS) [High Severity][https://snyk.io/vuln/SNYK-JS-MOCHA-561476] in mocha@…

Organization:      datapunkz
Package manager:   npm
Target file:       package.json
Project name:      nodejs-circleci
Open source:       no
Project path:      /home/circleci/project
Licenses:          enabled

Run `snyk wizard` to address these issues.

Error: Exited with code 1
Step failed
Error: runner failed
Task failed
```
The Snyk tool has identified vulnerable dependencies tied to the version of Mocha in use and has offered mitigation solutions. In this case, the solution is to upgrade the version of Mocha defined in the `package.json` file. In reality, vulnerabilities are sometimes acceptable; this is the exception, not the rule. I highly encourage everyone to fix all known vulnerabilities whenever possible, but there are rare instances where this is not possible. In those scenarios, your pipeline should not fail because of a failed vulnerability scan that is “acceptable”. Luckily, Snyk has a flag that enables a pipeline to continue even if the scan fails. The Snyk orb has a parameter called `fail-on-issues`, which defaults to `true` and fails the job if a scan fails. Update your `- snyk/scan` line to match the following:

```yaml
- snyk/scan:
    fail-on-issues: false
```
Now execute the job again:

```shell
circleci local execute -e SNYK_TOKEN=$SNYK_TOKEN --job vulnerability_scan
```
Which produces these results:

```
Issues to fix by upgrading:
  Upgrade mocha@… to mocha@… to fix
  ✗ Prototype Pollution [Medium Severity][https://snyk.io/vuln/SNYK-JS-MINIMIST-559764] in minimist@…
    introduced by mocha@… > …
  ✗ Regular Expression Denial of Service (ReDoS) [High Severity][https://snyk.io/vuln/SNYK-JS-MOCHA-561476] in mocha@…

Explore this snapshot at https://app.snyk.io/org/datapunkz/project/6a7762c9-1447-4162-a776-de26d34ef418/history/88186524-4993-46a0-b67f-f4d4bd2331f3

Notifications about newly disclosed issues related to these dependencies will be emailed to you.

Success!
```
The Snyk scan results are identical to the previous run, but this time the pipeline job did not fail because we set the `fail-on-issues` parameter to `false`. Again, I want to stress that all identified security vulnerabilities should be addressed and mitigated as soon as possible; setting this flag to `false` is a dangerous action that should be thoroughly contemplated and reviewed before implementing.
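If you’d rather keep the job failing on serious findings only, a middle-ground sketch is possible: the scan invocation in the output above passes Snyk’s `--severity-threshold` option, and the orb exposes a corresponding `severity-threshold` parameter (check the Snyk orb documentation for the exact parameters available in your orb version):

```yaml
- snyk/scan:
    severity-threshold: high   # fail only on high-severity issues
```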
Below is the complete config file with the `fail-on-issues` parameter set to `false`:

```yaml
version: 2.1
orbs:
  snyk: snyk/snyk@x.y.z
jobs:
  run_tests:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            sudo npm install -g
      - run:
          name: Run Unit Tests
          command: |
            ./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
  vulnerability_scan:
    docker:
      - image: circleci/node:12
    steps:
      - checkout
      - run:
          name: Install npm dependencies
          command: |
            sudo npm install -g
      - snyk/scan:
          fail-on-issues: false
workflows:
  build_test:
    jobs:
      - run_tests
      - vulnerability_scan
```
Congratulations! You’ve just learned how to develop and test pipeline jobs in your local development environment using the CircleCI CLI. In this post, I focused on the various ways to develop and test jobs locally, but I highly recommend you continue to experiment with the CLI tool to unlock other development patterns that can ease your pipeline development and management efforts. The CLI tool also enables developers to manage and execute commands on other CircleCI products and features such as orbs, the CircleCI API, and contexts.
Thank you for following this post and I hope you found it useful. Please feel free to reach out with feedback on Twitter @punkdata.