Managing CI/CD pipelines with Arm compute resource classes
Developer Advocate, CircleCI
Arm processors and architectures are becoming widely available as development teams adopt them as compute nodes in many application infrastructures. Organizations that need to run microservices, application servers, databases, and other workloads in a cost-effective way will continue to turn to the Arm architecture.
CircleCI customers who need Arm-based compute have several options. They can take advantage of the speed, flexibility, and convenience of CircleCI’s hosted execution environments by accessing our Arm-enabled Docker or machine executors. Teams that want additional control over their infrastructure can also use self-hosted runners on their own Arm compute resources. In this tutorial, I will demonstrate how to use cloud-based resources in your pipelines to build, test, and deploy applications for Arm.
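Both approaches ultimately come down to the resource_class you assign to a job. Here is a minimal sketch of the two options (the self-hosted runner namespace and class name are hypothetical placeholders, not something this tutorial sets up):

jobs:
  hosted-arm:
    machine:
      image: ubuntu-2004:202101-01
    resource_class: arm.medium                    # CircleCI-hosted Arm machine executor
  self-hosted-arm:
    machine: true
    resource_class: my-namespace/my-arm-runner    # hypothetical self-hosted runner resource class

The rest of this tutorial focuses on the hosted arm.medium resource class.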
Prerequisites
Before you can get started with this tutorial, you need to complete a number of tasks:
- Clone the arm-executors example repo from GitHub
- Sign up for a CircleCI account
- Create an Amazon Web Services (AWS) account
- Create an AWS IAM user with programmatic access
- Assign this user AWS ECS permissions
- Generate AWS access keys and secrets and save the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for later use
- Create a Docker Hub account
- Create a Docker Hub access token
- Install the Terraform CLI locally (a quick verification sketch follows this list)
- Create a Terraform Cloud account
- Create a new Terraform Cloud organization
- Create a new Terraform Cloud workspace named arm-aws-ecs and choose the "No VCS connection" option
- Enable local execution mode in the arm-aws-ecs workspace
- Create a new Terraform API token
- Create the following project environment variables in CircleCI:
  - AWS_ACCESS_KEY_ID: enter the value of the AWS Access Key ID
  - AWS_SECRET_ACCESS_KEY: enter the value of the AWS Secret Access Key
  - DOCKER_LOGIN: enter your Docker Hub username
  - DOCKER_PWD: enter the value of the Docker Hub access token
  - TERRAFORM_TOKEN: enter the value of the Terraform API token
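Since several of the Terraform-related prerequisites involve the CLI and a Terraform Cloud API token, here is an optional sanity check you can run locally. This is a hedged sketch, not part of the example repo; terraform login interactively creates and stores a Terraform Cloud API token, though you will still need to add a token to CircleCI as described above:

terraform version    # confirm the Terraform CLI is installed and on your PATH
terraform login      # optional: interactively create and store a Terraform Cloud API token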
Arm compute resource classes
The following sections of this tutorial demonstrate how to configure and execute CI/CD pipelines on Arm-based executors, and how to create, deploy, and destroy AWS ECS clusters built on AWS Graviton2 compute nodes using Terraform for infrastructure as code.
Implement Arm compute within the config.yml
The pipeline config example below shows how to define Arm resource classes.
version: 2.1
orbs:
node: circleci/node@4.2.0
jobs:
run-tests:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- node/install-packages:
override-ci-command: npm install
cache-path: ~/project/node_modules
- run:
name: Run Unit Tests
command: |
./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
- store_test_results:
path: test-results
- store_artifacts:
path: test-results
build_docker_image:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- run:
name: "Build Docker Image ARM V8"
command: |
export TAG='0.1.<< pipeline.number >>'
export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
docker push -a $DOCKER_LOGIN/$IMAGE_NAME
workflows:
build:
jobs:
- run-tests
- build_docker_image
In this code example, the run-tests: job shows how to specify a machine executor and assign it an Arm compute node resource class. The image: key specifies the operating system image assigned to the executor, and the resource_class: key specifies which CircleCI resource class to use. In this case, we're using the arm.medium resource class, which enables pipelines to execute and build code on and for Arm architectures and resources. The build_docker_image: job is a great way to use the arm.medium resource class to build an Arm64-capable Docker image that can be confidently deployed to Arm compute infrastructures, such as AWS Graviton2.
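The example config stops at building and pushing the image. If you want to confirm that the image you produced really is arm64, a hypothetical extra step like the one below could be appended to the build_docker_image: job (the step name and placement are illustrative assumptions, not part of the example repo):

      - run:
          name: Verify image architecture (hypothetical extra step)
          command: |
            uname -m   # prints aarch64 on the arm.medium machine executor
            docker image inspect $DOCKER_LOGIN/$CIRCLE_PROJECT_REPONAME --format '{{.Os}}/{{.Architecture}}'   # expect linux/arm64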
Deploy to AWS ECS
The code example in the previous section shows how to leverage Arm resource classes within a pipeline. In this section, I will show you how to extend that code to create AWS resources such as ECS clusters. I will create these resources, backed by AWS Graviton2 EC2 compute nodes, using Terraform and infrastructure as code.
version: 2.1
orbs:
node: circleci/node@4.2.0
commands:
install_terraform:
description: "specify terraform version & architecture to use [amd64 or arm64]"
parameters:
version:
type: string
default: "0.13.5"
arch:
type: string
default: "arm64"
steps:
- run:
name: Install Terraform client
command: |
cd /tmp
wget https://releases.hashicorp.com/terraform/<< parameters.version >>/terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
unzip terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
sudo mv terraform /usr/local/bin
jobs:
run-tests:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- node/install-packages:
override-ci-command: npm install
cache-path: ~/project/node_modules
- run:
name: Run Unit Tests
command: |
./node_modules/mocha/bin/mocha test/ --reporter mochawesome --reporter-options reportDir=test-results,reportFilename=test-results
- store_test_results:
path: test-results
- store_artifacts:
path: test-results
build_docker_image:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- run:
name: "Build Docker Image ARM V8"
command: |
export TAG='0.1.<< pipeline.number >>'
export IMAGE_NAME=$CIRCLE_PROJECT_REPONAME
docker build -t $DOCKER_LOGIN/$IMAGE_NAME -t $DOCKER_LOGIN/$IMAGE_NAME:$TAG .
echo $DOCKER_PWD | docker login -u $DOCKER_LOGIN --password-stdin
docker push -a $DOCKER_LOGIN/$IMAGE_NAME
deploy_aws_ecs:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- run:
name: Create .terraformrc file locally
command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
- install_terraform:
version: 0.14.2
arch: arm64
- run:
name: Deploy Application to AWS ECS Cluster
command: |
export TAG=0.1.<< pipeline.number >>
export DOCKER_IMAGE_NAME="${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}"
cd terraform/aws/ecs
terraform init
terraform apply \
-var docker_img_name=$DOCKER_IMAGE_NAME \
-var docker_img_tag=$TAG \
--auto-approve
destroy_aws_ecs:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- run:
name: Create .terraformrc file locally
command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
- install_terraform:
version: 0.14.2
arch: arm64
- run:
name: Destroy the AWS ECS Cluster
command: |
cd terraform/aws/ecs
terraform init
terraform destroy --auto-approve
workflows:
build:
jobs:
- run-tests
- build_docker_image
- deploy_aws_ecs
- approve_destroy:
type: approval
requires:
- deploy_aws_ecs
- destroy_aws_ecs:
requires:
- approve_destroy
This code extends the original pipeline config example. As you may have already noticed, a few new jobs have been defined. The deploy_aws_ecs:, approve_destroy:, and destroy_aws_ecs: jobs are the new elements in this extended config. Before I dive into them, I will describe the commands: and install_terraform: elements.
install_terraform: command
CircleCI is capable of encapsulating and reusing configuration code using pipeline parameters. The install_terraform: command is an example of defining reusable pipeline code. If your pipelines repeatedly execute specific commands, I recommend defining reusable command: elements to provide extensible and centrally managed pipeline configuration. Both the deploy_aws_ecs: and destroy_aws_ecs: jobs execute Terraform code, so the pipeline will need to download and install the Terraform CLI more than once. The install_terraform: command provides valuable reusability here.
commands:
install_terraform:
description: "specify terraform version & architecture to use [amd64 or arm64]"
parameters:
version:
type: string
default: "0.13.5"
arch:
type: string
default: "arm64"
steps:
- run:
name: Install Terraform client
command: |
cd /tmp
wget https://releases.hashicorp.com/terraform/<< parameters.version >>/terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
unzip terraform_<< parameters.version >>_linux_<< parameters.arch >>.zip
sudo mv terraform /usr/local/bin
This code block defines the install_terraform: reusable command. The parameters: key maintains a list of parameters. The version: and arch: parameters define the Terraform CLI version and CPU architecture, respectively, and determine which client binary gets downloaded and installed in the executor. Because this block of code represents a command: element, a command steps: key must be defined. In this example, the run: element executes the corresponding command: key, which downloads the specified Terraform client using the << parameters.version >> and << parameters.arch >> values to set the client version number and CPU architecture. Pipeline parameters are very useful for optimizing and centrally managing functionality within pipeline configuration. If you want to learn more, you can get all the details in the CircleCI docs.
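Because the architecture is a parameter, the same command works unchanged on x86_64 executors. For example, a job running on a default (amd64) machine executor could invoke it like this; the job below is a hypothetical illustration and is not part of the example config:

  install_terraform_amd64_example:
    machine:
      image: ubuntu-2004:202101-01   # default amd64 machine executor (no Arm resource class)
    steps:
      - install_terraform:
          version: 0.14.2
          arch: amd64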
deploy_aws_ecs job
The deploy_aws_ecs: job defined in the pipeline leverages infrastructure as code to create a new Amazon ECS cluster. It includes all of the required resources, such as virtual private clouds (VPC), subnets, route tables, application load balancers, and EC2 auto scaling groups. This job creates and provisions all the infrastructure needed to deploy and run applications. Because the target architecture is Arm, the AWS ECS cluster must be composed of AWS Graviton2 compute nodes. These nodes will execute the Arm-based Docker application image built in previous pipeline jobs.
deploy_aws_ecs:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- run:
name: Create .terraformrc file locally
command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
- install_terraform:
version: 0.14.2
arch: arm64
- run:
name: Deploy Application to AWS ECS Cluster
command: |
export TAG=0.1.<< pipeline.number >>
export DOCKER_IMAGE_NAME="${DOCKER_LOGIN}/${CIRCLE_PROJECT_REPONAME}"
cd terraform/aws/ecs
terraform init
terraform apply \
-var docker_img_name=$DOCKER_IMAGE_NAME \
-var docker_img_tag=$TAG \
--auto-approve
This code block demonstrates how to use the install_terraform: command I described previously. We have set the version: parameter to 0.14.2 and the arch: parameter to arm64. The final run: element initializes the Terraform code, then executes a terraform apply command with parameters that pass through the values of the Docker image name and tag created in this pipeline run. Upon completion, this job will create a fully functional, Graviton2-based AWS ECS cluster and deploy the application to it.
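If you ever need to reproduce this deployment from your own terminal, Terraform also reads input variables from TF_VAR_-prefixed environment variables. A rough local equivalent, assuming your AWS credentials and .terraformrc token are already configured and using a hypothetical image name and tag, would be:

export TF_VAR_docker_img_name="<your-docker-user>/arm-executors"   # hypothetical image name
export TF_VAR_docker_img_tag="0.1.42"                              # hypothetical tag
cd terraform/aws/ecs
terraform init
terraform apply --auto-approve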
destroy_aws_ecs job
We created the AWS ECS infrastructure in the deploy_aws_ecs: job. The destroy_aws_ecs: job performs the inverse and programmatically destroys all the infrastructure and resources that were created. This is the cleanest method of terminating infrastructure that is no longer needed.
destroy_aws_ecs:
machine:
image: ubuntu-2004:202101-01
resource_class: arm.medium
steps:
- checkout
- run:
name: Create .terraformrc file locally
command: echo "credentials \"app.terraform.io\" {token = \"$TERRAFORM_TOKEN\"}" > $HOME/.terraformrc
- install_terraform:
version: 0.14.2
arch: arm64
- run:
name: Destroy the AWS ECS Cluster
command: |
cd terraform/aws/ecs
terraform init
terraform destroy --auto-approve
In this code block, most of the job definition is the same as the previous one, except for the final run: element. In this element we are issuing a Terraform initialization and a terraform destroy command, which will, as expected, destroy all of the resources created in the previous step.
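After the destroy job completes, you can optionally confirm from your terminal that nothing was left behind. Assuming the AWS CLI is configured, and using us-east-1 as a placeholder region (the actual region is defined in the repo's Terraform code), a quick check might look like this:

aws ecs list-clusters --region us-east-1   # the cluster created by deploy_aws_ecs should no longer be listed
aws ec2 describe-instances --region us-east-1 --filters "Name=instance-state-name,Values=running"   # no leftover Graviton2 nodes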
Workflows: approve_destroy job
The last item I will discuss is the approve_destroy: job found in the workflows: element of the config example. This job is a manual approval type, where the workflow is intentionally halted and remains on hold until a manual interaction is completed. In this case, a button must be clicked in the CircleCI dashboard for the destroy_aws_ecs: job to execute. Without this approval job, the pipeline would automatically trigger the destroy job and terminate all the resources created in previous jobs. Approval-type jobs are useful for situations where manual intervention or approvals are required within pipeline executions.
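Approval gates are often combined with branch filters so that the deploy and destroy portion of a workflow only runs on a protected branch. A variant of the workflow above, assuming a main branch that the example repo may not actually use, might look like this:

workflows:
  build:
    jobs:
      - run-tests
      - build_docker_image
      - deploy_aws_ecs:
          filters:
            branches:
              only: main       # assumption: only deploy from the main branch
      - approve_destroy:
          type: approval
          requires:
            - deploy_aws_ecs
      - destroy_aws_ecs:
          requires:
            - approve_destroy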
Conclusion
CircleCI has introduced Arm-capable executors in the form of Arm compute nodes, giving developers access to Arm architectures for their pipelines. In this tutorial, I have demonstrated how to use CircleCI Arm compute nodes as pipeline executors. I have also shown how to deploy applications to AWS ECS clusters powered by AWS Graviton2 EC2 nodes using Terraform and infrastructure as code. All of the code examples in this tutorial can be found in the arm-executors repo on GitHub, and I highly encourage you to check it out. I would love to hear your feedback, thoughts, and opinions, so please join the discussion by tweeting to me @punkdata.
Thanks for reading!