Containers and microservices have revolutionized the way applications are deployed on the cloud. Since its launch in 2014, Kubernetes has become the de facto standard container orchestration tool.

Helm is a package manager for Kubernetes that makes it easy to install and manage applications on your Kubernetes cluster. One benefit of using Helm is that it lets you package all of the components required to run an application into a single, versioned artifact called a Helm chart. Another is that it lets you manage the lifecycle of your applications: you can upgrade effortlessly, roll back to a previous version, or delete an application entirely.
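For illustration, with a hypothetical chart directory ./myapp-chart and a release named myapp, that lifecycle maps onto the core Helm commands:

# Install a chart as a new release
helm install myapp ./myapp-chart

# Upgrade the release to a new chart version or configuration
helm upgrade myapp ./myapp-chart

# Roll the release back to revision 1
helm rollback myapp 1

# Delete the release and its resources entirely
helm uninstall myapp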

In this tutorial, you will learn how to deploy a Node.js application on Azure Kubernetes Service (AKS) with Helm and CircleCI. You will create a continuous integration and continuous delivery (CI/CD) pipeline that triggers automatically after you push changes to your code. Through this automation, you will always have the latest version of the application running on the Kubernetes cluster, along with the latest Helm chart release.

Prerequisites

To follow along with this tutorial, you will need a few things first:

  • A Microsoft Azure account, with the Azure CLI installed and connected to it
  • A GitHub account, and a CircleCI account linked to it
  • Node.js and npm installed on your machine
  • Docker installed on your machine
  • Helm 3 (version 3.7 or later) and kubectl installed on your machine

After you have all the prerequisites complete, you are ready to proceed to the next section.

Cloning the Node.js application

In this tutorial, our main focus is on deploying the application on Kubernetes, so you can directly clone the ready-made Node.js application from GitHub and continue with the rest of the process.

To clone the project, run:

git clone https://github.com/CIRCLECI-GWP/aks-nodejs-helm.git

There are two branches in this repository:

  1. main branch: This branch contains only the Node.js application code
  2. circleci-project-setup branch: This branch contains the application code along with all YAML files that we will create in this tutorial.

If you are following this tutorial, check out the main branch.

The Node.js application lives in the app.js file.

const express = require("express");
const path = require("path");
const morgan = require("morgan");
const bodyParser = require("body-parser");

/* eslint-disable no-console */

const port = process.env.PORT || 1337;
const app = express();

app.use(morgan("dev"));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json({ type: "application/vnd.api+json" }));

app.use(express.static(path.join(__dirname, "./")));

app.get("*", (req, res) => {
  res.sendFile(path.join(__dirname, "./index.html"));
});

app.listen(port, (err) => {
  if (err) {
    console.log(err);
  } else {
    console.log(`App at: http://localhost:${port}`);
  }
});
module.exports = app;

The key takeaway from this code is the port number on which the application will be running, which is 1337.

You can run the application locally by first installing the dependencies. In the project’s root, type:

npm install

Then run the application with the command:

node app.js

The application should now be up and running at the address http://localhost:1337.
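To quickly check that the server is responding, you can send a request from another terminal:

curl http://localhost:1337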

Containerizing the Node.js application

To deploy the application on Kubernetes, first you will need to containerize it. For containerizing applications using Docker as the container runtime tool, you will first need to create a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

Create a new file in the root directory of the project and name it Dockerfile.
Copy the following content in the file:

# Set the base image to use for subsequent instructions
FROM node:alpine

# Set the working directory for any subsequent ADD, COPY, CMD, ENTRYPOINT,
# or RUN instructions that follow it in the Dockerfile
WORKDIR /usr/src/app

# Copy the dependency manifest first so the install layer is cached
# until package.json changes
COPY package.json /usr/src/app/

# Execute any commands on top of the current image as a new layer and commit the results.
RUN npm install --production

# Copy the rest of the application source into the image's filesystem
COPY . /usr/src/app/

# Define the network port that this container will listen on at runtime.
EXPOSE 1337

# Configure the container to be run as an executable.
ENTRYPOINT ["npm", "start"]

If you have Docker installed, you can build and run the container locally for testing. In later sections, you will learn how to automate this process with CircleCI orbs.

To build and tag the container, you can type:

docker build -t aks-nodejs-helm:latest .

Confirm that the image was successfully created by running this command from your terminal:

docker images

Then run the container with the command:

docker run -it -p 1337:1337 aks-nodejs-helm:latest

The application should now be up and running at the address http://127.0.0.1:1337.

Commit and push the changes to your GitHub repository.

Pushing the container image to Azure Container Registry (ACR)

In this tutorial, you will be using Azure Container Registry to store the application container images as well as the Helm chart you will create in a later section.

Note: Helm 3 is required for hosting Helm charts in Azure Container Registry, which stores them as OCI artifacts.

To create the registry, you should have a Microsoft Azure account and Azure CLI installed on your computer. The CLI should be connected to your Azure account.

Once completed, you can create the container registry with the help of Azure CLI. First, you need to sign in to your Azure account using the following login command:

az login

Create a Resource Group with the following command:

az group create --name NodeRG --location eastus

Create the registry with the following command:

az acr create --resource-group NodeRG --name noderegistry --sku Basic

Note: Container registry names must be globally unique. The name used in the above command may not be available when you follow this tutorial; if so, replace it with another unique name of your choice.
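If you want to check availability before creating the registry, the Azure CLI provides a name check (shown here with this tutorial's registry name):

az acr check-name --name noderegistry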

Before pushing and pulling container images, you must log in to the registry with the following command:

az acr login --name noderegistry

Note: Be sure to use the name of the registry you created earlier in the above command.

Next, you will need to tag your container image to include the address of the ACR you have created:

docker tag aks-nodejs-helm:latest noderegistry.azurecr.io/aks-nodejs-helm:latest

Note: This command requires the Docker daemon to be running in the background. Make sure your user has the proper Docker privileges; you can also run the above command with sudo.

To push the container image created in the above step, execute the following command:

docker push noderegistry.azurecr.io/aks-nodejs-helm:latest

Now this container image can be downloaded and deployed on any Kubernetes cluster.
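To confirm that the image landed in the registry, you can list its repositories (substituting your registry name):

az acr repository list --name noderegistry --output table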

Configuring Helm charts for deploying the application

A Helm chart is a package that contains all the configuration files and Kubernetes resources needed to deploy an application on a Kubernetes cluster. It simplifies the process of deploying and managing applications on Kubernetes.

In this section, you will be defining a Helm chart for your Node.js application.

To get the boilerplate code of a basic Helm chart, you can run the following command in the root directory of the project:

helm create nodeapp

This command creates a Helm chart named nodeapp: a chart directory containing all the common files and directories used in a chart.

Note: You need to run this command from the root directory of the project.

A Helm chart is organized as a collection of files inside a directory. By executing this command, you will find that a new folder is created with the following directory structure:

nodeapp
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Here’s a brief summary of the files that are created:

  • Chart.yaml: This file contains metadata about your chart, such as its name, version, and dependencies.

  • values.yaml: This file contains default values for the configuration options of your chart. These values can be overridden by the user when installing the chart.

  • templates/: This directory contains the templates for the Kubernetes resource manifests that make up your chart. These templates can reference values from the values.yaml file, allowing users to customize the behavior of their deployment.

  • charts/: This directory contains any child charts that your chart depends on.

These files form the basic structure of a Helm chart and can be modified or extended as needed to suit the requirements of your application. But since we are building only a basic Helm chart for the Node.js application, many of the boilerplate files will not be needed, and you can safely delete them, as shown below.
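As a minimal sketch of that cleanup, assuming you keep only deployment.yaml and service.yaml, you could run the following from the project root:

rm -rf nodeapp/charts nodeapp/templates/tests
rm nodeapp/templates/hpa.yaml nodeapp/templates/ingress.yaml \
   nodeapp/templates/NOTES.txt nodeapp/templates/serviceaccount.yaml \
   nodeapp/templates/_helpers.tpl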

Finally, the nodeapp directory should have the following structure:

.
└── nodeapp
    ├── Chart.yaml
    ├── templates
    │   ├── deployment.yaml
    │   └── service.yaml
    └── values.yaml

Once the directory is properly set up, you can move on to writing the contents of these files.

To deploy applications on Kubernetes, you have to configure the various Kubernetes resources, incorporating all the settings required to run your application. Kubernetes uses YAML for configuration.

Kubernetes deployments manage stateless services running on your cluster. Their purpose is to keep a set of identical pods running and upgrade them in a controlled way, performing a rolling update by default.

Here are the contents of the templates/deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: nodejs
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: {{ .Values.image.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - name: http
              containerPort: {{ .Values.service.containerPort }}

Kubernetes Service is an abstraction that defines a logical set of pods and a policy by which to access them. You need a Kubernetes service of type LoadBalancer to make the deployment accessible to the outside world.

The contents of templates/service.yaml are as follows:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: nodejs
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.containerPort }}
  selector:
    app: nodejs

In both of these files, you can see the use of several placeholders, which allow different values to be passed to the files at runtime. This process is known as templating: generating Kubernetes manifests and other files from configuration values and variables that you specify.

To template a Helm chart, a template engine substitutes the variables and expressions in the chart's templates with their corresponding values. The templates are written in the Go template language, which provides a set of functions and syntax for defining variables, loops, and conditionals. The template directives are replaced with values from a variables file, commonly named values.yaml, or from Helm's built-in objects.

Paste the following in the values.yaml file in the nodeapp directory.

replicaCount: 1

image:
  name: aks-nodejs-helm
  repository: noderegistry.azurecr.io/aks-nodejs-helm
  tag: latest

service:
  type: LoadBalancer
  port: 80
  containerPort: 1337

Note: You will need to replace the registry URL above with your own.

In this file, you can see all the placeholder variables used in deployment.yaml and service.yaml have been referenced.

Further, paste the following in the Chart.yaml file located in the same directory.

apiVersion: v2
name: nodeapp
description: A Helm chart for deploying Node.js application on Kubernetes

type: application

version: 0.1.0

appVersion: "1.16.0"

To check whether all the files are properly configured, you can run the following command from the root directory:

helm lint ./nodeapp

If there are any errors, you will be notified along with the filename and line numbers. This is very useful for debugging large Helm charts.
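Beyond linting, you can preview the fully rendered manifests with helm template, which substitutes your values locally without installing anything on a cluster:

helm template nodeapp ./nodeapp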

Commit and push these files into the main branch of the GitHub repository you cloned earlier.

Pushing the Helm chart to Azure Container Registry (ACR)

In this section, you will be packaging the Helm chart you created in the previous step and pushing it to ACR.

ACR allows Helm charts to be uploaded as OCI artifacts, and Helm 3 is the version ACR recommends and supports. With Helm 3, you can manage your Helm charts in ACR easily with the Helm CLI.

Note: Helm client version 3.7 or later is recommended.

First, you will need to save the chart to a local archive. For this, go inside the nodeapp directory, which contains the Helm chart files. Execute the following:

helm package .

This will create a .tgz archive in the same directory. Now, you need to upload this archive to ACR as an OCI artifact. To do that, you first need to authenticate your Helm CLI with ACR. To authenticate, execute the following command:

helm registry login noderegistry.azurecr.io \
  --username $USER_NAME \
  --password $PASSWORD

The $USER_NAME and $PASSWORD in the above command can be fetched from the Azure Portal. You can find these values by visiting your container registry in ACR and selecting the Access Keys tab from the sidebar. You need to enable Admin User to view the username and password.

You can refer to the following screenshot:

[Screenshot: ACR Access Keys]

Copy the username and password from the console and paste them into the above command to authenticate your Helm CLI with ACR.

Finally, push your Helm chart archive to ACR as an OCI artifact with the following command:

helm push nodeapp-0.1.0.tgz oci://noderegistry.azurecr.io/helm
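If you want to verify the upload from the command line, you can pull the chart back from the registry as an OCI artifact (substitute your registry name):

helm pull oci://noderegistry.azurecr.io/helm/nodeapp --version 0.1.0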

When you visit the Repositories tab in the sidebar, you will find both the container image and the Helm chart. Select the helm chart and you can see the release details.

[Screenshot: Helm chart uploaded to ACR]

Now you are ready to install these items in your Kubernetes cluster.

Launching the Azure Kubernetes Service (AKS) cluster

To get started deploying to AKS, first ensure that the AKS cluster has the required permissions to pull images and Helm charts from the ACR.

Launch a two-node cluster with the following command:

az aks create --resource-group NodeRG --name NodeCluster --node-count 2 --enable-addons http_application_routing --attach-acr noderegistry

The --attach-acr parameter authorizes the existing ACR in your subscription and configures the appropriate AcrPull role for the cluster's managed identity.

Note: If you have not generated SSH keys on your system before, add the optional parameter --generate-ssh-keys to the above command. It auto-generates SSH public and private key files if they are missing; the keys are stored in the ~/.ssh directory.

The AKS cluster will take 10 to 15 minutes to launch.
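While you wait, you can check the provisioning state from another terminal with a standard Azure CLI query, using the names from this tutorial:

az aks show --resource-group NodeRG --name NodeCluster --query provisioningState --output tsv

The cluster is ready when this prints Succeeded.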

Creating the continuous integration pipeline

The objective of this tutorial is to show how you can deploy your Helm chart on Kubernetes through a continuous integration (CI) pipeline. The CI pipeline should trigger the process of building the container, pushing it to the container registry, and installing the Helm chart with the new values on the cluster.

The helm install command is used to install a new chart in your Kubernetes cluster. It creates all the resources defined in the chart and installs them in the namespace specified.

The helm upgrade command is used to upgrade an existing chart that has already been installed. It allows you to modify the configuration of an existing chart, and it updates the resources defined in the chart with the new configuration.

When you use helm install on a chart that is already deployed, Helm treats it as a new installation. If you reuse the existing release name, the command simply fails; if you use a new name, Helm deploys a second copy of the chart's resources. Either way, your configuration changes are not applied to the existing release. Therefore, the recommended way to upgrade an existing chart is to use helm upgrade, which updates the resources in place and preserves any customizations you have made.
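As an aside, Helm also offers a combined form, helm upgrade --install, which installs the release if it does not exist and upgrades it otherwise. A minimal sketch using this tutorial's names might look like this:

helm upgrade --install nodeapp oci://noderegistry.azurecr.io/helm/nodeapp \
  --namespace nodejs --create-namespace --version 0.1.0

The script below takes the explicit install-or-upgrade approach instead, which keeps each branch easy to log and debug.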

To manage installs and upgrades in the CI pipeline, you can write a simple bash script that automatically determines which of the two commands should run.

Create a new file named script.sh at the root directory of the project and paste the following contents into the file:

#!/bin/bash
TAG=$1
NAMESPACE=$2
CHART_NAME=$3
ACR_LOGIN_SERVER=$4
echo "oci://$ACR_LOGIN_SERVER/helm/$CHART_NAME"
export KUBECONFIG=$HOME/.kube/config
# Install if no release with this chart name exists in the namespace; otherwise upgrade
if ! helm ls --namespace "$NAMESPACE" | grep -q "$CHART_NAME"; then
   echo "Installing Helm Chart"
   helm install --namespace "$NAMESPACE" --create-namespace "$CHART_NAME" "oci://$ACR_LOGIN_SERVER/helm/$CHART_NAME" --version "$TAG"
else
   echo "Upgrading Helm Chart"
   helm upgrade --namespace "$NAMESPACE" "$CHART_NAME" "oci://$ACR_LOGIN_SERVER/helm/$CHART_NAME" --version "$TAG"
fi

To create the CI pipeline, we will be using CircleCI integrated with your GitHub account. The CircleCI configuration lives in a config.yml file inside the .circleci directory at the project's root.

Here are the contents of the config.yml file:

version: 2.1

orbs:
  azure-acr: circleci/azure-acr@0.2.1
  azure-aks: circleci/azure-aks@0.3.0

jobs:
  update-helm-chart:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Install yq - portable yaml processor
          command: |
            URL=https://github.com/mikefarah/yq/releases/download/3.3.4/yq_linux_amd64
            [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
            $SUDO wget $URL
            $SUDO mv ./yq_linux_amd64 /usr/local/bin/yq
            $SUDO chmod +x /usr/local/bin/yq
      - checkout
      - run:
          name: Update Values.yaml
          command: |
            yq w -i $CHART_NAME/values.yaml image.repository "$ACR_LOGIN_SERVER/$APP_NAME"
            yq w -i $CHART_NAME/values.yaml image.tag "$CIRCLE_SHA1"
            yq w -i $CHART_NAME/Chart.yaml version "0.$CIRCLE_BUILD_NUM.0"
      - add_ssh_keys:
          fingerprints:
            - "$SSH_FINGERPRINT"
      - run:
          name: Commit & Push to GitHub
          command: |
            git config user.email "$GITHUB_EMAIL"
            git config user.name "CircleCI User"
            git checkout $CIRCLE_BRANCH           
            git commit -am "Update Values.yaml [skip ci]"
            git push origin $CIRCLE_BRANCH

  package-push-helm-chart:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Install and configure helm
          command: sudo curl -L https://get.helm.sh/helm-v3.10.1-linux-amd64.tar.gz | tar xz && sudo mv linux-amd64/helm /bin/helm && sudo rm -rf linux-amd64
      - checkout
      - run: echo 'export RELEASE_TAG=$((CIRCLE_BUILD_NUM-1))' >> $BASH_ENV
      - run:
          name: Pull Updated code from repo
          command: git pull origin $CIRCLE_BRANCH
      - run:
          name: Packaging the Helm Chart
          command: helm package $CHART_NAME
      - run:
          name: Login to Helm Registry
          command: helm registry login $ACR_LOGIN_SERVER --username $ACR_USERNAME --password $ACR_PASSWORD
      - run:
          name: Pushing Helm chart to registry
          command: helm push "$CHART_NAME-0.$RELEASE_TAG.0.tgz" "oci://$ACR_LOGIN_SERVER/helm"

  install-helm-chart:
    docker:
      - image: cimg/base:stable
    parameters:
      cluster-name:
        description: |
          Name of the AKS cluster
        type: string
      resource-group:
        description: |
          Resource group that the cluster is in
        type: string
    steps:
      - checkout
      - run: echo 'export RELEASE_TAG=$((CIRCLE_BUILD_NUM-3))' >> $BASH_ENV
      - run:
          name: Pull Updated code from repo
          command: git pull origin $CIRCLE_BRANCH
      - azure-aks/update-kubeconfig-with-credentials:
          cluster-name: << parameters.cluster-name >>
          install-kubectl: true
          perform-login: true
          resource-group: << parameters.resource-group >>
      - run:
          name: Install and configure helm
          command: sudo curl -L https://get.helm.sh/helm-v3.10.1-linux-amd64.tar.gz | tar xz && sudo mv linux-amd64/helm /bin/helm && sudo rm -rf linux-amd64
      - run:
          name: Login to Helm Registry
          command: helm registry login $ACR_LOGIN_SERVER --username $ACR_USERNAME --password $ACR_PASSWORD
      - run:
          name: helm install
          command: bash ./script.sh "0.$RELEASE_TAG.0" $NAMESPACE $CHART_NAME $ACR_LOGIN_SERVER

workflows:
  Deploy-App-on-AKS:
    jobs:
      - azure-acr/build-and-push-image:
          login-server-name: $ACR_LOGIN_SERVER
          registry-name: $ACR_REGISTRY_NAME
          repo: $APP_NAME
          tag: $CIRCLE_SHA1,latest
      - update-helm-chart:
          requires:
            - azure-acr/build-and-push-image
      - package-push-helm-chart:
          requires:
            - update-helm-chart
      - install-helm-chart:
          cluster-name: $CLUSTER_NAME
          resource-group: $RESOURCE_GROUP
          requires:
            - package-push-helm-chart

The CI workflow consists of four jobs:

  1. azure-acr/build-and-push-image builds and pushes the container to Azure Container Registry.
  2. update-helm-chart updates the Helm chart files with the latest container image tag and release number.
  3. package-push-helm-chart packages the Helm chart and pushes it to ACR.
  4. install-helm-chart installs the Helm chart on the AKS cluster.

In this workflow, we have made extensive use of CircleCI orbs, which are open-source, shareable packages of parameterizable, reusable configuration elements, including jobs, commands, and executors. The orbs are either used directly or used to build the custom jobs.

Commit and push the changes to your GitHub repository.

Setting up the project on CircleCI

The next step to deploying our application to AKS is connecting the application in our GitHub repository to CircleCI.

Go to your CircleCI dashboard and select the Projects tab on the left panel. Now click the Set Up Project button corresponding to the GitHub repository that contains the code (aks-nodejs-helm).

[Screenshot: CircleCI dashboard]

On the Select your config.yml file screen, select the Fastest option and type main as the branch name. CircleCI will automatically locate the config.yml file. Click Set Up Project.

[Screenshot: Selecting the config.yml file]

The workflow will start running automatically but will fail. This is because you have to set up a user key and configure the environment variables in the CircleCI Project Settings menu.

To set up the user key, select the SSH Keys option from the left panel of the Project Settings page. Under the User Key section, click Authorize with GitHub.

CircleCI needs the user key to push changes to your GitHub account during workflow execution.

[Screenshot: Authorizing the user key]

To configure the environment variables, select the Environment Variables option from the left panel of the Project Settings page. Select Add Environment Variable. On the next screen, type the name of the environment variable and the value you want to assign to it.

[Screenshot: Environment variables]

The environment variables used in the file are listed below:

  • APP_NAME: Container image name (aks-nodejs-helm)
  • AZURE_PASSWORD: Azure account password
  • AZURE_USERNAME: Azure account username
  • ACR_LOGIN_SERVER: Login server URI for noderegistry ACR
  • ACR_PASSWORD: Password for the noderegistry ACR
  • ACR_REGISTRY_NAME: Container registry name of noderegistry ACR
  • ACR_USERNAME: Username of noderegistry ACR
  • CHART_NAME: Name of the Helm chart (nodeapp)
  • NAMESPACE: Namespace in which the Helm chart will be installed (nodejs)
  • CLUSTER_NAME: AKS cluster name (NodeCluster)
  • GITHUB_EMAIL: GitHub account email address
  • RESOURCE_GROUP: AKS resource group (NodeRG)
  • SSH_FINGERPRINT: SSH fingerprint of user key used for pushing the updated Helm chart to GitHub

Note: To locate the SSH fingerprint mentioned above, navigate to Project Settings > SSH Keys. Scroll down to the User Key section and copy the key. The key will be displayed only after clicking Authorize with GitHub.

Once completed, you can re-run the workflow. This time your workflow should complete successfully.

[Screenshot: Passed workflow]

You will also find another pipeline marked as Not Run.
This is because we explicitly instructed CircleCI to skip it by including [skip ci] in the commit message when the workflow commits the updated configuration files to GitHub. This prevents the workflow from endlessly re-triggering itself.

Accessing the application on AKS

With that passing workflow, your application has successfully been deployed on the AKS cluster. To access the application, you need the external IP address of the cluster.

To find the EXTERNAL-IP value, you will need to use the Azure CLI once again.

Configure kubectl to connect to AKS using the following command:

az aks get-credentials --resource-group NodeRG --name NodeCluster

Since we created all the resources in the nodejs namespace, use the following command to get all the resources in the namespace:

kubectl get all --namespace nodejs

Copy the EXTERNAL-IP corresponding to service/nodeapp.
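Alternatively, you can extract the IP directly with a jsonpath query; this assumes the service is named nodeapp, matching the chart name used in the manifests:

kubectl get service nodeapp --namespace nodejs -o jsonpath='{.status.loadBalancer.ingress[0].ip}'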

[Screenshot: External IP of the service]

You can access the application at http://<EXTERNAL-IP>. In my case, that was http://104.45.180.242/.

[Screenshot: Final application]

Conclusion

In this tutorial, you learned how to develop an automated CI/CD pipeline for deploying your applications continuously on a Kubernetes cluster. Once the pipeline is properly configured, any changes made to the application code will be instantly reflected on the application URL.

This setup means you no longer need to manually configure and deploy applications on Kubernetes. By changing the values of the environment variables, you can reuse this CircleCI configuration for similar applications. If you prefer GKE to AKS as your managed Kubernetes provider, we also have an article on deploying to GKE.

The complete source code for this tutorial can also be found here on GitHub.


Avik Kundu is a Software Engineer at Red Hat. He is a full-stack developer, open source contributor, and tech content creator proficient in DevOps and cloud. He is an AWS Community Builder and Microsoft Learn Student Ambassador. He has written articles and tutorials on various tools and technologies on Medium, Dev, and Opensource.com. He loves to learn new technologies and share his knowledge in public.

Read more posts by Avik Kundu