Progressive delivery on Kubernetes with CircleCI and Argo Rollouts
Containers and microservices have revolutionized the way applications are deployed on the cloud. Since its launch in 2014, Kubernetes has become the de facto standard container orchestration tool.
With traditional approaches to deploying applications in production, developers often release updates or new features all at once, which can cause problems if bugs slipped through testing. Progressive delivery addresses this by providing a more controlled and gradual approach to releasing new features and updates.
In this tutorial, you will learn how to deploy a Node.js application on Azure Kubernetes Service (AKS) through a progressive delivery process with CircleCI and Argo Rollouts. You will create a CI/CD pipeline that automatically triggers blue-green deployments with Argo Rollouts on every commit.
Progressive delivery with blue-green deploys
By releasing new features to a small subset of users at first, developers can catch any issues before they impact a large number of users. This allows them to make any necessary changes or fixes before rolling out the feature to a larger audience, which reduces the overall risk of introducing bugs or other issues.
Argo Rollouts is an open-source tool that provides advanced deployment capabilities for Kubernetes. It is built on top of the Kubernetes API and enables progressive delivery techniques such as canary deployments, blue-green deployments, and other deployment strategies.
Blue-green deployment allows for seamless software updates by creating two identical environments, one for the current version of the application and the other for the new version. This approach allows developers to test and verify the new version in a separate environment, without impacting production. Once the new version is verified, traffic can be seamlessly switched from the old version to the new version.
This approach reduces downtime, minimizes the risk of introducing bugs or other issues, and ensures a reliable and safe deployment process for users. If any issues arise, traffic can be quickly switched back to the previous version, which provides an added layer of safety for the deployment process.
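The whole strategy can be pictured as a pointer flip. This toy shell sketch is an illustration only (nothing Argo Rollouts actually runs): both versions stay up, and promotion just changes which one the "active" pointer serves.

```shell
# Toy model of a blue-green switch: both versions keep running;
# promotion is a single pointer flip, and so is a rollback.
BLUE="v1"   # current production version
GREEN="v2"  # new version under test
ACTIVE="$BLUE"
echo "active is serving: $ACTIVE"
# Once the preview (green) side is verified, promote it:
ACTIVE="$GREEN"
echo "active is serving: $ACTIVE"
```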
Prerequisites
To follow along with this tutorial, you will need a few things first:
- A Docker Hub account
- A GitHub account (GitLab users can also set up the project using GitLab)
- A Microsoft Azure account
- A CircleCI account
- Kubectl installed on your system
- ArgoCD CLI installed on your system
- Argo Rollouts Kubectl plugin installed on your system
- Azure CLI installed on your system
- Node.js installed on your system
- Docker Engine installed on your system
After you have all the prerequisites in place, you are ready to proceed to the next section.
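Before moving on, you can optionally sanity-check that the CLIs are on your PATH; the loop below assumes the standard binary names (az, kubectl, argocd, node, docker):

```shell
# Preflight check: report any required CLI that is not installed.
for tool in az kubectl argocd node docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```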
Cloning the Node.js application
In this tutorial, our main focus is on deploying the application on Kubernetes. Therefore, you can directly clone the Node.js application into your own GitHub account and continue with the rest of the process.
To clone the project, run:
git clone https://github.com/CIRCLECI-GWP/aks-k8s-rollouts.git
There are two branches in this repository:

- `main` branch: contains only the Node.js application code
- `circleci-project-setup` branch: contains the application code along with all the YAML files we will create in this tutorial

If you are following this tutorial, check out the `main` branch.
The Node.js application lives in the `app.js` file.
const express = require("express");
const path = require("path");
const morgan = require("morgan");
const bodyParser = require("body-parser");

/* eslint-disable no-console */
const port = process.env.PORT || 1337;
const app = express();

app.use(morgan("dev"));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: "true" }));
app.use(bodyParser.json({ type: "application/vnd.api+json" }));
app.use(express.static(path.join(__dirname, "./")));

app.get("*", (req, res) => {
  res.sendFile(path.join(__dirname, "./index.html"));
});

app.listen(port, (err) => {
  if (err) {
    console.log(err);
  } else {
    console.log(`App at: http://localhost:${port}`);
  }
});

module.exports = app;
The key takeaway from this code is the port number on which the application runs: `1337`.
You can run the application locally by first installing the dependencies. In the project’s root, type:
npm install
Then run the application with the command:
node app.js
The application should now be up and running at http://localhost:1337.
Containerizing the Node.js application
To deploy the application on Kubernetes, first you will need to containerize it. To containerize an application with Docker, you need to create a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.
Create a new file in the root directory of the project and name it Dockerfile
.
Copy the following content in the file:
# Set the base image to use for subsequent instructions
FROM node:alpine
# Set the working directory for any subsequent ADD, COPY, CMD, ENTRYPOINT,
# or RUN instructions that follow it in the Dockerfile
WORKDIR /usr/src/app
# Copy files or folders from source to the dest path in the image's filesystem.
COPY package.json /usr/src/app/
COPY . /usr/src/app/
# Execute any commands on top of the current image as a new layer and commit the results.
RUN npm install --production
# Define the network ports that this container will listen to at runtime.
EXPOSE 1337
# Configure the container to be run as an executable.
ENTRYPOINT ["npm", "start"]
If you have Docker installed, you can build and run the container locally for testing.
In later sections, you will learn how to automate this process with CircleCI orbs.
To build and tag the container, you can type:
docker build -t aks-k8s-rollouts:latest .
Confirm that the image was successfully created by running this command from your terminal:
docker images
Then run the container:
docker run -it -p 1337:1337 aks-k8s-rollouts:latest
The application should now be up and running at http://127.0.0.1:1337.
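Later, the pipeline will tag images with the Git commit SHA rather than `latest`. You can preview that naming scheme locally; the account name and SHA below are placeholders, not real values:

```shell
# Image naming convention used later by CI: <dockerhub-user>/<app>:<commit-sha>.
DOCKER_LOGIN="exampleuser"        # placeholder for your Docker Hub username
APP_NAME="aks-k8s-rollouts"
CIRCLE_SHA1="abc123def456"        # stand-in for the real commit SHA
IMAGE="${DOCKER_LOGIN}/${APP_NAME}:${CIRCLE_SHA1}"
echo "$IMAGE"
```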
Commit and push the changes to your GitHub repository.
Configuring Kubernetes manifests for deployment
To deploy containers on Kubernetes, you will need to configure Kubernetes to incorporate all the settings required to run your application. Kubernetes uses YAML for configuration.
Create a directory named manifests
in the root directory of the project.
Then, create the following files within the newly created folder:
namespace.yaml
rollout.yaml
service-active.yaml
service-preview.yaml
kustomization.yaml
In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster.
Here are the contents of the `namespace.yaml` file:
apiVersion: v1
kind: Namespace
metadata:
  name: nodejs
  labels:
    name: nodejs
This file creates a namespace named `nodejs` inside the Kubernetes cluster. All the resources will be created in this namespace.
Standard Kubernetes Deployments support the rolling update strategy, which allows gradual updates of an application, minimizing downtime and risk. But since you are going to perform a blue-green deployment of your application with Argo Rollouts, you don't need to create a Kubernetes Deployment manually. Instead, you will create a Rollout custom resource, which Argo Rollouts uses to manage the entire blue-green deployment process.
Here are the contents of the `rollout.yaml` file:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: nodejs
  namespace: nodejs
  labels:
    app: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: aks-k8s-rollouts
          image: aks-k8s-rollouts
          ports:
            - name: http
              containerPort: 1337
  strategy:
    blueGreen:
      activeService: svc-active
      previewService: svc-preview
      autoPromotionEnabled: false
The key takeaway from this code is the type of rollout strategy specified under the `strategy` section. You also have to specify the names of the active and preview Kubernetes Services that you are defining. `autoPromotionEnabled` is set to `false` to ensure that the preview version is not automatically promoted to active; promotion will be done manually from the Argo Rollouts dashboard or CLI.
A Kubernetes Service provides a stable IP address and DNS name for a set of pods, enabling communication and load balancing between them. Since you are performing a blue-green deployment, you need to create two Services: one for the active rollout, which usually serves the production environment, and one for the preview rollout, which is deployed in non-production environments, especially for testing. You also need to set the Service type to `LoadBalancer` to make the application accessible to the outside world.
Here are the contents of `service-active.yaml`:
apiVersion: v1
kind: Service
metadata:
  name: svc-active
  namespace: nodejs
  labels:
    app: nodejs
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 1337
  selector:
    app: nodejs
And here are the contents of the `service-preview.yaml` file:
apiVersion: v1
kind: Service
metadata:
  name: svc-preview
  namespace: nodejs
  labels:
    app: nodejs
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 1337
  selector:
    app: nodejs
Notice that the `metadata.name` fields of the two Services match the names specified in `rollout.yaml`. The key takeaways from this code are `targetPort`, `port`, and `type`, which denote the container port, the port where the application will be exposed, and the type of Service, respectively.
To deploy the latest version of the application on the Kubernetes cluster, the resources have to be customized to maintain the updated information. This is managed by Kustomize, which is a tool for customizing Kubernetes configurations.
Here are the contents of the `kustomization.yaml` file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - rollout.yaml
  - service-active.yaml
  - service-preview.yaml
  - namespace.yaml
namespace: nodejs
images:
  - name: aks-k8s-rollouts
    newName: aks-k8s-rollouts
    newTag: v1
The key takeaways from this code are `newName` and `newTag`, which will be automatically updated with the latest Docker image information during the continuous integration process.
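During the CI run, that update is performed by `kustomize edit set image`. If you'd like to see the effect without installing kustomize, here is a rough emulation with `sed` on a scratch copy (the `v2` tag is a made-up example):

```shell
# Write a scratch copy of the images section of kustomization.yaml.
cat > /tmp/kustomization-demo.yaml <<'EOF'
images:
  - name: aks-k8s-rollouts
    newName: aks-k8s-rollouts
    newTag: v1
EOF
# Emulate `kustomize edit set image aks-k8s-rollouts=...:v2`: rewrite newTag.
sed 's|newTag: .*|newTag: v2|' /tmp/kustomization-demo.yaml > /tmp/kustomization-demo.yaml.new
mv /tmp/kustomization-demo.yaml.new /tmp/kustomization-demo.yaml
grep 'newTag' /tmp/kustomization-demo.yaml
```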
Commit and push these files into the `main` branch of the GitHub repository you cloned earlier.
Launching the Azure Kubernetes Service (AKS) cluster
In this tutorial, you will deploy the application on an AKS cluster. To create the AKS cluster, you need a Microsoft Azure account and the Azure CLI installed on your computer and connected to your Azure account.
Once completed, you can launch an AKS cluster with the help of the Azure CLI.
Create a Resource Group with the following command:
az group create --name NodeRG --location eastus
Launch a two-node cluster with the following command:
az aks create --resource-group NodeRG --name NodeCluster --node-count 2 --enable-addons http_application_routing
Note: If you have not generated SSH keys on your system before, add the optional parameter `--generate-ssh-keys` to the above command. This auto-generates SSH public and private key files if they are missing. The keys are stored in the `~/.ssh` directory.
The AKS cluster will take 10-15 minutes to launch.
Installing ArgoCD and Argo Rollouts in the AKS Cluster
Once the cluster is up and running, you will have to install ArgoCD inside the cluster to use it for deploying your application.
To install it, you will need to use the Azure CLI once again.
Configure kubectl
to connect to AKS using the following command:
az aks get-credentials --resource-group NodeRG --name NodeCluster
To install ArgoCD, you will need to execute the following commands:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
To install Argo Rollouts, you will need to execute the following commands:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
Once you execute these commands, ArgoCD and Argo Rollouts will be installed in the `argocd` and `argo-rollouts` namespaces, respectively.
To get all the resources in these namespaces:
kubectl get all --namespace <argocd | argo-rollouts>
By default, the ArgoCD server is not exposed with an external IP. To access the API server, choose one of the following techniques to expose the ArgoCD API server:
Service Type Load Balancer
Change the `argocd-server` service type to `LoadBalancer`:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Port Forwarding
Kubectl port-forwarding can also be used to connect to the API server without exposing the service.
kubectl port-forward svc/argocd-server -n argocd 8080:443
The API server can then be accessed at https://localhost:8080.
Note: For this tutorial, you need to follow the first method, exposing the ArgoCD server with an external IP via a LoadBalancer Service, as we will be accessing the application from the internet.
Accessing the ArgoCD Web Portal
Once you have exposed the ArgoCD API server with an external IP, you can access the portal at that address.

Since you installed ArgoCD in the `argocd` namespace, use the following command to get all the resources in the namespace:
kubectl get all --namespace argocd
Copy the `EXTERNAL-IP` corresponding to `service/argocd-server`.
You can access the application at `http://<EXTERNAL-IP>`. In my case, that was http://20.72.136.140/.
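If you prefer not to copy values from the table by hand, `kubectl get svc argocd-server -n argocd -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` prints just the address. The snippet below shows a column-based alternative against a captured sample line, so it runs even without a cluster (the cluster IP and port values are made up; the external IP is from this tutorial's run):

```shell
# Sample line from `kubectl get svc -n argocd`; EXTERNAL-IP is the 4th column.
SAMPLE='argocd-server   LoadBalancer   10.0.141.25   20.72.136.140   80:30616/TCP'
EXTERNAL_IP=$(echo "$SAMPLE" | awk '{print $4}')
echo "$EXTERNAL_IP"   # prints 20.72.136.140
```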
Now, to log in to the portal, you need the username and password.

- The username is set to `admin` by default.
- To fetch the password, execute the following command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
You need to use this username-password combination to log in to the ArgoCD portal.
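The reason for the `base64 -d` at the end of that command is that Kubernetes stores Secret data base64-encoded. A standalone sketch of the decode step, using a made-up value rather than a real ArgoCD password:

```shell
# Encode a stand-in password the way Kubernetes stores Secret data,
# then decode it the way the command above does.
ENCODED=$(printf 'my-initial-password' | base64)
echo "stored in secret: $ENCODED"
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "decoded password: $DECODED"
```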
Configuring Kubernetes manifests for ArgoCD
To configure ArgoCD to deploy your application on Kubernetes, you need to set up ArgoCD to connect the Git repository and the Kubernetes cluster in a declarative way, using YAML configuration.

Apart from this method, you can also set up ArgoCD from the web portal or using the ArgoCD CLI. But since we are following GitOps principles, which state that the Git repository should act as the sole source of truth, the declarative method via YAML files serves best.

One of the key capabilities of ArgoCD is syncing applications to a Kubernetes cluster via either a manual or an automated sync policy.
- Manual sync policy: As the name suggests, with this policy you manually synchronize your application via the CI/CD pipelines. Whenever a code change is made, the CI/CD pipeline is triggered, which in turn calls the ArgoCD server APIs to start the sync process based on the committed changes. To communicate with the ArgoCD server APIs, you can use either the ArgoCD CLI or the SDKs available for various programming languages for programmatic access.
- Automated sync policy: ArgoCD automatically syncs an application when it detects differences between the desired manifests in Git and the live state in the cluster. A benefit of automatic sync is that CI/CD pipelines no longer need direct access to the ArgoCD API server to perform the deployment. Instead, the pipeline commits and pushes the manifest changes to the tracking Git repository.
Create a directory named `argocd` in the root directory of the project. Create a new file in the new directory and name it `config.yaml`.

To set up the manual sync policy for ArgoCD, paste the following into `config.yaml`:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: aks-k8s-rollouts
  namespace: argocd
spec:
  destination:
    namespace: nodejs
    server: "https://kubernetes.default.svc"
  source:
    path: manifests
    repoURL: "https://github.com/CIRCLECI-GWP/aks-k8s-rollouts"
    targetRevision: circleci-project-setup
  project: default
If you want to use the automated sync policy instead, paste the following into `config.yaml`:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: aks-k8s-rollouts
  namespace: argocd
spec:
  destination:
    namespace: nodejs
    server: "https://kubernetes.default.svc"
  source:
    path: manifests
    repoURL: "https://github.com/CIRCLECI-GWP/aks-k8s-rollouts"
    targetRevision: circleci-project-setup
  project: default
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
Commit and push these files into the `main` branch of the GitHub repository you cloned earlier.
Creating the continuous integration pipeline
The objective of this tutorial is to show how you can deploy applications on Kubernetes through continuous integration (CI) via CircleCI and continuous deployment (CD) via ArgoCD. The CI pipeline triggers the process of building the container and pushing it to Docker Hub, and the CD pipeline deploys the application on Kubernetes.
To create the CI pipeline, we will use CircleCI integrated with your GitHub account. CircleCI configuration lives in a `config.yml` file inside the `.circleci` directory in the project's root folder.

The contents of the `config.yml` file for this tutorial are as follows:
version: 2.1
orbs:
  docker: circleci/docker@2.1.1
  azure-aks: circleci/azure-aks@0.3.0
  kubernetes: circleci/kubernetes@1.3.0

jobs:
  argocd-manual-sync:
    docker:
      - image: cimg/base:stable
    parameters:
      server:
        description: |
          Server IP of ArgoCD
        type: string
      username:
        description: |
          Username for ArgoCD
        type: string
      password:
        description: |
          Password for ArgoCD
        type: string
    steps:
      - run:
          name: Install ArgoCD CLI
          command: |
            URL=https://<< parameters.server >>/download/argocd-linux-amd64
            [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
            $SUDO curl --insecure -sSL -o /usr/local/bin/argocd $URL
            $SUDO chmod +x /usr/local/bin/argocd
      - run:
          name: ArgoCD CLI login
          command: argocd login << parameters.server >> --insecure --username << parameters.username >> --password << parameters.password >>
      - run:
          name: Manual sync
          command: argocd app sync $APP_NAME
      - run:
          name: Wait for application to reach a synced and healthy state
          command: argocd app wait $APP_NAME
  argocd-configure:
    executor: azure-aks/default
    parameters:
      cluster-name:
        description: |
          Name of the AKS cluster
        type: string
      resource-group:
        description: |
          Resource group that the cluster is in
        type: string
    steps:
      - checkout
      - run:
          name: Pull updated code from repo
          command: git pull origin $CIRCLE_BRANCH
      - azure-aks/update-kubeconfig-with-credentials:
          cluster-name: << parameters.cluster-name >>
          install-kubectl: true
          perform-login: true
          resource-group: << parameters.resource-group >>
      - kubernetes/create-or-update-resource:
          resource-file-path: argocd/config.yaml
  bump-docker-tag-kustomize:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Install kustomize
          command: |
            URL=https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv4.5.2/kustomize_v4.5.2_linux_amd64.tar.gz
            curl -L $URL | tar zx
            [ -w /usr/local/bin ] && SUDO="" || SUDO=sudo
            $SUDO chmod +x ./kustomize
            $SUDO mv ./kustomize /usr/local/bin
      - checkout
      - run:
          name: Bump Docker Tag
          command: |
            cd manifests
            kustomize edit set image $APP_NAME=$DOCKER_LOGIN/$APP_NAME:$CIRCLE_SHA1
      - add_ssh_keys:
          fingerprints:
            - "$SSH_FINGERPRINT"
      - run:
          name: Commit & Push to GitHub
          command: |
            git config user.email "$GITHUB_EMAIL"
            git config user.name "CircleCI User"
            git checkout $CIRCLE_BRANCH
            git add manifests/kustomization.yaml
            git commit -am "Bumps docker tag [skip ci]"
            git push origin $CIRCLE_BRANCH

workflows:
  Deploy-App-on-AKS:
    jobs:
      - docker/publish:
          image: $DOCKER_LOGIN/$APP_NAME
          tag: $CIRCLE_SHA1,latest
      - bump-docker-tag-kustomize:
          requires:
            - docker/publish
      - argocd-configure:
          cluster-name: $CLUSTER_NAME
          resource-group: $RESOURCE_GROUP
          requires:
            - bump-docker-tag-kustomize
      # Include the following job only when you opt for the ArgoCD manual sync policy:
      - argocd-manual-sync:
          server: $ARGOCD_SERVER
          username: $ARGOCD_USERNAME
          password: $ARGOCD_PASSWORD
          requires:
            - argocd-configure
The CI workflow consists of four jobs:

- `docker/publish`: Builds and pushes the container image to Docker Hub
- `bump-docker-tag-kustomize`: Updates the Docker image tag and generates the consolidated Kubernetes configuration file
- `argocd-configure`: Applies the ArgoCD configuration on the AKS cluster
- `argocd-manual-sync`: Needed only when you opt for the manual sync policy. For automatic sync, you can omit this job from the file.
In this workflow, we make extensive use of CircleCI orbs, which are open-source, shareable packages of parameterizable, reusable configuration elements, including jobs, commands, and executors. The orbs are either used directly or used to build the custom jobs.
Commit and push the changes to your GitHub repository.
Setting up the project on CircleCI
The next step in deploying the application to AKS is connecting the GitHub repository to CircleCI.

Go to your CircleCI dashboard and select the Projects tab in the left panel. Click the Set Up Project button corresponding to the GitHub repository that contains the code (aks-k8s-rollouts).
On the Select your config.yml file screen, select the Fastest option and type `main` as the branch name. CircleCI will automatically locate the `config.yml` file. Click Set Up Project.
The workflow will start running automatically, but it will fail because you still need to set up a user key and configure environment variables in the Project Settings menu in CircleCI.
To set up the User Key, select the SSH Keys option from the left panel of the Project Settings. Under the User Key section, click Authorize with GitHub.
CircleCI needs the user key to push changes to your repository on behalf of the repository owner during workflow execution.
To configure the environment variables, select the Environment Variables option from the left panel of the Project Settings. Select Add Environment Variable. On the next screen, type the environment variable and the value you want it to be assigned to.
The environment variables used in the file are listed below:

- `APP_NAME`: Container image name (aks-k8s-rollouts)
- `ARGOCD_PASSWORD`: ArgoCD portal password
- `ARGOCD_SERVER`: ArgoCD server IP address
- `ARGOCD_USERNAME`: ArgoCD portal username (admin)
- `AZURE_PASSWORD`: Azure account password
- `AZURE_USERNAME`: Azure account username
- `CLUSTER_NAME`: AKS cluster name (NodeCluster)
- `DOCKER_LOGIN`: Docker Hub username
- `DOCKER_PASSWORD`: Docker Hub password
- `GITHUB_EMAIL`: GitHub account email address
- `RESOURCE_GROUP`: AKS resource group (NodeRG)
- `SSH_FINGERPRINT`: SSH fingerprint of the user key used for pushing the updated Docker tag to GitHub
Note: To locate the SSH fingerprint mentioned above, navigate to Project Settings and select SSH Keys from the sidebar. Scroll down to the User Key section and copy the fingerprint. Note that it is displayed only after you click the Authorize with GitHub button.
Once completed, you can re-run the workflow. This time the workflow should complete successfully.
You will also find another pipeline with the status Not Run.

This is expected, because we explicitly instructed CircleCI to skip the pipeline by including `[skip ci]` in the commit message when CircleCI commits the updated configuration files to GitHub. This prevents the workflow from entering a never-ending loop of triggering itself.
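The skip mechanism is nothing more than the literal `[skip ci]` token in the commit message; CircleCI skips any pipeline whose triggering commit contains it. A minimal sketch of that check:

```shell
# Any commit message containing the "[skip ci]" token is not built.
MSG="Bumps docker tag [skip ci]"
case "$MSG" in
  *"[skip ci]"*) RESULT="pipeline skipped" ;;
  *)             RESULT="pipeline runs"    ;;
esac
echo "$RESULT"
```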
Monitoring the application on the ArgoCD Dashboard
At this point, your application has successfully been deployed on the AKS cluster.
To observe and monitor the resources currently running on the AKS cluster, you need to log in to the ArgoCD web portal.
We have already discussed how to fetch the ArgoCD Server IP, username, and password for logging in to the portal. Once logged in, you will see the Applications page.
Click on the application name, and you will be redirected to a page with a tree view of all the resources currently running on the AKS cluster, along with their real-time status.

When you select the network view from the top-right corner of the dashboard, you will see that both Services point to the same pods. This means that only a single ReplicaSet has been created by the Rollout resource, and both Services are attached to it. Therefore, you will see the same version of the application at the external IP addresses provided by either Service.
Performing Progressive Delivery using Argo Rollouts
To perform a blue-green deployment with Argo Rollouts, you can use either the Argo Rollouts dashboard or the Argo Rollouts kubectl plugin. In this tutorial, you will learn how to use both methods.

To view the Argo Rollouts dashboard, you first need to port-forward the dashboard service to your local machine:
kubectl argo rollouts dashboard -n nodejs
Note: This process runs in the foreground until it is killed or the terminal is closed. You can also run the command in the background by adding an `&` at the end.
The Argo Rollouts dashboard can now be accessed at http://localhost:3100/rollouts.
Now, click the tile with the corresponding Rollout name to view the details of the rollout. Since you named the Rollout resource `nodejs`, that is the name you will see on the tile. Once clicked, you will be redirected to the Rollout details page.
On this page, you can see the strategy is set to `BlueGreen`. Currently, there is only a single revision listed, and it is marked as the `active` revision. This is expected, as no changes have been made to the application code, so no `preview` revision has been created.
To see Argo Rollouts in action, you need to make some changes to the application code. Go to your remote Git repository and make a small change to the `index.html` file. You can append v2 to the header and commit the changes.
Once the changes are committed, the CircleCI workflow will be triggered. If you have configured ArgoCD for manual sync, you will notice that the `argocd-manual-sync` job does not complete. In the job logs, you will see that in the last step the status of the Rollout resource is `Suspended`. This is expected behavior for a blue-green deployment: until you manually promote the new revision of the application, the Rollout resource stays in a suspended state. Once you promote the new revision, the Rollout status turns `Healthy` and CircleCI completes the `argocd-manual-sync` job.
Promoting the new version of the application
Now if you look at the Argo Rollouts dashboard, you will see that there are two revisions listed under the Revisions section. Revision 1 is marked as the `active` revision and Revision 2 is marked as the `preview` revision.
At this stage, if you try to access the application using the external IP address of the `svc-active` service, you will see that the application is still running the old version. This is because the `svc-active` service is still pointing to the old version (Revision 1) of the application. To access the new version, use the external IP address of the `svc-preview` service.
Therefore, you can perform various tests on the new version (Revision 2) of the application using the external IP address of the `svc-preview` service. Once you are satisfied with the new version, you can promote it to the `active` revision.
To promote Revision 2 to the `active` revision from the Argo Rollouts dashboard, click Promote on the Rollout details page. You will notice that Revision 2 is now marked as the `active` and `stable` revision.
If you try to access the application using the external IP address of the `svc-active` service, you will see that the application is now running the new version. This is because the `svc-active` service is now pointing to the new version (Revision 2) of the application. You can also access the new version using the external IP address of the `svc-preview` service.
You can also promote the new version of the application with the Argo Rollouts kubectl plugin. To get the details of the Rollout resource, use the following command:
kubectl argo rollouts get rollout nodejs -n nodejs
The output contains the same information that was available in the Argo Rollouts dashboard.
To promote the new version of the application from the Argo Rollouts Kubectl plugin, you need to use the following command:
kubectl argo rollouts promote rollout nodejs -n nodejs
Once the new version of the application is promoted, you will notice that Revision 2 is now marked as the `active` and `stable` revision.
Performing a rollback
Suppose you have promoted the new version of the application to the `active` revision and notice that it is not working as expected. In this case, you can roll back to the previous version (Revision 1).
To roll back to the previous version of the application from the Argo Rollouts dashboard, click the Rollback button corresponding to the previous version (Revision 1).
Once you click Rollback, you will notice that Revision 1 is removed and a new Revision 3 is created. Revision 3 is a new instance of the previous version of the application (Revision 1). However, Revision 2 is still marked as the `active` and `stable` revision, because the `svc-active` service still points to it. To access the previous version of the application at this stage, use the external IP address of the `svc-preview` service.
To complete the rollback, you need to promote Revision 3 to the `active` revision. To do so from the Argo Rollouts dashboard, click Promote on the Rollout details page. Revision 3 is now marked as the `active` and `stable` revision.
Now, if you try to access the application using the external IP address of the `svc-active` service, you will see that the application is running the previous version. This is because the `svc-active` service now points to the previous version (Revision 3) of the application. You can also access the previous version using the external IP address of the `svc-preview` service.
You can also roll back to the previous version of the application with the Argo Rollouts kubectl plugin, using the following commands:
kubectl argo rollouts undo nodejs -n nodejs
kubectl argo rollouts promote rollout nodejs -n nodejs
Accessing the Active and Preview Revisions on AKS
The active and preview revisions of the application can be accessed through the external IP addresses provided by the `svc-active` and `svc-preview` services, respectively.
Since we created all the resources in the `nodejs` namespace, use the following command to get all the resources in the namespace:
kubectl get all --namespace nodejs
To get the IP address of `svc-active`, copy the `EXTERNAL-IP` corresponding to `service/svc-active` and paste it into your browser. Similarly, to get the IP address of `svc-preview`, copy the `EXTERNAL-IP` corresponding to `service/svc-preview`.

You can access the application at `http://<EXTERNAL-IP>`. In my case, that was http://40.71.239.51/.
Now, if you have not yet promoted the new revision to `active`, you can access the new version of the application through the external IP address of the `svc-preview` service, while the `svc-active` service still points to the previous version.

If you have promoted the new revision to `active`, you can access the new version through the external IP addresses of both the `svc-active` and `svc-preview` services.

If you have clicked Rollback but not yet promoted the restored revision, the previous version is available through the `svc-preview` service, while the `svc-active` service still points to the new version.

In this tutorial, since you completed the rollback by promoting Revision 3, the previous version is set as the `active` revision, so both the `svc-active` and `svc-preview` services point to the previous version of the application.
Conclusion
In this tutorial, you learned how to use progressive delivery to deploy your applications continuously on a Kubernetes cluster, following GitOps practices with CircleCI, ArgoCD, and Argo Rollouts.
Once the pipeline is properly configured, you can use the blue-green rollout strategy to deploy new versions of the application to the `preview` environment. Once the application is tested in the `preview` environment, you can promote it to the `active` environment. If the new version of the application is not working as expected, you can easily roll it back to the previous version.
If you want to learn more about various rollout strategies, continue learning with canary vs blue-green deployment to reduce enterprise downtime. The complete source code for this tutorial can also be found here on GitHub.