This post, written by Ron Powell, CircleCI Senior Manager of Marketing Insights and Strategy, originally appeared on The New Stack.

If you follow technology news, it might seem like Kubernetes is everywhere. It has become hugely popular. In fact, developers and DevOps teams might feel like their application development pipeline is hopelessly outdated if they aren’t using Kubernetes.

Kubernetes is an orchestration tool for containerized applications. Starting with a collection of Docker containers, Kubernetes can control resource allocation and traffic management for cloud applications and microservices.

As such, it simplifies many aspects of running a service-oriented application infrastructure. Along with modern continuous integration and continuous deployment (CI/CD) tools, Kubernetes provides the basis for scaling these apps without huge engineering effort.

There’s a lot of excitement about using Kubernetes to manage cloud and hybrid cloud environments. And it is a great tool. But teams often chase the hype and move to Kubernetes prematurely, sometimes at a great cost in time, money, and developer frustration.

In this article, we will try to look beyond the hype and help you answer the question: do I actually need Kubernetes?

What does Kubernetes do, anyway?

As an orchestration tool for containerized applications, Kubernetes is responsible for:

  • Deploying images and containers
  • Managing the scaling of containers and clusters
  • Balancing resources across containers and clusters
  • Traffic management for services

Kubernetes really shines when your application consists of multiple services running in different containers. For a monolithic application with a static user base, it may be more machinery than you need.

The task of building, testing and delivering your application to a container registry is not part of Kubernetes. Here, CI/CD tools for building and testing applications do the job. Kubernetes, as a part of a CI/CD pipeline, then helps you deploy changes to production without any downtime.

Improve your monolith

Most applications start their life as a monolith, and this is great! With an entire application in one place, it is quick and easy to make and deploy changes. But if your app finds success and grows, you will soon need to find ways to scale it.

Does that mean it is time for Kubernetes? Probably not.

It is tempting to think that only microservices orchestrated via Kubernetes can scale — you’ll read a lot of this on the internet. But scaling is usually more about the application’s internals than about the high-level architecture and tooling. For instance, you can scale a monolith by deploying multiple instances with a load balancer that supports affinity flags.
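As a sketch, session affinity for a load-balanced monolith can be set up in a reverse proxy such as NGINX. The host names below are placeholders; `ip_hash` is NGINX’s built-in client-IP affinity mechanism:

```nginx
# Hypothetical NGINX upstream: three identical instances of the monolith.
# ip_hash pins each client IP to the same backend, giving session affinity
# without changing the application itself.
upstream monolith {
    ip_hash;
    server app1.internal:8080;
    server app2.internal:8080;
    server app3.internal:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://monolith;
    }
}
```

Scaling then becomes a matter of adding another `server` line and starting another instance.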

A practical first step to consider when scaling an application is Test Driven Development (TDD), which ensures quality and prevents defects as your application grows. While smaller modules or services are easier to test, the modularity also means increased demand for mocking, as well as additional tooling to configure and maintain. Good testing makes it easier to build and scale your application confidently.
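To illustrate the test-first workflow, here is a hedged Python sketch; `PriceService` and its `fetch_rate` dependency are hypothetical names, and the external client is mocked so the test stays fast and deterministic:

```python
from unittest.mock import Mock

# Hypothetical service under test: converts an amount using an
# exchange rate fetched from an external dependency.
class PriceService:
    def __init__(self, rate_client):
        self.rate_client = rate_client

    def convert(self, amount, currency):
        rate = self.rate_client.fetch_rate(currency)
        return round(amount * rate, 2)

# The test is written first (TDD) against a mock of the dependency.
def test_convert_uses_fetched_rate():
    client = Mock()
    client.fetch_rate.return_value = 0.5
    service = PriceService(client)
    assert service.convert(10, "EUR") == 5.0
    client.fetch_rate.assert_called_once_with("EUR")

test_convert_uses_fetched_rate()
```

The mock is exactly the extra tooling mentioned above: a small cost that buys an isolated, repeatable test.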

As you start to scale out a monolith, configuration management tools like Chef and Ansible come in handy. You can use them to automatically configure new servers to ensure they are ready to run your application. You can even go a step further and use a tool like Terraform to help provision new server VMs, so that you won’t have to create them manually.
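For example, a minimal Terraform sketch that provisions one application server might look like the following; the region, AMI ID, and names are placeholders:

```hcl
# Hypothetical Terraform config: provisions one EC2 instance for the app.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.small"

  tags = {
    Name = "monolith-app-server"
  }
}
```

Raising capacity is then a matter of changing the config and running `terraform apply`, rather than clicking through a cloud console.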

When other parts of your application become a bottleneck — for example, the database — you may scale those, too. Various patterns allow applications to scale even though they are, by design, not very scalable. For example, if your database becomes a bottleneck, you can move frequently-accessed data to a high-performance in-memory data store, like Redis, to reduce load on your database.
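The idea behind that pattern is cache-aside: check the fast store first, fall back to the database on a miss, then populate the cache. Here is a minimal Python sketch in which a plain dict stands in for Redis and `load_user_from_db` is a hypothetical database call:

```python
# Cache-aside sketch. A dict stands in for Redis here; with the redis-py
# client you would use r.get(key) / r.set(key, value, ex=ttl) instead.
cache = {}

def load_user_from_db(user_id):
    # Hypothetical (slow) database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]              # cache hit: no database load
    user = load_user_from_db(user_id)  # cache miss: query once
    cache[key] = user                  # populate the cache for next time
    return user
```

Every repeated read for the same user is now served from memory instead of the database.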

No matter what configuration management and provisioning tools you use, a good CI/CD pipeline is a must. You might have copied a .zip file to your server via FTP the first time you deployed your app, but that approach does not scale. A streamlined CI/CD pipeline ensures that your application gets built, tested and deployed automatically, without any extra work from you or your team.
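As an illustrative sketch, a minimal CircleCI pipeline that builds and tests on every push might look like this; the Docker image and commands are placeholders for your app’s stack:

```yaml
# Hypothetical .circleci/config.yml: build and test on every push.
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/node:18.17 # placeholder image for the app's stack
    steps:
      - checkout
      - run: npm ci            # install dependencies
      - run: npm test          # run the test suite
workflows:
  main:
    jobs:
      - build-and-test
```

A deploy job would hang off the same workflow, gated on the tests passing.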

Together, these practices will make your life easier if you do eventually need Kubernetes.

You can even auto-scale monolithic applications using cloud services like AWS Elastic Beanstalk, Google App Engine, or Azure App Service. These all impose far less administrative overhead than Kubernetes, and they all play nicely with CI/CD tools.

When working with a new application, focus on developing the best application possible. Eventually, a complex tool like Kubernetes may be the right way to manage your application’s infrastructure. While you are still growing, however, the practices we’ve just discussed are far more practical.

Augment your monolith

As your application continues to grow, you will eventually reach a point where continuing to add to the monolith seems untenable. This is often because the app is near the limit of what a single dev team can work on.

At this point, many teams choose to split up their monolith and move fully into microservices. While this is a rather popular decision, it is neither a necessary one nor a silver bullet. Consider starting out by adding services that augment your monolith instead of replacing it. Some of these support services may actually be microservices — so you can benefit from using small services where it makes sense, while still leveraging the benefits of your monolith.

Even when introducing services and microservices, you may not need or want to start with Kubernetes. Kubernetes excels at running and scaling pods of related service and microservice containers. However, there are aspects of adopting Kubernetes that are easy to overlook at first. For example, Kubernetes does not include robust built-in tools for securing pods, nodes, and clusters, and deploying Kubernetes clusters across a multi-cloud environment can add a lot of complexity.

Starting with a single-cloud platform like Azure Service Fabric or AWS Fargate may make it easier to spin up and scale services, without forcing you to take on managing a Kubernetes cluster.

Another option is to avoid services with maintenance overhead altogether and opt for Functions as a Service (FaaS), such as AWS Lambda or Azure Functions. FaaS is a great way to minimize potential infrastructure overhead when adding services to your application. Moreover, a Kubernetes-orchestrated cluster — if you eventually need one — can be enhanced with FaaS capabilities. By starting with a mixture of services and functions, the migration to Kubernetes is less complicated than it sounds.
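To show how small the surface area is, here is a minimal AWS Lambda function in Python. AWS invokes `lambda_handler(event, context)` for you; the body below is a hypothetical example:

```python
import json

# Minimal AWS Lambda handler sketch: no servers to provision or maintain.
# AWS calls lambda_handler(event, context) on each invocation.
def lambda_handler(event, context):
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is a plain function, it is also easy to unit-test locally before deploying.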

No more monolith

Now imagine that your original monolithic application has grown so much that you can barely see the original monolith.

Its needs and its user base have grown so far and so fast that you now have a wide variety of helper services, many of which need to talk to each other. You need to make sure the services that depend on each other are always running and visible to each other.

In addition, your customers are demanding the kind of uptime and reliability that make you consider running across multiple availability zones, possibly even across multiple cloud vendors.

This is the point at which you probably need an orchestrator like Kubernetes. It lets you easily define the modules (Pods) of related services and lets you automatically scale them and load-balance between them.

For Kubernetes to be worthwhile, most of the following should be true:

  • You are willing to operate a couple of virtual machines
  • You can assign people to do configuration and maintenance of Kubernetes
  • You have more than a single service to take care of
  • You need to automate (as much as possible) a mostly homogeneous service deployment
  • You need to be cloud (or hosting) provider-agnostic

Furthermore, Kubernetes has built-in support for high-availability (HA) deployments, which makes it easier to increase app reliability and availability. Of course, it does impose overhead: it takes time and engineering resources to create and manage clusters, define pods, and create containerized apps suitable for deployment to Kubernetes. But if your app is big enough to benefit from Kubernetes, the administrative overhead is worth it.
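To make that payoff concrete, here is a hedged sketch of the kind of manifest you would write: a Deployment that keeps three replicas of a hypothetical `orders` service running, and a Service that load-balances across them. The image path is a placeholder:

```yaml
# Hypothetical Kubernetes manifest: three replicas of one service,
# load-balanced behind a stable Service address.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0 # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

If a pod dies, Kubernetes replaces it; change `replicas` and it scales the set up or down for you.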


Kubernetes is powerful, but that does not mean it’s the right choice for every team and every app. As with any piece of technology, it is made to solve a certain set of problems. If you aren’t facing the problems Kubernetes aims to solve, it is more trouble than it’s worth.

Don’t just give Kubernetes a place in your infrastructure toolbox. Make Kubernetes earn it, especially at the beginning of a new application’s life.

First, use available tools to get the app out quickly. When your application reaches a stage where deployment and scaling is becoming a job of its own, it makes sense to start thinking about orchestration — and, quite naturally, Kubernetes as your orchestration tool. Once you are ready, check out Getting started with Kubernetes: how to set up your first cluster.