With companies expecting software products to handle ever-increasing request volumes and network bandwidth use, apps must be primed for scale. If you need resilient, resource-conserving systems with rapid delivery, it is time to design a distributed system. Architecting a heterogeneous, secure, fault-tolerant, and efficient distributed system takes care and experience. This playbook will help you steer clear of common problems that can sabotage your design efforts.

Before we get into tips and best practices for designing your distributed system, it might be helpful to look back at the evolution of software architecture.

A brief history of software architecture

Imagine all the different, complex, and (quite often now) physically distant components and services that must communicate with each other just to ensure that Google Maps takes you to Manhattan, Kansas, and not Manhattan, New York.

The industry did not achieve this level of efficiency and timeliness overnight. Instead, a series of need-inspired advancements got us to where we are today. When we look back and appreciate the journey so far, we can envision (and ultimately create) the future with better clarity.

Monolithic design

In the early days, when application developers wanted their applications to process large data sets, they built mainframe-based applications. This approach worked well when personal computers (PCs) were less widely used and end users had more computing experience than the average consumer.

As PCs became more common and the pool of users became less experienced, intuitive software programs that could run on independent machines became important. This independence gave rise to a new challenge: application-to-application communication. We met this challenge with advancements in network computing, such as remote procedure calls made over standard network protocols like the Transmission Control Protocol and Internet Protocol (TCP/IP).

However, environmental constraints arose. Users were deploying applications on many different operating systems, hardware platforms, and network protocols. This diversity created a further strain on inter-application interaction and data sharing.

Distributed computing became the inevitable next step.

Distributed computing

In the distributed computing software architecture model, independently developed objects and components, connected by network infrastructure, make up an application. This infrastructure enables and manages communication between the components regardless of their network location.

Different components and objects can reside on different computers. Their locations just need to be transparent to the application: they must interact as though they were local to the application invoking them.

Client-server architecture

Client-server architecture was the forerunner of distributed computing. In its two-layer design, the upper layer manages the application’s user interface (UI) and business logic (the client), while the lower layer manages data storage and organization (the server).

The UI and business logic must be closely coupled with the database server for smooth access to data. Applications like enterprise resource planning (ERP) software often use this model, since client interaction with a central database server is crucial to the business process.

However, the client-server application has its drawbacks. Since the algorithms and logic live on the client side, securing the application from hacks is a significant challenge. Because each client must be maintained individually and makes frequent calls to the server, the system is expensive to maintain, resource-intensive (relative to today), and challenging to scale, especially for complex business processes.

It is worth noting that client-server systems still exist, although they are most suited to database-oriented independent applications. Developers should generally avoid using this architecture to build a sizable component-oriented application.

Service-Oriented Architecture (SOA)

We have come quite some way from the traditional client-server architecture. With the advent of the dot-com era, developers created service-oriented architecture to improve on the conventional client-server model.

In this architecture, application components communicate over a network, providing services to each other. While SOA gave us the added benefit of business value and reusable, loosely coupled services, it still relied on monolithic systems with limited scaling.

In time, as business needs grew to surpass the SOA value offering, we were inevitably back to searching for something better.

Microservice architecture

In microservice architecture, developers build an application as a collection of discrete services (modules) with an efficient communication protocol binding them together. Developers divide the application into independent modules that handle different aspects of the business process.

These individual modules can use different databases, be written in distinct programming languages, be stored on different computers, and be deployed independently.

Distributed systems best practices

Although more and more applications are adopting a distributed architecture, it is not uncommon to find an application that starts out as microservices at design time but ends up as a near-monolith at deployment. Following these best practices should help you avoid that outcome.

Best practice 1 - split services based on function

The first thing you must do is componentize your application efficiently.

An ideal component is a software unit that you can develop and manage without depending on the application’s other units. In a microservice, this means breaking down your application into its constituent services.

The goal of building a distributed system is to develop an application that performs like choreography: Even though every part retains its independence, it must remain in sync with the whole. We want to be able to isolate underperforming members or modify the sequence without unintentionally affecting other members.
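
As a minimal sketch of this idea (the OrderService and InvoiceService names and data shapes are hypothetical, not taken from any particular system), each unit below owns its own state and interacts with the other only through a narrow, explicit interface:

    # Hypothetical decomposition: each service owns its own data and logic
    # and exposes a small, explicit interface to the rest of the system.

    class OrderService:
        def __init__(self):
            self._orders = {}          # this service's private store

        def place_order(self, order_id: str, amount: float) -> dict:
            order = {"id": order_id, "amount": amount, "status": "placed"}
            self._orders[order_id] = order
            return order

    class InvoiceService:
        def __init__(self):
            self._invoices = {}        # independent store; no shared state

        def create_invoice(self, order: dict) -> dict:
            invoice = {"order_id": order["id"], "total": order["amount"]}
            self._invoices[order["id"]] = invoice
            return invoice

    # The "choreography": each unit stays independent but in sync.
    orders, invoices = OrderService(), InvoiceService()
    invoices.create_invoice(orders.place_order("A-100", 49.99))

Because neither class reaches into the other’s store, you could replace, rewrite, or relocate either one without unintentionally affecting the other.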

Best practice 2 - clearly define service boundaries

In the software world, just as in many other fields, you must pay attention to the parts that make up the whole. How well you split your application into its component services will determine how smoothly processes synchronize and how much inter-service communication you need. A good boundary encloses a single business capability, so a change to one capability touches only one service.

Best practice 3 - determine how distributed services will communicate

In a microservice architecture, the constituent services are out-of-process components. They communicate via web service requests or remote procedure calls.

You should minimize communication between services. However, this is not always possible, for example when your services need to make several external calls, such as to a payment gateway.
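
As a rough sketch of a web service request between services (the payments URL and payload fields here are placeholder assumptions, not a real API), note the single, bounded call with an explicit timeout:

    import json
    import urllib.request

    # Hypothetical endpoint; in a real system this would come from service
    # discovery or configuration, not a hard-coded URL.
    PAYMENT_GATEWAY_URL = "http://payments.internal/api/charge"

    def charge(order_id: str, amount: float) -> dict:
        """Make one explicit, bounded call to an external service."""
        payload = json.dumps({"order_id": order_id, "amount": amount}).encode()
        request = urllib.request.Request(
            PAYMENT_GATEWAY_URL,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        # A timeout keeps a slow dependency from stalling this service.
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.load(response)

Keeping each external call in one well-defined function like this makes it easier to count, monitor, and cut down cross-service chatter later.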

Best practice 4 - choose interactive vs. batch

Your microservices need to communicate with each other and with other applications, so determine the optimal data processing strategy.

While end users interact with your microservices, you want to provide fast responses with minimal latency. To achieve this, you need to evaluate how and when the microservices will manage and process data. You also need to determine which data a component should store and how it should efficiently manage this data to ensure it is readily accessible.

Your application might need to process data in real time (interactive) or execute in the background (batch) on a schedule. For instance, a microservice app like Uber must accommodate high traffic. Therefore, it makes sense to process profile setup data in real time and run know-your-customer (KYC) checks in the background.
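
Here is a minimal sketch of that split, assuming a hypothetical profile service: the interactive path responds immediately, while a background worker drains the KYC queue on its own schedule.

    import queue
    import threading

    # Hypothetical split, following the Uber-style example above: profile
    # setup is handled inline (interactive), while KYC checks are queued
    # and processed in the background (batch).
    kyc_queue: "queue.Queue[dict]" = queue.Queue()

    def create_profile(user: dict) -> dict:
        """Interactive path: respond immediately, defer the slow work."""
        kyc_queue.put(user)            # hand off KYC to the batch worker
        return {"user": user["name"], "status": "active"}

    def kyc_worker() -> None:
        """Batch path: drain queued KYC checks on its own schedule."""
        while True:
            user = kyc_queue.get()
            # ... run document verification, sanctions checks, etc. ...
            kyc_queue.task_done()

    threading.Thread(target=kyc_worker, daemon=True).start()
    create_profile({"name": "Ada"})

In production the in-process queue would typically be a durable message broker, but the shape is the same: the user-facing call never waits on the batch work.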

Where to host your application

Thorough software architecture starts from a concept and grows from there. Why take the time to design great software only to have it underperform because of a not-so-great virtual machine or operating system?

Think in terms of functions, not the entire application. For continuous integration (CI), you want your functions to be fully portable and automation-friendly. You need to ask yourself:

  • Which functions should we host in a virtual machine, and which ones should we put in a container?
  • Which operating system is the best environment for this function: Windows, Linux, Unix, or another one?
  • Which programming language and database manager should the function use, and should we execute it in the cloud or in an enterprise data center?
  • What system architecture should our host system have: x64, x86, mainframe, or something else?

And so on.

Do not limit yourself to hardware solutions. Virtualization lets you replace traditional hardware such as servers, memory, and networking equipment with alternatives such as virtual machines, virtual memory, and virtual desktop infrastructure (VDI).

These solutions are easier to manage, faster to deploy, and generally more resilient than their hardware counterparts. They are less likely to break down and easier to fix when a problem arises. With virtual environments, you can reduce costs and hardware needs by making better use of the underlying resources.

This division is where you can reap one of the most significant benefits of a distributed system: You can do what is best for each component.

Tip: Choose hosting based on each component’s needs.

Performance and maintenance

When you have defined the process flow, your next consideration should be how you expect this process to perform under standard conditions. Are there resource or environmental constraints that will hamper performance, such as memory or processing power? If there are, you should be aware of them and make those limitations clear in the design.

Ensuring performance and ease of maintenance might mean reviewing your components to divide some even further or merge others. You want a design that performs optimally under standard conditions. Still, you do not want it to become so complex that it compromises security and turns maintenance into a Herculean task.

The key is to maintain an open mind so that you are not biased toward your design or any particular solution. Your metric should be efficiency. If there is a better way, then opt for better.

Tip: Keep an open mind and be ready to adapt.

Reliability

In reality, service failures are not 100 percent avoidable. Some failures result from network interruptions, insufficient system resources, or some other reason independent of the code.

There is little you can do to prevent these types of failures, but you can determine how your application behaves when they arise: it should maintain security and a smooth user experience.

The best practice is to design your functions so that partially executed updates roll back completely, ensuring data integrity. When a service fails, the management system should log the failure correctly. You should also determine whether the affected functions should re-run after a failure and whether they need to re-enter data.
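
A minimal sketch of this pattern, using an in-memory SQLite database as a stand-in for any transactional store (the table and the simulated failure are illustrative assumptions): either every update in the unit of work lands, or none do.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
    conn.commit()

    def transfer(amount: int) -> None:
        try:
            with conn:  # opens a transaction; commits only if no error
                conn.execute(
                    "UPDATE accounts SET balance = balance - ? WHERE name = 'a'",
                    (amount,),
                )
                raise RuntimeError("simulated mid-update failure")
        except RuntimeError as error:
            # The partial update was rolled back automatically; log the
            # failure and decide whether the function should re-run.
            print(f"transfer failed, state intact: {error}")

    transfer(50)
    print(conn.execute("SELECT balance FROM accounts").fetchall())  # unchanged

The same all-or-nothing shape applies whatever the store: wrap each logical update in one transaction so a failure leaves no half-written state behind.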

Tip: Enable rollbacks.

Security and privacy

Properly securing your distributed system is one of your biggest challenges. It is essential to adopt a security-by-design approach. In addition to securing each function, you must secure communication channels and prevent unauthorized access to sensitive parts.
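
As one hedged illustration of protecting a channel between services (the SERVICE_TOKEN variable and header shape are assumptions for this sketch; real deployments would layer this on top of TLS or mTLS and short-lived signed tokens): every request must prove it is authorized before touching a sensitive part.

    import hmac
    import os

    # Hypothetical shared secret, injected via the environment rather than
    # hard-coded into the source.
    SERVICE_TOKEN = os.environ.get("SERVICE_TOKEN", "dev-only-placeholder")

    def authorized(request_headers: dict) -> bool:
        """Reject any inter-service request without the expected token."""
        presented = request_headers.get("Authorization", "")
        # Constant-time comparison avoids leaking the token via timing.
        return hmac.compare_digest(presented, f"Bearer {SERVICE_TOKEN}")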

Privacy cannot be an afterthought, either. Carefully consider the privacy policies and regulations for your intended users’ geographical region.

Tip: Design security into your architecture from the beginning.

Conclusion

Designing a distributed system is like solving a puzzle. It has long-term benefits if you do it right (especially in DevOps), but you must get it right from the design stage. Take sufficient time to evaluate, then redesign or modify your design where necessary, as many times as you need to.

Centralized systems are still valid. Transitioning from a monolithic architecture to a microservice architecture is not like going from analog to digital; it is not an upgrade every application must make.

Your business needs and your application’s utility should drive your choice of software architectural model. However, as the industry builds more complex systems, distributed systems become even more crucial.