Taking a Cloud-Native Approach

Patterns, Paradigms, and Protocols

AlgoRhythmic Systems
Jan 16, 2021

Modern and Dynamic

The economies of scale introduced by cloud services continue to drive down the cost of computing. Moreover, organizations now have access to a richer set of tools than ever before, enabled by solution-as-a-service models of technology acquisition. But cloud adoption is not without its challenges. Organizations that relied on traditional IT services in the past must embrace new design, development, and deployment patterns that reflect a cloud-native approach to computing.

Taking a Cloud-Native Approach

The Cloud Native Computing Foundation (CNCF) was founded in 2015 to promote container technology and provide a common framework for its evolution. According to the CNCF, cloud-native technologies can enable organizations to “build and run scalable applications in modern, dynamic environments”. Containers sit alongside microservices, service meshes, immutable infrastructure, and APIs at the center of a cloud-native approach.

Cloud Native Computing Foundation

Core to the CNCF’s definition of cloud native is the idea that loosely coupled systems are easier to make resilient, manageable, and observable. And while these ideas are not new, they take on added importance for organizations seeking to leverage the cloud.

Starting with SOA

One of the most prominent forerunners of the cloud-native approach is service-oriented architecture (SOA). The term first gained popularity amongst enterprise architects in the late 1990s and was formalized by The Open Group in a 2007 whitepaper that eventually grew into The SOA Source Book. According to The SOA Source Book, the fundamental unit of an enterprise architecture should be a service. A service, as defined by The Open Group, logically represents a single business activity with a specified outcome. For a service to conform to SOA patterns, it must be self-contained and function as a black box to its consumers.

Services on the Cloud

The design principles introduced by service-oriented architecture placed a great deal of emphasis on developing application programming interfaces, or APIs. APIs provide a technology-agnostic mechanism for transferring data between services. Using APIs, services can communicate with each other without any concern for the implementation details of adjacent services. This paradigm enables engineers to connect services rapidly by focusing only on the data being exchanged.

For example, before adopting a service-oriented architecture, a bank may have employed a tightly coupled system in which the deposit and balance functions were encapsulated in a single application. When a regulatory update required a change to the balance function, application engineers would need to expend considerable energy ensuring that the deposit function remained operational.

But with an SOA approach, the deposit and balance functions can be split into two distinct services. If the deposit service needs to communicate with the balance service, it can use an API that requires no knowledge of how the balance service operates. Instead, it focuses only on the relevant information it needs to transmit. This shift empowers developers to update one service with confidence that the change will not adversely affect another.
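
As a sketch of what this looks like in practice, the snippet below shows the deposit service’s side of the conversation, assuming a hypothetical REST endpoint on the balance service; the host name, path, and JSON field names are all invented for illustration.

```python
# deposit_service.py -- a sketch of the deposit service calling the
# balance service over a hypothetical REST API.
import requests

# Hypothetical address of the balance service; in practice this would
# come from service discovery or configuration.
BALANCE_SERVICE_URL = "http://balance-service:8080"

def record_deposit(account_id: str, amount: float) -> float:
    # The deposit service only needs the balance service's contract:
    # POST a credit, receive the updated balance back as JSON. How the
    # balance service computes or stores balances is invisible here.
    response = requests.post(
        f"{BALANCE_SERVICE_URL}/accounts/{account_id}/credits",
        json={"amount": amount},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["balance"]
```

If a regulatory update forces a change inside the balance service, this code keeps working unchanged, as long as the contract above is honored.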

Managing Microservices

As service-oriented architectures gained adoption, software engineers quickly discovered that the smaller the service, the more amenable it was to updates. The microservices approach reflects this realization: it is a variant of SOA that emphasizes making each service as fine-grained as possible. This design pattern has gained traction with engineers, who note that it increases the speed at which updates can be delivered and decreases the likelihood that those updates will have unintended consequences.

Monolith to Microservices

Speed and reliability are just two of the many advantages realized by engineers employing a microservices mindset. One technology that has grown in popularity alongside microservices is the queuing mechanism. When enterprise architectures were monolithic, or tightly coupled, the speed of the entire system was limited by its slowest component. By decoupling applications into microservices, engineers can create more flexible and dynamic system interfaces that reflect the complexity of the challenges they are seeking to solve.

Imagine administering a tightly coupled system in which an application that can produce 100 data points every second feeds a database that can only consume 50 data points per second. To ensure that the application does not overwhelm the database, engineers have to throttle the application to 50 data points per second, only half of its potential capacity.

But if the application function and the database function are split into microservices, a new queuing service can be introduced to buffer the traffic. The application microservice can continue to operate at full capacity and store all the data points it produces in a queue. The database microservice can ingest data points from the queue without regard for how fast the application microservice is producing them. Furthermore, multiple databases can now pull from the queue, depending on the metadata associated with the data points. Queuing is just one illustration of how breaking monoliths into microservices introduces greater flexibility and agility into the development process.
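
The pattern can be sketched with Python’s standard-library queue. In production the buffer would be a managed queuing service such as SQS, RabbitMQ, or Kafka, and the producer and consumers would be separate processes rather than threads; the rates below mirror the 100-versus-50 example above.

```python
import queue
import threading
import time

buffer = queue.Queue()  # stands in for the queuing service

def application():
    # The producer runs at full capacity (~100 points/second) and
    # enqueues results instead of writing straight to the database.
    for i in range(200):
        buffer.put({"point": i})
        time.sleep(0.01)

def database():
    # Each consumer drains the queue at its own pace (~50 points/second);
    # the producer never has to throttle to match it.
    while True:
        buffer.get()
        time.sleep(0.02)  # simulated write latency
        buffer.task_done()

threading.Thread(target=application).start()
for _ in range(2):  # two databases together keep pace with one producer
    threading.Thread(target=database, daemon=True).start()
buffer.join()  # returns once every queued point has been ingested
```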

Containerized Computing

In 2013, Docker launched a container technology that aimed to solve a common problem for developers: the development environment differs from the test environment, which in turn differs from the production environment. The result is developers running successful tests only to find that their applications fail in production. Instead of deploying applications onto virtual machines, developers can use containerization to package their application code and its dependencies into an image that will run consistently on any host with a container runtime, regardless of the underlying operating system (OS). Freed from worrying about OS considerations, developers can focus on the logic of their code.

Docker Containers
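
A minimal Dockerfile makes the idea concrete: the application code and its dependencies are baked into a single image that runs the same way in development, test, and production. The file names and base image below are assumptions for illustration.

```dockerfile
# Sketch of a Dockerfile for a hypothetical Python service.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies inside the image so every environment
# runs the exact same versions.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY app.py .

CMD ["python", "app.py"]
```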

The container abstraction has introduced a host of benefits beyond managing disparate environments. One of the most popular development patterns that containerization has enabled is the blue-green deployment. When using a blue-green deployment pattern, developers split the containerized application into two groups: one that actively handles requests and one that serves as a staging environment for updates. Application engineers can then stage their changes in production and slowly direct traffic toward the new instances, minimizing the chance of a failed deployment.
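
The traffic-shifting idea behind blue-green deployment can be sketched in a few lines of Python. In a real rollout the weights would be adjusted at a load balancer or orchestrator rather than in application code, and the instance addresses below are invented.

```python
import random

# Two pools of container instances: "blue" serves production traffic,
# "green" holds the staged update. Addresses are illustrative.
POOLS = {
    "blue": ["10.0.1.10", "10.0.1.11"],
    "green": ["10.0.2.10", "10.0.2.11"],
}

def pick_backend(green_weight: float) -> str:
    # Route to green with probability green_weight; otherwise stay
    # on the proven blue pool.
    color = "green" if random.random() < green_weight else "blue"
    return random.choice(POOLS[color])

# Shift traffic toward green in stages; rolling back is just a matter
# of dropping the weight to zero again.
for weight in (0.05, 0.25, 0.50, 1.00):
    print(f"{weight:.0%} to green; sample request -> {pick_backend(weight)}")
```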

Another development pattern enabled by containerization is immutable infrastructure. In traditional IT environments, ensuring consistency across instances during updates proved to be a considerable challenge. But once applications are encapsulated in containers, it becomes practical to simply terminate old instances and direct traffic to brand-new ones. This approach removes the operational burden of preventing configuration drift during patching and allows organizations to give their users a consistent and stable experience.
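
Here is a sketch of that replace-rather-than-patch workflow using the Docker SDK for Python; the image tags, container names, and port mapping are invented, and the traffic switch is left abstract.

```python
import docker

client = docker.from_env()

# Never patch a running instance. Start a fresh container from the
# new image alongside the old one...
client.containers.run("myapp:2.0", name="myapp-v2",
                      detach=True, ports={"8000/tcp": 8001})

# ...repoint the load balancer at myapp-v2 (not shown)...

# ...then terminate and discard the old instance outright. Nothing is
# patched in place, so configuration drift has nowhere to accumulate.
old = client.containers.get("myapp-v1")
old.stop()
old.remove()
```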

Becoming Cloud Native

Many organizations are no longer encumbered by the upfront investment associated with provisioning data centers. Cloud technologies have made it possible for organizations to use the same agile approach they take with their codebase and apply it to their infrastructure. Dividing applications into services, focusing on interfaces, and abstracting away OS administration with containers are all powerful tools for enterprises that are seeking to maximize the benefit of cloud technologies.

But despite the flexibility that cloud-native approaches enable, developers have realized that new management techniques are required to ensure that containerized microservices continue to exceed the value delivered by their monolithic predecessors. Service meshes have emerged as a popular mechanism for facilitating service-to-service communication in cloud-native architectures. Using sidecar proxies, service meshes give developers the infrastructure to manage the routing, security, and traceability of their containerized microservices.
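
As a toy illustration of what a sidecar does, the sketch below fronts a co-located service, forwards each request, and injects a trace ID header so calls can be correlated across services; the ports and header name are assumptions, and production meshes such as Istio or Linkerd do this transparently with purpose-built proxies like Envoy.

```python
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

APP_URL = "http://localhost:8000"  # the service this sidecar fronts

class Sidecar(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reuse the caller's trace ID if present; otherwise mint one.
        trace_id = self.headers.get("X-Trace-Id", str(uuid.uuid4()))
        # Forward to the co-located service, propagating the trace ID.
        upstream = requests.get(APP_URL + self.path,
                                headers={"X-Trace-Id": trace_id}, timeout=5)
        # Observability for free: every hop logs the same trace ID.
        self.log_message("trace=%s path=%s status=%s",
                         trace_id, self.path, upstream.status_code)
        self.send_response(upstream.status_code)
        self.send_header("Content-Type",
                         upstream.headers.get("Content-Type", "text/plain"))
        self.end_headers()
        self.wfile.write(upstream.content)

HTTPServer(("", 9000), Sidecar).serve_forever()
```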

Whatever technologies are deployed, the strategy for becoming cloud native remains consistent: modern IT infrastructure should reflect the ephemeral nature of cloud computing. By leveraging the ability to scale up, down, in, and out quickly, organizations are better positioned than ever to respond and adapt to their end users.
