Why Should You Have a Microservices Management Platform? - Part 2
By Hasitha Abeykoon
- 18 May, 2022
In our previous article, we discussed why digitalization matters to companies and the technologies and concepts used to accelerate it. We highlighted the use of PaaS platforms and microservices architecture, and went in depth on the latter's benefits, implementation challenges, and possible workarounds.
Today, we will see how a microservices management platform can help overcome the challenges mentioned in part 1, and how the features of such a platform assist developers. Let's dive in.
Any company that wants to move its digital assets to the cloud needs cloud infrastructure to begin with. Services like Amazon EC2 and Google Compute Engine provide the basics: compute workspaces, networking, firewalls, and storage. Choose the region for your infrastructure based on where you do business, as this minimizes latency for your customers' requests.
An infrastructure layer is sufficient to start deploying applications. However, value-added services such as databases, caches, messaging capabilities, and application servers must also be present to run and manage applications; otherwise, they have to be implemented from scratch. That is why many enterprises choose a Platform-as-a-Service (PaaS): it lets their developers focus on implementing business applications and go to market sooner. Figure 1 shows the reference architecture for a typical PaaS.
While PaaS platforms are useful when starting your cloud journey, depending on one alone can become restrictive as the system's complexity and scalability requirements grow. PaaS systems do let you manage microservice applications independently, but those capabilities are often limited. The best way to mitigate these restrictions is to build your application on a microservices management platform.
As shown in Figure 2, a microservices management platform is a thin layer between the application and infrastructure layers. Unlike PaaS systems, it lets developers choose their own databases, caches, and queues instead of being limited to the defaults the PaaS provides. This freedom does not mean that different components and services in the same platform should each use heterogeneous databases and messaging platforms; it is wise to pick a small set of options and use them across services, making them easier to learn, operate, and support.
First, let’s look at the composition of a typical microservice as shown by Figure 3.
As shown above, a system will contain a set of interconnected microservices. The microservices management platform layer provides capabilities to easily develop, build, and manage these microservices. Without these capabilities, configuring, deploying, and running several microservices will be a challenge.
A good microservices platform includes a set of features that facilitate efficient development, building, deployment, and management of microservices. These are shown in Figure 4 below and elaborated on in the following sections.
To get the most out of microservices, developers must be able to deliver their improvements quickly and reliably. If provisioning is done manually (building from source, setting up configurations, and deploying to existing environments), there is a high likelihood of instability in the system. The best way to mitigate these issues is an automated CI/CD pipeline. In the CI process, deployable artifacts are created from the source code. The package format depends heavily on the programming language; the most generic package is a container image. In the CD process, that container is deployed with its resource and scheduling configuration. Kubernetes is the most popular container orchestration platform, and several platforms built on top of it add further value.
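To make the CD step concrete, the sketch below renders an immutable image tag from a commit hash and a minimal Kubernetes Deployment manifest with resource settings. All names, the registry URL, and the resource values are illustrative assumptions, not a prescribed setup.

```python
# Hypothetical sketch of the CD step: tag the built container image with the
# commit that produced it, then render a minimal Deployment manifest (as a
# dict, ready to serialize to YAML). Names and values are illustrative only.

def image_tag(service: str, git_sha: str,
              registry: str = "registry.example.com") -> str:
    """Tag images immutably with the commit that produced them."""
    return f"{registry}/{service}:{git_sha[:12]}"

def deployment_manifest(service: str, image: str, replicas: int = 2) -> dict:
    """Render a minimal Kubernetes Deployment spec with resource requests."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": service},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": service}},
            "template": {
                "metadata": {"labels": {"app": service}},
                "spec": {
                    "containers": [{
                        "name": service,
                        "image": image,
                        "resources": {
                            "requests": {"cpu": "100m", "memory": "128Mi"},
                            "limits": {"cpu": "500m", "memory": "256Mi"},
                        },
                    }]
                },
            },
        },
    }

tag = image_tag("orders", "9f8e7d6c5b4a3f2e1d0c")
manifest = deployment_manifest("orders", tag, replicas=3)
```

In a real pipeline, the CI stage would build and push the image, and the CD stage would apply this manifest to the cluster.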
While a microservices management platform typically provides the above capabilities, be aware that some platforms only manage running microservices and do not include CI/CD.
Another concern with automated deployments is version management. Every deployed image must carry a version and be immutable. When a new version is deployed, the platform should manage the rollout to prevent service interruptions or downtime. Rolling updates, blue-green deployments, and canary deployments are techniques used to achieve this.
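The zero-downtime idea behind a rolling update can be shown with a toy simulation: pods are replaced in small batches, so at every step most replicas keep serving traffic. The pod and version labels are illustrative.

```python
# Illustrative simulation of a rolling update: replace old-version pods in
# batches of at most `max_unavailable`, so some replicas always serve traffic.
def rolling_update(pods: list, new_version: str,
                   max_unavailable: int = 1) -> list:
    """Return the cluster state after each batch; at every step at least
    len(pods) - max_unavailable pods are running and serving."""
    states = []
    current = list(pods)
    for i in range(0, len(current), max_unavailable):
        for j in range(i, min(i + max_unavailable, len(current))):
            current[j] = new_version  # recreate this pod on the new version
        states.append(list(current))
    return states

steps = rolling_update(["v1", "v1", "v1"], "v2", max_unavailable=1)
# one pod is upgraded per step; two of three keep serving throughout
```

Blue-green and canary strategies follow the same principle but shift traffic between whole parallel deployments instead of replacing pods in place.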
When one set of teams builds and provisions components and services, other teams must be aware. For example, if a team is working on coarse-grained components, they might want to integrate existing microservices. To do that, they must know which components and services are already available. Fortunately, a package repository stores built packages in a versioned, searchable manner.
Feature flags are a way of changing system behavior without changing the code. Sometimes developers need to expose a feature to a subset of users (i.e., perform a canary release) or control behavior based on a set of permissions; both can be done with feature flags. In the world of microservices, if you need to disable a particular business capability, you can switch off the microservice delivering that feature. This is feasible because microservices are designed around business boundaries. If flags need to be managed across microservices, a better approach is to implement flag management as a separate microservice. Feature flags are also useful for A/B testing and for enabling deployment patterns like canary releases. Here is CloudBees' experience around it. Feature flag management can be provided by the microservices management layer.
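A minimal flag check with percentage-based rollout, one common way canary releases are gated, might look like the sketch below. The flag names, percentages, and bucketing scheme are illustrative assumptions.

```python
import hashlib

# Hedged sketch of a feature-flag check supporting per-user canary rollout.
# Flag names and percentages are illustrative.
FLAGS = {
    "new-checkout-flow": {"enabled": True, "rollout_percent": 20},
    "dark-mode":         {"enabled": True, "rollout_percent": 100},
    "legacy-reports":    {"enabled": False, "rollout_percent": 100},
}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Deterministic bucketing: hashing flag+user places each user in a
    # stable bucket 0-99, so the canary cohort stays the same per request.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < cfg["rollout_percent"]
```

Because the bucket is derived from a hash rather than a random draw, a given user consistently sees the same behavior while the flag's percentage is unchanged.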
Routing incoming traffic to the relevant services and generating alerts is essential for a microservices management platform. As the number of microservices grows, managing traffic becomes a problem. Load balancing between nodes in the same cluster and automated scaling are usually provided by the platform. Some routing capabilities may come from the orchestration engine (e.g., Kubernetes) at the core of the microservices management platform.
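The platform normally handles load balancing for you, but the core idea is simple to sketch. Below is a toy round-robin balancer; the pod names are placeholders and a production balancer would also track node health.

```python
from itertools import cycle

# Toy round-robin load balancer distributing requests across cluster nodes.
# Pod names are illustrative; real balancers also handle health checks.
class RoundRobinBalancer:
    def __init__(self, nodes):
        self._nodes = list(nodes)
        self._cycle = cycle(self._nodes)

    def next_node(self) -> str:
        """Pick the next node in rotation for the incoming request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["pod-a", "pod-b", "pod-c"])
picks = [lb.next_node() for _ in range(4)]
# requests rotate a, b, c, then wrap back to a
```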
When microservices communicate with each other, sometimes they can discover one another and connect instantly. However, there are cases where credentials such as usernames and passwords are required; a typical example is connecting a service component to a database service. When SSL is needed for inter-component communication, SSL certificates must be managed and configured. Kubernetes has built-in secret management capabilities, while HashiCorp Vault is well known for securing and managing tokens, passwords, certificates, and encryption keys.
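A common pattern, regardless of whether Kubernetes Secrets or Vault is the backing store, is that the application reads credentials from its environment rather than hard-coding them. The variable names below are assumptions for this sketch.

```python
import os

# Illustrative helper: resolve database credentials from the environment,
# mirroring how Kubernetes injects Secrets as env vars or mounted files.
# The ORDERS_DB_* variable names are assumptions for this sketch.
def database_credentials(prefix: str = "ORDERS_DB") -> dict:
    user = os.environ.get(f"{prefix}_USER")
    password = os.environ.get(f"{prefix}_PASSWORD")
    if not user or not password:
        raise RuntimeError(
            f"Missing {prefix}_USER / {prefix}_PASSWORD in environment"
        )
    return {"user": user, "password": password}
```

Keeping secrets out of source code and images means they can be rotated by the platform without rebuilding or redeploying the service.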
Service discovery allows microservices to find each other automatically so they can communicate without manual intervention. The selected platform should provide this feature because microservices do not have hard-coded endpoints. Once you promote the system from one environment to another (e.g., Dev → Prod), the endpoints differ, so services must be able to find and connect to each other at runtime.
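Conceptually, callers resolve a logical service name against a per-environment registry instead of a fixed URL. The registry contents and endpoint URLs below are illustrative assumptions.

```python
# Sketch of environment-aware service discovery: callers ask the registry
# for an endpoint by logical name instead of hard-coding URLs.
# Entries and URLs are illustrative only.
REGISTRY = {
    "dev":  {"orders": "http://orders.dev.svc.cluster.local:8080"},
    "prod": {"orders": "http://orders.prod.svc.cluster.local:8080"},
}

def resolve(service: str, environment: str) -> str:
    """Return the endpoint for `service` in `environment`."""
    try:
        return REGISTRY[environment][service]
    except KeyError:
        raise LookupError(
            f"No endpoint registered for {service!r} in {environment!r}"
        )
```

In Kubernetes, this role is played by cluster DNS: the same logical name resolves to the right endpoint in whichever environment the service runs.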
Every microservice has a lifecycle. When auto scaling occurs due to increased traffic, the platform needs to create more containers and plug them into the existing cluster. Afterwards, the load balancers route traffic to the new containers. Similarly, when scaling down, containers are terminated and the load balancers stop routing traffic to them.
Aside from auto scaling, auto healing can be leveraged: a container that is erroneous or malfunctioning is recycled (i.e., destroyed and restarted). Lifecycle management of microservices should be handled automatically by the microservices management platform. It is important that lifecycle-related configuration is set carefully to prevent undesirable system behavior.
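The scaling decision itself is a small formula. Kubernetes' Horizontal Pod Autoscaler, for example, computes the desired replica count as ceil(currentReplicas × currentMetric / targetMetric), clamped to configured bounds. The min/max values below are illustrative.

```python
import math

# The scaling rule used by Kubernetes' Horizontal Pod Autoscaler:
# desired = ceil(current_replicas * current_metric / target_metric),
# clamped to configured bounds. The min/max defaults here are illustrative.
def desired_replicas(current: int, current_cpu: float, target_cpu: float,
                     min_replicas: int = 2, max_replicas: int = 10) -> int:
    desired = math.ceil(current * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

# e.g. 4 replicas at 90% CPU with a 60% target scale up to 6 replicas
```

Careless bounds or targets here are exactly the "lifecycle configuration" pitfall mentioned above: a too-low target can cause constant scale-up, and a too-tight max can starve the service under load.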
Containers in a microservices platform can be terminated at any time. Thus, the logs of the application running inside any container should be shipped to a common repository. There must be dashboards that facilitate viewing the logs of each service component. Further, monitoring must be added to generate alerts if the system behaves abnormally. Cost, application performance, and network metrics all need to be tracked carefully.
Needless to say, logging and monitoring help troubleshoot issues. When a transaction is made in the system, a trace of what happens to that transaction must be available. In a scenario involving multiple service calls (i.e., service chaining), if a transaction does not make it through the system, support personnel must be able to identify the point of failure.
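Tracing a transaction across chained services usually works by propagating a correlation ID: the first service generates it, and every downstream service forwards the same ID and includes it in its logs. The header name and log format below are assumptions for this sketch.

```python
import uuid

# Illustrative correlation-ID propagation: each service in a chain forwards
# the same trace ID, so logs for one transaction can be stitched together.
# The header name and log line format are assumptions for this sketch.
TRACE_HEADER = "X-Correlation-ID"

def handle_request(headers: dict, service: str, log: list) -> dict:
    # reuse the incoming ID, or mint one at the edge of the system
    trace_id = headers.get(TRACE_HEADER) or str(uuid.uuid4())
    log.append(f"service={service} trace_id={trace_id}")
    # forward the same ID to the next service in the chain
    return {**headers, TRACE_HEADER: trace_id}

log = []
h = handle_request({}, "gateway", log)
h = handle_request(h, "orders", log)
h = handle_request(h, "payments", log)
# all three log lines carry the same trace_id
```

Searching the central log repository for that single ID then shows exactly how far a failed transaction progressed through the chain.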
Now, let us review a new microservices management platform, Choreo.
Choreo is an internal developer platform designed to accelerate the creation of digital experiences. Build, deploy, monitor, and manage your cloud native applications with ease while improving developer productivity and focusing on innovation. The following is a list of capabilities provided by Choreo at the time of writing:
As shown in Figure 5, Choreo offers a wide range of features compared with a typical microservices management platform. With Choreo, developers can implement core services, integration services, in-house services, or other SaaS services, and can test and deploy them as containers on the Kubernetes platform. The necessary service discovery and management features are provided by the platform. If you are a code enthusiast, you can learn the Ballerina language and write your first integration.
Try out Choreo today!