
Building a Cloud-Native API Management Architecture With CI/CD

  • By Pubudu Gunatilaka
  • 14 Jul, 2020

Executive Summary

Moving from a centralized API management architecture to a decentralized one is essential for modern consumers and businesses. This article explores the following in detail:

  • Traditional centralized architecture and its limitations
  • Decentralized architecture and its key advantages
  • How to handle Ingress gateway traffic routing
  • Grouping APIs to build a CI/CD flow
  • Scaling and management of APIs

Introduction

Consumers play a key role in today’s businesses globally. Modern businesses go to great lengths to become more innovative so that they can meet the expectations of increasingly technologically sophisticated consumers. With the rise of social media, consumers are more aware of their options, and the real competition is not simply between businesses but in meeting customer needs. In this context, traditional centralized enterprise architectures no longer meet the expectations of either businesses or consumers. Let’s look at the typical API management story as an example.

Many API management solutions today follow a familiar pattern: multiple service implementations, all fronted by a centralized gateway. An API Manager control plane handles access token generation, access token validation, traffic management, etc. If your API management platform has hundreds of APIs, all of them are deployed in this centralized API Gateway. If you scale the API Gateway because a single API uses more resources or receives a high volume of traffic, you effectively scale all of your APIs irrespective of their usage. This centralized API management architecture no longer meets the following requirements that arise from different consumer needs:

  • Different resource usage of the APIs
  • Different security enforcement requirements of the APIs
  • Dynamic routing to API backends
  • Message mediation and transformation
  • API shaping based on different consumers such as mobile users, desktop users, etc.
  • API response caching requirements
  • Private vs public APIs
  • API Gateway per department/unit

Decentralized API Management

This architecture pattern addresses the above requirements in a number of ways.

A decentralized approach has multiple API gateways, compared to the single gateway hosting all APIs in a centralized architecture. One API gateway can host one or more APIs based on different groupings. If an API receives a high volume of traffic, you can place it in its own API gateway; the API can then scale with its traffic without affecting other APIs. There are also cases where a set of APIs naturally belongs to the same grouping. For example, APIs related to an online shopping store can reside in a single API Gateway.

The traditional API Gateway used in centralized API management has limitations, such as slow startup time and high resource usage, that make it a poor fit for a decentralized solution. This is where we introduce the WSO2 API Microgateway, a cloud-native, developer-centric API gateway. It is a lightweight version of the API Gateway specifically designed for microservice architectures: it starts in around one second, has a very low memory footprint, and works in locked-down and isolated environments, making it very easy to scale.

Why Decentralized API Management?

Modern application development practices mandate a more decentralized approach. Due to the limitations of monolithic architectures, businesses are now moving towards a microservices architecture (MSA). Independent modules are one of the key advantages of an MSA: you can test, deploy, and run your modules separately. This improves productivity and gives development teams more autonomy to self-serve. As teams get early feedback, this opens up room for innovation and helps them bring new ideas to market. The MSA philosophy also favors decentralization in all aspects of software design.

Decentralized API Management Architecture

This is a cloud-native, container-based architecture that runs on Kubernetes. The idea is to utilize the key features that container orchestration systems provide, such as autoscaling, self-healing, and workload management.

The data plane is considered the most crucial plane as it contains the API Microgateways and microservices. To consume APIs, consumers access the Ingress gateway in Kubernetes. The Ingress gateway could be NGINX, an AWS Application Load Balancer (ALB), or similar. The Ingress gateway holds the relevant API Microgateway mappings so that it routes each incoming request to the relevant API Microgateway.

Apart from the data plane, you can have the control plane and management plane. The control plane contains the Traffic Manager, which handles rate limiting, and the Key Manager, which issues and validates access tokens. The management plane contains the API Publisher for API design and lifecycle management, the API Developer Portal for API discovery, and API Analytics for API-related business insights.

Ingress Gateway Traffic Routing

A decentralized API management architecture has multiple gateways that hold different APIs. One challenge we face is how to configure the Ingress gateway so that a single endpoint can route traffic to the respective gateway. This challenge does not arise in a centralized API management architecture. In most use cases, API consumers interact with a single endpoint for API invocations.

In Kubernetes, we can add an Ingress resource for each API Microgateway. In the Ingress resource we can define rules as follows:

...
spec:
  rules:
  - host: mgw.ingress.wso2.com
    http:
      paths:
      - path: /review/v1.0
        backend:
          serviceName: review-api
          servicePort: 9095
      - path: /inventory/v1.0
        backend:
          serviceName: inventory-api
          servicePort: 9095
...

In this case, we have one microgateway for the review API and another for the inventory API. The API contexts of these APIs are /review/v1.0 and /inventory/v1.0. As the Ingress resource shows, when the context of the review API matches, the request goes to the review API Microgateway service; the same happens for inventory API requests. These rules can be added to the Ingress automatically as APIs are added to the system.
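For completeness, the rules shown earlier would normally sit inside a full Ingress resource. The following is a minimal sketch: the resource name and the NGINX ingress-class annotation are assumptions, and networking.k8s.io/v1beta1 is chosen to match the serviceName/servicePort fields used in the snippet above (newer cluster versions use networking.k8s.io/v1 with a slightly different backend schema).

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: microgateway-ingress        # illustrative name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: mgw.ingress.wso2.com
    http:
      paths:
      - path: /review/v1.0
        backend:
          serviceName: review-api       # review API Microgateway service
          servicePort: 9095
      - path: /inventory/v1.0
        backend:
          serviceName: inventory-api    # inventory API Microgateway service
          servicePort: 9095
```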

Another challenge is the grouping of APIs when deploying them in a decentralized architecture. APIs need to be grouped for deployment in gateways, and grouping can be done based on the following aspects:

By functionality

Based on different functionalities, APIs can be grouped into gateways. For example, APIs such as a pizza API, food delivery API, and restaurant API can be grouped into one gateway. In most cases, the same client talks to the same set of APIs.

By datacenter or region

In some cases, services are deployed in different datacenters or regions and there is no need to have cross datacenter or cross-region communication. API gateways can be deployed based on the datacenter or region to expose these services as APIs.

API management consists of three main phases:

  • API design phase
  • API approval and publishing phase
  • API deployment phase

The complete CI/CD workflow looks as follows:

1. API Design Phase

In the API design phase, the API creator designs the API using the WSO2 API Manager publisher portal based on requirements.

Then the API creator performs the following tasks:

  • Use apictl (the command line tool specifically designed to interact with WSO2 API Manager) to export the API from the API Publisher. Alternatively, the API creator can download the Swagger definition from the API Publisher.
  • Commit the API artifacts to a personal Git repository:

    apictl export-api -n Online-Store -v 1.0.0 -r admin -e development
    git commit -a -m "Adding OnlineStoreAPI"
    git push origin store-apis

  • Make a pull request for the API Product Manager’s approval.

Although the API creator creates the API, they don’t have permission to publish it to the gateway. This is done by someone with higher permissions, such as an API Product Manager.

GitHub Branch Structure

In the GitHub repository, we can have several branches as follows:

GitHub Branch Name    food-apis             sms-api              location-api
API Name(s)           food-delivery-api     sms-api              location-api
                      pizza-api
                      restaurant-api
Gateway               API Microgateway 1    API Microgateway 2   API Microgateway 3

Each GitHub branch corresponds to an API Microgateway. As shown above, the food-apis branch has three APIs: food-delivery-api, pizza-api, and restaurant-api. Likewise, we can have a set of GitHub branches matching the API Microgateways we deploy.

2. API Approval Phase

The API Product Manager is mainly responsible for two tasks. The first task is to review and merge the pull request that is made by the API creator. When reviewing the pull request, the API Product Manager verifies whether the API is in the correct group. The second task is to publish the API in the API Publisher. Once the API Product Manager publishes the API, it becomes available in the API Developer portal for internal/external consumers.

Before we discuss the API deployment phase, let’s look at the API Operator for Kubernetes.

API Operator for Kubernetes

The API Operator for Kubernetes makes APIs a first-class citizen in the Kubernetes ecosystem. Similar to deploying microservices, you can now use this operator to deploy APIs for individual microservices or compose several microservices into individual APIs. With this, users can expose their microservices as managed APIs in a Kubernetes environment without any additional work. The basic idea is that users provide a Swagger definition to Kubernetes. The API Operator then deploys an API Microgateway for the given Swagger definition. It also deploys Ingress resources for traffic routing and a horizontal pod autoscaling policy that scales the API Microgateway based on CPU usage. You can find the API Operator on OperatorHub.
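As an illustration, an API custom resource handled by the operator might look like the following sketch. The field names here are assumptions based on the operator's v1alpha1 CRD and should be verified against the operator version you install; the resource and ConfigMap names are illustrative.

```yaml
apiVersion: wso2.com/v1alpha1
kind: API
metadata:
  name: online-store              # illustrative API name
spec:
  definition:
    swaggerConfigmapNames:
      - online-store-swagger      # ConfigMap holding the Swagger/OpenAPI definition
  mode: privateJet                # deployment mode (see the deployment options below)
  replicas: 1
```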

3. API Deployment Phase

The deployment phase is completely automated and is designed by integrating GitHub with Jenkins. We can create a webhook on GitHub pointing to the Jenkins pipeline. Once the API Product Manager merges the pull request, the Jenkins pipeline is triggered and takes the following path:

  • Creates a GitHub release in the respective branch (e.g., food-apis-v1.0.0). This is done to track the changes taking place in the pipeline.
  • A Jenkins job is triggered for the food-apis-v1.0.0 release.
  • Using apictl, the job adds the API to Kubernetes via the API Operator for Kubernetes. If the API already exists in Kubernetes, it is updated instead.
  • The API Operator deploys the API Microgateway and the relevant Ingress resources for traffic routing.

Scaling and Management of APIs

A decentralized API management architecture has three main deployment options for APIs:

  • Private Jet mode
  • Sidecar mode
  • Shared mode

1. Private Jet Mode

In private jet mode, the API has a dedicated API Microgateway, and the API’s backend microservices run separately. Scaling happens at the pod level in Kubernetes, so the API and its backend microservices can be scaled separately based on load.
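Because the microgateway runs as its own deployment in private jet mode, it can get its own autoscaling policy. A minimal HorizontalPodAutoscaler sketch is shown below; the deployment name and the CPU threshold are illustrative, not prescribed values.

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: review-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: review-api              # the API Microgateway deployment (assumed name)
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75   # scale out when average CPU exceeds 75%
```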

2. Sidecar Mode

In sidecar mode, both the API Microgateway container and the microservice container reside in a single pod in Kubernetes. This pattern avoids the additional external network hop required in the private jet pattern, since the gateway and the service communicate over the pod’s local network. There are cases where a particular backend microservice receives a high volume of traffic, and you have to scale the API gateway and the backend microservice together to handle the load; sidecar mode fulfills this requirement, since they scale as one pod.
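The two-containers-in-one-pod arrangement can be sketched as the pod spec below. The image names and ports are assumptions for illustration; in practice this would be wrapped in a Deployment rather than a bare Pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inventory
spec:
  containers:
  - name: inventory-service           # backend microservice container
    image: example/inventory:1.0.0    # illustrative image name
    ports:
    - containerPort: 8080
  - name: api-microgateway            # gateway sidecar in the same pod
    image: wso2/wso2micro-gw:3.1.0    # verify the tag against your setup
    ports:
    - containerPort: 9095             # gateway talks to the service over localhost
```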

3. Shared Mode

In shared mode, you can deploy multiple APIs in a single microgateway, while the backend microservices run separately. When the pod scales, all APIs deployed in that API Microgateway scale together.

Summary

This article discusses centralized API management and its limitations, and how a decentralized API management architecture solves them. Ingress traffic routing, grouping of APIs, and scaling and management of APIs are the three main things to consider in a decentralized API management architecture. We can group APIs based on functionality or based on datacenters or regions. Private jet mode, sidecar mode, and shared mode are the three main deployment options for APIs. Learn more about our API management solution here.

About Author

  • Pubudu Gunatilaka
  • Technical Lead
  • WSO2