5 Sep, 2019 | 3 min read

Blue-green Deployments and Canary Releases for WSO2 API Microgateway in Kubernetes

  • Pubudu Gunatilaka
  • Senior Technical Lead - WSO2

Image credits: Aron Visuals on Unsplash

Kubernetes is a production-grade container management system that allows you to deploy, scale, and manage your applications. WSO2 API Microgateway is a container-native microgateway designed specifically for your microservices. In this blog, I discuss the following topics:

  • How to deploy WSO2 API Microgateway in Kubernetes
  • Canary releases for WSO2 API Microgateway
  • Blue-green deployment for WSO2 API Microgateway

Deploying WSO2 API Microgateway in Kubernetes

We have recently released WSO2 API Microgateway v3.0.1 and you can download it here. You need to download both the runtime and the toolkit. The runtime is the microgateway and the toolkit is used to generate runtime executables.

Step 1: Setup the project

Unzip the toolkit and add its bin directory to your PATH as shown below (replace <TOOLKIT_HOME> with the path to the extracted toolkit).

export PATH=$PATH:<TOOLKIT_HOME>/bin

Step 2: Initialize the project

Let’s create your project first.

micro-gw init k8s_project

Step 3: Add an Open API definition to the project

In order to expose an API in WSO2 API Microgateway, you need to provide the OpenAPI definition for your API. A sample definition can be found here.
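If you prefer not to download the sample, a minimal OpenAPI definition along these lines is enough to start with. The backend URL and schema details here are illustrative assumptions, not taken from the sample:

```yaml
openapi: "3.0.0"
info:
  title: Petstore
  version: "v1"
servers:
  - url: https://petstore.swagger.io/v2
paths:
  /pet/{petId}:
    get:
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
            format: int64
      responses:
        "200":
          description: Pet details returned successfully
```

Copy the file into the directory that micro-gw init created for API definitions in the project before building.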

Step 4: Create a deployment.toml for K8s

To deploy in K8s, we need to specify the K8s artifacts we require for the microgateway. Create a file called deployment.toml and add the following content.

    [kubernetes.kubernetesDeployment]
    enable = true
    image = 'pubudu/petstore:v1.0'

    [kubernetes.kubernetesService]
    enable = true
    serviceType = 'NodePort'

    [kubernetes.kubernetesConfigMap]
    enable = true
    ballerinaConf = '<path-to-micro-gw.conf>'

In this configuration, we provide an image name for the docker image. We also need to specifically mention the K8s service type as NodePort. We have to mount the micro-gw.conf as a configmap to the deployment. For the ballerinaConf value, we have to specify the location of the micro-gw.conf. The micro-gw.conf resides in the resources directory of the WSO2 API Microgateway toolkit.

Step 5: Build the project

You can build the project as follows.

micro-gw build k8s_project --deployment-config <path-to-deployment.toml>
Sample Command:
micro-gw build k8s_project --deployment-config deployment.toml

When you build the project, it creates the following artifacts:

  1. A Kubernetes Service and Deployment for WSO2 API Microgateway
  2. A Kubernetes configmap which holds the micro-gw.conf of WSO2 API Microgateway
  3. A Docker image with the name given above (pubudu/petstore:v1.0)

Step 6: Push the Docker image to the registry

For deploying WSO2 API Microgateway in K8s, you need to make sure your Docker image is available to the K8s cluster. You can push the Docker image to Docker Hub as follows:

docker login
docker push pubudu/petstore:v1.0

Step 7: Deploy WSO2 API Microgateway in Kubernetes

When you build the project, you can see a sample command to deploy your artifacts in Kubernetes using kubectl or Helm. Let’s deploy WSO2 API Microgateway in K8s as follows.

kubectl apply -f k8s_project/target/kubernetes/k8s_project/

This will deploy 3 services, 1 deployment, and 1 configmap for WSO2 API Microgateway. In the default configuration, it runs only a single gateway pod. You can scale the replicas based on your requirements.
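As a sketch, scaling the gateway replicas is a single kubectl command. This assumes the generated deployment carries the project name k8s_project and the app: k8s_project label used later in this post:

```shell
# Scale the microgateway from 1 pod to 3
kubectl scale deployment k8s_project --replicas=3

# Watch the new gateway pods come up
kubectl get pods -l app=k8s_project
```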

Step 8: Invoke the API

In order to invoke the API, we require a JWT token as the API is protected with JWT. You can use the sample JWT below which is taken from WSO2 API Manager.


You can get the gateway URL by checking on the services deployed in K8s.

kubectl get svc
Sample output: (service listing omitted; it shows the node port assigned to each gateway port)
We have used the NodePort service type in K8s. In this case, you can use any K8s node IP as the gateway IP. In Docker Desktop, the gateway host will be localhost. Based on the above service output, the node port that maps to the gateway port 9095 is 32497. You can invoke the API as shown below:

curl -X GET "https://localhost:32497/petstore/v1/pet/1" \
 -H "accept: application/xml" -H "Authorization:Bearer $TOKEN" -k

Need for Updates...

There can be cases where you need to apply WUM updates (patches), configuration changes, critical security updates, etc. There can also be cases where you want to update an API which is already deployed. WSO2 API Microgateway is immutable, so if you want to make any changes, you need to create the Docker image again and deploy the newly created image. There are two approaches we can select from when updating or upgrading WSO2 API Microgateway: canary releases and blue-green deployments.

Canary releases for WSO2 API Microgateway

The canary release is a technique that is used to reduce the risk of switching the production traffic to a completely new deployment by gradually rolling out the changes to a small set of users, before rolling it out to all users.

When applying the canary release strategy to WSO2 API Microgateway, you must first have the live system running and then bring up the new deployment. Let's say you have 4 pods running in the live system and 1 pod running in the new deployment. In the Kubernetes service, we update the selector label so that it selects not only the live traffic pods but also the pods in the new deployment. With this change, 20% of the live traffic now routes to the new deployment; this ratio is governed by the number of pods running in the live system and the new deployment. Once you are satisfied with the new deployment, you can increase the pod count in the new deployment and reduce the pod count in the live system. Eventually, all the traffic routes to the new deployment.
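The traffic split above follows directly from the pod counts, since the Kubernetes service load-balances evenly across all matching pods. A quick sketch of the arithmetic:

```shell
live_pods=4
canary_pods=1

# The service balances across all matching pods, so the canary's
# share of traffic is simply its fraction of the total pod count.
total=$((live_pods + canary_pods))
canary_share=$((100 * canary_pods / total))
echo "${canary_share}% of traffic routes to the canary deployment"
```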

When applying canary releases for WSO2 API Microgateway, there are several ways to achieve this. I will show one way of getting it done.

1. Update the labels of the deployment

Open the k8s_project.yaml file in the /Users/pubudu/wso2am-micro-gw-toolkit-3.0.1/bin/k8s_project/target/kubernetes/k8s_project/ directory.

If you check the Deployment resource, you can see that labels are defined as app: k8s_project under the pod template. We should add a new label called release: canary as follows:

      annotations: {}
      finalizers: []
      labels:
        app: k8s_project
        release: canary
      ownerReferences: []

As we did in step 7, we can use kubectl to apply these changes to the existing artifacts. Kubernetes will perform a rolling update, and we can smooth this behavior by adding readiness and liveness probes to the deployment. That guarantees zero downtime for the running APIs.
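For example, probes along these lines could be added to the gateway container spec. The HTTPS port 9095 is taken from the gateway setup above; the timing values are illustrative assumptions you should tune for your environment:

```yaml
        readinessProbe:
          tcpSocket:
            port: 9095
          initialDelaySeconds: 10
          periodSeconds: 5
        livenessProbe:
          tcpSocket:
            port: 9095
          initialDelaySeconds: 30
          periodSeconds: 10
```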

2. Deploy the new version in Kubernetes

You can start from step 2 where you create a new project with a different name or use the same project with some modifications. In step 4, you need to use the following deployment.toml file.

    [kubernetes.kubernetesDeployment]
    enable = true
    image = 'pubudu/petstore:v1.1'

    [kubernetes.kubernetesConfigMap]
    enable = true
    ballerinaConf = '<path-to-micro-gw.conf>'

If you look at the definition, this only creates the configmap and the deployment for the new deployment. This will skip creating the services for the new deployment.

In step 5, once you build the project, it creates the relevant K8s artifacts. Here, you also need to update the labels of the pod template for the new deployment as we did in the previous step.

      annotations: {}
      finalizers: []
      labels:
        app: k8s_project_up1
        release: canary
      ownerReferences: []

You can then follow steps 2 through 7 to deploy the new version.

3. Update the services to route traffic to the new deployment pods

In the system, you now have the previous pods as well as the new pods. However, the new pods are not attached to the services, so traffic will not route to them. To fix that, you can edit the 3 services using the following command and add release: canary as the selector label. You should also remove the existing selector label app: k8s_project.

kubectl edit svc <service-name>

    selector:
      release: canary

Once you update the selector label in the services, this will start routing traffic to new pods and the traffic ratio depends on the number of pods in the new deployment and previous deployment. Eventually, you can increase the pods in the new deployment and scale down the pod count in the previous one.
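If you prefer a scriptable alternative to kubectl edit, a strategic one-step option is a JSON merge patch. The service name is a placeholder here; note that app is set to null, which removes the old selector key under merge-patch semantics:

```shell
# Replace the old app selector with the canary release selector.
# Setting "app" to null removes that key from the selector map.
kubectl patch svc <service-name> --type merge \
  -p '{"spec":{"selector":{"app":null,"release":"canary"}}}'
```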

Blue-green Deployment for WSO2 API Microgateway

In this technique, you have two identical environments running as blue and green. At a given time only one environment is live and serves the production traffic.

Let’s look at how we can do the blue-green deployment for WSO2 API Microgateway.

1. Make the changes required to the Swagger file and build the project

You only need to change the project name and provide a different image name for the Docker image. Once you build the artifacts as we did in the initial deployment, this will generate K8s artifacts for the new project including the K8s deployment, K8s service, and K8s configmap. Unlike the canary release, we need a separate K8s service in this case.

2. Deploy K8s artifacts

Once you deploy the K8s artifacts, you have both deployments running on K8s, but the load balancer is still configured to route production traffic to the previous deployment.

3. Point load balancer to the new deployment

When the load balancer is pointed to the new deployment, it routes the production traffic to the new deployment.
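In Kubernetes, this load balancer switch can be modeled as a selector change on a front-facing service. A sketch, assuming a service named gateway-lb and deployments labeled version: blue and version: green (both names are hypothetical):

```shell
# Route production traffic to the green deployment by
# repointing the front service's selector.
kubectl patch svc gateway-lb --type merge \
  -p '{"spec":{"selector":{"version":"green"}}}'

# Roll back to blue instantly if something goes wrong.
kubectl patch svc gateway-lb --type merge \
  -p '{"spec":{"selector":{"version":"blue"}}}'
```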


In this blog, I explained how you can easily deploy WSO2 API Microgateway in Kubernetes using the microgateway toolkit. When you need to push updates to WSO2 API Microgateway, there are two approaches to consider: the canary release technique and the blue-green technique. You can go with the canary release technique if you are not completely certain that the new version will function correctly in production. With this technique, you can allow internal users, users in specific geographical regions, or users selected by source IP to experiment with the new version of the application. In the blue-green technique, as we route all the traffic to the new version at once, it is expected that the new version has been tested in a testing environment and functions properly in production. In both techniques, you can always revert the changes in case of a failure.

Learn more about WSO2 API Microgateway here.