3 Mar, 2020

Deploying WSO2 Micro Integrator on Kubernetes

  • Hasitha Abeykoon
  • Technical Lead - WSO2

Introduction

WSO2 Micro Integrator is an open-source, cloud-native integration framework with a graphical drag-and-drop integration flow designer and a configuration-based runtime for integrating APIs, services, data, SaaS applications, and proprietary/legacy systems. It offers different deployment choices, including Kubernetes, a popular container orchestration platform originally developed by Google.

In this document, we discuss how to implement a Micro Integrator-based deployment on Kubernetes. WSO2 has developed an operator to perform the Kubernetes-based deployment. However, it is always better to know how to perform the deployment using a vanilla approach, because it gives much more flexibility, and enables developers to leverage the Kubernetes platform’s full capabilities.

This document can be followed to set up a WSO2 Micro Integrator-based deployment on top of any Kubernetes platform, as we do not use any feature native to a particular cloud provider. Hence, it is possible to follow the same document to set up a WSO2 Micro Integrator-based deployment not only on-premises but also on top of managed Kubernetes services such as Amazon EKS, Google GKE, or Microsoft AKS.

Prerequisites

To follow the article, you need to have the following installed. Please use the latest versions of each software component.

  • Docker
  • Kubernetes
  • WSO2 Integration Studio
  • Apache Maven

Creating an image with integration logic

The first step is to create an immutable Docker image based on WSO2 Micro Integrator that includes the integration logic it is supposed to run. There can be several WSO2 Micro Integrator-based images in a particular deployment, each bearing a different segment of the integration logic. Furthermore, if a customer wants process isolation for different business units (i.e., tenants), the integration logic specific to each business unit can be built into its own Docker image and deployed as containers in the same Kubernetes deployment. The advantage of segmenting the logic this way is that each segment can be scaled independently. For example, if the integration logic of one business unit is resource hungry, you can run more containers of its image, while another business unit's logic may need only a few. Independent teams can also develop integration logic in parallel once they agree on the communication APIs, and versions of each business unit's integration logic can be released independently.

Let’s see how to create a particular image based on WSO2 MI and deploy it onto Kubernetes.

How to develop integration logic and add that to an image 

WSO2 Integration Studio is the tool for developing integration logic for WSO2 Micro Integrator. It has a rich interface where users can quickly drag and drop mediators and configure the integration logic. Once the logic is completed and tested, it can be exported as a Carbon application (CAR file).

Figure 1: Integration Studio - Design view

The carbon application can be packaged into the WSO2 Micro Integrator-based Docker image when building the image as below (included in the Dockerfile).

FROM docker.wso2.com/wso2mi:latest
COPY doctorAppointmentCompositeApplication_1.0.0.car /home/wso2carbon/wso2mi-1.1.0/repository/deployment/server/carbonapps

NOTE: If you do not have a WSO2 subscription, use the WSO2 Micro Integrator image from the public Docker Hub. Also note that when you copy resources to a folder that does not exist in the base image, you need to grant the necessary ownership using the COPY flag --chown=wso2carbon:wso2.
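For example, a minimal sketch of such a COPY instruction (assuming the public wso2/wso2mi image and a hypothetical custom-resources folder) could look like this:

FROM wso2/wso2mi:latest
# the destination folder does not exist in the base image, so grant ownership
# to the wso2carbon user and the wso2 group while copying
COPY --chown=wso2carbon:wso2 custom-resources/ /home/wso2carbon/custom-resources/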

Create a base image with common configurations

When developers create different WSO2 Micro Integrator-based images containing different integration logic, it is better to first create a base image with the common configurations and common integration logic, and then build the other images on top of that base image. The advantage is that if a common configuration needs to change in an MI deployment with multiple images, developers only need to update the base image and rebuild the derived images.

Figure 2: Docker image hierarchy
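As a rough sketch of this hierarchy (the image and file names below are illustrative), the base image can carry the common configuration, and each business-unit image is then built from it:

# Dockerfile of the base image: common configuration shared by all MI images
FROM docker.wso2.com/wso2mi:latest
COPY deployment.toml /home/wso2carbon/wso2mi-1.1.0/conf/

# Dockerfile of a business-unit image, built on top of the base image
FROM myregistry/wso2mi-base:latest
COPY doctorAppointmentCompositeApplication_1.0.0.car /home/wso2carbon/wso2mi-1.1.0/repository/deployment/server/carbonapps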

NOTE: If you wish to use WSO2 Secure Vault to encrypt passwords in configuration files, it is better to apply it to the base image. You can follow this post as a guide.

Defining a probe for WSO2 Micro Integrator

When deploying WSO2 Micro Integrator on Kubernetes, it needs liveness and readiness probes. Until WSO2 ships an API that can be used for this, developers can define a simple API themselves. This is common integration logic for all the WSO2 Micro Integrator-based images in the deployment; hence, the API used as the probe should be included in the base image as described above.

Figure 3: Liveness and readiness probes for Micro Integrator

Please use the API here as the liveness and readiness probe. You can export it as a CAR application and add it to the Micro Integrator base image.

Note: In the Micro Integrator runtime, CAPPs are deployed in the alphabetical order of their names. If you name the CAR application containing the readiness probe to start with ‘z’, for example, it will be deployed after all the real services are deployed and available. This is a temporary workaround until this feature ships with WSO2 MI.
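For reference, a minimal health-check API of this kind, expressed as Synapse configuration, could look like the following (the API name, context, and payload are illustrative; the /healthz context matches the probe paths used in the deployment descriptor later in this article):

<api xmlns="http://ws.apache.org/ns/synapse" name="zHealthCheckAPI" context="/healthz">
    <resource methods="GET">
        <inSequence>
            <payloadFactory media-type="json">
                <format>{"status":"UP"}</format>
                <args/>
            </payloadFactory>
            <respond/>
        </inSequence>
    </resource>
</api>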

Docker project

We need to keep the Dockerfile and all the resources required to create the Docker image together, so that we can change the resources or modify the Dockerfile and re-trigger the build whenever the image needs to be modified. For that, we keep all of the above files in one folder and call it a Docker project.

Figure 4: Docker project structure

You can use the following Docker command to build the image out of the Docker project. You need to execute it from the directory where the Dockerfile resides. Tag the image with a meaningful name.

docker build -t doctor_appointment_app .
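Optionally, before pushing the image, you can run it locally to verify that the CAR application deploys cleanly (8290 and 8253 are the default MI HTTP and HTTPS ports):

docker run -it -p 8290:8290 -p 8253:8253 doctor_appointment_app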

Committing WSO2 Micro Integrator-based images to the Docker registry

Once the Docker image is ready, the next step would be to commit the image to a repository so that it can be used for a deployment.

Docker Hub is the public registry where any Docker image can be hosted.

#First log in to Docker Hub
docker login -u "myusername" -p "mypassword" docker.io
#Then tag and push the image
docker tag doctor_appointment_app abeykoon/doctor_appointment_app_1.0.0
docker push abeykoon/doctor_appointment_app_1.0.0

However, you can also use a Docker registry set up on-premises. If you are on Amazon AWS, you can use Amazon Elastic Container Registry (ECR) as a private registry to host your Docker images.

#Log in, then tag and push the image to the ECR repository
$(aws ecr get-login --registry-ids 123456789012 --no-include-email)
docker tag doctor_appointment_app 123456789012.dkr.ecr.eu-central-1.amazonaws.com/doctor_appointment_app:1.0.0
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/doctor_appointment_app:1.0.0

Figure 5: Pushing the Docker image to the Docker registry and deploying

Deployment on Kubernetes

Once the Docker images required to set up the integration system are created and committed to the Docker registry, developers can proceed with the deployment. From here on, this article briefly discusses the deployment configurations and the related Kubernetes concepts. Please refer to the official Kubernetes documentation if you want to read more about the concepts and configurations.

When you use the Kubernetes API to create a Kubernetes object (either directly or via kubectl), the API request must include the object's specification as JSON in the request body. Most often, you provide this information to kubectl in a .yaml file, and kubectl converts it to JSON when making the API request. You can read more about this process here.

Preparation for deployment

Before creating any Kubernetes artifacts, let’s create a new namespace inside Kubernetes for our deployment. This separates the deployment from any other. All Kubernetes artifacts (pods, services, ingresses) will reside inside this namespace. We also need a service account, which will be used to authenticate to the Kubernetes API when the deployment is created.

Creating a namespace

$kubectl create namespace wso2mi

Creating a service account

$kubectl create serviceaccount wso2svc-account -n wso2mi

Set the current context

$kubectl config set-context $(kubectl config current-context) --namespace=wso2mi
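To verify that the active context now points at the new namespace, you can run:

$kubectl config view --minify --output 'jsonpath={..namespace}'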

Now we are ready to go ahead with the deployment. All your Kubernetes resources will be created inside the namespace you defined above.

Mounting shared volumes (optional)

Before configuring the specific containers in the deployment, we may need to set up shared resources. By nature, WSO2 Micro Integrator containers are immutable and operate independently of each other; they do not share information with each other. Hence, there is normally no requirement to set up shared databases or file systems between the containers (pods).

However, sometimes we will come across integration scenarios where we need to set up a container that would keep listening to a file location that is accessible from an external system. Once that external system places a file in that location, the container has to pick it up and process it. This processing may include integration points such as dumping data into message queues, data transformation, and invoking services.

In simple terms, if we have such scenarios, we can mount an external volume into containers in our Kubernetes deployment to fulfill the above requirements. In order to do that, we need to define and create PersistentVolume and PersistentVolumeClaim objects.

Figure 6: Design view—configuring persistent volumes and claims

Define a persistent-volumes.yaml file as below. You can point it to a directory on an NFS server or, if you are on Amazon AWS, to an Amazon EFS location.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: integrator-shared-deployment-pv
  labels:
    purpose: integrator-shared-deployment
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: ""
    path: ""

Execute the command below to create the Kubernetes PV object.

$kubectl create -f persistent-volumes.yaml

Now let’s create a PV claim for the volume. Define a file named integrator-volume-claims.yaml with the following content.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: integrator-shared-deployment-volume-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""
  selector:
    matchLabels:
      purpose: integrator-shared-deployment

To create the PV claim object in the deployment, execute the command below.

$kubectl create -f integrator-volume-claims.yaml
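Before moving on, you can check that the claim has been bound to the volume (the STATUS column should read Bound):

$kubectl get pv,pvc -n wso2mi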

We can use these objects when the volume is mounted onto different containers in the K8S deployment. Now, it is time to start creating containers with the images we created in the above sections.

Creating pods with containers

The smallest deployable unit in Kubernetes is a pod. A pod can contain one or more containers inside it. If more than one container exists in a pod, they are usually co-located and co-scheduled, and run in a shared context (e.g., sidecars). In our example, we will run the Docker image tagged as doctor_appointment_app_1.0.0 as a single container in a pod.

“In general, users shouldn’t need to create Pods directly. They should almost always use controllers for example, Deployments. Controllers provide self-healing with a cluster scope, as well as replication and rollout management.”

In order to create a deployment with containers out of the image, define the following wso2mi-deployment.yaml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: doctor-appointment-app-deployment
spec:
  selector:
    matchLabels:
      deployment: doctor-appointment-app
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        deployment: doctor-appointment-app
    spec:
      containers:
      - name: doctor-appointment-app
        image: abeykoon/doctor_appointment_app_1.0.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8290
          initialDelaySeconds: 15
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8290
          initialDelaySeconds: 15
          periodSeconds: 5
          timeoutSeconds: 4
        imagePullPolicy: Always
        ports:
        - containerPort: 8290
          protocol: TCP
        - containerPort: 8253
          protocol: TCP
        volumeMounts:
        - name: shared-pd
          # hypothetical sub-directory for the shared file location; avoid mounting over
          # /home/wso2carbon itself, since the MI installation lives under that directory
          mountPath: /home/wso2carbon/file-inbound
      serviceAccountName: "wso2svc-account"
      volumes:
      - name: shared-pd
        persistentVolumeClaim:
          claimName: integrator-shared-deployment-volume-claim
  • Note how the liveness and readiness probes are configured.
  • Note the ports that are exposed.
    • 8290 - port listening for HTTP traffic
    • 8253 - port listening for HTTPS traffic
  • We have defined only one replica of the container in the above deployment. We can have many replicas and load balance traffic between them (horizontal scaling).
  • We can set resource requests and limits on the container. Note that hitting a resource limit does not make Kubernetes spawn new replicas by itself; to scale automatically based on CPU or memory utilization, configure a Horizontal Pod Autoscaler (a minimal example is shown after this list). Please read more about it here.
spec:
  containers:
  - name: 
    image: 
    resources:
      requests:
        memory: "200Mi"
        cpu: "500m"
      limits:
        memory: "1000Mi"
        cpu: "1500m"
  • We can pass environment variables to the containers. WSO2 Micro Integrator supports certain configurations passed as environment variables. Please refer to the WSO2 documentation here to see which configurations are supported.
spec:
  containers:
  - name: 
    image: 
    env:
    - name: KEY1
      value: "value1"
    - name: KEY2
      value: "value2"
  • In this integration scenario, there is a File Inbound Proxy that polls and reads files from a known folder location. An external system places files into a shared folder, and WSO2 Micro Integrator is supposed to read and process those files. To do that, the container needs a volume mount to the shared location. Kubernetes supports a variety of file systems, such as NFS and Amazon EFS; you can read more here. Please note how volume mounting is done in the above configuration using the PV and PVC we defined in earlier sections.
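As referenced in the list above, a minimal Horizontal Pod Autoscaler for this deployment could look like the following. It assumes resource requests are set on the container and that the metrics server is available; the replica bounds and the CPU target are illustrative.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: doctor-appointment-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: doctor-appointment-app-deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 75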

To create the deployment, please use the following command.

$kubectl create -f wso2mi-deployment.yaml

In this section, we discussed how to perform a WSO2 Micro Integrator deployment on Kubernetes. However, the running MI container is still not accessible from the outside world, nor is it reachable from other pods through a stable address. In the next section, we will look at how to make the WSO2 Micro Integrator pods accessible from outside.

Creating services to expose WSO2 Micro Integrator containers

A Service in Kubernetes is an abstraction that defines a logical set of Pods and a policy by which to access them. The set of Pods targeted by a Service is usually determined by a label selector. Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. There are different Service types, such as ClusterIP, NodePort, LoadBalancer, and ExternalName, and Ingress is a related but separate resource. The following article on Medium gives a good brief on which type to choose and when.

The following integrator-gateway-service.yaml file defines the service we are going to create to expose the pod.

apiVersion: v1
kind: Service
metadata:
  name: doctor-appointment-app-gateway-service
spec:
  selector:
    deployment: doctor-appointment-app
  ports:
  - name: pass-through-http
    port: 8290
    targetPort: 8290
    protocol: TCP
  - name: pass-through-https
    port: 8253
    targetPort: 8253
    protocol: TCP

Please note how the selector matches the label of the WSO2 Micro Integrator-based deployment we created in the earlier step (doctor-appointment-app). We expose the same ports at the service layer as well, mapping them to the ports exposed by the pods.

To deploy the service, please execute the command below.

$kubectl create -f integrator-gateway-service.yaml

The default Kubernetes Service type is ClusterIP. It gives you a service inside your cluster that other apps inside your cluster can access; there is no external access. However, you can reach the service through the Kubernetes API proxy (kubectl proxy).

$kubectl proxy --port=8080
http://localhost:8080/api/v1/namespaces/<NAMESPACE>/services/<SERVICE-NAME>:<PORT-NAME>/proxy/

You can use this method to access management APIs of WSO2 Micro Integrator-based pods.

Now, it is time to give access to the service we deployed from the outside world.

Giving external access to the services

If you need to give access to your service from outside, the standard way of doing that is by defining a Kubernetes Service of type LoadBalancer. For example, on GKE (Google Kubernetes Engine), this will spin up a network load balancer that gives you a single IP address that forwards all traffic to your service.
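For illustration, exposing the same pods directly through a LoadBalancer-type Service would look roughly like this (we do not use this approach in the rest of the article):

apiVersion: v1
kind: Service
metadata:
  name: doctor-appointment-app-lb
spec:
  type: LoadBalancer
  selector:
    deployment: doctor-appointment-app
  ports:
  - port: 8290
    targetPort: 8290
    protocol: TCP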

However, this method is not very cost-effective. Each service you expose with a LoadBalancer gets its own IP address, and you have to pay for a load balancer per exposed service. If there are a lot of services in the deployment, the cost will be high. That is where the Ingress controller (which is not a Service type) comes to our rescue.

Defining an ingress controller

An Ingress sits in front of multiple services and acts as a “smart router” or entry point into your cluster. It exposes HTTP and HTTPS routes from outside the cluster to services within the cluster, and traffic routing is controlled by rules defined on the Ingress resource. Thus, one load balancer is enough to forward traffic to the different services in the Kubernetes deployment. You must have an ingress controller to satisfy an Ingress; the usual choice is ingress-nginx. The linked document explains how to install NGINX ingress controllers on different platforms.

If you are on Amazon AWS, the AWS ALB Ingress Controller enables ingress using the AWS Application Load Balancer. You can also use an AWS NLB (layer 4) as the entry point. Here is an article from Amazon around this concept.

The following file integrator-gateway-ingress.yaml contains the definition of ingress. Please note how NGINX annotations are used.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wso2ei-scalable-integrator-gateway-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  backend:
    serviceName: other
    servicePort: 8290
  rules:
  - host: appointments.wso2.com
    http:
      paths:
      - backend:
          serviceName: doctor-appointment-app-gateway-service
          servicePort: 8290
  - host: wso2.com
    http:
      paths:
      - path: /bar/*
        backend:
          serviceName: bar
          servicePort: 8290

To deploy the ingress in the Kubernetes deployment, please use the command below.

$kubectl create -f integrator-gateway-ingress.yaml

Now, it is possible to access the proxy services and APIs exposed by the doctor-appointment-app deployment through the load balancer. External applications calling the integration system only need to use one IP/hostname, that of the load balancer; everything else is taken care of by the Kubernetes cluster.
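For example, once the ingress controller has an external address assigned, you can invoke the deployment through it as follows (the /healthz path is the probe API added earlier; any other API exposed by the CAPP works the same way):

#find the address assigned to the ingress
$kubectl get ingress -n wso2mi
#call the deployment through the ingress using the configured host name
curl -H "Host: appointments.wso2.com" http://<INGRESS-EXTERNAL-IP>/healthz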

Figure 7: Elements of a Micro Integrator deployment and message routing

Scaling, high availability and monitoring

The advantage of using a container management platform like Kubernetes is that it handles many deployment aspects automatically. With the number of replicas specified in the wso2mi-deployment.yaml file, even if a container crashes for some reason, Kubernetes spawns another container to maintain the specified number of replicas (self-healing of the deployment). Thus, high availability is maintained. Moreover, with a Horizontal Pod Autoscaler configured, Kubernetes can dynamically scale the number of containers up and down to handle the incoming traffic, using metrics such as the memory and CPU utilization of the containers.

There are certain scenarios where only one pod in the whole WSO2 Micro Integrator deployment should process the messages. For example, if multiple pods read messages from the same file location or from the same message broker topic, some coordination between the pods would be needed to prevent the same message from being processed by different pods, which kills the independent nature of the pods. In such cases, we can create a separate deployment with a single replica of the pod containing the message-polling logic; the self-healing features of Kubernetes will make sure that pod is always alive. A minimal sketch of the relevant deployment settings follows.
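The sketch below shows only the relevant part of such a deployment spec: pinning the replica count to 1 and using the Recreate update strategy makes sure two polling pods never run side by side, even during an update.

spec:
  replicas: 1          # only one polling pod at any time
  strategy:
    type: Recreate     # stop the old pod before starting the new one during updates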

There are a number of monitoring tools that provide the capability to monitor a Kubernetes deployment. Prometheus (a CNCF project), the Kubernetes Dashboard add-on for Kubernetes clusters, and Jaeger (originally developed at Uber) are some popular tools.

For centralized logging, you can use platforms such as Elasticsearch and Kibana (a data visualization dashboard for Elasticsearch). In a typical deployment, logs that are written to stdout of the pod are published to Elasticsearch. However, not all WSO2 Micro Integrator logs are written to stdout. To get them to stdout, we can run another container alongside the WSO2 Micro Integrator container (as a sidecar). That way, we can get all the logs onto a centralized logging platform like Elasticsearch.

 

Figure 8: Configuring centralized logging
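As a rough sketch (assuming the default MI 1.1.0 log location, the standard wso2carbon.log file name, and a busybox image for the sidecar), the pod spec can share the log directory between the two containers through an emptyDir volume:

      containers:
      - name: doctor-appointment-app
        image: abeykoon/doctor_appointment_app_1.0.0
        volumeMounts:
        - name: logs
          mountPath: /home/wso2carbon/wso2mi-1.1.0/repository/logs
      - name: log-streamer
        # sidecar that streams the MI log file to stdout so the cluster's log
        # collector can ship it to Elasticsearch
        image: busybox
        args: [/bin/sh, -c, 'tail -n+1 -F /logs/wso2carbon.log']
        volumeMounts:
        - name: logs
          mountPath: /logs
      volumes:
      - name: logs
        emptyDir: {}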

Another option for getting logs routed to stdout is to configure log appenders. For example, correlation logs are not written to the console by default; we can route them to the console by configuring appenders as below.

appenders = CARBON_CONSOLE, …, CORRELATION
loggers = AUDIT_LOG, SERVICE_LOGGER, trace-messages, ..., correlation

appender.CORRELATION.type = RollingFile
appender.CORRELATION.name = CORRELATION
appender.CORRELATION.fileName =${sys:carbon.home}/repository/logs/correlation.log
appender.CORRELATION.filePattern =${sys:carbon.home}/repository/logs/correlation-%i.log.gz
appender.CORRELATION.layout.type = PatternLayout
appender.CORRELATION.layout.pattern = CORRELATION - %d{yyyy-MM-dd HH:mm:ss,SSS}|%X{Correlation-ID}|%t|%m%n
appender.CORRELATION.policies.type = Policies
appender.CORRELATION.policies.size.type = SizeBasedTriggeringPolicy
appender.CORRELATION.policies.size.size=10MB
appender.CORRELATION.strategy.type = DefaultRolloverStrategy
appender.CORRELATION.strategy.max = 20
appender.CORRELATION.filter.threshold.type = ThresholdFilter
appender.CORRELATION.filter.threshold.level = INFO

logger.correlation.name = correlation
logger.correlation.level = INFO
logger.correlation.appenderRef.CORRELATION.ref = CORRELATION
logger.correlation.appenderRef.CARBON_CONSOLE.ref = CARBON_CONSOLE

Updating Deployment Through CI/CD Pipeline

There are two main scenarios where we will have to update the existing deployment.

  1. WSO2 releases new images with bug fixes and security updates, and those fixes need to be included in the deployment.
  2. The integration logic needs an update due to a change request or an updated integration contract.

It is always preferable to do frequent updates where possible, to reduce time to market and let customers take advantage of new features faster. However, with more frequent updates, the chances of negatively affecting application reliability or customer experience also increase. This is why it is essential for operations and DevOps teams to develop processes and deployment strategies that minimize risk to production customers. Kubernetes provides deployment strategies, including rolling deployments and more advanced methods such as canary releases and their variants.

In either of the above two situations, the WSO2 Micro Integrator-based images need to be rebuilt. In addition, if there is an update to the integration logic, the relevant CAPP projects need to be built first. To speed up the process of building updated artifacts and getting them deployed, it is important to automate the steps we performed above; it also helps prevent the mistakes that can happen when performing them manually. Hence, let us see how to set up a continuous integration and continuous delivery (CI/CD) pipeline.

CI/CD Pipeline

So far in the article, we discussed how to create a Kubernetes setup based on WSO2 Micro Integrator. However, in the above steps, we created the deployment manually. When continuous integration and continuous delivery are considered, all the steps we performed need to be automated.

The following diagram depicts the full CI-CD workflow suggested in this article.

Figure 9: CI/CD pipeline for a typical Micro Integrator-based deployment

There are two main parts in the complete CI/CD flow.

  1. Development of integration logic and building a Docker image
    1. Integration Studio project: contains all the integration logic. There can be multiple projects under one Maven multi-module project, or a single project. By building the project with Maven, it is possible to build the CAR applications.
    2. Docker project: contains all the artifacts and configurations needed to build a Docker image. It holds the CAR applications built by the Integration Studio project(s), other server-related configurations (e.g., deployment.toml), and any external JAR files that need to be packed into the Docker image. Using a script, Docker images can be built and tagged as per a given format, and pushed to a configured Docker registry with the same script. (Note: alternatively, you can create a Docker project using Integration Studio itself.)

All the projects will reside in a source repository such as GitHub. The above two steps, covering the two types of projects, can be combined into a single script, as sketched below. Then, in GitHub or Jenkins, we can trigger the script whenever a change is merged to the integration project or the Docker project. It builds the images with the latest change and pushes them to the registry with the tag latest.
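A rough sketch of such a script is shown below (the project folder names, the Maven module layout, and the registry name are assumptions; adapt them to your repository structure):

#!/bin/bash
set -e

# build the CAR applications from the Integration Studio multi-module project
mvn clean install -f integration-project/pom.xml

# copy the built CAR files into the Docker project
cp integration-project/*/target/*.car docker-project/

# build the Micro Integrator image and push it to the configured registry
docker build -t abeykoon/doctor_appointment_app:latest docker-project/
docker push abeykoon/doctor_appointment_app:latest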

  2. Automating the deployment

There is not much to do to automate the deployment. We can create a Kubernetes project with all the .yaml files we created and commit it to GitHub, along with a script comprising all the commands we executed in this article to create the Kubernetes objects in the deployment. Below is a sample script.

set -e

KUBERNETES_CLIENT=`which kubectl`

# methods
function echoBold () {
    echo $'\e[1m'"${1}"$'\e[0m'
}

# create a new Kubernetes Namespace
${KUBERNETES_CLIENT} create namespace wso2mi

# create a new service account in the 'wso2mi' Kubernetes Namespace
${KUBERNETES_CLIENT} create serviceaccount wso2svc-account -n wso2mi

# switch the context to the new 'wso2mi' namespace
${KUBERNETES_CLIENT} config set-context $(${KUBERNETES_CLIENT} config current-context) --namespace=wso2mi

echoBold 'Generating the PV and PVC ...'
${KUBERNETES_CLIENT} create -f persistent-volumes.yaml
${KUBERNETES_CLIENT} create -f integrator-volume-claims.yaml

echoBold 'Deploying the Kubernetes Services...'
${KUBERNETES_CLIENT} create -f integrator-gateway-service.yaml

# Integrator
echoBold 'Deploying WSO2 Micro Integrator...'
${KUBERNETES_CLIENT} create -f wso2mi-deployment.yaml
sleep 2m
echoBold 'Deploying Ingresses...'
${KUBERNETES_CLIENT} create -f integrator-gateway-ingress.yaml
echoBold 'Finished'

We can trigger the deployment script whenever we need to create the deployment from scratch. The image name to use can be loaded from a configuration file.

Needless to say, we can prepare a script to remove the whole deployment. If you dedicate this Kubernetes namespace to this deployment only, you can simply remove the namespace.

set -e

KUBECTL=`which kubectl`

# methods
function echoBold () {
    echo $'\e[1m'"${1}"$'\e[0m'
}

# persistent storage
echoBold 'Deleting persistent volume and volume claim...'
${KUBECTL} delete -f integrator-volume-claims.yaml
${KUBECTL} delete -f persistent-volumes.yaml

# WSO2 Micro Integrator
echoBold 'Deleting the WSO2 Micro Integrator deployment...'
${KUBECTL} delete -f integrator-gateway-service.yaml
${KUBECTL} delete -f wso2mi-deployment.yaml
sleep 2m

# delete the created Kubernetes Namespace
${KUBECTL} delete namespace wso2mi

# switch the context to default namespace
${KUBECTL} config set-context $(${KUBECTL} config current-context) --namespace=default

echoBold 'Finished'

Tools like Helm package your Kubernetes deployment so that you can install it directly and manage updates to it. It is an alternative choice for easy deployment of your Kubernetes applications, and the Helm package manager can be used with cloud-based Kubernetes offerings such as Amazon EKS as well. You can also consider comprehensive deployment tools like Spinnaker (by Netflix), Codefresh, or Tekton (by Google) for more complex deployments.

Conclusion

In this article, we discussed how to build a Docker image based on WSO2 Micro Integrator containing integration logic, and then how to perform a Kubernetes-based deployment using such Docker images. We also briefly covered how to monitor the Kubernetes cluster, set up centralized logging, and address high availability and scalability. In the last section, we looked at how to automate the deployment using WSO2 Integration Studio, scripting, and Apache Maven together with Jenkins and GitHub. Altogether, this article is a basic guide for anyone who wishes to deploy WSO2 Micro Integrator on Kubernetes.

 

About Author

  • Hasitha Abeykoon
  • Technical Lead
  • WSO2

Hasitha is an Associate Technical Lead at WSO2. He has been part of the Message Broker team since he joined in December 2011. He holds a B.Sc. in Computer Science & Engineering from the Department of Computer Science and Engineering, University of Moratuwa, Sri Lanka. Hasitha has been a consultant for several customers on behalf of WSO2, for products such as WSO2 ESB, WSO2 Message Broker, and WSO2 Data Services Server. He focuses his research and development on asynchronous messaging, messaging reliability, NoSQL databases, and big data concepts for performance.