WSO2 API Microgateway 3.0 is Released

The WSO2 API Manager team recently released version 3.0 of WSO2 API Microgateway. This blog takes a closer look at the key attributes of a microgateway, what has changed in the new release, the new features available, microgateway use cases, and what to expect from WSO2 API Microgateway in the future.

Key Attributes

Cloud native

  • Comes as lightweight containers (fast boot-up times, low memory footprint, and low distribution size)
  • Designed in a stateless manner
  • Isolated from underlying system/OS
  • Can be deployed on self-service, elastic, and cloud infrastructure
  • Agile DevOps and CI/CD
  • Automated capabilities for deployment
  • Developed with frameworks suited for cloud (based on Ballerina)

Developer-centric and enables the following:

  • Creating microservices
  • Defining the OpenAPI definition for microservices
  • Initiating the microgateway project from the OpenAPI definition
  • Building the microgateway project
  • Locally testing the service exposed via the microgateway


Decentralized

  • Per-API gateway, with a dedicated gateway for each service
  • Private jet gateway, with a dedicated gateway for a cluster of replicas of the same microservice
  • Sidecar gateway, deployed in the same node as the microservices
  • A gateway for a subset of APIs only, exposing several services/APIs using a single API


Immutable

  • Rebuild required if an API changes or a new resource is added
  • OpenAPI definitions should be finalized prior to deployment
  • Immutable containers
  • Immutable runtime artifacts for non-containerized runtimes


Scalable

  • Serves traffic independently (acts without a key manager thanks to self-contained tokens, local rate limiting capabilities, and local storage of analytics data)
  • Scales independently, without the need to scale other components
  • Can be scaled with the microservices when used in private jet or sidecar mode
  • Inbuilt support for container orchestration tools to manage scaling

What Has Changed in the New Release?

1. Introduction of a developer-first approach

The 2.x series of WSO2 API Microgateway depended on the WSO2 API Manager publisher portal for designing the APIs to be exposed via the microgateway. WSO2 API Microgateway 3.0 takes a developer-first approach. The API developer who designs APIs and defines their interfaces can now build a microgateway based on the interfaces of those APIs.

In this new version, the microgateway toolkit accepts a valid OpenAPI definition of the developer's services or microservices, with WSO2-specific OpenAPI extensions. The toolkit then translates this OpenAPI definition into an executable format accepted by the microgateway runtime component. Once the API developer adds the WSO2-specific OpenAPI extensions to the OpenAPI definition of the microservices, the microgateway adds quality-of-service features such as authentication, authorization, rate limiting, transformations, and analytics.

2. Separation of toolkit and runtime into two distributions

The 2.x series of WSO2 API Microgateway had a single distribution in which both the toolkit and the runtime resided. The toolkit created a runtime distribution for the user, containing all the APIs the user had added to the project. This has changed in WSO2 API Microgateway 3.0, which has two separate distributions: one for the toolkit and the other for the runtime. I’ve explained these in detail below.

  • Microgateway toolkit

The microgateway toolkit is a command line tool designed for API developers to create microgateway projects by adding OpenAPI definitions. This CLI creates a project structure for the API developer once the project is initiated. One or more OpenAPI definitions can then be copied into this newly created project. Once the project is finalized, the CLI can be used to build the project, which creates an executable file that is accepted by the microgateway runtime.
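As a sketch, the flow above might look as follows (the project and file names are illustrative, and the `api_definitions` directory name is my assumption based on the toolkit's default project layout; check the toolkit documentation for the exact structure):

```shell
# Initialize a new microgateway project (creates the project structure)
micro-gw init petstore-project

# Copy one or more OpenAPI definitions into the project
cp petstore.yaml petstore-project/api_definitions/

# Build the project; this produces an executable accepted by the runtime,
# e.g. petstore-project/target/petstore-project.balx
micro-gw build petstore-project
```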

  • Microgateway runtime

This is the component that actually serves the API requests. The runtime component cannot be run without the output created by the toolkit. It can be downloaded as a zip file or as a Docker image. When using the zip file, the executable file created by the toolkit should be provided as an input argument to the runtime's startup script. When using the Docker image, the executable file should be mounted into the Docker container.
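The two options could look roughly like this (the paths, mount point, ports, and image tag below are illustrative assumptions; consult the official documentation for the authoritative values):

```shell
# Option 1: zip distribution - pass the toolkit's output to the startup script
bash wso2am-micro-gw/bin/gateway /path/to/petstore-project.balx

# Option 2: Docker image - mount the executable into the container
docker run -d -v /path/to/target:/home/exec/ \
    -p 9090:9090 -p 9095:9095 wso2/wso2micro-gw:3.0.0
```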

You can refer to the quick start guide to expose your first API with the microgateway in a few steps.

New Features

Define per resource endpoints

In the microservices world, developers may want to expose their microservices as APIs to the outside world. The API developer defines an interface for these services using OpenAPI definitions. Several microservices can be described by a single OpenAPI definition, which defines a single API for a particular use case (for example, an online store). So, when defining the microservices in the OpenAPI definition as REST resources, users should be able to define different back ends per resource.

The OpenAPI extensions introduced by WSO2 (“x-wso2-production-endpoints”, “x-wso2-sandbox-endpoints”) enable users to define back-end services at the resource level. This way, users can logically collect their microservices into a single API and expose them via the microgateway. Refer to the documentation here.
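A sketch of a definition with per-resource back ends might look like this (the paths and service URLs are illustrative):

```yaml
paths:
  "/products":
    get:
      x-wso2-production-endpoints:
        urls:
          - https://products-service:9443/api
  "/orders":
    post:
      x-wso2-production-endpoints:
        urls:
          - https://orders-service:9443/api
```

Here, GET /products and POST /orders of the same API are routed to two different back-end services.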

HTTP2 support

WSO2 API Microgateway has been upgraded to support HTTP/2 alongside HTTP/1.1 as both the incoming and outgoing transport protocol. With HTTP/2 enabled, WSO2 API Microgateway can process requests faster and more efficiently. For more information on HTTP/2 and its benefits, refer to the HTTP/2 homepage. Both client-to-gateway and gateway-to-back-end communication over HTTP/2 are supported. Refer to the documentation here.
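If my reading of the runtime configuration is correct, enabling this is a small change in the micro-gw.conf file (the section name below is an assumption; verify it against the documentation):

```toml
# micro-gw.conf - enable HTTP/2 on the gateway listener
[http2]
  enable = true
```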

Mutual SSL based authentication

The microgateway can serve requests from trusted clients without requiring OAuth2 tokens. Once the certificates of trusted client partners are shared with the microgateway, requests from clients presenting those certificates will be served. The microgateway can enforce mutual SSL as either required or optional. If required, only requests from trusted clients are served; if optional, trusted clients are served without OAuth2 tokens, while untrusted clients (those who have not shared their public certificates) need a valid OAuth2 token.
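As a rough sketch, this is toggled in the micro-gw.conf configuration; the section and key names below are my assumption about the configuration surface, so consult the documentation for the authoritative names:

```toml
# micro-gw.conf - "require" rejects untrusted clients,
# "optional" falls back to OAuth2 token validation for them
[mutualSSLConfig]
  protocolName = "TLS"
  sslVerifyClient = "require"
```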

Config based basic authentication support

The microgateway allows users to invoke APIs with basic authentication credentials in addition to OAuth2 tokens. Basic authentication support can be defined per API using the OpenAPI definition; the microgateway supports the OpenAPI security schemes for defining basic authentication for APIs. Refer to the documentation here.
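Using the standard OpenAPI 3 security scheme syntax, this would look roughly like the following in the API definition (the scheme name is arbitrary):

```yaml
components:
  securitySchemes:
    basicAuth:
      type: http
      scheme: basic

# Apply the scheme to the whole API
security:
  - basicAuth: []
```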

Response and request schema validation

The microgateway can intercept requests and responses and validate them against the models defined in the OpenAPI definition. It stores the OpenAPI definitions added to the microgateway project and cross-checks request and response payloads against the schema models defined in those definitions. Refer to the documentation here.
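The models in question are ordinary OpenAPI schemas. For example, with a request body declared as below (an illustrative pet-store style model), a JSON payload missing the required `name` field would fail validation:

```yaml
paths:
  "/pets":
    post:
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required:
                - name
              properties:
                name:
                  type: string
                tag:
                  type: string
```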

Service discovery with ETCD

One challenge we face with a microservices architecture is that the services are dynamic. Services do not have fixed connection URLs; these change over time and are often maintained in an ETCD server as key-value pairs. Since the microgateway is immutable, it should be able to route traffic to these dynamic endpoints without a rebuild. The microgateway supports connecting to an ETCD server and resolving dynamic microservice URLs in real time. Refer to the documentation here.

Global throttling

Until now, the microgateway could only perform rate limiting locally, in memory. Each gateway maintained its own set of counters, and throttling decisions were made independently. With this new release, the microgateway can publish throttle events to the WSO2 API Manager traffic manager component and make throttling decisions based on the traffic manager's responses.

Integration with third party key managers

By default, all APIs in the microgateway are OAuth2 protected; hence, API consumers require a valid OAuth2 token in order to invoke the APIs. The microgateway supports self-contained JWT OAuth2 access tokens from any trusted key manager. In order to validate the JWT sent by the key manager, the microgateway requires the public certificate of that key manager in its trust store.
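The trusted issuer is typically declared in micro-gw.conf along the following lines (the issuer URL, audience, and alias are illustrative values, and the key names reflect my understanding of the configuration; verify them against the documentation):

```toml
# micro-gw.conf - trust a third-party key manager's JWTs
[jwtTokenConfig]
  issuer = "https://keymanager.example.com/oauth2/token"
  audience = "http://org.wso2.apimgt/gateway"
  # alias of the key manager's public certificate in the trust store
  certificateAlias = "keymanager-cert"
```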

Request and response transformations

The microgateway now has first-class support for plugging in external functions, written in Ballerina, as interceptors during the request in-flow and the response out-flow. API developers can manipulate request/response headers, the body, etc. before the request is sent to the back end or the response is returned to the client.
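A minimal request interceptor sketch, assuming the `(http:Caller, http:Request)` interceptor signature and that the function is referenced from the OpenAPI definition via the `x-wso2-request-interceptor` extension (the function and header names are illustrative):

```ballerina
import ballerina/http;

// Referenced from the OpenAPI definition as:
//   x-wso2-request-interceptor: validateRequest
public function validateRequest(http:Caller outboundEp, http:Request req) {
    // Tag the request with a custom header before it reaches the back end
    req.setHeader("X-Validated-By", "microgateway-interceptor");
}
```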

API/Resource level throttling

Earlier versions of the microgateway only supported application- and subscription-level throttling. From this version onwards, API developers can define new policies in the policy.yaml file of their project and attach them to APIs using the OpenAPI extensions.
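A sketch of such a policy is shown below; the policy name and limits are illustrative, and the exact policy.yaml layout should be checked against the documentation:

```yaml
# policy.yaml - define a custom resource-level policy
resourcePolicies:
  - 10kPerMin:
      count: 10000
      unitTime: 1
      timeUnit: min
```

The policy could then be attached to an API or resource in the OpenAPI definition via the throttling-tier extension (e.g. `x-wso2-throttling-tier: 10kPerMin`).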

JWT revocation

The microgateway self-validates the JWTs issued by the trusted key manager: it validates a JWT's signature using the public certificate of the key manager that signed it. Due to this self-validation mechanism, the microgateway would accept revoked tokens until they expire, so there needs to be a mechanism to notify the microgateway about revoked JWTs. The microgateway supports two such mechanisms:

  • Persistent notification via an ETCD server

The microgateway can connect to an ETCD server during startup and fetch all revoked tokens from it. The key manager component that issues and revokes tokens should have an extension point to add revoked tokens to the ETCD server. When revoked tokens are added with their validity period, the ETCD server automatically removes them upon expiry, preventing revoked tokens from accumulating on the ETCD server.

  • Real time notification via JMS

The microgateway can be configured to subscribe to a JMS topic to fetch details about tokens revoked at runtime. This way, the microgateway is notified about tokens that get revoked after server startup. The key manager component should have an extension point to add revoked tokens to the JMS topic.

Microgateway Deployment Use Cases

1. Monolithic centralized deployments

2. Use in a microservices architecture as a private jet or sidecar gateway

3. Exposure point for the microservices as APIs in a service mesh

What to Expect in the Near Future

  • gRPC support
  • Observability with Prometheus and Grafana
  • Cookie-based authentication for SPAs
  • CI/CD with the APIM import/export tool and publishing to WSO2 API Manager
  • Improved toolkit to fetch OpenAPI definitions from any URL
  • K8s CRDs with an enhanced dev-focused design

Learn more about WSO2 API Microgateway here.

Accessibility Requirements in WSO2 API Management

Accessibility standards for the web and internet are important because they allow people to access web resources without difficulty, irrespective of their disabilities. Many standards have been put forward to help developers implement accessibility in web-based resources. Some of these are:

  • Section 508 of the Rehabilitation Act of 1973
  • W3C Web Content Accessibility Guidelines (WCAG) 2.0 level A
  • W3C Web Content Accessibility Guidelines (WCAG) 1.0
  • W3C Authoring Tools Accessibility Guidelines (ATAG 1.0)

As a middleware vendor, we’ve often been asked by government customers and federal agencies in the United States, Europe, and other parts of the world whether we are compliant with Section 508, WCAG 1.0/2.0, and ATAG 1.0, to enable a wider audience to use our software. These standards describe the accessibility required from information technology based services and products to make them accessible to differently-abled people.

A brief explanation for those who can’t relate directly: you may ask, if our software doesn’t touch end users, why bother? Our software predominantly caters to developers and architects, and they are the people who engage with our software on a technical level. So, for instance, if a colorblind developer is using our software, complying with Section 508 ensures their disability is addressed.

What Do These Standards Say?

Section 508 states that it “requires Federal Agencies to make their ‘electronic and information technology’ accessible to people with disabilities”. However, this not only applies to Federal Agencies, but also impacts any company that conducts business with a Federal Agency like private contractors, and financial, healthcare, and legal organizations among others.

Section 508 was made part of the Rehabilitation Act of 1973 in 1998. It was revised in 2017 to include requirements for information and communication technology (ICT) in the federal sector. The guidelines were also updated to extend to the telecommunications sector, hence Section 508 was reorganized (along with Section 255) to better align with and reflect recent communication technology innovations.

Similarly, the WCAG 1.0/2.0 documentation defines how to make ‘web content’ more accessible to people with disabilities. Accessibility involves a wide range of disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities. Web “content” generally refers to the information in a web page or web application, including:

  • Natural information such as text, images, and sounds
  • Code or markup that defines structure and presentation

It is primarily intended for web content developers (page authors, site designers, etc.), web authoring tool developers, web accessibility evaluation tool developers and others who want or need a standard for web accessibility, including for mobile accessibility.

WCAG 1.0 is organized around guidelines that have checkpoints, while 2.0 is organized around four design principles of web accessibility. Conformance to the former is determined by its checkpoints, and to the latter by its success criteria.

Overall, these different accessibility standards apply in different circumstances but are designed to complement each other. Often, these standards improve over time which is why you come across different versions.

In the open source community, we are as open to individual differences as we are to heterogeneous technologies. WSO2 API Manager, being one of the key products of the WSO2 Integration Agile Platform, has widespread deployments and has also been adopted by Governments and federal organizations. Hence, we designed WSO2 API Manager to be compliant with Section 508 so that it is accessible to all sorts of intended users regardless of their differences. The following section presents how each of the guidelines has been achieved in the product.

Section 508 Compliance in WSO2 API Manager

I have presented the basic guidelines from Section 508 and indicated how WSO2 API Manager has been designed to achieve this particular requirement. In this guideline, only the applicable parts have been discussed.

Addressing the requirements of Subpart B:

Section 508 Technical Standard WSO2 API Manager Achievement (with a few examples)
Software applications and operating systems
(a) When software is designed to run on a system that has a keyboard, product functions shall be executable from a keyboard where the function itself or the result of performing a function can be discerned textually. WSO2 API Manager can be started up and executed using the command line (Windows)/ terminal (Mac), where the keyboard is the primary tool used to operate the application. Once started up, the API manager functions can be performed using the keyboard. For instance, the API design and implementation can be executed using the keyboard where you use the tab to move to each field where you want to specify API metadata. This is demonstrated in the sample API provided at the first-time launch of the product.
(b) Applications shall not disrupt or disable activated features of other products that are identified as accessibility features, where those features are developed and documented according to industry standards. Applications also shall not disrupt or disable activated features of any operating system that are identified as accessibility features where the application programming interface for those accessibility features has been documented by the manufacturer of the operating system and is available to the product developer. WSO2 API Manager is a web-based application. It does not interfere with, disable, or disrupt the operation of other applications or the operating system on which it is running, and vice versa.
(c) A well-defined on-screen indication of the current focus shall be provided that moves among interactive interface elements as the input focus changes. The focus shall be programmatically exposed so that assistive technology can track focus and focus changes.

Yes, on the startup screen, a walk-through to create an API is presented for first-time users. This shows the current focus and guides the user to the next item after completion of each step.

For experienced users, the tab key can be used to identify the current focus and navigate through the interfaces. When a text field is focused, it shows the cursor in the particular textbox and the textbox is outlined in blue. For dropdowns, the list of options is expanded and shown to the user.

(d) Sufficient information about a user interface element including the identity, operation and state of the element shall be available to assistive technology. When an image represents a program element, the information conveyed by the image must also be available in text. Yes, this is made available where necessary.
(e) When bitmap images are used to identify controls, status indicators, or other programmatic elements, the meaning assigned to those images shall be consistent throughout an application’s performance. Yes, the images used for APIs and their meanings are often consistent. The first letter of the API name is reflected in the image if it is not uploaded separately.
(f) Textual information shall be provided through operating system functions for displaying text. The minimum information that shall be made available is text content, text input caret location, and text attributes. Yes, textual information is provided through operating system functions. As a minimum, we provide:

  • Short text labels
  • Sample text that reflects the type of input required
  • Text input location identified using the cursor
  • Simple and consistent text attributes for titles, labels, validations, and descriptions across all WSO2 API Manager components
(g) Applications shall not override user selected contrast and color selections and other individual display attributes. WSO2 API Manager does not override these; it shows its default screens, which are affected only by the contrast and color selections in the user’s screen/monitor configuration.
(h) When animation is displayed, the information shall be displayable in at least one non-animated presentation mode at the option of the user. Not applicable as animations are not part of WSO2 API Manager.
(i) Color coding shall not be used as the only means of conveying information, indicating an action, prompting a response, or distinguishing a visual element. Information is conveyed in different means, such as text, color, images and other items. For instance, the action to start creating an API (on the Publisher) is indicated as a rectangular button with text that says ‘Start Creating’.
(j) When a product permits a user to adjust color and contrast settings, a variety of color selections capable of producing a range of contrast levels shall be provided. Not applicable as adjusting color and contrast settings are not part of WSO2 API Manager.
(k) Software shall not use flashing or blinking text, objects, or other elements having a flash or blink frequency greater than 2 Hz and lower than 55 Hz. Not applicable as flashing and blinking text, objects or other elements are not used in WSO2 API Manager.
(l) When electronic forms are used, the form shall allow people using assistive technology to access the information, field elements, and functionality required for completion and submission of the form, including all directions and cues. Screen magnification is available, as the interface can be zoomed in and out as preferred. Assistive technology is not built into WSO2 API Manager; however, it can work with such tools available with the OS/browser to enhance these capabilities.

The above table is based on WSO2 API Manager v2.6.0. If you do have any feedback, please email us at

Optimizing “Mean Time To Detect” (MTTD) for WSO2 Incidents

The “L1” incident is a term that both WSO2 customers and employees pay close attention to. It represents a “Catastrophic Severity Level” incident, also known in the industry as a high severity incident. Most of the time when an “L1” is observed, a WSO2 customer has experienced a substantial loss of service, placing a substantial portion of the subscriber’s revenue at risk, or business operations have been severely disrupted.

WSO2 Support portal

WSO2 support and product engineers attend to L1 incidents with high priority, providing the necessary responses within 1 hour, a workaround for mitigation within 24 hours of the incident being reported, and a resolution within 48 hours. Furthermore, these high severity issues are closely monitored by the WSO2 executive team, and when the WSO2 team recognizes a mission-critical issue, a war room is set up with a SWAT team consisting of product experts, specialists, and technical owners to ensure the resolution of the issue.

With over 15 years of experience in the integration industry, we know that some incidents may require hours, days, weeks, or months to identify the root cause. It may be a product related issue or an environment related issue. It is in the nature of middleware to receive the blame all the time, as middleware products sit between services and the consumers of those services.

Most of the time, everyone focuses on optimizing the Mean Time To Recovery (MTTR). MTTR is a metric that indicates the time it takes a system to revert to normal production following an incident. Mean Time to Resolution refers to the time taken to address the root cause, including the time taken for any proactive measures to stop the incident from recurring. There is another dimension in this big picture known as Mean Time To Detect (MTTD). This is a Key Performance Indicator (KPI) that measures how long a problem exists in the ecosystem before the appropriate parties become aware of it and take the necessary action towards resolving the issue, which includes finding the root cause. In this blog, I provide some guidelines on how you can identify high severity issues or incidents in a production system and show how you can organize your efforts to reduce MTTD by applying industry best practices.

As WSO2 helps organizations move from a monolithic architecture to a distributed microservices architecture along with cloud-native adoption, such an environment includes thousands of components interacting in complex, rapidly changing deployments over multiple tiers. Therefore, a large number of events, metrics, and data are produced on each and every node. Today’s dynamic cloud-native environments use multiple different technologies and aggregated tools. Currently, two techniques are utilized in the industry for monitoring: observability and surveillance.

If there is no proper system to monitor or observe the ecosystem, organizations will never be able to detect or resolve damaging problems speedily. Hence, I introduce the tooling, monitoring tools, KPIs, alerting mechanisms, and observability techniques that can significantly reduce MTTD.


Observability

Observability is a critical pillar for reducing MTTD. By definition, observability refers to the collection of diagnostic data across all stacks to identify and debug production problems, as well as to provide critical signals about usage to enable a highly adaptive and scalable system. Observability is primarily driven by five dimensions used to understand the environment:

  • Monitoring
  • Tracing
  • Log aggregation
  • Visualizing
  • Alerting

Let us now take a closer look at each of these dimensions.


Monitoring

A fundamental aspect of monitoring is to collect, process, aggregate, and display real-time quantitative data about an ecosystem and to measure metrics at three levels: network, machine, and application. Such monitoring produces error counts and types, processing times, memory usage, and server lifetimes.

JConsole view for CPU and memory usage

For example, the performance of a JVM-based application can simply be monitored using JConsole, which collects metrics such as CPU usage, memory usage, and the number of running threads.

Thread monitoring

However, in larger enterprises with distributed applications, it is not feasible to monitor only a single JVM or machine. Instead, Application Performance Monitoring (APM) tools should be in place to facilitate monitoring across multiple functional dimensions. For example, Dynatrace, AppDynamics, New Relic, Datadog, and Apache SkyWalking are full-fledged monitoring and analytics providers that offer APM.

WSO2 API Manager profiles and WSO2 Enterprise Integrator interaction view in AppDynamics


Tracing

Traditionally, monolithic applications employed logging frameworks to provide insight into what happened when something failed in the system. Looking at log statements with correct timestamps and context is often enough to understand or recognize the failure, and most of the information will be revealed if the logs are correctly defined during development. However, with a distributed microservices architecture, logs alone are not enough to see the big picture.

Tracing can be easily understood through the analogy of a medical angiogram. An angiogram is a technique used to find blockages in the heart by injecting an x-ray-sensitive dye, which makes blockage detection possible through dynamic x-ray snapshots as the dye moves through the blood vessels. Bottlenecks detected in this manner can be acted on directly to fix the issue, rather than searching everywhere or replacing the entire heart.

Medical angiogram

Likewise, tracing is heavily utilized in distributed software ecosystems to profile and monitor the communication or transactions between multiple systems, including networks and applications. Furthermore, tracing also helps in understanding the flow between services, providing application-level transparency. Zipkin, Jaeger, Instana, Datadog, Apache SkyWalking, and Appdash are a few examples of distributed tracing tools that support the OpenTracing specification.

Log aggregation

There are endless different varieties of logs like application logs, security logs, audit logs, access logs, and more. In a single application, the complexity of all these logs is manageable. However, in a distributed architecture, there are many applications or services working together to complete a single business functionality. For example, ordering a pizza involves checking the store availability, making the payment, placing the order, fulfilling the order, enabling tracking, shipping schedule placement, and many other activities.

In the event of an error in such a complex transaction, tracing may pinpoint the location to search for the root cause. However, if application-centric logs are distributed across different components, it will be a nightmare to find the exact issue and time taken to find the relevant logs could make the situation more critical. Therefore, having a centralized location to collect and index all the logs that belong to an enterprise is critical to ensure more efficient detection of the exact location of an issue.

Currently, there are multiple tools and software in the market to achieve log aggregation. Splunk, Sumo Logic, Elastic, and GrayLog play important roles in the log aggregation market.

As previously stated, one of the main responsibilities of log aggregators is the collection and storage of logs from multiple sources. As shown in the image below, logs are collected from different containers in the given Kubernetes pod.

Logs view in Kibana

Searching through the logs from multiple sources in one single centralized place enables an enterprise to locate the necessary information in a short amount of time, reducing MTTD.

String search in Kibana

Indexing in log aggregators allows organizations to systematically organize logs into given categories. Most of the time, logs are indexed according to the source they originate from, such as the application name, hostname, datacenter, or IP.

Indexing in Splunk for WSO2 API


Visualizing

There are tools that collect data, logs, or metrics in a centralized location. However, if the collected data and logs do not provide any meaningful information, they are not useful. Most APM tools and log aggregators provide data visualization to depict a holistic view based on the criteria provided.

For example, the host with the largest number of error messages can be identified easily with visualization.

Abusive usage of the services

Another good example is correlating two different errors that took place on separate hosts or applications; such views can be created using time series aggregation charts.

Visualization of two errors that can be correlated

Data visualization is not only restricted to errors and exceptions, but it can also be used to understand the behavioral monitoring of application users. For example, if a user over-uses an API, data visualization can help to detect abusive behavior.


Alerting

Searching through logs and data can help speed up debugging and resolving issues. But in reality, manually monitoring visualizations to detect incidents is not practical. Hence, it is critical to create automated alerts.

Common scenarios that require alerts include the sudden failure of one or more APIs, a sudden increase in the response time of one or more APIs, and a change in the pattern of API resource access. These alerts can result in an email, a phone call, an instant message, or a PagerDuty notification. It’s important to note that, when a predefined alert condition is met or violated, the necessary stakeholders need to be informed with the right amount of information rather than too much data.

Alert email from AppDynamics on Gateway Behavior


Surveillance

Collecting data in a random manner, with different views of the same random data, does not really reveal anything at all.


Real-world surveillance is used by the police or security organizations to monitor activities, and may later be used as evidence of crimes. Likewise, in software, surveillance is the targeted observation of a system to ensure that its functionality and performance do not deviate from the intended behavior.


Let’s take the example of an application that handles real-time traffic or processes large payloads and therefore tends to be memory intensive. The probability of such an application consuming too much memory is high, and if it is not properly designed and developed to handle this, it may use up too much memory and crash. Detecting these leaks or abnormal memory usage early is critical to uninterrupted service.

Memory leak detection in AppDynamics


Optimizing infrastructure to minimize the mean time taken to detect WSO2 incidents ensures that an organization has established appropriate systematic techniques that employ observability and surveillance technologies effectively, to identify incidents right away and keep the system stable.


Reducing MTTD for High Severity Incidents (Published by O’Reilly Media, Inc.)

Dapper, a Large-Scale Distributed Systems Tracing Infrastructure

Ballerina and University of Washington Hyperloop: Forward Together

What do you get when you mix some of the brightest minds from a highly accomplished school with Ballerina, a cloud native, network-distributed programming language?

Why, a Hyperloop challenger, that’s what.

I was honored to be invited to the University of Washington’s recent unveiling of their 2019 Hyperloop POD racer at the UW’s Husky Union Building on May 10, 2019. The room was full of determined, excited and busy students who are passionate about their entry for this year’s pod race event at Elon Musk’s SpaceX facility.

For those who have not been following it for the last few years, Hyperloop is an annual competition run by SpaceX in which college teams develop, test, and then run a scale Hyperloop pod.
The competition crowdsources technologies and theories on how such a system could be built, enabling a rapid transport system of the future. Ballerina and WSO2 have sponsored the UW team since before last year’s event, where the team won an innovation award from Elon Musk himself, placed first among the US entries, and finished fourth overall.

Ballerina is a new language that helps developers author integration services, deploy serverless or containerized microservices, and resist service failures with transaction resilience and chaos-ready deployment. The Ballerina Platform contains an integration programming language, a serverless and container-tuned runtime, transaction frameworks for integrating legacy apps, an API gateway, a message broker, and agile toolchain plug-ins. These mechanisms bring together control code modules delivered in different languages, enabling the UW Hyperloop control team to monitor the pod, issue shutdown commands, and analyze the runs and performance of their engine, pod, and control systems.

Speaking to the team at the event, I found they could not stop enthusing about how easy it had been to build the code modules in Ballerina, how it enabled them to quickly prototype integrations, functions, and capabilities when changes needed to be made, and how invaluable it has been to pose questions to the open source community that backs the project.

Look to the blog for more posts as we move forward with the event including a post about the final testing phases this month.

What’s New in WSO2 Open Banking v1.3.0

WSO2 Open Banking caters to global open banking requirements. The latest release, version 1.3, focuses on helping banks open up their APIs for external testing. This was brought about by the March 2019 deadline for PSD2 and Open Banking, which requires all banks operating in Europe and the UK to have a sandbox environment ready to open up their APIs securely to third-party providers (TPPs). This blog discusses the new release’s features, improvements, and changes.

How can banks have their open banking setups ready for external testing?

From a UK standpoint, the Open Banking Implementation Entity (OBIE) mandates that the UK Open Banking API specification v3.1 should be used for the March deadline. If you look at the revised roadmap, it explains what is required for each deadline based on the bank type, CMA9 or non-CMA9. The API Release Management Policy explains the API specifications that the bank needs to support based on the time it is published.

We considered the following UK standards for the new release:

Europe follows the Berlin Group NextGenPSD2 and STET specifications. However, the region does not have mandated requirements for the March deadline. The new release also includes support for the Berlin Group NextGenPSD2 specification v1.1 and the STET specification v1.4.

In addition, the release includes enhancements to security, user experience, transaction risk analysis, and fraud detection considering future requirements beyond the March deadline.

The complete release notes can be found here.

Packaging of WSO2 Open Banking

WSO2 Open Banking comes with three product packs:

  1. wso2-obam-1.3.0
  2. wso2-obkm-1.3.0
  3. wso2-obbi-1.3.0

A user with a WSO2 subscription can download the WSO2 Open Banking solution via WSO2 Update Manager (WUM). As WSO2 Open Banking is closed source, it is not publicly available to download.

The wso2-obam pack contains the API management-related components, while the wso2-obkm pack contains the key management, consent management, and authentication (identity and access management) components.

The new v1.3.0 release introduces wso2-obbi – WSO2 Open Banking Business Intelligence. The business intelligence component should be purchased to utilize transaction risk analysis, fraud detection, business intelligence, monitoring, and reporting capabilities.

The wso2ob-am-analytics and wso2ob-ei components have been removed from the open banking solution because the wso2-am-analytics and wso2-ei products offer the same functionality. Hereafter, WSO2 Open Banking will not maintain or improve those components.

A Standard Open Banking Deployment

If we consider the open banking requirements for different banks and regions, the main requirements are covered in the Open Banking API Manager and Open Banking Key Manager components. Our standard deployment contains wso2-obam and wso2-obkm, as shown in the below image.

The Open Banking API Manager component provides API management capabilities. It needs to connect with the internal banking systems to expose the required data and services to external third parties. The Open Banking Key Manager component provides API security, strong customer authentication, consent management, and user management capabilities.

How to Extend the Standard Deployment

With Analytics

A bank may want to see how its APIs are performing, provide third parties with analytics on how their applications are performing, and make decisions based on these insights to improve the whole open banking ecosystem. To do this, the bank needs to download wso2-am-analytics and integrate it with the standard deployment as shown below.

With Integrator and Workflows

When exposing internal bank data and services via APIs to third parties, open banking solutions need to integrate with the internal banking systems. If an internal bank system has its own API, we can directly map it to the external APIs. However, we may need an additional layer that can transform between different message formats and protocols. In such cases, banks can integrate the WSO2 EI Integrator Profile.
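The transformation such an integration layer performs is essentially field and format mapping. As a minimal sketch, the function below maps a hypothetical internal core-banking record to an external API payload shape — all field names here are invented for illustration and do not come from any WSO2 or Open Banking specification:

```python
def to_external_account(internal):
    """Map a hypothetical internal account record (snake_case fields,
    amounts in cents) to a hypothetical external JSON payload shape."""
    return {
        "AccountId": internal["acct_no"],
        "Currency": internal["ccy"],
        "Nickname": internal.get("alias", ""),  # optional internal field
        "Balance": {
            # Convert cents to a decimal string for the external API.
            "Amount": f'{abs(internal["bal_cents"]) / 100:.2f}',
            "CreditDebitIndicator": (
                "Credit" if internal["bal_cents"] >= 0 else "Debit"
            ),
        },
    }
```

In practice an EI Integrator mediation sequence would also handle protocol translation (e.g., SOAP or ISO 8583 to REST/JSON), not just field renaming.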

Some banks may want business workflows for certain functionality. For example, when a third party signs up with the open banking solution, the bank may want to take them through an approval process in which someone reviews all the third-party information and grants access to the exposed APIs. Another use case is introducing an approval process when a third party wants to access a production API after testing against the sandbox APIs. Where a bank wants to introduce this kind of human interaction, it can be achieved by adding the WSO2 EI Business Process Profile to the standard deployment.

Both these scenarios (integrator and business process profiles) are shown in the image below.

With Business Intelligence

Apart from the basic compliance requirements for the March deadline, there are further requirements in the Regulatory Technical Standards (RTS) and guidelines published by the European Banking Authority (EBA). Transaction risk analysis (TRA) and fraud detection get the highest priority among these requirements because once banks open up their APIs to external parties, the risk of fraud increases.

That’s why WSO2 Open Banking now provides a new component – Open Banking Business Intelligence (OBBI), which analyzes the transaction risk and identifies fraud situations. The OBBI component can be integrated with the standard deployment as shown below. If any bank has its own TRA and fraud detection systems, that can also easily be integrated with the standard deployment.

The OBBI component is built on top of the Siddhi engine, so it can also provide business insights, reporting, and monitoring capabilities. If a bank wants to build its own dashboard, that is also supported through this component.
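To make the idea of transaction risk analysis concrete, here is a rule-based risk-scoring sketch. The rules, weights, and threshold are entirely hypothetical and are not OBBI's actual logic (which runs as Siddhi streaming queries); the sketch only illustrates the kind of signal a TRA engine combines before deciding whether a payment needs strong customer authentication (SCA) under PSD2:

```python
def risk_score(txn, recent_countries):
    """Score a payment; higher means riskier. Rules are illustrative."""
    score = 0
    if txn["amount"] > 1000:                    # unusually large payment
        score += 2
    if txn["country"] not in recent_countries:  # new geography for this user
        score += 2
    if txn["hour"] < 6:                         # outside normal hours
        score += 1
    return score

def requires_sca(txn, recent_countries, threshold=3):
    """Flag payments whose risk score crosses a (hypothetical) threshold
    as requiring strong customer authentication."""
    return risk_score(txn, recent_countries) >= threshold
```

A production TRA engine evaluates these rules continuously over the event stream and adapts baselines per customer, rather than using fixed constants.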

WSO2 Open Banking is OBIE Security Conformance Certified

The OBIE has implemented a test suite to help Account Servicing Payment Service Providers (ASPSPs), TPPs, and solution providers check that they have implemented each part of the OBIE standard correctly, i.e., the Read/Write APIs, Security Profile, Operational Guidelines, and Customer Experience Guidelines. When implementers use these tools to self-attest, OBIE validates the results and publishes a conformance certificate. These certificates serve as evidence to all parties, including regulators, that the standard has been followed correctly.

We are pleased to announce that WSO2 Open Banking is OBIE Security Profile Conformance Certified. You can see that WSO2 is listed as an OBIE Security Profile Compliant solution here.

If you wish to run the OBIE security profile conformance suite against the WSO2 Open Banking solution, this document provides the instructions for your reference.

WSO2 Open Banking Documentation

Another great improvement that came with WSO2 Open Banking v1.3.0 is the availability of public documentation. Here are some of the important links to WSO2 Open Banking documentation.

If You Missed the March Deadline Without a Sandbox Environment Ready

All banks in Europe and the UK were required to have their open banking setups ready for external testing by March 2019. If your bank couldn’t meet that deadline yet, or is still struggling to select the required open banking platform, WSO2 can help you. Here are some useful resources:

Our Next Release to Meet the September Deadline

Our next target is to help banks meet the regulatory deadlines that fall in September. This includes making a production system available for third parties to go live with their applications.

We are planning another release before September to ensure banks have time to upgrade their solution on time. That release will contain the required implementations of the UK v3.1.1 Specification for banks in the UK and Berlin Group NextGenPSD2 v1.3 Specification for banks in Europe.

Apart from that, we are working towards implementing the Australia CDR Specification too.


WSO2 Open Banking is purpose-built for global open banking. It is built to align banking and regulatory needs with technology infrastructure and domain expertise to fully satisfy the technology requirements for PSD2 and Open Banking. If you missed the March 2019 deadline to open up APIs, we can help. The solution roadmap focuses on several enhancements to the solution especially those in line with meeting the September 2019 deadline for the RTS. If you’re exploring open banking or you’re already knee deep in a PSD2 compliance project and require a technology partnership, drop us a note at