WSO2 Venus

Nandika Jayawardana: The open source ESB

Open source is associated with most software products nowadays, and middleware is no exception.

As a middleware user, how does open source benefit you?

1. Being open source means you do not have to buy a commercial license to use the software.

2. There will be regular updates, which means you can keep your deployment up to date with respect to security flaws and other fixes.

3. Open source means there is a definite cost advantage when it comes to the total cost of ownership of a software product.

However, most open source middleware products do not come as fully open source. Sometimes there is an open source community edition and a separate enterprise edition, and so forth. In contrast, WSO2 ESB is a truly open source Enterprise Service Bus product.

Learn more about the open source nature of WSO2 ESB from "THE OPEN SOURCE ESB" article.

Anupama Pathirage: WSO2 DSS - Using Oracle Ref Cursors

A REF CURSOR is a PL/SQL data type whose value is the memory address of a query work area on the database. This sample shows how to use ref cursors as an OUT parameter in a stored procedure, or as the return value of a function, with WSO2 DSS. The sample uses an Oracle database with DSS 3.5.1.

SQL Scripts: To create the table, insert data, and create the stored procedure and function.

CREATE TABLE customers (id NUMBER, name VARCHAR2(100), location VARCHAR2(100));

INSERT into customers (id, name, location) values (1, 'Anne', 'UK');
INSERT into customers (id, name, location) values (2, 'George', 'USA');
INSERT into customers (id, name, location) values (3, 'Peter', 'USA');
INSERT into customers (id, name, location) values (4, 'Will', 'NZ');

CREATE PROCEDURE getCustomerDetails(i_ID IN NUMBER, o_Customer_Data OUT SYS_REFCURSOR)
AS
BEGIN
  OPEN o_Customer_Data FOR
  SELECT * FROM customers WHERE id>i_ID;
END getCustomerDetails;

CREATE FUNCTION returnCustomerDetails(i_ID IN NUMBER)
RETURN SYS_REFCURSOR
IS
  o_Customer_Data   SYS_REFCURSOR;
BEGIN
  OPEN o_Customer_Data FOR
  SELECT * FROM customers WHERE id>i_ID;
  RETURN o_Customer_Data;
END returnCustomerDetails;
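Outside DSS, the same ref cursor can be read with plain JDBC. The following is a minimal sketch (the class name and connection handling are illustrative, not part of the original sample) of calling the stored procedure and walking the returned cursor:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Types;

// Illustrative sketch: how a plain JDBC client reads the ref cursor that the
// stored procedure and function above produce.
public class RefCursorClient {

    // JDBC escape syntax for the stored procedure and function defined above.
    static final String PROC_CALL = "{call getCustomerDetails(?, ?)}";
    static final String FUNC_CALL = "{? = call returnCustomerDetails(?)}";

    static void printCustomers(Connection conn, int minId) throws Exception {
        try (CallableStatement cs = conn.prepareCall(PROC_CALL)) {
            cs.setInt(1, minId);
            // Types.REF_CURSOR needs JDBC 4.2; older Oracle drivers use
            // oracle.jdbc.OracleTypes.CURSOR instead.
            cs.registerOutParameter(2, Types.REF_CURSOR);
            cs.execute();
            try (ResultSet rs = cs.getObject(2, ResultSet.class)) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(PROC_CALL);
        System.out.println(FUNC_CALL);
    }
}
```

DSS performs the equivalent cursor handling internally when the parameter is declared with sqlType="ORACLE_REF_CURSOR", as shown in the data service below.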

Data Service

The following data service has two queries and the associated operations.
  • GetDataAsOut - Oracle ref cursor is used as out parameter of a stored procedure.
  • GetDataAsReturn - Oracle ref cursor is used as return value of a function.

<data name="TestRefCursor" transports="http https local">
   <config enableOData="false" id="TestDB">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@localhost:1521/xe</property>
      <property name="username">system</property>
      <property name="password">oracle</property>
   </config>
   <query id="GetDataAsOut" useConfig="TestDB">
      <sql>call getCustomerDetails(?,?)</sql>
      <result element="CustomerData" rowName="Customer">
         <element column="id" name="CustomerID" xsdType="string"/>
         <element column="name" name="CustomerName" xsdType="string"/>
      </result>
      <param name="id" sqlType="INTEGER"/>
      <param name="data" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
   </query>
   <query id="GetDataAsReturn" useConfig="TestDB">
      <sql>{? = call returnCustomerDetails(?)}</sql>
      <result element="CustomerDataReturn" rowName="Customer">
         <element column="id" name="CustomerID" xsdType="string"/>
         <element column="name" name="CustomerName" xsdType="string"/>
      </result>
      <param name="data" ordinal="1" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
      <param name="id" ordinal="2" sqlType="INTEGER"/>
   </query>
   <operation name="GetCustomerDataAsOut">
      <call-query href="GetDataAsOut">
         <with-param name="id" query-param="id"/>
      </call-query>
   </operation>
   <operation name="GetCustomerDataAsReturn">
      <call-query href="GetDataAsReturn">
         <with-param name="id" query-param="id"/>
      </call-query>
   </operation>
</data>

Sample Requests

Operation GetCustomerDataAsOut

Request:

   <p:GetCustomerDataAsOut xmlns:p="">
      <!--Exactly 1 occurrence-->
      <id>1</id>
   </p:GetCustomerDataAsOut>

Response:

<CustomerData xmlns="">
   <Customer>
      <CustomerID>2</CustomerID>
      <CustomerName>George</CustomerName>
   </Customer>
   <Customer>
      <CustomerID>3</CustomerID>
      <CustomerName>Peter</CustomerName>
   </Customer>
   <Customer>
      <CustomerID>4</CustomerID>
      <CustomerName>Will</CustomerName>
   </Customer>
</CustomerData>

Operation GetCustomerDataAsReturn

Request:

   <p:GetCustomerDataAsReturn xmlns:p="">
      <!--Exactly 1 occurrence-->
      <id>1</id>
   </p:GetCustomerDataAsReturn>

Response:

<CustomerDataReturn xmlns="">
   <Customer>
      <CustomerID>2</CustomerID>
      <CustomerName>George</CustomerName>
   </Customer>
   <Customer>
      <CustomerID>3</CustomerID>
      <CustomerName>Peter</CustomerName>
   </Customer>
   <Customer>
      <CustomerID>4</CustomerID>
      <CustomerName>Will</CustomerName>
   </Customer>
</CustomerDataReturn>

Gobinath Loganathan: Install Oracle JDK in Ubuntu

Oracle Java is the proprietary reference implementation of Java. It is no longer available in a supported Ubuntu repository. This article shows you how to manually install the latest Oracle Java Development Kit (Oracle JDK) in Ubuntu. Note: this article uses JDK8_Update_$java_update_no to demonstrate the installation. In the provided commands, replace the version

Dinusha Senanayaka: Amazon Web Services (AWS) integration with WSO2 Identity Server (WSO2 IS)


Amazon Web Services (AWS) supports federated authentication with the SAML2 and OpenID Connect standards. This gives you the capability to log in to the AWS Management Console, or call the AWS APIs, without having to create an IAM user in AWS for everyone in your organization.

Benefits of using federated single sign-on for AWS access:

  • No need to create IAM users on the AWS side
    • If the organization has an existing user store, it can be used as the user base for AWS
  • Users get a single identity across all the systems used by your organization
    • This makes administration easier when onboarding or offboarding users

In this tutorial we are going to look at the following integration scenarios:
  1. Connect WSO2 Identity Server (WSO2 IS) to single AWS account
  2. Connect WSO2 Identity Server to multiple AWS accounts

1. Connect WSO2 Identity Server to single AWS account

Business use case: Your organization owns an AWS account and needs to give organization users different levels of privileged access to the AWS console.

How to configure WSO2 IS to support this: This tutorial explains the required steps, including Multi-Factor Authentication (MFA).

2. Connect WSO2 Identity Server to multiple AWS accounts

Business use case: Your organization owns multiple AWS accounts (e.g., development and production), and you need to assign different levels of permissions in these accounts using the existing identities in your organization's user store (LDAP, JDBC, etc.).

How to configure WSO2 IS to support this: The following tutorial explains the required configurations.

We assume a user Alex in the organization's LDAP, who needs EC2 Admin permissions in the development AWS account and only EC2 read-only access in the production AWS account.

Business Requirements:
  • The organization uses WSO2 IS as its Identity Provider (IdP) and wants to use the same IdP to authenticate users to the AWS Management Console as well
  • User Alex should be able to log into the development AWS account as an EC2 admin user
  • Alex should be able to log into the production AWS account using the same identity, but only with EC2 read-only access
  • Alex should be able to switch roles from the development account to the production account

Configuration Guide

1. Configure AWS

1.1.  Configure AWS Development Account

Step 1: Configure WSO2 IS as an Identity Provider in Development Account

a. Log into the AWS console using the development account, navigate to Services, then click on IAM.

b. Click on "Identity Providers" in the left menu and then click on "Create Provider".

c. In the prompt window, provide the following info and click on "Create".

Provider Type: SAML
Provider Name: any preferred name as an identifier (e.g., wso2is)
Metadata Document: you need to download the WSO2 IS IdP metadata file and upload it here. The following are the instructions to download the IdP metadata file from WSO2 IS.

Log in to the WSO2 IS management console as an admin user. Navigate to "Resident" under "Identity Providers" in the left menu. In the prompt window, expand "Inbound Authentication Configuration", then expand "SAML". There you can find the "Download SAML Metadata" option. Clicking it gives you the option to save the IdP metadata in a metadata.xml file. Save it to the local file system and upload it in the AWS IdP configuration UI as the Metadata Document.

AWS IdP configuration UI

d. Locate the Identity Provider that we created and copy the Provider ARN value. We need this value later in the configuration.

Step 2: Add AWS IAM roles and configure the WSO2 IS identity provider as a trusted source in these roles

a. We need to create an AWS IAM role with EC2 Admin permissions, since Alex should have EC2 Admin privileges in the development AWS account.

Option 1: Use an existing role.
If you have an existing role with EC2 Admin permissions, you can edit the trust relationship of the role to give SSO access to the WSO2 IS identity provider. If you do not have an existing role, move to Option 2, which describes adding a new role.

Click on the desired role, go to the "Trust Relationships" tab, and click on "Edit trust relationship".

If the current trust relationship policy for this role is empty, you can copy the following policy configuration there after replacing the <Provider ARN Value of IdP> value (i.e., the Provider ARN value that you copied in Step 1):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<Provider ARN Value of IdP>:saml-provider/local-is"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": ""
        }
      }
    }
  ]
}

If you already have a policy in place, then you need to edit the existing policy to include SSO access for WSO2 IS.

Option 2 : Create and assign permissions to a new role.

Go to "Roles" and click on "Create new role". Select the role type "Role for identity provider access", since we need to allow SSO access using WSO2 IS.

Select wso2is as the SAML provider and click Next.

In the next step, just verify the trust policy and click Next.

Select the policy to be assigned to the role that you are creating. As per our sample scenario, we need to assign the "AmazonEC2FullAccess" policy to give EC2 Admin permissions to this role.

Give a preferred role name (e.g., Dev_EC2_Admin) and click on "Create".

b. Locate the Role ARN value on the role summary page and copy it. We need this value later in the configuration.

Now we have configured WSO2 IS as a SAML identity provider for the development AWS account, and we have created a role with EC2 full access permissions that allows the sts:AssumeRoleWithSAML capability to the WSO2 IS SAML provider.

1.2.  Configure AWS Production Account

Step 1: Repeat Step 1 of the development account configuration to configure WSO2 IS as an identity provider for the production account as well.

Step 2: Similar to the Dev_EC2_Admin role we created in the development account, we need to create a Prod_EC2_ReadOnly role in the production AWS account. (As per our sample scenario, Alex should have only EC2 read-only access to the production AWS account.) The only difference is that you need to select the appropriate policy (AmazonEC2ReadOnlyAccess) for this role.

Once the role is created, copy the Role ARN value of this role as well. We need this value later in the configuration.

1.3. Configure account switch capability from AWS development account's Dev_EC2_Admin role to production account's Prod_EC2_ReadOnly role

a. Log in to the AWS development account and configure an IAM policy that grants the privilege to call sts:AssumeRole for the role that you want to assume (i.e., the Prod_EC2_ReadOnly role in the production account). To do this:

1. Select "Policies" in the left menu and click on "Create Policy" option. Pick the "Create Your Own Policy" option there.

2. Give the policy a relevant name and copy the following policy configuration as the content, after replacing the <Prod_AWS_Account_Id> and <Prod_AWS_EC2_ReadOnly_Role> values.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<Prod_AWS_Account_Id>:role/<Prod_AWS_EC2_ReadOnly_Role>"
    }
  ]
}

3. Attach the policy that we created in the previous step to the Dev_EC2_Admin role in the development account. To do this, click on the role name and click on "Attach Policy" in the resulting window.

Now we have given the Dev_EC2_Admin role in the development AWS account permission to assume the role Prod_EC2_ReadOnly in the production account.

b. Log in to the production AWS account and edit the trust relationship of the role Prod_EC2_ReadOnly by adding the development account as a trusted entity. To do this:

1. Click on the role name "Prod_EC2_ReadOnly", navigate to the "Trust relationships" tab, and click on the "Edit trust relationship" option.

2. In the resulting policy editor, update the trust policy with the following configuration after replacing <Dev_Account_Id> with your development account id.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::222222222222:saml-provider/wso2is-local"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": ""
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Dev_Account_Id>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

We are done with the AWS configurations. Now we need to configure WSO2 IS to SSO with these two accounts.

2. Configure AWS app in WSO2 IS

1. Log in to the WSO2 IS management console, then navigate to Main -> Service Providers -> Add in the left menu. Provide any preferred name as the "Service provider name" (e.g., AWS) and click on Register.

2. In the resulting window, expand the "Claim Configuration" section, select the "Define Custom Claim Dialect" option, and add the following claim mappings.

3. Expand the "Role/Permission Configuration", then "Role Mapping", and add the following role mappings.
What we do here is map local roles in WSO2 IS to AWS roles. We have two LDAP roles called "dev_aws_ec2_admin" and "prod_aws_ec2_readonly", which are assigned to organization users to give the required access to the AWS development and production accounts.

When you do the mapping, pick the relevant roles in your organization's user store instead of dev_aws_ec2_admin and prod_aws_ec2_readonly, and use the relevant Role ARN and Provider ARN values from each account.

dev_aws_ec2_admin -> arn:aws:iam::222222222222:role/Dev_EC2_Admin,arn:aws:iam::222222222222:saml-provider/local-is
prod_aws_ec2_readonly -> arn:aws:iam::111111111111:role/Prod_EC2_ReadOnly,arn:aws:iam::111111111111:saml-provider/wso2is-local

4. Expand the "Inbound Authentication Configuration" section, then "SAML2 Web SSO Configuration", and select "Configure".

In the configuration UI, provide the following fields and click Update.

Issuer : urn:amazon:webservices
Default Assertion Consumer URL :
Enable Attribute Profile: Checked
Include Attributes in the Response Always: Checked
Enable IdP Initiated SSO: Checked

5. Open IS_HOME/repository/conf/user-mgt.xml and find the active user store configuration. Change the MultiAttributeSeparator value to something other than a comma (,) and restart the server.

<Property name="MultiAttributeSeparator">$$</Property>

The reason we need to change the MultiAttributeSeparator value is that this property is used to separate multiple attribute values, and by default it is set to a comma (,). Since an AWS "Role ARN, Provider ARN" pair must be treated as a single value, we need to change the separator to something other than a comma.
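To see the problem concretely, here is a small illustrative sketch (not WSO2 IS code; the role value is the sample mapping from above) showing how a comma separator would split a single AWS role mapping value in two, while a separator such as $$ leaves it intact:

```java
// Sketch: why a comma separator breaks AWS role mapping values.
public class SeparatorDemo {

    // A single AWS role attribute value already contains a comma:
    static final String AWS_ROLE =
        "arn:aws:iam::222222222222:role/Dev_EC2_Admin,arn:aws:iam::222222222222:saml-provider/local-is";

    // Split a multi-valued attribute on the given separator.
    static String[] split(String attribute, String separator) {
        return attribute.split(java.util.regex.Pattern.quote(separator));
    }

    public static void main(String[] args) {
        // With the default comma, one role value is wrongly split into two parts.
        System.out.println(split(AWS_ROLE, ",").length);  // 2
        // With "$$", the value survives as a single attribute.
        System.out.println(split(AWS_ROLE, "$$").length); // 1
    }
}
```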

We are done with all configurations.

3.  Testing

1. Before accessing the AWS console, log in to the WSO2 IS management console and confirm that user Alex has the required roles assigned, and that Alex's user profile has been updated with his email address, which is mapped as the RoleSessionName claim in AWS.

2. Access the AWS console using the following URL (replace <WSO2IS-HOST>:<PORT> as relevant).

3. The previous step will redirect you to the WSO2 IS login page. Once user Alex provides his credentials and is authenticated, AWS will present its role selection page, where the user can pick the role for the current session and continue.

4. Alex can switch from the development account role to the production role using either the switch role option provided in the AWS console or the Switch Role URL associated with the AWS role.

The AWS switch role URL can be found in the role detail. Usually this is in the format of: <AWS_ACCOUNT_ID>&roleName=<AWS_ROLE_NAME>

If you provide the production account id and the role "Prod_EC2_ReadOnly" in the above URL, you can see that Alex can switch from the development account, where he logged in, to the production account's Prod_EC2_ReadOnly role.

Chandana Napagoda: What is WSO2 Governance Registry

Many SOA governance tools and solutions have not matured over the years. However, the SOA governance tools provided by WSO2 have improved a lot during the last couple of years.

WSO2 Governance Registry provides enterprises with end-to-end SOA governance, which includes configuration governance, development process governance, design and runtime governance, and life cycle management. This enables IT professionals to streamline application development, testing, and deployment processes. The latest WSO2 Governance Registry release (5.0) introduces a host of features to further enhance various aspects of SOA governance.

The WSO2 Governance Registry 5.0 release has new Publisher and Store user interfaces to publish and consume assets. Asset owners can publish assets from the Publisher UI and manage the lifecycle of these assets from the same UI, while consumers can discover them from the Store UI.

New features of WSO2 Governance Registry:
WSO2 Governance Registry is shipped with newly added features such as:
  • A rich Enterprise Store based Publisher and Store
  • API Manager 2.0.0 integration
  • A dependency visualization UI
  • Multiple lifecycle support
  • Out-of-the-box support for Swagger imports
  • Service and application discovery for third-party servers
  • A graphical diff view to compare two interrelated assets, and a new REST-based Governance API

Rajith Siriwardena: OSGi Service Trackers

The requirement is to have a dynamic mapping to an interface implementation provided by the main OSGi bundle. At a given time there can only be a default and a single custom implementation. For this purpose I'm using an OSGi ServiceTracker to dynamically assign the implementation.

Use case:

The interface "org.siriwardana.sample.core.MessageHandlerFactory" will be exported by the core bundle and implemented by both the default bundle and a custom implementation.

/**
 * Interface for the message handler factory. Custom deployments should implement this interface.
 */
public interface MessageHandlerFactory {

    MessageHandler getHandler(String messageType);
}

The following is the org.siriwardana.sample.core.MessageHandler interface, which is also exported by the core bundle.

/**
 * Custom message handler which should be implemented by the custom deployment bundle to handle messages.
 */
public interface MessageHandler {

    /**
     * Create the message with a custom implementation.
     * @return String
     */
    String createReqMsg();

    /**
     * Handle the response of the request as per the custom implementation.
     */
    void handleResponse(String response);

    /**
     * Handle errors as per the custom implementation.
     */
    void onError(Exception e);
}


The default service bundle and a custom implementation service bundle will be available at runtime, and the custom implementation will be given priority by the consumer bundle.

ServiceTracker (org.osgi.util.tracker.ServiceTracker) and ServiceTrackerCustomizer (org.osgi.util.tracker.ServiceTrackerCustomizer) are used by the consumer bundle to track the implementations dynamically.

The following bundle activator implementation demonstrates the solution.

/**
 * @scr.component name="org.siriwardana.sample.consumer" immediate="true"
 */
public class ServiceComponent {

    private static Log LOGGER = LogFactory.getLog(ServiceComponent.class);
    private static final String MESSAGE_HANDLER_DEFAULT = "default";

    private ServiceTracker serviceTracker;
    private BundleContext bundleContext;
    private ServiceRegistration defaultHandlerRef;

    protected void activate(ComponentContext context) {
        bundleContext = context.getBundleContext();

        if (bundleContext != null) {
            ServiceTrackerCustomizer trackerCustomizer = new Customizer();
            serviceTracker = new ServiceTracker(bundleContext, MessageHandlerFactory.class.getName(), trackerCustomizer);
            serviceTracker.open();
            LOGGER.debug("ServiceTracker initialized");
        } else {
            LOGGER.error("BundleContext cannot be null");
        }
    }

    protected void deactivate(ComponentContext context) {
        serviceTracker.close();
        serviceTracker = null;
        LOGGER.debug("ServiceTracker stopped. Default handler bundle deactivated.");
    }

    private void setMessageHandlerFactory(ServiceReference<?> reference) {
        MessageHandlerFactory handlerFactory = (MessageHandlerFactory) bundleContext.getService(reference);
        LOGGER.debug("MessageHandlerFactory is acquired");
    }

    private void unsetMessageHandlerFactory(MessageHandlerFactory handlerFactory) {
        LOGGER.debug("MessageHandlerFactory is released");
    }

    /**
     * Service tracker customizer for the message handler factory implementations.
     */
    private class Customizer implements ServiceTrackerCustomizer {

        public Object addingService(ServiceReference serviceReference) {
            LOGGER.debug("ServiceTracker: service added event invoked");
            ServiceReference serviceRef = updateMessageHandlerService();
            return bundleContext.getService(serviceRef);
        }

        public void modifiedService(ServiceReference reference, Object service) {
            LOGGER.debug("ServiceTracker: modified service event invoked");
        }

        public void removedService(ServiceReference reference, Object service) {
            if (reference != null) {
                MessageHandlerFactory handlerFactory = (MessageHandlerFactory) bundleContext.getService(reference);
                LOGGER.debug("ServiceTracker: removed service event invoked");
            }
        }

        private ServiceReference updateMessageHandlerService() {
            ServiceReference serviceRef = null;
            try {
                ServiceReference<?>[] references = bundleContext
                        .getAllServiceReferences(MessageHandlerFactory.class.getName(), null);
                for (ServiceReference<?> reference : references) {
                    serviceRef = reference;
                    // A non-default implementation gets priority over the default one.
                    if (!MESSAGE_HANDLER_DEFAULT
                            .equalsIgnoreCase((String) reference.getProperty(Constants.MESSAGE_HANDLER_KEY))) {
                        break;
                    }
                }
                if (serviceRef != null) {
                    LOGGER.debug("ServiceTracker: HandlerFactory updated. Service reference: " + serviceRef);
                } else {
                    LOGGER.debug("ServiceTracker: HandlerFactory not updated: Service reference is null");
                }
            } catch (InvalidSyntaxException e) {
                LOGGER.error("ServiceTracker: Error while updating the MessageHandlers. ", e);
            }
            return serviceRef;
        }
    }
}

The following is the custom implementation bundle, which registers its service for the MessageHandlerFactory interface.

/**
 * @scr.component name="org.siriwardana.custom" immediate="true"
 */
public class CustomMessageHandlerFactoryComponent {

    private static final String MESSAGE_HANDLER = "CUSTOM";
    private static Log LOGGER = LogFactory.getLog(CustomMessageHandlerFactoryComponent.class);

    private ServiceRegistration serviceRef;

    protected void activate(ComponentContext context) {
        BundleContext bundleContext = context.getBundleContext();
        Dictionary<String, String> props = new Hashtable<>();
        props.put(Constants.MESSAGE_HANDLER_KEY, MESSAGE_HANDLER);

        serviceRef = bundleContext.registerService(MessageHandlerFactory.class, new CustomMessageHandlerFactory(), props);
        LOGGER.debug("Custom Message handler impl bundle activated");
    }

    protected void deactivate(ComponentContext context) {
        serviceRef.unregister();
        LOGGER.debug("Custom Message handler impl bundle deactivated");
    }
}

Whenever an interface implementation becomes available, the ServiceTracker will update the consumer bundle.
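The priority rule the tracker applies can be sketched in plain Java, without an OSGi runtime. This is an illustrative model of the loop in updateMessageHandlerService(), where each map stands in for one service registration's properties; the "handler" property name is an assumption for the sketch:

```java
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the selection rule: prefer a registration whose
// "handler" property is not "default".
public class HandlerSelection {

    static final String HANDLER_KEY = "handler";   // assumed property name
    static final String DEFAULT_VALUE = "default";

    // Walk the registrations; a non-default one wins immediately, mirroring
    // the break out of the loop in updateMessageHandlerService().
    static Map<String, String> pick(List<Map<String, String>> registrations) {
        Map<String, String> selected = null;
        for (Map<String, String> props : registrations) {
            selected = props;
            if (!DEFAULT_VALUE.equalsIgnoreCase(props.get(HANDLER_KEY))) {
                break;
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        Map<String, String> def = Map.of(HANDLER_KEY, "default");
        Map<String, String> custom = Map.of(HANDLER_KEY, "CUSTOM");
        System.out.println(pick(List.of(def, custom)).get(HANDLER_KEY)); // CUSTOM
        System.out.println(pick(List.of(def)).get(HANDLER_KEY));         // default
    }
}
```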

Himasha Guruge: Handling FIX session level reject messages in WSO2 ESB

The FIX transport implementation in WSO2 ESB, and the different usages of FIX in integration, are discussed in detail in [1]. As important as processing FIX messages, it is also important that the integration layer handles any serious FIX errors.

FIX Reject <3> messages are issued when a message is received but cannot be properly processed due to a session-level rule violation. The different causes of session-level rule violations are discussed in [2]. With WSO2 ESB 5.0, the FIX implementation has been enhanced so that you can acknowledge these session-level reject messages and handle the errors accordingly.

You can implement SessionEventHandler (org.apache.synapse.transport.fix.SessionEventHandler) and direct specific FIX messages (Reject <3> messages in this case) back to your proxy. In this case we update the fromAdmin method to send session-level reject messages back to the application, which is the proxy.

public class InitiatorSessionEventHandler implements SessionEventHandler {

    public void fromAdmin(FIXIncomingMessageHandler fixIncomingMessageHandler, Message message, SessionID sessionID) {
        // Sending the FIX 35=3 admin message back to the client.
        try {
            if (message.getHeader().getField(new StringField(35)).getValue().equals("3")) {
                fixIncomingMessageHandler.fromApp(message, sessionID);
            }
        } catch (Exception e) {
            // Handle field-not-found or other processing errors as appropriate.
        }
    }
}


Now you just need to add the above handler JAR to ESB/repository/components/lib, access these reject messages from your proxy, and do the proper error handling.

<filter regex="3" source="//message/header/field[@id='35']">
   <!-- error handling -->
</filter>

<parameter name="transport.fix.InitiatorSessionEventHandler">com.wso2.test.InitiatorSessionEventHandler</parameter>

This is just a simple sample of how you can extend SessionEventHandler! You could extend this for any of your session-level FIX requirements. And guess what? It is 100% open source! Check out [3] to see how WSO2 ESB is 100% open source and how you can benefit from it.


Gobinath Loganathan: Parse PCAP files in Java

This article is for those who have spent hours, like me, trying to find a good library that can parse raw PCAP files in Java. There are plenty of open source libraries available for Java, but most of them act as wrappers to the libpcap library, which makes them hard to use for simple use cases. The library I came across, pkts, is a pure Java library which can be easily imported into your

Anupama Pathirage: WSO2 DSS - Exposing Data as REST Resources

The WSO2 Data Services feature supports exposing data as a set of REST-style resources in addition to SOAP services. This sample demonstrates how to use REST resources for data inserts and batch data inserts via POST requests.

<data enableBatchRequests="true" name="TestBatchRequests" transports="http https local">
   <config enableOData="false" id="MysqlDB">
      <property name="driverClassName">com.mysql.jdbc.Driver</property>
      <property name="url">jdbc:mysql://localhost:3306/testdb</property>
      <property name="username">root</property>
      <property name="password">root</property>
   </config>
   <query id="InsertData" useConfig="MysqlDB">
      <sql>Insert into Customers(customerId,firstName,lastName,registrationID) values (?,?,?,?)</sql>
      <param name="p0_customerId" sqlType="INTEGER"/>
      <param name="p1_firstName" sqlType="STRING"/>
      <param name="p2_lastName" sqlType="STRING"/>
      <param name="p3_registrationID" sqlType="INTEGER"/>
   </query>
   <resource method="POST" path="InsertDataRes">
      <call-query href="InsertData">
         <with-param name="p0_customerId" query-param="p0_customerId"/>
         <with-param name="p1_firstName" query-param="p1_firstName"/>
         <with-param name="p2_lastName" query-param="p2_lastName"/>
         <with-param name="p3_registrationID" query-param="p3_registrationID"/>
      </call-query>
   </resource>
</data>

Insert Single Row of Data

When you send an HTTP POST request, the name of the JSON wrapper object should be "_post$RESOURCE_NAME" (in lower case), and the names and values of the child fields should be the names and values of the input parameters in the target query.

Sample Request : http://localhost:9763/services/TestBatchRequests/InsertDataRes
Http Method : POST
Request Headers : Content-Type : application/json
Payload :

{
  "_postinsertdatares": {
    "p0_customerId": 1,
    "p1_firstName": "Doe",
    "p2_lastName": "John",
    "p3_registrationID": 1
  }
}
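A minimal sketch of assembling such a payload in Java (the class and helper names are illustrative; the resource name and parameter names come from the data service above):

```java
// Illustrative helper: build the JSON payload for the InsertDataRes resource.
public class DssPayload {

    // The wrapper name is "_" + HTTP method + the resource name, in lower case.
    static String wrapperName(String httpMethod, String resourcePath) {
        return "_" + httpMethod.toLowerCase() + resourcePath.toLowerCase();
    }

    static String singleInsert(int id, String first, String last, int regId) {
        return "{ \"" + wrapperName("POST", "InsertDataRes") + "\": {"
             + " \"p0_customerId\": " + id + ","
             + " \"p1_firstName\": \"" + first + "\","
             + " \"p2_lastName\": \"" + last + "\","
             + " \"p3_registrationID\": " + regId + " } }";
    }

    public static void main(String[] args) {
        System.out.println(singleInsert(1, "Doe", "John", 1));
    }
}
```

The resulting string can then be POSTed to the resource URL with a Content-Type of application/json, for example with java.net.http.HttpClient.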

Insert Batch of Data

When batch requests are enabled for data service resources, resource paths are created with the "_batch_req" suffix. In the payload, the single-request JSON object becomes one of the many possible objects in a parent JSON array.

Sample Request : http://localhost:9763/services/TestBatchRequests/InsertDataRes_batch_req
Http Method : POST
Request Headers : Content-Type : application/json
Payload :

{
    "_postinsertdatares_batch_req": {
        "_postinsertdatares": [{
                "p0_customerId": 1,
                "p1_firstName": "Doe",
                "p2_lastName": "John",
                "p3_registrationID": 10
            }, {
                "p0_customerId": 2,
                "p1_firstName": "Anne",
                "p2_lastName": "John",
                "p3_registrationID": 100
            }
        ]
    }
}
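The batch wrapper can be assembled the same way; in this illustrative sketch (class and method names are my own) the row objects are passed in as pre-built JSON strings:

```java
import java.util.List;

// Illustrative helper: nest single-request objects under the
// "<single wrapper>_batch_req" name used by batch-enabled resources.
public class DssBatchPayload {

    static String batch(List<String> rows) {
        // rows are the inner objects, e.g. {"p0_customerId": 1, ...}
        return "{ \"_postinsertdatares_batch_req\": {"
             + " \"_postinsertdatares\": [" + String.join(", ", rows) + "] } }";
    }

    public static void main(String[] args) {
        String row1 = "{\"p0_customerId\": 1, \"p1_firstName\": \"Doe\"}";
        String row2 = "{\"p0_customerId\": 2, \"p1_firstName\": \"Anne\"}";
        System.out.println(batch(List.of(row1, row2)));
    }
}
```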

Pushpalanka Jayawardhana: The Role of IAM in Open Banking

This presentation discusses the PSD2 standards in detail, covering the PISP and AISP flows, the technologies involved around the standard, and finally how it can be adopted for the Sri Lankan financial market.

Chamara Silva: 4 common mistakes made in customer support

In day-to-day customer support operations we win customers, and sometimes we lose them for various reasons. From my experience, I would like to note the common mistakes that lose us customers. "Not a listener": listening is the major factor when it comes to effective customer support. The human brain can listen to 500 - 550 words per minute, but can talk only

Manorama Perera: WSO2 ESB Advantages

Many organizations use ESBs in their integration scenarios in order to facilitate interoperability between various heterogeneous systems.

WSO2 ESB is one of the leading ESB solutions in the market; it is 100% free and open source, with commercial support available.

Here are the advantages of selecting WSO2 ESB for your integration needs.

  • WSO2 ESB is feature-rich and standards compliant. It supports standard protocols such as SOAP and REST over HTTP, and several domain-specific protocols such as HL7.
  • It has numerous built-in message mediators.
  • You can select among various message formats and transports as needed.
  • The WSO2 ESB connector store provides numerous built-in connectors to seamlessly integrate third-party systems.
  • WSO2 ESB tooling enables you to quickly build integration solutions to be deployed in WSO2 ESB.
  • Furthermore, it is highly extensible, since you have the flexibility to develop WSO2 ESB extensions such as connectors and class mediators, which allow adding features that are not supported OOTB.

Read through this comprehensive article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer, WSO2) on What is WSO2 ESB.

This article explains when you should consider using WSO2 ESB, its advantages, and the powerful capabilities of WSO2 ESB.

Denuwanthi De Silva: [WSO2 IS] Setting new challenge question sets

1. I am using WSO2 IS 5.0.0 + Service Pack 1.

2. You need JDK 1.6/1.7 to use WSO2 IS 5.0.0.

3. I will be showing how to add new challenge question sets using the ‘UserIdentityManagementAdminService’ SOAP API.

4. This service is an admin web service embedded inside WSO2 IS.

5. External parties can invoke the methods exposed by this web service via a tool like SOAP UI.

6. Here I will show adding a new challenge question set as a new tenant admin.

7. For that you need to create a new tenant in WSO2 IS.

8. My tenant is

9. Now log in to the management console as and create a new user called ‘denu’. So the fully qualified username will be . Give that user admin permissions.

10. Create another user called ‘loguser’. Assign him permissions to ‘login’ to the management console and monitor ‘logs’.

11. Before invoking any APIs in ‘UserIdentityManagementAdminService’, make sure that you have added claim URI mappings for the challenge question sets.

There are two sets of challenge questions in WSO2 IS.


As you can see, the claim URI is equal to the challenge question set id.

So, if you plan to add a new set of challenge questions with ‘; set id, then before doing anything you need to add a claim mapping for it as below.


You can give any value from underlying data store as the mapped attribute.

After setting the challenge question claims manually for the tenant admin as above, you can invoke the APIs exposed by the SOAP service.

Denuwanthi De Silva: [Git] Merging Conflicted PRs

  1. Update the master
  2. Checkout a new branch from the master

git checkout -b new-branch master

3. Pull the conflicting PR from its branch in the remote repository

git pull remote-branch

4. Resolve the merge conflicts

git add the modified files, and commit them

5. Checkout the master

git checkout master

6. Merge the branch with the resolved conflicts into master

git merge --no-ff new-branch

7. Push the local master to your remote repository

git push origin master
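The steps above can be condensed into a single sketch. The remote and branch names here (`origin`, `pr-branch`, `conflict-fix`) are placeholders; substitute your own:

```shell
git checkout master && git pull origin master   # 1-2. update master, then branch off it
git checkout -b conflict-fix master
git pull origin pr-branch                       # 3. pull the conflicting PR branch
# ... resolve the reported conflicts in your editor ...
git add -A && git commit                        # 4. record the resolution
git checkout master                             # 5-6. merge the resolved branch back
git merge --no-ff conflict-fix
git push origin master                          # 7. publish
```

`--no-ff` forces a merge commit even when a fast-forward is possible, which keeps the PR's resolution visible as a distinct point in history.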


Amalka SubasingheTips on using environment variables in WSO2 Integration Cloud

Environment variables allow you to change an application's internal configuration without changing its source code. Let's say you want to deploy the same application in development, testing and production environments. Database-related configs and some other internal configurations may change from one environment to another. If we define these configurations as environment variables, we can easily set them per environment without changing the application's source code.

When you deploy your application in WSO2 Integration Cloud, it lets you define environment variables via the UI. Whenever you change the values of environment variables, you just need to redeploy the application for the changes to take effect.

Predefined environment variables
Key Concepts - Environment Variables describes the predefined set of environment variables that is useful when deploying applications in WSO2 Integration Cloud.

Sample on how to use environment variables
Use Environment Variable in your application provides a sample of how to use environment variables in WSO2 Integration Cloud.

Bulk environment variable upload
If the list of environment variables is long, entering them one by one in the Integration Cloud UI is tedious. You can upload them all as a JSON file instead.

Sample JSON file:


Use REST API to manipulate environment variables
WSO2 Integration Cloud provides a REST API to get/add/update/delete environment variables.

Get version hash Id
curl -v -b cookies -X POST -d 'action=getVersionHashId&applicationName=app001&applicationRevision=1.0.0'

Get environment variables per version
curl -v -b cookies -X POST -d 'action=getEnvVariablesOfVersion&versionKey=123456789'

Add environment variable
curl -v -b cookies -X POST  -d 'action=addRuntimeProperty&versionKey=123456789&key=ENV_USER&value=amalka'

Update environment variable
curl -v -b cookies -X POST -d 'action=updateRuntimeProperty&versionKey=123456789&prevKey=ENV_USER&newKey=ENV_USERNAME&newValue=amalkasubasinghe'

Delete environment variable
curl -v -b cookies -X POST -d 'action=deleteRuntimeProperty&versionKey=123456789&key=ENV_USERNAME'

Code samples to read environment variables for different app types
Here is sample code for reading environment variables from the different app types supported by WSO2 Integration Cloud.

Tomcat/Java Web Application/MSF4J
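The Java snippet for this app type did not survive here; as a minimal sketch, any Java-based app (Tomcat web app, MSF4J service, etc.) can read an environment variable with the standard `System.getenv` call. `ENV_DATABASE_URL` is simply the example variable name used throughout this post:

```java
public class EnvReader {
    public static void main(String[] args) {
        // Returns the value set via the cloud UI, or null when the variable is not defined.
        String dbUrl = System.getenv("ENV_DATABASE_URL");
        System.out.println(dbUrl != null ? dbUrl : "ENV_DATABASE_URL is not set");
    }
}
```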


Ballerina:

string dbUrl = system:getEnv("ENV_DATABASE_URL");


PHP:

print getenv('ENV_DATABASE_URL');


WSO2 ESB: You can use the script mediator to read the environment variable in the synapse configuration. Here is a sample proxy service that reads the property ENV_DATABASE_URL, which is defined as an environment variable.

<?xml version="1.0" encoding="UTF-8"?>
<!-- closing tags and the proxy name were lost in extraction; "ReadEnvProxy" is a placeholder -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="ReadEnvProxy" startOnLoad="true">
   <target>
      <inSequence>
         <script language="js"><![CDATA[
             mc.setProperty("envDatabaseURL", java.lang.System.getenv("ENV_DATABASE_URL"));
         ]]></script>
         <log level="custom">
            <property expression="$ctx:envDatabaseURL"
                      name="EnvDatabaseURL: "/>
         </log>
      </inSequence>
   </target>
</proxy>


Where ENV_DATABASE_URL is the name of the variable we wish to access.

Jaggery:

var process = require('process');
print(process.getEnvs());                  // json object
print(process.getEnv('ENV_DATABASE_URL')); // string

Isuru PereraUsing Java Flight Recorder

I wrote a Medium story about "Using Java Flight Recorder" last year. The story is a bit long, but it has all the details you need to start using Java Flight Recorder (JFR).

Read more at "Using Java Flight Recorder".

Thank you!

Isuru PereraMoving to Medium!

Ever since Medium came along, most of the people I know started writing blog posts there. So I also wanted to try it out and see how it works!

My page is and I already wrote one story in the last year.

I really liked the editor in Medium, and I don't have to worry about how my story will look when I publish it. That is the main problem I have with Blogger: I have to "preview" my post to make sure it looks fine, especially when I have code snippets. This is not really a problem of the Blogger platform itself; it's a problem because I use a third-party syntax highlighter.

I'm really disappointed that I didn't spend time writing more posts. My last blog post on Blogger was more than a year ago! There were many personal reasons for not writing. Anyway, now I want to start writing again, and I will continue to write on Medium. I'm also planning to link my Medium stories from this blog.

Thank you for reading! :)

Chandana NapagodaJava 8 lambda expression for list/array conversion

1). Convert List<String> to List<Integer> (List of Strings to List of Integers)

// assuming: List<String> stringList
List<Integer> integerList = stringList.stream().map(Integer::parseInt).collect(Collectors.toList());

// the longer full lambda version:
List<Integer> integerList = stringList.stream().map(s -> Integer.parseInt(s)).collect(Collectors.toList());

2). Convert List<String> to int[] (List of Strings to int array)

int[] intArray = stringList.stream().mapToInt(Integer::parseInt).toArray();

3). Convert String[] to List<Integer> (String array to List of Integers)

List<Integer>  integerList = Stream.of(array).map(Integer::parseInt).collect(Collectors.toList());

4). Convert String[] to int[] (String array to int array)

int[] intArray = Stream.of(stringArray).mapToInt(Integer::parseInt).toArray();

5). Convert String[] to List<Double> (String array to Double List)

List<Double> doubleList = Stream.of(stringArray).map(Double::parseDouble).collect(Collectors.toList());

6). Convert int[] to String[] (int array to String array)

String[] stringArray = Arrays.stream(intArray).mapToObj(String::valueOf).toArray(String[]::new);

7). Convert 2D int[][] to List<List<Integer>> (2D int array to nested Integer List)

List<List<Integer>> list = Arrays.stream(array2d)
        .map(row -> Arrays.stream(row).boxed().collect(Collectors.toList()))
        .collect(Collectors.toList());
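The snippets above can be collected into one runnable class; the input list and array names (`stringList`, `stringArray`, `array2d`) are illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Conversions {
    public static void main(String[] args) {
        List<String> stringList = Arrays.asList("1", "2", "3");
        String[] stringArray = {"4", "5"};
        int[][] array2d = {{1, 2}, {3}};

        // 1) List<String> -> List<Integer>
        List<Integer> integerList = stringList.stream().map(Integer::parseInt).collect(Collectors.toList());

        // 2) List<String> -> int[]
        int[] intArray = stringList.stream().mapToInt(Integer::parseInt).toArray();

        // 4) String[] -> int[]
        int[] fromArray = Arrays.stream(stringArray).mapToInt(Integer::parseInt).toArray();

        // 6) int[] -> String[]
        String[] backToStrings = Arrays.stream(intArray).mapToObj(String::valueOf).toArray(String[]::new);

        // 7) int[][] -> List<List<Integer>>
        List<List<Integer>> nested = Arrays.stream(array2d)
                .map(row -> Arrays.stream(row).boxed().collect(Collectors.toList()))
                .collect(Collectors.toList());

        System.out.println(integerList);                    // [1, 2, 3]
        System.out.println(Arrays.toString(intArray));      // [1, 2, 3]
        System.out.println(Arrays.toString(fromArray));     // [4, 5]
        System.out.println(Arrays.toString(backToStrings)); // [1, 2, 3]
        System.out.println(nested);                         // [[1, 2], [3]]
    }
}
```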

Senduran BalasubramaniyamWhile loop in WSO2 ESB

How do we invoke an endpoint many times?
There are two situations:

  1. The number of invocations is defined, i.e. we know how many times we are going to invoke the endpoint. In such a situation, we can construct a mock payload with that number of elements and, by iterating over it, invoke the endpoint.
  2. The number of invocations is not defined, i.e. we don't know how many times we need to invoke the endpoint; the response of the previous invocation determines the number of calls.
This post gives a sample ESB configuration to invoke an endpoint any number of times, like a while loop (until a condition is satisfied).

IMPORTANT: I do NOT recommend the following configuration in a production environment. If you come across a similar situation, I recommend revisiting your use case and coming up with a better way.

Anyway, the following sample is really fun and shows you how powerful WSO2 ESB is.

The idea behind creating the loop is this: I use a filter mediator and, based on a condition, decide whether to continue the flow or terminate. When continuing the flow, I invoke the endpoint using the send mediator and dispatch the response to the same sequence that does the filtering, so there is a loop until the terminating condition is satisfied.

In this sample I will be using a simple database. I will query only one entry at a time and loop until all the entries are updated.

Setting up the sample

I am using WSO2 Enterprise Integrator 6.1.1. and MySQL as my database.

Creating the database and table

CREATE DATABASE SimpleSchool;
USE SimpleSchool;

CREATE TABLE `Student` (`Id` int(11) DEFAULT NULL, `Name` varchar(200) DEFAULT NULL, `State` varchar(4) DEFAULT NULL);

Data Service Configuration

Remember to add the mysql driver jar into EI_HOME/lib

<data name="SimpSchool" transports="http https local">
  <config enableOData="false" id="SimplSchool">
    <property name="driverClassName">com.mysql.jdbc.Driver</property>
    <property name="url">jdbc:mysql://localhost:3306/SimpleSchool</property>
    <property name="username">USERNAME</property>
    <property name="password">PASSWORD</property>
  </config>
  <query id="insertStudent" useConfig="SimplSchool">
    <sql>insert into Student (Id, Name, State) values (? ,? ,?);</sql>
    <param name="Id" sqlType="STRING"/>
    <param name="Name" sqlType="STRING"/>
    <param name="State" sqlType="STRING"/>
  </query>
  <query id="getCount" useConfig="SimplSchool">
    <sql>select count(*) as count from Student where State = 'New';</sql>
    <result element="Result" rowName="">
      <element column="count" name="count" xsdType="string"/>
    </result>
  </query>
  <query id="UpdateState" returnUpdatedRowCount="true" useConfig="SimplSchool">
    <sql>update Student set state='Done' where Id=?;</sql>
    <result element="UpdatedRowCount" rowName="" useColumnNumbers="true">
      <element column="1" name="Value" xsdType="integer"/>
    </result>
    <param name="Id" sqlType="STRING"/>
  </query>
  <query id="selectStudent" useConfig="SimplSchool">
    <sql>select Id, Name from Student where state='New' limit 1;</sql>
    <result element="Entries" rowName="Entry">
      <element column="Id" name="Id" xsdType="string"/>
      <element column="Name" name="Name" xsdType="string"/>
    </result>
  </query>
  <resource method="POST" path="insert">
    <call-query href="insertStudent">
      <with-param name="Id" query-param="Id"/>
      <with-param name="Name" query-param="Name"/>
      <with-param name="State" query-param="State"/>
    </call-query>
  </resource>
  <resource method="POST" path="getCount">
    <call-query href="getCount"/>
  </resource>
  <resource method="POST" path="select">
    <call-query href="selectStudent"/>
  </resource>
  <resource method="POST" path="update">
    <call-query href="UpdateState">
      <with-param name="Id" query-param="Id"/>
    </call-query>
  </resource>
</data>

ESB Configuration

I will be using an API to initiate the call. The API invokes the loop_getCountAndCheck sequence, which queries the database and retrieves the count. The result is filtered: if the count is zero, we call the loop_done sequence; otherwise, we call the loop_update_logic sequence. Inside loop_update_logic we update the database and let the response be processed by the loop_getCountAndCheck sequence again.

<?xml version="1.0" encoding="UTF-8"?>
<!-- closing tags and namespaces restored after extraction; http://ws.wso2.org/dataservice is the default DSS namespace -->
<sequence name="loop_getCountAndCheck" xmlns="http://ws.apache.org/ns/synapse">
  <payloadFactory media-type="xml">
    <format><dat:_postgetcount xmlns:dat="http://ws.wso2.org/dataservice"/></format>
    <args/>
  </payloadFactory>
  <call>
    <endpoint><http method="POST" uri-template="http://localhost:8280/services/SimpSchool/getCount"/></endpoint>
  </call>
  <log level="custom">
    <property expression="//n1:Result/n1:count"
              name="called get count. Remaining are:"
              xmlns:n1="http://ws.wso2.org/dataservice" xmlns:ns="http://org.apache.synapse/xsd"/>
  </log>
  <filter regex="0" source="//n1:count"
          xmlns:n1="http://ws.wso2.org/dataservice" xmlns:ns="http://org.apache.synapse/xsd">
    <then>
      <log level="custom">
        <property name="remaining are 0" value="calling done sequence"/>
      </log>
      <sequence key="loop_done"/>
    </then>
    <else>
      <log level="custom">
        <property name="remaining are not zero" value="calling update_logic sequence"/>
      </log>
      <sequence key="loop_update_logic"/>
    </else>
  </filter>
</sequence>

loop_done sequence
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="loop_done" xmlns="http://ws.apache.org/ns/synapse">
  <log level="custom">
    <property name="this is done sequence" value="Responding to client"/>
  </log>
  <payloadFactory media-type="xml">
    <format><done xmlns="">updating all</done></format>
    <args/>
  </payloadFactory>
  <respond/>
</sequence>

<?xml version="1.0" encoding="UTF-8"?>
<sequence name="loop_update_logic" xmlns="http://ws.apache.org/ns/synapse">
  <log level="custom">
    <property name="this is logic sequence" value="selecting entries"/>
  </log>
  <payloadFactory media-type="xml">
    <format><dat:_postselect xmlns:dat="http://ws.wso2.org/dataservice"/></format>
    <args/>
  </payloadFactory>
  <call>
    <endpoint><http method="POST" uri-template="http://localhost:8280/services/SimpSchool/select"/></endpoint>
  </call>
  <log level="custom">
    <property name="data queried" value="for updating"/>
  </log>
  <payloadFactory media-type="xml">
    <format>
      <!-- payload body restored; $1 is filled from the arg below -->
      <dat:_postupdate xmlns:dat="http://ws.wso2.org/dataservice"><dat:Id>$1</dat:Id></dat:_postupdate>
    </format>
    <args>
      <arg evaluator="xml" expression="//n1:Id" literal="false"
           xmlns:n1="http://ws.wso2.org/dataservice" xmlns:ns="http://org.apache.synapse/xsd"/>
    </args>
  </payloadFactory>
  <log level="custom">
    <property name="data constructed" value="for updating"/>
  </log>
  <send receive="loop_getCountAndCheck">
    <endpoint><http method="POST" uri-template="http://localhost:8280/services/SimpSchool/update"/></endpoint>
  </send>
</sequence>

API configuration
<api xmlns="http://ws.apache.org/ns/synapse" name="UpdateInLoop" context="/loop">
  <resource methods="GET">
    <inSequence>
      <sequence key="loop_getCountAndCheck"/>
    </inSequence>
  </resource>
</api>

Sample data to fill the Student table
INSERT INTO Student (Id, Name, State) values (1, 'AAAA' , 'New'), (2, 'BBBB', 'New'), (3, 'CCCC' , 'New'), (4, 'DDDD' , 'New');

Sample curl request
curl -v http://localhost:8280/loop

Console output on a happy path
[2017-08-07 16:58:52,474] [EI-Core]  INFO - LogMediator called get count. Remaining are: = 4
[2017-08-07 16:58:52,474] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,475] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,488] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,489] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,517] [EI-Core] INFO - LogMediator called get count. Remaining are: = 3
[2017-08-07 16:58:52,518] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,518] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,524] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,524] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,553] [EI-Core] INFO - LogMediator called get count. Remaining are: = 2
[2017-08-07 16:58:52,553] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,553] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,559] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,559] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,586] [EI-Core] INFO - LogMediator called get count. Remaining are: = 1
[2017-08-07 16:58:52,586] [EI-Core] INFO - LogMediator remaining are not zero = calling update_logic sequence
[2017-08-07 16:58:52,587] [EI-Core] INFO - LogMediator this is logic sequence = selecting entries
[2017-08-07 16:58:52,597] [EI-Core] INFO - LogMediator data queried = for updating
[2017-08-07 16:58:52,598] [EI-Core] INFO - LogMediator data constructed = for updating
[2017-08-07 16:58:52,619] [EI-Core] INFO - LogMediator called get count. Remaining are: = 0
[2017-08-07 16:58:52,620] [EI-Core] INFO - LogMediator remaining are 0 = calling done sequence
[2017-08-07 16:58:52,620] [EI-Core] INFO - LogMediator this is done sequence = Responding to client

The whole flow works on the same message context. I haven't done any error handling here. We could introduce a property and, by incrementing it in each iteration, control the loop further.

Cheers !

Malith JayasingheThe Performance Of Multi-Level Feedback Queue

In this blog, I will discuss the performance characteristics of the multi-level feedback queue, a scheduling policy that gives preferential treatment to short jobs. This policy is also called the multi-level time sharing policy (MLTP). Several OS schedulers use the multi-level feedback queue (or variants of it) for scheduling jobs; it allows a process to move between queues. One of its main advantages is that it can speed up the flow of small tasks, which can yield overall performance improvements when the service time distribution of tasks follows a long-tailed distribution (long-tailed distributions are explained in detail later in this blog).

The following figure illustrates the model I consider in this article.


The basic functionality of multi-level time sharing scheduling policy considered in this blog is as follows.

Each new task that arrives at the system is placed in the lowest queue, where it is served in a First-Come-First-Served (FCFS) manner until it receives a maximum of q1 amount of service (q1 represents a time duration).

If the service time of the task is less than or equal to q1, the task departs the system (after receiving at most q1 amount of service). Otherwise, the task is placed in Queue 2, where it is processed in a FCFS manner until it receives at most q2 amount of service, and so on.

The task propagates through the system of queues until the total processing time it has received equals its service time, at which point it leaves the system.

A task waiting to be served in Queue i has priority of service over tasks waiting in Queues i + 1, i + 2, …, N, where N denotes the number of levels. However, a task currently being processed is not preempted upon the arrival of a new task to the system.

Two Variants

In this blog, I will consider the following two models:

Multi-level optimal quantum timesharing policy with N levels (N-MLTP-O): The quanta (q1, q2, …, qn) for N-MLTP-O are computed to optimise overall expected waiting time.

Multi-level equal quantum time-sharing policy with N levels (N-MLTP-E): The quanta for N-MLTP-E are equal on each level, i.e. q1 = q2 = q3 = … = qN.
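The queue discipline described earlier can be sketched as a small discrete-event simulation. This is an illustrative toy, not the analytical model of the blog: arrivals are Poisson as in the post, but service times here are exponential for brevity (the blog's results use Pareto service times), and the class, quanta and parameter values are all made up:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Random;

public class MltpSketch {
    static final class Job {
        final double arrival, service;
        double remaining;
        Job(double arrival, double service) { this.arrival = arrival; this.service = service; this.remaining = service; }
    }

    // Non-preemptive multi-level time sharing: quanta[i] is the maximum service a
    // job may receive at level i; the last quantum should be effectively unbounded.
    // Returns the mean waiting time (completion - arrival - service).
    static double simulate(double[] quanta, double lambda, double meanService, int numJobs, long seed) {
        Random rng = new Random(seed);
        Deque<Job> arrivals = new ArrayDeque<>();
        double t = 0;
        for (int i = 0; i < numJobs; i++) {
            t += -Math.log(1 - rng.nextDouble()) / lambda;            // Poisson arrivals
            double s = -Math.log(1 - rng.nextDouble()) * meanService; // exponential service (simplification)
            arrivals.add(new Job(t, s));
        }
        List<Deque<Job>> queues = new ArrayList<>();
        for (int i = 0; i < quanta.length; i++) queues.add(new ArrayDeque<>());
        double clock = 0, totalWait = 0;
        int done = 0;
        while (done < numJobs) {
            while (!arrivals.isEmpty() && arrivals.peek().arrival <= clock)
                queues.get(0).add(arrivals.poll());                   // new jobs enter the lowest queue
            int level = -1;
            for (int i = 0; i < quanta.length; i++)
                if (!queues.get(i).isEmpty()) { level = i; break; }   // lowest non-empty queue has priority
            if (level == -1) { clock = arrivals.peek().arrival; continue; } // server idle: jump ahead
            Job job = queues.get(level).poll();
            double slice = Math.min(job.remaining, quanta[level]);    // serve at most this level's quantum
            clock += slice;                                           // no preemption during the slice
            job.remaining -= slice;
            if (job.remaining > 1e-12) {
                while (!arrivals.isEmpty() && arrivals.peek().arrival <= clock)
                    queues.get(0).add(arrivals.poll());               // admit arrivals seen during the slice
                queues.get(Math.min(level + 1, quanta.length - 1)).add(job); // demote to the next level
            } else {
                totalWait += clock - job.arrival - job.service;
                done++;
            }
        }
        return totalWait / numJobs;
    }

    public static void main(String[] args) {
        double[] fcfs = {Double.MAX_VALUE};            // a single unbounded quantum behaves like FCFS
        double[] mltpE = {1.0, 1.0, Double.MAX_VALUE}; // 3-MLTP-E-like: equal quanta, unbounded last level
        System.out.printf("FCFS   mean wait = %.3f%n", simulate(fcfs,  0.7, 1.0, 100_000, 7));
        System.out.printf("MLTP-E mean wait = %.3f%n", simulate(mltpE, 0.7, 1.0, 100_000, 7));
    }
}
```

With heavy-tailed (e.g. Pareto) service times substituted in, the MLTP variant's advantage over FCFS becomes much more pronounced, which is the effect the figures below quantify.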

Long-tailed Distributions

In his paper, L. E. Schrage derived an expression for the expected waiting time of the multi-level time sharing policy under a general service time distribution when task arrivals follow a Poisson process. We used this result in our previous work to study the performance of the multi-level time sharing policy under long-tailed service time distributions. We specifically considered long-tailed service time distributions because there is evidence that the service times of certain computing workloads closely follow such distributions. In such distributions:

  1. There is a very high probability that a task is very small (short), and the probability that a task is very large (long) is very small. This results in a service time distribution with very high variance.
  2. Although the probability of a very large task appearing is very small, the load imposed on the system by these (very few) large tasks can be as high as 50% of the system load.
  3. When the service time distribution exhibits very high variance, several small tasks can get stuck behind a very large task. This results in significant performance degradation, particularly if tasks are processed in a FCFS manner until completion.

In particular, we looked at the performance of the multi-level time sharing policy under the Pareto distribution (one of the commonly appearing long-tailed distributions) and investigated how the variability of service times affects the performance of the multi-level time sharing policy under different system loads. The probability density function of the Pareto distribution (with minimum value k) is given by

f(x) = α k^α / x^(α+1),  for x ≥ k

where 2 > α > 0. α represents the variability of task service times. The value of α depends on the type of tasks; for example, Unix process CPU requirements have an α value of 1.0. The lower the value of α, the higher the variability of service times.
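If you want to experiment with Pareto-distributed service times yourself, the inverse-transform method gives a one-line sampler. The class name and the parameter values chosen here are illustrative:

```java
import java.util.Random;

public class ParetoSample {
    // Inverse-transform sampling for Pareto(k, alpha):
    // F(x) = 1 - (k/x)^alpha for x >= k, so x = k / (1 - U)^(1/alpha) with U ~ Uniform(0,1).
    static double samplePareto(double k, double alpha, Random rng) {
        return k / Math.pow(1.0 - rng.nextDouble(), 1.0 / alpha);
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double k = 1.0, alpha = 1.1; // alpha < 2: very high service time variability
        double sum = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) sum += samplePareto(k, alpha, rng);
        // For alpha > 1 the theoretical mean is alpha*k/(alpha-1);
        // convergence of the sample mean is slow when alpha is close to 1.
        System.out.println("sample mean = " + sum / n);
    }
}
```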

In this blog, I will briefly present some of these results.

The behaviour of overall expected waiting time (or overall average waiting time)

The following figures show the overall expected waiting time and factor of improvement in expected waiting time in MLTP over FCFS

In Figure 2:

Y axis: E[W]- Overall expected waiting time (or overall average waiting time) of a task which enters the system (unit: time unit)

X axis: α, representing the variability of task service times (refer to the previous section). The lower the value of α, the higher the variability of service times.

In Figure 3:

Y axis: The factor of improvement in MLTP over FCFS

X axis: α, representing the variability of task service times (refer to the previous section). The lower the value of α, the higher the variability of service times.

Figure 2: The behaviour of overall average expected waiting time
Figure 3: The factor of improvement in MLTP over FCFS

First, let's have a look at the performance of 2-MLTP-O, 2-MLTP-E and FCFS.

We note from Figure 2 that 2-MLTP-O outperforms both 2-MLTP-E and FCFS under all the scenarios considered. For example, under a system load of 0.7, when α = 0.4, 2-MLTP-O outperforms FCFS and 2-MLTP-E by factors of 3 and 2 respectively.

Under the same system load, when α is equal to 1.1, 2-MLTP-O outperforms FCFS and 2-MLTP-E by factors of 2 and 1.5 respectively. We note that the factor of improvement is highly significant when both the system load and the task size variability are high (i.e. low α).

On the other hand, if both the system load and task size variability are low (i.e. high α), then the factor of improvement is not highly significant.

Also notice that as we increase the number of levels, we get an improvement in performance. For example, under a system load of 0.7, when α is equal to 0.4, 3-MLTP-O performs 1.6 times better than 2-MLTP-O.

The impact of the number of levels

The following figure plots the expected waiting time vs. the number of levels (N) for selected α values and system loads. Note that in these plots the x and y axes are on a log (base 10) scale.

In the figure below:

E[W]: Overall expected waiting time (or overall average waiting time) of a task which enters the system (unit: time unit)

N: Number of queues/levels

α: Represents the variability of task service times (refer to the previous section). The lower the value of α, the higher the variability of service times.

The impact of numbers of levels on E[W]

One of the main observations is that when the variability of service times is very high, we can get a significant improvement in the average waiting time by increasing the number of levels.


In this blog, we looked at the performance of the multi-level feedback queue scheduling policy (also called the multi-level time sharing policy) with a finite number of queues. We compared it with FCFS under two scenarios: (1) the quanta are computed to optimize the expected waiting time (MLTP-O), and (2) the quanta are equal on each level (MLTP-E). We noticed that if the variability of service times is high, we can get significant improvements by using MLTP-O, and that both MLTP-O and MLTP-E outperform FCFS significantly, in particular when the variability of service times is high. If the variability of service times is low, there are no significant differences in the performance results.

Gobinath LoganathanMicroservices in a minute

Microservices have received more attention in recent years as a new service-oriented architecture with a high level of modularity. Compared to traditional web services, whether SOAP or REST, microservices are small in size and complexity yet bring cohesiveness to web services. Microservices do not require a servlet container or web server to be deployed; instead, they are created as individual JAR files.

Chandana NapagodaService Discovery with WSO2 Governance Registry

This blog post explains the service discovery capability of WSO2 Governance Registry. If you have heard of UDDI and WS-Discovery, those technologies were used to discover services during the 2009-2013 era.

What is UDDI:

UDDI stands for Universal Description, Discovery, and Integration. It is seen, alongside SOAP and WSDL, as one of the three foundation standards of web services. It uses the Web Services Description Language (WSDL) to describe the services.

What is WS-Discovery:

WS-Discovery is a standard protocol for dynamically discovering service endpoints. Using WS-Discovery, service providers multicast and advertise their endpoints with others.

Since most modern services are REST based, the above two approaches are considered dead nowadays. Both UDDI and WS-Discovery target SOAP-based services, and they are very bulky. In addition, the industry is moving from the Service Registry concept to the Asset Store (Governance Center), and people tend to use REST APIs and discovery clients.

How Discovery Client works

So, here I am going to explain how to write a discovery client in WSO2 Governance Registry (WSO2 G-Reg) to discover services deployed in WSO2 Enterprise Service Bus (WSO2 ESB)/WSO2 Enterprise Integrator (WSO2 EI). This service discovery client connects to the ESB/EI server, finds the services deployed there, and catalogs them into the G-Reg server. In addition to service metadata (endpoint, name, namespace, etc.), the discovery client imports the WSDLs and XSDs as well.

Configure Service Discovery Client:

Sample service discovery client implementation can be found from the below GitHub repo(Discovery Client).

1). Download WSO2 Governance Registry and WSO2 ESB/WSO2 EI product and unzip it.

2). By default, both servers run on port 9443, so you have to change one of the server ports. Here I am changing the port offset of the ESB server.

Open the carbon.xml file located at <ESB_HOME>/repository/conf/carbon.xml, find the “Offset” element, and change its value as follows: <Offset>1</Offset>

3). Copy <ESB_HOME>/repository/components/plugins/org.wso2.carbon.service.mgt.stub_4.x.x.jar to <GREG_HOME>/repository/components/dropins.

4). Download or clone ESB service discovery client project and build it.

5). Copy build jar file into <GREG_HOME>/repository/components/dropins directory.

6). Then open the registry.xml file located at <GREG_HOME>/repository/conf/registry.xml and register the service discovery client as a task. This task should be added under the “tasks” element.

<task name="ServiceDiscovery" class="">
            <trigger cron="0/100 * * * * ?"/>
            <property key="userName" value="admin"/>
            <property key="password" value="admin"/>
            <property key="serverUrl" value="https://localhost:9444/services/"/>
            <property key="version" value="1.0.0"/>
</task>

7). Change the userName, password, serverUrl and version values according to your setup.

8). Now start the ESB server first, and then start the G-Reg server.

You will see a “# of service created: ...” message in the G-Reg console once the server has discovered a service from the ESB server; in the meantime, the related WSDLs and XSDs are imported into G-Reg. These services are cataloged under the “SOAP Service” asset type.

Chandana NapagodaG-Reg and ESB integration scenarios for Governance

The WSO2 Enterprise Service Bus (ESB) and WSO2 Enterprise Integrator (EI) products employ WSO2 Governance Registry for storing configuration elements and resources such as WSDLs, policies, service metadata, etc. By default, WSO2 ESB/EI ships with an embedded registry, which is entirely based on WSO2 Governance Registry (G-Reg). Further, based on your requirements, you can connect to a remotely running WSO2 Governance Registry using a remote JDBC connection, which is known as a ‘JDBC registry mount’.

Beyond the registry/repository aspect of WSO2 G-Reg, its primary use cases are design-time governance and runtime governance with seamless lifecycle management, known as the governance aspect of WSO2 G-Reg. This governance aspect provides more flexibility for integration with WSO2 ESB/EI.

When integrating WSO2 ESB/EI with WSO2 G-Reg in governance aspect, there are three options available. They are:

1). Share Registry space with both ESB/EI and G-Reg
2). Use G-Reg to push artifacts into ESB/EI node
3). ESB/EI pulls artifacts from the G-Reg when needed

Let’s go through the advantages and disadvantages of each option. Here we consider a scenario where metadata corresponding to ESB artifacts, such as endpoints, is stored in G-Reg as asset types. Each asset type has its own lifecycle (e.g. the ESB Endpoint RXT has its own lifecycle). Then, with the G-Reg lifecycle transition, synapse configurations (e.g. endpoints) are created. Those become the runtime configurations of the ESB.

Share Registry space with both ESB and G-Reg

The embedded registry of every WSO2 product consists of three partitions: local, config and governance.

Local Partition : Used to store configuration and runtime data that is local to the server.
Configuration Partition : Used to store product-specific configurations. This partition can be shared across multiple instances of the same product.
Governance Partition : Used to store configuration and data that are shared across the whole platform. This partition typically includes services, service descriptions, endpoints and data sources
How the integration should work:
When sharing registry spaces between the ESB and G-Reg products, we share the governance partition only, via JDBC. When a G-Reg lifecycle transition happens on the ESB endpoint RXT, it creates the ESB synapse endpoint configuration and copies it into the relevant registry location using a Copy Executor. The ESB can then retrieve that endpoint synapse configuration from the shared registry when required.

Advantages:
  • Easy to configure
  • Reduced amount of custom code implementation

Disadvantages:
  • If servers are deployed across data centers, JDBC connections will be created between data centers (possibly over WAN or public networks).
  • With the number of environments, there will be many database mounts.
  • The ESB registry space will be exposed via G-Reg.

Use G-Reg to push artifacts into ESB node
How the integration should work:
In this pattern, G-Reg creates synapse endpoints and pushes them into the relevant ESB setup (e.g. Dev/QA/Prod) using Remote Registry operations. After G-Reg pushes the appropriate synapse configuration into the ESB, the APIs or services can be consumed.

[Figure: G-Reg push model]

Advantages:
  • Provides more flexibility from the G-Reg side to manage ESB assets
  • Multiple ESB environments can be plugged in on the go
  • ESB API/service invocation can be restricted until the G-Reg lifecycle operation completes

ESB pull artifact from the G-Reg

How the integration should work:

In this pattern, when a lifecycle transition happens, G-Reg creates synapse-level endpoints in the relevant registry location.

When an API or service invocation happens, the ESB first looks up the endpoint in its own registry. If it is not available, the ESB pulls the endpoint from G-Reg using Remote Registry operations. Here, the ESB-side endpoint lookup has to be implemented as a custom implementation.

[Figure: ESB pull model]

Advantages:
  • The user might be able to deploy the ESB API/service before the G-Reg lifecycle transition happens.

Disadvantages:
  • The first API/service call is delayed until the remote API call completes.
  • The first API/service call fails if the G-Reg lifecycle transition has not completed.
  • Less control compared to options 1 and 2.

Ushani BalasooriyaWHY WSO2 ESB?

I have been writing a lot of posts about WSO2 ESB. But have you ever thought about why you should use WSO2 ESB over its competitors? Have a look at Samisa's article.
The points below are taken from his article.

WSO2 advantages over competitors

  • Ability to easily integrate any component framework. Support of Java based extensions and multiple scripting options. There is no need to have WSO2 specific code to integrate anything with WSO2 ESB
  • Numerous built-in message mediators, solution templates and connectors to third-party cloud systems to help cut down redundant engineering efforts and enable significant component reuse
  • Freedom for architects and developers to pick and choose message formats, transports, and style of services they want to expose using the ESB
  • Component oriented architecture and cloud and container support enables you to deploy the ESB using a topology of your choice based on your needs in a secure, scalable and adaptive manner
  • The ready-made scripts and tools help with rapid deployments, ensuring the ability to go to market quickly with your solution using the ESB
  • Continuous innovation that helps build future proof solutions on top of the ESB
  • Rigorous and frequent product update cycles and state-of-the-art tooling support for managing ESB deployments with DevOps practices, using Docker descriptors and Puppet scripts
  • Proactive testing and tuning of performance and innovation around performance enhancements

Chanika GeeganageBenefits of WSO2 ESB

WSO2 ESB is a cloud-enabled, 100% open source integration solution: a standards-based messaging engine that provides the value of messaging without writing code. Instead of having your heterogeneous enterprise applications and systems, which use various standards and protocols, communicate point-to-point with each other, you can simply integrate them with WSO2 ESB, which handles transforming and routing the messages to the appropriate destinations.

It also comprises:
- data integration capabilities, eliminating the need for a separate data services server in your integration processes
- management of long-running, stateful business processes
- analytics capabilities for comprehensive monitoring
- message brokering capabilities that can be used for reliable messaging
- capabilities to run microservices for your integration flows

Beyond those key features, some benefits of WSO2 ESB are:
  • Enables communication among various heterogeneous applications and systems
  • 100% open source, lightweight, and high performance
  • Support for open standards such as REST, SOAP, WS-*
  • Support for domain-specific solutions (SAP, FIX, HL7)
  • Supports message format transformation
  • Supports message routing
  • Supports message enrichment
  • 160+ connectors (ready-made tools for connecting to public web APIs such as Salesforce, JIRA, Twitter, LDAP, Facebook and more)
  • Supports a wide range of integration scenarios, known as EIPs (Enterprise Integration Patterns)
  • A scalable and extensible architecture
  • Easy to configure and reuse, with tooling support via Developer Studio, an Eclipse-based tool for artifact design
  • Equipped with ESB Analytics for real-time monitoring

Find more on WSO2 ESB from:

Sashika WijesingheWhen you should consider using WSO2 ESB !!!

Over time, business operations and processes grow at a rapid rate, which requires organizations to focus more on integrating different applications and reusing services as much as possible for maintainability.

The WSO2 ESB (Enterprise Service Bus) will seamlessly integrate applications, services, and processes across platforms. Put simply, an ESB is a collection of enterprise architecture design patterns catered for through one single product.

Let's see when you should consider using WSO2 ESB for your business:

1. When you have a few applications/services working independently and now need to integrate them

2. When you want to deal with multiple message types and media types

3. When you want to connect and consume services using multiple communication protocols (ex: jms, websockets , FIX)

4. When you want to implement enterprise integration scenarios, such as routing messages to a suitable back-end or aggregating the responses coming from the back-ends

5. When you want to expose your applications as a service or API to other applications

6. When you want to augment the security of your applications

Likewise there are many more scenarios where WSO2 ESB is capable of catering to your integration requirements.

To get more information about WSO2 ESB please refer -

Himasha GurugePowerful capabilities of WSO2 ESB

WSO2 ESB is the one stop shop for your integration requirements.

You need to send a message of format1 to a back-end that accepts messages of format2? Worried about data format transformations? WSO2 ESB covers this for you with its data transformation capabilities.

You need to send different user requests to different back-ends? Worried about how to route these messages? WSO2 ESB covers this for you with its message routing capabilities.

Need to make sure that your service is not available to the public and is secured? WSO2 ESB has this covered with its service mediation capabilities.

This is just a glimpse of what WSO2 ESB has in store. How about data transportation and service hosting? Yes, these too are WSO2 ESB capabilities.

Check out the article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer at WSO2) and find out more!

Milinda PereraPowerful capabilities of WSO2 ESB

WSO2 enterprise service bus

WSO2 Enterprise Service Bus is one of the leading ESB solutions in the market: 100% free and open source, with commercial support available. It is a battle-tested Enterprise Service Bus catering for all enterprise integration needs. With its new release, we have taken the capabilities of WSO2 Enterprise Service Bus (WSO2 ESB) to a new level.

The WSO2 ESB is now more powerful than ever, catering for the full 360 degrees of enterprise integration requirements, from legacy systems to cutting-edge systems.

WSO2 ESB is like a Swiss army knife for system integration, and with the new release it's more powerful than ever.

In this post I would like to list some powerful capabilities of WSO2 ESB:

  1. WSO2 ESB now comes with
    • A built-in Data Services Server (DSS), exposing your data stores as services and APIs
    • A Business Process Server (BPS) to cater for business processes / workflows (BPEL, BPMN) and human interactions (WS-HumanTask) with the Business Process profile
    • A Message Broker (MB) catering for fully fledged messaging capabilities, including JMS 1.1 and JMS 2.0, with the Message Broker profile
    • Integration Analytics, providing message tracing and analytics support with the Analytics profile
  2. One of the best performing open source ESBs in the market.
  3. A mature product that supports all enterprise integration patterns.
  4. A complete feature set to cater for any integration need.
  5. Eclipse-based IDE support to quickly develop, debug, package and deploy your integration flows.
  6. Consultancy, support and services readily available from WSO2 and WSO2 partners worldwide.

For more interesting information on WSO2 ESB, read through this comprehensive article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer - WSO2 ) on What is WSO2 ESB.

This article explains about when you should consider using WSO2 ESB, what are the advantages of it and also about the powerful capabilities of WSO2 ESB.

Download and play with it now ... ENJOY !!


Senduran BalasubramaniyamWSO2 ESB - Powerful Capabilities

WSO2 Enterprise Service Bus is a lightweight, high performance, near-zero latency product, providing comprehensive support for several different technologies like SOAP, WS* and REST as well as domain-specific solutions and protocols like SAP, FIX and HL7. It goes above and beyond by being 100% compliant with enterprise integration patterns. It also has 160+ ready-made, easy-to-use connectors to seamlessly integrate between cloud service providers. WSO2 Enterprise Service Bus is 100% configuration driven, which means no code needs to be written. Its capabilities can be extended too with the many extension points to plug into.

In the IT world it is vital for heterogeneous systems to communicate with each other. WSO2 ESB helps you integrate services and applications in an easy, efficient and productive manner.
WSO2 ESB, a 100% open source enterprise service bus, helps us transform data seamlessly across different formats and transports.

WSO2 Enterprise Service Bus is the main integration engine of WSO2 Enterprise Integrator.

Following are some of the Powerful capabilities of WSO2 ESB

  • Service mediation
    • Help achieve separation of concerns with respect to business logic design and messaging
    • Shield services from message formats and transport protocols
    • Offload quality of service aspects such as security, reliability, caching from business logic
  • Message routing
    • Route, filter, enrich and re-sequence messages in a content aware manner or content unaware manner (without regard to the content) and using rules
  • Data transformation
    • Transform data across varying formats and media types to match data acceptance criteria of various services and applications
  • Data transportation
    • Support for various transport protocols based on data formats and data storage and destination characteristics including HTTP, HTTPS, JMS, VFS
  • Service hosting
    • It is feasible with WSO2 ESB to host services; however, this could become an anti-pattern when service mediation and service hosting are combined in a layered deployment intended to separate those two concerns
To learn more about What is WSO2 ESB, please check the article written by Samisa Abeysinghe (Chief Engineering and Delivery Officer at WSO2)

Pamod SylvesterWhat's Special About WSO2 ESB ??

I am a bit late to write this post. Better late than never :)

Why should you consider WSO2 ESB ?

The recently published article will unveil the answer to the question  What is WSO2 ESB?

WSO2 ESB is one of the most mature products in the WSO2 stack: it's scalable, it's fast, and it has all the features to support your integration needs. This, I believe, is evident and self-explanatory once you download it.

Sashika WijesingheEncrypting sensitive information in configuration files

Encrypting information 

I thought to start from the basics before digging into the target topic. So let's look at what "encrypting" is.

Encrypting information means converting it into another format that is hard to understand. As we all know, encrypting information is really useful for securing sensitive data.

In WSO2 products, there is an inbuilt 'Secure Vault' implementation to encrypt plain-text information in the configuration files for added security.

In this post I will not discuss the Secure Vault implementation in detail. You can refer to 'secure vault implementation' to get more insight about it.
In WSO2 products based on Carbon 4.4.0 or later versions, the 'Cipher Tool' feature is installed by default, so you can easily use it to encrypt sensitive information in the configuration files.

Lets move on to the main purpose of this blog.

We already know that we can use the Cipher Tool to encrypt the information in configuration files. But can we encrypt the sensitive information in properties files or .json files?

How to encrypt information when we can't use XPath notation?

Using the Cipher Tool, we can encrypt any information as long as we can specify the XPath location of the property correctly. So basically, if an XPath notation can be defined for a certain property, we can encrypt it using the Cipher Tool without much effort. Detailed steps to encrypt information based on an XPath can be found here.

But in properties files or .json files we cannot define an XPath. Now you might be wondering how we can encrypt the information in these files!

To overcome this, we can manually encrypt the sensitive information using the Cipher Tool. You can refer to the detailed steps provided here to manually encrypt sensitive information in properties and .json files.

However, I want to point out a very important fact: when you encrypt sensitive information in a properties or .json file, the product component that reads the encrypted property must have been written to call the Secure Vault to decrypt the value correctly.

Sashika WijesingheAllowing empty characters using regular expressions

This post will guide you through configuring the regular expressions to allow empty characters (spaces) in properties like user name and role name.

Validations for User Name, Role Name and Password are done using the regular expressions provided in <Product_Home>/repository/conf/user-mgt.xml file.

I will be taking the EMM product as the example. By default, empty characters are not allowed for role names in the management console. If you enter a role with an empty character (ex: Device Manager) you will get a message as in the image below.

 Follow below steps to allow empty characters for role name.

1. Open the <EMM_HOME>/repository/conf/user-mgt.xml file. Then change the <RolenameJavaRegEx> property and the <RolenameJavaScriptRegEx> property as given below
<Property name="RolenameJavaRegEx">[a-zA-Z0-9\s._-|//]{3,30}$</Property>

<Property name="RolenameJavaScriptRegEx">^\w+( \w+)*$</Property>

Note -
  • <RolenameJavaScriptRegEx> is used by the front-end component for role name validation
  • <RolenameJavaRegEx> is used for back-end validation

2. Then restart the server

Now you will be able to add role names with empty spaces (ex: Device Manager).
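As a quick way to see what the new front-end pattern accepts, the regular expression can be exercised with GNU grep. This is just a shell illustration of the pattern above, not a WSO2 tool:

```shell
# Exercise the role-name pattern from user-mgt.xml with GNU grep -E.
pattern='^\w+( \w+)*$'
echo "Device Manager" | grep -Eq "$pattern" && echo "single space: match"
echo "Device  Manager" | grep -Eq "$pattern" || echo "double space: no match"
```

The pattern accepts words separated by single spaces, so "Device Manager" matches, while doubled, leading, or trailing spaces do not.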

Sashika WijesingheInformation filtering using grep commands

While I was working on monitoring the long running test issues, I thought it would be useful to write a post on the usage of 'grep' commands in Linux.

In this article I will discuss a few real examples of using "grep" commands and how to execute grep commands from a shell script.

Search for a given string 

This command is used to search for a specific string in a given file.
grep "<Search String>" <File Name>

Ex: The below example searches for the string "HazelcastException" within the wso2carbon.log file.
grep "HazelcastException" wso2carbon.log 

Search for a given string and write the results to a text file

This command is used to search for a given string and write the search results to a text file.
grep "<Search String>" <File Name> > <Text File Name>

Ex: The below example searches for the string "HazelcastException" within the wso2carbon.log file and writes the search results to the "hazelcastexceptions.txt" file.
grep "HazelcastException" wso2carbon.log > hazelcastexceptions.txt

Execute grep commands as a shell script

In some situations it will be useful to execute grep commands as a shell script.
Ex: While I was monitoring the long running test for certain exceptions, I used to search all the target exceptions from wso2carbon.log files and write those to specific files for further reference.

Follow the below steps to run multiple search strings and write the results to text files using a shell script.

1) Create a file and add the grep commands to that file as given below and save it as a shell script. (Here I will name this file as "")

grep "HazelcastException" wso2carbon.log* > hazelcastexception.txt
grep "Hazelcast instance is not active" wso2carbon.log* > hazelcastnotactive.txt

2) Now add the shell script to the <Product_Home>/repository/logs folder

3) Execute the script file using the below command.
After you execute the shell script, it will grep all wso2carbon.log files for the given search strings and write the results to separate text files.
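Such a script could look like the sketch below. It is shown against a tiny demo log (the demo file name and its contents are made up for illustration) so it can be run anywhere; in a real deployment you would run the same grep lines inside the logs folder against the actual wso2carbon.log* files:

```shell
# Create a small demo log so the script is runnable anywhere (placeholder data).
printf 'INFO server started\nERROR HazelcastException: node lost\n' > wso2carbon.log.demo

# Grep each target exception across all wso2carbon.log* files,
# writing every result set to its own text file.
grep "HazelcastException" wso2carbon.log* > hazelcastexception.txt || true
grep "Hazelcast instance is not active" wso2carbon.log* > hazelcastnotactive.txt || true

wc -l < hazelcastexception.txt    # number of matching lines found
```

The `|| true` keeps the script moving when a pattern has no matches, since grep exits non-zero in that case.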

Sashika WijesingheDocker makes your life easy !!!

Most of the time we come across situations where we need to set up a cluster for WSO2 products. Within a product QA cycle it is a very common task. But as you all know, it consumes a considerable amount of time to set up the cluster and troubleshoot it.

Now, with Docker, we can set up a cluster within a few seconds, and it makes your life easy :)

So let me give you some basic knowledge on what "Docker" is.

What is Docker

In the simplest terms, Docker is a platform for packaging and running software in containers.

Install Docker  :

What is Docker Compose

Docker Compose is used to define several applications and, with one single command, initialize and run them in multiple containers.

Install Docker Compose :

For some of the WSO2 products, Docker Compose images already exist in a private repository.

The main purpose of this blog is to highlight some of the useful Docker commands you will use while working with Docker Compose images.

To explain some of these usages, I will be using the ESB 4.9.0 Docker Compose image.
You can clone the git repository where the Docker Compose setup for ESB 4.9.0 is available, and follow the instructions in the README to set up the ESB Docker cluster.

Start Docker container

docker-compose up

Build the changes and bring the Docker Compose services up

docker-compose up --build

Stop docker containers

docker-compose down 

Start Docker Compose in detached (daemon) mode

docker-compose up -d

List docker images

 docker images 

List running docker containers

 docker ps 

Login to an active container

docker exec -i -t <container_id> /bin/bash 

Delete/Kill existing containers

 docker rm -f $(docker ps -aq) 

View container logs 

 docker logs <container_id> 

Insert a delay between docker containers

Sample Scenario: When running an ESB cluster, we first want to ensure that the DB is up and running; therefore we can introduce a delay before starting the ESB nodes. To configure this, add the below property to the docker-compose.yml file
- SLEEP=50

Add additional host names

Sample Scenario: Let's assume you want to use a backend service hosted in an Application Server on another instance. The host name of the Application Server is "". Docker cannot resolve the host name unless you define it in the docker-compose.yml file as below.
- ""

Enable additional ports

Sample Scenario: Each of the ports used by the Docker Compose setup should be exposed through the docker-compose.yml file. If you are using an inbound HTTP endpoint with port 7373, this port should be exposed as below.
- "443:443"
- "80:80"
- "7373:7373"

Sashika WijesingheUse ZAP tool to intercept HTTP Traffic

ZAP Tool

Zed Attack Proxy (ZAP) is one of the most popular security tools used to find security vulnerabilities in applications.

This blog discusses how we can use the ZAP tool to intercept and modify HTTP and HTTPS traffic.

Intercepting the traffic using the ZAP tool

Before we start, let's download and install the ZAP tool.

1) Start the ZAP tool using /

2) Configure local proxy settings
 To configure the Local Proxy settings in the ZAP tool go to Tools -> Options -> Local Proxy and provide the port to listen.

3) Configure the browser
 Now open your preferred browser and set its proxy to the port configured above.

For example: If you are using the Firefox browser, the proxy can be configured by navigating to "Edit -> Preferences -> Advanced -> Settings -> Manual Proxy Configuration" and providing the same port configured in the ZAP proxy.

4) Recording the scenario

Open the website that you want to intercept using the browser and verify the site is listed in the site list. Now record the scenario that you want to intercept by executing the steps in your browser.

5) Intercepting the requests

Now you have the request response flow recorded in the ZAP tool. To view the request response information you have to select a request from the left side panel and get the information via the right side "Request" and "Response" tabs.

The next step is to add a break point to the request, so that it can be stopped and its content modified.

Adding a Break Point

Right click on the request that you want to add a break point to, and then select "Break" to add the break point.

After adding the break point, record the same scenario that you recorded above. You will notice that when the browser reaches the intercepted request, ZAP opens a new tab called 'Break'.

Use the "Break" tab to modify the request  headers and body. Then click the "Submit and step to next request or response" icon to submit the request.

Then ZAP will return the request to the server with the changes applied to it.

Sashika WijesingheHow to use nested UDTs with WSO2 DSS

WSO2 Data Services Server (DSS) is a platform for integrating data stores, creating composite data views, and hosting data services as REST-style web resources.

This blog guides you through the process of extracting data using a data service when nested User Defined Types (UDTs) are used in a function.

When a nested UDT (a UDT that uses standard data types and other UDTs in it) exists in an Oracle package, the package should be written so that it returns a single ref cursor, as DSS does not support nested UDTs out of the box.

Let's take the following Oracle package, which includes a nested UDT called 'dType4'. In this example I have used the Oracle DUAL table to represent the results of the multiple types included in 'dType4'.

Sample Oracle Package

create or replace TYPE dType1 IS Object (City VARCHAR2(100 CHAR) ,Country VARCHAR2(2000 CHAR));
create or replace TYPE dType2 IS TABLE OF VARCHAR2(1000);
create or replace TYPE dType3 IS TABLE OF dType1;
create or replace TYPE dType4 is Object(
Region VARCHAR2(50),
CountryDetails dType3,
Currency dType2);

create or replace PACKAGE myPackage IS
FUNCTION getData RETURN sys_refcursor;
end myPackage;
create or replace PACKAGE BODY myPackage as
FUNCTION getData RETURN sys_refcursor IS
tt dType4;
t3 dType3;
t1 dType1;
t11 dType1;
t2 dType2;
cur sys_refcursor;
BEGIN
t1 := dType1('Colombo', 'Sri Lanka');
t11 := dType1('Delhi', 'India');
t2 := dType2('Sri Lankan Rupee', 'Indian Rupee');
t3 := dType3(t1, t11);
tt := dType4('Asia continent', t3, t2);
open cur for
SELECT tt.Region, tt.CountryDetails, tt.Currency from dual;
return cur;
END getData;
end myPackage;

Let's see how we can access this Oracle package using the WSO2 Data Services Server.

Creating the Data Service

1. Download WSO2 Data Services Server
2. Start the server and go to the "Create Data Service" option
3. Create a data service using the given sample data source.

In this data service I have created an input mapping to get the results of the Oracle cursor using the 'ORACLE_REF_CURSOR' SQL type. The given output mapping is used to present the results returned by the Oracle package.

<data name="NestedUDT" transports="http https local">
   <config enableOData="false" id="oracleds">
      <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
      <property name="url">jdbc:oracle:thin:@XXXX</property>
      <property name="username">XXX</property>
      <property name="password">XXX</property>
   </config>
   <query id="qDetails" useConfig="oracleds">
      <sql>{call ?:=mypackage.getData()}</sql>
      <result element="MYDetailResponse" rowName="Details" useColumnNumbers="true">
         <element column="1" name="Region" xsdType="string"/>
         <element arrayName="myarray" column="2" name="CountryDetails" xsdType="string"/>
         <element column="3" name="Currency" xsdType="string"/>
      </result>
      <param name="cur" ordinal="1" sqlType="ORACLE_REF_CURSOR" type="OUT"/>
   </query>
   <resource method="GET" path="data">
      <call-query href="qDetails"/>
   </resource>
</data>

The response of the data service invocation is as follows:

<MYDetailResponse xmlns="">
      <Region>Asia continent</Region>
      <CountryDetails>{Colombo,Sri Lanka}</CountryDetails>
      <Currency>Sri Lankan RupeeIndian Rupee</Currency>
</MYDetailResponse>
Manuri PereraWSO2 ESB - A Quick Glance at the Capabilities

It's a big but connected world, with a huge number of entities communicating in different languages and over different protocols. In a service-oriented architecture there is a set of such entities/components providing and consuming different services.
For these heterogeneous entities to communicate, there needs to be someone in the middle who can speak with all of them, regardless of the languages they speak and the protocols they follow. Also, in order to deliver a useful service to a consumer, someone needs to orchestrate the services provided by the different entities. This someone had better be fast and able to handle concurrency pretty well.
Abstractly speaking, an Enterprise Service Bus (ESB) is that someone, and WSO2 ESB is the best open source option out there if you need one!

Following are the main functionalities WSO2 ESB provides [2]

1. Service mediation

2. Message routing

3. Data transformation

4. Data transportation

5. Service hosting

Even though the ESB can cover most of the integration use cases you might need to implement, there are many extension points you can use in case you are unable to implement your use case with the built-in capabilities.

You can download WSO2 ESB at [1] and play with it! [2] is a great article, and a must-read, that will quickly walk you through what WSO2 ESB has to offer!


Sashika WijesingheWorking with WSO2 Carbon Admin Services

WSO2 products are managed internally using defined SOAP web services known as admin services. This blog will describe how to call the admin services and perform operations without using the Management Console.

Note - I will be using WSO2 Enterprise Integrator to demonstrate this.

Let's look at how to access the admin services in WSO2 products. By default, the admin services are hidden from the user. To enable them:

1. Open <EI_HOME>/conf/carbon.xml and enable the admin service WSDLs as follows:

<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>
2. Now start the EI server using

./ -DosgiConsole

When the server has started, press 'Enter' and you will be directed to the OSGi console.

3. To list the available admin services, type 'listAdminServices' in the OSGi console. This will list the available admin services with the URLs used to access them.

Access Admin Service via SOAP UI

4. You can access any admin service via the service URL listed in the above step.

I will demonstrate how to access the functionality supported by the ApplicationAdmin service. This service supports functionality such as listing the available applications, getting application details, deleting applications, etc.

5. Start SOAP UI and create a SOAP project using the following WSDL.
6. If you want to list all the available applications in the EI server, open the SOAP request associated with listAllApplications and provide the HTTP basic authentication headers of the EI server (specify the user name and password of the EI server).

Similarly you can access any available admin service via SOAP UI with HTTP basic authentication headers.
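The same kind of call can also be scripted with curl instead of SOAP UI. The sketch below only builds and prints a SOAP envelope for listAllApplications; the host, port, credentials, body namespace and SOAPAction shown in the commented curl line are placeholder assumptions, so take the real values from the service's WSDL and your own setup:

```shell
# Build a SOAP 1.1 envelope for the listAllApplications operation.
# The body namespace below is an assumption; check the WSDL for the real one.
ENVELOPE='<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body><listAllApplications xmlns="http://mgt.application.carbon.wso2.org"/></soapenv:Body>
</soapenv:Envelope>'
echo "$ENVELOPE"

# Hypothetical invocation (uncomment and adjust host/port/credentials):
# curl -k -u admin:admin -H 'Content-Type: text/xml' \
#      -H 'SOAPAction: urn:listAllApplications' \
#      --data "$ENVELOPE" https://localhost:9443/services/ApplicationAdmin
```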

Reference -

Nandika JayawardanaBenefits of WSO2 ESB

WSO2 Enterprise Service Bus is a battle-tested enterprise service bus catering for all your enterprise integration needs. With its new release, we have taken the capabilities of WSO2 Enterprise Service Bus (ESB) to a new level.

Previously, we released several products to cater for enterprise integration needs: Data Services Server (DSS) for master data management, Business Process Server (BPS) for business processes / workflows and human interactions, Message Broker (MB) for enterprise messaging, and the ESB to provide the bus architecture interconnecting everything. However, what we identified is that, more often than not, a few of these products are needed together for a given integration use case, and the combination almost always includes the ESB. Hence, we have now packaged all of the product capabilities into a single distribution with profiles, with many enhancements to each profile.

Following are some of the key benefits of WSO2 ESB.

1. One of the best performing open source ESBs in the market.

2. A mature product that supports all enterprise integration patterns.

3. A complete feature set to cater for any integration need.

4. Eclipse-based IDE support to quickly develop / debug / package / deploy your integration flows.

5. Built-in message tracing and analytics support with the Analytics profile.

6. Data integration capabilities allowing you to expose your data stores as services and APIs.

7. The Message Broker profile provides fully fledged messaging capabilities, including JMS 1.1 and JMS 2.0.

8. The Business Process profile allows creating workflows and human interactions with BPMN, BPEL and WS-Human Tasks.

9. Consultancy, support and services are readily available from WSO2 and WSO2 partners worldwide.

Learn more about WSO2 ESB from the following article.

Lakmali BaminiwattaDynamic Endpoints in WSO2 API Manager

From WSO2 APIM 1.10.0 onwards, we have introduced a new feature to define dynamic endpoints through Synapse default endpoint support. In this blog article, I am going to show how we can create an API with dynamic endpoints in APIM.

Assume that you have a scenario where, depending on the request payload, the backend URL of the API differs. For instance, if the value of the "operation" element in the payload is "menu", you have to route the request to endpoint1; otherwise you need to route the request to endpoint2.

{
  "srvNum": "XXXX",
  "operation": "menu"
}

In APIM, dynamic endpoints are achieved through mediation extension sequences. For more information about mediation extensions, refer to this documentation.

For dynamic endpoints we have to set the "To" header to the endpoint address through a mediation In-flow sequence. So let's first create the sequence that sets the "To" header based on the payload. Create a file named dynamic_ep.xml with the below content.

<sequence xmlns="http://ws.apache.org/ns/synapse" name="dynamic_ep">
  <property expression="json-eval($.operation)" name="operation" />
  <filter regex="menu" source="$ctx:operation">
    <then>
      <header name="To" value="https://localhost:9448/am/sample/pizzashack/v1/api/menu" />
    </then>
    <else>
      <header name="To" value="https://localhost:9448/defaultep" />
    </else>
  </filter>
  <property expression="get-property('To')" name="ENDPOINT_ADDRESS" />
</sequence>

Supporting destination-based usage tracing for dynamic endpoints
Note that we also have to set the additional "ENDPOINT_ADDRESS" property to the "To" header value, which is required for populating the destination address for statistics (API Usage by Destination). So if you have statistics enabled in your APIM setup, you have to set this property with the endpoint value in order to see the destination address in the statistics views.

Now let's assign this sequence to the API. For that go to the "Implement" tab of the API creation wizard.

  • Select "Dynamic Endpoint" as the "Endpoint Type"
  • Upload dynamic_ep.xml to the "In Flow" under message mediation policies. 
  • Save the API

Now let's try out the API.

With Payload containing "menu"

Wire log showing request going to endpoint 1.

With Payload NOT containing "menu".

Wire log showing request going to endpoint 2.

This way you can write your own logic using mediation extensions and dynamic endpoints in APIM to route your requests to dynamic destinations. 

Lakmali BaminiwattaWSO2 API Manager- Customizing Store User Sign-Up

WSO2 API Manager allows onboarding new users to the API Store through a sign-up page. The default sign-up page has a set of mandatory and optional fields for the user to fill in. However, there can be cases where one needs to customize the available fields by modifying existing ones and/or adding new ones.

This can be easily achieved in WSO2 API Manager, since the fields are loaded dynamically from the user claim attributes. So this post explains how we can customize the default sign-up page.

By default the API Store sign-up page looks as below. Note that this blog post shows how to do this in APIM 2.1.0.

Let's say you want to add a new field called 'City' to Store Sign-up page. This post provides step by step instructions on how to achieve this.

1. Start API Manager 2.1.0 and go to Management Console (https://localhost:9443/carbon/)

2. Go to Claims -> Add -> Add Local Claim

3. Enter the below values for the new claim.

Claim URI :
Display Name : City
Description : City
Mapped Attribute : city
Supported by Default : select

Note that only claims with 'Supported by Default' set to true are displayed in the sign-up page. Therefore, when adding new claims, make sure to check the 'Supported by Default' checkbox.

If you need this claim to be a required field [a mandatory field in the sign-up page], make sure to check the 'Required' checkbox.

After entering the values, click 'Add'.

4. Now go to the API Store sign-up page and refresh. You should see the newly added field.

Modifying Existing Claims

Let's say you now want to make the City field 'required'. You also want to change the field display order.

1. For that, go to Claims -> List. In the displayed list of claims, click Edit on the "City" claim.

Listing Claims

Edit Claim

Now select the Required checkbox. I have also changed the display order of the City claim to 4.

Check Required and change display order

2. Now access the API Store sign-up page. You will see that the "City" field is re-ordered and marked as required.

Dinusha SenanayakaConfigure Single Sign On (SSO) for a Web App Using WSO2 Identity Cloud and Consume APIs Published in WSO2 API Cloud Using JWT Bearer Grant Type

WSO2 Cloud provides a comprehensive set of cloud solutions: Identity Cloud, API Cloud, Integration Cloud and Device Cloud. Identity Cloud provides security, while API Cloud provides an API management solution. (In the near future Identity Cloud is going to provide a full set of IAM solutions; at the moment (May 2017) it only supports Single Sign-On.) In real-world scenarios, application security and API security go hand in hand, and most of the time web apps need to consume secured APIs.

In this post, we are going to look at how we can configure security for a web app with the WSO2 Identity Cloud, where that application needs to consume OAuth-protected APIs published in the WSO2 API Cloud.

If you need to configure SSO for an application with the WSO2 Identity Cloud, you need to configure a service provider in the Identity Cloud representing your application. The documentation explains the service provider configuration options in detail. If you are configuring SSO for your own application (not a pre-defined app like Salesforce, AWS, etc.), there are two main options you can select from: "Agent based SSO" and "Proxy based SSO". The post explains what these two options are, and when to choose which, in detail.

Here, we are going to use the Proxy based SSO option and configure SSO for a Java web application. Once a user is authenticated to access the application, the Identity Cloud sends a signed JSON Web Token (JWT) to the backend application. This JWT token can be used with the JWT bearer grant type to get an access token from the API Cloud to consume the APIs published there.

Before that, what is the JWT bearer grant type?
The JWT bearer grant type provides a way for a client application to request an access token from an OAuth server using an existing proof of authentication, in the form of signed claims issued by a different Identity Provider. In our case, the Identity Cloud is the JWT token provider, while the API Cloud is the one that provides the OAuth access tokens.

Step 1: Configure Service Provider in Identity Cloud

i. Log in to the Identity Cloud admin portal :
ii. Add a new application (Note: select "Proxy" as the app type)

iii. Go to the user-portal of your tenant. (I'm using wso2org as the tenant.) This will list the application, and if you click on it, you can invoke it. Note that the application URL shown is not the real endpoint URL used to invoke the application. Because we used the "Proxy" option, the Identity Cloud acts as a proxy for this app and gives a proxy URL (also called the gateway URL).

You need to block direct app invocations using a firewall rule or an nginx rule, to make sure all users can access the application only through the Identity Gateway with the provided proxy URL. The following diagram explains what really happens there.

That's all we have to do to get SSO configured for your web application with the Identity Cloud using the proxy option. In summary, you log in to the Identity Cloud admin portal, register a new application (service provider) there by providing your web app's endpoint URL, and provide a new URL context from which the gateway URL is constructed. The gateway does the SAML authentication part on behalf of the application.

Step 2 : Use the JWT token sent by the Identity Cloud to the backend to get an access token from the API Cloud and invoke APIs

The backend web app needs to consume some APIs published in the API Cloud. But since the user authentication for the web app happened in the Identity Cloud, how can it get an access token from the API Cloud? We can use the JWT bearer grant type for that, since the Identity Cloud issues a JWT token after user authentication.

This JWT token should contain the API Cloud as an audience if it is to be consumed by the API Cloud.

i. Edit the service provider (application) that was registered in the Identity Cloud and add the API Cloud's key manager endpoint as an audience.

The service provider configuration UI in the Identity Cloud admin portal does not have an option to add audiences for proxy type apps (which should be fixed in the UI). Until then, we need to log in to the Carbon management console of the Identity Cloud and configure it.
NOTE : The Carbon management UIs of WSO2 Cloud are not exposed to everyone. You need to contact WSO2 Cloud support by filling the form to get Carbon management UI access.

In the Carbon UI, navigate to Main -> Service Providers -> List and click Edit on the service provider that you created. Then, under Inbound Authentication Configuration -> SAML -> Audience URLs, add "" as an audience and update the SP. Refer to the following image.

ii. Configure Identity Cloud as a trusted IdP in API Cloud.

The API Cloud should trust the Identity Cloud as an IdP if it is to issue an access token using a JWT token issued by the Identity Cloud. We need to log in to the Carbon UI of the API Cloud's Key Manager and configure the Identity Cloud as a trusted IdP.
NOTE : You need to contact WSO2 Cloud support by filling the form to get Carbon management UI access to the API Cloud's Key Manager.

Navigate to Main -> Identity Providers -> Add, give the IdP details, and save.

Identity Provider Name :

Identity Provider Public Certificate : You need to upload your tenant's public certificate here. You can get it by logging in to the admin portal of the Identity Cloud and clicking the "Download IdP Metadata" option on the application listing page. This metadata file contains the public certificate as one piece of metadata. You can copy the certificate into a separate file and upload it here.

Refer to the following image for adding the IdP.

The following images show how to download the tenant's public certificate from the Identity Cloud for uploading above.

The downloaded metadata file will look something similar to the following. Copy the certificate into a separate file and upload it.

We are done with all configurations.

Step 3 : How to read the JWT token value and use it to request an access token from the API Cloud

The JWT token is sent to the backend in the "X-JWT-Assertion" header. The backend application can read this header's value to get the JWT token.

The following image shows a sample JWT token printed by the backend application after reading the "X-JWT-Assertion" header.
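As a quick sanity check, you can decode the JWT payload yourself at the command line. The sketch below uses a hand-built stand-in token (the "sub" claim value is a made-up example, not a real Identity Cloud token) just to show the decoding step:

```shell
# Build a stand-in JWT (header.payload.signature) purely for illustration;
# in practice, paste the value read from the X-JWT-Assertion header instead.
HEADER=$(printf '{"alg":"RS256"}' | base64)
PAYLOAD=$(printf '{"sub":"admin@wso2org.com"}' | base64)
JWT="$HEADER.$PAYLOAD.dummy-signature"

# Extract the second dot-separated segment (the payload) and decode it.
# Note: real JWTs are base64url-encoded, so you may need to translate
# '-_' to '+/' and restore '=' padding before decoding.
DECODED=$(echo "$JWT" | cut -d '.' -f 2 | base64 -d)
echo "$DECODED"   # → {"sub":"admin@wso2org.com"}
```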

Then the backend application can use this JWT token to call the API Cloud token endpoint and get an access token using the JWT bearer grant type. Before that, you can copy this JWT token and test it using curl or some other REST client.

curl -i -X POST -k \
  -H 'Authorization: Basic <YOUR_Base64Encoded(ConsumerKey:ConsumerSecret)>' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=<YOUR_JWT_TOKEN_HERE>'
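The Basic credentials in the Authorization header are just the base64 encoding of <ConsumerKey>:<ConsumerSecret> from your API Cloud application. A quick sketch (the key and secret here are placeholders):

```shell
# Placeholder credentials; substitute the consumer key and secret of your
# API Cloud application. printf (not echo) avoids a trailing newline being
# encoded into the value.
CREDENTIALS=$(printf 'myConsumerKey:myConsumerSecret' | base64)
echo "$CREDENTIALS"
```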

This should give you an OAuth access token from the API Cloud.

That's it !

Reference to sample web application code:
This sample web app implements the whole scenario described in this post. Try it out too.

Denuwanthi De Silva[WSO2 IS 5.0.0]Password Reset via E-mail Confirmation

In this blog post I will show the password reset feature in IS 5.0.0 with email confirmation.

To achieve this you only have to configure 2 files.

  1. identity­
  2. axis2.xml


location: <IS_HOME>/repository/conf/security/identity­



Notification.Expire.Time=15

Here I have configured 15 minutes; you can give any time you like, in minutes.

Notification.Sending.Internally.Managed=true

Setting this to true ensures a confirmation code will be sent to the email address you specify. Setting it to false will generate a confirmation code without emailing it, so you can use your own application to call the API and get the confirmation code.

Captcha.Verification.Internally.Managed=false

Setting this to false ensures that you don't have to send a captcha answer when invoking the APIs.


location: <IS_HOME>/repository/conf/axis2/axis2.xml


<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
<parameter name="mail.smtp.from"></parameter>
<parameter name="mail.smtp.user"></parameter>
<parameter name="mail.smtp.password">abc92923</parameter>
<parameter name=""></parameter>
<parameter name="mail.smtp.port">587</parameter>
<parameter name="mail.smtp.starttls.enable">true</parameter>
<parameter name="mail.smtp.auth">true</parameter>
</transportSender>

This is an email address created in gmail with low security options.

sanjeewa malalgodaHow to run WSO2 API Manager with MSSQL using docker

First you need to install Docker on your local machine.

Then run the following to install the MSSQL Docker image on your local machine. You can follow this article [ ] for more information.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Password#123' -p 1433:1433 -d microsoft/mssql-server-linux

Then type the following and verify your instance is up and running:

docker ps

CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS                    NAMES
03ebf2498a20        microsoft/mssql-server-linux   "/bin/sh -c /opt/mssq"   23 minutes ago      Up 23 minutes>1433/tcp

Connect to the instance with sqlcmd, passing the container ID from the docker ps output above:

docker exec -it 03ebf2498a20 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Password#123

Download the sqljdbc4 Microsoft SQL JDBC driver file and copy it to the WSO2 product's /repository/components/lib/ directory.

Edit the default datasource configuration in the /repository/conf/datasources/master-datasources.xml file as shown below.


            <description>The datasource used for API Manager database</description>
            <definition type="RDBMS">
              <validationQuery>SELECT 1</validationQuery>
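For reference, a complete WSO2AM_DB datasource entry could look like the sketch below. This is a hedged example: the database name apimgtdb is an assumption (it must match a database you created in the MSSQL container first), and the credentials come from the docker run command above.

```xml
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- 'apimgtdb' is an assumed database name; create it first -->
            <url>jdbc:sqlserver://localhost:1433;databaseName=apimgtdb;SendStringParametersAsUnicode=false</url>
            <username>sa</username>
            <password>Password#123</password>
            <driverClassName></driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```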

Now start the server with the following command.
sh -Dsetup 

Malith JayasingheAchieving Better Performance Results via Remote JMeter Testing


In this blog, I will discuss how remote (distributed) JMeter testing can be used to achieve better performance results (under certain high-load scenarios).

When generating large workloads (e.g. using large numbers of concurrent users), it is possible for a single JMeter instance to become unstable, and this may affect the performance results it generates. In JMeter it is possible to control multiple remote JMeter engines (called JMeter servers) from a single JMeter client (see [1]). This allows us to replicate a performance test across multiple machines and generate larger loads.

Clustering JMeter

Let's first have a look at how to cluster JMeter so that one instance of JMeter can control the remote JMeter instances. There are multiple ways to cluster JMeter; one of them is discussed below.

Starting the slaves (server instances)

The slave instances are the nodes where the actual performance tests run. These nodes are also called server instances. To start the servers, type the following command in each server instance's terminal

./jmeter-server -Djava.rmi.server.hostname=ipaddress

Note that ipaddress is the IP address of the slave node.

The following line will appear on the terminal

Starting the Client (master)

The JMeter client instance can control remote JMeter (server) instances and collect the data from them. To start the JMeter client, type the following in the client instance's terminal

./ -n -t perf_test.jmx -R serverip1,serverip2,serverip3 -l perf_test.jtl

serverip1 is the IP address of JMeter server 1,

serverip2 is the IP address of JMeter server 2, and so on.

perf_test.jmx is the script which will be run on each JMeter server instance.

If the script has parameters, use -G to set them (instead of -J, which is used for non-remote testing). For example, the following command sets the Concurrency parameter to 50:

./ -GConcurrency=50 -n -t perf_test.jmx -R serverip1,serverip2 -l perf_test.jtl

While the test is running, the test results will appear on the client's terminal. Note that the effective concurrency is 100 (50 * number of servers, where number of servers = 2).
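Because each remote server runs the full test plan, the total load is simply the per-server value multiplied by the number of server instances; as a quick sketch:

```shell
# Each remote JMeter server runs the full test plan, so the effective
# concurrency is the per-server thread count times the number of servers.
PER_SERVER=50
SERVERS=2
EFFECTIVE=$((PER_SERVER * SERVERS))
echo "$EFFECTIVE"   # → 100
```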

Other configurations

Increase JMeter server heap memory

To minimize the effect of JMeter GC on the performance results, increase the JMeter server heap memory if possible.

Case study: Netty performance test

The following table shows the performance results for a simple Netty server. The performance test was done using 2 JMeter (server) instances (with clustering) and, separately, a single JMeter instance. The test was run on EC2, and each JMeter instance and the Netty server ran on separate EC2 instances.

Following are the key observations:

There is a significant improvement in TPS and average latency when using the clustered setup (i.e. remote testing).

There is a significant reduction in the load average on the JMeter instances when using multiple JMeter instances.

There is an increase in the maximum latency when using remote testing (we are currently looking into this; one observation was that if we re-use the connections, we can reduce this number significantly).


In this blog I have presented how to use multiple JMeter instances for performance testing. We noticed that remote JMeter testing can achieve better performance results. There is an increase in maximum latency when not re-using connections, and this needs to be investigated further.


ayantara JeyarajCreating a Simple Node.js project with MongoDB connectivity (Part 1)

For beginners who would love a short intro to why Node has become such a raving feature lately, let's start off by elucidating its basic features here.


As defined by the official site for Node.js, it is a JavaScript runtime built on Chrome's V8 JavaScript engine. It utilizes an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js packages are published to and retrieved from npm, which is the largest ecosystem of open source libraries in the world.
  • It's an open source server framework, so developers are free to view and modify the source as desired.
  • Totally free - Talk about not having to input all that messy credit card info to have access to the script.
  • No need to worry about platform prerequisites. Node has been proven to run on most of the renowned platforms, such as:
    • Windows
    • Linux
    • Unix
    • Mac OS X
    • etc.
  • As is well known, Node.js utilizes JavaScript on the server.
There's a vast arena of topics that can be discussed under Node.js, but let's pause here and move on to the actual installation of Node on your machine.

Installing Node.js

I've taken a simple example of creating a system called "studentSystem", which we'll develop to create, update, view and delete student records. The following installation of Node.js will be focused on this studentSystem directory, though within this blog's scope we'll focus on the installation of Node.js itself. You can find more information on how to format and create pages for your studentSystem in my next blog.

  • Download Node.js and simply execute the installer.
  • Create a folder into which you choose to store all your node project related files
    • I have created a folder named node inside my F:\NodeJS directory.
  • Install Express-Generator
    • This is an application generator tool which can be used to quickly create an application skeleton.
    • In order to install this application generator tool,
      1. cd into the node folder that you just created and execute the following command
        • F:\NodeJS\node>npm install -g express-generator
  • Create an Express project (here we will create an Express project called studentSystem).
    • F: \NodeJS\node>express studentSystem
  • Edit the dependencies in the package.json file.
    • If you check out the package.json file, most of the dependencies will have been already created for you. Now since we're trying to manipulate data using mongo db, we need to add this dependency to the package.json dependencies as well.
{
  "name": "studentsystem",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  "dependencies": {
    "body-parser": "~1.17.1",
    "cookie-parser": "~1.4.3",
    "debug": "~2.6.3",
    "express": "~4.15.2",
    "jade": "~1.11.0",
    "mongodb": "^2.2.25",
    "monk": "^4.0.0",
    "morgan": "~1.8.1",
    "serve-favicon": "~2.4.2"
  • Install dependencies
    • cd to the studentSystem directory and execute the following command.
      • F:\NodeJS\node\studentSystem>npm install
    • Create a new data folder inside the studentSystem directory using the mkdir command on the command prompt.
      • F:\NodeJS\node\studentSystem>mkdir data
    • Then type F:\NodeJS\node\studentSystem>npm start

  • Once this page is displayed, you can be assured that you have successfully installed Node. CONGRATULATIONS!
  • Now you can proceed onto creating your actual student System.

Chandana NapagodaManage Solr Data in WSO2 Server

Recently I was looking into an issue faced by one of my colleagues while automating a WSO2 API Manager deployment. There, once a new pack was deployed pointing to the existing databases, the APIM Store didn't show the existing APIs at once. It took some time to display all the existing APIs in the Store.

APIs are retrieved using Solr-based indexing in APIM. Therefore, the main reason for this behavior is that a fresh pack doesn't have the existing Solr data, and it takes some time to complete the indexing. Until that indexing process completes, the Store will not show the APIs instantly.

To address this, you can follow one of the below approaches:

1). Back up the existing Solr data (APIM_HOME/solr/data) from the existing deployment and add it to the newly created pack.

2). Externalize the Solr data directory. The Solr data storage location can be configured via a file located in the APIM_HOME/repository/conf/solr/registry-indexing directory, so you can update it to store the Solr data outside the product directory.
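Approach (1) above can be sketched as follows. The paths here are mock directories created only to make the copy illustratable; in a real migration, substitute your old and new APIM_HOME locations and stop the new server before copying.

```shell
# Mock APIM_HOME directories, for illustration only; replace with your
# real old and new APIM_HOME paths.
OLD_APIM=$(mktemp -d)
NEW_APIM=$(mktemp -d)
mkdir -p "$OLD_APIM/solr/data" "$NEW_APIM/solr"
echo "segment-data" > "$OLD_APIM/solr/data/segments_1"   # stand-in index file

# Copy the whole Solr data directory from the old deployment to the new pack.
cp -r "$OLD_APIM/solr/data" "$NEW_APIM/solr/"

ls "$NEW_APIM/solr/data"
```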

Denuwanthi De Silva[WSO2 APIM]Mediation Extensions

When adding mediation extensions to WSO2 APIM, it is recommended to upload them via the WSO2 APIM tooling (the Developer Studio Eclipse plugin).

The following documentation provides detailed steps on how to achieve that.

Geeth MunasingheWSO2IoT Architecture


WSO2 IoT is a comprehensive platform to manage your mobile and Internet of Things (IoT) devices. All its capabilities are exposed through industry-standard, Swagger-annotated REST APIs. It allows device manufacturers to create their own device types and enroll and manage them securely. It is designed to protect both the device and its data. It also provides big data capabilities to gather sensor data, visualize it in real time, and run batch analytics. You can extend its capabilities to securely manage mobile devices.

Following diagram shows an overview of the complete architecture.



WSO2 IoT first started as a complete platform to manage mobile devices and later evolved into a more complex system by introducing capabilities to manage both mobile devices and any other IoT device. It was first released with the ability to manage Android and iOS. Later, Windows mobile device management and application management were added.


The diagram below shows the in-depth architecture of WSO2 IoT. The difference between the above diagram and this one is that in the first diagram both the "APPS" layer and the "DEVICES" layer are represented by the "INTERACTION LAYER".
As shown in the diagram, the architecture can be broken into four layers.

  1. Connected Device Management Core
  2. API Layer
  3. Security and Integration Layer
  4. Interaction Layer

Connected Device Management Core (CDM)

This is the core of the IoT server. It manages and controls all the functionality WSO2 IoT offers, acting as the brain of the WSO2 IoT platform. It is designed with many extension points, so most of its functionality can be extended and customized. The CDM (Connected Device Management) core has evolved into a capable and mature platform, so any device type can be added without hassle.

As shown in the above diagram the CDM core consists of the following components.
  1. Device Management
  2. Device Type Management
  3. Policy Management
  4. Operation Management
  5. Application Management
  6. Configuration Management
  7. Certificate Management
  8. User management
  9. Push Notification Management
  10. Plugin Management
  11. Compliance Monitoring

Device Management

This part is responsible for enrolling/disenrolling devices with the server and managing device information, such as location, installed applications and static device details, as well as runtime information, such as device memory and its usage. Creating/removing device groups and assigning devices to groups also come under this component.

Device Type Management

This component provides the capability to add/remove new device types. It was newly added in WSO2 IoT Server 3.1.0. You can describe the device features and generate the device operations and device details.

Policy Management and Compliance Monitoring

Policies make sure that the devices belonging to a user comply with corporate rules and regulations. A policy maximizes control over the devices and reduces the risk to corporate data. If a device does not comply with a given policy, the server is notified of this, and corrective actions, such as re-enforcing the same policy, are taken. A policy may include restrictions, such as disabling the camera, and configurations, such as VPN or Wi-Fi settings.

Operation Management

WSO2 IoT Server controls the devices via operations. Any message, configuration, restriction, policy or command is sent to a device as an operation. These operations are either pushed to devices through notification mechanisms or the devices are polled at a configured time. Once the device executes the respective operation, the result is also sent to the server and it’s stored in the database.

Application Management

Application management is a major part of mobile device management. WSO2 IoT provides ways and means to add applications and upload apk/ipa files through the App Publisher console. Further, users who have enrolled their devices with WSO2 IoT Server can install applications from the App Store. The App Publisher and Store provide the capability to add public apps from the Google Play store or the Apple App Store. You can also install web clips on devices with WSO2 IoT Server. Enterprise enrollment, where an administrator installs an application on multiple devices, can be done via the App Store too.

Configuration Management

Configuration management is used to add the prerequisites and licensing agreements required to start enrolling a device. Especially before enrolling iOS devices, you need to configure prerequisites such as the plist file and APNS certificates. You also need to add the licence agreement that users must accept before completing the iOS device enrollment process.

Certificate Management

This component implements the Simple Certificate Enrollment Protocol (SCEP), which supports device enrollment via mutual SSL. With SCEP, you need to provide the SSL certificate for device authentication and authorization. During the enrollment process, the device generates a certificate and shares it with the server as a Certificate Signing Request (CSR).

User management

User management comes bundled with WSO2 IoT and facilitates the management and control of user accounts and roles at different levels.
The user store of WSO2 IoT can be configured to operate in any of the following modes.
  1. User store operates in read/write mode - In read/write mode, WSO2 IoT reads from and writes into the user store.
  2. User store operates in read-only mode - In read-only mode, WSO2 IoT guarantees that it does not modify any data in the user store. WSO2 IoT maintains roles and permissions in its database, but it can read users/roles from the configured user store.
User management in WSO2 IoT has the following features:
  1. The concept of single user store which is either external or internal.
  2. Ability to operate in read-only/read-write mode on your company's LDAP user stores.
  3. Ability to work with Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) in read/write mode.
  4. Roles can contain users from external user stores.
  5. Improved configuration capability for external user stores.
  6. Capability to read roles from LDAP/Active Directory user stores.
  7. Permission management for the WSO2 IoT console.

Push Notification Management

By default, WSO2 IoT Server ships with push notification providers for MQTT, XMPP, FCM, and APNS. As the IoT server is designed so that new push notification mechanisms can be introduced without hassle, any user can write their own push notification library as a Java extension and install it on WSO2 IoT Server.

Push notifications come into play when messages (operations) need to be sent to devices. As mentioned above, there are three ways operations are sent to devices.

  1. Polling
  2. Push notification with the message (operation)
  3. Push notification with the wake-up call

Currently WSO2 IoT Server supports the following push notification mechanisms:
  1. APNS
  2. GCM/FCM
  3. WNS
  4. MQTT
  5. XMPP
  6. HTTP


Polling

The device triggers a network call to a configured endpoint at a pre-set frequency. This network call is received by WSO2 IoT Server, and the list of pending operations is sent by the server to the device. Once the device receives those operations, it executes them and stores the results until the next network call is triggered. When that call is triggered, the device sends the results of the previous operation list in the same network call and gets the next set of pending operations. This is similar to a reverse HTTP call. The out-of-the-box Android agent works in this manner.

Push Notification with the message (Operation)

This mechanism sends the whole message (operation) to the device as soon as an operation is initiated. WSO2 IoT Server pushes the operation to a message broker, and the broker in turn sends the message to the device. Once the device executes the operation, it sends the response back to the message broker, which forwards the response to WSO2 IoT Server. The Android Sense device type works in this manner out of the box.

Push Notification with the wake-up call

This push notification mechanism is used when the respective message broker has constraints on the size of the message (operation) payload. When an operation is initiated, WSO2 IoT Server sends a wake-up message to the respective message broker, which then delivers this message to the device. The device then initiates a network call over HTTP to WSO2 IoT Server to retrieve the pending operations. Once the device receives the pending operations, it stores the execution results until the next wake-up call happens. When that wake-up call happens, the device sends the previous operation results in the same network call used to retrieve the next pending operations. APNS and GCM/FCM work in this manner. When an operation is initiated, WSO2 IoT Server calls APNS (if the device is iOS) to send the wake-up call to the device. Once the iOS device receives the wake-up, it calls the pre-configured endpoint of the IoT server to receive the next pending operations.

Plugin Management

WSO2 IoT Server has a pluggable architecture, which comes in handy when introducing new device types. It is not a must to add a plugin when a user needs to add a custom device type, if the device is simple. Most IoT devices are supported without any external plugins being introduced to the IoT server. But if you have a device that requires more complex operations, then writing a Java plugin is the most suitable approach.

There are 3 ways new device types can be introduced to WSO2 IoT Server.

  1. Using the Device Type Service and APIs
  2. Device type descriptor
  3. Writing a java extension with given interfaces.

API Layer

Core-level device management functions and most device communication protocols are exposed as REST APIs in WSO2 IoT Server. All these APIs are equipped with industry-standard Swagger annotations, so stub or client generation can be done easily. Device-type-specific APIs are used because each device type communicates and is controlled in a different way. For example, Android uses a different message format to control the device than iOS does. iOS has its own protocol to communicate with the device, and its own message formats that the iOS device can understand. The WSO2 IoT Server operation core is implemented in a way that it does not need to know what type of message is being sent to a device, the format of the message, or the delivery mechanism. The device-type-specific APIs translate and interpret each operation into a form the devices can understand. Currently, WSO2 IoT Server supports simple device message protocols such as JSON, XML, SyncML (also known as Open Mobile Alliance Device Management (OMA DM) XML), and the profile protocol for iOS device management.


The above diagram shows the most common REST APIs in WSO2 IoT Server; they provide the services listed below. These APIs provide many more capabilities, but I have only listed the most commonly used ones.

  1. Device management API
    1. Operation Management
    2. Device groups CRUD operations
    3. Assigning  devices to groups
    4. Policy Management
    5. User management
    6. Device management
    7. Device information update and retrieval
    8. User management
  2. App Publisher API
    1. Adding/publishing an application
    2. Configure workflows for application publishing.
  3. App Store API
    1. Install applications to own devices
    2. Install applications to enterprise devices.
    3. Uninstall applications
  4. Android API
    1. Enroll android devices
    2. Adding operations
  5. iOS API
    1. Enroll iOS devices
    2. Add operations
  6. Windows API
    1. Enroll Windows devices
    2. Add operations

Security and Integration Layer

Security and integration play a very critical role in any IoT server. WSO2 IoT Server is designed to give security the highest priority. When WSO2 IoT Server is used in an enterprise, it stores user data and device data, and the devices themselves store very confidential information about the business and nature of the enterprise. Hence, protecting and securing the IoT server's access controls and associated devices is a main part of the business.

Furthermore, integrating the IoT server with other applications and user interfaces, and exposing its capabilities to the outside world securely, is a must. Therefore, integration also plays a critical role in WSO2 IoT Server.

WSO2 IoT Server provides the following security protocols to authenticate/authorize users and devices:
  1. Oauth
  2. Basic Auth
  3. Mutual SSL
  4. SCEP
  5. JWT
  6. Scopes

Two levels of authorization happen when initiating an operation. First, the server checks whether the user is authorized to access the given API. Second, it checks whether the user is authorized to perform an operation on the given device. If the user is the owner of the device, is a device administrator, or the device is shared with the user via a device group, the user can perform the operation on the device.


Integration happens via two components: the API gateway and the message broker. The API Gateway handles HTTP requests and responses, while the message broker handles MQTT messages between the devices and the server. As the IoT Server supports plugins, you can plug in an XMPP server or a CoAP server to communicate with the devices via new channels if needed.

As all of the admin services are implemented as REST APIs, they are securely exposed for integration through the API Gateway. The IoT Server also exposes the API Store functionality as REST APIs. A user can find all the available APIs just by calling the API Store endpoints.

Interaction Layer

This layer can be broken into two segments depending on how they interact with the server. As depicted in the image shown below, both of these segments are represented in the same layer, but as they interact with the server differently, it is better to look at them individually.
  1. Device to server interaction
  2. Apps and External System to Server  Interaction.


Device to Server Interaction

This governs how the device connects with the IoT platform. Devices may use different communication protocols to send and receive data and use different messaging formats to communicate.

As most device types support over-HTTP communication, MQTT and XMPP communication protocols are also included in the out-of-the-box IoT server. APNS for iOS, GCM/FCM for Android and WNS for Windows are included as they are required for mobile device communications. All over-HTTP calls are securely handled via the API gateway.

Applications / External System to Server Interaction

As WSO2 IoT Server exposes all of its functionality, integration with external systems and applications is straightforward. The securely exposed APIs can be accessed from any device, so system applications can be extended. Not only are the APIs secured; the platform also provides different authentication mechanisms such as SSO, SAML, and XACML when connected with WSO2 Identity Server.
At the moment WSO2 IoT Server consists of four main UI consoles:

  1. Device Management Console
  2. Application Publisher Console
  3. Application Store Console
  4. API Store console (optional)


Analytics is a crucial part of device management. The WSO2 IoT platform has gone above and beyond to provide both realtime and batch analytics, and machine learning is one of the prominent capabilities of WSO2 IoT Server. The platform offers smart analytics that can detect anomalies and trigger immediate actions. It offers streaming analytics capabilities, complex event processing, and machine learning to help you understand events, map their impacts, identify patterns, and react within milliseconds in real time.

Realtime Analytics

WSO2 IoT Server provides the capability to process more than a million events per second. This helps you analyse device data in real time and make decisions on demand. Predictive analytics supports making the right business decisions about future events, while real-time analytics keeps dashboards refreshed instantly and can be used to detect anomalies and take corrective action immediately.

Batch Analytics

WSO2 IoT Server supports batch analysis of the collected data to show an aggregated and summarized view. The summarized data is also stored in an RDBMS database, so it can be consumed by other applications in the company ecosystem.

Edge Analytics

You can run the event processing capability on the device itself using WSO2 IoT Server. The complex event processing library, “Siddhi”, can be packed with any device agent.

Devices and SDKs

WSO2 IoT Server provides SDK support to build and connect your device with the WSO2 IoT platform. As all functionality is exposed through APIs, the SDK support makes it easier to create new agent applications that run on the device. All the basic functionality needed to connect the device with the server is supported in the SDK itself.

Message Flow

WSO2 IoT Server message flows can be separated into two parts.

  1. Device to server message flow.
  2. Apps to server message flow.

Device to server message flow

The device to server message flow is shown in the diagram below.
WSO2 IoT Server sends messages to the broker. APNS, FCM, GCM, and WNS work as virtual brokers, while MQTT and XMPP are actual brokers that exchange messages between the IoT server and the devices. Because of message-size constraints, APNS, FCM, GCM, and WNS are used only to notify and wake up the devices; actual payloads are not sent to devices through them. Once a device receives the wake-up call, it calls the IoT server to retrieve the actual payload.

Please note that API Gateway is part of the WSO2 IoT Platform.


Apps to server message flow

External/System applications connect to IoT server through the API gateway. Please note that API Gateway is part of the WSO2 IoT Platform.


Amalka SubasingheStop nginx decoding query parameters on proxy pass

When using nginx in front of several applications, incoming requests arrive at nginx with encoded query parameters, but they reach the backend application with the parameters decoded. Because of that, the backend application rejects the message, since it expects the encoded parameters.

This can be solved by stopping nginx from decoding the query parameters, using the following configuration.

location /t/ {
                proxy_set_header X-Real-IP $remote_addr;
#               proxy_next_upstream error timeout invalid_header http_500;
                # "backend" is a placeholder for your upstream; appending
                # $request_uri passes the URI through without decoding it
                proxy_pass http://backend$request_uri;
}

The variable used here, $request_uri, is the full original request URI (with arguments); passing it to proxy_pass keeps the URI unmodified.

Hariprasath ThanarajahCreate a scenario using WSO2 ESB connectors with WSO2 Developer Studio, then run and test it locally and in WSO2 Cloud

In this post we are going to create a scenario using WSO2 ESB connectors with WSO2 Developer Studio, then run and test the output (basically a CAR file) locally and in the WSO2 Integration Cloud.

The scenario is explained by the image below.

The above scenario is:

Add Salesforce leads to Google Sheets and send email alerts: when a new Salesforce lead is added, add it to the spreadsheet and send a Gmail alert about the lead creation to the relevant regional managers.

Here we use the Salesforce SOAP, Google Spreadsheet, and Gmail ESB connectors. I used the create method of the Salesforce SOAP connector to create the lead; from that method's response we can get the Id of the record, and using that Id we can retrieve the information about the lead. After that, we insert the needed fields of the record into a spreadsheet using the Google Spreadsheet connector's addRowsData method, and we can read the record back with the getCellData method. Finally, we build the message body from that information and send an alert to the manager about the lead creation using the Gmail connector.

This post has three important parts to understand:

  1. Step by step guide to create the CAR file of the scenario using WSO2 ESB Developer Studio.
  2. Test the scenario locally.
  3. Test the scenario in WSO2 Integration cloud.


In this scenario we are going to use WSO2 ESB connectors to integrate different enterprise systems (Salesforce, Google Spreadsheet, and Gmail) to complete the scenario.

To do that, we need to provide the credentials or access tokens required to access the third-party APIs.

First we need to get the parameters needed to configure the connectors to access the third-party APIs. The documentation describing these parameters can be found for Salesforce, Gmail, and Google Spreadsheet.

The following is the way to get those details:

Configure the Connectors
Get AccessToken and RefreshToken for Gmail.

The details can be found in this post.

Get AccessToken and refreshToken for GoogleSpreadsheet

Creating a Client ID and Client Secret
      1. As the email sender, navigate to the URL and log in to your Google account.
      2. If you do not already have a project, create a new project and navigate to Create Credential -> OAuth client ID.
      3. At this point, if the consent screen name is not provided, you will be prompted to do so.
      4. Select the Web Application option and create a client. Provide  as the redirect URL under Authorized redirect URIs and click on Create. The client ID and client secret will then be displayed.
      5. See Google Spreadsheet API documentation for details on creating the Client ID and Client Secret.
      6. Click on the Library on the side menu, and select Google Sheets API. Click enable.
Obtaining the Access Token and Refresh Token
Follow these steps to automatically refresh the expired token when connecting to Google API:
  1. Navigate to the URL and click on the gear wheel at the top right corner of the screen and select the option Use your own OAuth credentials. Provide the client ID and client secret you previously created and click on Close.
  2. Now under Step 1, select Google Sheets API v4 from the list of APIs and check all the scopes listed down and click on Authorize APIs. You will then be prompted to allow permission, click on Allow.
  3. In Step 2, click on Exchange authorization code for tokens to generate and display the access token and refresh token.

You also need to create an empty spreadsheet in Google Sheets and obtain the spreadsheet Id and the name of the spreadsheet.

Configure Salesforce.

For the Salesforce SOAP connector you need the username, password, and security token (which you can find in the email you received when you registered with Salesforce) to connect to Salesforce.

Create the above scenario using WSO2 ESB Tooling
  1. First you need to install the WSO2 ESB tooling into Eclipse, following the tooling installation instructions, to create ESB artifacts
  2. Open the Eclipse instance with WSO2 ESB Tooling installed


  1. Click Developer Studio -> Open Dashboard


  1. Create an ESB Solution Project, because here we need to create a proxy and work with connectors, and we also need a Composite Application Project to create a CAR file to deploy as a carbon app. Follow the tooling documentation to create the different artifacts using WSO2 ESB Tooling.


  1. Name it as sfSpreadSheetGmailTemplate -> Next -> Finish


  1. After that you can see that four different artifact projects are created. The first is the ESB Config project, where you create ESB artifacts such as proxies, APIs, and inbound endpoints. The second is the Composite Application Project, from which we create the CAR file to deploy in the ESB as a carbon app. The third is the Connector Exporter Project, from which we add the connectors to the project, because this scenario is mainly built with connectors.

  • Create the proxy

Right-click sfSpreadSheetGmailTemplate -> select Add or Remove Connector -> Add Connector -> Next -> You can download connectors directly from the WSO2 store or add them manually from a local file. For this scenario we need the Salesforce, Google Spreadsheet, and Gmail connectors.

Then right-click sfSpreadSheetGmailTemplate -> New -> Proxy Service -> Create A New Proxy -> Next -> give the name sfGmailSpreadsheet and select Custom Proxy as the Proxy Service Type -> Finish

After that you will see the view shown in the image below.


  • Switch to the source view, copy and paste the content below, and save

<?xml version="1.0" encoding="UTF-8"?>
<proxy name="sfGmailSpreadsheet" startOnLoad="true" transports="http https" xmlns="">
           <script language="js"><![CDATA[mc.setProperty("salesforcesoap.lead.firstname",java.lang.System.getenv("salesforcesoap_lead_firstname"));
           <payloadFactory media-type="xml">
                   <sfdc:sObjects type="Lead" xmlns:sfdc="sfdc">
                   <arg evaluator="xml" expression="$ctx:salesforcesoap.lead.firstname"/>
                   <arg evaluator="xml" expression="$ctx:salesforcesoap.lead.lastname"/>
                   <arg evaluator="xml" expression="$"/>
                   <arg evaluator="xml" expression="$"/>
               <sobjects xmlns:sfdc="sfdc">{//sfdc:sObjects}</sobjects>
           <payloadFactory media-type="xml">
                   <sfdc:sObject xmlns:sfdc="sfdc">
                   <arg evaluator="xml" expression="//ns:createResponse/ns:result/ns:id/text()" xmlns:ns=""/>
               <objectIDS xmlns:sfdc="sfdc">{//sfdc:sObject}</objectIDS>
           <property expression="//ns:retrieveResponse/ns:result/urn:Email/text()" name="" scope="default" type="STRING" xmlns:ns="" xmlns:urn=""/>
           <property expression="//ns:retrieveResponse/ns:result/urn:Id/text()" name="" scope="default" type="STRING" xmlns:ns="" xmlns:urn=""/>
           <property expression="//ns:retrieveResponse/ns:result/urn:FirstName/text()" name="salesforce.lead.firstname" scope="default" type="STRING" xmlns:ns="" xmlns:urn=""/>
           <property expression="//ns:retrieveResponse/ns:result/urn:LastName/text()" name="salesforce.lead.lastname" scope="default" type="STRING" xmlns:ns="" xmlns:urn=""/>
           <script language="js"><![CDATA[var mail = mc.getProperty("");
   var id = mc.getProperty("");
   var fname = mc.getProperty("salesforce.lead.firstname");
   var lname = mc.getProperty("salesforce.lead.lastname");                           
               var fields = "[[";                         
               if(mail !="" && mail !=null && id !="" && id !=null && fname !="" && fname !=null && lname !="" && lname !=null){
               fields = fields.concat('"' + mail + '","' + id + '","' + fname + '","' + lname + '"');
               fields = fields.concat("]]");
           <property expression="json-eval($.updates.updatedRange)" name="spreadsheet.lead.updatedRange" scope="default" type="STRING"/>
           <property expression="json-eval($.values[0][0])" name="spreadsheet.lead.mailId" scope="default" type="STRING"/>
           <property expression="json-eval($.values[0][1])" name="spreadsheet.lead.sfId" scope="default" type="STRING"/>
           <property expression="json-eval($.values[0][2])" name="spreadsheet.lead.firstName" scope="default" type="STRING"/>
           <property expression="json-eval($.values[0][3])" name="spreadsheet.lead.lastName" scope="default" type="STRING"/>
           <script language="js"><![CDATA[var mail = mc.getProperty("spreadsheet.lead.mailId");
   var id = mc.getProperty("spreadsheet.lead.sfId");
   var fname = mc.getProperty("spreadsheet.lead.firstName");
   var lname = mc.getProperty("spreadsheet.lead.lastName");                           
               var contentType = "The New Lead Registration detail is: "                         
               if(mail !="" && mail !=null && id !="" && id !=null && fname !="" && fname !=null && lname !="" && lname !=null){
               contentType = contentType.concat('"The Email Id is: ' + mail + '","The Salesforce Id is: ' + id + '","The Firstname is: ' + fname + '","The Lastname is: ' + lname + '"');
               <subject>New Lead Registration</subject>
               <contentType>text/html; charset=UTF-8</contentType>
           <property expression="$axis2:HTTP_SC" name="sendMailToManager"/>
        <filter regex="true"
                source="get-property('sendMailToManager') != 200">
              <log level="full"/>
              <property name="responseMessage" value="Unable to Send the Message to Manager "/>
              <log level="full"/>
              <property name="responseMessage" value="Send an alert successfully"/>
        <property name="messageType" scope="axis2" value="application/json"/>
        <payloadFactory media-type="json">
                   "process":"Send the mail alert",
              <arg evaluator="xml" expression="get-property('responseMessage')"/>

  • Add the connectors to the car(Carbon APP)

Right Click sfSpreadSheetGmailTemplateConnectorExporter project -> New -> Add/Remove Connectors -> Add Connector -> New -> click workspace -> Select needed connectors -> Ok -> Finish

  • Create the car file to deploy to ESB

Right click sfSpreadSheetGmailTemplateCompositeApplication -> Export Composite Application Project -> Specify the Export Destination -> Next -> Select all the artifacts to pack as a carbon app.


Click Finish.

So now you are able to create a CAR file. Next we test it locally or in the Integration Cloud.

Steps to invoke and verify the results
Run it locally.

  1. Download the scenario car file from here or you can use the car file which you created in the last step.
  2. Download the WSO2 ESB or EI from the official website
  3. You need to add the entry below in ESB_HOME/repository/conf/axis2/axis2.xml under the transportSender element to enable TLSv1.1 or greater, because Salesforce recently disabled TLSv1.0 support.

<parameter name="HttpsProtocols">TLSv1.1,TLSv1.2</parameter>

  1. Put the downloaded CAR file into ESB_HOME/repository/deployment/server/carbonapps
  2. You need to configure the credentials and dynamic values as environment variables. The values below should be present in the environment.

Open a terminal > sudo gedit /etc/environment and put the values as below,
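As a sketch, the /etc/environment entries could look like the following. Only salesforcesoap_lead_firstname and gmail_lead_managerEmailId appear elsewhere in this post; salesforcesoap_lead_lastname is inferred from the proxy source, and all values shown are placeholders you must replace with your own details:

```
salesforcesoap_lead_firstname="John"
salesforcesoap_lead_lastname="Doe"
gmail_lead_managerEmailId="manager@example.com"
```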


  1. After that, using a client application (for example, Postman), you can call the WSO2 proxy to run this scenario. You can find the proxy as sfGmailSpreadsheet.



If you did the above configuration correctly, you will get the response below in the terminal or in the client application's response UI.
Apart from that, you will get an email alert like the one below, sent to the address configured in the gmail_lead_managerEmailId environment variable.



Run and test in WSO2 Integration Cloud:


  1. Under Integration click WSO2 ESB Composite Application.


  1. Click Local File System under the Deploy an Artifact title and click Continue


  1. Give the application name sfSheetGmail and browse to the downloaded CAR file to upload it. After the upload completes, click Create. Then you will see the screen below


  1. After that, the CAR file is automatically deployed in the ESB.


  1. Here you need to set some values as environment variables. To do that, click the Environment Variables tab -> click Add Environment Variable -> add the key and the value, and save.


Alternatively, upload the values as a JSON file instead of adding individual environment variables.


After adding the environment variable values, redeploy the CAR.

  1. Click the link; there you can find the HTTP endpoint to run the scenario.



  1. Copy the highlighted endpoint above and call it via a client application such as Postman


  1. You will get the response below, and an alert will also be sent to the address set in the gmail_lead_managerEmailId variable.

That's it. You have successfully created a scenario and run it locally and in the WSO2 Integration Cloud.

Evanthika AmarasiriAnalysing data with Data Analytics Server

When we talk about WSO2 DAS, there are a few important things we need to focus on: event receivers, event streams, event stream definitions, and event stores.

Events are units of data received by WSO2 DAS through event receivers. Through these event receivers, WSO2 DAS accepts events from different transports in formats such as JSON, XML, and WSO2Event. There are many different event receivers available in WSO2 DAS, such as HTTP, SOAP, and WSO2Event receivers.

Event streams are sequences of events of a particular type, where the “type” is defined by an event stream definition.

An event stream definition is a kind of schema that describes the format in which events coming into the WSO2 DAS server should arrive. Each event stream definition has a name, a version and, most importantly, the types of the data it expects to be sent into WSO2 DAS as events.
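For illustration, an event stream definition in WSO2 DAS is declared as JSON along these lines. The stream name and attributes here are hypothetical examples, and the exact set of fields can vary by DAS version:

```json
{
  "name": "org.example.temperature.stream",
  "version": "1.0.0",
  "metaData": [
    { "name": "deviceId", "type": "STRING" }
  ],
  "payloadData": [
    { "name": "temperature", "type": "DOUBLE" },
    { "name": "timestamp", "type": "LONG" }
  ]
}
```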

Once an event is received by the event receiver, it is checked against the event stream definition and persisted to an event store. This happens through the Data Analytics Layer (DAL), where the events are stored in the event store (which can be a relational database) as a BLOB, a human-unreadable format. These events are then analyzed and processed, and the processed information is again stored in a processed store (this too can be a relational database) in BLOB format.
The analyzed data is decoded by the DAL and presented in a human-readable format through the Data Explorer of WSO2 DAS.

When it comes to IS Analytics, the analyzed data in the processed store is presented through the Analytics Dashboard available in WSO2 DAS, after the data is decoded by the DAL.

API Manager analytics, however, are visible from the Store/Publisher portals shipped with the API Manager product. The API-Manager-related events stored in the processed store cannot be read directly by the API Manager dashboards, as they are in an encoded format that only the DAL can turn back into human-readable information. Because of this, we have introduced a method called carbonJDBC, where the DAL converts the information in the processed store and stores it in an external relational database. The API Manager dashboards are then pointed at this database, and you will see the API Manager analytics accordingly.

Dimuthu De Lanerolle

Useful Git commands

Q: How can I merge a distinct pull request to my local git repo ?

   You can easily merge a desired pull request using the following command. If you are doing this merge for the first time, clone a fresh checkout of the master branch to your local machine and run this command from the console (the repository URL was omitted here, so substitute your own):
git pull <repo_url> +refs/pull/78/head

Q: How do we get the current repo location of my local git repo?

A: The command below shows the git repo location your local repo is pointing to.

git remote -v

Q: Can we change my current repo URL to a remote repo URL?

A: Yes. You can point to another repo URL as below (substitute the new repository URL).

git remote set-url origin <repo_url>

Q: What is the git command to clone directly from a non-master branch? (e.g. with two branches, master and release-1.9.0, how do we clone from the release-1.9.0 branch directly without switching to it after cloning master?)

A: Use the following git command.

git clone -b release-1.9.0 <repo_url>


Q : I need the build to go ahead no matter whether I get build failures. Can I do that with a Maven build?

A: Yes. Try building like this.

mvn clean install -fn 

Q : Can I directly clone a tag of a particular git branch ?

A : Yes. Imagine your tag is 4.3.0. The following command will let you directly clone the tag instead of the branch.

Syntax : git clone --branch <tag_name> <repo_url>

git clone --branch carbon-platform-integration-utils-4.3.0 <repo_url>

Q : How to see git remote URLs in more detail?

A : git remote show origin

Q: Creating  a new branch

git checkout -b NewBranchName
git push origin master
git checkout master
git branch      (the pointer * shows which branch you are currently on)
git push origin NewBranchName

    For More Info :

    Q : Getting the below error -

     ! [rejected]        HEAD -> v0.5.0 (already exists)
    error: failed to push some refs to ''
    hint: Updates were rejected because the tag already exists in the remote.

    A : Re-create the tag locally, delete the remote tag, and push again:

    git tag -f v0.5.0
    git push origin --delete v0.5.0    (deletes the remote tag on GitHub)
    git push origin v0.5.0

Denuwanthi De Silva[WSO2 Carbon]Custom Authentication for WSO2 Management Console

Almost all the WSO2 products are shipped with a management console.

You normally authenticate with this management console by typing a username and password in the login page.


But, that is not the only way you can authenticate with WSO2 management console.

WSO2 provides extension points to plugin custom authentication mechanisms to login to WSO2 management console.

Example usecase:

You have an identity provider which authenticates the users. The identity provider sends the authenticated username as a header to WSO2. In that case you don't want the user to log in again to the management console: since the user is already verified by the identity provider, you want the user to be logged in directly to the management console.


So, in the above case, the authentication mechanism you want to use with the management console is slightly different.

To cater to such custom authentication scenarios, WSO2 provides the capability to write custom authenticators.

You can write your custom authenticator, add it to the ‘dropins’ folder, and configure it in ‘authenticators.xml‘ located in the ‘<CARBON_HOME>/repository/conf/security‘ folder.

<Authenticator name="CustomAuthenticator" disabled="false">

The authenticators.xml file already has some custom authenticators defined in it.




You can enable an authenticator by changing its ‘disabled’ attribute to ‘false’.

In order to disable an authenticator, set disabled="true".

The <Priority> element defines the precedence of the authenticators: the higher the priority value, the higher the precedence.
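Putting the pieces together, an entry in authenticators.xml looks roughly like the following sketch. The element and attribute names follow the fragment shown above, while the authenticator name and the priority value are illustrative:

```xml
<!-- Illustrative entry; CustomAuthenticator and the priority 10 are example values -->
<Authenticator name="CustomAuthenticator" disabled="false">
    <Priority>10</Priority>
</Authenticator>
```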


Prakhash SivakumarXSS : Crash Course

The purpose of this post is to explain the different types of XSS and their prevention mechanisms through multiple self-paced exercises.


  • JDK 1.7+ and Maven

Lab Setup

Clone :
Go to the XSSSession folder and execute “mvn tomcat7:run” to start the web application.
Open up: http://localhost:8080/XSSSamples/

Example1 : Reflected

This example is to demonstrate the Reflected XSS attack

Reflected XSS occurs when untrusted user data is sent to the web application and immediately echoed back as untrusted content. As usual, the browser receives the code in the web server's response and renders it.

Clearly this type of XSS deals with the server side code.

open up http://localhost:8080/XSSSamples/reflected.jsp
Try with : <script>alert(1)</script>. You will observe the popup, and you will also see the request logged in the server console.

Example2 : Persistent

This example is to demonstrate the persistent XSS attack

Persistent XSS flaws are very similar to reflected XSS; however, rather than the malicious input being directly reflected into the response, it is stored within the web application.

Once this has occurred, it can be echoed anywhere in your application and might be available to all users.

open up http://localhost:8080/XSSSamples/Persitent.jsp
Try with : <script>alert(1)</script>.

Now check the stored data at the URL http://localhost:8080/XSSSamples/showDataStored.jsp; you will observe the popup whenever you visit this URL.

Example3 : domAttack

This example is to demonstrate the DOM XSS attack

This form of XSS attack lives only within the client-side code. Generally this vulnerability lives within the DOM environment and does not reach server-side code.

open up http://localhost:8080/XSSSamples/domAttack.jsp,
Try with the payload : <img src=""/>

You can observe an output similar to the one below, and if you analyze the page source of domAttackWrongPage.jsp, you can see that the DOM elements belonging to domAttackWrongPage.jsp were manipulated by this injection.


This challenge is to demonstrate identifying and exploiting XSS using different inputs

open up http://localhost:8080/XSSSamples/Unprotected.jsp, first try with <script>alert(1)</script>, and analyze the page source

From the page source, you can identify that another </script> element must be added to close the first <script> element already present in the JSP page. You can exploit with <script>alert(1)</script> only after closing that first <script> tag, so modify the payload by prepending </script>, as follows: </script><script>alert(1)</script>.

Now you will be able to exploit it.

Example5: WithInputValidation.

This example is to demonstrate that input validation alone will not help prevent XSS attacks; we need proper output escaping too

Here, in the input field available at the URL “http://localhost:8080/XSSSamples/WithInputValidation.jsp”, you cannot provide any input containing the characters < or >, because the field is validated against the pattern “^[^<>]+$”.

So the challenge is to send a POST request to the “validated” servlet, bypassing the input validation

To exploit this, you can host another web page (find it here) which is very similar to WithInputValidation.jsp (but without the pattern validation, and with “http://localhost:8080/XSSSamples/validated” as the action value).

So now if you submit the payload <script>alert(1)</script> to the newly hosted page, the request will be sent to http://localhost:8080/XSSSamples/validated, and you can observe the popup.
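The validation pattern from this exercise can be reproduced in a couple of lines. This is a standalone illustration, not the lab's actual code; it shows why the payload never passes the field's check, and therefore why submitting directly to the servlet sidesteps the validation entirely:

```java
import java.util.regex.Pattern;

// Reproduces the "^[^<>]+$" check from the WithInputValidation exercise:
// accept only non-empty input containing neither '<' nor '>'.
public class InputValidationDemo {
    static final Pattern SAFE = Pattern.compile("^[^<>]+$");

    static boolean isValid(String input) {
        return SAFE.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("hello"));                     // prints "true"
        System.out.println(isValid("<script>alert(1)</script>")); // prints "false"
    }
}
```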

Example6 and Example7: WithOutputEscaping and WithSPOutputEscaping

These examples are to demonstrate preventing XSS with output escaping.

In the WithOutputEscaping example we can observe that, when providing <script>alert(1)</script>, it gets printed on the “http://localhost:8080/XSSSamples/escaped” page as-is, yet it is not reflected as a script. This is due to the output escaping; by going through the page source, we can observe the behavior of the output escaping.

WithSPOutputEscaping demonstrates specific output escaping techniques applied for various languages.
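The core idea of HTML output escaping can be sketched as follows. This is a minimal hand-rolled illustration, not the lab's code; in production you should use a vetted library (for example, the OWASP Java Encoder) rather than writing your own:

```java
// A minimal sketch of HTML output escaping: replace the characters that let
// input break out of an HTML text node or attribute with character entities.
public class HtmlEscapeDemo {
    static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The payload from the exercises is rendered inert once escaped.
        System.out.println(escapeHtml("<script>alert(1)</script>"));
        // prints "&lt;script&gt;alert(1)&lt;/script&gt;"
    }
}
```

The browser renders the escaped string as visible text instead of executing it, which matches what the escaped page's source shows.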

Example8: WithContentSecurityPolicy

This example is to demonstrate the use of “Content-Security-Policy” header.

Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks[1]

In this example, no input validation or output escaping is done. If your browser is CSP compatible, you will not be able to do any script injection into the particular input field.

Most of the latest browsers are CSP compatible. However, it is not advisable to depend solely on CSP.



XSS : Crash Course was originally published in Blue Space on Medium.

Hariprasath ThanarajahWorking with WSO2 ESB Gmail Connector

First, we need to know what ESB connectors are.

ESB Connectors - Connectors allow you to interact with a third-party product's functionality and data from your ESB message flow. This allows ESB to connect with disparate cloud APIs, SaaS applications, and on-premise systems.

WSO2 has over 150 connectors in the store. In this blog post, I am going to explain how to configure the WSO2 Gmail connector.

The Gmail connector allows you to access the Gmail REST API through WSO2 ESB. Gmail is a free web-based email service provided by Google. Users may access Gmail as a secure webmail service, and the Gmail program automatically organizes successively related messages into a conversational thread.

These sections provide step-by-step instructions on how to set up your web services environment in Gmail and start using the connector.


  • WSO2 ESB
  • Gmail connector
  • Gmail account
Connect with Gmail API

  • After creating the project, go to the Credentials tab and select Create credentials -> OAuth client ID. Before that you must configure the consent screen: go to OAuth consent screen under Credentials and give a name in "Product name shown to users".

  • Select Web application -> Create, then give a name and a redirect URI (used to receive the code for the access token request to the Gmail API) -> Create

  • When you create the app you will get the Client ID and Client Secret.

  • After that,

copy the above URL (replacing the redirect_uri and client_id according to your app) into a browser and press Enter -> log in with your email -> Allow

After that, you will get a code with your redirect URI like,

  • On a successful response, you can exchange the code for an access token and a refresh token. To make this token request, send an HTTP POST request to the token endpoint, include the following parameters, and set the Content-Type to application/x-www-form-urlencoded

Replace the code, clientId, clientSecret, and redirectUri according to your app and send the request. From the response, you will get the accessToken and refreshToken.
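As a sketch, the token request looks like the following. Here oauth2.googleapis.com/token is Google's standard OAuth 2.0 token endpoint, and the angle-bracket values are placeholders for your own app's details:

```
POST /token HTTP/1.1
Host: oauth2.googleapis.com
Content-Type: application/x-www-form-urlencoded

code=<code>&client_id=<clientId>&client_secret=<clientSecret>&redirect_uri=<redirectUri>&grant_type=authorization_code
```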

Setup the WSO2 ESB with Gmail Connector

  • Download the ESB from here and start the server.
  • Download the Gmail connector from WSO2 Store.
  • Add and enable the connector via ESB Management Console.
  • Create a proxy service to send a mail via Gmail, and invoke the proxy with the following request.
Proxy Service 
<proxy xmlns=""
<property name="to" expression="json-eval($.to)"/>
<property name="subject" expression="json-eval($.subject)"/>
<property name="from" expression="json-eval($.from)"/>
<property name="messageBody" expression="json-eval($.messageBody)"/>
<property name="cc" expression="json-eval($.cc)"/>
<property name="bcc" expression="json-eval($.bcc)"/>
<property name="id" expression="json-eval($.id)"/>
<property name="threadId" expression="json-eval($.threadId)"/>
<property name="userId" expression="json-eval($.userId)"/>
<property name="refreshToken" expression="json-eval($.refreshToken)"/>
<property name="clientId" expression="json-eval($.clientId)"/>
<property name="clientSecret" expression="json-eval($.clientSecret)"/>
<property name="accessToken" expression="json-eval($.accessToken)"/>
<property name="apiUrl" expression="json-eval($.apiUrl)"/>
<property name="accessTokenRegistryPath" expression="json-eval($.accessTokenRegistryPath)"/>
<parameter name="serviceType">proxy</parameter>


  "clientId": "",
  "refreshToken": "1/3e68t0-PStjwMYDVR4zgUx8QxXkR51xKcWjubEIq5PI",
  "clientSecret": "qlC234235235mP0iw8s9i2",
  "userId": "",
  "messageBody":"Hi hariprasath",

There is no need to get an accessToken for every request; the connector itself obtains a new accessToken by refreshing with the refreshToken.

Lasindu CharithWSO2 ESB Redis Class Mediator

In this post I'm going to explain the use of a Redis Class Mediator with WSO2 ESB 5.0.0.

Redis[1] is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries.

Say you have a use case where you need to store and retrieve key-value pairs in a central registry store in an efficient manner. Redis is an efficient solution for storing and retrieving temporary data very fast. If your ESB mediation flow needs to store and retrieve data per message, you can engage the following Redis Class Mediator in your message flow.

I have used the Jedis [2] Java client to make the connections and perform operations on the Redis cluster. The Class mediator was developed as an OSGi component [3] instead of a regular jar, because we needed to bundle both the Jedis client and the Apache Commons Pool2 dependencies with the Class mediator jar for easy deployment. This post does not cover setting up Redis; you can read the Redis documentation for those steps.

Connections and Pooling

Jedis uses Apache Commons Pool for the connection pooling configuration [4]. When the connection pool is initialized, minIdle connections are created up front; for each Redis call the mediator tries to use one of the idle connections from the pool and, if none is available, creates new connections up to the configured maxTotal. Since the Jedis pool is referenced as a static object, every instance of the Redis Class Mediator reuses the same connection pool.

Some configuration parameters for connection pooling can be found in [5]. The code reads the Redis connection configuration values from the registry, but you can customize it using whichever method you prefer.
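The static-pool reuse described above can be sketched in plain Java (a hypothetical stand-in using only standard-library types, not the Jedis API): a single static reference is initialized lazily under synchronization, and every subsequent caller receives the same pool instance.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical illustration of the pattern the mediator uses for its Jedis pool:
// one static pool shared by every mediator instance, created only once.
public class SharedPool {

    private static BlockingQueue<StringBuilder> pool;

    // Mirrors getRedisPool(): synchronized lazy initialization of a static pool.
    public static synchronized BlockingQueue<StringBuilder> getPool(int minIdle) {
        if (pool != null) {
            return pool; // subsequent callers reuse the existing pool
        }
        pool = new ArrayBlockingQueue<>(minIdle);
        for (int i = 0; i < minIdle; i++) {
            pool.add(new StringBuilder()); // pre-create "connections"
        }
        return pool;
    }
}
```

Because the reference is static, every Class Mediator instance that calls getPool() shares the same pool, which is exactly why the Jedis pool in the mediator below is initialized only once per JVM.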


package com.test.wso2.redis;

/**
 * Constants for Redis Class Mediator
 */
public class RedisClassMediatorConstants {

// Redis Cluster Configuration values
protected static final String REDIS_SERVER_HOST = "redis.server.host";
protected static final String REDIS_SERVER_PORT = "redis.server.port";
protected static final String REDIS_SERVER_TIMEOUT = "redis.server.timeout";
protected static final String REDIS_SERVER_PASSWORD = "redis.server.password";
protected static final String REDIS_SERVER_DATABASE = "redis.server.database";
protected static final String REG_PREFIX = "conf:/test/";

// Redis Class mediator Properties
protected static final String REDIS_GET_KEY = "redis.key.get";
protected static final String REDIS_GET_VALUE = "redis.value.get";
protected static final String REDIS_SET_KEY = "redis.key.set";
protected static final String REDIS_SET_VALUE = "redis.value.set";
protected static final String REDIS_SET_VALUE_STATUS = "redis.set.status";
protected static final String REDIS_SET_TTL_VALUE = "redis.ttl.set";

// Redis Connection Factory Configurations
protected static final String REDIS_SERVER_MAX_TOTAL = "redis.server.maxTotal";
protected static final String REDIS_SERVER_MAX_IDLE = "redis.server.maxIdle";
protected static final String REDIS_SERVER_MIN_IDLE = "redis.server.minIdle";
protected static final String REDIS_SERVER_TEST_ON_BORROW = "redis.server.testOnBorrow";
protected static final String REDIS_SERVER_TEST_ON_RETURN = "redis.server.testOnReturn";
protected static final String REDIS_SERVER_TEST_WHILE_IDLE = "redis.server.testWhileIdle";
protected static final String REDIS_SERVER_MIN_EVICT_IDL_TIME = "redis.server.minEvictableIdleTimeMillis";
protected static final String REDIS_SERVER_TIME_BW_EVCT_RUNS = "redis.server.timeBetweenEvictionRunsMillis";
protected static final String REDIS_SERVER_NUM_TESTS_PER_EVCT_RUN = "redis.server.numTestsPerEvictionRun";
protected static final String REDIS_SERVER_BLOCK_WHEN_EXHAUSTED = "redis.server.blockWhenExhausted";
}



package com.test.wso2.redis;

import org.apache.axiom.om.impl.llom.OMTextImpl;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.config.Entry;
import org.apache.synapse.mediators.AbstractMediator;
import org.apache.synapse.registry.Registry;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

import java.util.Set;

/**
 * Redis Class Mediator
 * Supports GET / SET Operations
 */
public class RedisClassMediator extends AbstractMediator {

private static final Log log = LogFactory.getLog(RedisClassMediator.class);
private static JedisPool jedisPool;
private Registry registryInstance;

public boolean mediate(MessageContext messageContext) {

String redisHost;
int redisPort;
int redisTimeout;
String redisPassword;
int redisDatabase;
Jedis jedis = null;
boolean isGet = false;
boolean isSet = false;

try {
// Get Registry Object
registryInstance = messageContext.getConfiguration().getRegistry();

if (registryInstance != null) {
String redisHostString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_HOST);
if (StringUtils.isNotEmpty(redisHostString)) {
redisHost = redisHostString.trim();
} else {
log.error("Redis Hostname is not set in Registry configurations, hence skipping RedisClassMediator execution");
return false;
}

String redisPortString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_PORT);
if (StringUtils.isNotEmpty(redisPortString)) {
redisPort = Integer.parseInt(redisPortString.trim());
} else {
log.warn("Redis Port is not set in Registry configuration, hence using default port as 6379");
redisPort = 6379;
}

String redisTimeoutString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_TIMEOUT);
if (StringUtils.isNotEmpty(redisTimeoutString)) {
redisTimeout = Integer.parseInt(redisTimeoutString.trim());
} else {
log.warn("Redis timeout is not set in Registry configuration, hence using default timeout as 2000");
redisTimeout = 2000;
}

String redisPasswordString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_PASSWORD);
if (StringUtils.isNotEmpty(redisPasswordString)) {
redisPassword = redisPasswordString.trim();
} else {
log.error("Redis Password is not set in Registry configurations, hence skipping RedisClassMediator execution");
return false;
}

String redisDatabaseString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_DATABASE);
if (StringUtils.isNotEmpty(redisDatabaseString)) {
redisDatabase = Integer.parseInt(redisDatabaseString.trim());
} else {
log.warn("Redis database is not set in Registry configuration, hence using default database as 0");
redisDatabase = 0;
}

String redisGetKey = (String) messageContext.getProperty(RedisClassMediatorConstants.REDIS_GET_KEY);
if (StringUtils.isNotEmpty(redisGetKey)) {
isGet = true;
}
Set pros = messageContext.getPropertyKeySet();
// Remove any previously set attributes from the connector before executing again.
if (pros != null) {
pros.remove(RedisClassMediatorConstants.REDIS_GET_VALUE);
pros.remove(RedisClassMediatorConstants.REDIS_SET_VALUE_STATUS);
}
String redisSetKey = (String) messageContext.getProperty(RedisClassMediatorConstants.REDIS_SET_KEY);
String redisSetValue = (String) messageContext.getProperty(RedisClassMediatorConstants.REDIS_SET_VALUE);
if (StringUtils.isNotEmpty(redisSetKey) && StringUtils.isNotEmpty(redisSetValue)) {
isSet = true;
}

jedis = getRedisPool(redisHost, redisPort, redisTimeout, redisPassword, redisDatabase).getResource();
if (jedis != null) {
if (isGet) {
// Handle Redis GET Operations
String redisGetValue = jedis.get(redisGetKey);
if (StringUtils.isNotEmpty(redisGetValue)) {
messageContext.setProperty(RedisClassMediatorConstants.REDIS_GET_VALUE, redisGetValue);
if (log.isDebugEnabled()) {
log.debug(String.format("Get [Key] %s and Get [Value] %s for [messageId] %s", redisGetKey,
redisGetValue, messageContext.getMessageID()));
}
} else {
log.warn(String.format("A Valid value for [key] %s not found in Redis for [messageId] %s",
redisGetKey, messageContext.getMessageID()));
}
// Removing property after use
if (pros != null) {
pros.remove(RedisClassMediatorConstants.REDIS_GET_KEY);
}
} else if (isSet) {
// Handle Redis SET Operations
String status;
Object ttlValueObj = messageContext.getProperty(RedisClassMediatorConstants.REDIS_SET_TTL_VALUE);
if (ttlValueObj instanceof Integer) {
int redisSetTTLValue = (Integer) ttlValueObj;
status = jedis.setex(redisSetKey, redisSetTTLValue, redisSetValue);
} else {
status = jedis.set(redisSetKey, redisSetValue);
}
messageContext.setProperty(RedisClassMediatorConstants.REDIS_SET_VALUE_STATUS, status);
if (log.isDebugEnabled()) {
log.debug(String.format("Set [Key] %s and Set [Value] %s for [messageId] %s", redisSetKey,
redisSetValue, messageContext.getMessageID()));
}
// Removing properties after usage
if (pros != null) {
pros.remove(RedisClassMediatorConstants.REDIS_SET_KEY);
pros.remove(RedisClassMediatorConstants.REDIS_SET_VALUE);
if (ttlValueObj != null) {
pros.remove(RedisClassMediatorConstants.REDIS_SET_TTL_VALUE);
}
}
} else {
log.error("Cannot find required Redis GET or SET Properties, skipping Redis Mediator");
}
} else {
log.error("Unexpected error. Cannot initiate Jedis Resource. Check your jedis connection details.");
}
} else {
log.error("Cannot initiate Registry to read config values.");
}
} catch (Exception e) {
String error = "Error occurred while handling message in RedisClassMediator. " + e;
handleException(error, messageContext);
} finally {
if (jedis != null) {
jedis.close();
}
}
return true;
}

private synchronized JedisPool getRedisPool(String host, int port, int timeout, String password, int database) {
if (jedisPool != null) {
return jedisPool;
} else {
jedisPool = new JedisPool(buildPoolConfig(), host, port, timeout, password, database);
if (log.isDebugEnabled()) {
log.debug("Redis Connection Pool initialized");
}
return jedisPool;
}
}

// Build Jedis Connection Pool
private JedisPoolConfig buildPoolConfig() {
final JedisPoolConfig poolConfig = new JedisPoolConfig();

// Jedis Connection Pool Default Configuration Values
int maxTotal = 128;
int maxIdle = 128;
int minIdle = 16;
boolean testOnBorrow = true;
boolean testOnReturn = true;
boolean testWhileIdle = true;
long minEvictableIdleTimeMillis = 60000;
long timeBetweenEvictionRunsMillis = 30000;
int numTestsPerEvictionRun = 3;
boolean blockWhenExhausted = true;

// Read and override values from registry
String maxTotalString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_MAX_TOTAL);
String maxIdleString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_MAX_IDLE);
String minIdleString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_MIN_IDLE);
String testOnBorrowString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_TEST_ON_BORROW);
String testOnReturnString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_TEST_ON_RETURN);
String testWhileIdleString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_TEST_WHILE_IDLE);
String minEvictableIdleTimeMillisString = getRegistryResourceString(RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_MIN_EVICT_IDL_TIME);
String timeBetweenEvictionRunsMillisString = getRegistryResourceString(
RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_TIME_BW_EVCT_RUNS);
String numTestsPerEvictionRunString = getRegistryResourceString(RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_NUM_TESTS_PER_EVCT_RUN);
String blockWhenExhaustedString = getRegistryResourceString(RedisClassMediatorConstants.REG_PREFIX + RedisClassMediatorConstants.REDIS_SERVER_BLOCK_WHEN_EXHAUSTED);

if (StringUtils.isNotEmpty(maxTotalString)) {
maxTotal = Integer.parseInt(maxTotalString.trim());
}
if (StringUtils.isNotEmpty(maxIdleString)) {
maxIdle = Integer.parseInt(maxIdleString.trim());
}
if (StringUtils.isNotEmpty(minIdleString)) {
minIdle = Integer.parseInt(minIdleString.trim());
}
if (StringUtils.isNotEmpty(testOnBorrowString)) {
testOnBorrow = Boolean.parseBoolean(testOnBorrowString.trim());
}
if (StringUtils.isNotEmpty(testOnReturnString)) {
testOnReturn = Boolean.parseBoolean(testOnReturnString.trim());
}
if (StringUtils.isNotEmpty(testWhileIdleString)) {
testWhileIdle = Boolean.parseBoolean(testWhileIdleString.trim());
}
if (StringUtils.isNotEmpty(minEvictableIdleTimeMillisString)) {
minEvictableIdleTimeMillis = Long.parseLong(minEvictableIdleTimeMillisString.trim());
}
if (StringUtils.isNotEmpty(timeBetweenEvictionRunsMillisString)) {
timeBetweenEvictionRunsMillis = Long.parseLong(timeBetweenEvictionRunsMillisString.trim());
}
if (StringUtils.isNotEmpty(numTestsPerEvictionRunString)) {
numTestsPerEvictionRun = Integer.parseInt(numTestsPerEvictionRunString.trim());
}
if (StringUtils.isNotEmpty(blockWhenExhaustedString)) {
blockWhenExhausted = Boolean.parseBoolean(blockWhenExhaustedString.trim());
}

// Apply the resolved values to the pool configuration
poolConfig.setMaxTotal(maxTotal);
poolConfig.setMaxIdle(maxIdle);
poolConfig.setMinIdle(minIdle);
poolConfig.setTestOnBorrow(testOnBorrow);
poolConfig.setTestOnReturn(testOnReturn);
poolConfig.setTestWhileIdle(testWhileIdle);
poolConfig.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
poolConfig.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
poolConfig.setNumTestsPerEvictionRun(numTestsPerEvictionRun);
poolConfig.setBlockWhenExhausted(blockWhenExhausted);

return poolConfig;
}

// Read a registry resource file as a String
private String getRegistryResourceString(String registryPath) {
String registryResourceContent = null;
Object obj = registryInstance.getResource(new Entry(registryPath), null);
if (obj instanceof OMTextImpl) {
registryResourceContent = ((OMTextImpl) obj).getText();
}
return registryResourceContent;
}
}


You need to add the following dependencies to the pom.xml file of your Class mediator project:
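The prose above names the two dependencies to bundle; as a sketch, the dependency section might look like the following (the version numbers are illustrative assumptions; use the versions matching your environment):

```xml
<dependencies>
    <!-- Jedis Java client for Redis -->
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>2.9.0</version>
    </dependency>
    <!-- Apache Commons Pool2, used by Jedis for connection pooling -->
    <dependency>
        <groupId>org.apache.commons</groupId>
        <artifactId>commons-pool2</artifactId>
        <version>2.4.2</version>
    </dependency>
</dependencies>
```

Since the mediator is deployed as an OSGi bundle, these dependencies also need to be embedded into the bundle (for example via the maven-bundle-plugin's Embed-Dependency instruction).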


Deploy in ESB

  • Build and copy test-redis-mediator-1.0.0-SNAPSHOT.jar to <ESB_HOME>/repository/components/dropins. (The mediator must be deployed to dropins because it is an OSGi bundle.)
  • Update the log4j.properties file to enable DEBUG/INFO logs for the mediator.
  • Restart the server.
  • Create the following resources under /_system/config/test, using the 'Create text content' method in the admin console.

Resource | Mandatory | Default Value | Value

Usage of Redis Class Mediator in mediation Flow

GET a value from Redis

<property name="redis.key.get" value="wso2-redis-test-key"/>
<class name="com.test.wso2.redis.RedisClassMediator"/>
<log level="custom">
<property name="redis-get-value" expression="get-property('redis.value.get')"/>
Note: The retrieved value can be accessed via the Synapse property "redis.value.get", as in the example above.

SET a value in Redis

<property name="redis.key.set" value="wso2-redis-test-key"/>
<property name="redis.value.set" value="wso2-redis-test-value"/>
<property name="redis.ttl.set" value="3600"/>
<class name="com.test.wso2.redis.RedisClassMediator"/>
<log level="custom">
<property name="redis-set-value-status" expression="get-property('redis.set.status')"/>
Note: The SET status can be accessed via the Synapse property "redis.set.status", as in the example above.



Lasindu CharithWSO2 API Manager : Multi Data Center Deployment

Multi Data Center Active - Passive Deployment (Recommended)

The above architecture describes an Active-Passive multi-data-center deployment of API Manager 2.1.0. Both data centers run identical setups. DC1 has two all-in-one Active-Active API Manager instances, each running all the API Manager profiles: Publisher, Store, Key Manager, Gateway and Traffic Manager. The artifacts that need to be synchronized among the nodes are created from the Publisher portal (APIs) and the Admin portal (throttling policies). In the diagram above, one node acts as the master for deployment synchronization; the gateway URL and policy deployer URL of the second active node should point to the master node in the api-manager.xml configuration file.

The artifacts of the master node should be synchronized to the two other indicated nodes (the non-master active node in DC1 and the master node in DC2) using rsync in pull mode (pull mode is convenient when scaling the nodes within a DC). When the passive DC becomes active, one of its nodes should likewise act as the master, so the gateway URL and policy deployer URL of the other node should point to the DC2 master node in the same way. Rsync pull should be configured from the DC2 master to the other node accordingly.

Coming back to DC1 (the active data center), both API Manager instances should have write access to the databases, since either active instance can serve API/policy create/update requests at any given time. The database cluster in the active DC1 should be replicated to the slave cluster in DC2, so that when the passive-to-active switch from DC1 to DC2 happens in a failure scenario, all the artifacts are up to date in DC2.

Gateways in DC1 (the active DC) should publish statistics to all four Analytics nodes (in both DCs), so that the analytics databases need not be synchronized; the reason is that Analytics cannot work with only the data in the databases when failover happens.

Multi Data Center Active - Active Deployment with Geographical Load Balancing

The above solution architecture assumes:
  • Gateway traffic is routed to the two data centers depending on client geography.
  • There is an absolute requirement for an Active-Active multi-data-center deployment.
  • It guarantees cross-DC high availability for the Gateway component (API runtime). For within-DC high availability of the Store and Publisher, NFS needs to replace rsync.
  • Clustering MariaDB within a DC and replicating it across DCs are out of the scope of this document.

Both DC1 and DC2 run active (i.e., both data centers serve API traffic). DC1 has two active all-in-one API Manager instances, while DC2 has only two active instances running Gateway + Traffic Manager + Key Manager (they still have to be started with the default profile, but the load balancer will not route traffic to the Store, Publisher and Admin portals of DC2). All Publisher, Store and Admin portal requests are served only by the two active instances in DC1. Again, to facilitate two active-active instances in DC1, the gateway URL and policy deployer URL of the non-master node must point to those of the master. Rsync should be configured on all three APIM instances to pull from the master node.

Master-slave database cluster replication should be set up as in the Active-Passive case above. However, when replicating, the IDN_OAUTH2_ACCESS_TOKEN table of AM_DB should be omitted, so that token validation remains consistent between the two data centers. (There is no need to replicate the Analytics databases, as analytics is independent in the two DCs.)

The gateways in the two DCs should publish statistics to their two local Analytics nodes in a failover manner. In each DC the Traffic Manager works locally, and gateways connect to the TM in a failover manner (i.e., each node should point both to itself and to the other node's Traffic Manager when publishing traffic data).

Denuwanthi De Silva[WSO2]Enabling GC logs in WSO2 servers


I am using the WSO2 API Manager 2.1.0 distribution; the steps are similar for any WSO2 product.

  1. Open the startup script located in the ‘wso2am-2.1.0/bin’ folder.
  2. Add the following flags under ‘$JVM_MEM_OPTS’:


-XX:+HeapDumpOnOutOfMemoryError \
-XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
-XX:+PrintGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:"$CARBON_HOME/repository/logs/gc.log" \



3. A file named gc.log will then be created in the ‘wso2am-2.1.0/repository/logs’ folder, and the GC logs will be printed there.

Pushpalanka JayawardhanaSigning SOAP Messages - Generation of Enveloped XML Signatures

Digital signing is a widely used mechanism for making digital content authentic. By producing a digital signature for some content, we enable another party to validate that content, and the validation provides a guarantee that the content has not been altered after we signed it. In this sample I will share how to generate a signature for a SOAP envelope, though of course this is valid for signing any other content as well.

Here, I will:
  • sign the SOAP envelope itself
  • sign an attachment
  • place the signature inside the SOAP header
With the signature placed inside the SOAP header, which is itself covered by the signature, this becomes a demonstration of an enveloped signature.

I am using the Apache Santuario library for signing. The following is the code segment I used; I have shared the complete sample here to be downloaded.

public static void main(String unused[]) throws Exception {

        String keystoreType = "JKS";
        String keystoreFile = "src/main/resources/PushpalankaKeystore.jks";
        String keystorePass = "pushpalanka";
        String privateKeyAlias = "pushpalanka";
        String privateKeyPass = "pushpalanka";
        String certificateAlias = "pushpalanka";
        File signatureFile = new File("src/main/resources/signature.xml");
        Element element = null;
        String BaseURI = signatureFile.toURI().toURL().toString();
        //SOAP envelope to be signed
        File attachmentFile = new File("src/main/resources/sample.xml");

        //get the private key used to sign, from the keystore
        KeyStore ks = KeyStore.getInstance(keystoreType);
        FileInputStream fis = new FileInputStream(keystoreFile);
        ks.load(fis, keystorePass.toCharArray());
        PrivateKey privateKey =

                (PrivateKey) ks.getKey(privateKeyAlias, privateKeyPass.toCharArray());
        //create basic structure of signature
        DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
        dbFactory.setNamespaceAware(true);
        DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
        Document doc = dBuilder.parse(attachmentFile);
        XMLSignature sig =
                new XMLSignature(doc, BaseURI, XMLSignature.ALGO_ID_SIGNATURE_RSA_SHA1);

        //optional, but better
        element = doc.getDocumentElement();

            Transforms transforms = new Transforms(doc);
            transforms.addTransform(Transforms.TRANSFORM_ENVELOPED_SIGNATURE);
            //Sign the content of SOAP Envelope
            sig.addDocument("", transforms, Constants.ALGO_ID_DIGEST_SHA1);

            //Adding the attachment to be signed
            sig.addDocument("../resources/attachment.xml", transforms, Constants.ALGO_ID_DIGEST_SHA1);


        //Signing procedure: attach key info, append the signature to the document and sign
            X509Certificate cert =
                    (X509Certificate) ks.getCertificate(certificateAlias);
            sig.addKeyInfo(cert);
            sig.addKeyInfo(cert.getPublicKey());
            element.appendChild(sig.getElement());
            sig.sign(privateKey);

        //write signature to file
        FileOutputStream f = new FileOutputStream(signatureFile);
        XMLUtils.outputDOMc14nWithComments(doc, f);

First it reads in the private key to be used for signing; to create a key pair of your own, this post will be helpful. Then it creates the signature and adds the SOAP message and the attachment as the documents to be signed. Finally it performs the signing and writes the signed document to a file.
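As a side note, the same enveloped-signature idea can be sketched with the JDK's built-in javax.xml.crypto.dsig API instead of Santuario. The following is a minimal sketch using a throwaway key pair and a toy document standing in for the SOAP envelope (the class name and document content are illustrative assumptions, not part of the sample above):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Collections;
import javax.xml.crypto.dsig.*;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class EnvelopedSignatureDemo {

    public static Document sign() throws Exception {
        // Parse a minimal document standing in for the SOAP envelope
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new ByteArrayInputStream(
                "<Envelope><Body>hello</Body></Envelope>".getBytes(StandardCharsets.UTF_8)));

        // Generate a throwaway RSA key pair instead of loading one from a keystore
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
        // Reference URI "" means "the whole document"; the enveloped transform
        // excludes the Signature element itself from the digest
        Reference ref = fac.newReference("",
                fac.newDigestMethod(DigestMethod.SHA256, null),
                Collections.singletonList(
                        fac.newTransform(Transform.ENVELOPED, (TransformParameterSpec) null)),
                null, null);
        SignedInfo si = fac.newSignedInfo(
                fac.newCanonicalizationMethod(CanonicalizationMethod.INCLUSIVE,
                        (C14NMethodParameterSpec) null),
                fac.newSignatureMethod("http://www.w3.org/2001/04/xmldsig-more#rsa-sha256", null),
                Collections.singletonList(ref));

        // Sign and append the ds:Signature element to the document root
        DOMSignContext ctx = new DOMSignContext(kp.getPrivate(), doc.getDocumentElement());
        fac.newXMLSignature(si, null).sign(ctx);
        return doc;
    }

    public static void main(String[] args) throws Exception {
        Document signed = sign();
        // The document now contains exactly one ds:Signature element
        System.out.println(signed.getElementsByTagNameNS(
                XMLSignature.XMLNS, "Signature").getLength());
    }
}
```

The design point is the same as in the Santuario sample: the Reference with URI "" plus the enveloped transform is what makes the signature cover the whole document while excluding itself.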

The signed SOAP message looks as follows.

<soap:Envelope xmlns:dsig="" xmlns:pj=""
        <pj:MessageHeader pj:version="1.0" soap:mustUnderstand="1">
                <pj:PartyId pj:type="ABCDE">FUN</pj:PartyId>
                <pj:PartyId pj:type="ABCDE">PARTY</pj:PartyId>
            <pj:ConversationId>FUN PARTY FUN 59c64t0087fg3kfs000003n9</pj:ConversationId>
                <pj:MessageId>FUN 59c64t0087fg3kfs000003n9</pj:MessageId>
        <pj:Via pj:id="59c64t0087fg3ki6000003na" pj:syncReply="False" pj:version="1.0"
                soap:actor="" soap:mustUnderstand="1">
        <ds:Signature xmlns:ds="">
                <ds:SignatureMethod Algorithm=""></ds:SignatureMethod>
                <ds:Reference URI="">
                    <ds:DigestMethod Algorithm=""></ds:DigestMethod>
                <ds:Reference URI="../resources/attachment.xml">
                        <ds:Transform Algorithm=""></ds:Transform>
                    <ds:DigestMethod Algorithm=""></ds:DigestMethod>
            <ds:SignatureValue>d0hBQLIvZ4fwUZlrsDLDZojvwK2DVaznrvSoA/JTjnS7XZ5oMplN9  THX4xzZap3+WhXwI2xMr3GKO................x7u+PQz1UepcbKY3BsO8jB3dxWN6r+F4qTyWa+xwOFxqLj546WX35f8zT4GLdiJI5oiYeo1YPLFFqTrwg==
   <ds:X509Certificate>                MIIDjTCCAnWgAwIBAgIEeotzFjANBgkqhkiG9w0BAQsFADB3MQswCQYDVQQGEwJMSzEQMA4GA1UE...............qXfD/eY+XeIDyMQocRqTpcJIm8OneZ8vbMNQrxsRInxq+DsG+C92b
        <pr:GetPriceResponse xmlns:pr="">

In the next post we will see how to verify this signature, so that we can guarantee that signed documents have not been changed (in other words, that the integrity of the content is preserved).


Lakshman UdayakanthaPDF generation with Apache FOP

What is Apache FOP?
Apache FOP is a print formatter driven by XSL Formatting Objects (XSL-FO). It is a library that reads XSL-FO objects and renders documents in a specified output format; here I have used PDF as the output format.

What is XSL?

XSL is a language for expressing stylesheets. It describes how to display data in an XML file.

What is XSL FO?

XSL-FO is the part of XSL that serves as a markup language for XML document formatting. Follow the W3Schools tutorial for XSL-FO.

How does Apache FOP generate PDFs?

I have created a JavaFX form to enter the data; when I click the print button after filling in the data, a PDF is created in a folder called PDFs. The source code for this is available in:

Yasassri RatnayakeCustomizing the service URI of a Proxy Service in Enterprise Integrator

WSO2 Enterprise Integrator provides proxy services. By default, when you create a proxy service, the context of the service URL is generated from the proxy service name. For example, if you create a proxy service named "myproxy", the service URL will look like the below.


But what if you want to have a custom URI? For example:


Following is how you can do this.

Step 01 : Open your axis2.xml; it can be found at <EI_HOME>/conf/axis2/axis2.xml.

Step 02 : In axis2.xml, find the phaseOrder with type="InFlow" and add the following dispatcher to it:

<handler name="RequestURIBasedDispatcher"

The dispatcher should be inserted into the In-Flow at the Dispatch phase, and it should be the first handler in that phase.

Note : axis2.xml defines several message flows, namely InFlow, OutFlow and FaultFlow, so make sure you add the dispatcher to the correct one: the InFlow.

<phaseOrder type="InFlow">
<!-- System pre defined phases -->
<!--
The MsgInObservation phase is used to observe messages as soon as they are
received. In this phase, we could do some things such as SOAP message tracing & keeping
track of the time at which a particular message was received.

NOTE: This should be the very first phase in this flow
-->
<phase name="MsgInObservation">
<handler name="TraceMessageBuilderDispatchHandler"
<phase name="Validation"/>
<phase name="Transport">
<handler name="RequestURIBasedDispatcher"
<order phase="Transport"/>
<handler name="CarbonContextConfigurator"
<handler name="RelaySecuirtyMessageBuilderDispatchandler"
<handler name="SOAPActionBasedDispatcher"
<order phase="Transport"/>
<!--handler name="SMTPFaultHandler"
<order phase="Transport"/>
<phase name="Addressing">
<handler name="AddressingBasedDispatcher"
<order phase="Addressing"/>
<phase name="Security"/>
<phase name="PreDispatch">
<!--Uncomment following handler to enable logging in ESB log UI-->
<!--<handler name="TenantDomainSetter"-->
<phase name="Dispatch" class="org.apache.axis2.engine.DispatchPhase">
<handler name="CustomURIBasedDispatcher"
<handler name="RequestURIBasedDispatcher"
<handler name="SOAPActionBasedDispatcher"
<handler name="RequestURIOperationDispatcher"
<handler name="SOAPMessageBodyBasedDispatcher"

<handler name="HTTPLocationBasedDispatcher"
<handler name="MultitenantDispatcher"
<handler name="SynapseDispatcher"
<handler name="SynapseMustUnderstandHandler"
<!-- System pre defined phases -->
<phase name="RMPhase"/>
<phase name="OpPhase"/>
<phase name="AuthPhase"/>
<phase name="MUPhase"/>
<!-- After the Postdispatch phase, a module author or service author can add any phase they want -->
<phase name="OperationInPhase"/>

Step 03 : Now restart your server and create the following proxy service. Note the ServiceURI parameter.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns=""
<property name="=============TReached" value="=============="/>
<parameter name="ServiceURI">/services/myproxy/idservice/1.0</parameter>

That is it. Please drop a comment if you have any queries or need assistance.

Milinda PereraSetting up SMB file share in fresh Windows Server 2012-R2

Open "Server Manager" and Navigate to File and Storage Services > Volumes

In the Volumes view you can find the SHARES section; click "Start the Add Roles and Features Wizard" to launch the feature installation wizard.

Under Server Roles tab, select "File and iSCSI Services" as shown below and click Next.

Click Next to reach the Confirmation tab, which lists the server roles to be installed as follows. Click Install to install them.

Click Close to complete the installation.

Now, create the SMB share. Navigate to File and Storage Services > Volumes,
then click New Share under TASKS in the SHARES section to start the New Share Wizard.

Select the "SMB Share - Quick" as file share profile and click Next

Browse to the directory to be shared, or select the relevant volume, in the Share Location tab. Then click Next.

Provide Share Name and click Next

Keep default settings in Other settings tab, if not needed, and click Next to navigate to Permissions tab.
Click Customize Permissions 

In the Advanced Security Settings view, under the Share tab, you can find the permission entries. By default, it contains read-only rights for Everyone.
Here, I'm adding a permission entry to grant full control (RW) to the Administrator user. To do that, click Add to add a new permission entry.

 Click Select a principal in Permission Entry view

Provide the user name or group name in the "Enter the object name to select" text box. Here I'm providing Administrator (a user name); click OK.

Then select relevant permission level under Permissions section and click OK

The newly added permission entry will be listed as follows.

Click OK, then click Next to confirm the changes and complete the New Share Wizard.

After closing the wizard, the newly created SMB file share will be listed under the Share tab as follows.

Now you can access SMB share from a remote machine within the network with URI smb://[Host/IP]/[Share_Name]. eg: smb:// 



Pushpalanka JayawardhanaChallenges of Future IAM (concerned with Mergers , Acquisitions, Startups)

When companies bring in external users to work within enterprise activities via mergers, acquisitions, outsourcing, or end users coming in via social login, a problem arises from the variety of protocols each of these external parties may use for identity management. Most of the time these external parties will not agree to share their user base, with its sensitive user information, since it is a major asset of theirs. In this case identity federation, or cross-domain authentication, provides a solution. Identity federation protocols have evolved over time, mainly OpenID, SAML, WS-Federation and OpenID Connect, to address the requirement of federated authentication. Even though these protocols cater for it, as acquisitions and mergers grow in number the solutions still suffer from two major limitations, namely [1]:

Federation Silos

When there is a federation requirement, an organization chooses one of the available protocols as suitable for it and moves ahead with it. Any new system to be integrated is then preferred to support this protocol, as it will co-operate with the existing system. This leads to a federation anti-pattern: a silo of SAML federation, a silo of OpenID Connect federation, a silo of OpenID federation, or a silo of some other protocol. Later this makes it hard to bring in a system that does not support that protocol, and the system stays within the boundary of one particular protocol.

Spaghetti identity

This anti-pattern is observed in large-scale federation deployments. Considering one silo from the figure above, within an enterprise there may be many parties involved in any of these protocols as service providers and identity providers. Almost all of these protocols depend on trust relationships built among those parties in order for federated authentication to work. At large scale this means many point-to-point trust relationships need to be maintained, as shown below. This added complexity makes it an anti-pattern that needs to be eliminated.

Hence an integration mechanism is required between these service providers and identity providers. If this integration is focused on each single entity that the enterprise interacts with, it can end up with something similar to the figure below, which is not doing any better than the above.

                                       Integration with External Parties for Identity Management

As seen in the figure, if this approach is taken, the end result is a complex design that is costly to maintain. It means writing adapters for each new party that joins the enterprise system, which leads to several complications:
  • Adapters need to be written (possibly from scratch), which takes time and involves significant cost
  • As the number of adapters increases, the complexity of maintenance grows
  • Little reuse of available resources and of the effort put into writing adapters
  • No central location which can control the identities involved in the enterprise.
    Identity management includes several aspects such as authentication, authorization, claims handling and user provisioning. These have factors common to all the parties which could be reused among them. Also, taking authorization as an example, an enterprise usually has policies that need to be effective across the whole system. With the above design this is very complex, and there is no single location that can cater for monitoring or management requirements.

Pushpalanka Jayawardhana: Future of Identity and Access Management (IAM)

When a business needs rapid growth or a new technology integrated, partnering and acquisition strategies are commonly put forward. WhatsApp being acquired by Facebook and Skype being acquired by Microsoft are popular acquisitions by giants in the industry. According to the Wall Street Journal, 2015 was "the biggest year ever for mergers and acquisitions" globally [1]. Considered from the perspective of enterprise identity management, this means the rapid merging of external users into the current enterprise system. While this merge needs to happen rapidly in order to gain the competitive advantage, privacy and security aspects cannot be ignored. Quocirca, a primary research and analysis company, also confirms that "many businesses now have more external users than internal ones. Many organisations are putting in place advanced identity and access management tools to facilitate the administration and security issues raised by this." [2]

The impact of these mergers and acquisitions has been predicted by the reputed analyst firm Gartner as: "By 2020, 60% of digital identities interacting with the enterprise will come from external identity providers through a competitive marketplace – up from less than 10% today." [3] Quocirca further discusses this topic in relation to the BYOID concept, where users present these identities via an external identity provider that the enterprise trusts. These external identity providers might be using different protocols (legacy, proprietary, standards-based) to deal with identities. Hence integrating them with existing systems is a challenge, as a full replacement of those legacy systems is often difficult or even impossible, and this deals with a more sensitive part of enterprise security. As a solution for this foreseen rising requirement in the enterprise IAM arena, the industry is investigating several solutions. While some have been evaluating the possibility of using an ESB itself for the purpose, a new concept has also emerged, the EIB, which focuses specifically on identity mediation.

Apart from enterprises growing through mergers and acquisitions, consider a new enterprise: most of the time, users have had to register there by filling in a lengthy form. With the application of the BYOID concept, doors open to easily attract the whole user base of social identity providers. For example, a website that allows login via Google or Facebook has a potential user base as large as Google users plus Facebook users, compared to a website that only allows login for its own registered users. In order to achieve this kind of external identity provider integration, there needs to be a mechanism to securely confirm the user's identity and submit the decision in a mutually understandable way. This requires a transformation between the two parties, which can be identified as the main functionality of an EIB.

Given the above facts, it is evident that identity mediation is a requirement for enterprises in the days to come, due to the high rate of mergers and acquisitions and the possible competitive advantage of supporting login via a social identity. Also, with newly emerging technologies like IoT, many new protocols may be introduced to interact with identities, and current protocols might get new versions with several modifications. Time is a critical factor for enterprises when adopting new technologies: the faster they move, the more the benefits. The requirement arising in this situation is an identity mediation mechanism that can transform between identity protocols, similar to how ESBs transform messages between different transport protocols.

[1] - M. Farrell, "2015 Becomes the Biggest M&A Year Ever", WSJ, 2016. [Online]. Available: [Accessed: 24- Jan- 2016].

[2] -, "Identity, access management and the rise of bring your own identity |", 2013. [Online]. Available: [Accessed: 24- Jan- 2016].

[3] - D. Atkinson, "A Report From Inside the Gartner Identity and Access Management Summit", Top Identity & Access Management Software, Vendors, Products, Solutions, & Services, 2014. [Online]. Available: [Accessed: 24- Jan- 2016].

Hariprasath Thanarajah: Apache Kafka - How to start


Kafka is a distributed streaming platform with three key capabilities:
  •     It lets you publish and subscribe to streams of records. In this respect it is similar to a message queue or enterprise messaging system.
  •     It lets you store streams of records in a fault-tolerant way.
  •     It lets you process streams of records as they occur.
It is used for two broad classes of application:
  •     Building real-time streaming data pipelines that reliably get data between systems or applications
  •     Building real-time streaming applications that transform or react to the streams of data

The concepts

    Kafka is run as a cluster on one or more servers.
    The Kafka cluster stores streams of records in categories called topics.
    Each record consists of a key, a value, and a timestamp.

Kafka offers several core APIs. In this post we mostly look at the Producer and Consumer APIs.
  •     The Producer API allows an application to publish a stream of records to one or more Kafka topics.
  •     The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
More about the Kafka APIs and their functionality can be found on the official Kafka site.

Start with Kafka

Kafka ships with command line tools to get started.

  •     Download the Apache Kafka distribution from here
  •     Extract it and go to the Kafka home directory 
  •     Start the Zookeeper  
         bin/zookeeper-server-start.sh config/zookeeper.properties
  •     Start the Kafka server  
          bin/kafka-server-start.sh config/server.properties
  •     Create a topic,
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
  •     Run a producer and type some messages.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

         This is a message
         This is another message

  •     Run the consumer; the messages appear in the consumer terminal
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

         This is a message
         This is another message

If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.

Kafka Multi Broker Cluster Setup

So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).

We will create 3 Kafka brokers (broker0, broker1 and broker2) whose configurations are based on the default.

First we make a configuration file for each of the brokers:

The default broker0 configuration is the config/server.properties file.

Copy it for broker1 and broker2:
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties

Edit these new files and set the following properties:

config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dirs=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dirs=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine, and we want to keep the brokers from trying to register on the same port or overwrite each other's data.

Now we have the configuration for a 3-broker Kafka cluster. Start each Kafka server with the appropriate server properties file:

> bin/kafka-server-start.sh config/server.properties

> bin/kafka-server-start.sh config/server-1.properties

> bin/kafka-server-start.sh config/server-2.properties

Now create a new topic with a replication factor of three:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic myTest

Okay but now that we have a cluster how can we know which broker is doing what? To see that run the "describe topics" command:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic myTest

The output looks like this:

Topic:myTest PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: myTest  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0

Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic, there is only one line.
  •     "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
  •     "replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  •     "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
Note that in my example node 1 is the leader for the only partition of the topic.
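These fields can also be extracted programmatically, for instance when scripting a health check. Here is a minimal Python sketch; the parse_partition_line helper is illustrative (not part of the Kafka tooling) and assumes the line format shown above:

```python
import re

def parse_partition_line(line):
    """Parse one partition line of `kafka-topics.sh --describe` output
    into a dict with topic, partition, leader, replicas and isr fields."""
    pattern = (
        r"Topic:\s*(?P<topic>\S+)\s+Partition:\s*(?P<partition>\d+)\s+"
        r"Leader:\s*(?P<leader>\d+)\s+Replicas:\s*(?P<replicas>[\d,]+)\s+"
        r"Isr:\s*(?P<isr>[\d,]+)"
    )
    m = re.search(pattern, line)
    if m is None:
        raise ValueError("not a partition line: %r" % line)
    return {
        "topic": m.group("topic"),
        "partition": int(m.group("partition")),
        "leader": int(m.group("leader")),
        "replicas": [int(n) for n in m.group("replicas").split(",")],
        "isr": [int(n) for n in m.group("isr").split(",")],
    }

info = parse_partition_line(
    "    Topic: myTest  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0")
print(info["leader"])                                    # 1
print(sorted(set(info["replicas"]) - set(info["isr"])))  # [] -- all replicas in sync
```

The difference between the replicas set and the isr set is exactly the list of out-of-sync replicas, which is the interesting signal when monitoring a cluster.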

We can run the same command on the original topic we created to see where it is: 

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

The response:

Topic:test  PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: test Partition: 0    Leader: 0   Replicas: 0 Isr: 0

So there is no surprise there—the original topic has no replicas and is on server 0, the only server in our cluster when we created it.

Let's publish a few messages to our new topic:

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic myTest

test messages
test messages one
test messages two

Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:

> ps aux | grep server-1.properties

You can find the process number to kill as shown below; here the process number is 7564:

7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...

> kill -9 7564

Leadership has switched to one of the slaves and node 1 is no longer in the in-sync replica set:

Check the topic description again:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic myTest

The output is:

Topic:myTest   PartitionCount:1    ReplicationFactor:3 Configs:

    Topic: myTest  Partition: 0    Leader: 2   Replicas: 1,2,0 Isr: 2,0

But the messages are still available for consumption even though the leader that took the writes originally is down:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic myTest

Charini Nanayakkara: Calculating Latency and Throughput in WSO2 CEP

Prior to learning how to calculate latency and throughput, let's differentiate these two terms, which are often encountered in real-time event processing.

Latency: the time elapsed from the arrival of an event until an alert is generated from it. This reflects the time taken to process a single event.

Throughput: the number of events processed per second.

Latency could be calculated in WSO2 CEP as follows...

  1. With each event sent to stream x, associate a timestamp (tIn) in milliseconds, reflecting the time it was sent to CEP.
  2. In the final select query in the event flow, we can get the latency as time:timestampInMilliseconds() - tIn. Since the final select query marks the end of processing, this conveys the time taken to process a single event. The function time:timestampInMilliseconds() gives the current time in milliseconds.
Throughput could be calculated in WSO2 CEP as follows...
  1. Assume that of all the events sent to stream x, eFirst is the first event and eLast is the last. As before, associate a timestamp (tIn) in milliseconds with each event, reflecting the time it was sent to CEP.
  2. In the final select query in the event flow, associate a timestamp (tOut) using the function time:timestampInMilliseconds() (time:timestampInMilliseconds() as tOut).
  3. Count the total number of events sent through stream x (eventCount).
  4. The throughput can then be calculated as eventCount / ((tOut of eLast - tIn of eFirst) / 1000) events per second, since the timestamps are in milliseconds.
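As a sanity check, the latency and throughput arithmetic from the steps above can be sketched in a few lines of Python; the (tIn, tOut) timestamp values here are made up for illustration:

```python
# Hypothetical (tIn, tOut) timestamp pairs in milliseconds, one pair per
# event, as they would be recorded by the steps above.
events = [(1000, 1040), (1010, 1055), (1020, 1062), (1500, 1530)]

# Latency per event: completion time minus arrival time.
latencies = [t_out - t_in for t_in, t_out in events]
avg_latency_ms = sum(latencies) / len(latencies)

# Throughput: event count divided by the span from the first arrival
# (tIn of eFirst) to the last completion (tOut of eLast), in seconds.
t_in_first = events[0][0]
t_out_last = events[-1][1]
throughput_per_sec = len(events) / ((t_out_last - t_in_first) / 1000.0)

print(avg_latency_ms)                # 39.25
print(round(throughput_per_sec, 2))  # 7.55
```

Note the division by 1000: since tIn and tOut are in milliseconds, forgetting the conversion would report events per millisecond instead of events per second.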

Pushpalanka Jayawardhana: Worth of Bitcoins

Bitcoin is an interesting subject and has been getting a lot of hype recently.

If we look at the value of a Bitcoin over the years, in 2015 it was worth $250, and at the moment it is going beyond $2,500. This is capable of attracting more people towards it. We will follow up with more posts to understand Bitcoin, how to use it, and other useful information for anyone interested in moving forward with Bitcoin, which I think is the currency of the future.

The following chart, captured on 8th June 2017, shows the value of Bitcoin from its very start.

Source :

Pushpalanka Jayawardhana: [WSO2 Article] Frictionless Adoption of Payment Services Directive 2 (PSD2) with WSO2

The following webinar recording from WSO2 discusses in detail the security implications of PSD2, the available technical standards around its recommendations, and which WSO2 products are in line to cater for them.

Pushpalanka Jayawardhana: Adoption of PSD2

The European Union has mandated that payment service providers adopt the Payment Services Directive version 2 (PSD2) by 2018. The following slide deck discusses:

  • PSD2 background 
  • PSD2 effects on the business domain 
  • Security implications of the directive 
  • What technologies and standards are available to meet the requirements 
  • How WSO2 products can support PSD2 adoption

Pushpalanka Jayawardhana: Why Identity Mediation? And a Language?

As identified and predicted by several prominent analyst firms (Forrester, Gartner), acquisitions and mergers have been the frequent mechanism for enterprises to expand in the recent past, and will be in the years to come. With this expansion there is a rising need for enterprises to handle identity and access management across the merged enterprises in a secure way that is fast enough to gain the competitive advantage of the merged or acquired assets. With different enterprises using a variety of standards and protocols for identity and access management, catering for this requirement is extremely challenging given the time factor. A similar situation was addressed by the Enterprise Service Bus (ESB) concept a few years back, when requirements arose to mediate between different transport protocols and data formats for communication between disparate legacy and modern enterprise systems.

We are trying to apply the same concepts behind the ESB in the arena of identity and access management, to provide the foundation for an Enterprise Identity Bus (EIB). While the idea of the EIB has been discussed frequently in panels with the participation of industry giants, and the concept has existed for a while, there are limited implementations and little research around the subject. Hence, in order to design an elegant solution, we have to go down to the root level of mediation language implementations and possible approaches for the mediation engine implementation.

Observing how identity protocols have evolved, reached their glory days and then died off within a few years, the mediation engine needs to be very flexible in its configuration and extensibility, for which a Domain Specific Language (DSL) is to be defined. This decision was made after weighing its pros and cons and looking at the usage of mediation languages in ESBs.

This blog provides a platform to discuss and share important findings and thoughts towards the implementation of the IML (Identity Mediation Language) and IME (Identity Mediation Engine), together with an approach towards providing a robust solution for the requirement under consideration.

Chathura Dilan: Create a WSO2 ESB API that returns the request as the response

If you want to create a WSO2 ESB API that returns the same request as the response, use the following configuration:

<api xmlns="http://ws.apache.org/ns/synapse" name="someapi" context="/myapi">
   <resource methods="POST" url-mapping="/" protocol="http">
      <inSequence>
         <log level="full"/>
         <respond/>
      </inSequence>
   </resource>
</api>

Now send a POST request to http://localhost:8280/myapi with some XML data in the request body. It will return the same request data as the response.

Amalka Subasinghe: How to use the FHIR connector in WSO2 Integration Cloud

In WSO2 Integration Cloud, we provide WSO2 ESB as an app type, so you can configure the FHIR connector on WSO2 ESB.

At the moment we don't have a specific document on configuring the FHIR connector on WSO2 Integration Cloud, but we have included an example document [1] on how to configure a sample (Twitter) connector. This is a general guide a user can follow; it shows how to create a CAR file and import it into the Integration Cloud.

For information on configuring the FHIR connector itself, you can follow the document here [2].

Please note that if you want to add custom server certificates to the client truststore, or require any custom configurations, you need to create a custom Docker image and deploy it in WSO2 Integration Cloud [3].

Amalka Subasinghe: How to insert a Getting Started guide into my WSO2 API Store

Let's say I have published an API and I want to let my API Store users know how they can use it.

Currently this can be done by adding API documentation, so you will need to add the documentation to each API. Please refer to [1] for more information on the documentation types supported in the API Publisher.

If your requirement is to add a generic guide to the Store, unfortunately that is not possible at the moment.

Amalka Subasinghe: How to allow the WSO2 Cloud team to access your tenant

Sometimes you may need the WSO2 Cloud team to access your tenant, to investigate an issue, make some configuration changes on your behalf, etc.

This blog explains how you can allow the WSO2 Cloud team to access your tenant.

1. Go to the cloud organization management portal:

2. Tick the check box (Allow Access to WSO2 Support) in line with your tenant name

Later you can revoke access by clicking the check box again.

Amalka Subasinghe: How to remove a thumbnail from an API

Let's say you have created an API in API Cloud and added a thumbnail image to it. Now you want to remove it.

The edit API view allows you to change the thumbnail, but not remove it. Let's see how we can remove it.

1. Log in to the carbon console of the gateway node as the tenant admin

2. Go to Resources -> Browse under the Main menu

3. Go to "/_system/governance/apimgt/applicationdata/provider" 

4. Click on the relevant tenant - you will see the list of APIs

5. Select the relevant API - you will see the API artifact (e.g. api1 under version 1.0.0)

6. Click on "api" - you will see the list of metadata for that API

7. Remove the thumbnail value from the "Thumbnail" attribute

8. Save the API

9. Then log out from the API Publisher UI and log in using an incognito window; you will see the thumbnail has been removed from your API.

Amalka Subasinghe: How to start multiple services as a group in WSO2 Integration Cloud

Let's say we have a use case deployed in Integration Cloud that involves a number of applications: a PHP/web application that users interact with, an ESB that provides integration with a number of systems, and a DSS that manipulates the database.

Now say we want to start/stop these 3 applications as a group. At the moment Integration Cloud does not provide any grouping, so you have to log in to the Integration Cloud and go to each and every application to start/stop it.

To make this a little easier, we can use the Integration Cloud REST API and write our own script.

This is the script to start all the applications as a group. You need to provide a username, password, organization name, and a file which contains the application list with versions.

How to execute the script:
./ <username> <password> <organizationName> wso2Project.txt

The wso2Project.txt file content should look like this: provide the applicationName and version separated by the pipe character ( | ).
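Before calling the REST API, the script has to read and validate that file. A small Python sketch of just the parsing step (the helper name and sample application names are illustrative, and the actual REST calls are omitted since they depend on the API):

```python
def parse_project_file(lines):
    """Parse lines of the form 'applicationName|version', skipping blank
    lines, and return a list of (name, version) tuples."""
    apps = []
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        name, sep, version = line.partition("|")
        if not sep or not name.strip() or not version.strip():
            raise ValueError("expected 'applicationName|version', got: %r" % raw)
        apps.append((name.strip(), version.strip()))
    return apps

# Illustrative application names -- use the ones from your own tenant.
sample = ["myPhpApp|1.0.0", "myEsbProject|1.0.1", "", "myDssProject|2.0.0"]
for app, version in parse_project_file(sample):
    # The actual start/stop call to the Integration Cloud REST API
    # would go here, once per application.
    print("starting %s version %s" % (app, version))
```

Validating the file up front means a typo in one line fails fast, instead of starting half the group and then erroring out.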

As shown above, you can keep a number of project files and start them using the script.

Amalka Subasinghe: Add multiple database users with different privileges for the same database

Currently, WSO2 Integration Cloud supports adding multiple database users for the same database, but does not support changing user privileges.

Let's say someone needs to use the same database via two different users: one user with full access, while the other should have READ_ONLY access. How do we do this in Integration Cloud?
We are planning to add a feature to change user permissions, but until then you can do it as described below.


1. Log in and create a database with a user

2. Once you create a database you can see it as below, and you can add another user by clicking on the All Users icon

3. There you can create new user or you can attach existing user to the same database

I added two users u_mb_2NNq0tjT and test_2NNq0tjT to the database wso2mb_esbtenant1
My requirement is to give full access to the u_mb_2NNq0tjT user and remove the INSERT permission from the test_2NNq0tjT user.

4. Log in via the mysql client as user u_mb_2NNq0tjT and revoke the INSERT permission of test_2NNq0tjT

First, log in as test_2NNq0tjT and check the grants:
mysql -u  test_2NNq0tjT -pXXXXX -h

show grants
| Grants for test_2NNq0tjT@%                                                             |
| GRANT USAGE ON *.* TO 'test_2NNq0tjT'@'%' IDENTIFIED BY PASSWORD <secret>              |
| GRANT ALL PRIVILEGES ON `wso2mb_esbtenant1`.* TO 'test_2NNq0tjT'@'%' WITH GRANT OPTION |

Log in as u_mb_2NNq0tjT and revoke the INSERT permission:
mysql -u  u_mb_2NNq0tjT -pXXXXX -h

REVOKE INSERT ON wso2mb_esbtenant1.* FROM 'test_2NNq0tjT'@'%';

Log in again as test_2NNq0tjT and check the grants:
mysql -u  test_2NNq0tjT -pXXXXX -h

show grants

| Grants for test_2NNq0tjT@%                                                                                                                                                                                                                                   |
| GRANT USAGE ON *.* TO 'test_2NNq0tjT'@'%' IDENTIFIED BY PASSWORD <secret>                                                                                                                                                                                    |
2 rows in set (0.24 sec)

With this approach we can change the permissions of another user who is attached to the same database.

To make a read-only user, you revoke all the modifying permissions and leave only SELECT, for example:

REVOKE ALL PRIVILEGES ON wso2mb_esbtenant1.* FROM 'test_2NNq0tjT'@'%';
GRANT SELECT ON wso2mb_esbtenant1.* TO 'test_2NNq0tjT'@'%';

Please note: after you change the user privileges, do not detach/attach the test_2NNq0tjT user to the same or a different database, as that will reset all the privileges automatically.

Amalka Subasinghe: How to run Jenkins in WSO2 Integration Cloud

This blog post guides you on how to run Jenkins in WSO2 Integration Cloud and configure it to build a GitHub project. Currently WSO2 Integration Cloud does not support Jenkins as an app type, but we can use the Custom Docker app type with a Jenkins Docker image.

First we need to find a suitable Jenkins Docker image we can use for this, or build one from scratch.

You can find official Jenkins images on Docker Hub, but we can't use these images as they are, for several reasons. So I'm going to create a fork of the official Jenkins Docker repository and make some changes to the Dockerfile.

I use the branch here.

A. You will see it has a VOLUME mount - at the moment WSO2 Integration Cloud does not allow you to upload an image which has a VOLUME mount, so we need to comment it out:

#VOLUME /var/jenkins_home

B. My plan is to build a GitHub project, so I need to enable the GitHub integration plugins. I add the following line at the end of the file:

RUN install-plugins.sh docker-slaves github-branch-source

C. I want to build projects using Maven, so I add the following segment to the Dockerfile to install and configure Maven.


# Maven version and download mirror; adjust as needed
ENV MAVEN_VERSION 3.3.9

RUN mkdir -p /usr/share/maven /usr/share/maven/ref/ \
  && curl -fsSL https://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz \
    | tar -xzC /usr/share/maven --strip-components=1 \
  && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

ENV MAVEN_HOME /usr/share/maven
COPY settings-docker.xml /usr/share/maven/ref/
RUN chown -R ${user} "$MAVEN_HOME"

D. I don't want to expose the slave agent port 50000 to the outside, so I just comment it out.

#EXPOSE 50000

E. I want to configure the Jenkins job to build the project periodically, so I need to copy the required configuration into the image and give it the correct permissions.

Note: You can first run Jenkins on your local machine, configure the job, and grab the resulting config.xml file.
I configured the Jenkins job to poll the GitHub project every 2 minutes and build. (You can configure the interval as you wish.)

Here's the Jenkins job configuration, trimmed to the relevant elements:

<?xml version='1.0' encoding='UTF-8'?>
<maven2-moduleset>
  <properties>
    <com.coravy.hudson.plugins.github.GithubProjectProperty plugin="github@1.26.1"/>
  </properties>
  <scm class="hudson.plugins.git.GitSCM" plugin="git@3.1.0">
    <submoduleCfg class="list"/>
  </scm>
  <triggers>
    <hudson.triggers.SCMTrigger>
      <spec>H/2 * * * *</spec>
    </hudson.triggers.SCMTrigger>
  </triggers>
  <targets>clean install</targets>
  <settings class="jenkins.mvn.DefaultSettingsProvider"/>
  <globalSettings class="jenkins.mvn.DefaultGlobalSettingsProvider"/>
</maven2-moduleset>
We need to create the following structure in the JENKINS_HOME/jobs folder to configure the job:

 --> jobs
         ├── HelloWebApp
         │   └── config.xml

Add the following to the Dockerfile.

RUN mkdir -p $JENKINS_HOME/jobs/HelloWebApp
COPY HelloWebApp $JENKINS_HOME/jobs/HelloWebApp

RUN chmod +x $JENKINS_HOME/jobs/HelloWebApp \
  && chown -R ${user} $JENKINS_HOME/jobs/HelloWebApp

So let's build the Jenkins image and test it locally.
Go to the folder where the Dockerfile exists and execute:

docker build -t jenkins-alpine .

Run the Jenkins

docker run -p 80:8080 jenkins-alpine

You will see the Jenkins logs in the command line

You can access Jenkins via http://localhost/ and see build jobs running every 2 minutes whenever it detects changes in the GitHub project.

If you click on HelloWebApp and go to Configure, you will see the Jenkins job configuration.

So now the image is ready; let's push it to Docker Hub and deploy it in WSO2 Integration Cloud.

docker images

REPOSITORY       TAG      IMAGE ID       CREATED          SIZE
jenkins-alpine   latest   d7dc03cec1df   51 minutes ago   257.4 MB

docker tag d7dc03cec1df amalkasubasinghe/jenkins-alpine:hellowebapp

docker login

docker push amalkasubasinghe/jenkins-alpine:hellowebapp

When you log in to Docker Hub you can see the image you pushed.

Let's login to the WSO2 Integration Cloud -> Create Application -> and select Custom Docker Image

Add the image by providing the image URL.

Wait until the security scanning is finished, then create the Jenkins application by selecting the scanned image.

Here I select the Custom Docker http-8080 and https-8443 runtime, as Jenkins runs on port 8080.

Wait until the Jenkins instance is fully up and running; check the logs.

Now you can access the Jenkins UI via

That's all :). Now every 2 minutes our Jenkins job will poll the GitHub project, and if there are any changes it will pull them and build.

This is how you can setup and configure Jenkins in WSO2 Integration Cloud.

You can see the Dockerfile here

Amalka Subasinghe: How to change the organization name and key that appear in the WSO2 Cloud UI

Here are the instructions to change the organization name:

1. Go to Organization Page from Cloud management app.

2. Select the organization that you want to change and select profile

3. Change the Organization name and update the profile

How to change the Organization Key:

Changing the Organization Key is not possible. We generate the key from the organization name users provide at registration time. It is a unique value and plays a major role in multi-tenancy. We have certain internal criteria for this key.

Another reason why we cannot do this is that we use the organization key in the internal registries when storing API-related metadata. So, if we changed it, a data migration would be involved.

Amalka Subasinghe: How to change the organisation name that appears in WSO2 Cloud invoices

Let's say you want to change the organisation name that appears in invoices when you subscribe to a paid plan. Here are the instructions:

1. Log in to WSO2 Cloud and go to the Accounts page.

2. You can find the contact information on the Accounts page. Click on 'Update Contact Info'.

3. Change the organization name: add the name which you want displayed in the invoice.

4. Save the changes.

5. You can see the changed organization name in the Accounts Summary.

Amalka Subasinghe: How to add a new payment method to WSO2 Cloud

Here are the instructions:

1. Go to:
2. Log in with your WSO2 credentials (email and password),
3. Click the 'New Payment Method' button:

4. Supply the new credit card information, click the Payment Info button and then the Proceed button.

Let us know if you need further help :)

Amalka Subasinghe: WSO2 App Cloud Architecture

The App Cloud SaaS application provides a user interface and a REST API for App Cloud users to deploy their application artifacts. The web UI is developed with Jaggery and invokes the REST API, which in turn invokes the following backend components to provide the App Cloud solution.

The Docker client provides an interface to build images and push them to the Docker registry.
The Kubernetes client provides an interface to deploy applications in the Kubernetes cluster.
The DAO layer provides an interface for database operations, storing the metadata required by App Cloud in the App Cloud DB.
The SOAP client is used to invoke WSO2 Storage Server admin services to create databases and users.

WSO2 Application Server provides a hosting environment to deploy the App Cloud SaaS application.

WSO2 Identity Server provides identity management, configuring SSO with the App Cloud SaaS application.

WSO2 Storage Server provides RSS instances for app cloud developers to store application data.

WSO2 Data Analytics Server collects statistics published by deployed applications and provides dashboards to the app cloud users.

The Docker registry is used to store application images created from the deployable artifacts and runtimes.

Kubernetes provides the runtime for deployed applications. Kubernetes namespaces provide tenant isolation, and each application is deployed as one container per pod.

End users access the deployed applications via HAProxy, which also provides the Default URL and Custom URL features in addition to load balancing.

Amalka SubasingheHow to block a particular user from accessing an API

1. Log in to the Admin Dashboard as the admin user.

2. Click Black List under the Throttle Policies section and click Add Item (Refer to the screenshot below)

3. Select the condition type as User, give the fully qualified username as the value, and click Blacklist. (Refer to the screenshot below)

For example, if you want to block the user from invoking APIs, you have to provide the value by appending the organization key to the end of the username with the '@' character.

If you follow the above steps, the user will not be able to invoke APIs. Note that once blacklisted, the user cannot invoke any API until you remove the blacklist policy.

sanjeewa malalgodaBest practices we should follow when we create API.


Create the API for a dedicated backend service. For each of the resources, the HTTP methods used to perform the required application functions have to be decided, including the use of applicable HTTP headers.
Special behavior required by the application (e.g. concurrency control, long-running requests) has to be decided.
Finally, potential error situations have to be identified and corresponding error messages have to be designed.

Proper Naming

It's important to have proper names for services and service paths. For example, when we create an API for camera-related operations, we can name the API and its resources with camera-specific terms: we can name the API camera-api, and then create resources like capture-image, delete-image, etc.
Resources must be named properly by means of URIs (Uniform Resource Identifiers). Proper naming of resources is key for an API to be easily understandable by clients.
Following are some guidelines for designing proper API/resource paths and names. These are not mandatory rules, but whenever possible we should follow these best practices.

Atomic resources, collection resources and composite resources should be named as nouns because they represent "things", not "actions" (actions would lean more towards verbs as names of resources).
  • Processing function resources and controller resources should be named as verbs because they in fact represent "actions".
  • Processing function resources and controller resources should not be sub-resources of individual other resources.
  • Only lowercase characters should be used in names, because the rules about which URI element names are case-sensitive and which are not may cause confusion.
  • If multiple words are used to name a resource, these words should be separated by dashes (i.e. "-") or some other separator.
  • Singular nouns should be used for naming atomic resources.
  • Names of collections should be "pluralized", i.e. named by the plural noun of the grouped concept (atomic resource or composite resource).
  • Use forward slashes (i.e. "/") to specify hierarchical relations between resources. A forward slash will separate the names of the hierarchically structured resources. The parent name will immediately precede the name of its immediate children.
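To make these guidelines concrete, here are a few hypothetical resource paths for the camera-api example above (the host name api.example.com is a placeholder, not a real endpoint):

```shell
# Hypothetical URIs illustrating the naming guidelines; the host is a placeholder.
base="https://api.example.com/camera-api/v1.0"
echo "$base/images"         # collection resource: plural noun, lowercase
echo "$base/images/42"      # atomic resource addressed under its collection
echo "$base/capture-image"  # processing function resource: named as a verb, not a sub-resource of an item
```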

Proper Versioning

The version of an API is specified as part of its URI. Usually this version is specified as a pair of integers (separated by a dot) referred to as the major and the minor number of the version, preceded by the lowercase letter "v". E.g. a valid version string in the base path would be v2.1 indicating the first minor version of the second major version of the corresponding API.

A proper versioning strategy helps all API users and clients communicate with the API easily and effectively. WSO2 API Manager has built-in API versioning support: we can copy an existing API to create a new version of it. When you need to create a new version of a running API, the recommended approach is to create the new version from the existing API and modify it. Then, before publishing the new version, we need to test all of its functionality.
Following are some guidelines we need to follow when we version APIs. Again, these are not mandatory rules, but it's good to follow them whenever possible.

In general, a version number following the semantic versioning concept has the structure major.minor.patch and the significance in terms of client impact is increasing from left to right:
  • An incremental patch number means that the underlying modification to the API cannot even be noticed by a client - thus, the patch number is omitted from the version string. Only the internal implementation of the API has been changed while the signature of the API is unchanged. From the perspective of the API developer, a new patch number indicates a bug fix, a minor internal modification, etc.
  • An incremented minor number indicates that new features have been added to the API, but this addition must be backward compatible: the client can use the old API without failing. For example, the API may add new optional parameters or a completely new request.
  • An incremented major number signals changes that are not backward compatible: for example, new mandatory parameters have been added, former parameters have been dropped, or complete former requests are no longer available.
It is best practice to support the current major version as well as at least one major version back. In case new versions are released frequently (e.g. every few months) more major versions back have to be supported. Otherwise, clients will break (too fast).
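Note that version ordering under this scheme is numeric per component, not lexicographic: v1.10 is newer than v1.2. GNU sort's version mode can be used to sanity-check a list of version strings (the versions shown are made up for illustration):

```shell
# Sort hypothetical API version strings in semantic (per-component numeric) order.
printf 'v2.1\nv1.10\nv1.2\n' | sort -V
```

This prints v1.2, v1.10, v2.1; a plain lexicographic sort would wrongly place v1.10 before v1.2.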

Use of Proper HTTP Methods

Manipulation of resources in the REST style is done by create, retrieve, update, and delete operations (so-called CRUD operations), which map to the HTTP methods POST, GET, PUT, PATCH and DELETE.
A request that can be used without producing any side-effect is called a safe request. A request that can be used multiple times and that is always producing the same effect as the first invocation is called idempotent.


GET is in HTTP as well as in the REST style specified as a safe and idempotent request. Thus, an API using the GET method must not produce any side-effects. Retrieving an atomic resource  or a composite resource is done by performing a GET on the URI identifying the resource to be retrieved.


PUT substitutes the resource identified by the URI. Thus, the body of the corresponding PUT message provides the modified but complete representation of a resource that will completely substitute the existing resource: parts of the resource that have not changed must be included in the modified resource of the message body. Especially, a PUT request must not be used for a partial update.


POST is neither safe nor idempotent. The main usages of POST are the creation of new resource, and the initiation of functions, i.e. to interact with processing function resources as well as controller resources.
In order to create a new resource, a POST request is used with the URI of the collection resource to which the new resource should be added.


A resource is deleted by means of the DELETE request on the URI of the resource. Once a DELETE request returned successfully with a "200 OK" response, following DELETE requests on the same URI will result in a "404 Not Found" response because there is no resource available with the URI of the deleted resource.


The HTTP PATCH request is used to perform partial updates on resources. For example, if you do not change the entire resource but only some of its attributes, you can use the PATCH method. Unlike PUT, PATCH can be used for partial updates of resources.
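The method semantics above can be summarized as a sketch of curl invocations against a hypothetical API (the host, paths, and data files are placeholders, not a real service):

```shell
# CRUD-to-HTTP mapping sketch; every name below is a placeholder.
curl -X POST   https://api.example.com/camera-api/v1.0/images -d @new.json         # create a new resource under the collection
curl -X GET    https://api.example.com/camera-api/v1.0/images/42                   # retrieve: safe and idempotent, no side effects
curl -X PUT    https://api.example.com/camera-api/v1.0/images/42 -d @full.json     # replace with a complete representation (idempotent)
curl -X PATCH  https://api.example.com/camera-api/v1.0/images/42 -d @partial.json  # partial update: only the changed attributes
curl -X DELETE https://api.example.com/camera-api/v1.0/images/42                   # delete; a repeated DELETE returns 404 Not Found
```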

Malith JayasingheNetwork Monitoring using SAR

In this blog we will have a look at how to use SAR (System Activity Report) to monitor the network activity.

Installing SAR

Simple steps to install and configure sar (sysstat) on Ubuntu/Debian servers
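On Ubuntu/Debian, the installation typically amounts to the following (the /etc/default/sysstat path and the ENABLED flag are the usual sysstat defaults; verify them on your distribution):

```shell
# Install sysstat, enable periodic data collection, and restart the collector.
sudo apt-get install -y sysstat
sudo sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
sudo service sysstat restart
```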

Monitoring network interface statistics

command: sar -n DEV

The report contains the following

  1. IFACE: Name of the network interface for which statistics are reported.
  2. rxpck/s: packet receiving rate (unit: packets/second)
  3. txpck/s: packet transmitting rate (unit: packets/second)
  4. rxkB/s: data receiving rate (unit: Kbytes/second)
  5. txkB/s: data transmitting rate (unit: Kbytes/second)
  6. rxcmp/s: compressed packets receiving rate (unit: packets/second)
  7. txcmp/s: compressed packets transmitting rate (unit: packets/second)
  8. rxmcst/s: multicast packets receiving rate (unit: packets/second)

The following report shows the SAR output while running a performance test with 50 concurrent users (note: the tests started around 5.29 AM)

The SAR report for the same performance test with 500 concurrent users is shown below (note: performance test started around 6.30 AM)

Note that there is a significant increase in the data transfer rates

Monitoring network interface errors

command: sar -n EDEV

  1. IFACE : Name of the network interface for which statistics are reported.
  2. rxerr/s: Total number of bad packets received per second.
  3. txerr/s: Total number of errors that happened per second while transmitting packets.
  4. coll/s: Number of collisions that happened per second while transmitting packets.
  5. rxdrop/s: Number of received packets dropped per second because of a lack of space in linux buffers.
  6. txdrop/s: Number of transmitted packets dropped per second because of a lack of space in linux buffers.
  7. txcarr/s: Number of carrier-errors that happened per second while transmitting packets.
  8. rxfram/s: Number of frame alignment errors that happened per second on received packets.
  9. rxfifo/s: Number of FIFO overrun errors that happened per second on received packets.
  10. txfifo/s: Number of FIFO overrun errors that happened per second on transmitted packets.

See below for a sample report.

Monitoring socket usage

command: sar -n SOCK

The report contains the following:

totsck : Total number of sockets used by the system.

tcpsck: Number of TCP sockets currently in use.

udpsck: Number of UDP sockets currently in use.

rawsck: Number of RAW sockets currently in use.

ip-frag: Number of IP fragments currently in use.

tcp-tw: Number of TCP sockets in TIME_WAIT state.

The following SAR report shows the socket statistics while running a performance test. Note that the test started around 6.39 AM.


Malith JayasingheMeasuring the Network Performance between two EC2 instances

In this blog we will measure the maximum network bandwidth between two Amazon EC2 instances of type t2.xlarge using iPerf3.

The factors that can affect Amazon EC2 network performance (refer to [1] for more details):

Instance type: larger instance types typically provide better network performance

Single virtual private cloud (VPC):

  • Default interface configuration for EC2 instances uses jumbo frames (9001 MTU), which support higher throughput in a single virtual private cloud (VPC)
  • Outside a single VPC, the maximum MTU is 1500 or less, requiring large packets to be fragmented by intermediate systems

High performance computing (HPC) support (using placement groups.)

Enhanced networking support


iPerf3 [2] is a tool for measuring the maximum achievable bandwidth on IP networks. It allows you to tune parameters related to timing, buffers and protocols (TCP, UDP, SCTP with IPv4 and IPv6). To install iPerf3 on Ubuntu, follow the instructions provided in [3].

Configure one instance as a server to listen on the port specified with -p

iperf3 -s -p 85

Run a second instance as a client with the desired parameters.

For example, the following command initiates a TCP test against the specified server instance on port 85:

iperf3 -c -i 1 -t 60 -V -p 85

Network Performance Results

Client: client-side network performance results are shown below

Network Bandwidth measured from the client = 1 Gbits/second

Server: server-side network performance results are shown below

Network bandwidth measured from the server= 1.02 Gbits/sec




Rajendram KatheesSimple JSP Ajax Tutorial

This blog post presents a simple JSP Ajax sample for adding two numbers. Ajax is a technique for returning a result without reloading the HTML page, using JavaScript, the DOM, XMLHttpRequest, and CSS.


<script type="text/javascript">
var request;
function addnum() {
    var num1 = document.addform.num1.value;
    var num2 = document.addform.num2.value;
    var url = "sum.jsp?num1=" + num1 + "&num2=" + num2;
    if (window.XMLHttpRequest) {
        request = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        request = new ActiveXObject("Microsoft.XMLHTTP");
    }
    try {
        request.onreadystatechange = getsum;"GET", url, true);
        request.send();
    } catch (e) {
        alert("Unable to process");
    }
}
function getsum() {
    if (request.readyState == 4) {
        var sum = request.responseText;
        document.getElementById("sumvalue").innerHTML = sum;
    }
}
</script>
<h1>JSP Ajax Simple Example</h1>
<form name="addform">
<input type="text" name="num1"> + <input type="text" name="num2">
<input type="button" value="Add" onclick="addnum()">
</form>
<span id="sumvalue"></span>


<%
int num1 = Integer.parseInt(request.getParameter("num1"));
int num2 = Integer.parseInt(request.getParameter("num2"));
int sum = num1 + num2;
out.print("Sum is " + sum);
%>


<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="" xmlns:xsi="" xsi:schemaLocation="">

Dimuthu De Lanerolle

Identity Claim Management : 

This is a screencast video I did on claim management:

Web Link :

Chamila AdhikarinayakeDebugging "API authentication failure due to Unclassified Authentication Failure" on WSO2 API Manager

One of the common issues you could get when setting up WSO2 API Manager in a clustered setup is the "failure due to Unclassified Authentication Failure" error when invoking an API.

WARN APIAuthenticationHandler API authentication failure due to Unclassified Authentication Failure 

This error happens when the gateway node fails to validate the token. Following are some of the tips you could use to debug this issue

1. Check the errors in the Key Manager node.
   The first thing you should do is see whether there are any errors in the Key Manager logs. If there are errors, then we can rule out connection-related issues from the Gateway to the Key Manager node.

2. Check configurations.
   There could be a configuration issue in the <APIKeyValidator> section of the api-manager.xml file on both servers. Check the URLs and see whether they point to the correct endpoint. Also check whether the <KeyValidatorClientType> property is the same in both the Gateway and the Key Manager. You could switch the client type (WSClient or ThriftClient) and check as well. (The Thrift ports need to be configured correctly.)

3. Enable debug logs
   Add the following entries to the file in repository/conf on the given node

   In gateway node

   In keymanager node

From these logs you can get a better idea of the issue.

sanjeewa malalgodaHow WSO2 API Manager timeouts and suspension works

Address and WSDL endpoints can be suspended if they are detected as failed endpoints. When an endpoint is in a suspended state for a specified duration following a failure, it cannot process any new messages. Please refer to the WSO2 ESB documentation [1], which explains these properties and their usage (even though APIM and ESB are different products, they share the same Synapse runtime, and these properties behave in the same manner).

errorCode - A comma-separated list of error codes that can be returned by the endpoint. This parameter specifies one or more error codes which, when returned from the endpoint, cause it to be suspended.

initialDuration - The number of milliseconds after which the endpoint should be suspended when it is being suspended for the first time.

maximumDuration -The maximum duration (in milliseconds) to suspend the endpoint.

progressionFactor - The progression factor for the geometric series. See the formula below for a more detailed description.

When endpoint suspension happens, it works as follows.
The equation below does not apply to the suspension duration after the first failure: when an endpoint changes from the active to the suspended state, the suspension duration is exactly the initial duration.
The equation only applies when the endpoint is already in the suspended state and its suspension duration has expired.

next suspension time period = Min (Initial suspension duration * Progression Factor , Max suspend time).
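As a worked example of this formula (the initial duration, progression factor, and maximum below are illustrative values, not WSO2 defaults), the successive suspension periods form a capped geometric series:

```shell
# Hypothetical values: initialDuration=30000 ms, progressionFactor=2, maximumDuration=300000 ms.
d=30000; factor=2; max=300000
for i in 1 2 3 4 5; do
  echo "suspension $i: $d ms"
  d=$((d * factor))
  if [ "$d" -gt "$max" ]; then d=$max; fi
done
```

With these values the successive durations are 30000, 60000, 120000, 240000, and then 300000 ms, where the maximum duration caps the series.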


Also, in API Manager we have different timeouts. If an endpoint times out, we notice it as a faulty endpoint and consider it for suspension.

1. The global timeout, defined in the (API_HOME\repository\conf) file. This decides the maximum time that a callback waits in APIM for a response to a particular request. If APIM does not get any response from the backend, it drops the message and clears the callback. This is a global-level parameter which affects all the endpoints configured in APIM.
The default is 120000 milliseconds (2 minutes).

2. The socket timeout, defined in the (APIM_HOME\repository\conf) file. This parameter decides the timeout for which a particular HTTP request waits for a response. If APIM does not receive any response from the backend during this period, the HTTP connection times out, which eventually throws a timeout error on the APIM side, and the fault handlers are hit.
The default value is 60000 milliseconds (1 minute).

3. You can define timeouts in the endpoint configuration so that they affect only that particular endpoint of an API. This is a better option if you need to configure timeouts per API endpoint for different backend services. You can also define the action to take upon timeout.
The example configuration below sets the endpoint to time out in 120000 milliseconds (2 minutes) and then execute the fault handler.
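The endpoint timeout block being referred to here can be sketched as follows in Synapse configuration (the duration matches the 2-minute example; responseAction is the standard Synapse element for the action on timeout, and the block goes inside the API's <endpoint> definition):

```xml
<timeout>
   <duration>120000</duration>
   <responseAction>fault</responseAction>
</timeout>
```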

Manorama PereraAS4 Messaging Standard

AS4 is a recent B2B messaging standard. It can exchange any type of payload (XML, JSON, binary, etc.), and it also supports sending multiple payloads in one AS4 message.

AS4 originates from ebXML (Electronic Business XML). The AS4 Profile of ebMS 3.0 defines the following three conformance profiles (CPs), each a subset of ebMS V3 options to be supported by an AS4 Message Service Handler (MSH).

  • AS4 ebHandler CP
  • AS4 Light Client CP
  • AS4 Minimal Client CP

AS4 Messaging Model

AS4 defines a Four-Corner-Model including the components involved in a message exchange. Following are the entities of this four-corner-model.

Message Producer: Business application which sends the message content to the sending Message Service Handler(MSH).

Sending Message Service Handler: Packages the message content and sends to the intended receiving MSH.

Receiving Message Service Handler: Receives the message from the sending MSH.

Message Consumer: The business application which receives the message content from receiving MSH.

The above-mentioned conformance profiles of the AS4 Profile of ebMS 3.0 define the different subsets of AS4 profile options supported by an AS4 Message Service Handler.
  • The message sent from the producer to the sending MSH can be of any format agreed by those two entities.
  • Similarly, the messages exchanged between the receiving MSH and the consumer can be of any format.
  • AS4 defines the message format being exchanged between the sending MSH and receiving MSH.
  • Apart from that, the AS4 P-Mode configuration files, which reside in the MSHs, specify how to handle AS4 messages.

AS4 Message Exchange Patterns (MEPs)

In AS4, there are two message exchange patterns. 
  • One-way / Push
    • In one-way / push MEP, the sending MSH sends the AS4 user message containing payloads to the receiving MSH. The initiator is the sending MSH.
  • One-way / Pull
    • In one-way / pull MEP, the receiving MSH sends a pull request signal message to the sending MSH. Then the sending MSH sends the user message. The initiator in this MEP is the receiving MSH.


    AS4 Minimal Client Profile

    As per the specification AS4 minimal client profile applies only to one side of an MEP (acting as a “client” to the other party).

    P-Mode configuration file for AS4 Minimal Client Profile

    Himasha GurugeRunning MSSQL on Mac

     This is a great post on how you can set up MSSQL on a Mac with Docker. Once the Docker instance is up and running, you can use an SQL editor such as DBeaver [2] to execute SQL queries.

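    For reference, running SQL Server in Docker at the time looked something like the following (the image name and the SA password are placeholders; check Microsoft's current documentation for the up-to-date image tag):

```shell
# Run SQL Server in a Docker container; change the placeholder SA password.
docker run -d --name mssql \
  -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrong!Passw0rd' \
  -p 1433:1433 microsoft/mssql-server-linux
```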

    Sashika WijesingheImplementing Aggregator Pattern using Ballerina

    Introduction to Aggregator Pattern

    The Aggregator is one of the basic patterns defined in SOA patterns and EIP (Enterprise Integration Patterns), and it can be used to build more complex scenarios.

    According to the EIP patterns, “The Aggregator is a special Filter that receives a stream of messages and identifies messages that are correlated. Once a complete set of messages has been received (more on how to decide when a set is 'complete' below), the Aggregator collects information from each correlated message and publishes a single, aggregated message to the output channel for further processing” [1]

    Use Case

    Let’s assume a Salesperson wants to get the Customer's Personal Information, Contact Information and the Purchasing Behavior for a given customer ID through the Customer Relationship Management (CRM) system for the upcoming direct marketing campaign. In a real world scenario, the  CRM system needs to call multiple backend services to get the required information and aggregate the responses coming from the backend systems to provide the information requested by the salesperson.

    The system will send a request message with the customer ID to retrieve the required information from following systems.
    • Send a request to "Customer Info Service" to get customer's personal information
    • Send a request to "Contact Info Service" to get customer's contact Information
    • Send a request to "Purchasing Behavior Service" to get the purchasing details of the customer

    Implementation Description

    Following backend services will provide the requested information based on the customer ID provided.
    •     ContactInfo.bal
    •     CustomerInfo.bal
    •     PurchasingInfo.bal
    Intermediate service (AggregatorService) will get the responses coming from the backend services and integrate the responses to provide the response to the Salesperson.

    Let's Code with Ballerina

    First, go to the Ballerina website and download the latest Ballerina distribution.

    Note - I have used Ballerina 0.89 version to demonstrate this use case

    Start Ballerina Composer

    Ballerina Composer is a visual editor tool that provides the capability to write or draw your integration scenario.

    To start the composer, go to <Ballerina_Home>/bin and execute following command depending on your environment.

    Linux Environment - ./composer
    Windows Environment  - composer.bat

    Implementing the backend services

    To implement the above use case let's create the required backend services; Customer Info Service, Contact Info Service and Purchasing Behavior Service.

    Customer Information Service
    I have created a service named “CustomerInfoService” via the composer and provided the base path “/customerInfo” so that outside clients can access this service directly. To demonstrate the scenario, I have created a map to maintain the customer information. JSONPath is used to extract the customer ID from the incoming request, and the customer information is then looked up in the ‘customerInfoMap’ by customer ID. If there are no details for the requested ‘CustomerID’, the service returns an error payload.

    Let’s see how this can be represented using the composer.

    Following is the code representation of the above design.

    package aggregator;

    import ballerina.net.http;
    import ballerina.lang.messages;
    import ballerina.lang.jsons;
    import ballerina.lang.system;

    @http:config {basePath:"/customerInfo"}
    service<http> CustomerInfoService {

        resource CustomerInfoResource(message m) {
            json incomingPayload = messages:getJsonPayload(m);
            map customerInfoMap = {};
            json cus_1 = {"PersonalDetails": {"Name": "Peter Thomsons","Age": "32","Gender": "Male"}};
            json cus_2 = {"PersonalDetails": {"Name": "Anne Stepson","Age": "50","Gender": "Female"}};
            json cus_3 = {"PersonalDetails": {"Name": "Edward Dewally","Age": "23","Gender": "Male"}};
            customerInfoMap["100"] = cus_1;
            customerInfoMap["101"] = cus_2;
            customerInfoMap["102"] = cus_3;
            string customerID = jsons:getString(incomingPayload, "$");
            system:println("Customer ID = " + customerID);
            message response = {};
            json payload;
            payload, _ = (json) customerInfoMap[customerID];

            if (payload != null) {
                messages:setJsonPayload(response, payload);
            } else {
                json errorpayload = {"Response": {"Error": "No Details available for the given Customer ID"}};
                messages:setJsonPayload(response, errorpayload);
            }
            reply response;
        }
    }

    This service will return the customer Information based on the requested customer ID.

    Note - I have created the Contact Info Service and Purchasing Behaviour Service similar to the above service. Only difference is the payload used in the service.

    Implementing the Intermediate service

    So far we have created the ‘Customer Information Service’, ‘Contact Information Service’ and ‘Purchasing Information Service’ using Ballerina. Let’s see how to create an intermediate service that aggregates the responses coming from each of the backend systems and provides an aggregated response to the salesperson.

    I have created a service named “AggregatorService” to aggregate the backend responses. To implement the scenario I have used the fork-join function in Ballerina, which can define individual workers that each perform an assigned task, and which waits until all the workers have completed their tasks. When the backend responses are collected, they are aggregated into a JSON payload, as diagrammed in the composer below.

    Following is the code representation of the above design.
    package aggregator;

    import ballerina.net.http;
    import ballerina.lang.messages;
    import ballerina.lang.jsons;

    @http:config {basePath:"/AggregatorService"}
    service<http> AggregatorService {

        resource CRMResource(message m) {
            http:ClientConnector customerInfoEP = create http:ClientConnector("http://localhost:9090/customerInfo");
            http:ClientConnector contactInfoEP = create http:ClientConnector("http://localhost:9090/contactInfo");
            http:ClientConnector purchasingInfoEP = create http:ClientConnector("http://localhost:9090/purchasingInfo");
            json incomingPayload = messages:getJsonPayload(m);
            string customerID = jsons:getString(incomingPayload, "$");
            message aggregateResponse = {};

            if (customerID == "100" || customerID == "101" || customerID == "102") {
                fork {
                    worker forkWorker1 {
                        message response1 = {};
                        message m1 = messages:clone(m);
                        response1 =, "/", m1);
                        response1 -> fork;
                    }
                    worker forkWorker2 {
                        message response2 = {};
                        message m2 = messages:clone(m);
                        response2 =, "/", m2);
                        response2 -> fork;
                    }
                    worker forkWorker3 {
                        message response3 = {};
                        response3 =, "/", m);
                        response3 -> fork;
                    }
                } join (all) (map results) {
                    any[] t1;
                    any[] t2;
                    any[] t3;
                    t1, _ = (any[]) results["forkWorker1"];
                    t2, _ = (any[]) results["forkWorker2"];
                    t3, _ = (any[]) results["forkWorker3"];
                    message res1;
                    message res2;
                    message res3;
                    res1, _ = (message) t1[0];
                    res2, _ = (message) t2[0];
                    res3, _ = (message) t3[0];
                    json jsonres1 = messages:getJsonPayload(res1);
                    json jsonres2 = messages:getJsonPayload(res2);
                    json jsonres3 = messages:getJsonPayload(res3);

                    json payload = {};
                    payload.CustomerDetailsResponse = {};
                    payload.CustomerDetailsResponse.PersonalDetails = jsonres1.PersonalDetails;
                    payload.CustomerDetailsResponse.ContactDetails = jsonres2.ContactDetails;
                    payload.CustomerDetailsResponse.PurchasingDetails = jsonres3.PurchasingDetails;
                    messages:setJsonPayload(aggregateResponse, payload);
                }
            } else {
                json errorpayload = {"Response": {"Error": "No Details available for the given Customer ID"}};
                messages:setJsonPayload(aggregateResponse, errorpayload);
            }
            reply aggregateResponse;
        }
    }

    Executing the Service

    Deploying the Service
    Now we have all the backend services and aggregator service created using Ballerina. Let’s see how to deploy and invoke the services.

    I have packaged all the backend services and the intermediate service under the “aggregator” package by defining “package aggregator;” at the top of each service. For demonstration purposes, I have created a Ballerina archive named “aggregator.bsz” including all the services in the “aggregator” package.

    Use the following command to create a Ballerina archive:

    <Ballerina_HOME>/bin/ballerina build service <package> -o <FileName.bsz>

    Ex: <Ballerina_HOME>/bin/ballerina build service aggregator -o aggregator.bsz

    Run the following command to deploy and run the service.

    ./ballerina run service <BallerinaArchiveName>

    Ex: ./ballerina run service aggregator.bsz

    Note : Ballerina Archive for the above use case can be found from [2]

    Invoking the Service

    Now the Salesperson can get all the expected information (personal details, contact details and purchasing behavior information) required for the direct marketing campaign by providing the CustomerID to the CRM system.

    Here, I have used “Postman” Rest Client to represent the CRM system and requesting the information for the CustomerID = “101”.
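    If you prefer the command line to Postman, the same request can be sketched with curl (the CustomerID field name in the JSON payload is an assumption based on the description above):

```shell
# Invoke the AggregatorService on the default Ballerina HTTP port 9090.
curl -X POST http://localhost:9090/AggregatorService \
  -H 'Content-Type: application/json' \
  -d '{"CustomerID": "101"}'
```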



    Darshana GunawardanaSolve: “No subject alternative DNS name matching” error

    I have been working through countless situations solving SSL-related issues, but today I came across a new one.

    Caused by: No subject alternative DNS name matching found.
    at org.apache.jsp.login_jsp._jspService(
    at org.apache.jasper.runtime.HttpJspBase.service(
    at javax.servlet.http.HttpServlet.service(
    at org.apache.jasper.servlet.JspServletWrapper.service(
    ... 41 more
    Caused by: No subject alternative DNS name matching found.
    ... 55 more

    No subject alternative DNS name matching  found

    By reading this post and this post, I understood that the SAN (Subject Alternative Name) is a certificate extension that can be used to cover multiple hostnames with a single certificate. A wildcard certificate can achieve a similar goal of covering multiple domains with one certificate, but the SAN extension offers more flexibility: it can whitelist different domains that do not belong to the same pattern.

    Going back to the error: when I browsed the server certificate and checked the details of the SAN extension, I found that this particular internal hostname was not included in the DNS name list. As the fix, the request endpoint was changed to use the correct hostname.

    Few usages on SAN extension on popular domains:

    1. Facebook (Browsed using Firefox)
    2. GMail (Browsed using Chrome)
    3. WordPress (Browsed using Chrome)

    In summary, the SAN extension provides the flexibility to cover multiple domains with a single certificate while still supporting hostname verification during the SSL handshake. If you need more details on hostname verification, read this post.
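    A convenient way to check which DNS names a certificate's SAN extension covers is OpenSSL (version 1.1.1 or later for the -addext/-ext flags). The sketch below generates a throwaway self-signed certificate with two hypothetical DNS names and prints its SAN extension:

```shell
# Create a short-lived self-signed certificate with a SAN list (hypothetical names),
# then print its Subject Alternative Name extension.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/san-demo.key -out /tmp/san-demo.crt \
  -subj "/CN=example.internal" \
  -addext "subjectAltName=DNS:example.internal,DNS:api.example.internal"
openssl x509 -in /tmp/san-demo.crt -noout -ext subjectAltName
```

    The same x509 inspection works on a certificate fetched from a live server, e.g. `echo | openssl s_client -connect host:443 -servername host 2>/dev/null | openssl x509 -noout -ext subjectAltName`. If the hostname you connect with is missing from the printed DNS list, Java raises exactly the error discussed above.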

    That’s all for now.. Hope you learned something.. 🙂

    Anupama PathirageWSO2 DSS - Use User Defined Data types (UDT)

    This post explains how to use Oracle UDTs with WSO2 DSS.

    SQL For Create UDT and Tables:

    CREATE OR REPLACE TYPE T_Address AS OBJECT (
    home_no   NUMBER(5),
    city varchar2(20)
    );

    CREATE TABLE Students (
    id    NUMBER(10),
    home_addr T_Address);

    insert into Students(id, home_addr) values(1, T_Address(10, 'Colombo'));
    insert into Students(id, home_addr) values(2, T_Address(30, 'New York'));

    Data Service:

    <data name="TestUDT" transports="http https local">
       <config enableOData="false" id="OracleDB">
          <property name="driverClassName">oracle.jdbc.driver.OracleDriver</property>
          <property name="url">jdbc:oracle:thin:@localhost:1521/xe</property>
          <property name="username">system</property>
          <property name="password">oracle</property>
       </config>
       <query id="SelectData" useConfig="OracleDB">
          <sql>Select id,home_addr From Students</sql>
          <result element="Entries" rowName="Entry">
             <element column="id" name="id" xsdType="integer"/>
             <element column="home_addr[0]" name="Home_Number" xsdType="integer"/>
             <element column="home_addr[1]" name="Home_City" xsdType="string"/>
          </result>
       </query>
       <query id="InsertData" returnUpdatedRowCount="true" useConfig="OracleDB">
          <sql>Insert into Students (id, home_addr) values (?, T_Address(?,?))</sql>
          <result element="UpdatedRowCount" rowName="" useColumnNumbers="true">
             <element column="1" name="Value" xsdType="integer"/>
          </result>
          <param name="student_id" sqlType="INTEGER"/>
          <param name="home_number" sqlType="INTEGER"/>
          <param name="home_city" sqlType="STRING"/>
       </query>
       <operation name="SelectOp">
          <call-query href="SelectData"/>
       </operation>
       <operation name="InsertOp">
          <call-query href="InsertData">
             <with-param name="student_id" query-param="student_id"/>
             <with-param name="home_number" query-param="home_number"/>
             <with-param name="home_city" query-param="home_city"/>
          </call-query>
       </operation>
    </data>


    For InsertOp (sample request and response; the values here are illustrative):


       <p:InsertOp xmlns:p="http://ws.wso2.org/dataservice">
          <!--Exactly 1 occurrence-->
          <p:student_id>3</p:student_id>
          <!--Exactly 1 occurrence-->
          <p:home_number>20</p:home_number>
          <!--Exactly 1 occurrence-->
          <p:home_city>London</p:home_city>
       </p:InsertOp>


    <UpdatedRowCount xmlns="http://ws.wso2.org/dataservice">
       <Value>1</Value>
    </UpdatedRowCount>

    For SelectOp (sample response):


    <Entries xmlns="http://ws.wso2.org/dataservice">
       <Entry>
          <id>2</id>
          <Home_Number>30</Home_Number>
          <Home_City>New York</Home_City>
       </Entry>
    </Entries>
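    On the client side, the nested UDT fields come back as ordinary elements, so they can be read with any XML library. A minimal Python sketch, using an illustrative response payload (the namespace and sample values here are assumptions; check them against your service's actual response):

    ```python
    import xml.etree.ElementTree as ET

    # Illustrative SelectOp response; element names follow the result mapping
    # above (Entries/Entry with id, Home_Number, Home_City). The namespace is
    # an assumption.
    response = """\
    <Entries xmlns="http://ws.wso2.org/dataservice">
       <Entry>
          <id>2</id>
          <Home_Number>30</Home_Number>
          <Home_City>New York</Home_City>
       </Entry>
    </Entries>"""

    NS = {"d": "http://ws.wso2.org/dataservice"}
    root = ET.fromstring(response)

    # Collect (id, city) pairs from each Entry row
    rows = [
        (int(e.findtext("d:id", namespaces=NS)),
         e.findtext("d:Home_City", namespaces=NS))
        for e in root.findall("d:Entry", NS)
    ]
    print(rows)
    ```
    
    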

    Samitha ChathurangaWSO2 Puppet Deployment

    WSO2 products are accompanied by Puppet modules, which make your life easier when setting up and configuring a product to match a given requirement or deployment architecture. I am going to provide an introduction and a guide on how to use these Puppet modules for development or deployment purposes.

    So if you are a developer who wants to customize a WSO2 Puppet module (to add flexibility or more parameterized configurations), this post is a good starter.

    And if you are a user who wants to directly set up and configure a Puppet environment for a certain enterprise deployment, you may read this too.

    The WSO2 Puppet architecture was changed completely within the last year, so the Puppet modules of each WSO2 product are now in separate git repositories, as opposed to the old all-in-one structure. The old WSO2 puppet-modules repository can be found here if you just want to have a look. It has now been deprecated, and all the latest product-related Puppet scripts are written under the new architecture, which I am going to describe here.

    What is done by puppet...?


    Before reading further, let's clarify what Puppet does with respect to WSO2 products. We have to understand this.

    For a beginner to Puppet who is getting ready to tackle the WSO2 Puppet modules, here is a very simple, introductory view of what Puppet does (the concept is common to most other Puppet modules too).

    The following diagram (Figure 1) illustrates, in simple terms, what happens when we use the WSO2 APIM 2.1.0 Puppet module to deploy and configure the product in a production environment.
    Figure 1

    Don't worry if you didn't understand this 100% :-D, I am about to describe it.

    In the repository, which we call a WSO2 Puppet module (for instance, the wso2-apim-2.1.0 Puppet module), there are configuration files acting as templates for each file that needs to be edited/configured in a product deployment, e.g. axis2.xml, carbon.xml, master-datasources.xml.

    The difference between an actual vanilla product pack's config file and the corresponding Puppet template file is that, in the latter, the configurable values have been replaced with variables/parameters so that they can be changed at runtime.

    A Puppet module also includes files (hiera files) listing the values to be passed to each parameter/variable in those config files. These values, which we call hiera data, are defined separately for each deployment pattern (or profile, if available).

    So when we "run" Puppet, three basic steps are executed, as mentioned in the above figure.

    1 - Apply the configuration data (of the required pattern), into the puppet template files.
    2 - Replace the vanilla wso2am-2.1.0 product's configuration files with the modified template files of step 1.
    3 - Copy the modified product pack, in step 2, into the production environment and start the product server.
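    As a purely hypothetical illustration of steps 1 and 2 (the file paths, parameter names, and hiera keys below are made up for this sketch, not taken from an actual WSO2 module), a template fragment and the hiera value that fills it might look like:

    ```
    # templates/carbon.xml.erb -- template fragment with a parameter
    <Offset><%= @ports_offset %></Offset>

    # hieradata/default.yaml -- hiera data supplying the value for this pattern
    wso2::ports_offset: 0
    ```

    When Puppet runs, the template parameter is resolved from the hiera data of the selected pattern, producing a concrete carbon.xml that then replaces the one in the vanilla pack (step 2).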

    OK, now you know what we do with Puppet, so shall we move in deeper? First, let's clarify the parts and pieces of the WSO2 Puppet modules.

    Organization of WSO2 Puppet repositories

    If you are going to work with a certain WSO2 product (for a Puppet deployment), you will have to deal with 3 functional components, each found in its own git repository.
    1. The certain WSO2 product related repository
    2. puppet-base repository
    3. puppet-common repository
    Both 2 and 3 are required for a puppet deployment of a WSO2 product.

    1. The certain WSO2 product related repository

    Each WSO2 product has a puppet-module repository (i.e. puppet-apim, puppet-is, puppet-das, puppet-esb, puppet-iot, puppet-ei). Most of these have been released for the latest product release (as of June 2017); please find the Puppet module repository list here. These consist of Puppet scripts that support multiple deployment patterns and multiple profiles where available.

    Let's take the WSO2 API Manager Puppet modules for instance. The repository consists of 3 Puppet modules related to the WSO2 APIM product. They are listed below, with the specific product related to each module mentioned in front.
    1. wso2am_runtime - WSO2 API Manager 2.1.0
    2. wso2am_analytics - WSO2 APIM Analytics Server 2.1.0
    3. wso2is_prepacked - Pre-packaged WSO2 Identity Server 5.3.0 (for IS  as  Key Manager APIM deployment)
    And the wso2am_runtime module includes Puppet scripts that facilitate deployment of APIM in 7 deployment patterns, with 5 APIM profiles.

    2. puppet-base repository

    The WSO2 base Puppet repository can be found here. puppet-base is also another "Puppet module" from the Puppet perspective. It provides features for installing and configuring WSO2 products. At a high level it does the following:
    1. Install Java Runtime
    2. Clean CARBON_HOME directory
    3. Download and extract WSO2 product distribution
    4. Apply Carbon Kernel and WSO2 product patches
    5. Apply configuration data
    6. Start WSO2 server as a service or in foreground

    3. puppet-common repository

    WSO2 Puppet Common repository provides files required for setting up a Puppet environment to deploy WSO2 products.
    • manifests/site.pp: the Puppet site manifest
    • scripts/ a base bash script file which provides utility bash methods
    • the setup bash script for setting up a PUPPET_HOME environment for development work
    • vagrant: a Vagrant script for testing Puppet modules using VirtualBox

    Setting up a puppet environment

    There are basically 2 approaches to set up a Puppet environment:
    1. Using Vagrant and Oracle VirtualBox
    2. Master-agent environment
    It is recommended to select the appropriate approach considering your requirement.

      1. Using Vagrant and Oracle VirtualBox


      Vagrant can be used to set up a Puppet development environment to easily test a WSO2 product's Puppet module.

      In this approach, Vagrant is used to automate creation of a VirtualBox VM (Ubuntu 14.04) and deploy/install the WSO2 product using the WSO2 puppet modules.

      This approach is much easier than the master-agent approach in terms of setup convenience. But it is less convenient for debugging errors, as Vagrant takes considerable time to bring up a WSO2 product with Puppet, since the process also includes creating a virtual machine in VirtualBox. If you are developing a WSO2 Puppet module from the beginning, this is not the recommended approach. But if you are not a newbie to Puppet and have good expertise in how Puppet modules work with WSO2 products, then you may use this approach (as you will make fewer errors).

      Also, you cannot use this Puppet environment to deploy and install a WSO2 product into an actual production environment, because it installs the product into a VirtualBox virtual machine that is created automatically on the go.

      For the steps to follow in this approach, see the official WSO2 documentation wikis on GitHub here.

      2. Master agent environment


        A master-agent environment can be used to deploy/install WSO2 products in actual production environments. Also, if you are developing a Puppet module from the beginning, or doing major customizations to existing Puppet modules, and your development task will take multiple days or weeks, it is better to follow this approach: it is more convenient for debugging, the testing time for each run, re-running after customizations, etc. But setting up this master-agent environment is a bit cumbersome, as it takes a fair amount of time and also needs multiple OS instances/computers.

      To set up a master-agent Puppet environment with the WSO2 Puppet modules, follow the steps in the official WSO2 documentation wikis on GitHub.


      Geeth MunasingheFix for iOS host name verification failed issue.

      If you get the following error when installing the certificates for iOS configurations on the WSO2 IoT server, please replace "localhost" with your <SERVER_IP> in the API files under <IoT_HOME>/repository/deployment/server/synapse-configs/default/api/

      [2017-02-02 20:17:21,548] [IoT-Core] ERROR - TargetHandler I/O errorHost name verification failed for host : localhost Host name verification failed for host : localhost
      at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(
      at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(
      at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(
      at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(
      at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(
      at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(
      at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$
      [2017-02-02 20:17:21,561] [IoT-Core]  WARN - EndpointContext Endpoint : admin--IOSDeviceManagement_APIproductionEndpoint_0 with address https://localhost:9443/ios-enrollment/profile will be marked SUSPENDED as it failed
      [2017-02-02 20:17:21,562] [IoT-Core]  WARN - EndpointContext Suspending endpoint : admin--IOSDeviceManagement_APIproductionEndpoint_0 with address https://localhost:9443/ios-enrollment/profile - current suspend duration is : 30000ms - Next retry after : Thu Feb 02 20:17:51 IST 2017

      Geeth MunasingheClustering WSO2IOT for Mobile Device Management

      As indicated in the above diagram, when clustering IoT Server there is a worker-manager separation. However, this differs from the standard WSO2 Carbon worker-manager separation.
      IoT Server includes an admin console that can be used by any user with administrative privileges. These users can perform actions on enrolled devices, and the devices retrieve those actions by requesting the pending operations. This is done either by waking the device through a push notification or by configuring the device to poll at a pre-configured frequency.
      Normally, administrative tasks should be run from the manager node.
      There are two major deployment patterns for the manager node: one is running the manager node in a private network due to security constraints, and the other is allowing end users to access the management node so that they can control and view their devices.
      The manager node is also used to run background tasks that are necessary to update device information such as the location and installed applications.

      NGINX Configs

      Please make sure that you have properly signed SSL certificates before starting this. Note that we are using 4 URLs for the clustering: two of them point at the workers, one at the manager and one at the key manager. When producing the SSL certificates, please make sure to add all of these URLs as SAN entries so that hostname verification succeeds for each name.
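      To check that a certificate actually carries all four hostnames, you can inspect its SAN list with openssl. The commands below generate and inspect a throwaway self-signed certificate with placeholder hostnames (replace them with your real ones); the `-addext` option needs OpenSSL 1.1.1 or newer:

      ```shell
      # Create a self-signed cert listing all four cluster hostnames as SANs
      # (hostnames are placeholders for this example)
      openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -keyout /tmp/test.key -out /tmp/test.crt \
        -subj "/CN=mgt.example.com" \
        -addext "subjectAltName=DNS:mgt.example.com,DNS:wkr.example.com,DNS:gateway.example.com,DNS:keymgt.example.com"

      # Print only the SAN extension to verify every hostname is present
      openssl x509 -in /tmp/test.crt -noout -ext subjectAltName
      ```

      Run the same `openssl x509` command against your real certificate file (e.g. the one referenced by `ssl_certificate` below) to confirm the SAN list before wiring up the load balancer.
      
      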


      This section provides instructions on how to configure Nginx as the load balancer. You can use any load balancer for your setup and Nginx is used here as an example. This covers the configuration in the main Nginx configuration file.
      The location of this file varies depending on how you installed the software on your machine. For many distributions, the file is located at /etc/nginx/nginx.conf. If it does not exist there, it may also be at /usr/local/nginx/conf/nginx.conf or /usr/local/etc/nginx/nginx.conf. You can create separate files inside the conf.d directory for each configuration. Three different configuration files are used for the Manager, Key Manager and Worker nodes in the example provided on this page.

      Put this as mgt.conf in /etc/nginx/conf.d/

      # Upstream names, hostnames and backend addresses below are placeholders;
      # replace them with your environment's values.
      upstream mgt_http {
             server mgt-node.example.com:9763;
      }

      server {
             listen 80;
             server_name mgt.example.com;   # placeholder hostname
             client_max_body_size 100M;
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass http://mgt_http;
             }
      }

      upstream mgt_https {
             server mgt-node.example.com:9443;
      }

      server {
             listen 443;
             ssl on;
             ssl_certificate /opt/keys/star_wso2_com.crt;
             ssl_certificate_key /opt/keys/iots310_wso2_com.key;
             server_name mgt.example.com;   # placeholder hostname
             client_max_body_size 100M;
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass https://mgt_https;
             }
      }


      The worker is pointed to by two URLs.

      Put this as wkr.conf in /etc/nginx/conf.d/

      # Upstream names, hostnames and backend addresses below are placeholders;
      # replace them with your environment's values.
      upstream wkr_http {
             server wkr1-node.example.com:9763;
             server wkr2-node.example.com:9763;
      }

      server {
             listen 80;
             server_name wkr.example.com;   # placeholder hostname
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass http://wkr_http;
             }
      }

      upstream wkr_https {
             server wkr1-node.example.com:9443;
             server wkr2-node.example.com:9443;
      }

      server {
             listen 443;
             ssl on;
             ssl_certificate /opt/keys/star_wso2_com.crt;
             ssl_certificate_key /opt/keys/iots310_wso2_com.key;
             server_name wkr.example.com;   # placeholder hostname
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass https://wkr_https;
             }
      }

      Put this as gateway.conf in /etc/nginx/conf.d/

      # Upstream names, hostnames and backend addresses below are placeholders;
      # replace them with your environment's values.
      upstream gateway_http {
             server gateway-node.example.com:8280;
      }

      server {
             listen 80;
             server_name gateway.example.com;   # placeholder hostname
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass http://gateway_http;
             }
      }

      upstream gateway_https {
             server gateway-node.example.com:8243;
      }

      server {
             listen 443;
             ssl on;
             ssl_certificate /opt/keys/star_wso2_com.crt;
             ssl_certificate_key /opt/keys/iots310_wso2_com.key;
             server_name gateway.example.com;   # placeholder hostname
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass https://gateway_https;
             }
      }

      Key Manager

      Put this as keymgt.conf in /etc/nginx/conf.d/

      # Upstream names, hostnames and backend addresses below are placeholders;
      # replace them with your environment's values.
      upstream keymgt_http {
             server keymgt-node.example.com:9763;
      }

      server {
             listen 80;
             server_name keymgt.example.com;   # placeholder hostname
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass http://keymgt_http;
             }
      }

      upstream keymgt_https {
             server keymgt-node.example.com:9443;
      }

      server {
             listen 443;
             ssl on;
             ssl_certificate /opt/keys/star_wso2_com.crt;
             ssl_certificate_key /opt/keys/iots310_wso2_com.key;
             server_name keymgt.example.com;   # placeholder hostname
             location / {
                    proxy_set_header X-Forwarded-Host $host;
                    proxy_set_header X-Forwarded-Server $host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_read_timeout 5m;
                    proxy_send_timeout 5m;

                    proxy_http_version 1.1;
                    proxy_set_header Upgrade $http_upgrade;
                    proxy_set_header Connection "upgrade";

                    proxy_pass https://keymgt_https;
             }
      }

      Setting up the MySQL database

      The required databases and the locations of their scripts are listed below. (Please note that the CDM database includes the Android, iOS, Windows and Certificate Management schemas, and the App Manager database includes the Store and Social DBs. Therefore 5 schemas would suffice.)

      1. Registry Database - <PRODUCT_HOME>/dbscripts/mysql.sql
      2. User Manager Database - <PRODUCT_HOME>/dbscripts/mysql.sql
      3. APIM Database - <PRODUCT_HOME>/dbscripts/apimgt/mysql.sql
      4. CDM Database - <PRODUCT_HOME>/dbscripts/cdm/mysql.sql
        1. Certificate Mgt Database - <PRODUCT_HOME>/dbscripts/certMgt/mysql.sql
        2. Android Database - <PRODUCT_HOME>/dbscripts/cdm/plugins/android/mysql.sql
        3. iOS Database - <PRODUCT_HOME>/dbscripts/cdm/plugins/ios/mysql.sql
        4. Windows Database - <PRODUCT_HOME>/dbscripts/cdm/windows/mysql.sql
      5. APP Manager Database - <PRODUCT_HOME>/dbscripts/appmgt/mysql.sql
        1. Store Database - <PRODUCT_HOME>/dbscripts/storage/mysql/resource.sql
        2. Social Database -  <PRODUCT_HOME>/dbscripts/social/mysql/resource.sql
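      Before running the scripts above, the corresponding schemas need to exist. A minimal sketch, with hypothetical database names (align them with whatever names your datasource URLs use):

      ```sql
      -- Placeholder schema names; adjust to match your datasource configuration.
      CREATE DATABASE WSO2_REG_DB;
      CREATE DATABASE WSO2_UM_DB;
      CREATE DATABASE WSO2_APIM_DB;
      CREATE DATABASE WSO2_CDM_DB;   -- also holds the certMgt, Android, iOS and Windows tables
      CREATE DATABASE WSO2_APPM_DB;  -- also holds the Store and Social tables
      ```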

      The databases are configured as follows. Please note: make sure that you add the relevant JDBC driver library to the <PRODUCT_HOME>/lib directory. In this case, it is mysql-connector-java-{version}.jar.

      1. <PRODUCT_HOME>/conf/datasources/master-datasources.xml
        1. Registry Database
        2. User Manager Database
        3. APIM Database
        4. APP Manager Database
          1. Store Database
          2. Social Database
      2. <PRODUCT_HOME>/conf/datasources/cdm-datasources.xml
        1. CDM Database (Please add the certMgt tables to CDM schema)
      3. <PRODUCT_HOME>/conf/datasources/android-datasources.xml
        1. Android Database
      4. <PRODUCT_HOME>/conf/datasources/ios-datasources.xml
        1. iOS Database
      5. <PRODUCT_HOME>/conf/datasources/windows-datasources.xml
        1. Windows Database

      Database configs.
      Sample DB config for the User Manager, Registry and App Manager databases in master-datasources.xml (names, URLs and credentials below are placeholders):

                 <datasource>
                         <name>WSO2UM_DB</name>
                         <description>The datasource used for User Manager database</description>
                         <jndiConfig>
                                 <name>jdbc/WSO2UM_DB</name>
                         </jndiConfig>
                         <definition type="RDBMS">
                                 <configuration>
                                         <url>jdbc:mysql://localhost:3306/WSO2UM_DB</url>
                                         <username>root</username>
                                         <password>root</password>
                                         <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                                         <validationQuery>SELECT 1</validationQuery>
                                 </configuration>
                         </definition>
                 </datasource>

      Sample DB config for APIM in MySQL (please note the zeroDateTimeBehavior=convertToNull parameter for MySQL; names and credentials are placeholders):

                 <datasource>
                         <name>WSO2AM_DB</name>
                         <description>The datasource used for API Manager database</description>
                         <jndiConfig>
                                 <name>jdbc/WSO2AM_DB</name>
                         </jndiConfig>
                         <definition type="RDBMS">
                                 <configuration>
                                         <url>jdbc:mysql://localhost:3306/WSO2AM_DB?zeroDateTimeBehavior=convertToNull</url>
                                         <username>root</username>
                                         <password>root</password>
                                         <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                                         <validationQuery>SELECT 1</validationQuery>
                                 </configuration>
                         </definition>
                 </datasource>

      Sample DB config for the CDM, Android, Windows and iOS databases (names and credentials are placeholders):

                 <datasource>
                         <name>WSO2CDM_DB</name>
                         <description>The datasource used for CDM</description>
                         <jndiConfig>
                                 <name>jdbc/WSO2CDM_DB</name>
                         </jndiConfig>
                         <definition type="RDBMS">
                                 <configuration>
                                         <url>jdbc:mysql://localhost:3306/WSO2CDM_DB</url>
                                         <username>root</username>
                                         <password>root</password>
                                         <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                                         <validationQuery>SELECT 1</validationQuery>
                                 </configuration>
                         </definition>
                 </datasource>

      Registry Mounting

      The registry is a virtual, directory-based repository system. It can be federated among multiple databases, which is called registry mounting. All WSO2 servers support registry mounting.

      There are 3 types of registry repositories.
      1. Local  - stores local instance related data.
      2. Config - contains product specific configuration (shared across multiple instances of the same product)
      3. Governance - contains data and configuration shared across the platform

      See Remote Instance and Mount Configuration Details for more information on registry mounting and why it is useful. These steps must be done on all nodes. Follow the steps below to configure this.

      Key manager registry mounting

      <!-- The JNDI name, instance id and target paths below are placeholders; adjust to your setup. -->
      <dbConfig name="mounted_registry">
          <dataSource>jdbc/WSO2REG_DB</dataSource>
      </dbConfig>

      <remoteInstance url="https://localhost:9443/registry">
          <id>instanceid</id>
          <dbConfig>mounted_registry</dbConfig>
          <readOnly>false</readOnly>
          <enableCache>true</enableCache>
          <registryRoot>/</registryRoot>
      </remoteInstance>

      <mount path="/_system/config" overwrite="true">
          <instanceId>instanceid</instanceId>
          <targetPath>/_system/config</targetPath>
      </mount>
      <mount path="/_system/governance" overwrite="true">
          <instanceId>instanceid</instanceId>
          <targetPath>/_system/governance</targetPath>
      </mount>

      Worker and Manager Registry Mounting

      <!-- The JNDI name, instance id and target paths below are placeholders; adjust to your setup. -->
      <dbConfig name="mounted_registry">
          <dataSource>jdbc/WSO2REG_DB</dataSource>
      </dbConfig>

      <remoteInstance url="https://localhost:9443/registry">
          <id>instanceid</id>
          <dbConfig>mounted_registry</dbConfig>
          <readOnly>false</readOnly>
          <enableCache>true</enableCache>
          <registryRoot>/</registryRoot>
      </remoteInstance>

      <mount path="/_system/config" overwrite="true">
          <instanceId>instanceid</instanceId>
          <targetPath>/_system/config</targetPath>
      </mount>
      <mount path="/_system/governance" overwrite="true">
          <instanceId>instanceid</instanceId>
          <targetPath>/_system/governance</targetPath>
      </mount>

      Configuring the Key Manager

      Mount the registry as mentioned above. Configure the following databases for the key manager in <PRODUCT_HOME>/conf/datasources/master-datasources.xml file.

      1. Registry DB
      2. User manager DB
      3. APIM DB

      Change the following in <PRODUCT_HOME>/conf/carbon.xml and make sure that the port offset is set to 0. If it is set to a higher value, make sure to reflect that in the NGINX config too.


      Change the following configs on the <PRODUCT_HOME>/bin/

"" \
         -Diot.keymanager.https.port="443" \

      Change <PRODUCT_HOME>/conf/identity/sso-idp-config.xml as follows to configure single sign-on for the following front-end applications. The highlighted lines show the changes.


      Start the server with the following command: <PRODUCT_HOME>/bin/ start

      Configuring the Manager

      Mount the registry as mentioned above. Configure the following databases for the manager node in the <PRODUCT_HOME>/conf/datasources/master-datasources.xml file.

      1. Registry DB
      2. User Manager DB
      3. APIM DB
      4. APP Manager DB (Includes the following schemas to the same db)
        1. Social DB
        2. Storage DB
      5. CDM DB  (Includes the following schemas to the same db)
        1. Certificate Mgt
        2. Android DB
        3. iOS DB
        4. Windows DB

      Change the following in <PRODUCT_HOME>/conf/carbon.xml and make sure that the port offset is set to 0. If it is set to a higher value, make sure to reflect that in the NGINX config too.

      <HostName> </HostName>

      Change the following highlighted configs on the <PRODUCT_HOME>/bin/

"" \
         -Diot.manager.https.port="443" \"" \
         -Diot.core.https.port="443" \"" \
         -Diot.keymanager.https.port="443" \"" \
         -Diot.gateway.https.port="443" \
         -Diot.gateway.http.port="80" \
         -Diot.gateway.carbon.https.port="443" \
         -Diot.gateway.carbon.http.port="80" \"" \
         -Diot.apimpublisher.https.port="443" \"" \
         -Diot.apimstore.https.port="443" \

      Change the config on <PRODUCT_HOME>/repository/deployment/server/jaggeryapps/store/config/store.json for SSO as follows.

         "ssoConfiguration": {
             "enabled": true,
             "issuer": "store",
             "identityProviderURL": "",
             "keyStorePassword": "wso2carbon",
             "identityAlias": "wso2carbon",
             "responseSigningEnabled": "true",
             "storeAcs" : "",
             "keyStoreName": "/repository/resources/security/wso2carbon.jks",
             "validateAssertionValidityPeriod": true,
             "validateAudienceRestriction": true,
             "assertionSigningEnabled": true

      Change the config on <PRODUCT_HOME>/repository/deployment/server/jaggeryapps/publisher/config/publisher.json for SSO as follows.

         "ssoConfiguration": {
             "enabled": true,
             "issuer": "publisher",
             "identityProviderURL": "",
             "keyStorePassword": "wso2carbon",
             "identityAlias": "wso2carbon",
             "responseSigningEnabled": "true",
             "publisherAcs": "",
             "keyStoreName": "/repository/resources/security/wso2carbon.jks",
             "validateAssertionValidityPeriod": true,
             "validateAudienceRestriction": true,
             "assertionSigningEnabled": true

      Change the config on <PRODUCT_HOME>/repository/deployment/server/jaggeryapps/api-store/site/conf/site.json   for SSO as follows.

         "ssoConfiguration" : {
             "enabled" : "true",
             "issuer" : "API_STORE",
             "identityProviderURL" : "",
             "keyStorePassword" : "",
             "identityAlias" : "",
             "keyStoreName" :"",
             "passive" : "false",
             "signRequests" : "true",
             "assertionEncryptionEnabled" : "false"

      Change the config in <PRODUCT_HOME>/repository/deployment/server/jaggeryapps/android-web-agent/app/conf/config.json to reflect the agent download URL.

         "generalConfig" : {
             "host" : "",
             "companyName" : "WSO2 IoT Server",
             "browserTitle" : "WSO2 IoT Server",
             "copyrightText" : "\u00A9 %date-year%, WSO2 Inc. ( All Rights Reserved."

      Change the config in <PRODUCT_HOME>/repository/deployment/server/jaggeryapps/devicemgt/app/conf/config.json to reflect the barcode scanner host.

       "generalConfig": {
         "host": "",
         "companyName": "WSO2 Carbon Device Manager",
         "browserTitle": "WSO2 Device Manager",
         "copyrightPrefix": "\u00A9 %date-year%, ",
         "copyrightOwner": "WSO2 Inc.",
         "copyrightOwnersSite": "",
         "copyrightSuffix": " All Rights Reserved."

      Start the server with the following command: ‘<PRODUCT_HOME>/bin/ start’

      Configuring the Worker Nodes

      Mount the registry as mentioned above. Configure the following databases for the worker nodes in the <PRODUCT_HOME>/conf/datasources/master-datasources.xml file.

      1. Registry DB
      2. User Manager DB
      3. APIM DB
      4. APP Manager DB (Includes the following schemas to the same db)
        1. Social DB
        2. Storage DB
      5. CDM DB  (Includes the following schemas to the same db)
        1. Certificate Mgt
        2. Android DB
        3. iOS DB
        4. Windows DB

      Change the following in <PRODUCT_HOME>/conf/carbon.xml and make sure that the port offset is set to 0. If it is set to a higher value, make sure to reflect that in the NGINX config too.


      Change the following configs on the <PRODUCT_HOME>/bin/

"" \
         -Diot.manager.https.port="443" \"" \
         -Diot.core.https.port="443" \"" \
         -Diot.keymanager.https.port="443" \"" \
         -Diot.gateway.https.port="443" \
         -Diot.gateway.http.port="80" \
         -Diot.gateway.carbon.https.port="443" \<