WSO2 Venus

Chathurika Erandi De Silva - Simple HTTP Inbound endpoint Sample: How to

What is an Inbound endpoint?

As per my understanding, an inbound endpoint is an entry point. Using this entry point, a message can be mediated directly from the transport layer to the mediation layer. Read more...

The following is a very simple demonstration of inbound endpoints using WSO2 ESB:

1. Create a sequence

2. Save it in the registry

3. Create an Inbound HTTP endpoint using the above sequence

Now it's time to see how to send requests. As I explained at the start of this post, the inbound endpoint is an entry point for a message. If the third step above is inspected, it is illustrated that a port is given for the inbound endpoint. When incoming traffic is directed to that port, the inbound endpoint receives it and passes it straight away to the sequence defined with it. Here the Axis2 layer is skipped.

In the above scenario, the request should be directed to http://localhost:8085/ as given below.

Then the request is directed to the inbound endpoint and passed directly to the sequence.
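For reference, a minimal sketch of such an inbound endpoint definition is given below. The endpoint and sequence names here are hypothetical; the port matches the 8085 used above.

```xml
<inboundEndpoint xmlns="http://ws.apache.org/ns/synapse"
                 name="SimpleHttpInbound"
                 protocol="http"
                 sequence="TestSequence"
                 onError="fault"
                 suspend="false">
    <parameters>
        <!-- Traffic arriving on this port is handed straight to TestSequence -->
        <parameter name="inbound.http.port">8085</parameter>
    </parameters>
</inboundEndpoint>
```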

Asanka Dissanayake - Resizing images in one line in Linux

Re-sizing images in one line

You have probably run into this problem when uploading high-quality pictures to Facebook or some other social network.
You can use the following command to resize the images by 50%. You can change the ratio: just replace "50%" with the value you desire.

First, you need to have ImageMagick.

Install ImageMagick with the following command:

sudo apt-get install imagemagick

Then go to the directory that has the photos to be resized and run the following command:

mkdir -p resize; for f in *; do [ -f "$f" ] || continue; echo "converting $f"; convert "$f" -resize 50% "resize/$f"; done

Then you will see the resized files in the resize directory.

Hope this will save someone’s time .. Enjoy !!!


Thilina Piyasundara - Running your WordPress blog on WSO2 App Cloud

WSO2 App Cloud now supports Docker-based PHP applications. In this blog post I will describe how to install a WordPress blog in this environment. In order to set up a WordPress environment we need two things:

  1. Web server with PHP support
  2. MySQL database

If we have both of these, we can start setting up WordPress. In WSO2 App Cloud we can use the PHP application as the WordPress hosting web server, which is a PHP-enabled Apache web server Docker image. App Cloud also provides a database service where you can easily create and manage MySQL databases via the App Cloud user interface (UI).

For the moment WSO2 App Cloud is in beta, so these Docker images have only a 12-hour lifetime with no data persistence at the file storage level. Data in MySQL databases will be safe unless you override it. If you need more help, don't hesitate to contact Cloud support.

Creating PHP application

Sign up or sign in to WSO2 App Cloud, then click on the "App Cloud beta" section.
It will redirect you to the App Cloud user interface. Click on the 'Add New Application' button in the left-hand corner.
This will present several available application types. Select the 'PHP Web Application' box and continue.
Then it will prompt you with a wizard. In it, give a proper name and version to your application. The name and version will be used to generate the domain name for your application.

There are several options for uploading PHP content to this application. For the moment, I will download the file from the WordPress site and upload it to the application.
In the lower sections of the UI you can set the runtime and container specification. Give the highest Apache version as the runtime and use the minimal container spec, as WordPress does not require much processing power or memory.
If everything is set and the file upload is complete, click on the 'Create' button. You will get the following status pop-up, and it will redirect you to the application when it's complete.
In the application UI, note the URL. Now you can click on the 'Launch App' button and it will redirect you to your PHP application.
The newly installed WordPress site will look like this.
Now we need to provide database details to it. Therefore, we need to create a database and a user.

Creating database

Go back to the Application UI and click on 'Back to listing' button.
In that UI you can see a button in the top left-hand corner called 'Create database'. Click on that.
In the create database UI, give a database name, database username and a password. The password needs to pass the password policy, so you can click on 'Generate password' to generate a secure password easily. If you use the generate password option, make sure you copy the generated password before you proceed with database creation. Otherwise you may need to reset the password.

Also note that the tenant domain will be appended to the database name and a random string to the database username. Therefore, those fields accept only a small number of input characters.
If all is set, click on the 'Create database' button to proceed. After successfully creating the database, it will redirect you to a database management user interface like the following.
Now you can use those details to log in to the newly created MySQL database as follows:
$ mysql -h <hostname> -p'<password>' -u <username> <database>
e.g.:
$ mysql -h <hostname> -p'XXXXXXXXX' -u admin_LeWvxS3l wpdb_thilina
Configuring WordPress

If the database creation was successful and you can log in without any issues, we can use those details to configure WordPress.

Go back to the WordPress UI and click on the 'Let's go' button. It will prompt you with a database configuration wizard. Fill those fields with the details we got in the previous section.
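If the wizard cannot write the configuration file itself, WordPress will show you the contents to place in wp-config.php manually. A sketch with placeholder values (every value below is hypothetical; use the details from the database you created):

```php
<?php
// Database settings for WordPress -- all values below are placeholders.
define('DB_NAME',     'wpdb_xxxxxx');            // database name from App Cloud
define('DB_USER',     'admin_xxxxxx');           // database username from App Cloud
define('DB_PASSWORD', 'your-generated-password'); // the password you copied earlier
define('DB_HOST',     'your-database-host');      // MySQL host shown in the database UI
```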
If the WordPress application can successfully establish a connection to the database using your inputs, it will prompt you with a UI as follows.
On that, click on 'Run the install'. WordPress will then populate the database tables and insert initial data into the given database.

When it's complete, it will ask for some basic configuration such as the site title, admin username and password.
Click on 'Install WordPress' after filling in that information. Then it will redirect you to the WordPress admin console login page. Log in using the username and password you gave in the previous step.
So now WordPress is ready to use. But the existing URL is not very attractive. If you have a domain, you can use it as the base URL of this application.

Setting custom domain (Optional)

In the application UI, click on the three-lines button at the top left, shown in the following image.
It will show some advanced configurations that we can use. In that list select the last one, the 'Custom URL' option.
It will prompt you with the following user interface. Enter the domain name that you wish to use.
But before you validate, make sure you add a DNS CNAME record for that domain pointing to your application's launch URL.

The following is the wizard I got when adding the CNAME via GoDaddy. This user interface and the options for adding a CNAME will be different for your DNS provider.
You can validate the CNAME by running the 'dig' command on Linux or nslookup on Windows.
If the CNAME is working, click on 'Update'.
If that is successful you will get the above notification, and if you access that domain name it will show your newly created WordPress blog.

Hasitha Aravinda - [Sample] Order Processing Process

This sample illustrates the usage of WS-BPEL 2.0, WS-HumanTask 1.1 and rule capabilities in WSO2 Business Process Server and WSO2 Business Rules Server.

Order Processing Flow
  • The client places an order by providing a client ID, item IDs, quantity, shipping address and shipping city.
  • The process then submits the order information to the invoicing web service, which generates an order ID and calculates the total order value.
  • If the total order value is greater than USD 500, the process requires a human interaction to proceed. When an order requires human interaction, the process creates a Review HumanTask for regional clerks. If the review task is rejected by one of the regional clerks, the workflow terminates after notifying the client.
  • Once a regional clerk approves the review task, the workflow invokes the Warehouse Locater rule service to find the nearest warehouse.
  • Upon receiving the nearest warehouse, the process invokes the Place Order web service to finalize the order.
  • Finally, the user is notified of the estimated delivery date.

This sample contains

Please check out this sample from GitHub.

Sameera Jayasoma - Carbon JNDI

WSO2 Carbon provides an in-memory JNDI InitialContext implementation. This is available from WSO2 Carbon 5.0.0 onwards. This module also…

Chathurika Erandi De Silva - Rafting and Middleware QA: are they the same?

Rafting the River Twinkle

Mary is opening a rafting entertainment business based on the river Twinkle. She has a major challenge: her team has to have the best knowledge of the river so that they can give the best experience to the customers.

So what did they do? They decided to raft the river themselves first, because they needed to identify the loopholes and dangers before taking any customers on it.

Rafting and QA?

Don't QA folks do the same thing as Mary and her team did? They carry out various activities to identify what works and what does not. This is crucial, because this information is much needed by the customers who will be using a specific product.

QA and Middleware

Designing tests for middleware is not an easy task. It's not the same as designing tests for a simple web app. Middleware testing can be compared to the rafting experience itself, while assuring the quality of a web app is boating on a large lake.

Assuring the quality of a product such as WSO2 ESB is a challenging task, but I have found the following golden rules of thumb that can be applied to any middleware product.

My Golden Rules of Thumb 

Know where to start

It's important to know where you start designing tests. To achieve this, a good understanding of the functionality, as well as how it is to be implemented, is needed. So obviously, the QA has to be technically competent as well as thorough in knowledge of the respective domain. Reading helps a lot, as does trying things out by yourself, so that knowledge can be gained from both.

Have a proper design

A graphical design lets you evaluate your knowledge as well as your competency in the area. QAs in middleware cannot just stick to black-box testing; they have to go for white-box testing as well, since they have to ensure the quality of the code itself. So a graphical representation is very valuable in designing the tests and deciding what to test.

Have room for improvement

It's an advantage to be self-driven. Finding out about what you are doing, and understanding it, is very important to achieving good middleware testing.

With all of the above, it's easy to put ourselves in the customer's shoes, because in middleware there can be varied and unlimited customer demands. If we follow the above rules of thumb, I believe any QA can become a better one, and more suited to a middleware platform that changes rapidly.

I'll be discussing more on this, this is just a start...

Prabath Siriwardena - JWT, JWS and JWE for Not So Dummies!

JSON Web Token (JWT) defines a container to transport data between interested parties. It became an IETF standard in May 2015 with RFC 7519. There are multiple applications of JWT; OpenID Connect is one of them. In OpenID Connect the id_token is represented as a JWT. In securing both APIs and microservices, JWT is used as a way to propagate and verify end-user identity.
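To make the container format concrete, here is a minimal sketch (my own illustration, not code from the article) that builds and verifies an HS256-signed JWT using only the Python standard library:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as required by the JOSE specs.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    # A JWT is header.payload.signature, each part base64url-encoded.
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

A token built this way is also a JWS (signed JWT); encrypting the payload instead would make it a JWE.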

This article on Medium explains JWT, JWS and JWE in detail, along with their applications.

Dinusha Senanayaka - How to use App Manager Business Owner functionality?

The new WSO2 App Manager release (1.2.0) introduces the capability to define a business owner for each application. (App Manager 1.2.0 is yet to be released at the time of writing; you can download a nightly build from here and try it out until the release is done.)

1. How to define business owners ?

Log in as an admin user to the admin dashboard by accessing the following URL.

This will give you a UI similar to the one below, where you can define new business owners.

Click on "Add Business Owner" option to add new business owners.

All created business owners are listed in the UI as follows, which allows you to edit or delete them from the list.

2. How to associate a business owner with an application?

You can log in to the Publisher by accessing the following URL to create a new app.

In the add new web app UI, you should see a page similar to the following, where you can type and select the business owner for the app.

Once the required data is filled in and the app is ready, change the app life-cycle state to 'Published' to publish the app into the app store.

Once the app is published, users can access it through the App Store at the following URL.

App users can find the business owner details on the App Overview page as shown below.

If you are using the REST APIs to create and publish apps, the following sample commands will help.

Create new policy group
curl -X POST -b cookies -H 'Content-Type: application/x-www-form-urlencoded' http://localhost:9763/publisher/api/entitlement/policy/partial/policyGroup/save  -d "policyGroupName=PG1&throttlingTier=Unlimited&userRoles&anonymousAccessToUrlPattern=false&objPartialMappings=[]&policyGroupDesc='Policy group1'"
{"success" : true, "response" : {"id" : 2}}

Create App
curl -X POST -b cookies -H 'Content-Type: application/x-www-form-urlencoded' http://localhost:9763/publisher/asset/webapp -d 'overview_provider=admin&overview_name=HelloApp1&overview_displayName=HelloApp1&overview_context=%2Fhello1&overview_version=1.0.0&optradio=on&overview_transports=http&overview_webAppUrl=http%3A%2F%2Flocalhost%3A8080%2Fhelloapplication&overview_tier=Unlimited&overview_allowAnonymous=false&overview_skipGateway=false&uritemplate_policyGroupIds=%5B2%5D&uritemplate_javaPolicyIds=[5]&uritemplate_urlPattern0=%2F*&uritemplate_httpVerb0=GET&uritemplate_policyGroupId0=2&autoConfig=on&providers=wso2is-5.0.0&sso_ssoProvider=wso2is-5.0.0&sso_singleSignOn=Enabled&webapp=webapp&overview_treatAsASite=false&overview_businessOwner=Henrry+Alex'

Change app lifecycle state to 'Published'
curl -X PUT -b cookies http://localhost:9763/publisher/api/lifecycle/Submit%20for%20Review/webapp/3d970fa3-1d82-4e64-9b05-777c05de3088
curl -X PUT -b cookies http://localhost:9763/publisher/api/lifecycle/Approve/webapp/3d970fa3-1d82-4e64-9b05-777c05de3088
curl -X PUT -b cookies http://localhost:9763/publisher/api/lifecycle/Publish/webapp/3d970fa3-1d82-4e64-9b05-777c05de3088

Afkham Azeez - AWS Clustering Mode for WSO2 Products

WSO2 Clustering is based on Hazelcast. When WSO2 products are deployed in clustered mode on Amazon EC2, it is recommended to use the AWS clustering mode. As a best practice, add all nodes in a single cluster to the same AWS security group.

To enable AWS clustering mode, you simply have to edit the clustering section in the CARBON_HOME/repository/conf/axis2/axis2.xml file as follows:

Step 1: Enable clustering

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

Step 2: Change membershipScheme to aws

<parameter name="membershipScheme">aws</parameter>

Step 3: Set localMemberPort to 5701

Any value between 5701 & 5800 is acceptable.
<parameter name="localMemberPort">5701</parameter>

Step 4: Define AWS specific parameters

Here you need to define the AWS access key, secret key & security group. The region, tagKey & tagValue are optional, and the region defaults to us-east-1.

<parameter name="accessKey">xxxxxxxxxx</parameter>
<parameter name="secretKey">yyyyyyyyyy</parameter>
<parameter name="securityGroup">a_group_name</parameter>
<parameter name="region">us-east-1</parameter>
<parameter name="tagKey">a_tag_key</parameter>
<parameter name="tagValue">a_tag_value</parameter>

Provide the AWS credentials & the security group you created as values of the above configuration items.  Please note that the user account used for operating AWS clustering needs to have the ec2:DescribeAvailabilityZones & ec2:DescribeInstances permissions.
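For reference, a minimal IAM policy granting just those two permissions might look like the following sketch (attach it to the user or role whose credentials are configured above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
```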

Step 5: Start the server

If everything went well, you should not see any errors when the server starts up, and also see the following log message:

[2015-06-23 09:26:41,674]  INFO - HazelcastClusteringAgent Using aws based membership management scheme

and when new members join the cluster, you should see messages such as the following:
[2015-06-23 09:27:08,044]  INFO - AWSBasedMembershipScheme Member joined [5327e2f9-8260-4612-9083-5e5c5d8ad567]: /

and when members leave the cluster, you should see messages such as the following:
[2015-06-23 09:28:34,364]  INFO - AWSBasedMembershipScheme Member left [b2a30083-1cf1-46e1-87d3-19c472bb2007]: /

The complete clustering section in the axis2.xml file is given below:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="AvoidInitiation">true</parameter>
    <parameter name="membershipScheme">aws</parameter>
    <parameter name="domain">wso2.carbon.domain</parameter>

    <parameter name="localMemberPort">5701</parameter>
    <parameter name="accessKey">xxxxxxxxxxxx</parameter>
    <parameter name="secretKey">yyyyyyyyyyyy</parameter>
    <parameter name="securityGroup">a_group_name</parameter>
    <parameter name="region">us-east-1</parameter>
    <parameter name="tagKey">a_tag_key</parameter>
    <parameter name="tagValue">a_tag_value</parameter>

    <parameter name="properties">
        <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
        <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
        <property name="subDomain" value="worker"/>
    </parameter>
</clustering>

Afkham Azeez - How AWS Clustering Mode in WSO2 Products Works

In a previous blog post, I explained how to configure WSO2 product clusters to work on Amazon Web Services infrastructure. In this post I will explain how it works.

 WSO2 Clustering is based on Hazelcast.

All nodes having the same set of cluster configuration parameters will belong to the same cluster. What Hazelcast does is call the AWS APIs and get the set of nodes that satisfy the specified parameters (region, securityGroup, tagKey, tagValue).

When the Carbon server starts up, it creates a Hazelcast cluster. At that point, it calls the EC2 APIs & gets the list of potential members in the cluster. To call the EC2 APIs, it needs the AWS credentials. This is the only time these credentials are used; the AWS APIs are only used at startup to learn about other potential members of the cluster.

Once the EC2 instances are retrieved, a Hazelcast node will try to connect to potential members that are running on the same port as its localMemberPort. By default this port is 5701. If that port is open, it will try to do a Hazelcast handshake and add that member if it belongs to the same cluster domain (group). The node then repeats the process on the next port (i.e. 5702 by default), in increments of 1, until a port is not reachable.

Here is the pseudocode:

for each EC2 instance e
     port = localMemberPort
     while canConnect(e, port)
          addMemberIfPossible(e, port)    // a Hazelcast member is running & in the same domain
          port = port + 1
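The same loop as a runnable sketch, with the connection probe and join attempt stubbed out as callbacks (the function names are mine, not Hazelcast's):

```python
def discover_members(instances, local_member_port, can_connect, try_join):
    """Probe consecutive ports on each EC2 instance, starting at localMemberPort.

    can_connect(host, port) -> bool: whether a TCP connection can be opened.
    try_join(host, port) -> member or None: joins only if the node there
    belongs to the same cluster domain (group).
    """
    members = []
    for host in instances:
        port = local_member_port
        # Keep probing the next port until one is not reachable.
        while can_connect(host, port):
            member = try_join(host, port)
            if member is not None:
                members.append(member)
            port += 1
    return members
```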

Subsequently, the connections established between members are point-to-point TCP connections. Member failures are detected through a TCP ping. So once member discovery is done, the rest of the interactions in the cluster are the same as when the multicast & WKA (Well-Known Address) modes are used.

With that facility, you don't have to provide any member IP addresses or hostnames, which may be impossible on an IaaS such as EC2.

NOTE: This scheme of trying to establish connections with open Hazelcast ports from one EC2 instance to another does not violate any AWS security policies because the connection establishment attempts are made from nodes within the same security group to ports which are allowed within that security group.

Prabath Siriwardena - GSMA Mobile Connect vs OpenID Connect

Mobile Connect is an initiative by GSMA. The GSMA represents the interests of mobile operators worldwide, uniting nearly 800 operators with more than 250 companies in the broader mobile ecosystem, including handset and device makers, software companies, equipment providers and internet companies, as well as organizations in adjacent industry sectors. The Mobile Connect initiative by GSMA focuses on building a standard for user authentication and identity services between mobile network operators (MNO) and service providers.

This article on Medium explains the GSMA Mobile Connect API and how it differs from the OpenID Connect core specification.

Sameera Jayasoma - Startup Order Resolving Mechanisms in OSGi

There are a few mechanisms in OSGi to deal with bundle startup order. The most obvious approach is to use "start levels". The other approach…

Dhananjaya Jayasinghe - Applying security for ESB proxy services...

Security is a major factor we consider in each and every deployment. WSO2 Enterprise Service Bus is also capable of securing services.

WSO2 ESB 4.8 and previous versions had the capability of applying security to a proxy service from the Admin Console, as in [1].

However, from ESB 4.9.0 onwards we can no longer apply security to a proxy service from the Admin Console of the ESB. We need to use WSO2 Developer Studio version 3.8 for this requirement with ESB 4.9.0.

You can find the documentation on applying security to an ESB 4.9.0-based proxy service here [2]. However, I would like to add a small modification to the doc in [2] at the end.

After securing the proxy according to the document, we need to create the Composite Application project and export the CAR file. When exporting the CAR file, by default the server role of the registry project is selected as GovernanceRegistry, as in the image below.

When we deploy that CAR file in the ESB, we get the following exception [3] due to the above server role.

In order to fix the problem, we need to change the server role to ESB as below, since we are going to deploy it in the ESB.


 [2016-04-12 14:34:48,658] INFO - ApplicationManager Deploying Carbon Application :  
[2016-04-12 14:34:48,669] INFO - EndpointDeployer Endpoint named 'SimpleStockQuote' has been deployed from file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/
[2016-04-12 14:34:48,670] INFO - ProxyService Building Axis service for Proxy service : myTestProxy
[2016-04-12 14:34:48,671] WARN - SynapseConfigUtils Cannot convert null to a StreamSource
[2016-04-12 14:34:48,671] ERROR - ProxyServiceDeployer ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(
at org.apache.axis2.deployment.repository.util.WSInfoList.update(
at org.apache.axis2.deployment.RepositoryListener.update(
at org.apache.axis2.deployment.RepositoryListener.checkServices(
at org.apache.axis2.deployment.RepositoryListener.startListener(
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(
at java.util.concurrent.Executors$
at java.util.concurrent.FutureTask.runAndReset(
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
at java.util.concurrent.ScheduledThreadPoolExecutor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
[2016-04-12 14:34:48,672] ERROR - AbstractSynapseArtifactDeployer Deployment of the Synapse Artifact from file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed!
org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(
at org.apache.axis2.deployment.repository.util.WSInfoList.update(
at org.apache.axis2.deployment.RepositoryListener.update(
at org.apache.axis2.deployment.RepositoryListener.checkServices(
at org.apache.axis2.deployment.RepositoryListener.startListener(
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(
at java.util.concurrent.Executors$
at java.util.concurrent.FutureTask.runAndReset(
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
at java.util.concurrent.ScheduledThreadPoolExecutor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
... 22 more
[2016-04-12 14:34:48,673] INFO - AbstractSynapseArtifactDeployer The file has been backed up into : NO_BACKUP_ON_WORKER.INFO
[2016-04-12 14:34:48,673] ERROR - AbstractSynapseArtifactDeployer Deployment of synapse artifact failed. Error reading /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(
at org.apache.axis2.deployment.repository.util.WSInfoList.update(
at org.apache.axis2.deployment.RepositoryListener.update(
at org.apache.axis2.deployment.RepositoryListener.checkServices(
at org.apache.axis2.deployment.RepositoryListener.startListener(
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(
at java.util.concurrent.Executors$
at java.util.concurrent.FutureTask.runAndReset(
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
at java.util.concurrent.ScheduledThreadPoolExecutor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
... 20 more
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
... 22 more
[2016-04-12 14:34:48,674] ERROR - ApplicationManager Error occurred while deploying Carbon Application
org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(
at org.apache.axis2.deployment.repository.util.WSInfoList.update(
at org.apache.axis2.deployment.RepositoryListener.update(
at org.apache.axis2.deployment.RepositoryListener.checkServices(
at org.apache.axis2.deployment.RepositoryListener.startListener(
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(
at java.util.concurrent.Executors$
at java.util.concurrent.FutureTask.runAndReset(
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
at java.util.concurrent.ScheduledThreadPoolExecutor$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$
Caused by: org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
... 20 more
Caused by: org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/ : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(
... 20 more
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(
... 22 more

Amila MaharachchiTomcat returns 400 for requests with long headers

We noticed this while troubleshooting an issue which popped up in WSO2 Cloud. We have configured SSO for the API Publisher and Store at WSO2 Identity Server. SSO was working fine except for one scenario. We checked the SSO configuration and couldn't find anything wrong.

Then we checked the load balancer logs. They revealed that the LB was passing the request to the server, i.e. the Identity Server, but getting a 400 back from it. We then looked at the Identity Server logs and found nothing printed there. But there were entries in the Identity Server's access log telling us it was receiving the request and not letting it through. Instead, it was dropping it as a bad request and returning a 400 response.

Some searching on the internet revealed that this kind of rejection can occur if the header values are too long. In the SAML SSO scenario, the Referer header carries a lengthy value, which in our case was about 4000 characters long. Further digging turned up the maxHttpHeaderSize property in the Tomcat connector configuration, which sets the maximum allowed HTTP header size in bytes. You can read about this config from here.
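For example, the limit can be raised from Tomcat's 8 KB default to 16 KB on the HTTP connector (in a vanilla Tomcat this lives in conf/server.xml; in WSO2 products, in repository/conf/tomcat/catalina-server.xml; the port and other attributes below are illustrative):

```xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxHttpHeaderSize="16384" />
```

After changing the value, the server needs a restart for the connector setting to take effect.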

Once we increased that value, everything started working fine. So, I thought of blogging this down for the benefit of people using Tomcat, and also WSO2 products, since WSO2 products have Tomcat embedded in them.

Dinusha SenanayakaExposing a SOAP service as a REST API using WSO2 API Manager

This post explains how we can publish an existing SOAP service as a REST API using WSO2 API Manager.

We will be using a sample data service called "OrderSvc" as the SOAP service, which can be deployed in WSO2 Data Services Server. But this could be any SOAP service.

1. Service Description of ‘OrderSvc’ SOAP Backend Service

This “orderSvc” service provides a WSDL with 3 operations (“submitOrder”, “cancelOrder”, “getOrderStatus”).

submitOrder operation takes ProductCode and Quantity as parameters.
Sample request :
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">


cancelOrder operation takes OrderId as parameter and does an immediate cancellation and returns a confirmation code.
Sample request:
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

The orderStatus operation takes the orderId as a parameter and returns the order status as the response.
Sample request :
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

We need to expose this "OrderSvc" SOAP service as a REST API using API Manager. Once exposed as a REST API, the “submitOrder”, “cancelOrder” and “getOrderStatus” operations should map to REST resources as below, which take the user parameters as URL parameters.

“/submitOrder” (POST) => request does not contain order id or date; response is the full order payload.

“/cancelOrder/{id}” (GET) => does an immediate cancellation and returns a confirmation code.

“/orderStatus/{id}” (GET) => response is the order header (i.e., payload excluding line items).

Deploying the Data-Service :

1. Log in to MySQL and create a database called “demo_service_db”. (The database name can be anything; we need to update the data service (.dbs file) accordingly.)

mysql> create database demo_service_db;
mysql> use demo_service_db;

2. Execute the dbscript given here on the above created database. This will create two tables ‘CustomerOrder’, ‘OrderStatus’ and one stored procedure ‘submitOrder’. Also it will insert some sample data into two tables.

3. Copy the MySQL JDBC driver into the DSS_HOME/repository/components/lib directory.

4. Download the data service file given here. Before deploying this .dbs file, we need to modify the data source section defined in it. That is, in the downloaded orderSvc.dbs file, change the following properties: point the JDBC URL to the database created in step 1, and change the MySQL username/password if they differ from the values defined here.

<config id="ds1">
     <property name="driverClassName">com.mysql.jdbc.Driver</property>
     <property name="url">jdbc:mysql://localhost:3306/demo_service_db</property>
     <property name="username">root</property>
     <property name="password">root</property>

5. Deploy the orderSvc.dbs file in the Data Services Server by copying it into the “wso2dss-3.2.1/repository/deployment/server/dataservices” directory, then start the server.

6. Before exposing the service through API Manager, check whether all three operations work as expected using the Try-It tool or SoapUI.

Sample request :
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

Response :
<soapenv:Envelope xmlns:soapenv="">
     <submitOrderResponse xmlns="">

Sample request:
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

<soapenv:Envelope xmlns:soapenv="">
     <axis2ns1:REQUEST_STATUS xmlns:axis2ns1="">SUCCESSFUL</axis2ns1:REQUEST_STATUS>

Sample request :
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

<soapenv:Envelope xmlns:soapenv="">
     <OrderStatus xmlns="">


2. Configuring API Manager

1. Download the custom sequence given here and save it to the APIM registry location “/_system/governance/apimgt/customsequences/in/”. This can be done by logging in to the API Manager Carbon management console.

In the left menu section, expand Resources -> Browse -> go to "/_system/governance/apimgt/customsequences/in" -> click on "Add Resource" -> browse the file system and upload the "orderSvc_supporting_sequence.xml" sequence that you downloaded above. Then click "Add". This step saves the downloaded sequence into the registry.

4. Create the orderSvc API by wrapping the orderSvc SOAP service.

Log in to the API Publisher and create an API with the following info.

Name: orderSvc
Context: ordersvc
Version: v1

Resource definition1
URL Pattern: submitOrder
Method: POST

Resource definition2
URL Pattern: cancelOrder/{id}
Method: GET

Resource definition3
URL Pattern: orderStatus/{id}
Method: GET

Endpoint Type: Select the endpoint type as Address endpoint, then go to the “Advanced Options” and set the message format to “SOAP 1.1”.
Production Endpoint:https://localhost:9446/services/orderSvc/ (Give the OrderSvc service endpoint)

Tier Availability : Unlimited

Sequences : Click on the Sequences checkbox and select the previously saved custom sequence under “In Flow”.

Publish the API to the gateway.

We are done with the API creation.

Functionality of the custom sequence "orderSvc_supporting_sequence.xml"

The OrderSvc backend service expects a SOAP request, while the user invokes the API by sending parameters in the URL (i.e. cancelOrder/{id}, orderStatus/{id}).

This custom sequence takes care of building the SOAP payload required for the cancelOrder and orderStatus operations by looking at the incoming request URI and its parameters.

Using a switch mediator, it reads the request path, i.e.,

<switch xmlns:soapenv="" xmlns:ns3="http://org.apache.synapse/xsd" source="get-property('REST_SUB_REQUEST_PATH')">

It then checks the value of the request path against a regular expression and constructs the payload for either cancelOrder or orderStatus according to the matched resource, i.e.,

<case regex="/cancelOrder.*">
<payloadFactory media-type="xml">
<soapenv:Envelope xmlns:soapenv="">
<soapenv:Body xmlns:dat="">
<arg evaluator="xml" expression="get-property('')"/>
<header name="Action" scope="default" value="urn:cancelOrder"/>

<case regex="/orderStatus.*">
<payloadFactory media-type="xml">
<soapenv:Envelope xmlns:soapenv="">
<soapenv:Body xmlns:dat="">
<arg evaluator="xml" expression="get-property('')"/>
<header name="Action" scope="default" value="urn:orderStatus"/>

Test OrderSvc API published in API Manager

Log in to the API Store, subscribe to the OrderSvc API and generate an access token. Invoke the orderStatus resource as given below. This will call the OrderSvc SOAP service and return the response.

curl -v -H "Authorization: Bearer  _smfAGO3U6mhzFLro4bXVEl71Gga" http://localhost:8280/ordersvc/v1/orderStatus/3

Chandana NapagodaConfigure External Solr server with Governance Registry

In WSO2 Governance Registry 5.0.0, we have upgraded the Apache Solr version to the 5.2 release. With that, you can connect WSO2 Governance Registry to an external Solr server or Solr cluster. External Solr integration brings comprehensive administration interfaces, high scalability, fault tolerance, easy monitoring and many more Solr capabilities.

Let me explain how you can connect WSO2 Governance Registry server with an external Apache Solr server.

1). First, you have to download Apache Solr 5.x.x from the below location.
Please note that we have verified only the Solr 5.2.0 and 5.2.1 versions.

2). Then unzip the Solr zip file. Once unzipped, its content will look like the below.

The bin folder contains the scripts to start and stop the server. Before starting the Solr server, you have to make sure the JAVA_HOME variable is set properly. Apache Solr ships with an inbuilt Jetty server.

3). You can start the Solr server by issuing the "solr start" command from the bin directory. Once the Solr server has started properly, the following message will be displayed in the console: "Started Solr server on port 8983 (pid=5061). Happy searching!"

By default, the server starts on port "8983", and you can access the Solr admin console by navigating to "http://localhost:8983/solr/".

4) To create a new Solr core, you have to copy the Solr configuration directory (registry-indexing) found in G-REG_HOME/repository/conf/solr into the SOLR_HOME/server/solr/ directory. Please note that only the "registry-indexing" directory needs to be copied from the G-Reg pack. This will create a new Solr core named "registry-indexing".

5). After creating the "registry-indexing" Solr core, you can see it in the Solr admin console as below.

6). To integrate the newly created Solr core with WSO2 Governance Registry, you have to modify the registry.xml file located in the <greg_home>/repository/conf directory. There you have to add "solrServerUrl" under indexingConfiguration as follows, and comment out the "IndexingHandler".

    <!-- This defines the index configuration which is used in the metadata search feature of the registry -->

<!--number of resources submitted to a given indexing thread -->
<!--number of worker threads for indexing -->
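Putting the pieces together, the indexingConfiguration section would look roughly like this (a sketch based on the stock registry.xml; the batchSize and indexerPoolSize element names and their values are the usual defaults and may differ in your version):

```xml
<indexingConfiguration>
    <!-- points indexing at the external Solr core created above -->
    <solrServerUrl>http://localhost:8983/solr/registry-indexing</solrServerUrl>
    <!-- number of resources submitted to a given indexing thread -->
    <batchSize>50</batchSize>
    <!-- number of worker threads for indexing -->
    <indexerPoolSize>50</indexerPoolSize>
</indexingConfiguration>
```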

7). After completing the external Solr configuration as above, you have to start the WSO2 Governance Registry server. If you have configured the external Solr integration properly, you will notice the below log message in the Governance Registry server startup logs (wso2carbon log).

[2015-07-11 12:50:22,306] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Http Sorl server initiated at: http://localhost:8983/solr/registry-indexing

Further, you can view indexed data by querying via Solr admin console as well.

Happy Indexing and Searching...!!!

Note (added in March 2016): If you are moving from an older G-Reg version to the latest one, you have to replace the existing Solr core (registry-indexing) with the latest one available in the G-Reg pack.

Chamila WijayarathnaExtending WSO2 Identity Server to Engage Workflows with non User Store Operations

In my previous blog, I described adding more control to a user store operation using workflows. By default, the Identity Server only supports engaging workflows with user store operations. But is this limited to user store operations? No; you can engage any operation with a workflow, as long as there exists an interceptor where we can start a workflow when the event occurs.

Before seeing how to achieve this, let's try out a simple example. Here, I am going to demonstrate controlling the 'add service provider' operation using workflows. For this I am going to use the sample workflow event handler which is available at [1].

Let's first clone the source code of this sample handler and build it. Then copy the jar created in the handler's target folder into the repository/components/dropins folder of your Identity Server. Now start the Identity Server.

Now as usual, first you have to create the roles and users required for the approval process and then create a workflow with desired approval steps as I described in my previous blog [2].

If you have followed my previous blog [2], the steps up to this point should be familiar. After creating the workflow with approval steps, the next part is engaging the operation with the workflow. Here we are going to engage the 'add service provider' operation, which is a non user store operation, with this workflow.

In the 'add workflow engagement' page, by default, it will only show user store operations as the operations that can be engaged with workflow. But now since we have added new service-provider workflow handler, it will show service provider related operations in that UI as well.

Now we can fill the rest of the 'add workflow engagement' form in the usual way we did.

Now we have engaged the 'add service provider' operation with an approval process. If we add a new service provider, it will not be added directly until it is accepted in the approval process. Only after it is approved will it be shown in the UI and become usable.

So now we know that not only user store operations, but other operations too, can be engaged with workflows. The most challenging part is writing the custom event handler. I'm not going to describe that here, even though it's the most important part, because it's already covered in the WSO2 docs at [3].


Chandana NapagodaLifecycle Management with WSO2 Governance Registry

SOA lifecycle management is one of the core requirements of an enterprise governance suite. WSO2 Governance Registry 5.2.0 supports multiple-lifecycle management capability out of the box. It also allows asset authors to extend the out-of-the-box lifecycle functionality with their own extensions, based on organizational requirements. Further, the WSO2 Governance Registry supports multiple points of extensibility; handlers, lifecycles and customized asset UIs (RXT based) are the key types of extensions available.


A lifecycle is defined with an SCXML-based XML element that contains:
  • A name 
  • One or more states
  • A list of check items with role based access control 
  • One or more actions that are made available based on the items that are satisfied 
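As an illustration, a minimal two-state lifecycle in this format might look like the following (a sketch only; the lifecycle name, state names, check item and role are made up, and the structure follows the sample configuration shown in the admin console):

```xml
<aspect name="SampleLifeCycle"
        class="org.wso2.carbon.governance.registry.extensions.aspects.DefaultLifeCycle">
    <configuration type="literal">
        <lifecycle>
            <scxml xmlns="http://www.w3.org/2005/07/scxml"
                   version="1.0"
                   initialstate="Development">
                <state id="Development">
                    <datamodel>
                        <data name="checkItems">
                            <!-- check item gated by a role, required before Promote -->
                            <item name="Code Completed" forEvent="Promote">
                                <permissions>
                                    <permission roles="internal/reviewer"/>
                                </permissions>
                            </item>
                        </data>
                    </datamodel>
                    <transition event="Promote" target="Production"/>
                </state>
                <state id="Production">
                    <transition event="Demote" target="Development"/>
                </state>
            </scxml>
        </lifecycle>
    </configuration>
</aspect>
```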

Adding a Lifecycle
To add a new lifecycle aspect, click on the Lifecycles menu item under the Govern section of the extensions tab in the admin console. It will show you a user interface where you can add your SCXML based lifecycle configuration. A sample configuration will be available for your reference at the point of creation.

Adding Lifecycle to Asset Type
The default lifecycle for a given asset type is picked up from the RXT definition. When an asset is created, the lifecycle is automatically attached to the asset instance. The lifecycle attribute should be defined in the RXT definition under the artifactType element as below.
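For example (a sketch; the artifactType attributes are illustrative and the rest of the RXT definition is omitted):

```xml
<artifactType type="application/vnd.wso2-service+xml" shortName="service"
              singularLabel="Service" pluralLabel="Services">
    <!-- default lifecycle attached to every new asset of this type -->
    <lifecycle>ServiceLifeCycle</lifecycle>
    <!-- storagePath, nameAttribute, ui, content, etc. go here -->
</artifactType>
```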


Multiple Lifecycle Support

There can be instances where a given asset goes through more than one lifecycle. As an example, a given service can have a development lifecycle as well as a deployment lifecycle. Such state changes cannot be visualized through a single lifecycle, and the current lifecycle state depends on the context (development or deployment) you are looking at.

Adding Multiple Lifecycle to Asset Type
Adding multiple lifecycles to an asset type can be done in two primary ways.

Through the asset definition: here, you can define multiple lifecycle names in a comma-separated manner. The lifecycle name defined first is considered the default/primary lifecycle. The lifecycles specified in the asset definition (RXT configuration) are attached to the asset when it is created. An example of a multiple-lifecycle configuration is below.
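A sketch of the comma-separated form inside the artifactType element (lifecycle names are illustrative; the first, ServiceLifeCycle, becomes the primary lifecycle):

```xml
<lifecycle>ServiceLifeCycle,DeploymentLifeCycle</lifecycle>
```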

Using Lifecycle Executor
Using custom executor Java code, you can assign another lifecycle to the asset. Executors are one of the extension points that help extend WSO2 G-Reg functionality, and they are associated with a Governance Registry lifecycle. A custom lifecycle executor class needs to implement the Execution interface provided by WSO2 G-Reg. You can find more details in the article below [Lifecycles and Aspects].

Prabath SiriwardenaThirty Solution Patterns with the WSO2 Identity Server

WSO2 offers a comprehensive open source product stack to cater to all needs of a connected business. With the single code base structure, WSO2 products are weaved together to solve many enterprise-level complex identity management and security problems. By believing in open standards and supporting most of the industry-leading protocols, the WSO2 Identity Server is capable of providing seamless integration with a wide array of vendors in the identity management domain. The WSO2 Identity Server is one of the most powerful open source Identity and Entitlement Management servers, released under the most business-friendly Apache 2.0 license.

This article on medium explains thirty solution patterns, built with the WSO2 Identity Server and other WSO2 products to solve enterprise-level security and identity management related problems.

Chamara SilvaHow to generate random strings or number from Jmeter

While testing SOAP services, we often need JMeter scripts to generate random strings or numbers as service parameters. I had a SOAP service that takes a name (string value) and an age (integer value) continuously, and each value needed to be a random, unique value rather than a repeated one. I used the __Random and __RandomString functions to generate these values. The following JMeter script may help.
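As a sketch (element names and namespaces are hypothetical), the SOAP sampler's body can embed JMeter's __RandomString and __Random functions directly. Note that __Random can repeat values; for strictly unique values, a function such as __UUID or __counter is safer:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://sample.org/service">
   <soapenv:Body>
      <ser:addUser>
         <!-- 8 random lowercase letters per request -->
         <ser:name>${__RandomString(8,abcdefghijklmnopqrstuvwxyz)}</ser:name>
         <!-- random integer between 18 and 80 -->
         <ser:age>${__Random(18,80)}</ser:age>
      </ser:addUser>
   </soapenv:Body>
</soapenv:Envelope>
```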

Dhananjaya jayasingheHow to get the Client's IP Address in WSO2 API Manager/ WSO2 ESB

Middleware solutions are designed to communicate with multiple parties, and most of them are integrations. While integrating different systems, it is often required to validate requests and collect statistics. When it comes to collecting statistics, the client's (request originator's) IP address plays a vital role.

In order to publish the client's IP to the stat collector, we need to extract the client's IP from the request received by the server.

When the deployment contains WSO2 API Manager or WSO2 Enterprise Service Bus, we can obtain the client's IP address using a Property mediator in the in-sequence.

If the deployment has a load balancer in front of the ESB/API Manager, we can use the X-Forwarded-For header as explained in Firzhan's blog post.

In a deployment which does not have a load balancer in front of WSO2 ESB / API Manager, we can use REMOTE_ADDR to obtain the client's IP address.

We can extract it as follows using a property mediator.

 <property name="api.ut.REMOTE_ADDR"

Then we can use it in the sequence. As an example, if we extract the IP address as above and log it, the Synapse configuration will look like below.

<property name="api.ut.REMOTE_ADDR"
<log level="full">
<property name="Actual Remote Address"

You can use this in the InSequence of ESB or API Manager to obtain the client's IP Address.
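Putting it together, a sketch of the in-sequence fragment (assuming the standard axis2-scope REMOTE_ADDR transport property; the log label is illustrative):

```xml
<inSequence>
    <!-- read the remote address from the axis2 scope into a default-scope property -->
    <property name="api.ut.REMOTE_ADDR"
              expression="get-property('axis2', 'REMOTE_ADDR')"/>
    <log level="full">
        <property name="Actual Remote Address"
                  expression="get-property('api.ut.REMOTE_ADDR')"/>
    </log>
</inSequence>
```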

Chathurika Erandi De SilvaWhy Message Enriching ? - A Simple Answer

 What is Message Enriching?

Message enriching is normally needed when the incoming request does not contain all the information the backend expects. The message can be enriched by inserting data into the request mid-flow as needed.

Graphically it can be illustrated as below

Message Enriching

Golden Rule of Message Enriching (my version)

Of course there are a lot of use cases where enriching can be applied, but to keep things simple they can be narrowed down to the following three:

1. The message is enriched through a calculation using the existing values
2. The message is enriched using values from environment
3. The message is enriched using values from external systems, databases, etc...

WSO2 ESB in to the equation 

Now we have to see where WSO2 ESB fits in the picture. The Enrich mediator can be used to achieve message enriching. The following samples are basic demonstrations designed to cover the above-mentioned "Golden Rules".

The message is enriched through a calculation using the existing values

For demonstration, I have created a sample sequence with the Enrich mediator in it. This sequence takes the request, matches a parameter in it against a predefined value, and enriches the message when the condition is true.

Sample Sequence

In the above, when a request reaches the ESB with customerType as 1, 2, 3 or 4, a reference value is assigned to customerType, because the backend expects customerType to be one of gold, platinum, silver or bronze.
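A sketch of one branch of such a sequence (the filter source, the matched value and the replacement element are illustrative):

```xml
<filter source="//customerType" regex="1">
    <then>
        <enrich>
            <!-- replace the numeric code with the value the backend expects -->
            <source type="inline" clone="true">
                <customerType>gold</customerType>
            </source>
            <target action="replace" xpath="//customerType"/>
        </enrich>
    </then>
</filter>
```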

Now let's look at the Golden Rule #2

The message is enriched using values from environment

This rule is relatively simple. If the request is missing a certain value, and that value can be obtained from the environment, then it is injected into the request.

Sample Sequence


In the above, a SystemDate element is inserted into the request and its value is later populated through the Enrich mediator.
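A sketch of how the environment value can be injected (SYSTEM_DATE is a standard Synapse get-property value; the date format, property name and target XPath are illustrative):

```xml
<!-- capture the current date from the environment -->
<property name="systemDate" expression="get-property('SYSTEM_DATE', 'yyyy-MM-dd')"/>
<enrich>
    <!-- copy the property value into the request payload -->
    <source type="property" clone="true" property="systemDate"/>
    <target action="replace" xpath="//request/SystemDate"/>
</enrich>
```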

The final Golden Rule, Rule # 3

The message is enriched using values from external systems, databases, etc...

This is the simplest. Put simply, this rule says: if you don't have it, ask someone who does and include it in the request.

Sample Sequence


In the above, the request doesn't have the customer id; it is inserted and populated through the Enrich mediator. The customer id is obtained from the database using the DBLookup mediator.

Winding up: the Golden Rules are purely based on my understanding, and of course anyone who reads further can come up with a better set of Golden Rules.

Asanka DissanayakeValidate a URL with Filter Mediator in WSO2 ESB

In day-to-day development, you may have come across this requirement many times: when you receive a URL as a field in the request, you may need to validate it,
whether the URL has the correct structure or whether it contains any disallowed characters.

This can be achieved using filter mediator in WSO2 ESB.

The key is figuring out the correct regular expression. The code structure is as follows.

<filter source="//url" regex="REGEX">
	<then>
		<log level="custom">
			<property name="propStatus" value="url is valid" />
		</log>
	</then>
	<else>
		<log level="custom">
			<property name="propStatus" value="!!!!url is not valid!!!!" />
		</log>
	</else>
</filter>

Refer to the following table to figure out the regular expression for each use case.

Regex | Use case
http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)* | http or https with host name/domain name with optional port
http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w\?=&]* | url with query parameters; special characters other than ?, & and = not allowed
http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w]* | url without query parameters
http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w\W]* | url with query parameters and special characters

You can play around with this using the following API.



api url :



Try changing the regex and url values from the above table.


Happy Coding !!!:)

Asanka DissanayakeCheck for existence of element/property with Filter Mediator in WSO2 ESB

In day-to-day development, you sometimes need to check for the existence of an element or property, in other words, to check whether something is null. This can be done easily in WSO2 ESB using the Filter mediator.

Learning by example is the best way.

Let’s take a simple example. Suppose there is a payload incoming like below


Suppose you need to read this field into a property, and suppose <role/> is an optional element. In that case, what are you going to do?

The expected behavior is: if role does not come with the payload, the default role “generic_user” is used.

So the following code segment of filter mediator will do it for you.

<filter xpath="//role">
   <then>
      <log level="custom">
         <property name="propStatus" value="role is available" />
      </log>
      <property name="userRole" expression="//role" />
   </then>
   <else>
      <log level="custom">
         <property name="propStatus" value="!!!!role is not available!!!!" />
      </log>
      <property name="userRole" value="generic_user" />
   </else>
</filter>

“xpath” attribute in filter element provides the xpath expression to be evaluated.
If the xpath expression is evaluated to “true”, synapse code in the “then” block will be executed.
Otherwise code in the else block will be executed.

If the evaluation of the XPath returns something non-null, it is considered true; if it returns null, it is considered false.

If you want to play with this, create filter.xml with the following content and copy it to


and make an HTTP POST to http://localhost:8280/user/rolecheck with following payloads.


Check the log file and you will see following output.

[2016-04-11 22:49:38,041] INFO - LogMediator propStatus = role is available
[2016-04-11 22:49:38,042] INFO - LogMediator status = ====Final user Role====, USER_ROLE = admin


Check the log file and you will see following output.

[2016-04-11 22:49:43,083] INFO - LogMediator propStatus = !!!!role is not available!!!!
[2016-04-11 22:49:43,084] INFO - LogMediator status = ====Final user Role====, USER_ROLE = generic_user

Hope this helps someone:) happy coding ….

Prabath SiriwardenaSecuring Microservices with OAuth 2.0, JWT and XACML

Microservices is one of the most trending buzzword, along with the Internet of Things (IoT). Everyone talks about microservices and everyone wants to have microservices implemented. The term ‘microservice’ was first discussed at a software architects workshop in Venice, in May 2011. It’s being used to explain a common architectural style they’ve been witnessing for some time. With the granularity of the services and the frequent interactions between them, securing microservices is challenging. This post, which I published on medium presents a security model based on OAuth 2.0, JWT and XACML to overcome such challenges.

Ushani BalasooriyaHow to hide credentials used in mediation configuration using Secure Vault in WSO2 ESB

Even though we use Secure Vault to encrypt passwords, it is not possible to use Secure Vault directly in the mediation configuration. As an example, imagine you need to hide a password given in a proxy.

All you have to do is use the Secure Vault password management screen in WSO2 ESB.

1. Run sh -Dconfigure and enable secure vault
2. Start the WSO2 ESB with
3. Go to Manage -> Secure Vault Tool and then click Manage Passwords
4. You will see the below screen.

Srinath PereraUnderstanding Causality and Big Data: Complexities, Challenges, and Tradeoffs

image credit: Wikipedia, Amitchell125

“Does smoking cause cancer?”

We have heard that a lot of smokers have lung cancer. However, can we mathematically tell that smoking causes cancer?

We can look at cancer patients and check how many of them are smoking. We can look at smokers and check whether they develop cancer. Let's assume the answers come up 100%. That is, hypothetically, we see a 1–1 relationship between smoking and cancer.

Ok great, can we claim that smoking causes cancer? Apparently it is not easy to make that claim. Let's assume that there is a gene that causes cancer and also makes people like to smoke. If that is the case, we will see the 1–1 relationship between cancer and smoking. In this scenario, cancer is caused by the gene. That means there may be an innocent explanation for the 1–1 relationship we saw between cancer and smoking.

This example shows two interesting concepts: correlation and causality from statistics, which play a key role in Data Science and Big Data. Correlation means that we will see two readings behave together (e.g. smoking and cancer) while causality means one is the cause of the other. The key point is that if there is a causality, removing the first will change or remove the second. That is not the case with correlation.

Correlation does not mean Causation!

This difference is critical when deciding how to react to an observation. If there is causality between A and B, then A is responsible. We might decide to punish A in some way or we might decide to control A. However, correlation alone does not warrant such actions.

For example, as described in the post The Blagojevich Upside, the state of Illinois found that having books at home is highly correlated with better test scores, even if the kids have not read them. So they decided to distribute books. In retrospect, we can easily find a common cause: having books at home could be an indicator of how studious the parents are, which helps with better scores. Sending books home, however, is unlikely to change anything.

You see correlation without causality when there is a common cause that drives both readings. This is a common theme of the discussion. You can find a detailed discussion on causality in the talk “Challenges in Causality” by Isabelle Guyon.

Can we prove Causality?

Great, how can we show causality? Causality is measured through randomized experiments (a.k.a. randomized trials or A/B tests). A randomized experiment selects samples and randomly breaks them into two groups called the control and the variation. We then apply the cause (e.g. send a book home) to the variation group and measure the effects (e.g. test scores). Finally, we measure the causality by comparing the effect in the control and variation groups. This is how medications are tested.

To be precise, if the error bars for the two groups do not overlap, then there is causality. Check for more details.
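The control/variation comparison can be sketched with a toy simulation in Python (the +3 score effect, the group sizes and the score distribution are all made up for illustration):

```python
import random
import statistics

random.seed(42)

# Simulated randomized experiment: the control group gets no book,
# the variation group gets a book sent home (hypothetical +3 score effect).
control = [random.gauss(70, 10) for _ in range(500)]
variation = [random.gauss(73, 10) for _ in range(500)]

def ci95(sample):
    """Approximate 95% confidence interval for the sample mean."""
    mean = statistics.mean(sample)
    stderr = statistics.stdev(sample) / len(sample) ** 0.5
    return mean - 1.96 * stderr, mean + 1.96 * stderr

lo_c, hi_c = ci95(control)
lo_v, hi_v = ci95(variation)
print("control   95% CI:", (round(lo_c, 1), round(hi_c, 1)))
print("variation 95% CI:", (round(lo_v, 1), round(hi_v, 1)))

# If the two intervals do not overlap, the difference is unlikely to be chance;
# because group assignment was random, that indicates causality.
print("intervals overlap:", not (lo_v > hi_c or lo_c > hi_v))
```

Because assignment to the groups is random, a common cause (like the gene in the smoking example) cannot explain a difference between them.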

However, that is not always practical. For example, if you want to prove that smoking causes cancer, you need to first select a population, place them randomly into two groups, make half of them smoke, and make sure the other half does not. Then wait for something like 50 years and compare.

Did you see the catch? It is not good enough to compare smokers and non-smokers, as there may be a common cause, like the gene, that causes them to do so. To prove causality, you need to randomly pick people and ask some of them to smoke. Well, that is not ethical. So this experiment can never be done. Actually, this argument has been used before (e.g. )

This can get funnier. If you want to prove that greenhouse gasses cause global warming, you need to find another copy of earth, apply greenhouse gasses to one, and wait a few hundred years!!

To summarize, causality can sometimes be very hard to prove, and you really need to differentiate between correlation and causality.

Following are examples when causality is needed.

  • Before punishing someone
  • Diagnosing a patient
  • Measure effectiveness of a new drug
  • Evaluate the effect of a new policy (e.g. new Tax)
  • To change a behavior

Big Data and Causality

Most big data datasets are observational data collected from the real world. Hence, there is no control group. Therefore, most of the time all you can show is correlation, and it is very hard to prove causality.

There are two reactions to this problem.

First, “Big data guys do not understand what they are doing. It is stupid to try to draw conclusions without a randomized experiment”.

I find this view lazy.

Obviously, there is a lot of interesting knowledge in observational data. If we can find a way to use it, that will let us apply these techniques in many more applications. We need to figure out a way to use it and stop complaining. If current statistics does not know how to do it, we need to find a way.

Second is “forget causality! correlation is enough”.

I find this view blind.

Playing ostrich does not make the problem go away. This kind of crude generalization makes people do stupid things and can limit the adoption of Big Data technologies.

We need to find the middle ground!

When do we need Causality?

The answer depends on what we are going to do with the data. For example, if we are just going to recommend a product based on the data, chances are that correlation is enough. However, if we are taking a life-changing decision or making a major policy decision, we might need causality.

Let us investigate both types of cases.

Correlation is enough when stakes are low, or when we can later verify our decision. Following are a few examples.

  1. When stakes are low ( e.g. marketing, recommendations) — when showing an advertisement or recommending a product to buy, one has more freedom to make an error.
  2. As a starting point for an investigation — correlation is never enough to prove someone is guilty, however, it can show us useful places to start digging.
  3. Sometimes, it is hard to know what things are connected, but easy to verify the quality given a choice. For example, if you are trying to match candidates to a job or decide good dating pairs, correlation might be enough. In both these cases, given a pair, there are good ways to verify the fit.

There are other cases where causality is crucial. Following are a few examples.

  1. Find a cause for disease
  2. Policy decisions ( would a $15 minimum wage be better? would free health care be better?)
  3. When stakes are too high ( Shutting down a company, passing a verdict in court, sending a book to each kid in the state)
  4. When we are acting on the decision ( firing an employee)

Even in these cases, correlation might be useful to find good experiments that you want to run. You can find factors that are correlated, and design experiments to test causality, which will reduce the number of experiments you need to do. In the book example, the state could have run an experiment by selecting a population, sending the book to half of them, and looking at the outcome.

In some cases, you can build your system to inherently run experiments that let you measure causality. Google is famous for A/B testing every small thing, down to the placement of a button and the shade of a color. When they roll out a new feature, they select a population, roll out the feature to only part of it, and compare the two.
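The comparison at the heart of such an A/B test can be sketched as a two-proportion z-test; a minimal illustration with made-up conversion counts, using only the standard library:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up example: 200/5000 control conversions vs 260/5000 treatment conversions
z = two_proportion_z(200, 5000, 260, 5000)
print(round(z, 2))  # |z| > 1.96 suggests the difference is unlikely to be chance
```

If |z| exceeds roughly 1.96, the difference between the two groups is unlikely to be random noise at the usual 5% level, which is what licenses a causal reading of the rollout.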

So in all of these cases, correlation is pretty useful. However, the key is to make sure that the decision makers understand the difference when they act on the results.

Closing Remarks

Causality can be a pretty hard thing to prove. Since most big data is observational data, often we can only show correlation, but not causality. If we mix up the two, we can end up doing stupid things.

The most important thing is having a clear understanding at the point when we act on the decisions. Sometimes, when stakes are low, correlation might be enough. In other cases, it is best to run an experiment to verify our claims. Finally, some systems might warrant building experiments into the system itself, letting you draw strong causality results. Choose wisely!

Original Post from my Medium account:

Chamila WijayarathnaMaking Use of WSO2 Identity Server's Workflow Feature

WSO2 IS 5.1.0, which was released at the end of 2015, contains workflow support which can be used to add more control to the operations that can be done through Identity Server. By default, WSO2 IS 5.1.0 supports controlling user store operations by engaging them in an approval process where one or more privileged users need to approve the operation before it takes effect. Even though only this is supported by default, Identity Server can be extended by adding custom templates and custom handlers to do much more advanced tasks using the workflow framework.

Since I was one of the developers involved in developing this feature, I thought of writing a blog to describe how to make use of it. So in this blog I will be writing about implementing a simple use case using the workflow feature. In future blogs I will be writing about more advanced use cases with custom event handlers and custom templates.

Following are some use cases that can be implemented using this.

  1. When a user registers to IS using self sign-up, get approval from an admin user before he can log in
  2. Get approval from an admin user before locking / unlocking user accounts due to invalid login attempts
  3. When a user updates his user account (e.g. updates his profile picture), check if it's appropriate and get approval from an admin
  4. Get approval from a privileged user before increasing the privileges of a user
So let's see how to implement one of these use cases. I will describe how to implement a scenario where, when a new user is added to the Identity Server with admin privileges, that operation needs to be accepted by a 'senior manager' and the company CEO respectively. This is a common use case that we come across in most enterprises.

WSO2 IS contains WSO2 BPS features embedded within it, which can be used to manage the approval process of this scenario. Instead of using this, you can also use a separate WSO2 BPS for this purpose. First let's see how we can achieve this with the BPS feature embedded in IS, without using a separate BPS.

You can download latest version of WSO2 IS from here. Extract it and start it. 

Following are the users and roles we are going to have in our setup.

The user 'ceo' and the role 'senior_manager', which we are going to use in the approval process, need to have at least the following permissions.

  • Login
  • Human Tasks > View Human Tasks
  • Workflow Management > BPS Profiles > View
So now we have to define the approval process by defining a new workflow. To do this, we have to log into the Identity Server and then select Workflow Definitions -> Add from the Main menu.

Then you'll be directed to the 'Add Workflow Definition' wizard. In the first step, you have to define a name to identify the workflow and a small description about the workflow.

Now in the next step, you can define the approval process of this workflow. As I mentioned earlier, here we want the operation to be accepted by a senior_manager and then by CEO. We can define this process as follows.

Now we have added step 1 of the approval process. We have to add the next step on the same page. This can be done by following the steps below.

Now we can proceed to the next step. In next step we need to select the BPS profile details we are using for this approval process. For now let's use the BPS embedded in to Identity Server. We can use an external BPS as well in this process.

By clicking the finish button, we have created the approval process. Our next task is to integrate the 'add admin user' operation with this approval process. This can be done by going to Workflow Engagements -> Add in the main menu.

Now you can engage 'add admin user' event to the created approval process by adding a workflow engagement as follows.

Now that we have finished the setup, we can test how this works. We can go to the management console and add a user, assigning the 'admin' role to the user. When we do this, you'll observe that the user is not directly added. Even though the user is shown in the user list, he will be shown as a user in a pending state.

The user account will only be activated once both a manager and the CEO have accepted the operation. In the first step, a manager needs to approve the user addition. A manager can do this by logging in to the user portal of the Identity Server. When a manager logs into the user portal and accesses the 'Approvals' gadget there, he will see the list of operations which require his approval.

If the CEO logs into the dashboard and accesses the 'Approvals' gadget, he won't see the addition of the 'newAdmin' user there. He will only see it once a manager approves it.

So manager can now accept or reject the operation from here.

If the manager approves the operation, the CEO can also approve/reject the operation. The user account will only be activated if it is approved in both steps. If it's rejected at any stage, the user account will be deleted as if it never existed.

In the same manner, we can engage any user store operation in this kind of multi-step approval process in the Identity Server. These functionalities are available in the Identity Server by default; you don't have to do any customizations to make use of them. By customizing, we can do a lot more, and I will be writing about a few such cases in my next few blogs.

Here we used the BPS embedded in the Identity Server for implementing this. We can use an external BPS for this as well. You can add an external BPS via 'Workflow Engine Profiles -> Add' in the Configure menu.

When we add a new profile, it will also be shown in the drop down menu in the 3rd step of add workflow wizard. To do this we have to share user store and identity database of Identity Server with the BPS.

Dhananjaya jayasingheCustomize HTTP Server Response Header in WSO2 API Manager / WSO2 ESB

You may know that in the response headers from WSO2 ESB invocations or WSO2 API Manager invocations, you get a "Server" header as below.

HTTP/1.1 200 OK
Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST,GET,DELETE,PUT,HEAD
Content-Type: application/json
Access-Control-Allow-Credentials: true
Date: Sat, 09 Apr 2016 20:02:58 GMT
Server: WSO2-PassThrough-HTTP
Transfer-Encoding: chunked
Connection: Keep-Alive

"origin": ""

You can see that the Server header contains WSO2, as below.

Server: WSO2-PassThrough-HTTP

Sometimes there are situations where we need to customize this header.

Eg: If we need to customize it as below.

Server: PassThrough-HTTP

What we need to do is add the http.origin-server property to the relevant transport properties file located in the ESB_HOME/repository/conf/ directory with the customized value, as below.
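As a sketch, assuming the pass-through transport tuning file passthru-http.properties (the file under ESB_HOME/repository/conf/ that holds pass-through HTTP transport properties), the entry would be:

```properties
# ESB_HOME/repository/conf/passthru-http.properties
http.origin-server=PassThrough-HTTP
```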


Once you restart the server, the above response will be changed as below.

HTTP/1.1 200 OK
Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST,GET,DELETE,PUT,HEAD
Content-Type: application/json
Access-Control-Allow-Credentials: true
Date: Sat, 09 Apr 2016 20:11:47 GMT
Server: PassThrough-HTTP
Transfer-Encoding: chunked
Connection: Keep-Alive

"origin": ""

Dhananjaya jayasingheActiveMQ - WSO2 ESB - Redelivery Delay - MaximumRedeliveries Configuration

There are use cases where we need to configure the Redelivery Delay and the Maximum Redeliveries for message consumers of ActiveMQ.

When consuming ActiveMq Queue, we can configure these parameters as mentioned in [1]

WSO2 ESB can also act as a message consumer and a message producer. Information on configuring that can be found in the ESB documentation [2]. Consumer / producer configurations can be found in [3].

In ESB JMS proxy, we can configure Redelivery Delay and MaximumRedeliveries with using following parameters.

Eg: By default the Redelivery Delay in ActiveMQ is one second and the Maximum Redelivery count is 6. If you need to change them as below, you can do it with the following parameters in the proxy service.

Redelivery Delay - 3 Seconds
MaximumRedelivery Count - 2

<parameter name="redeliveryPolicy.maximumRedeliveries">2</parameter>
<parameter name="transport.jms.SessionTransacted">true</parameter>
<parameter name="redeliveryPolicy.redeliveryDelay">3000</parameter>
<parameter name="transport.jms.CacheLevel">consumer</parameter>

Other than enabling the default configuration for the JMS transport receiver, you don't need to add any other parameters to axis2.xml to achieve this.
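For reference, a minimal sketch of enabling that receiver in axis2.xml (the broker URL below assumes a local ActiveMQ on its default port, and the connection factory names follow the usual samples):

```xml
<!-- axis2.xml: JMS transport receiver bound to a local ActiveMQ broker -->
<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
    <parameter name="myQueueConnectionFactory" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
    </parameter>
</transportReceiver>
```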

Here is a sample proxy service to which I have added the above parameters.

Tested Versions : ESB 4.9.0 / Apache ActiveMQ 5.10.0

<proxy xmlns="http://ws.apache.org/ns/synapse" name="JMStoHTTPStockQuoteProxy" transports="jms">
    <target>
        <inSequence>
            <log level="full">
                <property name="Status" value="Consuming the message"/>
                <property name="transactionID" expression="get-property('MessageID')"/>
            </log>
            <property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>
            <drop/>
        </inSequence>
    </target>
    <parameter name="redeliveryPolicy.maximumRedeliveries">2</parameter>
    <parameter name="transport.jms.DestinationType">queue</parameter>
    <parameter name="transport.jms.SessionTransacted">true</parameter>
    <parameter name="transport.jms.Destination">JMStoHTTPStockQuoteProxy</parameter>
    <parameter name="redeliveryPolicy.redeliveryDelay">3000</parameter>
    <parameter name="transport.jms.CacheLevel">consumer</parameter>
</proxy>

When we configure these redelivery parameters, we need to make sure that we have enabled transactions for the proxy. We do that using the following parameter.

<parameter name="transport.jms.SessionTransacted">true</parameter>

Once we enable transactions, if the transaction is successful no redelivery happens. So, in order to test the redelivery functionality, after consuming the message we need to roll back the transaction. To do that we need to add the following property inside the inSequence of the proxy service.

<property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>

With the above property, we notify the server that the transaction is rolled back.

All these properties are passed to the server when we make the connection from the ESB to the message broker. So, at that time we need to specify all these parameters.


Chathurika Erandi De SilvaWS-Addressing: A simple demonstration with WSO2 ESB

WS-Addressing as I understand

WS-Addressing, or Web Services Addressing, is a mechanism used with web services so that we can invoke services regardless of the transport. We include message routing data in the SOAP headers, so that the request will be routed in a transport-neutral manner.

More on WS-Addressing

BUT this is not to explain WS-Addressing, then what?

I have been working with WSO2 ESB this week, and thought of sharing the below for a person who is looking for an entry point to WS-Addressing related tasks.

In this post, I am discussing the below

1. Enabling WS-Addressing for an endpoint
2. Enabling WS-Addressing for the whole proxy service
3. Invoking a WS-Addressing enabled proxy through SOAP-UI

If you are a beginner to WSO2 ESB, spend a little time to read...

Enabling WS-Addressing for an endpoint

Using the below configuration I have enabled WS-Addressing for the endpoint; the enableAddressing element within the address endpoint engages WS-Addressing on outgoing messages.

    <address uri="">
        <enableAddressing/>
    </address>

Enabling WS-Addressing for the whole proxy service

Using the below I have enabled WS-Addressing for the proxy service.

<parameter name="enforceWSAddressing">true</parameter>

Placed within the proxy service configuration, it appears as a service parameter:

    <proxy xmlns="http://ws.apache.org/ns/synapse" transports="http https">
        <target>
            <inSequence>
                <log level="full"/>
                <send><endpoint><address uri=""/></endpoint></send>
            </inSequence>
        </target>
        <publishWSDL uri=""/>
        <parameter name="enforceWSAddressing">true</parameter>
    </proxy>

Invoking a WS-Addressing enabled proxy through SOAP-UI

Follow the below steps to invoke WS-Addressing enabled web service using SOAP-UI.

1. Give the relevant wsdl of the proxy service and open the SOAP UI project

2. Open the request.

3. In the left hand side Request Properties panel, enable WS-Addressing

Fig1. - Request Properties-SOAP UI.

4. In the Request window click on WS-Addressing to include WS-Addressing related headers.

Fig2. - Request-Enabling WS-Addressing headers.

5. Click "Add default wsa-action" and "Add default wsa-To"
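With those defaults added, the outgoing request carries WS-Addressing headers along these lines (a sketch; the action, address and message id values are placeholders, and wsa is bound to the standard WS-Addressing 1.0 namespace):

```xml
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <!-- wsa:To is the destination; wsa:Action identifies the intended operation -->
    <wsa:To>http://localhost:8280/services/SampleProxy</wsa:To>
    <wsa:Action>urn:mediate</wsa:Action>
    <wsa:MessageID>urn:uuid:aa1e7da2-8e76-4d48-b961-53fb9b6b23a0</wsa:MessageID>
</soapenv:Header>
```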

When the request is sent, below can be seen if the wire logs are enabled in WSO2 ESB

[2016-04-08 16:56:07,113] DEBUG - wire >> "

The response will look as below

<ns:return xsi:type="ax2435:ValueSetter"

As illustrated, the response too carries WS-Addressing headers.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
User administrators by the user store

  • Define user administrators by user store. For example, a user belonging to the role foo-admin will be able to perform user admin operations on the foo user store, while he/she won't be able to perform user admin operations on the bar user store.
  • Deploy the WSO2 Identity Server as an identity provider over multiple user stores. 
  • Define a XACML policy, which specifies who should be able to do which operation on which user store. 
  • Create a user store operation listener and talk to the XACML PDP during user admin operations. 
  • Create roles by user store and assign user administrators to the appropriate roles. Also, make sure each user administrator has the user admin permissions from the permission tree. 
  • Products: WSO2 Identity Server 4.6.0+ 

Nuwan BandaraMicroservices gateway pattern

With a microservices outer architecture, the gateway pattern is quite popular; it is also elaborately explained on the nginx blog. In summary, linking your microservices directly to client applications is almost always considered a bad idea.

You need to keep updating and upgrading your microservices, and you should be able to do it transparently. In a larger services-based ecosystem, microservices won't always be HTTP bound; they'll probably be using JMS, MQTT or maybe Thrift for their transports. In such scenarios, having a gateway to deal with those complexities is always a good idea.


Proving the concept, I created a couple of microservices (a ticket listing/catalog service, a ticket purchase service and a validate service) which get deployed in their respective containers. WSO2 Gateway acts as the microservices gateway in this PoC, and the routes are defined in it. The gateway is also deployed in a container of its own.

To build the microservices I am using MSF4J, the popular microservices framework, and the ticket data is stored in a Redis store.

The PoC is committed to github with setup instructions, do try it out and leave a comment.

Kalpa WelivitigodaWSO2 Application Server 6.0.0-M1 Released

Welcome to WSO2 Application Server, the successor of the WSO2 Carbon based Application Server. WSO2 Application Server 6.0.0 is a complete revamp and is based on vanilla Apache Tomcat. WSO2 provides a number of features by means of extensions to Tomcat to add/enhance functionality. It provides first-class support for generic web applications and JAX-RS/JAX-WS web applications. The performance of the server and individual applications can be monitored by integrating WSO2 Application Server with WSO2 Data Analytics Server. WSO2 Application Server is an open source project and it is available under the Apache Software License (v2.0).

Download WSO2 Application Server 6.0.0-M1 from here.

Key Features

  • HTTP Statistics Monitoring
  • Webapp Classloading Runtimes

Fixed Issues

Known Issues

Reporting Issues

Issues, documentation errors and feature requests regarding WSO2 Application Server can be reported through the public issue tracking system.

Contact us

WSO2 Application Server developers can be contacted via the Development and Architecture mailing lists.
Alternatively, questions can also be raised in the stackoverflow forum :

Thank you for your interest in WSO2 Application Server.

-The WSO2 Application Server Development Team -

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Service provider-specific user stores

  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • When the user gets redirected to the identity provider, only the users belonging to the user stores specified by the corresponding service provider should be able to log in or get an authentication assertion. 
  • In other words, each service provider should be able to specify from which user store it accepts users.
  • Deploy the WSO2 Identity Server as an identity provider over multiple user stores and register all the service providers. 
  • Extend the pattern 18.0 Fine-grained access control for service providers to enforce user store domain requirement in the corresponding XACML policy. 
  • Use a regular expression to match allowed user store domain names with the authenticated user’s user store domain name. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Home realm discovery

  • The business users need to login to multiple service providers via multiple identity providers. 
  • Rather than providing a multi-login option page with all the available identity providers, once redirected from the service provider, the system should find out the identity provider corresponding to the user and directly redirect the user there.
  • Deploy WSO2 Identity Server as an identity provider and register all the service providers and identity providers. 
  • For each identity provider, specify a home realm identifier. 
  • The service provider prior to redirecting the user to the WSO2 Identity Server must find out the home realm identifier corresponding to the user and send it as a query parameter. 
  • Looking at the home realm identifier in the request, the WSO2 Identity Server redirects the user to the corresponding identity provider. 
  • In this case, there is a direct one-to-one mapping between the home realm identifier in the request and the home realm identifier value set under the identity provider configuration. This pattern can be extended by writing a custom home realm discovery connector, which knows how to relate and find the corresponding identity provider by looking at the home realm identifier in the request, without maintaining a direct one-to-one mapping. 
  • Products: WSO2 Identity Server 5.0.0+ 

Nuwan BandaraContainerized API Manager


So while continuing my quest to make all demos dockerized, I containerized WSO2 API Manager this week. This is two-fold: one is a simple API Manager deployment with integrated analytics (WSO2 DAS); the other is a fully distributed API Manager with analytics.

This is making things easier and demos are becoming more and more re-usable. You can find instructions to execute in github repo.

Docker ! Docker ! Docker !😀

Evanthika AmarasiriCommon SVN related issues faced with WSO2 products and how they can be solved

Issue 1

TID: [0] [ESB] [2015-07-21 14:49:55,145] ERROR {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} -  Error while attempting to create the directory: http://xx.xx.xx.xx/svn/wso2/-1234 {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
org.tigris.subversion.svnclientadapter.SVNClientException: org.tigris.subversion.javahl.ClientException: svn: authentication cancelled
    at org.tigris.subversion.svnclientadapter.javahl.AbstractJhlClientAdapter.mkdir(
    at org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository.checkRemoteDirectory(

Reason: The user could not be authenticated to write to the provided SVN location, i.e. http://xx.xx.xx.xx/svn/wso2/. When you see this type of error, verify the credentials you have given under the svn configuration in carbon.xml.
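For reference, the deployment synchronizer block in carbon.xml looks like this (a sketch; the SVN URL and credentials are placeholders):

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://xx.xx.xx.xx/svn/wso2/</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>svnpassword</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```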


Issue 2

TID: [0] [ESB] [2015-07-21 14:56:49,089] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} -  Deployment synchronization commit for tenant -1234 failed {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask}
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: A repository synchronizer has not been engaged for the file path: /home/wso2/products/wso2esb-4.9.0/repository/deployment/server/
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(
    at java.util.concurrent.Executors$
    at java.util.concurrent.FutureTask.runAndReset(
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
    at java.util.concurrent.ScheduledThreadPoolExecutor$

    (I) SVN version mismatch between the local server and the SVN server (Carbon 4.2.0 products support SVN 1.6 only).

    Solution - Use the SVN kit 1.6 jar in the Carbon server


      (II) If you have configured your server with a different SVN version than what's on the SVN server, then even if you use the correct svnkit jar on the Carbon server side later, the issue will not get resolved

      Solution - Remove all the .svn files under $CARBON_HOME/repository/deployment/server folder

      (III) A similar issue can be observed when the SVN server is not reachable.

      Issue 3

        [2015-08-28 11:22:27,406] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Deployment synchronization update for tenant -1234 failed
        java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
            at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(
            at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncUpdate(
            at
            at java.util.concurrent.Executors$
            at java.util.concurrent.FutureTask.runAndReset(
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
            at java.util.concurrent.ScheduledThreadPoolExecutor$
            at java.util.concurrent.ThreadPoolExecutor.runWorker(
            at java.util.concurrent.ThreadPoolExecutor$
            at
        Caused by: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
            at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromConf(
            at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(
            at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(
            ... 9 more


        You will notice this issue when the svnkit jar (for the latest versions of Carbon, i.e. 4.4.x, the jar would be svnkit-all-1.8.7.wso2v1.jar) is not available in the $CARBON_HOME/repository/components/dropins folder

        Sometimes dropping the svnkit-all-1.8.7.wso2v1.jar would not solve the problem. In such situations, verify whether the trilead-ssh2-1.0.0-build215.jar is also available under the $CARBON_HOME/repository/components/lib folder.

Thilini Ishaka[NEW] OData support in WSO2 Data Services Server

OData (Open Data Protocol) is an OASIS standard that defines the best practice for building and consuming RESTful APIs. OData helps to build RESTful APIs and provides facilities for extension to fulfill any custom needs of your RESTful APIs.

OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools. Some of them can help you interact with OData even without knowing anything about the protocol.

From WSO2 DSS 3.5.0 onwards, we have support for the OASIS OData protocol version 4.0.0, so you can now easily expose your databases as OData services.
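As a sketch of what consuming such a service looks like (the host/port, service name, datasource id and table name here are all hypothetical), queries follow the standard OData URL conventions:

```
GET https://localhost:9443/odata/SampleDataService/default/CUSTOMERS?$top=5&$format=json
```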

Chathurika Erandi De SilvaFilter request on content: WSO2 ESB - Part 1

Use Case

There is a requirement to route a message to different endpoints based on the request itself. This can be achieved by various methods:

1. Reading a message context property such as Action / To
2. Reading the request itself

In this post, it will be explained how to read a property in the message context and then route the request accordingly.

For the above to be achieved, a filtering mechanism should be there to filter the message based on the message context property. WSO2 ESB provides the Filter Mediator to achieve this kind of requirement.

Here I am using the message context property 'Action'. By comparing the value set in this property, I will filter out the messages.

Sample Sequence Configuration

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="filter_1">
   <log level="full"/>
   <filter source="get-property('Action')" regex=".*getMacMenu.*">
      <then>
         <log level="custom"><property name="macMenu" value="INSIDE MAC MENU"/></log>
         <send><endpoint><address uri="http://<ip>:9793/services/MenuProvider/getMacMenu"/></endpoint></send>
      </then>
      <else>
         <log level="custom"><property name="otherMenu" value="INSIDE OTHER MENU"/></log>
         <send><endpoint><address uri="http://<ip>:9793/services/MenuProvider/getOtherMenu"/></endpoint></send>
      </else>
   </filter>
</sequence>
Above, we are matching the value returned by get-property('Action') against the regex expression. In this particular scenario, if the message context property Action contains "getMacMenu", or, put in other words, if the request contains the getMacMenu operation, then the request is directed to the particular endpoint. If it doesn't, then the else part of the Filter Mediator is executed.

Sample Request

<soapenv:Envelope xmlns:soapenv="" xmlns:sam="">

Chathurika Erandi De SilvaFilter request on content: WSO2 ESB - Part 2

In the previous post we discussed how to use message context properties and filter based on them.

Use Case

If the request contains a certain operation, then it should be routed to a certain endpoint. Requests that do not contain the above operation should be routed to another endpoint.

In order to achieve the above requirement, I have used the xpath expression of Filter Mediator.

Sample Sequence Configuration

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="filter_seq_2">
   <filter xpath="boolean(//*[local-name()='getMacMenu'])">
      <then>
         <log level="custom"><property name="getMacMenu" value="INSIDE MAC MENU"/></log>
         <send><endpoint><address uri="http://<ip>:9793/services/MenuProvider/getMacMenu"/></endpoint></send>
      </then>
      <else>
         <log level="custom"><property name="getOtherMenu" value="INSIDE OTHER MENU"/></log>
         <send><endpoint><address uri="http://<ip>:9793/services/MenuProvider/getOtherMenu"/></endpoint></send>
      </else>
   </filter>
</sequence>

Above, I have used an xpath expression to check whether the incoming request contains the given element. If it does, the request is routed to a certain endpoint, while the other requests, for which the xpath expression returns false, are routed to the else part of the mediator.

When a request as following is sent, the elements are read from the Xpath expression and evaluated.

<soapenv:Envelope xmlns:soapenv="" xmlns:sam="">

The above request contains the getMacMenu element so the Filter Mediator Xpath expression is evaluated as true.

<soapenv:Envelope xmlns:soapenv="" xmlns:sam="">

The above request contains getOtherMenu element so the Filter Mediator Xpath expression is evaluated as false.

Chathurika Erandi De SilvaFilter request on content: WSO2 ESB - Part 3

In previous posts we discussed the Filter Mediator condition using XPath, and using message context properties with regex.

Use Case

If the incoming request contains a certain key / word, then the request should be routed to a certain endpoint, whereas requests that do not contain the specific key should be routed to another endpoint.

Sample Sequence Configuration

<sequence xmlns="http://ws.apache.org/ns/synapse" name="filter_seq_4">
   <filter source="//*[local-name()='type']" regex=".*MAC.*">
      <then>
         <log level="custom"><property name="getMacMenu" value="INSIDE MAC MENU"/></log>
         <send><endpoint key="conf:/send_mac"/></send>
      </then>
      <else>
         <log level="custom"><property name="getOtherMenu" value="INSIDE OTHER MENU"/></log>
         <send><endpoint key="conf:/send_other"/></send>
      </else>
   </filter>
</sequence>

The above Filter Mediator uses the source expression to isolate a relevant element in the incoming request; the value of that element is then matched against the provided regex expression.
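Such a registry endpoint (here conf:/send_mac) is a named endpoint saved in the configuration registry; a sketch, reusing the backend address from the earlier posts:

```xml
<endpoint xmlns="http://ws.apache.org/ns/synapse" name="send_mac">
    <address uri="http://<ip>:9793/services/MenuProvider/getMacMenu"/>
</endpoint>
```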

When the following request is sent to the ESB, since the sam:type element contains MAC, it's validated as true from Filter Mediator

<soapenv:Envelope xmlns:soapenv="" xmlns:sam="">

If the following request is sent to the ESB, since the sam:type element contains OTHER, it's validated as false from Filter Mediator. Thus the else section is executed

<soapenv:Envelope xmlns:soapenv="" xmlns:sam="">

Chathurika Erandi De SilvaUser Stories; How do i formulate them?

I have been working with user stories this week to derive test scenarios, so I thought of writing a bit about them.

The first question that came to my mind when thinking about user stories is: although there are a lot of very good definitions, what is the easiest and most concrete way to understand what a user story is?

After reading a lot and thinking about it, I figured the easiest way is to ask "why is a particular product used by a specific person?"

This way we really put ourselves in the user's shoes and think from the user's perspective.

There can be many answers to this particular question, or just one answer.

If there are multiple answers, then each of those answers becomes a story related to that particular user. And of course, if there is only one answer, then that becomes the only story relevant to that user.

Of course, when putting the story into words, the keywords "who", "what" and "why" should be addressed. It's always good to keep a user story short, but we should also make sure the story carries the expected business value.
As an example, let's take the following story:

As a user I want to log in to the system so that I can do some profile tasks

Is there any business value in the above story with respect to the user? If we are to go ahead and implement this kind of story, can we see any value in implementing it at all? Does the "why" part of the above story carry a business value?

It's important to define the "why" part so that it incorporates the business value of what the user wants to do.

As a personal user I want to log in to the system so that I can change my profile picture

In the above we can see a straightforward business value.

It's essential to write user stories so that they bring out the business value of being implemented.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Authenticate the users against one user store but fetch user attributes from multiple other sources

  • User credentials are maintained in one user store while user attributes are maintained in multiple sources. 
  • When the user logs into the system via any SSO protocol (SAML 2.0, OIDC, WS-Federation), build the response with user attributes coming from multiple sources.
  • Mount the credential store and all the attribute stores as user stores to the WSO2 Identity Server. Follow a naming convention while naming the user stores where the attributes store can be differentiated from the credentials stores just by looking at the user store domain name. 
  • Build a custom user store manager (extending the current user store manager corresponding to the type of the primary user store), which is aware of all the attribute stores in the system, and override the method that returns user attributes. The overridden method will iterate through the attribute stores, find the user’s attributes, and return the aggregated result. 
  • Set the custom user store manager from the previous step as the user store manager corresponding to the primary user store. 
  • Products: WSO2 Identity Server 4.6.0+ 
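The aggregation step above can be sketched as follows. This is a simplified, self-contained illustration rather than the actual WSO2 user store manager API: a real implementation extends the user store manager class matching the primary store type and overrides its attribute-retrieval method. The interface and names below are assumptions made for the sketch.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical abstraction over a mounted user store.
interface AttributeStore {
    String domainName();                          // e.g. "ATTR_HR", "PRIMARY"
    Map<String, String> getAttributes(String username);
}

class AggregatingAttributeResolver {
    private final List<AttributeStore> stores;

    AggregatingAttributeResolver(List<AttributeStore> stores) {
        this.stores = stores;
    }

    // Iterate every mounted store and merge the user's attributes.
    // The naming convention on the user store domain (here: an "ATTR"
    // prefix) distinguishes attribute stores from the credential store.
    Map<String, String> getUserAttributes(String username) {
        Map<String, String> aggregated = new HashMap<>();
        for (AttributeStore store : stores) {
            if (store.domainName().startsWith("ATTR")) {
                aggregated.putAll(store.getAttributes(username));
            }
        }
        return aggregated;
    }
}
```

The real extension point is the overridden attribute-retrieval method; the convention-based domain-name check mirrors the naming convention suggested in the steps above.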

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
User administration operations from a third-party web app

  • A third party web app needs to perform all user management operations such as all CRUD operations on users and roles, user/role assignments and password recovery, without having to deal directly with underlying user stores (LDAP, AD, JDBC).
  • Deploy the WSO2 Identity Server over the required set of user stores. 
  • The WSO2 Identity Server exposes a set of REST endpoints as well as SOAP-based services for user management; the web app just needs to talk to these endpoints, without having to deal directly with the underlying user stores (LDAP, AD, JDBC). 
  • Products: WSO2 Identity Server 4.0.0+ 

Thilina PiyasundaraAdd Let's Encrypt free SSL certificates to WSO2 API Cloud

Let's Encrypt is a free and open certificate authority run for the public benefit. The service is provided by the Internet Security Research Group, and there are lots of companies working with them to make the Internet secure. Anyone who owns a domain name can get a free SSL certificate for their website using this service, valid for three months. If the certificate is needed beyond those three months, it can be renewed, also for free. Best of all, the certificate is accepted by most modern web browsers and systems by default, so you don't need to add CA certs to your browsers any more.

In this article I will explain how to use that service to get a free SSL certificate and add it to WSO2 API Cloud, so that you can have your own API store like this:

In order to do that you need to have following things in hand.
  • Domain name.
  • Rights to add/delete/modify DNS A records and CNAMEs.
  • Publicly accessible webserver with root access or a home router with port forwarding capabilities. 

Step 1

If you have a publicly accessible webserver you can skip this step. If you don't, you can make your home PC/laptop a temporary webserver, provided you can do port forwarding/NATing in your home router. I will show how I did that with my ADSL router. You can get help on port forwarding by referring to this website

a. Add a port forwarding rule in your home router.

Get your local (laptop) IP (by running ifconfig/ip addr) and set that as the backend server in your router. Set the WAN port to 80 and the LAN port to 80.

After adding the rule it will be like this.

b. Start a webserver on your laptop. We can use the simple Python server for this. Make sure to check the iptables/firewall rules.

mkdir /tmp/www
cd /tmp/www/
echo 'This is my home PC :)' > index.html
sudo python3 -m http.server 80

c. Get the public IP of your router. Go to this link : it will give the public IP address. This IP changes from time to time, so no worries.

d. Try to access that IP from a browser.
If it is giving the expected output you have a publicly accessible webserver.

Step 2

Now we need to update a DNS entry. My expectation is to have a single SSL certificate for both domains '' and ''.

a. Go to your DNS provider's console and add an A record for both domain names pointing to the public IP of your webserver (or the IP we got in the previous step).

b. Try to access both via a browser; if they give the expected output you can proceed to the next step.

Step 3

I'm following the instructions in the 'let's encrypt' guide. As I'm using the Python server, I need to use the 'certonly' option when running the command to generate the certs.

a. Get the git clone of the letsencrypt project.

git clone
cd letsencrypt

b. Run cert generation command. (this requires root/sudo access)

./letsencrypt-auto certonly --webroot -w /tmp/www/ -d -d

If this succeeds you can find the SSL keys and certs in the '/etc/letsencrypt/live/' location.

Step 4

Check the content of the certs. (Be root before you try to 'ls' that directory)

openssl x509 -in cert.pem -text -noout

Step 5

Create an API in WSO2 API Cloud if you don't have one. Otherwise, start adding a custom domain to your tenant.

a. Remove both A records and add CNAME records to those two domains. Both should point to the domain ''.

b. Now click on the 'Configure' option in the top options bar and select the 'Custom URL' option.

c. Get your SSL certs ready. Copy 'cert.pem', 'chain.pem' and 'privkey.pem' to your home directory.

d. Modify the API store domain. Click on the modify button, add the domain name and click on verify. It will take a few seconds. If that succeeds, you have correctly configured the CNAME to point to WSO2 Cloud.

e. Add the cert files to the API Cloud. The order should be the certificate (cert.pem), the private key (privkey.pem) and the CA chain file (chain.pem). Again, it will take some time to verify the uploaded details.

f. Update the gateway domain in the same way as in the previous step.

Now if you go to the API Store it will show something like this.

g. In the same way, you can use the gateway domain when you need to invoke APIs.

curl -X GET --header 'Accept: application/json' --header 'Authorization: Bearer ' ''

Now you don't need the '-k' option. If it doesn't work, make sure your operating system's CA list is up to date.

Step 6

Make sure to remove the port forwarding rule in your home router, if you used one, and revert any other changes you made while obtaining the SSL certificates.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Fine-grained access control for SOAP services

  • Access to the business services must be done in a fine-grained manner. 
  • Only the users belong to the business-admin role should be able to access foo and bar SOAP services during a weekday from 8 AM to 5 PM.
  • Deploy WSO2 Identity Server as a XACML PDP (Policy Decision Point). 
  • Define XACML policies via the XACML PAP (Policy Administration Point) of the WSO2 Identity Server. 
  • Front the SOAP services with WSO2 ESB and represent each service as a proxy service in the ESB. 
  • Engage the Entitlement mediator to the in-sequence of the proxy service, which needs to be protected. The Entitlement mediator will point to the WSO2 Identity Server’s XACML PDP. 
  • All the requests to the SOAP service will be intercepted by the Entitlement mediator, which will talk to the WSO2 Identity Server’s XACML PDP to check whether the user is authorized to access the service. 
  • Authentication to the SOAP service should happen at the edge of the WSO2 ESB, prior to the Entitlement mediator. 
  • If the request to the SOAP service brings certain attributes in the SOAP message itself, the Entitlement mediator can extract them from the SOAP message and add to the XACML request. 
  • Products: WSO2 Identity Server 4.0.0+, WSO2 ESB, Governance Registry 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Render menu items in a web app based on the logged-in user’s fine-grained permissions

  • When a business user logs into a web app, the menu items in the web app should be rendered dynamically based on the user’s permissions. 
  • There can be a case where the same user, logging in at 9 AM and again at 9 PM, sees different menu items, as the permissions can also be time sensitive. 
  • There can be a case where the same user, logging in from China and then from Canada, sees different menu items, as the permissions can also be location sensitive.
  • Deploy WSO2 Identity Server as a XACML PDP (Policy Decision Point). 
  • Define XACML policies via the XACML PAP (Policy Administration Point) of the WSO2 Identity Server. 
  • When a user logs into the web app, the web app will talk to the WSO2 Identity Server’s XACML PDP endpoint with a XACML request using XACML multiple decision profile and XACML multiple resource profile. 
  • After evaluating the XACML policies against the provided request, the WSO2 Identity Server returns the XACML response, which includes the level of permissions the user has on each resource under the parent resource specified in the initial XACML request. Each menu item is represented as a resource in the XACML policy. 
  • The web app caches the decision to avoid further calls to the XACML PDP. 
  • Whenever an event at the XACML PDP side requires expiring the cache, the WSO2 Identity Server will notify a registered endpoint of the web app. 
  • Products: WSO2 Identity Server 4.0.0+ 

Afkham AzeezMicroservices Circuit Breaker Implementation

Circuit breaker


Circuit breaker is a pattern used for fault tolerance; the term was first introduced by Michael Nygard in his famous book "Release It!". The idea is that, rather than wasting valuable resources trying to invoke an operation that keeps failing, the system backs off for a period of time and later checks whether the originally failing operation works.

A good example would be, a service receiving a request, which in turn leads to a database call. At some point in time, the connectivity to the database could fail. After a series of failed calls, the circuit trips, and there will be no further attempts to connect to the database for a period of time. We call this the "open" state of the circuit breaker. During this period, the callers of the service will be served from a cache. After this period has elapsed, the next call to the service will result in a call to the database. This stage of the circuit breaker is called the "half-open" stage. If this call succeeds, then the circuit breaker goes back to the closed stage and all subsequent calls will result in calls to the database. However, if the database call during the half-open state fails, the circuit breaker goes back to the open state and will remain there for a period of time, before transitioning to the half-open state again.

Other typical examples of the circuit breaker pattern being useful would be a service making a call to another service, and a client making a call to a service. In both cases, the calls could fail, and instead of indefinitely trying to call the relevant service, the circuit breaker would introduce some back-off period, before attempting to call the service which was failing.
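The state transitions described above can be captured in a few lines. The following is a minimal, self-contained sketch of the pattern, not the Hystrix-based implementation used below; the threshold and back-off values are arbitrary.

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch: CLOSED -> OPEN after `failureThreshold`
// consecutive failures; OPEN -> HALF_OPEN once `openPeriodMillis` elapses;
// a successful half-open call closes the circuit, a failure re-opens it.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long openPeriodMillis;
    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    CircuitBreaker(int failureThreshold, long openPeriodMillis) {
        this.failureThreshold = failureThreshold;
        this.openPeriodMillis = openPeriodMillis;
    }

    State state() {
        // Lazily move OPEN -> HALF_OPEN after the back-off period.
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt >= openPeriodMillis) {
            state = State.HALF_OPEN;
        }
        return state;
    }

    // Run `operation`; while the circuit is open, serve `fallback`
    // (e.g. a cached value) instead of hitting the failing resource.
    <T> T call(Supplier<T> operation, Supplier<T> fallback) {
        if (state() == State.OPEN) {
            return fallback.get();
        }
        try {
            T result = operation.get();
            consecutiveFailures = 0;
            state = State.CLOSED; // a successful trial call closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN; // trip (or re-trip) the circuit
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }
}
```

Hystrix implements this same state machine, plus rolling failure statistics and thread isolation, which is why the sample below delegates to it rather than hand-rolling the pattern.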

Implementation with WSO2 MSF4J

I will demonstrate how a circuit breaker can be implemented using the WSO2 Microservices Framework for Java (MSF4J) & Netflix Hystrix. We take the stockquote service sample and enable the circuit breaker. Assume that the stock quotes are loaded from a database. We wrap the calls to this database in a Hystrix command. If database calls fail, the circuit trips and stock quotes are served from a cache.

The complete code is available at

NOTE: To keep things simple and focus on the implementation of the circuit breaker pattern, rather than making actual database calls, we have a class called org.example.service.StockQuoteDatabase, and calls to its getStock method can result in timeouts or failures. To see an MSF4J example of how to make actual database calls, see

The complete call sequence is shown below. StockQuoteService is an MSF4J microservice.

Configuring the Circuit Breaker

The circuit breaker is configured as shown below.

We enable the circuit breaker & the timeout, set the failure threshold that triggers circuit tripping to 50, and set the timeout to 10 ms. So any database call that takes more than 10 ms will also be registered as a failure. For other configuration parameters, please see

Building and Running the Sample

Check out the code from & use Maven to build the sample.

mvn clean package

Next run the MSF4J service.

java -jar target/stockquote-0.1-SNAPSHOT.jar 

Now let's use cURL to repeatedly invoke the service. Run the following command:

while true; do curl -v http://localhost:8080/stockquote/IBM ; done

The above command will keep invoking the service. Observe the output of the service in the terminal. You will see that some of the calls fail on the service side, and you will be able to see the circuit breaker fallback in action, as well as the circuit tripping, going into the half-open state, and then closing.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Sign On between a legacy web app, which cannot change its user interface, and service providers that support standard SSO protocols.

  • The business users need to access a service provider whose UI cannot be changed. The users need to provide their user credentials to the current login form of the service provider. 
  • Once the user logs into the above service provider and then clicks on a link to another service (which follows a standard SSO protocol), the user should be automatically logged in. The vice versa is not true.
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers with standard inbound authenticators (including the legacy app). 
  • For the legacy web app, which does not want to change the UI of the login form, enable basic auth request path authenticator, under the Local and Outbound Authentication configuration. 
  • Once the legacy app accepts the user credentials from its login form, post them along with the SSO request (SAML 2.0/OIDC) to the WSO2 Identity Server. 
  • The WSO2 Identity Server will validate the credentials embedded in the SSO request and if valid, will issue an SSO response and the user will be redirected back to the legacy application. The complete redirection process will be almost transparent to the user. 
  • When the same user tries to log in to another service provider, the user will be automatically authenticated, as the previous step created a web session for the logged in user, under the WSO2 Identity Server domain. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Access a microservice from a web app protected with SAML 2.0 or OIDC

  • The business users need to access multiple service providers, supporting SAML 2.0 and OIDC-based authentication. 
  • Once the user logs into the web app, it needs to access a microservice on behalf of the logged in user.
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers with OIDC or SAML 2.0 as the inbound authenticator. 
  • Enable JWT-based access token generator in the WSO2 Identity Server. 
  • Develop and deploy all the microservices with WSO2 MSF4J. 
  • If the service provider supports SAML 2.0 based authentication, once the user logs into the web app, exchange the SAML token to an OAuth access token by talking to the /token endpoint of the WSO2 Identity Server, following the SAML 2.0 grant type for OAuth 2.0 profile. This access token itself is a self-contained JWT. 
  • If the service provider supports OIDC based authentication, once the user logs into the web app, exchange the ID token to an OAuth access token by talking to the /token endpoint of the WSO2 Identity Server, following the JWT grant type for OAuth 2.0 profile. This access token itself is a self-contained JWT. 
  • To access the microservice, pass the JWT (or the access token) in the HTTP Authorization Bearer header over TLS. 
  • MSF4J will validate the access token (or the JWT), and the token will be passed across all the downstream microservices. 
  • More about microservices security: 
  • Products: WSO2 Identity Server 5.1.0+, WSO2 MSF4J 
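A self-contained JWT is just three Base64url-encoded parts (header.payload.signature). As a rough illustration of what "self-contained" means, the sketch below decodes the claims from a compact JWT; it deliberately skips signature verification, which MSF4J (or any resource server) must perform before trusting the claims.

```java
import java.util.Base64;

// Illustrative only: decode the claims (payload) segment of a compact JWT.
// This does NOT validate the signature -- never trust claims without
// verifying the token first.
class JwtPeek {
    static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("not a compact JWS/JWT");
        }
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }
}
```

Because the token is self-contained, downstream microservices can read the user's claims without calling back to the Identity Server, as long as each hop verifies the signature.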

Chandana NapagodaHow to disable Registry indexing

Sometimes people complain that they have seen background DB queries executed by some WSO2 products (e.g., the WSO2 API Manager Gateway profile). These query executions are not harmful; they correspond to the registry indexing task that runs in the background.

It is not required to enable the indexing task for APIM 1.10.0 based Gateway or Key Manager nodes, so you can disable it by setting the "startIndexing" parameter to false. This parameter is configured in the registry.xml file under the "indexingConfiguration" section.
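Assuming the registry.xml layout of that release (the surrounding elements are omitted here), the change looks like this:

```xml
<!-- registry.xml: disable the background indexing task -->
<indexingConfiguration>
    <startIndexing>false</startIndexing>
    <!-- keep the rest of the indexing configuration as it is -->
</indexingConfiguration>
```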


Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Enforce users to provide missing required attributes while getting JIT provisioned to the local system

  • The business users need to access multiple service providers via federated identity provider (i.e Facebook, Yahoo, Google). 
  • Need to JIT provision all the users coming from federated identity providers with a predefined set of attributes. 
  • If any required attributes are missing in the authentication response from the federated identity provider, the system should present a UI to the user to provide those.
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers and federated identity providers. 
  • Enable JIT provisioning for each federated identity provider. 
  • Build a connector to validate the attributes in the authentication response and compare them against the required set of attributes. The required set of attributes can be defined via a claim dialect. If there is a mismatch between the attributes from the authentication response and the required set of attributes, this connector will redirect the user to a web page (deployed under the authenticationendpoints web app) to collect the missing attributes from the user. 
  • Engage the attribute checker connector from the previous step to an authentication step after the step, which includes the federated authenticator. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Accessing a SOAP service secured with WS-Trust from a web app on behalf of the logged-in user (SAML 2.0)

  • The business users need to access multiple service providers supporting SAML 2.0 web SSO-based authentication. 
  • Once the user logs into the web app, the web app needs to access a SOAP service secured with WS-Trust on behalf of the logged in user.
  • Deploy WSO2 Identity Server as an identity provider, and register all the service providers (with SAML 2.0 as the inbound authenticator). Further, it will also act as a Security Token Service(STS) based on WS-Trust. 
  • Deploy the SOAP service in WSO2 App Server and secure it with WS-Security Policy to accept a SAML token as a supporting token. 
  • Deploy the web app in the WSO2 App Server. 
  • Write a filter and deploy it in the WSO2 App Server, which will accept a SAML token coming from the web SSO flow and build a SOAP message embedding that SAML token. 
  • Since we are using SAML bearer tokens here, all the communication channels that carry the SAML tokens must be over TLS. 
  • Once the web app gets the SAML token, it will build a SOAP message with the security headers out of it (embedding the SAML token inside ActAs element of the RST) and talk to the WSO2 Identity Server’s STS endpoint to get a new SAML token to act-as the logged in user, when talking to the secured SOAP service. 
  • WSO2 App Server will validate the security of the SOAP message. It has to trust the WSO2 Identity Server, who is the token issuer. 
  • Products: WSO2 Identity Server 3.0.0+, WSO2 Application Server 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Self-signup during the authentication flow with service provider specific claim dialects

  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • When the user gets redirected to the identity provider for authentication, the identity provider should provide a page with the login options and also an option to sign up. 
  • If the user picks the sign-up option, the required set of fields for the user registration must be specific to the service provider who redirected the user to the identity provider. 
  • Upon user registration, the user must be in the locked status, and a confirmation mail has to be sent to the user’s registered email address. 
  • Upon email confirmation, the user should be prompted for authentication again and should be redirected back to the initial service provider.
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers. 
  • Customize the login web app (authenticationendpoints) deployed inside WSO2 Identity Server to give the user a signup option in addition to the login options. 
  • Follow a convention and define a claim dialect for each service provider, with the required set of user attributes it needs during the registration. The service provider name can be used as the dialect name as the convention. 
  • Build a custom /signup API, which retrieves required attributes for user registration, by passing the service provider name. 
  • Upon registration, the /signup API will use the email confirmation feature in the WSO2 Identity Server to send the confirmation mail; in addition, the /signup API also maintains the login status of the user, so upon email confirmation the user can be redirected back to the initial service provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Fine-grained access control for service providers

  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • Each service provider needs to define an authorization policy at the identity provider, to decide whether a given user is eligible to log into the corresponding service provider. 
  • For example, one service provider may have a requirement that only the admin users should be able to log in to the system after 6 PM. 
  • Another service provider may have a requirement that only the users from North America should be able to log in to the system.
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers. 
  • Build a connector, which connects to the WSO2 Identity Server’s XACML engine to perform authorization. 
  • For each service provider, that needs to enforce access control during the login flow, engage the XACML connector to the 2nd authentication step, under the Local and Outbound Authentication configuration. 
  • Each service provider, that needs to enforce access control during the login flow, creates its own XACML policies in the WSO2 Identity Server PAP (Policy Administration Point). 
  • To optimize the XACML policy evaluation, follow a convention to define a target element under each XACML policy, that can uniquely identify the corresponding service provider. 
  • Products: WSO2 Identity Server 5.0.0+ 
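The target convention in the last step can be sketched as follows. This is an illustrative XACML 3.0 policy skeleton, not a WSO2-specific one; in particular, the AttributeId used to carry the service provider name is an assumption:

```xml
<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="sp-foo-login-policy"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit"
        Version="1.0">
  <Target>
    <AnyOf>
      <AllOf>
        <!-- Match only login requests coming from the "sp-foo" service provider -->
        <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
          <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">sp-foo</AttributeValue>
          <AttributeDesignator
              Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
              AttributeId="urn:example:service-provider-name"
              DataType="http://www.w3.org/2001/XMLSchema#string"
              MustBePresent="false"/>
        </Match>
      </AllOf>
    </AnyOf>
  </Target>
  <!-- Rules enforcing this service provider's login conditions go here -->
</Policy>
```

With a target like this, the PDP skips the policy entirely for every other service provider, which is what makes the convention an optimization.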

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Page Application (SPA) proxy

  • Authenticate users to a single page application in a secure manner, via OAuth 2.0. 
  • When the SPA accesses an OAuth-secured API, the access token must be made invisible to the end user. 
  • When the SPA accesses an OAuth-secured API, the client (or the SPA) must be authenticated in a legitimate manner.
  • There are multiple ways to secure an SPA and this presentation covers some options: 
  • This explains the SPA proxy pattern, where a proxy is introduced, and the calls from the SPA will be routed through the proxy. 
  • Build an SPA proxy and deploy it in WSO2 Identity Server. A sample proxy app is available at 
  • The SPA proxy must be registered in the WSO2 Identity Server as a service provider, having OAuth inbound authenticator. 
  • To make the SPA proxy stateless, the access_token and the id_token obtained from the WSO2 Identity Server (after the OAuth flow) are encrypted and set as a cookie. 
  • Products: WSO2 Identity Server 5.0.0+ 

Nuwan BandaraDockerizing a proof of concept

A few weeks back I was working on a proof of concept to demonstrate a long-running workflow-based orchestration scenario. More about the architecture behind the PoC can be found at the WSO2 solutions architecture blog. But this post is not about the architecture; it is simply about delivering the proof of concept in a completely contained environment.

What inspired me to do this: in my day-to-day job I happen to show how enterprise solution architectures work in the real world. I cook up a use case on my machine, often with a couple of WSO2 products (like the ESB/DSS/DAS/API-M) and some other non-WSO2 ones, then demonstrate the setup to whoever is interested. I always thought it would be cool if the audience could run it themselves after the demo without any hassle (they can run it even now with a bit of work 😉 but that's time someone can easily save). The other motivation is to save my own time by re-using the demos I've built.

Docker ! Docker ! Docker !

I’ve been playing with docker on and off, thought its a cool technology and I found that creating and destroying containers in a matter of milliseconds is kind of fun😀 okey jokes aside I was looking for a way to do something useful with Docker, and finally found inspiration and the time.

I took the orchestration PoC (a bulk-ordering workflow for book publishers) as the base model that I am going to Dockerize.


I made sure to cover my bases first by making everything completely remotely deployable. If I am to build a completely automated deployment and start-up process, I shouldn't configure any of the products from the management UI.

The artifacts:

All the ESB and DSS artifacts went into a .car file {} with ESB and DSS profiles respectively. The long-running orchestration was developed as a BPMN workflow and exported to a .bar file {}

That's pretty much all I had to do. After that it's more or less a bit of dev-ops work and automation. Some of the decisions I took along the dev-ops process were:

[1] Not to create Docker images (and maybe push them to Docker Hub) from WSO2 product bundles + artifacts. Why: mainly because the images would get too heavy (~700 MB).

[2] Use docker-compose instead of something like Vagrant. Why: I was a bit lazy to explore something new and also wanted to stick to one tool for simplicity. docker-compose also served the purpose.

With decision #1, I wrote a Dockerfile for each of the products so anyone can build an image with a simple command. The bookshop PoC touches WSO2 ESB, DSS and BPS, and additionally, to store the book orders, I created a database in an external MySQL server. That's four Dockerfiles altogether.

ESB Dockerfile gist:

Once that's done, docker-compose does the wiring.

Docker compose definitions gist:

The composition will build all the images, expose the ports to the host machine and start-up all the containers in an isolated docker network.

# build and start the containers
$ docker-compose up -d

# Stop and kill all the containers with their images
$ docker-compose down --rmi all

That's about it. The project can be found on GitHub and you can find instructions to run the PoC in the readme file.

I intend to build all my PoC demos in the above fashion, so unless I get really lazy 😀 I should be publishing Docker compositions more often.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Mobile identity provider proxy

  • A company builds a set of native mobile apps and deployed into company owned set of devices, which are handed over to its employees. 
  • When a user logs into one native mobile app, he/she should automatically log into all the other native apps, without further requests to provide his/her credentials. 
  • No system browser in the device.
  • Build a native mobile app, which is the identity provider (IdP) proxy and deploy it in each device along with all the other native apps. 
  • This IdP proxy must be registered with the WSO2 Identity Server, as a service provider, having OAuth 2.0 as the inbound authenticator. 
  • Under the IdP proxy service provider configuration in WSO2 Identity Server, make sure to enable only the resource owner password grant type. 
  • Each of the native app must be registered with the WSO2 Identity Server as a service provider, having OAuth 2.0 as the inbound authenticator and make sure only the implicit grant type is enabled. 
  • Under the native app service provider configuration in WSO2 Identity Server, make sure to have oauth-bearer as a request-path authenticator, configured under Local and Outbound Authentication configuration. 
  • The IdP proxy app has to provide a native API for all the other native apps. 
  • When a user wants to login into an app, the app has to talk to the login API of the IdP proxy app passing its OAuth 2.0 client_id. 
  • The IdP proxy app should first check whether it has a master access token; if not, it should prompt the user to enter a username/password and then, using the password grant type, talk to the WSO2 Identity Server’s /token API to get the master access token. The IdP proxy must securely store the master access token, which is per user. If the master access token is already there, the user does not need to authenticate again. 
  • Now, using the master access token (as the Authorization Bearer header), the IdP proxy app should talk (HTTP POST) to the /authorize endpoint of the WSO2 Identity Server, following the implicit grant type with the client_id provided by the native app. Also, use openid as the scope. 
  • Once the access token and the ID token are returned from the WSO2 Identity Server, the IdP proxy will return them back to the native app, who did the login API call first. 
  • Products: WSO2 Identity Server 5.2.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Federation Proxy

  • All the inbound requests for all the service providers inside the corporate domain must be intercepted centrally and enforce authentication via an Identity Hub. 
  • Users can authenticate to the hub, via different identity providers. 
  • All the users, who authenticate via the hub must be provisioned locally. 
  • One user can have multiple accounts with multiple identity providers connected to the hub and when provisioned into the local system, the user should be given the option to map or link all his/her accounts and then pick under which account he/she needs to login into the service provider.
  • Deploy WSO2 App Manager to front all the service providers inside the corporate domain. 
  • Configure WSO2 Identity Server as the trusted identity provider of the WSO2 App Manager. Together, the Identity Server and App Manager setup is what we call the federation proxy. 
  • Introduce the identity provider running at the hub (it can be another WSO2 Identity Server as well) as a trusted identity provider to the WSO2 Identity Server running as the proxy. 
  • Configure JIT provisioning against the hub identity provider, configured in WSO2 Identity Server. 
  • For all the service providers, the initial authentication will happen via the hub identity provider; once that is done, configure a connector in the second step to do the account linking. 
  • Products: WSO2 Identity Server 5.0.0+, WSO2 App Manager 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Enforce password reset for expired passwords during the authentication flow

  • During the authentication flow, check whether the end-user's password has expired and, if so, prompt the user to change the password.
  • Configure multi-step authentication for the corresponding service provider. 
  • Engage basic authenticator for the first step, which accepts username/password from the end-user. 
  • Write a handler (a local authenticator) and engage it in the second step; it will check the validity of the user's password and, if it has expired, prompt the user to reset the password. 
  • Sample implementation: 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Fine-grained access control for APIs

  • Access to the business APIs must be done in a fine-grained manner. 
  • Only users belonging to the business-admin role should be able to access the foo and bar APIs during a weekday from 8 AM to 5 PM.
  • Setup the WSO2 Identity Server as the key manager of the WSO2 API Manager. 
  • Write a scope handler and deploy it in the WSO2 Identity Server to talk to its XACML engine during the token validation phase. 
  • Create XACML policies using the WSO2 Identity Server’s XACML policy wizard to address the business needs. 
  • Products: WSO2 Identity Server 5.0.0+, API Manager, Governance Registry 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Claim Mapper

  • The claim dialect used by the service provider is not compatible with the default claim dialect used by the WSO2 Identity Server. 
  • The claim dialect used by the federated (external) identity provider is not compatible with the default claim dialect used by the WSO2 Identity Server.
  • Represent all the service providers in the WSO2 Identity Server and configure the corresponding inbound authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • For each service provider define custom claims and map them to the WSO2 default claim dialect. 
  • Represent all the identity providers in the WSO2 Identity Server and configure corresponding federated authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • For each identity provider define custom claims and map them to the WSO2 default claim dialect. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Identity federation between service providers and identity providers with incompatible identity federation protocols

  • The business users need to login into a SAML service provider with an assertion coming from an OpenID Connect identity provider. 
  • In other words, the user is authenticated against an identity provider, which only supports OpenID Connect, but the user needs to login into a service provider, which only supports SAML 2.0.
  • Represent all the service providers in the WSO2 Identity Server and configure the corresponding inbound authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • Represent all the identity providers in the WSO2 Identity Server and configure corresponding federated authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • Associate identity providers with service providers, under the Service Provider configuration, under the Local and Outbound Authentication configuration, irrespective of the protocols they support. 
  • Products: WSO2 Identity Server 5.0.0+

Chandana NapagodaWSO2 Governance Registry: Support for Notification

With WSO2 Governance Registry 5.x releases, you can now send rich email messages when an email notification is triggered in WSO2 Governance Registry, using the newly added email templating support. In the default implementation, an administrator or any privileged user can store email templates in the “/_system/governance/repository/components/org.wso2.carbon.governance/templates” collection, and the template name must be the lower-case form of the event name.

For example, if you want to customize the “PublisherResourceUpdated” event, the template file should be: “/_system/governance/repository/components/org.wso2.carbon.governance/templates/publisherresourceupdated.html”.

If you do not want to define event-specific email templates, you can add a template called “default.html”.

By default, the $$message$$ placeholder in email templates will be replaced with the message generated by the event.
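As a rough illustration of that substitution (the template and event message below are made up, not the actual defaults):

```python
# Hypothetical template and event message, for illustration only.
template = "<html><body>$$message$$</body></html>"
event_message = "Resource /trunk/app.wsdl was updated."

# The $$message$$ placeholder is replaced with the generated message.
email_body = template.replace("$$message$$", event_message)
print(email_body)
```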

How can I plug my own template mechanism and modify the message?

You can override the default implementation by adding a new custom implementation. First, you have to create a Java project. Then you have to implement the “NotificationTemplate” interface and override the “populateEmailMessage” method, where you can write your own implementation.

After that, you have to add the compiled JAR file to WSO2 Governance Registry. If it’s an OSGi bundle, add it to the <GREG_HOME>/repository/components/dropins/ folder; otherwise, the jar needs to be added to the <GREG_HOME>/repository/components/lib/ folder.

Finally, you have to add the following configuration to registry.xml file.

   <class>complete class name with package</class>

What are the notification types available in the Store, Publisher and Admin Console?

Store: StoreLifeCycleStateChanged, StoreResourceUpdated
Publisher: PublisherResourceUpdated, PublisherLifeCycleStateChanged, PublisherCheckListItemUnchecked, PublisherCheckListItemChecked

Admin Console: Please refer this documentation (Adding a Subscription)

Do I need to enable worklist for console subscriptions?

Yes, you have to enable the Worklist configuration. (Configuration for Work List)

Are notifications visible in each application?

If you have login access to the Publisher, Store and Admin Console, then you can view notifications from each of those applications. However, some notifications may have been customized to fit the context of the relevant application.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Sign On with delegated access control

  • The business users need to login into multiple service providers with single sign on via an identity provider. 
  • Some service providers may need to access backend APIs on behalf of the logged-in user. For example, a user logs into the Cute-Cup-Cake-Factory service provider via SAML 2.0 web SSO, and then the service provider (Cute-Cup-Cake-Factory) needs to access the user’s Google Calendar API on his behalf to schedule the order pickup.
  • Represent all the service providers in the WSO2 Identity Server and configure inbound authentication appropriately, either with SAML 2.0 or OpenID Connect. 
  • For each service provider that needs to access backend APIs, configure OAuth 2.0 as an inbound authenticator, in addition to the SSO protocol (SSO protocol can be SAML 2.0 or OpenID Connect). 
  • Once a user logs into the service provider, either via SAML 2.0 or OpenID Connect, use the appropriate grant type (SAML grant type for OAuth 2.0 or JWT grant type for OAuth 2.0) to exchange the SAML or the JWT token for an access token, by talking to the token endpoint of the WSO2 Identity Server 
  • Products: WSO2 Identity Server 5.0.0+, WSO2 API Manager, WSO2 Application Server 

Aruna Sujith KarunarathnaHow to Enable Asynchronous Logging with C5

In this post we are going to explore how to enable asynchronous logging on C5 based servers. More on asynchronous logging can be found here.

1. Copy the disruptor dependency to the /osgi/plugins folder. You can get the disruptor OSGi bundle from here.
2. Edit the /bin/bootstrap/org.wso2.carbon.launcher-5.1.0.jar and add the disruptor jar to the initial bundles list.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
User management upon multi-layer approval

  • All the user management operations must be approved by multiple administrators in the enterprise in a hierarchical manner. 
  • When an employee joins the company, it has to be approved by one set of administrators, while when the same employee is assigned to the sales team, that must be approved by another set of administrators.
  • Create a workflow with multiple steps. In each step specify who should provide the approval. 
  • Define a workflow engagement for user management operations and associate the above workflow with it. 
  • When defining the workflow, define the criteria for its execution. 
  • Products: WSO2 Identity Server 5.1.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Rule-based user provisioning

  • The identity admin needs to provision all the employees to Google Apps at the time they join the company. 
  • Provision only the employees belonging to the sales-team to Salesforce.
  • Represent Salesforce and Google Apps as provisioning identity providers in the WSO2 Identity Server. 
  • Under Salesforce Provisioning Identity Provider Configuration, under the Role Configuration, set sales-team as the role for outbound provisioning. 
  • Under the Resident Service Provider configuration, set both Salesforce and Google Apps as provisioning identity providers for outbound provisioning. 
  • Products: WSO2 Identity Server 5.0.0+ 

Nuwan BandaraDebuging & troubleshooting WSO2 ESB

I am asked this question almost every time I do an ESB demonstration, hence I thought of documenting the answer for a wider audience.

WSO2 ESB is a mediation & an orchestration engine for enterprise integrations, you can read more about the product at WSO2 docs.

Building a mediation or an orchestration with multiple external services can sometimes become a tedious task. You will have to transform, clone and create messages to send to multiple external endpoints. You will have to handle the responses, and sometimes handle the communications reliably with patterns like store-and-forward. In such scenarios, being able to debug the message flow and understand the messages going out of and coming into the ESB runtime comes in very handy.

There are a couple of out-of-the-box capabilities exposed by the ESB to help the developer. The simplest is the LogMediator; you can also use the TCPMonitor to understand the messages on the wire, and if the communication is over SSL you can use the ESB's wire log dump capability.

With the log mediator you can inspect the message at each mediation stage, much like we used to debug PHP scripts back in the day with a lot of <?php echo “{statement}”; ?> statements.
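For reference, a minimal log mediator dropped into a Synapse sequence looks like this; the property name and value are just placeholders you would pick to mark the mediation stage:

```xml
<log level="full">
   <property name="position" value="after-transform"/>
</log>
```

Placing such a mediator before and after a transformation lets you compare the message at each stage in the carbon log.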

The wire logging capability that's built into the ESB provides all the information about messages coming into the runtime and going out from the runtime.

You can enable wire logs by editing the relevant configuration file (in repository/conf) or through the ESB Management Console.

More information about wire logs can be found at following post –

Finally, if you want to put break points and understand what really happens to the message, you can debug with the ESB source. For ESB 4.9.0 it's as follows; for any later or upcoming releases the source link will change.

[1] Download the mediation engine source from

[wso2 specific mediators]

[2] Build the source
[3] Open it in Eclipse as a maven project
[4] Setup Eclipse with remote debug

[5] Start the ESB in debug mode

sh wso2esb-4.9.0/bin/ -debug 8000

[6] Put a break point at one of the mediators you have in the sequence (for me it's the log mediator, just to test as follows)

[7] Deploy the sequence you are trying out and send a message; that should hit the breakpoint in Eclipse

[NEWS] We are also working on a graphical ESB debugging tool for WSO2 ESB 5. So folks, the future is bright, stay tuned :)

Happy debugging !!!

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Login to multiple service providers with the current Windows login session

  • The business users need to login to multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • Some service providers are on-premise while others are in the cloud. 
  • A user logs into his Windows machine and should be able to access any service provider without further authentication.
  • Deploy WSO2 Identity Server over the enterprise active directory as the user store. 
  • Represent all the service providers in the WSO2 Identity Server and configure the corresponding inbound authenticator (SAML, OpenID, OIDC, WS-Federation). 
  • For each service provider, under local and outbound authentication configuration, enable IWA local authenticator. 
  • In each service provider, configure the WSO2 Identity Server as the trusted identity provider. For example, if Salesforce is a service provider, in Salesforce, add WSO2 Identity Server as a trusted identity provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Bhathiya Jayasekara[WSO2 APIM] Setting up API Manager Distributed Setup with Puppet Scripts

In this post we are going to use puppet to set up a 4-node API Manager distributed setup. You can find the puppet scripts I used in this git repo.

NOTE: This blog post can be useful to troubleshoot any issues you get while working with puppet.

My puppet scripts contain the IPs of the nodes I used, listed below. You have to replace them with yours.

Puppet Master/MySQL :
Key Manager:

That's just some information. Now let's start setting up each node, one by one.

1) Configure Puppet Master/ MySQL Node 

1. Install NTP, Puppet Master and MySQL.

> sudo su
> ntpdate ; apt-get update && sudo apt-get -y install ntp ; service ntp restart
> cd /tmp
> wget
> dpkg -i puppetlabs-release-trusty.deb
> apt-get update
> apt-get install puppetmaster
> apt-get install mysql-server

2. Change hostname in /etc/hostname to puppet (This might need a reboot)

3. Update /etc/hosts with below entry. puppet

4. Download and copy directory to /etc/puppet

5. Replace IPs in copied puppet scripts. 

6. Before restarting the puppet master, clean all certificates, including the puppet master's certificate, which has its old DNS alt names.

> puppet cert clean --all

7. Restart puppet master

> service puppetmaster restart

8. Download and copy jdk-7u79-linux-x64.tar.gz to /etc/puppet/environments/production/modules/wso2base/files/jdk-7u79-linux-x64.tar.gz

9. Download and copy to 

10. Download and copy directory to /opt/db_scripts

11. Download and copy file to /opt/ (Copy required private keys as well, to ssh to puppet agent nodes)

12. Open and update script as required, and set read/execution rights.

> chmod 755

2) Configure Puppet Agents 

Repeat these steps in each agent node.

1. Install Puppet.

> sudo su
> apt-get update
> apt-get install puppet

2. Change hostname in /etc/hostname to apim-node-1 (This might need a reboot)

3. Update /etc/hosts with puppet master's host entry. puppet

4. Download and copy file to /opt/

5. Set execution rights.

> chmod 755 

6. Download and copy file to /opt/deployment.conf (Edit this as required)

3) Execute Database and Puppet Scripts

Go to /opt in the puppet master and run ./ (or you can run it in each agent node.)

If you have any questions, please post below.


Pushpalanka JayawardhanaUser Store Count with WSO2 Identity Server 5.2.0

This post provides details on one of the new functionalities introduced with WSO2 Identity Server 5.2.0, to be released soon. This feature comes with a service to count the number of users based on user name patterns and claims, and also to count the number of roles matching a role name pattern in the user store. By default this supports only JDBC user store implementations, and it provides the freedom to extend the functionality to LDAP user stores or any other type as well.

How to Use?

A new property named 'CountRetrieverClass' is introduced in the user store manager configuration, where we can configure the class name that carries the count implementation for a particular user store domain.

Using the Service

The functionality is exposed via a service named 'UserStoreCountService', which provides the relevant operations as below.

Separate operations are provided to get the counts for a particular user store, or for the whole user store chain, for the following functionalities.
  • Count users matching a filter for user name
  • Count roles matching a filter for role name
  • Count users matching a filter for a claim value
  • Count users matching filters for a set of claim values (e.g. the count of users whose email address ends with '' and mobile number starts with '033')


In order to extend the functionality, this interface '' should be implemented by the class, which should then be packaged into an OSGi bundle and dropped into the dropins folder within WSO2 Identity Server.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Provision federated users to a tenant

  • The business users need to login to multiple service providers via multiple identity providers. For example login to Drupal via Facebook or Yahoo! credentials. 
  • Irrespective of the service provider, need to provision federated users to a single tenant (let’s say, individual tenant).
  • Define a user store with CarbonRemoteUserStoreManager in the WSO2 Identity Server pointing to the individual tenant. 
  • Represent each federated identity provider in Identity Server. For example, represent Facebook as an identity provider in Identity Server. 
  • Enable JIT provisioning for each identity provider, and pick the user store domain (CarbonRemoteUserStoreManager) to provision users. 
  • Products: WSO2 Identity Server 5.0.0+ 

Sriskandarajah SuhothayanSensing the world with Data of Things

Henry Ford once said, “Any customer can have a car painted any colour that he wants so long as it is black!”, but that era is long gone. In the current context, people seek personalized treatment. Imagine calling customer service: every time you call, you have to go through all the standard questions, and they don't have a clue why you might be calling, or whether you have called before. In the case of shopping, even if you are a regular customer with a platinum or gold membership card, you will not get any special treatment at the store; maybe presenting the card at the cashier can get you a discount. 

What’s missing here? They don’t know enough about the customer to give better service. Hence the simple remedy for the above issue is building customer profiles. This can be done with the historical data you might have about the customer. Next, you need to understand and react to the context in which the customer operates, such as whether he is in a hurry or has contacted you before. Finally, you have to react in real time to give the best customer satisfaction. Therefore, to provide the best customer satisfaction, identifying the context is a key element, and in the present world the best way of identifying the customer context is via the devices your customer has and the sensors around him, which is indeed the Internet of Things (IoT).

IoT is not a new thing; we have had lots of M2M systems that monitored and controlled devices in the past, but with IoT we have more devices with sensors, and each device has more sensors. IoT is an ecosystem where IoT devices should be manufactured, apps for those devices should be developed (e.g. apps for phones), users should be using those devices, and finally the devices should be monitored and managed. WSO2’s IoT Platform plays a key role in managing and providing analytics for the IoT devices in the ecosystem.

Data Types in IoT Analytics

Data from IoT devices is time bound, because these devices do continuous monitoring and reporting. With this we can do time-series processing, such as analysing energy consumption over time. OpenTSDB is a specialised database implemented for time-based processing.

Further, since IoT devices are deployed in various geographical locations, and since some of those devices move, location also becomes an important data type for IoT devices. IoT devices are usually equipped with GPS, and currently iBeacons are used when the devices are within a building. Location-based data enables geospatial processing, such as traffic planning and better route suggestions for vehicles. Geospatially optimised processing engines such as GeoTrellis are developed especially for these types of use cases.

IoT is Distributed

Since IoT is distributed by nature, components of the IoT network constantly get added and removed. Further, IoT devices connect to the IoT network through all types of communication networks, from weak 3G networks to ad-hoc peer-to-peer networks, and they use various communication protocols such as Message Queuing Telemetry Transport (MQTT), the Constrained Application Protocol (CoAP), ZigBee and Bluetooth Low Energy (BLE). Due to this, the data flow of the IoT network continuously gets modified and repurposed. As the data load varies dynamically in the IoT network, an on-premise deployment will not be suitable, and hence we have to move towards public or hybrid cloud based deployments. IoT has an event-driven architecture to accommodate its distributed nature, where its sensors report data as continuous event streams working in an asynchronous manner.

Analytics for IoT

IoT usually produces perishable data, whose value drastically degrades over time. This underlines the importance of Realtime Analytics in IoT. With Realtime Analytics, temporal patterns, logical patterns, KPIs and thresholds can be detected and immediately alerted to the respective stakeholders, such as raising an alarm when a temperature sensor hits a limit, or notifying via the car dashboard if the tire pressure is low. Systems such as Apache Storm, Google Cloud Dataflow & WSO2 CEP are built for implementing such Realtime Analytics use cases.

Realtime alone is not enough! We should be able to understand how the current situation deviates from the usual behaviour, and to do so we have to process historical data. With Batch Analytics, periodic summarisations and analytics can be performed on historic data, against which we can compare in realtime. The average temperature in a room last month and the total power usage of the factory last year are some example summarisations that can be done using systems like Apache Hadoop & Apache Spark, on data stored in scalable databases such as Apache Cassandra and Apache HBase.
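A toy, pure-Python stand-in for that kind of batch summarisation (the rooms and readings below are made up; at scale this shape of job would run on Spark over the stored data):

```python
from collections import defaultdict

# Hypothetical historic readings: (room, temperature) pairs.
readings = [("room-1", 20.0), ("room-1", 22.0), ("room-2", 18.0), ("room-1", 21.0)]

# Group-and-average: the shape of a typical batch summarisation job.
totals = defaultdict(lambda: [0.0, 0])
for room, temp in readings:
    totals[room][0] += temp
    totals[room][1] += 1
averages = {room: total / count for room, (total, count) in totals.items()}
print(averages)  # -> {'room-1': 21.0, 'room-2': 18.0}
```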

OK, with Batch Analytics we defined the thresholds, and with Realtime Analytics we detected and alerted on threshold violations. Notifying about violations may prevent disasters, but it does not help stop similar issues from arising again. To do that we need to investigate the historical data, identify the root cause of the issue and eliminate it. This can be done through Interactive Analytics: with its ad-hoc queries, it enables us to search the data set for how the system and all related entities behaved before the alert was raised. Apache Drill, Apache Lucene and indexed storage systems such as Couchbase are some systems that provide Interactive Analytics.

Rather than being reactive, staying a step ahead by predicting issues & opportunities brings great value. This can be achieved through Predictive Analytics, which helps in scenarios such as proactive maintenance, fraud detection and health warnings. Systems such as Apache Mahout, Apache Spark MLlib, Microsoft Azure Machine Learning, WSO2 ML & Skytree can help us build Predictive Analytics models.

An Integrated Solution for IoT Analytics

From the above technologies, by selecting WSO2 Siddhi, Apache Storm, Apache Spark, Apache Lucene, Apache HBase, Apache Spark MLlib and many other open source software components, WSO2 has built an integrated data analytics solution that supports Realtime, Batch, Interactive and Predictive analytics, called WSO2 Data Analytics Server.

Issues in IoT Analytics

Extreme Load

Compared with the scale of the data produced by sensors, centralised analytics platforms cannot scale, and even if they can, it will not be cost effective. Hence we should ask whether we need to process and/or store all the data produced by the sensors. In most cases we only need the aggregations over time, trends that exceed thresholds, outliers, events matching a rare condition, and times when the system is unstable or changing. For example, from a temperature sensor we only need to send readings when there is a change in temperature; there is no point in periodically sending the same value. This directs us to optimise sensors or data collection points to do local optimisations before publishing data. This helps in quick detection of issues, as part of the data is already processed locally, and in instant notifications, since decisions are also taken at the edges. Taking decisions at the edge can be implemented with the help of complex event processing libraries such as WSO2 Siddhi and Esper.
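A toy sketch of that edge-side idea: suppress readings that haven't changed meaningfully before publishing. The delta and sample values are arbitrary choices for illustration.

```python
def filter_readings(samples, delta=0.5):
    """Publish a reading only when it differs from the last published
    value by at least `delta`; everything else stays at the edge."""
    published, last = [], None
    for value in samples:
        if last is None or abs(value - last) >= delta:
            published.append(value)
            last = value
    return published

# Six raw samples collapse to three published readings.
print(filter_readings([20.0, 20.1, 20.2, 25.0, 25.1, 20.0]))  # -> [20.0, 25.0, 20.0]
```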


Due to the distributed nature of IoT, data produced can be duplicated, arrive out of order, go missing or even be wrong.

Redundant sensors & network latency can introduce duplicated events and out-of-order event arrival. This makes temporal event processing, such as Time Windows & Pattern Matching, difficult. These are very useful for use cases such as fraud detection and Realtime Soccer Analytics (based on the DEBS 2013 dataset), where we built a system that monitors the soccer players and the ball and identifies ball kicks, ball possession, shots on goal & offside. Algorithms based on K-Slack can help to order events before processing them in realtime.
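The reordering idea can be sketched as follows: buffer events in a priority queue and emit the oldest once the buffer holds more than K events. This is a simplification of K-Slack, which actually bounds slack by timestamp difference rather than a fixed count; the events are invented for illustration.

```python
import heapq

def reorder(events, k):
    """Emit (timestamp, payload) events in timestamp order, tolerating
    out-of-order arrival within a buffer of k events."""
    buf, ordered = [], []
    for ts, payload in events:
        heapq.heappush(buf, (ts, payload))
        if len(buf) > k:
            ordered.append(heapq.heappop(buf))
    while buf:
        ordered.append(heapq.heappop(buf))
    return ordered

# Events 2 and 4 arrive late but are emitted in order.
arrivals = [(1, "kick"), (3, "pass"), (2, "kick"), (5, "shot"), (4, "pass")]
ordered = reorder(arrivals, k=2)
print(ordered)
```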

Due to network outages, data produced by IoT sensors can go missing. In these situations, using complementary sensor readings is very important, where one of those sensor values is some sort of aggregation done at the edge that helps us approximate the missing sensor values. For example, when monitoring electricity we can publish both Load and Work readings; when some events are missed, from a later Work event reading we will be able to approximate the Load readings that should have arrived during the outage. The other alternative is using fault-tolerant data streams such as Google MillWheel.

Further, at times sensor readings won't be correct; this can be due to various reasons such as sensor quality and environmental noise. In such situations we can use Kalman filtering to smooth consecutive sensor readings for a better approximation. These types of issues are quite common when we use iBeacons for location sensing.
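A one-dimensional Kalman-style smoother gives the flavour; the noise variances below are illustrative guesses, not tuned values.

```python
def kalman_smooth(readings, process_var=1e-3, meas_var=0.25):
    """Smooth a sequence of noisy scalar readings with a 1-D Kalman filter."""
    estimate, error = readings[0], 1.0
    smoothed = [estimate]
    for z in readings[1:]:
        error += process_var                # predict: uncertainty grows
        gain = error / (error + meas_var)   # update: weigh the new reading
        estimate += gain * (z - estimate)
        error *= 1 - gain
        smoothed.append(estimate)
    return smoothed

# The noise spike at 35.0 is damped instead of being taken at face value.
smoothed = kalman_smooth([20.0, 20.4, 19.6, 20.2, 35.0, 20.1])
print(smoothed)
```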

Visualisation of IoT data

Visualisation is one of the most important aspects of effective analytics, and with Big Data and IoT, visualisation becomes even more complicated. Per-device & summarisation views are essential, and beyond that users should be able to visualise device groups based on various categories such as device type, location, device owner type, deployed zone and many more. Since these categories are dynamic and each person monitoring the system has various personal preferences, a composable & customisable dashboard is essential. Further, charts and graphs should be able to visualise the huge stored data, where sampling & indexing techniques can be used for better responsiveness.

Communicating with devices

In IoT, sending a command/alert to a device is complicated; to do so we have to use client-side polling based techniques. Here we store the data that needs to be pushed to the client in a database or queue, and expose it via secured APIs (through systems like WSO2 API Manager).
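The polling pattern reduces to a server-side queue that the device drains on each poll; the device name and command below are hypothetical.

```python
from collections import deque

# Commands destined for a device wait in a server-side queue; the device
# pulls them over a secured API instead of the server pushing to it.
pending = deque()
pending.append({"device": "sensor-42", "command": "reboot"})

def poll_commands(q, max_batch=10):
    """Return what a device-facing poll endpoint would hand back."""
    batch = []
    while q and len(batch) < max_batch:
        batch.append(q.popleft())
    return batch

commands = poll_commands(pending)
print(commands)
```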

Reference Architecture for IoT Analytics

Here, data is collected through a message broker such as MQTT and immediately written to disk by WSO2 Data Analytics Server (DAS). In the meantime, the collected data is cleaned in realtime; this cleaned data is also persisted, and in parallel it is fed into realtime event processing, which in turn sends alerts and provides realtime visualisations. The stored clean data is used by WSO2 Machine Learner (ML) to build machine learning models and deploy them in WSO2 DAS for realtime predictions. Further, the stored clean data is also used by Spark to run batch analytics producing summarisations, which are then visualised in dashboards.

It was a pleasure for me to present “Sensing the world with Data of Things” at Structure Data 2016, San Francisco. Please find the slides below. 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Multi-factor authentication for WSO2 Identity Server management console

  • Enable MFA for the WSO2 Identity Server Management Console. 
  • In other words, the Identity Server’s Management Console itself must be protected with MFA.
  • Introduce WSO2 Identity Server as a service provider to itself. 
  • Under the service provider configuration, configure multi-step authentication having authenticators, which support MFA in each step. 
  • Enable SAML SSO carbon authenticator through the corresponding configuration file. 
  • How-to: 
  • Products: WSO2 Identity Server 5.0.0+ 

Chanaka FernandoWSO2 ESB Passthrough Transport in a nutshell

If you have ever used WSO2 ESB, you might already know that it is one of the highest performing open source ESB solutions in the integration space. The secret behind its performance is the so-called Pass Through Transport (PTT) implementation, which handles the HTTP requests. If you are interested in learning about PTT from scratch, you can refer to the following article series written by Kasun Indrasiri.

If you read through the above mentioned posts, you can get a good understanding of the concepts and the implementation. But one thing which is harder to do is to keep all the diagrams in your memory. It is not impossible, but it is a little bit hard for a person with an average brain. I have tried to draw a picture to capture all the required information related to the PTT. Here is my drawing of the WSO2 ESB PTT.

WSO2 ESB Passthrough Transport

If you look at the above picture, it contains 3 main features of the PTT.
  • The green boxes at the edges of the middle box contain the different methods executed from the http-core library towards the Source handler of the ESB when there are any new events.
  • The orange boxes represent the internal state transitions of the PTT, starting from REQUEST_READY up until RESPONSE_DONE.
  • Light blue boxes depict the objects created within the lifecycle of a single message execution flow, how those objects interact, and at which point they get created.
In addition to the above 3 main features, the axis2 engine and synapse engine are also depicted, with purple and shiny purple boxes. These components are treated as black boxes, without considering the actual operations that happen within them.

Chanaka FernandoComparison of asynchronous messaging technologies with JMS, AMQP and MQTT

Messaging has been the fundamental communication mechanism that has succeeded all over the world. Whether it is human to human, machine to human or machine to machine, messaging has been the single common method of communication. There are 2 fundamental mechanisms we use to exchange messages between 2 (or more) parties.

  • Synchronous messaging
  • Asynchronous messaging

Synchronous messaging is used when the message sender expects a response to the message within a specified time period and waits for that response before carrying out the next task. Basically, the sender “blocks” until the response is received.

Asynchronous messaging means that the sender does not expect an immediate response and does not “block” waiting for one. A response may or may not arrive, but the sender will carry out its remaining tasks.
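The contrast can be sketched in a few lines of Java. The BlockingQueue below is only a stand-in for a real message broker (an assumption for illustration): the sender enqueues a message and immediately carries on, while the consumer picks it up whenever it is ready.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncDemo {
    // The sender hands the message to the queue and returns immediately
    // ("fire and forget"); it never blocks waiting for a reply.
    static void send(BlockingQueue<String> queue, String message) {
        queue.offer(message);
    }

    // A consumer picks up the message whenever it is ready to process it.
    static String receive(BlockingQueue<String> queue) {
        return queue.poll(); // returns null if nothing has arrived yet
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        send(queue, "order-123");           // sender carries on with other work here
        System.out.println(receive(queue)); // consumer processes it later
    }
}
```

In a synchronous call, `send` would not return until the reply came back; here the sender and consumer are decoupled in time.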

Out of the above mentioned mechanisms, asynchronous messaging has been the widely used one when it comes to machine to machine communication, where 2 computer programs talk to each other. With the hype of the microservices architecture, it is quite evident that we need an asynchronous messaging model to build our services.

This has been a fundamental problem in software engineering, and different people and organizations have come up with different approaches. I will describe 3 of the most successful asynchronous messaging technologies which are widely used in enterprise IT systems.

Java Messaging Service (JMS)

JMS has been one of the most successful asynchronous messaging technologies available. With the growth of Java adoption in large enterprise applications, JMS has been the first choice for enterprise systems. It defines the API for building messaging systems.

Here are the main characteristics of JMS.
  • Standard messaging API for the Java platform
  • Interoperability is only within Java and JVM languages like Scala and Groovy
  • Does not define the wire level protocol
  • Supports 2 messaging models with queues and topics
  • Supports transactions
  • Defines the message format (headers, properties and body)

Advanced Message Queueing Protocol (AMQP)

JMS was awesome and people were happy with it. Microsoft came up with NMS (.NET Messaging Service) to support their platform and programming languages, and it was working fine. But then comes the problem of interoperability. How can 2 programs written in 2 different programming languages communicate with each other over asynchronous messaging? Here comes the requirement to define a common standard for asynchronous messaging. There was no standard wire level protocol with JMS or NMS. Both will run on any wire level protocol, but the API is bound to the programming language. AMQP addressed this issue and came up with a standard wire level protocol, and many other features to support the interoperability and rich messaging needs of modern applications.

Here are the main features of AMQP.

  • Platform independent wire level messaging protocol
  • Consumer driven messaging
  • Interoperable across multiple languages and platforms
  • Defined at the wire level, not as a language API
  • Has 5 exchange types: direct, fanout, topic, headers, system
  • Buffer oriented
  • Can achieve high performance
  • Supports long lived messaging
  • Supports classic message queues, round-robin, store and forward
  • Supports transactions (across message queues)
  • Supports distributed transactions (XA, X/Open, MS DTC)
  • Uses SASL and TLS for security
  • Supports proxy security servers
  • Metadata allows controlling the message flow
  • Last Value Queue (LVQ) not supported
  • Client and server are equal (symmetric)
  • Extensible

Message Queueing Telemetry Transport (MQTT)

Now we have JMS for Java based enterprise applications and AMQP for all other application needs. Why do we need a 3rd technology? It is specifically for the small guys. Devices with less computing power cannot deal with all the complexities of AMQP; rather, they want a simplified but interoperable way to communicate. This was the fundamental requirement behind MQTT, but today MQTT is one of the main components of the Internet of Things (IoT) ecosystem.

Here are the main features of the MQTT.

  • Stream oriented, low memory consumption
  • Designed to be used by small dumb devices sending small messages over low-bandwidth networks
  • No long lived store and forward support
  • Does not allow fragmented messages (hard to send large messages)
  • Supports publish-subscribe for topics
  • No transactional support (only basic acknowledgements)
  • Messaging is effectively ephemeral (short lived)
  • Simple username/password based security, without enough entropy
  • No connection security supported
  • Message is opaque
  • Topic is global (one global namespace)
  • Ability to support Last Value Queue (LVQ)
  • Client and server are asymmetric
  • Not possible to extend

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
JIT provision users to cloud service providers

  • The company foo has an account with the bar cloud service provider (it can be Google Apps, Salesforce, Workday). 
  • The company foo trusts employees from the company zee to login into the bar cloud service provider, under the foo account. 
  • For example, foo company wants the users from company zee to login into its Google Apps domain.
  • Introduce bar as a service provider (bar-sp) to the WSO2 Identity Server running at foo. 
  • Introduce bar as a provisioning identity provider (bar-idp) to the WSO2 Identity Server, and configure the provisioning protocol as supported by bar. For example, if bar is Salesforce, then one can pick the Salesforce provisioning connector. 
  • Introduce the company zee as an identity provider to the WSO2 Identity Server running at foo, and enable JIT provisioning. 
  • Under the bar-sp service provider configuration, under local and outbound authentication configuration, select zee as a federated identity provider. This means a user who wants to log in to bar-sp will be redirected to the zee identity provider for authentication. 
  • Under the bar-sp service provider configuration, under outbound provisioning configuration, select bar-idp as a provisioning identity provider. 
  • Introduce the WSO2 Identity Server running at foo as a trusted identity provider to the bar cloud service provider. For example, in Salesforce, add WSO2 Identity Server as a trusted identity provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Dakshitha RatnayakeWSO2 API Manager - Basic Functionality Flow Diagram

For more information check out the WSO2 API Manager here

Srinath PereraWalking the Microservices Path towards Loose coupling? Look out for these Pitfalls

(image credit: Wiros from Barcelona, Spain)

Microservices are the new architecture style of building systems using simple, lightweight, loosely coupled services that can be developed and released independently of each other.

If you need to know the basics, read Martin Fowler’s post. If you would like to compare it with SOA, watch Don Ferguson’s talk. Also, Martin Fowler has written about the “trade-offs of microservices” and “when it is worth doing microservices”, which let you decide when it is useful.

Let’s say that you heard, read, and got convinced about microservices. If you are trying to follow the microservices architecture, there are a few practical challenges. This post discusses how you can handle some of them.

No Shared Database(s)

Each microservice should have its own database(s), and data MUST NOT be shared via a database. This rule removes a common cause of tight coupling between services. For example, if two services share the same database, the second service will break when the first service changes the database schema, so the teams will have to talk to each other.

I think this rule is a good one, and should not be broken. However, there is a problem. If two services share the same data (e.g. bank account data, a shopping cart) and need to update the data transactionally, the simplest approach is to keep both in the same database and use database transactions to enforce consistency. Any other solution is hard.

Solution 1: If updates happen only in one microservice (e.g. a loan approval process checking the balance), you can use asynchronous messaging (a message queue) to share data.

Solution 2: If updates happen in both services, you can either consider merging the two services or use transactions. The post Microservices: It’s not (only) the size that matters, it’s (also) how you use them describes the first option. The next section describes transactions in detail.

Handling Consistency of Updates

You will run into scenarios where you will update the data from multiple places. We discussed an example in the earlier section. (If you update the data only from one place, we have already discussed how to do it.)

Please note that this kind of use case is typically solved using transactions. However, you can sometimes solve the problem without transactions. There are several options.

Put all updates to the same Microservice

When possible, avoid multiple updates crossing microservice boundaries. However, by doing this you might end up with a few, or worse, one big monolith. Hence, sometimes this is not possible.

Use Compensation and other lesser Guarantees

As the famous post “Starbucks Does Not Use Two-Phase Commit” describes, the normal world works without transactions. For example, a barista at Starbucks does not wait until your transaction is completed. Instead, they handle multiple customers at the same time and compensate for any erroneous conditions explicitly. You can do the same, given you are willing to do a bit more work.

One simple idea is that if an operation fails, you compensate. For example, if you are shipping a book, first deduct the money, then ship the book. If the shipping fails, you return the money.
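A minimal sketch of this compensation idea, with a hypothetical in-memory balance standing in for the payment service (the names and logic are illustrative, not a real implementation):

```java
public class OrderService {
    // Hypothetical in-memory account balance, standing in for a payment service.
    static int balance = 100;

    // Stand-in for the shipping call; the flag simulates success or failure.
    static boolean ship(boolean shippingWorks) {
        return shippingWorks;
    }

    // Deduct first, then ship; if shipping fails, compensate by refunding.
    static String purchase(int price, boolean shippingWorks) {
        balance -= price;            // step 1: take the money
        if (!ship(shippingWorks)) {  // step 2: ship the book
            balance += price;        // compensation: return the money
            return "FAILED_REFUNDED";
        }
        return "SHIPPED";
    }
}
```

Note there is no transaction anywhere; the error path is handled explicitly, just like the barista does.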

Also, sometimes you can settle for eventual consistency or a timeout. Another simple idea is to give a button to the user to forcibly refresh the page if he can tell that it is outdated. Other times, you bite the bullet and settle for lesser consistency (e.g. Vogels’ post is a good starting point).

Finally, Life Beyond Distributed Transactions: An Apostate’s Opinion is a detailed discussion on all the tricks.

Having said that, there are some use cases where you must use transactions to get correct results, and those MUST use transactions. See Microservices and transactions - an update. Weigh the pros and cons and choose wisely.

Microservice Security

The old approach is for the service to authenticate by calling the database or the identity server when it receives a request.

You can replace the identity server with a microservice. That, in my opinion, leads to a big complicated dependency graph.

Instead, I like the token based approach depicted by the following figure. The idea is described in the book “Building Microservices”. Here the client (or a gateway) first talks to an identity/SSO server, which authenticates the user and issues a signed token that describes the user and his roles (e.g. you can do this with SAML or OpenID Connect). Each microservice verifies the token and authorizes the calls based on the user roles described in the token. For example, with this model, for the same query, a user with the role “publisher” might see different results than a user with the role “admin”, because they have different permissions.

You can find more information about this approach from How To Control User Identity Within Microservices?.
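To make the idea concrete, here is a deliberately simplified sketch of a signed token carrying roles, using a plain HMAC in place of a real SAML assertion or OpenID Connect token (all names and the token format are illustrative; not production code):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenAuth {

    // Issue a token of the form "user|roles|signature". A simplified stand-in
    // for what the identity/SSO server would do.
    static String issue(String user, String roles, String secret) {
        String payload = user + "|" + roles;
        return payload + "|" + sign(payload, secret);
    }

    // Each microservice can verify the token locally: check the signature,
    // then authorize based on the roles carried inside the token. No call
    // back to the identity server is needed.
    static boolean hasRole(String token, String role, String secret) {
        String[] parts = token.split("\\|");
        if (parts.length != 3) {
            return false;
        }
        String payload = parts[0] + "|" + parts[1];
        if (!sign(payload, secret).equals(parts[2])) {
            return false; // token was tampered with
        }
        return ("," + parts[1] + ",").contains("," + role + ",");
    }

    static String sign(String data, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return Base64.getEncoder().encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A real deployment would use public-key signatures so that services only need the identity server's public key, not a shared secret.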

Microservice Composition

Here, “composition” means “how to connect multiple microservices into one flow to deliver what the end user needs”.

Most compositions with SOA looked like the following. The idea is that there is a central server that runs the workflow.

Use of ESB with microservices is discouraged (e.g. Top 5 Anti-ESB Arguments for DevOps Teams). Also, you can find some counter arguments in Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?

I do not plan to get into the ESB fight in this post. However, I want to discuss whether we need a central server to do the microservices composition. There are several ways to do microservices composition.

Approach 1: Drive the flow from the Client

The following figure shows an approach to do microservices without a central server. The client browser handles the flow. The post, Domain Service Aggregators: A Structured Approach to Microservice Composition, is an example of this approach.

This approach has several problems.

  1. If the client is behind a slow network, which is the most common case, the execution will be slow, because multiple calls now need to be triggered by the client.
  2. Might add security concerns (I can hack my app to give me a loan).
  3. The above example considers a website. However, the most complex compositions often come from other use cases, so the general applicability of composition at the client is yet to be demonstrated.
  4. Where to keep the state? Can the client be trusted to keep the state of the workflow? Modeling state with REST is possible; however, it is complicated.


Choreography

Driving the flow from a central place is called orchestration. However, that is not the only way to coordinate multiple partners to carry out some work. For example, in a dance, there is no one person directing the performance. Instead, each dancer follows whoever is near to her and syncs up. Choreography applies the same idea to business processes.

Typical implementation includes an eventing system, where each participant in the process listens to different events and carries out his or her parts. Each action generates asynchronous events that will trigger participants down the stream. This is the programming model used by environments like RxJava or Node.js.

For example, let’s assume that a loan process includes a request, a credit check, a check for other outstanding loans, manager approval, and a decision notification. The following picture shows how to implement this using choreography. The request is placed in a queue. It is picked up by the next actor, who puts his results into the next queue. The process continues until it has completed.
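The queue chain above can be sketched as follows. This single-threaded sketch uses plain in-memory queues as stand-ins for broker queues; in a real system each actor is an independent process listening on its own queue (stage names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class LoanChoreography {
    // Each actor reads from its inbox queue and posts its result to the next
    // queue; no central coordinator drives the flow.
    static String process(String request) {
        Queue<String> creditCheckQ = new ArrayDeque<>();
        Queue<String> loanCheckQ = new ArrayDeque<>();
        Queue<String> approvalQ = new ArrayDeque<>();
        Queue<String> notifyQ = new ArrayDeque<>();

        creditCheckQ.add(request);                                   // request placed in the first queue
        loanCheckQ.add(creditCheckQ.poll() + " -> credit-checked");  // credit check actor
        approvalQ.add(loanCheckQ.poll() + " -> loans-checked");      // outstanding loans actor
        notifyQ.add(approvalQ.poll() + " -> approved");              // manager approval actor
        return notifyQ.poll() + " -> notified";                      // notification actor
    }
}
```

Adding a new actor is just a matter of inserting one more queue between two stages; no other actor needs to change.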

Just like a dance needs practice, choreography is complicated. For example, you do not know when the process has finished, nor will you know if an error has happened or if the process is stuck. Choreography needs a monitoring system to track progress and recover from, or notify about, errors.

On the other hand, the advantage of choreography is that it creates systems that are much more loosely coupled. For example, you can often add a new actor to the process without changing the other actors. You can find more information in Scaling Microservices with an Event Stream.

Centralized Server

The last but simplest option is a centralized server (a.k.a. orchestration).

SOAs often implemented this using two methods: an ESB or business processes. Microservices folks propose an API Gateway instead (e.g. Microservices: Decomposing Applications for Deployability and Scalability). I guess an API gateway is more lightweight and uses technologies like REST/JSON. However, in a pure architectural sense, all of those use the orchestration style.

Another variation of the centralized server is “backends for frontends” (BFF), which builds a server side API per client type (one for desktop, one for iOS, etc.). This model creates a different API per client type, optimized for each use case. See the pattern Backends For Frontends for more information.

I would suggest not going crazy with all the options here, and starting with the API gateway, as that is the most straightforward approach. You can switch to more complicated options as the need arises.

Avoid Dependency Hell

We do microservices to make it possible for each service to be released and deployed independently. To do that, you must avoid dependency hell.

Let’s consider a microservice “A” that has the API “A1” and is upgraded to API “A2”. Now there are two cases.

  1. Microservice B might send messages intended for A1 to A2. This is backward compatibility.
  2. Microservice A might have to revert to A1, while microservice C continues to send messages intended for A2 to A1.

You must handle the above scenarios somehow, and let the microservices evolve and be deployed independently. If not, all your effort will be wasted.

Often, handling these cases is a matter of adding optional parameters and never renaming or removing existing parameters. More complicated scenarios, however, are possible.
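A sketch of that rule in practice: a “tolerant reader” that treats a field added in A2 as optional with a default, so messages from both API versions are handled (the field names are hypothetical):

```java
import java.util.Map;

public class TolerantReader {
    // A2 added "currency"; older A1 messages omit it, so the reader supplies
    // a default instead of failing. Existing fields are never renamed or
    // removed, so A1 clients keep working against A2.
    static String describe(Map<String, String> message) {
        String amount = message.get("amount");                      // required since A1
        String currency = message.getOrDefault("currency", "USD");  // optional, added in A2
        return amount + " " + currency;
    }
}
```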

The post “Taming Dependency Hell” within Microservices with Michael Bryzek discusses this in detail. Ask HN: How do you version control your microservices? is also another good source.

Finally, backward and forward compatibility support should be bounded by time. For example, you can have a rule that no microservice should depend on APIs that are more than three months old. That lets the microservice developers eventually drop some of the old code paths.

Finally, I would like to rant a bit about how your dependency graph should look like in a microservices architecture.

One option is to freely invoke other microservices whenever needed. That will recreate the spaghetti architecture of the pre-ESB era. I am not a fan of that model.

The other extreme is saying that microservices should not call other microservices, and that all connections should be made via the API gateway or a message bus. This leads to a one level tree. For example, instead of microservice A calling B, we bring the result from microservice A to the gateway, which calls B with that result. This is the orchestration model. Most of the business logic now lives in the gateway. Yes, this makes the gateway fat.

My recommendation is either to go for the orchestration model or do the hard work of implementing choreography properly. Yes, I am asking not to do the spaghetti.


The goal of microservices is loose coupling. A carefully designed microservices architecture lets you implement a project using a set of microservices, where each is managed, developed, and released independently.

When you design with microservices, you must keep your eye on the prize, which is “loose coupling”. There are quite a few challenges, and this post answered the following questions.

  1. How can I handle scenarios that need to share data between two microservices?
  2. How can I evolve microservices API while keeping loose coupling?
  3. How to handle security?
  4. How to compose microservices?

Thanks! I would love to hear your thoughts.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Provision federated users by the identity provider

  • The business users need to login to multiple service providers via multiple identity providers. For example login to Drupal via Facebook or Yahoo! credentials. 
  • Irrespective of the service provider, there is a need to group federated users by the identity provider and store all the user attributes locally. For example, the identity admin should be able to find all the Facebook users or the Yahoo! users who have accessed the system (i.e. logged in to any service provider).
  • Deploy WSO2 Identity Server over multiple user stores and name each user store after the name of the corresponding identity provider. 
  • Represent each federated identity provider in Identity Server. For example, represent Facebook as an identity provider in Identity Server. 
  • Enable JIT provisioning for each identity provider, and pick the user store domain to provision users. 
  • Products: WSO2 Identity Server 5.0.0+ 

Lalaji SureshikaSharing applications and subscriptions across multiple application developers through WSO2 API Store

In WSO2 APIM versions before 1.9.0, only the application developer who logs in to the API Store can view/manage his applications and subscriptions. But a requirement arose, mainly due to the following two reasons:

-- What if a group of employees in an organization works as developers for an application? How could all the users in that group get access to the same subscription/application?

-- What if a developer who logged in to the API Store leaves the organization, and the organization wants to keep managing the subscriptions and applications he created under the organization's name, while preventing the departed developer from accessing them?

Since the above two requirements are valid from an app development organization's perspective, we introduced the feature of sharing applications and subscriptions across user groups from APIM 1.9.0 onwards. The API Manager provides the facility for users of a specific logical group to view each other's applications and subscriptions.

We have written this feature to be extensible depending on an organization's requirements, as the attribute that defines the logical user group varies between organizations. For example:

1) In one organization, sharing applications and subscriptions needs to be controlled based on user roles.

2) In another scenario, an API Store can be run as a common store across users from multiple organizations, and in that case, user grouping has to be done based on an organization attribute.

Because of the above, the flow of sharing apps/subscriptions is as below.

  1. An app developer of an organization tries to log in to the API Store.
  2. Then, in the underlying APIM code, it checks whether that API Store server’s api-manager.xml has the <GroupingExtractor> config enabled, with a custom Java class implementation defined inside it.
  3. If so, that Java class implementation runs, and a group ID for the logged in user is set.
  4. Once the app developer has logged in and tries to access the ‘My Applications’ and ‘My Subscriptions’ pages, the underlying code returns all the applications and subscriptions saved in the database based on the user’s ‘Group ID’.
With the above approach, the applications and subscriptions are shared based on the ‘Group ID’ defined by the custom implementation configured in <GroupingExtractor> of api-manager.xml.
By default, we ship a sample Java implementation, “org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl”, which considers the organization name a user gives at the time he signs up to the API Store as the group ID. The custom Java implementation extracts a claim of the user who tries to log in and uses the value specified in that claim as the group ID. This way, all users who specify the same organization name belong to the same group and can therefore view each other's subscriptions and applications.
For more information on the default implementation of sharing subscriptions and applications, please refer;
In a real organization, the requirement can be a bit different. The API Manager also provides the flexibility to change this default group ID extracting implementation.
In this blog post, I’ll explain how to write a group ID extracting extension based on the below use case.

An organization wants to share subscriptions and applications based on the user roles of the organization. They have disabled the ‘signup’ option for users accessing the API Store, and their administrator grants users the rights to access the API Store. Basically, the application developers of that organization can be categorized into two role levels.
  1. Application developers with the ‘manager’ role
These developers control the subscriptions of mobile applications deployed in the production environment through the API Store.
  2. Application developers with the ‘dev’ role
These developers control the subscriptions of mobile applications deployed in the testing environment through the API Store.
The requirement is to share the applications and subscriptions across these two roles separately.

The above can be achieved by writing a custom grouping extractor class to set the ‘Group ID’ based on user roles.
1. First, write a Java class implementing the org.wso2.carbon.apimgt.api.LoginPostExecutor interface and make it a Maven module.
2. Then implement the logic for the ‘getGroupingIdentifiers()’ method of the interface.
In this method, it has to extract two separate ‘Group ID’s: one for users having the ‘manager’ role and one for users having the ‘dev’ role. Below is a sample implementation of this method written for a similar requirement. You can find the complete code from here.

   public String getGroupingIdentifiers(String loginResponse) {
       JSONObject obj;
       String username = null;
       String groupId = null;
       try {
           obj = new JSONObject(loginResponse);
           // Extract the username from the login response
           username = (String) obj.get("user");
           /* Create a client for RemoteUserStoreManagerService and perform user management operations */
           RoleBasedGroupingExtractor extractor = new RoleBasedGroupingExtractor(true);
           // Get the roles of the user via the userStoreManager web service client
           String[] roles = extractor.getRolesOfUser(username);
           if (roles != null) { // If the user has roles
               // Match the roles to check whether he/she has the manager/dev role
               for (String role : roles) {
                   if (Constants.MANAGER_ROLE.equals(role)) {
                       // Set the group id as the role name
                       groupId = Constants.MANAGER_GROUP;
                   } else if (Constants.ADMIN_ROLE.equals(role)) {
                       // Set the group id as the role name
                       groupId = Constants.ADMIN_GROUP;
                   }
               }
           }
       } catch (JSONException e) {
           log.error("Exception occurred while trying to get group identifier from login response");
       } catch (org.wso2.carbon.user.api.UserStoreException e) {
           log.error("Error while checking user existence for " + username);
       } catch (IOException e) {
           log.error("IO exception occurred while trying to get group identifier from login response");
       } catch (Exception e) {
           log.error("Exception occurred while trying to get group identifier from login response");
       }
       // Return the group id
       return groupId;
   }
3. Build the Java Maven module and copy the jar into the {AM_Home}/repository/components/lib folder.
4. Then open the API Store server’s api-manager.xml located at {AM_Home}/repository/conf, uncomment the <GroupingExtractor> config inside the <APIStore> config and add the name of the custom Java class you wrote in it.
For eg:
5. Then restart the APIM server.
6. Then try accessing the API Store as different users with the same ‘Group ID’ value. For example, log in to the API Store as a developer having the ‘manager’ role and create a subscription. Then log in as another user who also has the ‘manager’ role and check his ‘My Applications’ and ‘My Subscriptions’ views in the API Store. The second user will be able to see the application and subscription created by the first user in his API Store view, as below.

Chathurika Erandi De SilvaXSLT Mediator: WSO2 ESB: Loading XSLT as a dynamic key


XSLT mediator can be used for transforming incoming and outgoing messages from ESB.

As an example, say we have a request in some form, but it is not quite what the backend service is expecting. In this scenario, we have to transform the request into what the backend expects.

We can easily transform XML with XSLT and WSO2 ESB provides the capability of using a well written XSLT and transform a given XML accordingly.

The XSLT mediator provides the capability of providing the relevant XSLT file as a static key or a dynamic key.

When using a static key, we pick up the XSLT from the registry. A dynamic key is much more flexible, because we can change the relevant XSLT at runtime.

How do we get it done?

In the following sample, I have defined a dynamic registry location and then used a property to point to the actual XSLT file. After that, the dynamic key value of the XSLT mediator is read by utilizing the property.

Following is the entry made in the source view of ESB to define the registry space

<registry provider="org.wso2.carbon.mediation.registry.ESBRegistry">
        <parameter name="root">file:repository/samples/resources/</parameter>
        <parameter name="cachableDuration">15000</parameter>
</registry>

Then the following sequence is designed.
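A minimal sketch of what such a sequence could look like is below (the sequence name, property name and XSLT file name are illustrative):

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="TransformSequence">
    <!-- Point a property at the XSLT file stored under the dynamic registry root -->
    <property name="xsltFile" value="transform/sample.xslt" scope="default" type="STRING"/>
    <!-- The dynamic key reads the property, so the stylesheet can be switched at runtime -->
    <xslt key="{get-property('xsltFile')}"/>
    <send/>
</sequence>
```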

Then we can use this sequence in a proxy service and get the transformation done. Since we are using a dynamic key, we can change the XSLT dynamically as appropriate.

Isuru PereraSpecifying a custom Event Settings file for Java Flight Recorder

When you are using Java Flight Recorder (JFR), the JFR will use an event settings file to check which event types to record.

By default in JFR, there are two settings, "default" and "profile". The default setting is recommended for Continuous Recordings as it has very low overhead (typically less than 1% overhead). The profile setting has more events and is useful when profiling the application.

As mentioned in my previous blog post regarding Java Flight Recorder Continuous Recordings, we use following arguments to do a Continuous Recording.

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=./tmp,dumponexit=true,dumponexitpath=./

Note: According to the Oracle documentation on "java" command, we should be able to specify "settings" parameter to the -XX:FlightRecorderOptions. However, the settings parameter has no effect when used with the -XX:FlightRecorderOptions and the default settings will be used. This is a known bug in JFR.

In JFR, the "settings" parameter specifies the path and name of the event settings file, which is of type JFC and it has the ".jfc" extension. By default, the "default.jfc" file is used and it's located in JAVA_HOME/jre/lib/jfr directory.

Most of the time, the "default" and "profile" settings are enough. However, you might have seen that, when analyzing a JFR dump, some tabs in Java Mission Control tell you that a particular event type is not enabled in that recording.

For example, the "Object Count" event is not enabled by default in "default" or "profile" settings. Therefore we cannot see "Object Statistics" in the Memory Group Tab.

We need to enable such events from a settings file.

Creating a custom Event Settings file

The recommended way to create a custom settings event file is to use Java Mission Control. Once you start the Java Mission Control (JMC), open "Flight Recorder Template Manager" from the Window menu.

Screenshot 01: Flight Recorder Template Manager

Let's import the existing profile.jfc to "Flight Recorder Template Manager". Click on "Import Files..." and select JAVA_HOME/jre/lib/jfr/profile.jfc. (Eg: /usr/lib/jvm/jdk1.8.0_74/jre/lib/jfr/profile.jfc)

Now let's duplicate that by selecting the "Profiling" template and clicking on "Duplicate".

Screenshot 02: The Duplicate Profiling Template

The new template is now named as "Profiling (1)". Let's edit it by clicking on "Edit".

Now let's change the "Name", "Description" and other settings that we need to record.

Screenshot 03: Template Options

I selected "Heap Statistics" and "Class Loading". I recommend opening the default *.jfc files and going through them to understand the events available in JFR. For example, from the file I can see that we need to enable "Heap Statistics" to enable the "Object Count" event.

Note: When we select the options shown in Screenshot 03, it will only update the "Control Elements". You can see these control elements when you open a JFC file in JAVA_HOME/jre/lib/jfr/. These control elements change the real state of the settings. When you click on "Advanced", it will display the "Template Event Details". It will also show the warning: "If you click OK, this template will always be opened in advanced mode and the simple controls will be lost." Click on "OK" only if you want to get rid of the control elements in the settings file.

Screenshot 04: Template Event Details

Click on "Cancel" and click on "OK" to save the "Template Options".

Let's export this file to a directory using "Export File..." button.

Specifying the Event Settings file for JFR

After creating the settings file, we can directly specify the settings file name in the "settings" parameter. Since we cannot specify the "settings" parameter with -XX:FlightRecorderOptions, we will start a new recording with the "jcmd" command.

For example, add the following parameters to the Java program.

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder

Then we can start a recording as follows.

$ jcmd `cat` JFR.start settings=/home/isuru/performance/jfr-settings/Heap.jfc  
Started recording 1. No limit (duration/maxsize/maxage) in use.

Use JFR.dump recording=1 filename=FILEPATH to copy recording data to file.


  1. You can also use the settings parameter with the -XX:StartFlightRecording option.
  2. I used a WSO2 server to test JFR recording and the Process ID is available in CARBON_HOME/ file.
  3. If you save the JFC file in JAVA_HOME/jre/lib/jfr directory, you can just use the filename in settings parameter. For eg: "settings=Heap"

Now we can get a JFR dump using the jcmd command.

$ jcmd `cat` JFR.dump recording=1 filename=heap.jfr
Dumped recording 1, 58.4 MB written to:


When you open the JFR file, you will be able to see "Object Statistics" in the Memory group tab.

Dakshitha RatnayakeWSO2 Governance Registry 5.x.x FAQs

The WSO2 Governance Registry (G-Reg) has gone through some major transformations, starting from G-Reg 5.0.0 (the current version as of this writing is 5.1.0). In addition to the new and enhanced registry and repository features, G-Reg now comes with multiple views for different roles, i.e. publishers, consumers/subscribers and administrators. This is a significant change from the previous versions, which included just one view for all users. Before understanding the nuts and bolts of G-Reg, let's first understand what a registry is and its purpose. What does it do? Why use one?

If your business is SOA-enabled, you need to keep track of your services and who consumes them. Furthermore, as businesses undergo change, including mergers and acquisitions, the number of platforms, consumers, services, and exposed APIs can increase rapidly. SOA Governance is needed to provide full visibility into existing assets; without it, businesses lack the tools to govern and manage assets consistently. It's all about ensuring and validating that assets and artifacts within the architecture are acting as expected and maintaining a certain level of quality. This is where the registry comes into the picture to facilitate SOA governance. The registry can act as a central database that includes artifacts for all services planned for development, in use and retired. Essentially, it's a catalog of services which are searchable by service consumers and providers. The WSO2 Governance Registry is more than just a SOA registry, because in addition to providing end-to-end SOA governance, it can also store and manage any kind of enterprise asset including but not limited to: services, APIs, policies, projects, applications and people.

Now that we understand what a registry is, here is a compilation of some FAQs and answers related to WSO2 G-Reg to understand what it offers and how it behaves. 

What is the WSO2 Governance Registry?

The WSO2 Governance Registry (G-Reg) is a SOA-integrated registry-repository for storing and managing data or metadata related to service artifacts and other artifacts. It provides a rich set of features including SOA governance, lifecycle management, and a strong framework for governing anything. For more information on the features and functionality of WSO2 Governance Registry, go to WSO2 Governance Registry.

WSO2 Governance Registry’s main functionality falls under the following two categories:

  • Content repository
  • Governance framework

WSO2 Governance Registry provides three main web-based user interfaces to facilitate these features and functionality, as follows.

G-Reg Publisher - an end-user, collaborative web interface for governance artifact providers to publish artifacts, manage them, show their dependencies, and gather feedback on their quality and usage.

G-Reg Publisher

G-Reg Store - an end-user, collaborative Web interface for consumers to self-register, discover governance artifact functionality, subscribe to artifacts, evaluate them and interact with artifact publishers.

G-Reg Store

G-Reg Management Console - a Web interface for administrators to perform admin tasks. 

Management Console

How does a service consumer use the solution to find a service and implement a service client?

The service consumers can use the G-Reg Store to self-register, discover and search for SOAP/REST services. G-Reg offers configuration options such as tags, categories, comments, properties, ratings and descriptions for a resource. It is important to plan the use of these configurations to make services discoverable and to enable correct SOA governance. Such resources for service discovery tremendously help service reuse; in fact, this is one of the major functions of a registry-repository product. G-Reg provides enhanced search capabilities to facilitate search based on tags and other advanced criteria.

How does a service provider register a new service?

G-Reg allows service providers to register services through the G-Reg Publisher. Users can choose either to enter service details manually or to import service information using a WSDL/WADL URL. The G-Reg Publisher enables artifact providers/creators to publish artifacts, manage them, show their dependencies, and gather feedback on their quality and usage.

How does a service provider use the solution when making a change to a service specification or endpoint?

A service provider can use the G-Reg publisher to perform changes such as editing or versioning an existing service.  G-Reg provides tools for asset comparison, dependency management and visualizing service descriptions. It also supports WS-Eventing-based subscriptions and notifications that can be used to govern changes made to individual resources as well as to the lifecycle to which it belongs.

How does the solution facilitate service governance?

Service reuse is the heart of SOA. Before implementing a new service, a service provider can search the registry for existing implementations. This helps the provider use an existing service either as it is or as a building block for a new service. Furthermore, registry-repositories help discover associations among services, which gives a better idea of the impact of changing a particular service. Services in a registry also undergo the lifecycle states of create, test, deploy and deprecate.

G-Reg can perform the following functions:

  • Enforce policies during transitions of the states of create, test, deploy and deprecate.
  • Define "who can access what?" of services. Access to certain services may differ depending on the user, user group or state of the service lifecycle.
  • Send notifications to relevant users once a change to a service artifact has been made.

As more and more services are introduced and reused, it is necessary to keep track of dependencies of each service in an organization. G-Reg makes life easier by keeping inter-service dependency information as relationships among service information artifacts. For example, such relationships can be Contains, Implements, Uses, Depends, etc.

Service artifacts evolve over time due to reasons such as fulfilling new requirements, yielding different versions of the same service. G-Reg provides versioning capabilities that can enable automatic version control of stored artifacts. Additionally, G-Reg keeps older versions of artifacts to allow users to migrate smoothly from one version to another.

In summary, G-Reg provides the following capabilities to facilitate SOA Governance:

  • Record information on services
  • Add service/API information manually or import WSDLs/WADLs
  • Discover services using scheduled tasks and discovery agents
  • Search for an existing service for reuse
  • Search using tags/categories. Supported via SOLR.
  • Discover associations and dependencies of a service
  • Service lifecycle management
  • Lifecycle-based asset management
  • In-built and custom lifecycle executors
  • User access control
  • Automatic version control
  • Notification support (email, UI etc.)
  • An SDK for registry-repository extensibility

How is the solution used at design time vs. runtime?

G-Reg can be used during design time to record service information and govern the service lifecycle.
If needed, using lifecycle executors, services can be deployed/undeployed on the relevant servers based on lifecycle state transitions. For example, a Jenkins job can be triggered during a state transition by a custom lifecycle executor: the executor will invoke a remote API of Jenkins which will, for example, build and deploy service(s) in the production environment when promoted from testing to production.

Run-time policy enforcement can be done when associating a WS-Policy with a SOAP service. G-Reg can apply these policies using Handlers (Handlers provide the basis for extending the WSO2 Governance Registry functionality). This is an extension feature as G-Reg only creates an association out-of-the-box.

How does the solution manage multiple endpoints for a service (e.g. Dev, QA, Production)?

Endpoints can be added manually to a service via G-Reg Publisher.  When importing WSDLs, only one endpoint will be added; however, more endpoints (QA, Prod) can be added manually.

How does the solution manage multiple versions of a service?

G-Reg provides support to version existing services, view all versions of a service and restore to a previous version. It is possible to compare different versions of governance artifacts via the Publisher, provided that the comparison is between versions of the same artifact type. If required, resources can be automatically versioned when they are added or updated. But this feature is disabled by default.

How does the solution manage service deprecation and discontinuation?

Currently, G-Reg only changes the state of the service to Deprecated in the default service lifecycle and can be configured to notify subscribers of that service. However, if any actions need to take place as a result of the lifecycle state being changed to Deprecated, a lifecycle executor can be configured to implement such tasks.


Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Multiple login options by service provider

  • The business users need to access multiple service providers, where each service provider requires different login options. For example, log in to Google Apps with username/password, while logging in to Drupal with either username/password or Facebook. 
  • Enable multi-factor authentication for some service providers. For example, login to Salesforce requires username/password + FIDO.
  • Deploy WSO2 Identity Server over the enterprise user store. 
  • Represent each service provider in the Identity Server and, under each service provider, configure local and outbound authentication options appropriately. To configure multiple login options, define an authentication step with the required authenticators. 
  • When multi-factor authentication is required, define multiple authentication steps having authenticators, which support MFA in each step. For example username/password authenticator in the first step and the FIDO authenticator in the second step. 
  • If federated authentication is required, for example login with Facebook, represent all federated identity providers in the Identity Server as identity providers and engage them with the appropriate service providers under the appropriate authentication step. 
  • In each service provider, configure WSO2 Identity Server as a trusted identity provider. For example in Google Apps, add WSO2 Identity Server as a trusted identity provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Chathurika Erandi De SilvaDatabase lookup with WSO2 ESB - DB LookUp Mediator


The DB Lookup mediator in the ESB can be used to query a database. The connection to the database can be established via a connection pool or a datasource.

Use case: 

Some values stored in the database have to be obtained, and endpoints should be called according to the obtained values.

First of all, we need a method to obtain values from the database: simply run a select query against it. Next, the values obtained through the select query should be evaluated conditionally and the request routed to different endpoints.

The following proxy achieves this by using the DBLookup mediator and the Switch mediator of WSO2 ESB.

By using the DBLookup mediator we can execute select queries and store the resulting row as a result set.

The Switch mediator is very similar to a Java switch statement, where we can perform multiple conditional checks.

Sample Proxy Configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- The proxy name/transport attributes and the DB connection details
     were truncated in the original post -->
<proxy xmlns="http://ws.apache.org/ns/synapse">
   <target>
      <inSequence>
         <dbLookup>
            <connection><pool><!-- datasource/driver details truncated in the original --></pool></connection>
            <statement>
               <sql>select id from emp where name=?</sql>
               <parameter value="naleen" type="VARCHAR"/>
               <result name="emp_id" column="id"/>
            </statement>
         </dbLookup>
         <log level="custom">
            <property name="value" expression="get-property('emp_id')"/>
         </log>
         <switch source="get-property('emp_id')">
            <case regex="1">
               <send><endpoint><address uri="http://<ip>:9793/services/CompanyIndexer/getFirstCompanyInfo"/></endpoint></send>
               <log level="custom">
                  <property name="caseone" value="Inside Case 1"/>
               </log>
            </case>
            <case regex="2">
               <send><endpoint><address uri="http://<ip>:9793/services/CompanyIndexer/getSecondCompanyInfo"/></endpoint></send>
               <log level="custom">
                  <property name="casetwo" value="Inside Case 2"/>
               </log>
            </case>
            <case regex="3">
               <send><endpoint><address uri="http://<ip>:9793/services/CompanyIndexer/getThirdCompanyInfo"/></endpoint></send>
               <log level="custom">
                  <property name="casethree" value="Inside Case 3"/>
               </log>
            </case>
            <default>
               <makefault version="soap12">
                  <code xmlns:soap12Env="http://www.w3.org/2003/05/soap-envelope" value="soap12Env:Receiver"/>
                  <reason value="No Company"/>
               </makefault>
               <log level="custom">
                  <property name="default" value="No Company"/>
               </log>
            </default>
         </switch>
      </inSequence>
   </target>
</proxy>
Isuru PereraJava Flight Recorder Continuous Recordings

When we are trying to find performance issues, it is sometimes necessary to do continuous recordings with Java Flight Recorder.

Usually we debug issues in an environment similar to a production setup. That means we don't have a desktop environment and we cannot use Java Mission Control for flight recording.

That also means we need to record and get dumps using the command line on the servers. We can of course use remote connection methods, but it's easier to take recordings on the server itself.

With continuous recordings, we need to figure out how to get dumps. There are a few options.
  1. Get a dump when the Java application exits. For this, we need to use dumponexit and dumponexitpath options.
  2. Get a dump manually using the JFR.dump diagnostic command via "jcmd".
Note: The "jcmd" command is in $JAVA_HOME/bin. If you use the Oracle Java Installation script for Ubuntu, you can directly use "jcmd" without including  $JAVA_HOME/bin in $PATH.

Enabling Java Flight Recorder and starting a continuous recording

To demonstrate, I will use WSO2 AS 5.2.1. First of all, we need to enable Java Flight Recorder in WSO2 AS. Then we will configure it to start a default recording.

$ cd wso2as-5.2.1/bin
$ vi

In the VI editor, press SHIFT+G to go to the end of the file. Add the following lines between "-Dfile.encoding=UTF8 \" and "org.wso2.carbon.bootstrap.Bootstrap $*"

    -XX:+UnlockCommercialFeatures \
-XX:+FlightRecorder \
-XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=./tmp,dumponexit=true,dumponexitpath=./ \

As I mentioned in my previous blog post on Java Mission Control, we use the default recording option to start a "Continuous Recording". Please look at the java command reference to see the meaning of each Flight Recorder option.

Please note that I'm using disk=true to write a continuous recording to the disk. I'm also using ./tmp directory as the repository, which is the temporary disk storage for JFR recordings.

It's also important to note that the default value of "maxage" is set to 15 minutes.

To be honest, I couldn't exactly figure out how this maxage works. For example, if I set it to 1m, I see events for around 20 mins. If I use 10m, I see events for around 40 mins to 1 hour. Then I found an answer in the Java Mission Control forum. See the thread Help with maxage / limiting default recording disk usage.

What really happens is that the maxage threshold is checked only when a new recording chunk is created. We haven't specified "maxchunksize" above, so the default value of 12MB is used. It might take a considerable time to fill the data and trigger removal of chunks.

If you need infinite recordings, you can set maxage=0 to override the default value.
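Putting these retention options together, a full options line might look like the following. This is a sketch: the maxage and maxchunksize values here are illustrative assumptions, not recommendations from this post.

```shell
# Illustrative variant of the earlier options line: keep chunks for 30 minutes,
# with smaller chunks so old data is pruned sooner (example values only)
-XX:+UnlockCommercialFeatures \
-XX:+FlightRecorder \
-XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=./tmp,maxage=30m,maxchunksize=4m,dumponexit=true,dumponexitpath=./ \
```

Since the maxage check fires when a chunk fills, a smaller maxchunksize makes the configured maxage take effect sooner.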

Getting Periodic Java Flight Recorder Dumps

Let's use "jcmd" to get a Java Flight Recorder Dump. For this, I wrote a simple script (jfrdump)

#!/bin/bash
now=`date +%Y_%m_%d_%H_%M_%S`
if ps -p $1 > /dev/null; then
    echo "$now: The process $1 is running. Getting a JFR dump"
    # Dump
    jcmd $1 JFR.dump recording=0 filename="recording-$now.jfr"
else
    echo "$now: The process $1 is not running. Cannot take a JFR dump"
    exit 1
fi

You can see that I have used "JFR.dump" diagnostic command and the script expects the Java process ID as an argument.

I have used the recording id 0, because the default recording started at JVM startup has the recording id 0.

You can check JFR recordings via the JFR.check diagnostic command.

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ jcmd `cat` JFR.check
Recording: recording=0 name="HotSpot default" maxage=15m (running)

I have also used the date for the recording name, which will help us to have multiple files with the date and time of the dump. Note that the recordings will be saved in the CARBON_HOME directory, which is the working directory for the Java process.

Let's test jfrdump script!

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ jfrdump `cat`
2015_02_27_15_02_27: The process 21674 is running. Getting a JFR dump
Dumped recording 0, 2.3 MB written to:


Since we have a working script to get a dump, we can use it as a task for Cron.

Edit the crontab.

$ crontab -e

Add the following line.

*/15 * * * * (/home/isuru/programs/sh/jfrdump `cat /home/isuru/test/wso2as-5.2.1/`) >> /tmp/jfrdump.log 2>&1

Now you should get a JFR dump every 15 minutes. I used 15 minutes since the maxage is 15 minutes, but you can adjust these values depending on your requirements.

See also: Linux Crontab: 15 Awesome Cron Job Examples

Troubleshooting Tips

  • After you edit, always run the server once in the foreground (./ to see whether there are issues in the script syntax. If the server runs successfully, you can start it in the background.
  • If you want to get a dump at shutdown, do not kill the server forcefully. Always allow the server to gracefully shutdown. Use "./ stop"
  • You may not be able to connect to the server if you are running jcmd as a different user. Unless you own the process, the following error might occur with jcmd.
isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ sudo jcmd `cat` help
[sudo] password for isuru:
21674: well-known file is not secure
at Method)
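The "well-known file is not secure" error comes from the attach mechanism behind jcmd, which only works when jcmd runs as the owner of the target JVM process. A quick way to check is sketched below; it is a generic example (not from the original post) that uses the current shell's PID as a stand-in for the JVM's PID.

```shell
# jcmd can attach only when it runs as the owner of the target process.
# Here the current shell's PID stands in for the JVM's PID.
pid=$$
owner=$(ps -o user= -p "$pid" | tr -d ' ')
if [ "$owner" = "$(whoami)" ]; then
    echo "same user: jcmd can attach"
else
    echo "different user: run jcmd as $owner instead"
fi
```

For a real server, replace `$$` with the JVM's process ID and run jcmd as the reported owner (e.g. via `sudo -u <owner> jcmd ...`) instead of plain sudo.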

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Sign On between multiple heterogeneous identity federation protocols

  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • Some service providers are on-premise while others are in the cloud. For example Google Apps (SAML 2.0), Salesforce (SAML 2.0), Office 365 (WS-Federation) are cloud based while JIRA, Drupal, Redmine are on-premise service providers. 
  • A user who logs into any of the service providers should be automatically logged into the rest.
  • Deploy WSO2 Identity Server over the enterprise user store. 
  • Represent each service provider in the WSO2 Identity Server and configure the corresponding inbound authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • In each service provider, configure WSO2 Identity Server as a trusted identity provider. For example in Google Apps, add WSO2 Identity Server as a trusted identity provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Chathurika Erandi De SilvaOauth Mediator in WSO2 ESB with Oauth 2.0

In this blog, I will explain the steps to test the OAuth Mediator in WSO2 ESB with OAuth 2.0.

WSO2 Identity Server

First of all, we need to register a service provider with OAuth 2.0. For that we need to go to the Identity Server Management Console -> Service Providers -> Add

Provide a name for the Service Provider and click Register.

As shown in the above diagram, under Inbound Authentication Configuration -> OAuth/OpenID Connect Configuration, configure OAuth 2.0 for this service provider.

Note: the callback URL is insignificant in this particular scenario

After adding the above details, the Identity Server will give us the consumer key and consumer secret for this service provider.

Now we need to obtain the base64-encoded value of the above consumer key and secret. For that we need to feed the string in consumerkey:consumersecret format to a base64 encoder.
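For example, on Linux the encoding can be done with the base64 command. The key and secret below are placeholders, not real credentials:

```shell
# Base64-encode "<consumerKey>:<consumerSecret>" for the HTTP Basic header.
# Substitute the actual key and secret issued by the Identity Server.
echo -n "consumerkey:consumersecret" | base64
# → Y29uc3VtZXJrZXk6Y29uc3VtZXJzZWNyZXQ=
```

Note the -n flag: without it, echo appends a newline that would corrupt the encoded value.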

The next step is to invoke the /oauth2/token endpoint of the WSO2 IS using the password grant type. Here we send the above base64-encoded value in the Basic authorization header.

curl -k -d "grant_type=password&username=user1&password=user1" -H "Authorization: Basic ZXNJVUJ1Uko1bnFUcTZwTUE0VUVjY3h0elJRYTpGTXhPMlZIT2ZhTmVtcXBmcDFIZmdJcXpaZzRh" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:9443/oauth2/token

As the response, we will be given the access token, which we will use to invoke the proxy that we create below.

Now our initial steps are completed, and we have to create a proxy with the OAuth mediator in WSO2 ESB.

Simply put, the OAuth Mediator invokes a validation service in WSO2 IS and verifies whether the authorization parameters we have sent are correct. If the validation is successful, the latter part of the proxy will be invoked.

Following is a simple proxy that I have created with the OAuth Mediator to test this scenario. It contains a send mediator with an endpoint that will be invoked only if the validation is successful.

Above, the remoteServerURL is the URL of the Identity Server, and a valid username and password should be provided for the validation to happen.

Finally, we will invoke our proxy (I have used SoapUI). When invoking, we have to send the previously obtained OAuth2 token as a header, as follows:

Header: Authorization
Value: Bearer <token>
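Assembled into a single HTTP header line, this looks like the following sketch (the token value is a placeholder):

```shell
# Placeholder token - substitute the access token returned by /oauth2/token
token="<token>"
echo "Authorization: Bearer $token"
# → Authorization: Bearer <token>
```

In SoapUI this goes in as a request header; with curl the same header would be passed via -H.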

If the logs are analysed when the invocation happens, we can see that the ESB calls the IS and learns whether the token we have sent is valid, as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!-- the enclosing SOAP envelope and response element were truncated in the original -->
... xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="ax2331:OAuth2TokenValidationResponseDTO">
   <ax2331:authorizationContextToken xsi:nil="true"></ax2331:authorizationContextToken>
   <ax2331:errorMsg xsi:nil="true"></ax2331:errorMsg>

Prabath SiriwardenaEnabling Multi-factor Authentication for WSO2 Identity Server Management Console

The WSO2 Identity Server Management Console ships with username/password based authentication. The following explains how to configure MFA.

1. Start WSO2 IS and login as an admin user with username/password, and go to Main --> Identity --> Service Providers --> Add --> fill details appropriately and Register

2. Expand the section Inbound Authenticators --> SAML2 Web SSO Configuration --> Configure. Then complete the SAML configuration as shown in the following image. Set the issuer to carbonServer, the Assertion Consumer URL to https://localhost:9443/acs and check Enable Response Signing. Keep the rest as defaults.

3. Under Local and Outbound Authentication Configuration, pick Advanced Configuration and define MFA.

4. Shut down the server and edit the file IS_HOME/repository/conf/security/authenticators.xml: enable SAML2SSOAuthenticator by setting the value of the parameter disabled to false and the value of the Priority element to 1.

5. Start the server and visit https://localhost:9443. Now you will notice that the login page has changed with MFA.

Prabath SiriwardenaAdding OAuth 2.0 Token Introspection Support to WSO2 Identity Server 5.1.0

WSO2 Identity Server 5.2.0 will support the OAuth 2.0 token introspection profile. If you are using Identity Server 5.1.0, this blog post explains how to build and deploy the introspection API on IS 5.1.0.

1. Check out and build the code from  and deploy it as a war file in IS 5.1.0 (IS_HOME/repository/deployment/server/webapps/).

2. Restart the Identity Server and now you should be able to use the introspection API.

3. Find below the usage of the introspection API.
   Empty token:

curl -k -H 'Content-Type: application/x-www-form-urlencoded' -X POST --data 'token=' https://localhost:9443/introspect

Response: {"active":false}
   Invalid token: 

curl -k -H 'Content-Type: application/x-www-form-urlencoded' -X POST --data 'token=Bjhk98792k9hkjhk' https://localhost:9443/introspect

Response: {"active":false,"token_type":"bearer"}
   Get a valid token(replace the value of client_id:client_secret appropriately): 

curl -v -X POST --basic -u client_id:client_secret -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=client_credentials" https://localhost:9443/oauth2/token

Validate the token:

curl -k -H 'Content-Type: application/x-www-form-urlencoded' -X POST --data 'token=99f0a7092c71a6e772cbcf77addd39ea' https://localhost:9443/introspect

{ "username":"admin@carbon.super",
   Get a valid token with a scope(replace the value of client_id:client_secret appropriately): 

curl -v -X POST --basic -u LUG28MI5yjL5dATxQWdYGhDLSywa:b855n2UIxixrl_MN_juUuG7cnTUa -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "grant_type=client_credentials&scope=test1 test2" https://localhost:9443/oauth2/token

Validate the token:

curl -k -H 'Content-Type: application/x-www-form-urlencoded' -X POST --data 'token=c78ac96fe9b59061b53d0223d46ecc24' https://localhost:9443/introspect

{ "username":"admin@carbon.super",
"scope":"test1 test2 ",
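When scripting against the introspection endpoint, the "active" flag is the field to check. As a small sketch (the JSON below is a hand-written sample shaped like the responses above, not actual server output):

```shell
# Extract the "active" flag from an introspection response (sample JSON)
response='{"active":true,"token_type":"bearer","scope":"test1 test2"}'
echo "$response" | grep -o '"active":[a-z]*'
# → "active":true
```

In a real script, a proper JSON parser (e.g. jq) would be more robust than grep.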

Jayanga DissanayakeHow to write OSGi tests for C5 compoments

WSO2 C5 Carbon Kernel will be the heart of all the next generation Carbon products. With Kernel version 5.0.0 we introduced PAX OSGi testing.

Now we are trying to ease the life of C5 component developers by providing a utility which takes care of most of the generic configuration needed in OSGi testing. This will enable a C5 component developer to specify just a small number of dependencies and start writing PAX tests for C5 components.

You will have to depend on the following library, in addition to the other PAX dependencies.


You can find a working sample on the following Git repo

The above will load the dependencies needed by default to test Carbon Kernel functionality. But as a component developer, you will have to add your component's jars to the testing environment. This is done via the @Configuration annotation in your test class.

Let's assume you work on the bundle org.wso2.carbon.jndi:org.wso2.carbon.jndi; below is how you should specify your dependencies.

@Configuration
public Option[] createConfiguration() {
    List<Option> customOptions = new ArrayList<>();
    // Add your own bundle(s) to customOptions here

    CarbonOSGiTestEnvConfigs configs = new CarbonOSGiTestEnvConfigs();
    return CarbonOSGiTestUtils.getAllPaxOptions(configs, customOptions);
}

Once these are done, your test should ideally work :)

Danushka FernandoOauth custom basic authenticator with WSO2 IS 5.1.0

WSO2 Identity Server supports the OAuth2 authorization code grant type with basic authentication OOTB. But basic authentication is done only against the WSO2 user store, so there could be use cases where basic authentication has to be done against some other system. In that case, follow the steps below to achieve your requirement.
First, you need to create a class which extends AbstractApplicationAuthenticator and implements LocalApplicationAuthenticator. This class is going to act as your application authenticator, so it needs to implement the application authenticator interface, and to achieve this it needs to be a local authenticator as well. [2]
 public class CustomBasicAuthenticator extends AbstractApplicationAuthenticator implements LocalApplicationAuthenticator {  

Then you need to override the initiateAuthenticationRequest method so you can redirect to the page where the user enters the username and password. In my sample I redirected to the page used by our default basic authenticator [1]. The code goes as follows.

@Override
protected void initiateAuthenticationRequest(HttpServletRequest request,
                                             HttpServletResponse response,
                                             AuthenticationContext context)
        throws AuthenticationFailedException {
    //TODO: Implement custom redirecting to a custom login page if needed.
    String loginPage = ConfigurationFacade.getInstance().getAuthenticationEndpointURL();
    // The right-hand side of this assignment was truncated in the original post
    String queryParams =
    try {
        String retryParam = "";
        if (context.isRetrying()) {
            retryParam = "&authFailure=true&";
        }
        response.sendRedirect(response.encodeRedirectURL(loginPage + ("?" + queryParams)) +
                "&authenticators=BasicAuthenticator:" + "LOCAL" + retryParam);
    } catch (IOException e) {
        throw new AuthenticationFailedException(e.getMessage(), e);
    }
}

Then you need to override the processAuthenticationResponse method so you can do your own authentication. In my sample I didn't do any authentication; you have to implement it to call your REST API. The requirement also mentioned provisioning the user to IS, and the code segment inside the if condition does that. In my sample I provisioned the user to the super tenant's primary user store; you can decide where to add the user.

@Override
protected void processAuthenticationResponse(HttpServletRequest httpServletRequest,
                                             HttpServletResponse httpServletResponse,
                                             AuthenticationContext authenticationContext)
        throws AuthenticationFailedException {
    String username = httpServletRequest.getParameter(CustomBasicAuthenticatorConstants.USER_NAME);
    String password = httpServletRequest.getParameter(CustomBasicAuthenticatorConstants.PASSWORD);
    boolean isAuthenticated;
    //TODO: Call the rest api to validate username and password
    isAuthenticated = true;
    //TODO: Here this is provisioned to the primary user store under super tenant. This can be changed to provision to the correct place.
    if (isAuthenticated) {
        UserStoreManager manager;
        try {
            manager = CustomBasicAuthenticatorServiceComponent.getRealmService().getTenantUserRealm(-1234)
                    .getUserStoreManager();
        } catch (UserStoreException e) {
            String msg = "Error while retrieving the user realm";
            LOGGER.error(msg, e);
            throw new AuthenticationFailedException(msg, e);
        }
        try {
            if (!manager.isExistingUser(username)) {
                manager.addUser(username, password, new String[0], new HashMap<String, String>(), null, false);
            }
        } catch (UserStoreException e) {
            String msg = "Error while provisioning the user";
            LOGGER.error(msg, e);
            throw new AuthenticationFailedException(msg, e);
        }
    }
}

You also need to implement the canHandle method of the authenticator. In the sample I used the same implementation as the basic authenticator.

@Override
public boolean canHandle(HttpServletRequest httpServletRequest) {
    String userName = httpServletRequest.getParameter(BasicAuthenticatorConstants.USER_NAME);
    String password = httpServletRequest.getParameter(BasicAuthenticatorConstants.PASSWORD);
    if (userName != null && password != null) {
        return true;
    }
    return false;
}

Then you need to make this an OSGi bundle and register the authenticator in the OSGi context. That's done by the service component class.

/*
 * Copyright (c) WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
 *
 * WSO2 Inc. licenses this file to you under the Apache License,
 * Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.wso2.identity.application.authenticator.custom.basicauth.internal;

import org.wso2.identity.application.authenticator.custom.basicauth.CustomBasicAuthenticator;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator;
import org.wso2.carbon.user.core.service.RealmService;

import java.util.Hashtable;

/**
 * @scr.component name="application.authenticator.dbevaldev.component" immediate="true"
 * @scr.reference name="realm.service"
 * interface="org.wso2.carbon.user.core.service.RealmService" cardinality="1..1"
 * policy="dynamic" bind="setRealmService" unbind="unsetRealmService"
 */
public class CustomBasicAuthenticatorServiceComponent {

    private static final Log LOGGER = LogFactory.getLog(CustomBasicAuthenticatorServiceComponent.class);
    private static RealmService realmService;

    protected void activate(ComponentContext context) {
        try {
            CustomBasicAuthenticator dbevaldevauthenticator = new CustomBasicAuthenticator();
            Hashtable<String, String> props = new Hashtable<String, String>();
            context.getBundleContext().registerService(ApplicationAuthenticator.class.getName(),
                    dbevaldevauthenticator, props);
            if (LOGGER.isDebugEnabled()) {
                LOGGER.debug("Custom authenticator bundle is activated");
            }
        } catch (Exception e) {
            LOGGER.fatal(" Error while activating custom authenticator ", e);
        }
    }

    protected void deactivate(ComponentContext context) {
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("Custom authenticator bundle is deactivated");
        }
    }

    public static RealmService getRealmService() {
        return realmService;
    }

    protected void setRealmService(RealmService realmService) {
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("Setting the Realm Service");
        }
        CustomBasicAuthenticatorServiceComponent.realmService = realmService;
    }

    protected void unsetRealmService(RealmService realmService) {
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug("UnSetting the Realm Service");
        }
        CustomBasicAuthenticatorServiceComponent.realmService = null;
    }
}

Then you can build this with Maven and copy the JAR file to the following folder:


Then restart the Identity Server and browse to the Carbon console. From the left menu, select Identity > Service Providers > List.

There, select the service provider for the application and click the edit link. Expand the Local & Outbound Authentication Configuration section, select Local Authentication, choose the newly added authenticator, and then click Update.

Now you can trigger the authorization code grant type by invoking the oauth2/authorize endpoint of the Identity Server. You can download the sample code from [3].

Chamara SilvaHow to enable wire log for non synapse products (WSO2)

As we already know, wire logs can be enabled in WSO2 ESB and API Manager to trace Synapse messages in various situations. But if you want to see the messages going through non-Synapse-based products such as Governance Registry or Application Server, the following wire log properties can be added to the log4j.properties file:

log4j.logger.httpclient.wire.content=DEBUG
log4j.logger.httpclient.wire.header=DEBUG

Afkham AzeezAsynchronous Messaging for Microservices

When it comes to interactions between microservices, asynchronous messaging is a widely used pattern. AMQP, STOMP & MQTT are some of the widely supported messaging protocols. Kasun describes this in detail in his comprehensive blog post titled Microservices in Practice. Rohit Dhall also talks about asynchronous messaging in Microservices Performance Patterns.

In this post, I will go through an example which demonstrates AMQP based messaging in practice using WSO2 Microservices Framework for Java (MSF4J).



Requests come into the Purchasing microservice via HTTP. If the items in stock fall below the reorder level, a reorder request is placed via JMS to the Reorder microservice. There are two queues, the Reorder Queue and the Reorder Response Queue, which are created in WSO2 Message Broker (MB). Once the reorder request is received via the Reorder Queue, the Reorder microservice will process the request and send a reorder response message to the Reorder Response Queue.
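The decision logic described above can be sketched in plain Java. This is only an illustration, not the actual MSF4J sample code: the class, method names, and reorder level below are hypothetical, and the real sample sends a JMS message to the Reorder Queue on WSO2 MB rather than returning a flag.

```java
public class PurchasingSketch {

    // Hypothetical reorder threshold; the real sample keeps this per item.
    static final int REORDER_LEVEL = 100;

    // Pure decision logic: given current stock and requested quantity,
    // returns the remaining stock, or -1 for "Insufficient Stock".
    static int purchase(int stock, int quantity) {
        if (quantity > stock) {
            return -1; // maps to the "Insufficient Stock" HTTP response
        }
        return stock - quantity;
    }

    // A reorder message is sent (via JMS in the real sample) whenever the
    // remaining stock falls below the reorder level.
    static boolean shouldReorder(int remainingStock) {
        return remainingStock >= 0 && remainingStock < REORDER_LEVEL;
    }

    public static void main(String[] args) {
        int stock = 250;
        stock = purchase(stock, 100);             // 150 left, no reorder yet
        System.out.println(stock + " " + shouldReorder(stock));
        stock = purchase(stock, 100);             // 50 left -> reorder placed
        System.out.println(stock + " " + shouldReorder(stock));
        System.out.println(purchase(stock, 100)); // -1: insufficient stock
    }
}
```

In the real sample, the `shouldReorder` branch is where the JMS send to the Reorder Queue happens.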

The code is available on GitHub at

Running the Sample

1. Download and run WSO2 Message Broker (MB)
2. Check out the code from
3. Build and run the purchasing microservice

    mvn clean install
    java -jar target/purchasing-0.1-SNAPSHOT.jar

4. Build and run the reorder microservice

    mvn clean install
    java -jar target/reorder-0.1-SNAPSHOT.jar

5. Invoke the purchasing microservice using cURL
Keep invoking the purchasing microservice a few times using the following command.

curl -v -X POST  -H "Content-Type:application/json"  -d '{"itemCode":"i001", "name":"Bata Slippers", "quantity":100}' http://localhost:8080/purchasing

6. Output 
You will see that the cURL command returns a UUID corresponding to your purchase. An example UUID would be 39161be6-da33-4143-89d7-a540fefdd0b8. When you keep invoking the service, at a certain point there will be insufficient stock and you will see an HTTP response message saying "Insufficient Stock". However, when you retry after a delay, you will see that the call is successful and you get a UUID as a response. This indicates that the reordering has happened successfully.

Example Outputs

Purchasing Service

Reorder Service 

 Try out the sample from GitHub. Happy coding!



Nirmal FernandoHow to tune hyperparameters?

Hyperparameter tuning is one of the key concepts in machine learning. Grid search, random search, and gradient-based optimization are a few techniques you could use to perform hyperparameter tuning automatically [1].
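As a quick illustration of the grid search idea mentioned above, the sketch below exhaustively evaluates every point on a small grid and keeps the best. The evaluate() function is a made-up stand-in; in a real run it would train a model with the given hyperparameters and return its AUC.

```java
public class GridSearchSketch {

    // Hypothetical stand-in for "train with these hyperparameters and
    // measure AUC"; it peaks at learningRate = 0.01 to mimic the data
    // discussed in this article.
    static double evaluate(double learningRate, int iterations) {
        double penalty = Math.pow(Math.log10(learningRate) + 2, 2);
        return 0.85 - 0.01 * penalty - 1.0 / iterations;
    }

    // Tries every (learningRate, iterations) pair on the grid and
    // returns the pair with the highest score.
    static double[] gridSearch(double[] rates, int[] iterations) {
        double bestScore = Double.NEGATIVE_INFINITY;
        double[] best = null;
        for (double lr : rates) {
            for (int it : iterations) {
                double score = evaluate(lr, it);
                if (score > bestScore) {
                    bestScore = score;
                    best = new double[]{lr, it};
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] rates = {0.5, 0.1, 0.05, 0.01, 0.005, 0.001};
        int[] iterations = {1000, 5000, 10000};
        double[] best = gridSearch(rates, iterations);
        System.out.println("best learning rate = " + best[0]
                + ", best iterations = " + (int) best[1]);
    }
}
```

The manual procedure described in the rest of this article is essentially the same loop, performed one axis at a time by hand.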

In this article, I am going to explain how you could do the hyperparameter tuning manually by performing a few tests. I am going to use WSO2 Machine Learner 1.0 for this purpose (refer to [2] to understand what WSO2 ML 1.0 is capable of doing). The dataset I have used for this analysis is the well-known Pima Indians Diabetes dataset [3], and the algorithm picked was logistic regression with mini-batch gradient descent. For this algorithm, there are a few hyperparameters, namely:

  • Iterations - Number of times the optimizer runs before completing the optimization process
  • Learning rate - Step size of the optimization algorithm
  • Regularization type - Type of the regularization; WSO2 Machine Learner supports L2 and L1 regularization.
  • Regularization parameter - Controls the model complexity and hence helps to control model overfitting.
  • SGD data fraction - Fraction of the training dataset used in a single iteration of the optimization algorithm

From the above set of hyperparameters, what I wanted to find was the optimal learning rate and number of iterations, keeping the other hyperparameters at constant values. The goals were:

  • Finding the optimal learning rate and the number of iterations which improves AUC (Area under curve of ROC curve [4])
  • Finding the relationship between Learning rate and AUC
  • Finding the relationship between number of iterations and AUC
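As a point of reference for the goals above, AUC can be computed directly from scores and labels as the fraction of (positive, negative) pairs that are ranked correctly. Here is a minimal sketch (the class and method names are mine, not part of WSO2 ML):

```java
public class AucExample {

    // AUC of the ROC curve equals the probability that a randomly chosen
    // positive example is scored higher than a randomly chosen negative one;
    // ties count as half. labels: 1 = positive, 0 = negative.
    static double auc(double[] scores, int[] labels) {
        long positives = 0, negatives = 0;
        double concordant = 0;
        for (int i = 0; i < scores.length; i++) {
            for (int j = 0; j < scores.length; j++) {
                if (labels[i] == 1 && labels[j] == 0) {
                    if (scores[i] > scores[j]) {
                        concordant += 1.0;
                    } else if (scores[i] == scores[j]) {
                        concordant += 0.5;
                    }
                }
            }
        }
        for (int l : labels) {
            if (l == 1) positives++; else negatives++;
        }
        return concordant / (positives * negatives);
    }

    public static void main(String[] args) {
        double[] scores = {0.9, 0.8, 0.3, 0.1};
        int[] labels = {1, 0, 1, 0};
        // pairs: (0.9,0.8) ok, (0.9,0.1) ok, (0.3,0.8) wrong, (0.3,0.1) ok
        System.out.println(auc(scores, labels)); // prints 0.75
    }
}
```

An AUC of 0.5 corresponds to random guessing; 1.0 is a perfect ranking, which is why the experiments below try to push it as high as possible.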


Firstly, the Pima Indians Diabetes dataset was uploaded to WSO2 ML 1.0. Then, I wanted to find a fair number for the iterations so that I could find the optimal learning rate. For that, the learning rate was kept at a fixed value (0.1), the number of iterations was varied, and the AUC was recorded against each iteration count.

LR = 0.1


According to the plotted graph, it is quite evident that the AUC increases with the number of iterations. Hence, I picked 10000 as a fair number of iterations to find the optimal learning rate (of course, I could have picked any number > 5000, where the AUC started to climb over 0.5). Increasing the number of iterations excessively would lead to an overfitted model.
Since I have picked a 'fair' number of iterations, the next step is to find the optimal learning rate. For that, the number of iterations was kept at a fixed value (10000), the learning rate was varied, and the AUC was recorded against each learning rate.


Learning Rate / AUC graph

According to the above observations, we can see that the AUC has a global maximum at a 0.01 learning rate (to be precise, it is between 0.005 and 0.01). Hence, we could conclude that the AUC is maximized when the learning rate approaches 0.01, i.e. 0.01 is the optimal learning rate for this particular dataset and algorithm.

Now, we could change the learning rate to 0.01 and re-run the first test mentioned in the article.

LR = 0.01


The above graph depicts that the AUC increases ever so slightly when we increase the number of iterations. So, how to find the optimal number of iterations? Well, it depends on how much computing power you have and also what level of AUC you expect. The AUC will probably not improve drastically, even if you increase the number of iterations further.

How can I increase the AUC then? You can of course use another binary classification algorithm (e.g. support vector machines), or else you could do some feature engineering on the dataset to reduce the noise in the training data.

This article tries to explain the process of tuning hyperparameters for a selected dataset and algorithm. The same approach could be used with different datasets and algorithms too.

Chathurika Erandi De SilvaEDI to XML with Smooks mediator

This blog post addresses a scenario of converting EDI to XML. Furthermore, after the conversion, the Iterator mediator is used to split the XML into chunks that become individual requests to the back end.

Prerequisites: VFS transport enabled in WSO2 ESB

Below is the sample EDI file that will be used for this scenario

Next, a mapping file should be created. This mapping file defines the structure of the XML and how the fields in the EDI should be mapped to the fields in the XML. In this instance, it is saved as mymapping.xml.

The third step is to create the Smooks configuration file, which points to the mapping file. This configuration file contains the EDI-related namespaces that Smooks refers to for the EDI reader to work.
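The author's actual configuration files are not reproduced here. Purely as an illustration, a minimal Smooks configuration that points the EDI reader at the mapping file might look like the following (assuming Smooks 1.1 and its EDI cartridge; verify the namespace URIs and reader element against your Smooks version):

```xml
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:edi="http://www.milyn.org/xsd/smooks/edi-1.1.xsd">
    <!-- Configure the EDI reader with the mapping model created earlier -->
    <edi:reader mappingModel="/repository/samples/resources/smooks/mymapping.xml"/>
</smooks-resource-list>
```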

Then the XSLT that is used to transform the message to suit the back end should be designed. A sample is given below

After creating the above files, log in to WSO2 ESB and upload the Smooks configuration file and the XSLT transformation file to the registry. For convenience, the mapping file is also placed inside the WSO2 ESB/repository/samples/resources/smooks folder.

Lastly, the proxy service that refers to the uploaded Smooks configuration should be created as below. This proxy service contains the Iterator mediator, which splits the XML into chunks and prepares them to be sent to the back end as individual requests.

Chanaka FernandoBuilding Integration Solutions : A Rethink

History of Enterprise Integration

The history of enterprise integration goes back to the early computer era, when we had computers only in large enterprises. The early requirements came from the concept of Material Requirement Planning (MRP), which requires a system to plan and control production and material flows. With the growth of businesses and the interactions among different 3rd-party organizations, MRP evolved into Enterprise Resource Planning (ERP) systems, which are responsible for many more functionalities that bridge the different departments of the enterprise, like accounting, finance, engineering, product management, and many more. Proprietary ERP solutions were dealing with many complex use cases and failed really badly in some of them. With these lessons, people realized that there should be a better way to build the enterprise IT infrastructure beyond ERP systems.

Integration and SOA

Service Oriented Architecture (SOA) came into the picture at a time when the world was searching for a proper way to handle complex enterprise IT requirements. The Wikipedia definition of SOA is as follows:
“A service-oriented architecture (SOA) is an architectural pattern in computer software design in which application components provide services to other components via a communications protocol, typically over a network. The principles of service-orientation are independent of any vendor, product or technology”

SOA Architecture simple(1).png

Rather than having a proprietary system in your enterprise, SOA built a set of loosely coupled, independent services that interact with each other and provide business functionality to other systems and users. With the concept of loosely coupled services came the concept of integration, where we need to connect with other services to provide the business functionality. At the early stages, it was only peer-to-peer communication between services. This led to the complex "spaghetti" integration pattern.

Spaghetti Integration.png

If you have 10 services in your system, you may need 45 point-to-point connections (n(n-1)/2 for n services) for every service to communicate with all the others. Rather than connecting the services point to point, we can connect them to a central "bus" and do the communication over that.
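The 45 comes from the handshake count n(n-1)/2, which can be checked with a one-line method:

```java
public class PointToPointLinks {

    // Fully connecting n services point-to-point needs n(n-1)/2 links:
    // each of the n services links to the other n-1, and every link is
    // counted twice in n*(n-1).
    static int links(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(links(10)); // prints 45
    }
}
```

The quadratic growth is exactly why a central bus, with its n connections, scales so much better.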

The Integration Era

Once people realized the value of SOA and integration, enterprises started moving into that space more and more, beyond their ERP systems, and it became a common architectural pattern in most enterprises. Then came the Enterprise Service Bus (ESB) concept, where you connect all your disparate systems to the central bus and make interaction possible across different services.

Bus Integration.png

The same type of service has been provided by many different vendors, and standards around SOA have emerged. People started thinking about common standards more seriously, and the monopolies that existed in the world of software converged, little by little, into common standards. Innovative ideas came into the picture and became standards, and the integration space emerged as a challenging technology domain. Different wire-level protocols, message formats, and enterprise messaging patterns evolved with the heavy usage of SOA and integration in enterprises. Almost all the big software vendors have released their own products for application integration, and this has become a billion-dollar business.

Beyond Integration

The technology industry has been a moving target since its inception, and the pace of that movement has varied from time to time. At the moment, we are in a time where that pace has increased, and there are a lot of new concepts taking over the technology industry. Integration has been pushed to the backyard, and new technology concepts like Microservices Architecture (MSA), the Internet of Things (IoT), Big Data, and Analytics have been taking over the world of technology. But none of these concepts is going to fill the same bucket as integration. They are independent concepts which have surfaced with the increased usage of technology in people's day-to-day activities. But the important thing is that integration cannot live without thinking about these trends. The below diagram depicts the interaction between MSA and an integration bus in a real enterprise IT system. This was captured from the blog post written by Kasun Indrasiri at [1].

Figure 4: MSA and Integration Server in modern enterprise

Integration for the future

Integration has been a complex subject from the beginning, and it has been able to tackle most of the integration requirements that have popped up in enterprise IT infrastructures. But with the advancement of other areas, integration solutions need to pay more attention to the following emerging concepts in the future and become more and more "lean".

  1. Enterprise architects are looking for vendor-neutral solutions - Integration has been an area where you need not only domain experts but also vendor experts to succeed. But the world is moving more and more towards domain expertise and vendor neutrality, which means that enterprise architects are always looking for solutions that can easily be replaced with a different vendor's.
  2. Integration solutions need to be more user friendly - Architects want to see their integrations more clearly, in a more visually pleasing manner. They don't want to read through thousands of XML files to understand a simple integration flow.
  3. Internet of Things (IoT) will hit you very soon - Your solution needs to be able to accommodate IoT protocols and concepts as first-class features.
  4. No longer sitting inside the enterprise boundary - Enterprises are moving more and more towards cloud-based solutions, and your solution needs to run on the cloud while interacting with other cloud services.
  5. Ability to divide will matter - Users will want to replace parts of your system with other components which they have been using for a long time and which have worked for them. Your system should be able to decompose into different independent components and be able to work in tandem with other systems.
  6. There will be more than "systems" to integrate - Integration has dealt with different systems in the past, but the future will be much different with concepts like MSA, where you have business functions exposed as services, and there can be other things like data, IoT gateways, and smart cars that you need to integrate. Better to prepare as early as possible.
  7. Make room to inject "intelligence" into your solution - Enterprises would like to inject some intelligence, through concepts like analytics and predictions, into your integration solution, which is the core of your enterprise IT infrastructure.


Sriskandarajah SuhothayanWSO2 Complex Event Processor 4.1

WSO2 Complex Event Processor 4.1 was released on 23rd February 2016.

This release mainly focuses on improving and stabilizing the product and enhancing its capabilities.

One of the main features included in this release is instrumenting and monitoring support for WSO2 CEP as well as Siddhi. This enables users to identify the throughput and memory consumption of each and every component of WSO2 CEP and Siddhi. Through this, users can identify possible bottlenecks in their queries and optimize CEP for better performance.

The same is also exposed via JMX so that it can be monitored by third-party JMX consumers such as JConsole.

Another important feature CEP introduced in this release is visualizing Siddhi queries.

One of the notable improvements of this release is its improved high-availability support. Now CEP can support high availability with more than two nodes, providing zero downtime with no data loss. As it also stores its state as periodic snapshots to a database, even after a full cluster restart CEP is capable of restoring its state from the last available snapshot. For more information, refer to the documentation here.

Further, WSO2 CEP has introduced several improvements to its core runtime complex event processing engine, Siddhi. They are as follows:

  • Hazelcast Event Table - Allows events to be stored and manipulated in a Hazelcast-based in-memory data grid.
  • Minima and Maxima detection - Detects maxima and minima in an event stream; this allows detecting complex stock market patterns using combinations of maxima and minima.
  • Map extension - Introduced to support arbitrary key-value pairs in Siddhi, which was not supported for a long time. This extension allows users to create a map, and to add, remove, and check for keys and values within it.
  • InsertOrUpdate to Event Table - Facilitates doing an insert or an update as an atomic operation on Event Tables.
  • Outer and left joins in Siddhi - In addition to inner joins, Siddhi now supports outer and left joins.
  • Time length window - A window whose eviction policy is triggered by both time and length properties.
  • External batch window - Allows the batch window to be triggered by event time rather than by system time.

You can download WSO2 Complex Event Processor 4.1 from here, its documentation from here, and find the latest Siddhi documentation here.

Ushani BalasooriyaSimple WSO2 ESB API which Queries salesforce and build a json array using payloadFactory Mediator

1. Download WSO2 ESB and Salesforce connector.
2. Add the Salesforce connector and enable it.

For more information, please follow:

Then the below API can be used to query the User object using a profile ID and then build a JSON object using the payloadFactory mediator.

  • MySFConfig should have the required login information.
  • 00e90000001aVwiAAE is the profile id of a user.

<api xmlns="" name="leads1" context="/leads1">
   <resource methods="GET">
      <inSequence>
         <property name="LeadQuery" value="Select u.Username, u.ProfileId, u.Name, u.LastName, u.Email From User u where ProfileId='" scope="default" type="STRING"/>
         <property name="Apostrophe" value="'" scope="default" type="STRING"/>
         <property name="ProfileId" value="00e90000001aVwiAAE" scope="default" type="STRING"/>
         <property name="CompleteLeadQuery" expression="fn:concat($ctx:LeadQuery, $ctx:ProfileId, $ctx:Apostrophe)" scope="default" type="STRING"/>
         <salesforce.getUserInfo configKey="MySFConfig"/>
         <property xmlns:ns="" xmlns:sf="" name="Name" expression="//ns:queryResponse/ns:result/ns:records/sf:Name/text()" scope="default" type="STRING"/>
         <property xmlns:ns="" xmlns:sf="" name="Username" expression="//ns:queryResponse/ns:result/ns:records/sf:Username/text()" scope="default" type="STRING"/>
         <property xmlns:ns="" xmlns:sf="" name="LastName" expression="//ns:queryResponse/ns:result/ns:records/sf:LastName/text()" scope="default" type="STRING"/>
         <property xmlns:ns="" xmlns:sf="" name="Email" expression="//ns:queryResponse/ns:result/ns:records/sf:Email/text()" scope="default" type="STRING"/>
         <log level="full" separator=","/>
         <payloadFactory media-type="json">
            <format>{ "ProfileId": { "source": "SALESFORCE", "id": "$1" }, "Name": "$2", "Username": "$3", "LastName": "$4", "Email": "$5" }</format>
            <args>
               <arg evaluator="xml" expression="$ctx:ProfileId"/>
               <arg evaluator="xml" expression="$ctx:Name"/>
               <arg evaluator="xml" expression="$ctx:Username"/>
               <arg evaluator="xml" expression="$ctx:LastName"/>
               <arg evaluator="xml" expression="$ctx:Email"/>
            </args>
         </payloadFactory>
         <property name="messageType" value="application/json" scope="axis2" type="STRING"/>
      </inSequence>
   </resource>
</api>
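With the five properties extracted above, the payloadFactory emits JSON of the following shape (the field values here are illustrative, apart from the profile ID set in the configuration):

```json
{
  "ProfileId": { "source": "SALESFORCE", "id": "00e90000001aVwiAAE" },
  "Name": "Jane Doe",
  "Username": "jane@example.com",
  "LastName": "Doe",
  "Email": "jane.doe@example.com"
}
```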


Shazni NazeerBuilding boost C++ library

Boost is a well-written, portable C++ source library. It's a very popular and flexible library set to use in your C++ applications, with a huge set of libraries spanning many utilities and uses. A good thing about Boost is that the libraries available in it are either already in the C++ standard or likely to end up in it one day. Note, however, that not all libraries are accepted. Nevertheless, all the libraries are proven and well tested.

Download the latest boost library from

In this short guide I'll walk you through the process of easily building Boost, generating the header and library files to use in your C++ application.

Once you download Boost, extract it into a directory. Navigate into the extracted directory and issue the following commands. Make sure you have a C++ compiler available in your system (e.g. g++ in GNU/Linux). Look here (which is a generic post, however) to see how to install C/C++ compilers.
$ ./bootstrap.sh --prefix=path/to/installation/prefix

$ ./b2 install
Depending on your system tools, you may not have all the libraries built. For example, if you do not have the Python dev packages installed, the Boost Python library won't build. But this won't stop the other libraries from building.

You are done!!! You should be able to find two directories inside the directory you specified as the prefix, namely include and lib. These contain the header files and the .so (or .dll on Windows) shared library files to link into your C++ application.

Chamila WijayarathnaHow to Get a Proposal Ready for GSoC

The student application deadline for GSoC 2016 is just about 1 month away, on 25th March, and accepted organizations will be announced tomorrow (29th February). So I thought it would be good to share my experience of how I prepared for GSoC over the last 3 years.

GSoC is a wonderful opportunity for students around the world to gain experience and reputation before they join the industry, and also to earn some money while studying. Currently Google pays $5500 to each student who successfully completes their project, which is a very attractive amount, like a dream for most students.

To summarize what happens in GSoC: each year Google selects a set of organizations that develop open source software, and each organization suggests a set of projects to improve its software. Students can then create proposals for one or more such projects. Each organization and Google evaluate these proposals and select the successful ones so that those students can carry on developing the projects. Google pays students in 3 phases: at proposal acceptance, halfway through the project, and at the completion of the project. Each student who completes his project successfully will receive $5500.

According to my view, the hardest part of this process is choosing a proper project and creating a successful, comprehensive proposal. In this blog I am focusing more on what a student should do before the proposal submission period to get his proposal accepted. This is what I did in the last 3 years, but there may be other ways of achieving the same result and getting a successful proposal ready for GSoC.

It's always good to start as early as possible, even before Google announces the list of accepted organizations. But the question is, before Google announces it, how can a student know which organizations will be accepted this time? If we go to the Google Melange site [1], we can access the list of organizations accepted last year. For example, the organizations accepted in 2015 are available at [2]. 90% of the organizations that got accepted last year will get accepted this year as well. So if Google hasn't already announced the accepted organizations, you can go to last year's page and select an organization from there. You don't have to do this if Google has already announced this year's organizations.

Each year Google accepts about 120-150 organizations. So the next most important task is to select one of those organizations to participate with. According to my view, there are 3 things we need to take into consideration when choosing an organization.

  1. Are there any organizations in the list you have already contributed to?
  2. Are there organizations you haven't contributed to, but whose products/software you have used?
  3. Which organizations work with technologies you are familiar with?
Now I'll explain how to select the organization by answering these questions. 

When we think about organizations in this list, some of them develop a single open source software product (e.g. Joomla, Moodle, WordPress) while most have a set of software products (e.g. Apache Software Foundation, WSO2, the Eclipse Foundation). When you go through the list, if you have contributed to OSS (Open Source Software) before, you need to list down such organizations. I suggest you select one of these organizations to write your proposal for, since you already know how the software is used and at least the basics needed to develop (e.g. how to get the source code, how to build it, who the persons you need to contact are, etc.). This is how I selected the organization to participate with in my 3rd GSoC (Ruby).

But most students trying GSoC for the 1st time may not have any prior experience with OSS development. If so, you need to look for organizations with products you have used. For example, most students may have used Mozilla Firefox, the Linux operating system, LibreOffice, or the Eclipse IDE, and these are there in the list. If you are to contribute to an open source software project, first you need to be familiar with how to use it. So if you choose something which you have used or are using, you don't need to put extra effort into getting familiar with it. This is how I chose the organizations to participate with in my 1st and 2nd GSoC (JRuby and Apache Thrift).

If you don't have any organizations in your list yet, then what you'll have to do is check for organizations which use the technologies you are familiar with. On the organizations list page you can find the skills required to contribute to each organization.

List down the organizations which use the same technologies as your expertise (you don't need to be a geek in a technology to do a GSoC project; if you know hello world, if/else, for loops, arrays and similar basic programming stuff, that will be enough, and you can learn anything else required as you proceed).

Now you will have a list of possible organizations. What you have to do next is select a maximum of 3 organizations to proceed with. If you have fewer in your list, it is easy to select. If you already have 1-3 in the list, you don't have to cut it down and can proceed to the next step.

If you have more, now let's select. If you listed organizations by products you used, now you need to check the technologies each organization asks for. If the list has organizations using the same technologies as your expertise, select those and cut down the others. By doing this you can reduce the list. If everything in your list uses technologies you are not familiar with, don't worry, you can learn them; programming is very easy.

Now if you still have more in your list, you have to keep reducing and prioritizing. The next thing you need to do is check how many projects from each organization were accepted last year. You can check it by going to the page of that organization on the Melange site.

The number of projects accepted from each organization is an indication of how many projects will get accepted this year (actually, the page in the above image shows the number of projects successfully finished, so it may be less than the number of accepted projects). We can assume something close to this number will get accepted this year as well. If the number of proposals accepted by the organization you apply to is high, there is a high possibility of your project getting accepted. So now prioritize your remaining list so that the organization with the highest number of accepted projects comes to 1st place. (Hint: organizations starting with the letter A or with numbers, which appear at the top of the organizations list, attract more students, so competition is high there. Always start traversing the list in a bottom-up manner.)

Now you have a prioritized list, and you can start thinking about a project. If Google has published the list of accepted organizations, you can go to the project ideas page from there.

If the accepted organization list is not announced, you can still find project ideas, since most organizations announce their project lists very early. You can find these project ideas easily: by going to last year's accepted projects page, you can find the project ideas page of each organization.
Following are some project ideas for the year 2015.

  • WSO2 -
  • Ruby on Rails -
Most times, if we change the URL from 2015 to 2016, it will take you to the 2016 ideas page. In this way you can find the project ideas for this year at a very early stage and start preparations early. If you start very early, other than the advantage of time, mentors will also notice your eagerness.

Then go through the project ideas of your selected organization and select an idea that you can understand best. I believe understanding the idea is enough, because you have time to learn other stuff. Don't think a lot about how you're going to design or implement the solution, because if you do, you won't move forward, and for sure there will not be many ideas that you understand in depth. Also, easy ideas attract more people, so the competition there is high. Here again, start reading ideas from the bottom up; ideas at the bottom of the page attract less attention, so you will be able to avoid competition.

Now you need to start work. You may have to learn a bit about the product before selecting the project, but do select the project as early as possible. Now the first thing you should do is inform the organization and the mentors that you are willing to contribute to this project. Subscribe to their mailing lists, write a mail to the mailing list, introduce yourself, say you are interested in this, and ask for further information. The main intention of doing this is to reserve your place. If someone has already shown interest in it, after you send the mail the organization will tell you that this person is also working on it. You can still proceed, but I suggest you abort and go to your 2nd choice. If you are the 1st to show interest in the project idea, go ahead, it's yours; even if someone else tries the same project later, you'll still have the better opportunity. You can find the mails I wrote to get a rough idea of what to write [3], [4], [5]. Make sure you subscribe to the developer mailing lists of the organization, so that you will get to know if anyone else is also working on your project.

Now you need to start working on your project. First learn how to contribute to the product. Check out the source code and learn how to build it. You may have to learn technologies such as git, svn, maven, ant, cmake, gmake, etc. If you have any questions, do not hesitate to ask them on the mailing list or in chatrooms such as IRC. It will work to your advantage: you may get to know the expert contributors of the project and become friends with them, which is very useful while you are progressing with your project and even after it finishes.

After you build the product, try to make some changes to the code, build it again, and check how they affect the product's functionality. You can try to fix a reported issue or two to get familiar with the source of the product. Spend 2-3 days on this.

Then you should start analyzing the problem you are going to work on. Always ask questions through mail and chat whenever you have any. Don't stay silent for more than a week; keep reminding others that you are working on this project.

After you get some idea about the project, start writing your proposal. Try to get a draft proposal ready, show it to the mentor, and get feedback. Then improve it with the comments you get. Some organizations provide a format for your proposal; if not, I found the following formats very effective when creating proposals [6], [7], [8].

Also, once you have your design ready, start the implementation as soon as possible; you can even do this before the proposal deadline. It will be a huge advantage toward getting your proposal accepted.

I believe if you follow this advice, you will easily get your proposal accepted. This is not the only way of doing it, but this is how I did it, and I was successful 3 out of 3 times.

I think that's enough for now. I wish you all the best of luck in getting a project accepted for GSoC. If anything is not clear, please leave a comment; I will be more than willing to help in any way possible.


Chanaka FernandoMicroservices, Integration and building your digital enterprise

It's time to rethink your enterprise IT ecosystem. The technology space is going through a period of major revamp and, whether you accept it or not, it is changing the way people do business. You may be a software architect of a multi-billion-dollar enterprise or the only software engineer of a small startup trying to figure out its way in the business world. It is essential to know the direction of the technology space and make your moves accordingly. At 33,000 feet, enterprises throughout the world are moving (most have already moved) towards digital technology. You may have already brought several third-party systems into your IT ecosystem, and they are functioning well within your requirements. All is running well and your organization is profitable.
All is well. Why bother about these hypes? Let me tell you one thing. The world of business is moving so fast that a billion-dollar company today can be deep in debt within a very short period. There will be a new startup offering some cool ideas, and they will grab all your customers if you don't provide the innovation the world is demanding. It is hard to innovate without a proper infrastructure to deliver the innovations to your customers. That is why you need to plan your IT ecosystem thinking not only about today but about the next few years.
Having said all that, there is always one thing stopping you from bringing these innovations to your organization: none other than the budget. Your boss might say, "Well, that is a cool idea. Can you do it for free?" Well, you can, up to a point. There are several open source solutions which you can use to bring innovation to your enterprise. Let's focus more on the methods than on the budget.
Let's think about a scenario where your organization is going to expose new business functionality to customers through APIs, such that web clients and mobile applications can consume your services through these APIs. To provide this new functionality, you need to integrate different internal systems, and you are going to develop a new set of services to cater to the business requirements. You have the following requirements to deliver the new business functionality to your customers.
  1. Provide APIs
  2. Integrate different systems
  3. Develop new services
There can be more requirements, like monitoring, etc., but let's focus on the major ones and start building your system. API management has been around for some time, and there are many open source and proprietary vendors from which you can select an offering. For the integration of systems there are also many. The really interesting question is: how do I develop my services? As you may have already heard, there is a new concept emerging in the software industry for developing services: Microservices Architecture (MSA). You can read about MSA and its pros and cons almost everywhere. The concept of MSA is that you develop your services in a way that they can be developed, deployed, and maintained in a loosely coupled, self-contained manner. The basic idea is that you should build your services so that each provides a real business functionality as a self-contained service. You can take down that specific service without shutting down your entire system. There are several microservices frameworks available as open source offerings; here is a list of promising ones.
  1. WSO2 MSF4J -
  2. Spring Boot -
  3. Spark -
  4. RestExpress -
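To make the idea of a self-contained service concrete, here is a minimal sketch using only the JDK's built-in HTTP server rather than any of the frameworks above; the service name, port, and JSON shape are all hypothetical choices for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OrderService {

    // A microservice owns one narrow business capability; here, order lookup.
    static String orderAsJson(String id) {
        return "{\"orderId\":\"" + id + "\",\"status\":\"SHIPPED\"}";
    }

    public static void main(String[] args) throws IOException {
        // Port 8090 is arbitrary; each service would get its own.
        HttpServer server = HttpServer.create(new InetSocketAddress(8090), 0);
        server.createContext("/orders", exchange -> {
            // Path is /orders/{id}; take the last segment as the order id.
            String path = exchange.getRequestURI().getPath();
            String id = path.substring(path.lastIndexOf('/') + 1);
            byte[] body = orderAsJson(id).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        // The service starts, scales, and stops independently of any other service.
        server.start();
    }
}
```

The point is the shape, not the library: the whole business capability lives in one small, independently deployable process.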
You can use any of the above mentioned frameworks for developing your new services. These services might expect messages with different formats and you need an integration layer to deal with these different types of messages. The below picture shows the architecture of your digital enterprise which consists of the previously mentioned key components (API, Integration, Services).

Sometimes there is a misconception about MSA that it throws away the integration layer and builds on top of "dumb pipes" for message exchange. This is not correct; especially when you have more and more microservices, you need an integration layer to deal with different message and protocol types. You need to keep this in mind and plan for future requirements rather than thinking about a simple service composition scenario where you can achieve all the communication using "dumb pipes".
One of the main areas of interest of MSA is the deployment strategy and the involvement of DevOps. You can deploy your microservices in containers so that they can be brought up and down whenever necessary without affecting other components. Integration solutions, in contrast, are more solid components in your architecture that do not need to be brought up and down so frequently. You can deploy them on typical hardware or virtual infrastructure without worrying about containerization.
Once you have this kind of architecture, you can achieve the following key benefits which are critical in the modern business eco system.
  1. Ability to expose your services to multiple consumers (through APIs) in a rapid manner
  2. Ability to come up with new services quickly (microservices deployment is rapid)
  3. Ability to connect with third party systems without worrying about their protocols or message types (Integration layer)
Finally, you can add analytics and monitoring into the picture and make your system fault tolerant and well monitored for any failures. That is a subject for a separate article, so I will stop right here.

Srinath PereraValue Proposition of Big Data after a Decade

Big data is an umbrella term for many technologies: search, NoSQL, distributed file systems, batch and realtime processing, and machine learning (data science). These different technologies are developed and proven to varying degrees. After 10 years, is it real? Following are a few success stories of what big data has done.

  1. Nate Silver predicted outcomes of 49 of the 50 states in the 2008 U.S. Presidential election
  2. Money Ball ( Baseball drafting)
  3. Cancer detection from biopsy cells (Big Data found 12 tell-tale patterns while doctors only knew about nine). See
  4. Bristol-Myers Squibb reduced the time it takes to run clinical trial simulations by 98%
  5. Xerox used big data to reduce the attrition rate in its call centre by 20%.
  6. Kroger Loyalty programs ( growth in 45 consecutive quarters)

As these examples show, big data indeed can work. Could it work for you? Let's explore this a bit.

The premise of big data goes as follows.

If you collect data about your business and feed it to a Big Data system, you will find useful insights that will provide a competitive advantage — (e.g. Analysis of data sets can find new correlations to “spot business trends, prevent diseases, combat crime and so on”. [Wikipedia])

When we say Big Data will make a difference, the underlying assumption is that the way we and our organisations operate is inefficient.

This means Big Data is an optimization technique. Hence, you must know what is worth optimizing. If your boss asked you to make sure the organization is using big data, doing "Big Data Washing" is easy.

  1. Publish or collect the data you can with a minimal effort
  2. Do a lot of simple aggregations
  3. Figure out which data combinations make the prettiest pictures
  4. Throw in some machine learning algorithms, predict something but don’t compare
  5. Create a cool dashboard and do a cool demo. Claim that you are just scratching the surface!!

However, adding value to your organization through big data is not that easy, because insights are not automatic. Insights are possible only if we have the right data, we look in the right place, such insights exist, and we actually find them.

Making a difference requires you to understand what is possible with big data and what its tools are, as well as the pain points in your domain and organization. The following picture shows some of the applications of big data within an organization.

The first step is asking, what are some of those applications that can make a difference for your organization.

The next step is understanding tools in “Big Data toolbox”. They come in many forms.

KPI (Key Performance Indicators) — People used to take canaries into coal mines. Since those small birds are very sensitive to the oxygen level in the air, if they got knocked out, you needed to be running out of the mine. KPIs are canaries for your organization. They are numbers that give you an idea about the performance of something — e.g. GDP, per capita income, or the HDI index for a country; company revenue, lifetime value of a customer, or revenue per square foot (in the retail industry). Chances are your organization or your domain has already defined them. The idea is to use Big Data to monitor the KPIs.

Dashboard — Think about a car dashboard. It gives you an idea about the overall system at a glance. It is boring when all is good, but it grabs attention when something is wrong. However, unlike car dashboards, big data dashboards support drilling down to find the root cause.

Alerts — Alerts are notifications (sent via email, SMS, pager, etc.). Their goal is to give you peace of mind by not having to check all the time. They should be specific, infrequent, and have very low false positives.

Sensors — Sensors collect data and make them available to the rest of the system. They are expensive and time-consuming to install.

Analytics — Analytics make decisions. They come in four forms: batch, realtime, interactive, and predictive.

  • Batch Analytics — processes the data that resides on disk. If you can wait (e.g. more than an hour) for data to be available, this is what you use.
  • Interactive Analytics — used by a human to issue ad-hoc queries and to understand a dataset. Think of it as having a conversation with the data.
  • Realtime Analytics — used to detect something quickly, within a few milliseconds to a few seconds. Realtime analytics are very powerful in detecting conditions over time (e.g. football analytics). Alerts are implemented through realtime analytics.
  • Predictive Analytics — learns a solution from examples. For example, it is very hard to write a program to drive a car, because there are too many edge conditions. We solve that kind of problem by giving a lot of examples and asking the computer to figure out a program that solves the problem (which we call a model). Two common forms are predicting the next value (e.g. electricity load prediction) and predicting a category (e.g. is this email spam?).
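The realtime-analytics-plus-alerts idea above can be sketched in a few lines; this is an illustrative toy (the window size, threshold, and class name are my own choices, not from any particular product): alert only when a full moving window averages above a limit, which keeps alerts infrequent and low on false positives.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ThresholdAlert {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;
    private final double limit;

    public ThresholdAlert(int size, double limit) {
        this.size = size;
        this.limit = limit;
    }

    // Feed one reading; fire an alert only once the moving average
    // over a full window exceeds the limit.
    public boolean onReading(double value) {
        window.addLast(value);
        if (window.size() > size) {
            window.removeFirst(); // keep only the most recent readings
        }
        double avg = window.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        return window.size() == size && avg > limit;
    }
}
```

For example, new ThresholdAlert(3, 10.0) stays quiet until three consecutive readings average above 10, rather than firing on every spike.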

Drill down — To make decisions, operators need to see the data in context and drill down into the details to understand the root cause. The typical flow is to start from an alert or dashboard, see the data in context (other transactions around the same time, what the same user did before and after, etc.), and then let the user drill down. For example, see the WSO2 Fraud Detection Solution Demo.

The process of deriving insight from the data, using the above tools, looks like the following.

Here different roles work together to explore data, understand data, to define KPIs, create dashboards, alerts etc.

In this process, keeping the system running is a key challenge. This includes DevOps challenges, integrating data continuously, updating models, and getting feedback about the effectiveness of decisions (e.g. accuracy of fraud detection). Hence doing things in production is expensive.

On the other hand, "doing it once" is cheap. Hence, you should first try your scenarios in an ad-hoc manner (hire some expertise if you must) and make sure big data can add value to the organization before setting up a system that does it every day.

Actionable Insights are the Key!!

Insights that you generate must be actionable. That means several things.

  1. The information you share is significant and warrants attention, and it is presented with its ramifications (e.g. more than two technical issues would lead a customer to churn)
  2. Decision makers can identify the context associated with the insight (e.g. operators can see the history of customers who qualify)
  3. Decision makers can do something about the insight (e.g. can work with customers to reassure them and fix the issues)

For each piece of information you show the user, think hard: "Why am I showing them this?", "What can they do with this information?", and "What other information can I show to make them understand the context?".

Where to Start?

Big Data projects can take many forms.

  1. Use an existing Dataset: I already have a data set, and list of potential problems. I will use Big data to solve some of the problems.
  2. Fix a known Problem: Find a problem, collect data about it, analyse, visualize, build a model, and improve. Then build a dashboard to monitor.
  3. Improve Overall Process: Instrument processes ( start with most crucial parts), find KPIs, analyze and visualize the processes, and improve
  4. Find Correlations: Collect all available data, data mine the data or visualize, find interesting correlations.

My recommendation is to start with #2, fix a known problem in the organization. That is the least expensive, and that will let you demonstrate the value of Big data right away.

Finally, the following are key take away points.

  • Big Data provide a way to optimize. However, blind application does not guarantee success.
  • Learn the tools in the Big Data toolbox: KPIs, analytics (batch, real-time, interactive, predictive), visualizations, dashboards, alerts, sensors.
  • Start small. Try out with data sets before investing in a system
  • Find a high impact problem and make it work end to end

Chathurika Erandi De SilvaCSV to XML with Smooks mediator WSO2 ESB


The Smooks mediator in WSO2 ESB can be used to support various kinds of conversions, and one such conversion use case is from CSV to XML.

Use Case

There is a CSV file that should be converted to XML format, so that the converted form can later be used with WSO2 ESB for further processing.

Demonstration with Smooks Mediator 

First of all, we need to create the Smooks configuration file to convert the CSV to XML.
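A minimal Smooks CSV reader configuration might look like the following sketch; the namespace versions and the field names are assumptions, while rootElementName and recordElementName are the parameters discussed below.

```xml
<?xml version="1.0"?>
<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd"
                      xmlns:csv="http://www.milyn.org/xsd/smooks/csv-1.2.xsd">
    <!-- Maps each CSV record to a <person> element under a <people> root -->
    <csv:reader fields="name,email,city"
                rootElementName="people"
                recordElementName="person"/>
</smooks-resource-list>
```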

The above configuration creates the nodes of the XML file with the given parameters rootElementName and recordElementName.

Secondly we need to create the proxy service that uses the Smooks mediator.

The above proxy uses the VFS transport to read the CSV file.

When the relevant file is placed in the location defined by transport.vfs.FileURI, the below output can be seen in the console

As shown, the CSV file is read and then converted to XML.
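For reference, the transformation amounts to a mapping like the following plain-Java sketch; this is only an illustration of the output shape, not how Smooks works internally, and the root, record, and field names are hypothetical.

```java
public class CsvToXml {

    /** Convert CSV lines into the XML shape the Smooks CSV reader would emit. */
    static String csvToXml(String csv, String root, String record, String[] fields) {
        StringBuilder xml = new StringBuilder("<" + root + ">");
        for (String line : csv.split("\n")) {
            String[] values = line.split(",");
            xml.append("<").append(record).append(">");
            // Pair each configured field name with the value in the same position.
            for (int i = 0; i < fields.length && i < values.length; i++) {
                xml.append("<").append(fields[i]).append(">")
                   .append(values[i].trim())
                   .append("</").append(fields[i]).append(">");
            }
            xml.append("</").append(record).append(">");
        }
        return xml.append("</").append(root).append(">").toString();
    }
}
```

For instance, the record "john,colombo" with fields name and city becomes a person element containing name and city children under a people root.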

Pushpalanka JayawardhanaAccount Deactivation with WSO2 Identity Server - 5.2.0

This is about a new feature addition that can be expected to be out with WSO2 Identity Server 5.2.0 version, which has been added to the current master branch for WSO2 IS at

This feature caters for account disabling requirements in addition to account locking. The account disabling function is provided through a user claim, similar to the account locking functionality. While account locking temporarily blocks user login due to a defined policy, such as consecutive unsuccessful logins, account disabling caters for disabling a user account, which is much more long term.

How to try?
  • Enable 'org.wso2.carbon.identity.mgt.IdentityMgtEventListener' in <IS_HOME>/repository/conf/identity/identity.xml file under Event Listeners.
  <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener"
                       orderId="50" enable="true"/>
  • In the <IS_HOME>/repository/conf/identity/ file, configure the properties below.

After the configurations are done, restart the server to have them effective.
Under claim management of WSO2 Identity Server, edit the claim "" to be supported by default. How to do this is described at

Now the required configurations are done. We can disable and enable user accounts through the user profile.

Danushka FernandoAdding a servlet to Orion (Eclipse Cloud IDE) from a package built using maven.

Orion is an open source project under the Eclipse Cloud Development top-level project. It's a cloud IDE which is available online and can also be hosted on premise [1] [2]. Orion can work with git repos, create git repos, or work with local code, the same as Eclipse.

Orion has two repos: one for the backend server code [2] and one for the client/UI code [3]. The Orion server is an OSGi server based on Eclipse Equinox. But the problem is that they have not implemented the dropins concept. Because of that, adding an external bundle to the Orion server is not straightforward: we have to edit the server configuration to add a bundle to the Orion server. And Orion uses Jetty for the servlets.

As mentioned, Orion works with Jetty, and adding a servlet via a bundle is easy if we use Eclipse itself to develop and package the bundle. But if we use Maven to build it, then problems arise. What we have to do is add an entry to plugins.xml and package the plugins.xml into the bundle we are creating.

So what has to be done is: add a plugins.xml (you can copy an existing plugins.xml from the Orion code base) to the resources directory and edit it to include your servlets. Then, in the pom.xml under the build tag, add the following section. You can look into the sample code [4].


Sample plugins.xml is as below.

 <?xml version="1.0" encoding="UTF-8"?>  
<?eclipse version="3.4"?>
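The body of the file then registers the servlet with the Equinox HTTP registry; a sketch is shown below, where the alias path is a hypothetical choice and the class matches the servlet defined later in this post.

```xml
<plugin>
   <extension point="org.eclipse.equinox.http.registry.servlets">
      <servlet alias="/helloworld"
               class="org.orion.sample.servlet.servlet.HelloWorldServlet"/>
   </extension>
</plugin>
```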

The servlet code will be the following.

 package org.orion.sample.servlet.servlet;

 import java.io.IOException;
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServletRequest;
 import javax.servlet.http.HttpServletResponse;
 import org.eclipse.orion.server.servlets.OrionServlet;

 public class HelloWorldServlet extends OrionServlet {

     public HelloWorldServlet() {
     }

     @Override
     protected void service(HttpServletRequest req, HttpServletResponse resp)
             throws ServletException, IOException {
         resp.getWriter().println("Hello JCG, Hello OSGi");
     }
 }

Since the bundle deployed to Orion has to be an OSGi bundle, you have to add the Maven SCR and Maven Bundle plugins to your project as well. So the build tag in the pom.xml should be like below.
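A sketch of such a build section is shown below, assuming the Apache Felix versions of the two plugins; the version numbers and OSGi instruction values are placeholders to adapt to your project.

```xml
<build>
  <plugins>
    <!-- Generates the SCR service component descriptor -->
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-scr-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>scr</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <!-- Packages the jar as an OSGi bundle -->
    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
      <configuration>
        <instructions>
          <Bundle-SymbolicName>org.orion.sample.servlet</Bundle-SymbolicName>
          <Export-Package>org.orion.sample.servlet.*</Export-Package>
        </instructions>
      </configuration>
    </plugin>
  </plugins>
</build>
```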


And you have to add the service component class to get it activated at startup. The code would be like the following.

 package org.orion.sample.servlet.internal;

 import org.osgi.service.component.ComponentContext;

 /**
  * @scr.component name="org.orion.sample.servlet.internal.ServletSampleServiceComponent" immediate="true"
  */
 public class ServletSampleServiceComponent {

     protected void activate(ComponentContext context) {
         System.out.println("Registration Complete");
     }
 }

Now you can run mvn clean install on the package and build the jar file. Then go into the Orion server folder, create a folder named dropins, and copy your jar file into it. Now edit the


file and add the following line at the end.



Chamila WijayarathnaBuilding WSO2 Identity Server with UI Tests

Have you ever built WSO2 Identity Server from source with its UI tests? Maybe you have not. Okay, did you know there are UI tests for WSO2 Identity Server? I'm going to answer these questions in this blog and help you build WSO2 Identity Server with UI tests.

In one of my previous blogs [1], I described the importance of having tests for a software product and of running them after making a change/fix to the source code. There are 3 types of tests that are used in WSO2 products (for now I only know about these 3 types :) ; if I get to know about anything else, I'll write about them for sure). They are,
  • Unit tests
  • Back end integration tests
  • UI integration tests
In my previous blog [1], I described the back end integration tests used in WSO2 IS. There we test the integration scenarios supported in IS, mainly using back end service calls.

Another type of test used in WSO2 products is the unit test. I haven't been involved much with implementing unit tests yet, so for now I don't know a lot about them. But currently we are moving from the Carbon 4 platform to the Carbon 5 platform, and as I heard, most of the testing is going to be via unit tests on the Carbon 5 platform. So we'll see a lot of them in the future.

For now, in this blog I will focus on the UI tests that are used in WSO2 Identity Server. At the moment there are only a few UI tests available for Identity Server; you can find them at [2]. But if you build WSO2 Identity Server from source using "mvn clean install", these tests will not run, because by default they do not run. You can, however, easily configure them to run in your product builds by changing <IS src Dir>/modules/integration/pom.xml as follows.


There you will observe that the "tests-ui-integrations" module is originally commented out. This is the reason why the tests do not run when we build IS. By uncommenting it, we configure them to run. If you build IS now with "mvn clean install", the UI tests available in IS will also run.
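Once uncommented, the modules section of that pom looks roughly like the sketch below (sibling modules are elided; the module name is taken from the text above).

```xml
<modules>
    <!-- ... other integration test modules ... -->
    <module>tests-ui-integrations</module> <!-- previously commented out -->
</modules>
```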

To run these UI tests successfully with the above command, you need to have Mozilla Firefox 24 installed, which is an older version. Currently I am using Firefox 44 on my Ubuntu 14.04 machine. The reason for this is that the WSO2 Test Automation Framework uses Selenium 2.40.0 for its UI tests. Usually each Selenium version supports the latest browser versions available at its release date; for Selenium 2.40.0 that is Firefox 24. So when I run the UI tests with the above configuration and command on my computer, they fail.

So how can I fix that? Do I have to downgrade my Firefox version? No, I can fix this without downgrading, as follows.

First I have to download the Mozilla Firefox 24 binaries from [3], save them somewhere on my computer, and extract the zip file.

Now I can build product-is with the following command.

mvn clean install -Dwebdriver.firefox.bin=<Firefox directory>/firefox/firefox-bin

I don't know exactly what this does, but from what I heard, the WSO2 Test Automation Framework uses these binary files when starting the web driver for running UI tests. When I run the above command, the build is successful with all the UI tests passing (if you haven't broken them).

So I hope now, if you are contributing to WSO2 Identity Server, before committing anything you will make sure the UI tests have not been broken by your changes.

Chanaka FernandoWhy Micro Services Architecture (MSA) is nothing but SOA in a proper, evolved state

If you are a person in the enterprise IT domain, there is a good chance you have heard the term "microservices". In your organization, there will be many people talking about it, and you may already have read a lot of material on the term. First of all, it is a great idea, and if you can use these concepts in your enterprise, that is pretty good. So then, what is this topic all about?

Let me explain a bit about the topic and the message I want to spread. If you are an IT professional who has been around long enough, you may have gone through the hype of SOA and might have adopted it in your enterprise. After spending millions of dollars and years of engineering time, now you have a solid SOA adoption and everything is running well (not perfect). As you know, the technology industry does not allow you to settle down. It does not care about your money or time; it will keep throwing new concepts and jargon into the picture. Microservices is that kind of thing which you have come across recently. With this blog post, I want to show people who have spent most of their budget on SOA adoption that you don't need to worry about the MSA hype. You are already doing it, and it is a seamless transformation from where you are to where you need to go with MSA (if you want to go that way).

I will start with a list of things that people say about the existing SOA architecture and describe as advantages of MSA.

  • Applications are developed as single monoliths and deployed as a single unit
  • Utterly complex applications due to many components and their interaction
  • Even hard to practice agile development strategies due to tight coupling
  • Hard to scale parts of the application since entire application needs to be scaled
  • Reliability issues with one part of the application failure may cause the entire application to stop functioning
  • Startup time is minimal, since we don't need to startup fully fledged servers
Well, that is quite a list to be alarmed about if it describes your system. Does that mean you need to scamper through and collect all your resources to work on and learn about MSA? Before doing that, let me explain how you can improve your existing system to fulfill these requirements without knowing anything about MSA (I'm not saying you shouldn't learn about it).

Applications are developed as single monoliths and deployed as a single unit

If you had followed the SOA principles in the first place, you would not encounter this issue. The fundamental concept of SOA is to modularize your systems into independent services, each catering to specific requirements. If you have developed a single monolith with all the capabilities, then go and fix it. This is nothing new from MSA; it was already there, and you did not execute it properly. If you needed to deploy these services on separate servers, you could have done that. But there were no concepts like containers back then, and you didn't want to waste one server for one service. Container-based deployments do not come from MSA; they are already there, and you can utilize them with your existing SOA services.

Utterly complex applications due to many components and their interaction

This is something you cannot get rid of even if you adopt MSA. It really depends on the capabilities of your application and the way you have developed and wired the different components. You can revisit your application and design it properly, but that is independent of SOA or MSA.

Even hard to practice agile development strategies due to tight coupling

Coupling between different services is entirely a design choice, and it will be there whether you are using MSA or not. If you design your services properly, you can work in an agile way.

Hard to scale parts of the application since entire application needs to be scaled

This is again a design choice you took in the past, to couple the different services and deploy them on the same server. If you had designed it according to the SOA principles and had container-based deployments, you would not have encountered this. Nothing new comes from MSA here.

Reliability issues with one part of the application failure may cause the entire application to stop functioning

Once again, container-based deployments and a proper design of your services would have prevented this kind of issue.

Startup time is minimal, since we don't need to startup fully fledged servers

Nothing specific to MSA. Container-based deployment and serverless applications could have fulfilled this requirement.

All in all, considering the above facts, we can see that there is not much coming from this microservices architecture but a set of things which were already there in SOA, with new concepts like container-based deployments presented under a special name. I don't have any intention to criticize the term or its importance. What I wanted to tell you is that there is not much you need to change if you are already doing SOA in your enterprise and are willing to adopt MSA.

One last thing I wanted to mention: sometimes people think they don't need the integration layer once they have MSA in place. That is one of the worst conclusions you could make, and it is not going to work in your enterprise. If you need further information on that, you can read the following links.


Prabath SiriwardenaA Stateless OAuth 2.0 Proxy for Single Page Applications (SPAs)

1. Build the sample SPA from

2. Copy the artifact (amazon.war) created in the above step to [CATALINA_HOME]\webapps

3. This sample assumes Apache Tomcat is running on localhost:8080 and WSO2 Identity Server 5.0.0 or 5.1.0 is running on localhost:9443

4. If you use different hostnames or ports, change the hostname and the port inside [CATALINA_HOME]\webapps\amazon\index.html and in.html

5. Also note that the value of the spaName query parameter in [CATALINA_HOME]\webapps\amazon\index.html should match the value sample1, which we define later; if you change this value, make sure you change it in both places.

6. Create a service provider in WSO2 Identity Server for the proxy app. Note that this is not for the SPA.

7. Configure OAuth 2.0 as the Inbound Authenticator, with https://localhost:9443/oauth2-proxy/callback as the callback URL. This is pointing to the oauth2-proxy app we are going to deploy in Identity Server later.

8. Create a properties file under IS_HOME\repository\conf and add the following properties to the file.
      9. The value of the client_id and the client_secret should be copied from the service provider you created in Identity Server

      10. The value of the proxy_callback_url should match the callback URL you configured when creating a service provider in Identity Server

      11. The value of sp_callback_url and sp_logout_url should point to the amazon web app running in Apache Tomcat

      12. The properties iv and secret_key are used to encrypt the tokens set as cookies. The value of iv must be 16 characters long. The value of the is_server property must point to the Identity Server.
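Putting the properties described above together, the file might look like the sketch below; every value is a placeholder to replace with the details of your own environment.

```properties
client_id=<client id from the service provider>
client_secret=<client secret from the service provider>
proxy_callback_url=https://localhost:9443/oauth2-proxy/callback
sp_callback_url=http://localhost:8080/amazon/index.html
sp_logout_url=http://localhost:8080/amazon/index.html
# iv must be exactly 16 characters long
iv=ABCDEFGHIJKLMNOP
secret_key=<your encryption key>
is_server=https://localhost:9443
```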

      13. Build the OAuth 2.0 proxy app from and copy target/oauth2-proxy.war to IS_HOME/repository/deployment/server/webapps

      14. Restart the Identity Server. Once everything is done and both the Identity Server and Apache Tomcat are up and running, you can test this by visiting http://localhost:8080/amazon and clicking on the Login link.

Prabath SiriwardenaA Lightweight Login API for WSO2 Carbon

1. Build the API from

2. Copy the artifact (login.war) created in the above step to IS_HOME/repository/deployment/server/webapps

3. Restart the WSO2 Identity Server and make sure the login.war is deployed properly.

4. Following is an example cURL request just to authenticate a user.

curl -k -v -H "Content-Type: application/json" -X POST -d @auth_req.json https://localhost:9443/login

auth_req.json:

{
  "username": "admin",
  "password": "admin"
}

Response:

HTTP/1.1 200 OK

5. Following is an example cURL request to authenticate a user and get all his roles.

curl -k -v -H "Content-Type: application/json" -X POST -d @auth_req.json https://localhost:9443/login

auth_req.json:

{
  "username": "admin",
  "password": "admin",
  "with_roles": true
}

Response:

HTTP/1.1 200 OK

6. Following is an example cURL request to authenticate a user and get all his roles and a selected set of claims.

curl -k -v -H "Content-Type: application/json" -X POST -d @auth_req.json https://localhost:9443/login

auth_req.json:

{
  "username": "admin",
  "password": "admin",
  "with_roles": true,
  "claims": [""]
}

Response:

HTTP/1.1 200 OK
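The three request bodies above differ only in the optional fields. As a quick sketch (not part of the API itself), the following Python snippet builds each auth_req.json variant; the claim URI used in the example is hypothetical, so substitute the claim URIs you actually need.

```python
import json

def build_auth_request(username, password, with_roles=False, claims=None):
    """Build the JSON body expected by the /login API (fields as in the examples above)."""
    body = {"username": username, "password": password}
    if with_roles:
        body["with_roles"] = True
    if claims is not None:
        body["claims"] = claims
    return body

# Write the third variant (roles plus selected claims) to auth_req.json.
# The claim URI below is only an example.
req = build_auth_request("admin", "admin", with_roles=True,
                         claims=["http://wso2.org/claims/emailaddress"])
with open("auth_req.json", "w") as f:
    json.dump(req, f, indent=2)
```

The resulting file can then be posted with the same cURL command shown above.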

Prabath SiriwardenaEnforce Password Reset for Expired Passwords During the Authentication Flow

In this blog post we will look into how to enforce password reset for expired passwords during the authentication flow. This is done by writing a custom connector and engaging it into the authentication flow.

1. Download connector code from and build the project with Maven, which will result in a org.wso2.carbon.identity.policy.password-1.0.0.jar file inside the target directory.

2. Copy the file org.wso2.carbon.identity.policy.password-1.0.0.jar to [IS_5.1.0]/repository/components/dropins/.

3. Copy to [IS_5.1.0]/repository/deployment/server/webapps/authenticationendpoint.

4. Edit the file [IS_5.1.0]/repository/conf/identity/ and add the following property.


5. Start WSO2 Identity Server.

6. Create a service provider and, under 'Local & Outbound Authentication Configuration' --> 'Advanced Configuration', define two steps. The first step uses the 'basic' local authenticator and the second step uses the 'password-reset-enforcer' local authenticator.

7. Once the service provider is created, we also need to create a claim and map that claim to a user store attribute to hold the timestamp of the password reset event.

8. Go to Claims --> Add --> Add New Claim --> Select and create a claim with the claim URI and make it ReadOnly. Also uncheck 'Supported By Default'.

9. That's it. During the authentication flow, if the password has expired, you will be prompted to reset it.
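The core decision the connector makes is a timestamp comparison: the claim created in step 8 stores the last password-reset time, and authentication forces a reset when that timestamp is older than the allowed validity period. The following is only an illustrative sketch of that logic, not the actual connector code; the 30-day validity period is an assumption (the real connector would read its policy from configuration).

```python
from datetime import datetime, timedelta

# Assumed policy: passwords expire 30 days after the last reset.
PASSWORD_VALIDITY = timedelta(days=30)

def password_expired(last_reset: datetime, now: datetime = None) -> bool:
    """Return True when the password must be reset before login can proceed."""
    now = now or datetime.utcnow()
    return now - last_reset > PASSWORD_VALIDITY

# Example: a password last reset 50 days ago has expired.
print(password_expired(datetime(2016, 1, 1), now=datetime(2016, 2, 20)))  # prints True
```

When the check returns True, the second authentication step redirects the user to the password-reset page instead of completing the login.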