WSO2 Venus

sanjeewa malalgoda

WSO2 API Manager 2.0.0 New Throttling Logic Execution Order

In this post I would like to discuss how throttling happens within the throttle handler, with the newly added complex throttling for API Manager. This order is very important, and we used it to optimize runtime execution. Here is the execution order of the different kinds of policies.

01. Blocking conditions
Blocking conditions are evaluated first as they are the least expensive check. All blocking conditions are evaluated on a per-node basis. Blocking conditions are just checks of certain conditions, so we don't need to maintain counters across all gateway nodes.

02. Advanced Throttling
If the request is not blocked, we move on to API-level throttling. Here we throttle at the API level and the resource level. The API-level throttle key will always be the API name, which means we can control requests per API.

03. Subscription Throttling with burst controlling
Next is subscription-level API throttling. When you have an API in the store, subscribers will come there and subscribe to it. Whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request reaches the gateway, we take the application ID (which uniquely identifies the application) and the API context + version (which uniquely identify the API) to create the key for subscription-level throttling. That means subscription-level throttling always counts requests for an API subscribed to through a given application.
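As a rough sketch (the key format and function name here are illustrative assumptions, not the actual gateway code), the subscription-level throttle key can be thought of as the application ID combined with the API context and version:

```python
def subscription_throttle_key(app_id: int, api_context: str, api_version: str) -> str:
    """Build a subscription-level throttle key (format is illustrative):
    application ID + API context + API version."""
    return f"{app_id}:{api_context}:{api_version}"

# Requests made through the same application to the same API share one counter.
key = subscription_throttle_key(7, "/pizzashack", "1.0.0")
print(key)  # 7:/pizzashack:1.0.0
```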

04. Application Throttling
Application-level throttling lets users control the total number of requests coming to all APIs subscribed to through a given application. In this case counters are maintained against the application-user combination.

05. Custom Throttling Policies
Users are allowed to define dynamic rules for specific use cases. This feature is applied globally across all tenants. System administrators should define these rules, and they will be applied across all users in the system. When you create a custom throttling policy you can define any policy you like; you need to write a Siddhi query to address the use case. The specific combination of attributes checked by the policy has to be defined as the key (called the key template). Usually the key template includes a predefined format and a set of predefined parameters.

Please see the diagram below (drawn by Sam Baheerathan) to understand this flow clearly.

Screen Shot 2016-09-28 at 7.41.46 PM.png

sanjeewa malalgoda

How the newly added Traffic Manager fits into a WSO2 API Manager distributed deployment

In this post I would like to share deployment diagrams for an API Manager distributed deployment and how it changes after adding a Traffic Manager to it. If you are interested in complex Traffic Manager deployment patterns, you can go through my previous blog posts. Here I will list only deployment diagrams.

Please see the distributed API Manager deployment diagram below.


Now here is how it looks after adding traffic manager instances to it.

Untitled drawing(2).jpg

Here is how the distributed deployment looks after adding high availability for the Traffic Manager instances.


Malith Munasinghe

Running WSO2 Update Manager periodically through a script

WSO2 Update Manager (WUM) ships a feature that much of the WSO2 community was anticipating: determining the relevant updates and preparing a deployment-ready pack has become easier. Although the client automates the process, DevOps still has to trigger the updates manually and prepare the pack. Checking for updates can itself be automated with a simple cron job. What is required is a script which initializes WUM and then executes the update.

# Load environment variables (wum needs its PATH and HOME settings)
source /etc/environment
# Initialize wum with your WSO2 credentials
wum init -u <wso2useremail> -p <password>
# Check for and download updates for the product
wum update <product-name>

Running this script through a cron job will connect to the server, check for the latest updates, and create a pack in $HOME/.wum-wso2/products/. A summary of the updates will also be sent to the email address you subscribed to WSO2 with.
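For example, assuming the script above is saved as /opt/scripts/ (the path, schedule, and log file here are illustrative choices), a crontab entry to check for updates nightly at 2 a.m. could be:

```
# m h dom mon dow command
0 2 * * * /bin/bash /opt/scripts/ >> /var/log/wum-update.log 2>&1
```

Redirecting stdout and stderr to a log file keeps a record of each run alongside the email summary WUM already sends.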

Samitha Chathuranga

Merging Traffic Manager and Gateway Profiles - WSO2 APIM

This guide describes how to configure a WSO2 API Manager 2.0.0 cluster, highlighting the specific scenario of merging the Traffic Manager profile into the Gateway. I will describe how to configure a sample API Manager setup which demonstrates merging the Traffic Manager and Gateway profiles.

I will configure the publisher, store, and key manager components in a single server instance, as the goal is to illustrate merging the gateway and traffic manager and starting that merged instance separately from the other components.

This sample setup consists of the following 3 nodes:
  1. Publisher + Store + Key Manager (P_S_K) ; offset = 0
  2. Gateway Manager/Worker + Traffic Manager (GM_T) ; offset = 1
  3. Gateway Worker + Traffic Manager (GW_T) ; offset = 2
  • We will refer to the 3 nodes as P_S_K, GM_T, and GW_T for convenience.
  • There is a cluster of gateways: one node acts as the manager/worker node and the other as a simple worker node.
  • Traffic managers are configured with high availability.
  • Port offsets are configured as mentioned above. To set the port offsets, edit the <Offset> element in <APIM_HOME>/repository/conf/carbon.xml on each of the nodes.

Figure 1 : Simple architectural diagram of the setup

1. Configuring datasources

We can configure databases according to the APIM documentation: Installing and configuring the databases.
[Please open such documentation links in Chrome or any browser except Firefox, as Firefox has a bug with Confluence (Atlassian) document links that prevents opening the link at the expected position on the page.]

Follow the steps in it carefully. Assume that the names of the databases we created are as follows,

       API Manager Database - apimgtdb
       User Manager Database - userdb
       Registry Database - regdb

In the above-mentioned doc, apply all the steps defined for the three store, publisher, and key manager nodes to our P_S_K node: in that documentation the publisher, store, and key manager live on different nodes, but in this setup a single node acts as all 3 components (publisher, store, and key manager).

Following is a summary of configured datasources for each node.
         P_S_K node : apimgtdb , userdb , regdb
         GM_T node / GW_T node : Not required

2. Configuring the connections among the components

You will now configure the inter-component relationships of the distributed setup by modifying their <APIM_HOME>/repository/conf/api-manager.xml files.
This section includes the following sub-topics.
  1. Configuring P_S_K node
  2. Configuring GM_T node & GW_T node

2.1 Configuring P_S_K node

Here we have to configure this node for all the 3 components, publisher, store, key manager related functionalities.

Configurations related to  publisher -

Configurations related to  store -

Configurations related to key manager-

Note: In the above docs, the setup has the publisher, store, and key manager on separate nodes in a cluster. So follow the steps as per your requirement, keeping in mind that you are configuring them into a single node, and take the port offsets into account.

2.2 Configuring GM_T node & GW_T node

The configurations for the two Gateway + Traffic Manager nodes are very similar, so follow each of the steps below for both nodes. I will point out the steps that vary where required.

Please note that when starting these nodes you have to start them with the default profile, as there is no customized profile for gateway + traffic manager.

2.2.1 Gateway component related configurations

This section involves setting up the gateway component related configurations to enable it to work with the other components in the distributed setup.

I will use G_T as shorthand for the GM_T or GW_T node. Use the node's own IP address in the configurations below.

  1. Open the <APIM_HOME>/repository/conf/api-manager.xml file in the GM_T/GW_T node.   
  2. Modify the api-manager.xml file as follows. This configures the connection to the Key Manager component.

3. Configure key management related communication. (both nodes)

In a clustered setup, if the Key Manager is fronted by a load balancer, you have to use WSClient as the KeyValidatorClientType in <APIM_HOME>/repository/conf/api-manager.xml. (This should be configured in all Gateway and Key Manager components, so in our setup configure it in the GM_T and GW_T nodes.)


4. Configure throttling for the Traffic Manager component. (both nodes). 
Modify the api-manager.xml file as follows

In the above configs, the <connectionfactory.TopicConnectionFactory> element configures the JMS topic URL that the worker node uses to listen for throttling-related events. In this case, each Gateway node has to listen to topics in both Traffic Managers, because if one node goes down, throttling should continue without interruption; the throttling counters will remain synced with the other node. Hence we have configured a failover JMS connection URL as pointed out above.
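As an illustration, a failover JMS connection URL covering both Traffic Managers (the ports follow this setup's offsets; the exact retry/delay query parameters are assumptions and may vary by broker version) might look like:

```
connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?failover='roundrobin'&brokerlist='tcp://<ip of GM_T>:5673?retries='5'%26connectdelay='50';tcp://<ip of GW_T>:5674?retries='5'%26connectdelay='50''
```

With a roundrobin failover broker list, the client transparently reconnects to the surviving Traffic Manager if the first one becomes unreachable.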

2.2.2 Clustering Gateway + Traffic Manager nodes

In our sample setup we are using two nodes.
1. Manager/Worker
2. Worker

We have followed the steps below for Traffic Manager related clustering. If you want to do the configurations for Gateway clustering with a load balancer, follow the documentation: configure host names in carbon.xml appropriately, add the SVN deployment synchronizer, etc.
Follow the steps below on both nodes for Traffic Manager related clustering.

Open the <AM_HOME>/repository/conf/axis2/axis2.xml file

1. Scroll down to the 'Clustering' section. To enable clustering for the node, set the value of the "enable" attribute of the "clustering" element to "true" on each of the 2 nodes.

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

2. Change the 'membershipScheme' parameter to 'wka'.
<parameter name="membershipScheme">wka</parameter>
3. Specify the host used to communicate cluster messages. (in both nodes)

<parameter name="localMemberHost"><ip of this node></parameter>

4. Specify the port used to communicate cluster messages.

        Let’s give port 4001 in GM_T node.

              <parameter name="localMemberPort">4001</parameter>

        Let’s give port 4000 in GW_T node.

              <parameter name="localMemberPort">4000</parameter>

5. Specify the name of the cluster this node will join. (for both nodes)

<parameter name="domain">wso2.carbon.domain</parameter>

6. Change the members listed in the <members> element. This defines the WKA members. (for both nodes)
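For this setup, the WKA members list would include both nodes' local member ports (IPs are placeholders, and the ports match steps 4 above):

```
<members>
    <member>
        <hostName><ip of GM_T></hostName>
        <port>4001</port>
    </member>
    <member>
        <hostName><ip of GW_T></hostName>
        <port>4000</port>
    </member>
</members>
```

Listing both nodes as well-known members lets either node bootstrap cluster membership from the other.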

2.2.3 Traffic manager related configurations

This section involves setting up the Traffic manager component related configurations to enable it to work with the other components in the distributed setup. 

1. Delete the contents of the <APIM_HOME>/repository/conf/registry.xml file and replace them with the contents of the <APIM_HOME>/repository/conf/registry_TM.xml file (in both nodes).

2. Remove all the existing webapps and jaggeryapps from the <APIM_HOME>/repository/deployment/server directory.
(in both nodes)

3. High Availability configuration for traffic manager component (in both nodes)
  • Open <APIM_HOME>/repository/conf/event-processor.xml and enable HA mode as below.
    <mode name="HA" enable="true">
  • Set the IP for event synchronization:
        <hostName><ip of this node></hostName>

2.2.4 Configuring JMS TopicConnectionFactories

Here we are configuring TopicConnectionFactories to publish data to the traffic managers. In this cluster configuration we use 2 TopicConnectionFactories (one per node), and configure each node to send data to both (its own TopicConnectionFactory and the other node's).
So open the <APIM_HOME>/repository/conf/ file (in both nodes) and make the following changes in it.

Change the line:

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5672'

to the following:

connectionfactory.TopicConnectionFactory1 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GM_T>:5673'

And add new line as,
connectionfactory.TopicConnectionFactory2 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GW_T>:5674'

Finally that section would be as below.

# register some connection factories
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.TopicConnectionFactory1 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GM_T>:5673'
connectionfactory.TopicConnectionFactory2 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GW_T>:5674'

5673 => 5672 + portOffset of GM_T

5674 => 5672 + portOffset of GW_T

2.2.5 Add event publishers to publish data to related JMS Topics.

(Do this for both nodes.) We have to publish data from the Traffic Manager component to the TopicConnectionFactory on its own node and on the other node too. So there should be 2 JMS event publisher files in <APIM_HOME>/repository/deployment/server/eventPublishers/ for that.

There is already a <APIM_HOME>/repository/deployment/server/eventPublishers/jmsEventPublisher.xml file in the default pack. In it, update the ConnectionFactoryJNDIName as below.

And that is it. You are done :-)

sanjeewa malalgoda

WSO2 API Manager - How do Custom Throttling Policies work?

Users are allowed to define dynamic rules for specific use cases. This feature is applied globally across all tenants. System administrators should define these rules, and they will be applied across all users in the system. When you create a custom throttling policy you can define any policy you like; you need to write a Siddhi query to address the use case. The specific combination of attributes checked by the policy has to be defined as the key (called the key template). Usually the key template includes a predefined format and a set of predefined parameters.
With the new throttling implementation using WSO2 Complex Event Processor as the global throttling engine, users will be able to create their own custom throttling policies by writing custom Siddhi queries. A key template can contain a combination of allowed keys separated by a colon ":" and each key should start with the "$" prefix. In WSO2 API Manager 2.0.0, users can use the following keys to create custom throttling policies.
  • apiContext,
  • apiVersion,
  • resourceKey,
  • userId,
  • appId,
  • apiTenant,
  • appTenant
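For example (an illustrative template, not the only valid combination), a key template that throttles per user per API would combine two of the keys above:

```
$userId:$apiContext
```

At runtime this resolves to a concrete throttle key such as admin@carbon.super:/pizzashack/1.0.0, which is the kind of key the sample policy below constructs with str:concat.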

Sample custom policy

FROM RequestStream
SELECT userId, (userId == 'admin@carbon.super' and apiKey == '/pizzashack/1.0.0:1.0.0') AS isEligible,
       str:concat('admin@carbon.super', ':', '/pizzashack/1.0.0:1.0.0') AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible==true]#window.time(1 min)
SELECT throttleKey, (count(throttleKey) >= 5) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;
As shown in the above Siddhi query, the throttle key should match the key template format. If there is a mismatch between the key template format and the throttle key, requests will not be throttled.

Prakhash Sivakumar

Dynamic Scanning with OWASP ZAP for Identifying Security Threats


Danushka Fernando

Setting up a Single Node Kubernetes Cluster with CoreOS Bare Metal

You might already know there is official documentation to follow for setting up a Kubernetes cluster on CoreOS bare metal. But when doing that, especially for a single-node cluster, I found some gaps in that documentation [1]. Another reason for this blog post is to get everything into one place. So this post describes how to overcome the issues of setting up a single-node cluster.

Installing Core OS bare metal.

You can refer to doc [2] to install CoreOS.

The first thing is about users. Documentation [2] tells you how to create a user without a password; to log in as that user you will need SSH keys. So to create a user with a username and password, you can use a cloud-config.yaml file. Here is a sample.

users:
  - name: user
    passwd: $6$SALT$3MUMz4cNIRjQ/Knnc3gXjJLV1vdwFs2nLvh//nGtEh/.li04NodZJSfnc4jeCVHd7kKHGnq5MsenN.tO6Z.Cj/
    groups:
      - sudo
      - docker

Here the value for passwd is a hash. One of the methods below can be used to hash a password. [3]

# On Debian/Ubuntu (via the package "whois")
mkpasswd --method=SHA-512 --rounds=4096
# OpenSSL (note: this will only make md5crypt. While better than plaintext, it should not be considered fully secure)
openssl passwd -1
# Python (change password and salt values)
python -c "import crypt, getpass, pwd; print crypt.crypt('password', '\$6\$SALT\$')"
# Perl (change password and salt values)
perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'

If you are installing this inside a private network (an office or university network) then you may need to set the IP, DNS, and so on. Especially for DNS: since resolution uses resolv.conf and that file always gets replaced, you may need to set it up as below.

Create a file in /etc/systemd/network/ with the content below, replacing the values with your network's values.

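A minimal static-network unit could look like the following (the interface name, addresses, and DNS server are assumptions for illustration; a file such as /etc/systemd/network/static.network):

```
[Match]
Name=enp2s0

[Network]
Address=
Gateway=10.0.0.1
DNS=10.0.0.2
```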

Then restart the network with command below.

 sudo systemctl restart systemd-networkd  

Now your core os installation is ready to install Kubernetes.

Installing Kubernetes on Core OS

The official documentation [1] describes how to install a cluster, but what I will explain is how to create a single-node cluster. You can follow the same documentation. When you create certs, create what's needed for the master node, and then go on and deploy the master node. You will not need the Calico-related steps if you don't specifically need to use Calico with Kubernetes.

In CoreOS, Kubernetes is installed as a service named kubelet. What you define there is the service definition and the supporting manifest files for the service. There are four components of Kubernetes which are configured as manifests inside /etc/kubernetes/manifests/:

  1. API server
  2. Proxy
  3. Controller Manager
  4. Scheduler
All four of these components will start as pods/containers inside the cluster.

Apart from these four configurations, you have also configured the kubelet service itself.

But with only these configurations, if you try to create a pod it will not get created; it will fail to schedule, because there is no node available in the cluster to schedule on. Usually masters don't schedule pods, which is why master scheduling is set to false in this documentation. To turn scheduling on, edit the service definition file /etc/systemd/system/kubelet.service and change --register-schedulable=false to --register-schedulable=true.

Now you will be able to schedule the pods in this node.

Configuring a registry.

The next step is configuring a registry. If you have already used Docker on another OS, you should know that adding an insecure registry is done using DOCKER_OPTS. One way to configure DOCKER_OPTS in CoreOS is to add it to the /run/flannel_docker_opts.env file, but it would be overridden when the server is restarted. For both insecure and proper registries, use the method explained in [4].


sanjeewa malalgoda

WSO2 API Manager - How does Subscription Throttling with burst controlling work?

Next is subscription-level API throttling. When you have an API in the store, subscribers will come there and subscribe to it. Whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request reaches the gateway, we take the application ID (which uniquely identifies the application) and the API context + version (which uniquely identify the API) to create the key for subscription-level throttling. That means subscription-level throttling always counts requests for an API subscribed to through a given application.

Up to API Manager 1.10, subscription-level throttling was applied on a per-user basis. That means if multiple users used the same subscription, each of them got their own copy of the allowed quota, which becomes unmanageable as the user base grows.

Also, when you define advanced throttling policies you can define a burst control policy. This is very important, because otherwise one user could consume the entire allocated quota within a short period of time and the rest of the users could not use the API fairly.

Screenshot from 2016-09-26 17-51-38.png

sanjeewa malalgoda

WSO2 API Manager - How Application Level Throttling Works?

Application-level throttling lets users control the total number of requests coming to all APIs subscribed to through a given application. In this case counters are maintained against the application.

Screenshot from 2016-09-26 17-52-46.png

sanjeewa malalgoda

WSO2 API Manager - How advanced API and Resource level throttling works?

If the request is not blocked, we move on to API-level throttling. Here we throttle at the API level and the resource level. The API-level throttle key will always be the API name, which means we can control requests per API.

Advanced API-level policies are applicable at 2 levels (this is not supported from the UI at the moment, but the runtime supports it):
  1. Per user level - all API request counts are kept against the user (per user + API combination).
  2. Per API/Resource level - counts are maintained per API, without considering the user.

For the moment, let's only consider the per-API count as it's supported OOB. First, API-level throttling happens: any policy you added when you defined the API will be applicable at the API level.

Then you can also add throttling tiers at the resource level when you create the API. That means a given resource will be allowed a certain quota: even if the same resource is accessed by different applications, it still allows the same total amount of requests.

Screenshot from 2016-09-26 17-53-50.png

When you design a complex policy you will be able to define it based on multiple parameters such as transport headers, IP addresses, user agent, or any other header-based attribute. When we evaluate this kind of complex policy, the API or resource ID will always be picked as the base key. Then multiple keys are created based on the number of conditional groups in your policy.

Screenshot from 2016-09-26 17-54-01.png

sanjeewa malalgoda

WSO2 API Manager new throttling - How do Blocking conditions work?

Blocking conditions are evaluated first as they are the least expensive check. All blocking conditions are evaluated on a per-node basis. Blocking conditions are just checks of certain conditions, so we don't need to maintain counters across all gateway nodes. For blocking conditions we evaluate requests against the following attributes. All these blocking conditions are added and evaluated at the tenant level, so one tenant cannot block another tenant's requests.
apiContext - if users need to block all requests coming to a given API, they may use this blocking condition. Here the API context will be the complete context of the API URL.
appLevelBlockingKey - if users need to block all requests coming to some application, they can use this blocking condition. Here the throttle key is constructed by combining the subscriber name and the application name.
authorizedUser - if we need to block requests coming from a specific user, this blocking condition can be used. The blocking key will be the authorized user present in the message context.
ipLevelBlockingKey - IP-level blocking can be used when we need to block a specific IP address from accessing the system. This also applies at the tenant level, and the blocking key is constructed from the IP address of the incoming message and the tenant ID.
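A minimal sketch of how these blocking keys might be assembled (the field names and key formats are illustrative assumptions, not the actual gateway implementation):

```python
def blocking_keys(api_context, subscriber, app_name, user, ip, tenant_id):
    """Assemble the four kinds of blocking keys described above.
    Formats are illustrative, not the exact gateway code."""
    return {
        "apiContext": api_context,                          # complete context of the API URL
        "appLevelBlockingKey": f"{subscriber}:{app_name}",  # subscriber + application name
        "authorizedUser": user,                             # authorized user from the message context
        "ipLevelBlockingKey": f"{tenant_id}:{ip}",          # tenant id + incoming IP address
    }

keys = blocking_keys("/pizzashack/1.0.0", "admin", "DefaultApplication",
                     "admin@carbon.super", "", -1234)
print(keys["appLevelBlockingKey"])  # admin:DefaultApplication
```

Because each key is a plain string comparison rather than a counter, every gateway node can evaluate these conditions locally.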

Screenshot from 2016-09-26 17-56-30.png

Imesh Gunaratne

A Reference Architecture for Deploying WSO2 Middleware on Kubernetes

A WSO2 white paper I recently wrote

Kubernetes is the result of over a decade and a half of experience managing production workloads in containers at Google. Google has been contributing to Linux container technologies such as cgroups, lmctfy, and libcontainer for many years and has been running almost all Google applications on them. As a result, Google started the Kubernetes project with the intention of implementing an open source container cluster management system similar to the one they use in-house, called Borg. Kubernetes provides deployment, scaling, and operation of application containers across clusters of hosts, providing container-centric infrastructure. It can run on any infrastructure and can be used for building public, private, hybrid, and multi-cloud solutions. Kubernetes provides support for multiple container runtimes: Docker, Rocket (rkt), and appc.

Please find it at:

Thanks to Prabath for providing the idea of publishing this on Medium!

Dimuthu De Lanerolle

Publishing API Runtime Statistics Using WSO2 DAS & WSO2 APIM

Please note that for this tutorial I am using APIM 1.10.0 and DAS 3.0.1 versions.

WSO2 API Manager is a complete solution for designing and publishing APIs, creating and managing a developer community, and for securing and routing API traffic in a scalable way. It leverages proven components from the WSO2 platform to secure, integrate and manage APIs. In addition, it integrates with the WSO2 Analytics Platform, and provides out of the box reports and alerts, giving you instant insight into APIs' behavior.

To download WSO2 APIM, click here.

WSO2 Data Analytics Server is a comprehensive enterprise data analytics platform; it fuses batch and real-time analytics of any source of data with predictive analytics via machine learning. It supports the demands of not just business, but Internet of Things solutions, mobile and Web apps.

To download WSO2 DAS, click here.

Configuring WSO2 APIM

1. Navigate to [APIM_HOME]/repository/conf/api-manager.xml and enable the below section.

<!-- For APIM implemented Statistic client for RDBMS -->


2. Start APIM server. 

3. Log in to the admin dashboard, e.g. https://localhost:9443/admin-dashboard. Select "Configure Analytics" and tick the enable checkbox. Configure as shown below.

Configuring WSO2 DAS

1. To prevent server startup conflicts we will start DAS with a port offset of 1.
To do so, navigate to [DAS_HOME]/repository/conf and open the carbon.xml file.
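In carbon.xml, the port offset is set via the <Offset> element under <Ports>; for this DAS node it would be:

```
<Offset>1</Offset>
```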


2. Navigate to [DAS_HOME]/repository/conf/datasources/master-datasources.xml and add the below.

  <description>The datasource used for setting statistics to API Manager</description>
  <definition type="RDBMS">
      <validationQuery>SELECT 1</validationQuery>

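The fragment above belongs inside a <datasource> definition in master-datasources.xml. A fuller sketch follows; the datasource/JNDI names, credentials, host, and pool settings here are assumptions for illustration, based on the 'TestStatsDB' database created below:

```
<datasource>
    <name>WSO2AM_STATS_DB</name>
    <description>The datasource used for setting statistics to API Manager</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/TestStatsDB</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```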


Prior to adding this config you need to create a database called 'TestStatsDB' in your MySQL server and create the required tables. To do so, follow the instructions below.

[1] Navigate to [APIM_HOME]/dbscripts/stat/sql from your Ubuntu console and select the mysql.sql script.

type: source [APIM_HOME]/dbscripts/stat/sql/mysql.sql

- which will create the required tables for our scenario.

[2] Now navigate to [APIM_HOME]/repository/components/lib and add the required MySQL connector jar from here.

Create / Publish / Invoke API

1. Navigate to APIM Publisher and create an API. Publish it.

2. Navigate to APIM Store and subscribe to the API.

3. In the Publisher instance select created API and navigate to 'Implement' section and select

            Destination-Based Usage Tracking: enable

4. Invoke the API several times.

View API Statistics

Navigate to APIM publisher instance. Select "Statistics" section. Enjoy !!!

PS:

If you want to do a load test or just play with JMeter, for easy reference I have attached a sample JMeter script for the 'Calculator API'.

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="2.8" jmeter="2.13 r1665067">
    <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
      <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="HTTPSampler.domain">localhost</stringProp>
      <stringProp name="HTTPSampler.port">8280</stringProp>
      <stringProp name="HTTPSampler.connect_timeout"></stringProp>
      <stringProp name="HTTPSampler.response_timeout"></stringProp>
      <stringProp name="HTTPSampler.protocol"></stringProp>
      <stringProp name="HTTPSampler.contentEncoding"></stringProp>
      <stringProp name="HTTPSampler.path">/calc/1.0/subtract?x=33&amp;y=9</stringProp>
      <stringProp name="HTTPSampler.method">GET</stringProp>
      <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
      <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
      <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
      <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
      <stringProp name="HTTPSampler.implementation">HttpClient4</stringProp>
      <boolProp name="HTTPSampler.monitor">false</boolProp>
      <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
      <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager" testname="HTTP Header Manager" enabled="true">
        <collectionProp name="HeaderManager.headers">
          <elementProp name="" elementType="Header">
            <stringProp name="">Authorization</stringProp>
            <stringProp name="Header.value">Bearer 2e431777ce280f385c30ac82c1e1f21c</stringProp>
          </elementProp>
        </collectionProp>
      </HeaderManager>
    </HTTPSamplerProxy>
</jmeterTestPlan>

Supun Sethunga

Running Python on WSO2 Analytics Servers (DAS)

In the previous post we discussed how to connect a Jupyter notebook to PySpark. Going further, in this post I will discuss how you can run Python scripts and analyze and build machine learning models on top of data stored in WSO2 Data Analytics Servers. You may use a vanilla Data Analytics Server (DAS) or any other analytics server, such as the ESB Analytics, APM Analytics, or IS Analytics servers, for this purpose.


  • Install jupyter
  • Download WSO2 Data Analytics Server (DAS) 3.1.0
  • Download and uncompress the Spark 1.6.2 binary.
  • Download pyrolite-4.13.jar

Configure the Analytics Server

In this scenario the Analytics Server will act as the external Spark cluster as well as the data source. Hence it is required to start the Analytics server in cluster mode. For that, open <DAS_HOME>/repository/conf/axis2/axis2.xml and enable clustering as follows:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

When the Analytics server starts in a cluster, it creates a Spark cluster as well (or, if it's pointed at an external Spark cluster, it will join that cluster). The Analytics server also creates a Spark app, which will accumulate all the existing cores in the cluster. When python/pyspark connects to the same cluster, it also creates a Spark app, but since no cores are available to run it, it will stay in the "waiting" state and will not run. To avoid that, we need to limit the resources allocated to the CarbonAnalytics Spark app. To do so, open the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file and set/change the following parameters.
carbon.spark.master.count 1

# Worker
spark.worker.cores 4
spark.worker.memory 4g

# Executor
spark.executor.memory 2g

# Maximum cores per application
spark.cores.max 2

Note that "spark.worker.cores" (4) is the total number of cores we allocate for Spark, and "spark.cores.max" (2) is the maximum number of cores allocated for each Spark application.

Since we are not using a minimum HA cluster in DAS/Analytics Server, we need to set the following property in  <DAS_HOME>/repository/conf/etc/tasks-config.xml file.

Now start the server by navigating to <DAS_HOME> and executing:

Once the server is up, to check whether the spark cluster is correctly configured, navigate to the spark master web UI on: http://localhost:8081/. It should show something similar to below.

Note that the number of cores allocated for the worker is 4 (2 used), and the number of cores allocated for the CarbonAnalytics application is 2. You can also see the spark master URL in the top-left corner (spark://<host>:<port>). This URL is used by pyspark and other clients to connect/submit jobs to this spark cluster.

Now to run a python script on top of this Analytics Server, we have two options:
  • Connect ipython/jupyter notebook, execute the python script from the UI.
  • Execute the raw python script using spark-submit.

Connect Jupyter Notebook (Option I)

open ~/.bashrc and add the following entries: 
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
export PYSPARK_PYTHON=/home/supun/Supun/Softwares/anaconda3/bin/python
export SPARK_HOME="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6"
export PATH="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6/bin:$PATH"
export SPARK_CLASSPATH=/home/supun/Downloads/pyrolite-4.13.jar

When we run a python script on top of spark, pyspark will submit it as a job to the spark cluster (the Analytics server, in this case). Therefore we need to add all the external jars to spark's classpath, so that the spark executors know where to look for the classes at runtime. Thus, we need to add the absolute paths of the jars located in the <DAS_HOME>/repository/libs and <DAS_HOME>/repository/components/plugins directories to the spark classpath, separated by colons (:) as below.
export SPARK_CLASSPATH=/home/supun/Downloads/wso2das-3.1.0/repository/components/plugins/abdera_1.0.0.wso2v3.jar:/home/supun/Downloads/wso2das-3.1.0/repository/components/plugins/ajaxtags_1.3.0.beta-rc7-wso2v1.jar.......
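Listing every jar by hand is tedious; as a sketch (assuming the DAS 3.1.0 paths used above; adjust DAS_HOME to your installation), the colon-separated list can be built with a shell glob:

```shell
# Build SPARK_CLASSPATH from all jars in the DAS lib/plugins directories.
# DAS_HOME below is an example path; point it at your own installation.
DAS_HOME=/home/supun/Downloads/wso2das-3.1.0
export SPARK_CLASSPATH=$(echo "$DAS_HOME"/repository/libs/*.jar "$DAS_HOME"/repository/components/plugins/*.jar | tr ' ' ':')
```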

To make the changes take effect, run:
source ~/.bashrc 

Create a new directory to be used as the python workspace (say "python-workspace"); this directory will be used to store the scripts we create in the notebook. Navigate to that directory, and start the notebook, specifying the master URL of the remote spark cluster as below.
pyspark --master spark:// --conf "spark.driver.extraJavaOptions=-Dwso2_custom_conf_dir=/home/supun/Downloads/wso2das-3.1.0/repository/conf"

Finally, navigate to http://localhost:8888/ to access the notebook, and create a new python script via New --> Python 3.
Then check the spark master UI (http://localhost:8081) again. You should see that a second application named "PySparkShell" has started, and is using the remaining 2 cores. (see below)

Retrieve Data

In the new python script we created in jupyter, we can use any spark-python API. To do spark operations with python, we are going to need the SparkContext and SQLContext. When we start jupyter with pyspark, it creates a spark context by default, which can be accessed through the object 'sc'.
We can also create our own spark context, with any additional configurations. But to create a new one, we need to stop the existing spark context first.
from pyspark import SparkContext, SparkConf, SQLContext

# Set the additional properties.
sparkConf = (SparkConf()
             .set(key="spark.driver.allowMultipleContexts", value="true")
             .set(key="spark.executor.extraJavaOptions",
                  value="-Dwso2_custom_conf_dir=/home/supun/Downloads/wso2das-3.1.0/repository/conf"))

# Stop the default SparkContext created by pyspark, and create a new
# SparkContext using the above SparkConf.
sc.stop()
sparkCtx = SparkContext(conf=sparkConf)

# Check the spark master.
print(sparkCtx.master)

# Create a SQL context.
sqlCtx = SQLContext(sparkCtx)

df = sqlCtx.sql("SELECT * FROM table1")

'df' is a spark dataframe. Now you can do any spark operation on top of that dataframe. You can also use the spark-mllib and spark-ml packages to build machine learning models. You can refer to [1] for such a sample, on training a Random Forest classification model on top of data stored in WSO2 DAS.

Running Python script without jupyter Notebook (Option II)

Other than running python scripts with the notebook, we can also run a raw python script directly on top of spark, using spark-submit. For that we can use the same python script as above, with a slight modification: in the above scenario there is a default spark context ("sc") created by the notebook, but in this case there won't be any such default spark context. Hence we do not need the sc.stop() snippet (or else it will give errors). Once we remove that line of code, we can save the script with a .py extension, and then run the saved script as below:
<SPARK_HOME>/bin/spark-submit --master spark:// --conf "spark.driver.extraJavaOptions=-Dwso2_custom_conf_dir=/home/supun/Downloads/wso2das-3.1.0/repository/conf" <path/to/script>.py

You can refer [2] for a python script which does the same as the one we discussed earlier.



Lakshani Gamage - How to Enable Wire Logs in WSO2 ESB/APIM

You can use the steps below on WSO2 ESB or APIM to enable wire logs.
  1. Uncomment the wire log line in <PRODUCT_HOME>/repository/conf/
  2. Restart Server.
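For reference, the wire-log entry to uncomment (the standard synapse wire logger property; verify it against your product's log4j configuration file) typically looks like this:

```properties
log4j.logger.org.apache.synapse.transport.http.wire=DEBUG
```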

Supun Sethunga - Generating OAuth2 Token in WSO2 APIM

First we need to create and publish an API in the APIM Publisher. Then go to the Store, subscribe an application to the API, and get the consumer key and consumer secret. Then execute the following cURL command to generate the OAuth2 token.
curl -k -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic OXFhdUhUSjZoX0pkQUg2aDluOFZXWTV6NXQwYTpNZ0ZqTzQ4bW5OdzhVZHRkd1Bodkx1TDh5bWth" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

Where OXFhdUhUSjZoX0pkQUg2aDluOFZXWTV6NXQwYTpNZ0ZqTzQ4bW5OdzhVZHRkd1Bodkx1TDh5bWth is the base64 encoded value of <consumer_key>:<consumer_secret>
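The base64 credential string can be generated in a couple of lines of Python; the key and secret below are hypothetical placeholders, not real credentials:

```python
import base64

# Hypothetical consumer key/secret copied from the API Store application.
consumer_key = "myKey"
consumer_secret = "mySecret"

# The Authorization header value is "Basic base64(<consumer_key>:<consumer_secret>)".
credentials = base64.b64encode(f"{consumer_key}:{consumer_secret}".encode()).decode()
print("Authorization: Basic " + credentials)
```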

The received token can then be used to invoke the above published API, passing it in the header in the following format.
                  Authorization : Bearer <token>

Supun Sethunga - Enabling Mutual SSL for Admin Services in WSO2 IS

When there is a requirement to call secured web services/admin services without using user credentials, mutual SSL can come in handy. Here, authentication is done using public certificates. The following steps can be used to enable mutual SSL in WSO2 Identity Server 5.0.0.

  1. Copy org.wso2.carbon.identity.authenticator.mutualssl_4.2.0.jar which is available under resources/dropins directory of the SP1 (WSO2-IS-5.0.0-SP01/resources/dropins/org.wso2.carbon.identity.authenticator.mutualssl_4.2.0.jar) to <IS_HOME>/repository/components/dropins directory.

  2. Open the <IS_Home>/repository/conf/tomcat/catalina-server.xml file. Then set the connector property "clientAuth" to "want".

  3. To enable the Mutual SSL Authenticator, add the following to <IS_HOME>/repository/conf/security/authenticators.xml file.
    <Authenticator name="MutualSSLAuthenticator" disabled="false">
            <Parameter name="UsernameHeader">UserName</Parameter>
            <Parameter name="WhiteListEnabled">false</Parameter>
            <Parameter name="WhiteList"/>
    </Authenticator>
    Note: If you have enabled SAML SSO for IS, you need to set a higher priority for MutualSSLAuthenticator than for SAML2SSOAuthenticator.

  4. Extract the WSO2 public certificate from <IS_Home>/repository/resources/security/wso2carbon.jks and add it to the client's trust store. Then add the client's public certificate to the carbon trust store, which can be found at <IS_Home>/repository/resources/security/client-truststore.jks.
    To extract a certificate from wso2carbon.jks
    keytool -export -alias wso2carbon -file carbon_public.crt -keystore wso2carbon.jks -storepass wso2carbon
    To import client's certificate to carbon trust store:
    keytool -import -trustcacerts -alias <client_alias> -file <client_public.crt> -keystore client-truststore.jks -storepass wso2carbon

  5. Now you can call the service by adding the username to either a SOAP header or an HTTP header, as follows.
    Add Soap header:
    <m:UserName soapenv:mustUnderstand="0" xmlns:m="">admin</m:UserName>
    Add HTTP Header:
    UserName : <Base64-encoded-username>
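As a quick sketch, the header value for the user "admin" can be computed like this:

```python
import base64

# Base64-encode the username for the UserName HTTP header.
username = "admin"
encoded = base64.b64encode(username.encode()).decode()
print("UserName : " + encoded)
```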

Supun Sethunga - Profiling with Java Flight Recorder

Java profiling can help you assess the performance of your program, improve your code, and identify defects such as memory leaks, high CPU usage, etc. Here I will discuss how to profile your code using the JDK's built-in utility jcmd and Java Mission Control.

Getting a Performance Profile

A profile can be obtained using either jcmd or Mission Control. jcmd is a command-line tool, whereas Mission Control comes with a UI. jcmd is lightweight compared to Mission Control, and hence has less effect on the performance of the program you are going to profile; therefore jcmd is preferable for taking a profile. In order to get a profile:

First, we need to find the process id of the running program we want to profile (e.g. using jps or ps).

Then, unlock commercial features for that process:
jcmd <pid> VM.unlock_commercial_features

Once the commercial features are unlocked, start the recording.
jcmd <pid> JFR.start delay=20s duration=1200s name=rec_1 filename=./rec_1.jfr settings=profile

Here 'delay', 'name' and 'filename' all are optional. To check the status of the recording:
jcmd <pid> JFR.check

Here I have set the recording for 20 mins (1200 sec.). But you can take a snapshot of the recording at any point within that duration, without stopping the recording. To do that:
jcmd <pid> JFR.dump recording=rec_1 filename=rec_1_dump_1.jfr

Once the recording is finished, it will automatically write the output jfr to the file we gave at the start. But if you want to stop the recording in the middle and get the profile, you can do that by:
jcmd <pid> JFR.stop recording=rec_1 filename=rec_1.jfr  

Analyzing the Profile

Now that we have the profile, we need to analyze it. For that, jcmd itself is not going to be enough; we are going to need Java Mission Control. You can simply open Mission Control and then open your .jfr file with it (or drag and drop the jfr file onto the Mission Control UI). Once the file is open, it will navigate you to the overview page, which usually looks as follows:

Here you can find various options to analyze your code. You can drill down to thread level, class level and method level, and see how the code performed during the time we recorded the profile. In the next blog I will discuss in detail how we can identify defects in the code using the profile we just obtained.

Supun Sethunga - Basic DataFrame Operations in python


  • Install python
  • Install ipython notebook

Create a directory as a workspace for the notebook, and navigate to it. Start jupyter by running:
jupyter notebook

Create a new python notebook. To use Pandas DataFrames in this notebook script, we first need to import the pandas library as follows.
import numpy as np
import pandas as pd

Importing a Dataset

To import a csv file from local file system:
filePath = "/home/supun/Supun/MachineLearning/data/Iris/train.csv"
irisData = pd.read_csv(filePath)

Output will be as follows:
     sepal_length  sepal_width  petal_length  petal_width
0 NaN 3.5 1.4 0.2
1 NaN 3.0 1.4 0.2
2 NaN 3.2 1.3 0.2
3 NaN 3.1 1.5 0.2
4 NaN 3.6 1.4 0.2
5 NaN 3.9 1.7 0.4
6 NaN 3.4 1.4 0.3
7 NaN 3.4 1.5 0.2
8 NaN 2.9 1.4 0.2
9 NaN 3.1 1.5 0.1
10 NaN 3.7 1.5 0.2
11 NaN 3.4 1.6 0.2
12 NaN 3.0 1.4 0.1

Basic Retrieve Operations

Get a single column of the dataset. Say we want to get all the values of the column "sepal_length":
print(irisData["sepal_length"])

Get multiple columns of the dataset. Say we want to get all the values of the columns "sepal_length" and "petal_length":
print(irisData[["sepal_length", "petal_length"]])
#Note there are two square brackets.
Get a subset of rows of the dataset. Say we want to get the first 10 rows of the dataset:
print(irisData[0:10])

Get a subset of rows of a column of the dataset. Say we want to get the first 10 rows of the column "sepal_length" of the dataset:
print(irisData["sepal_length"][0:10])

Basic Math Operations

Add a constant to each value of a column in the dataset:
print(irisData["sepal_length"] + 5)

Add two (or more) columns in the dataset:
print(irisData["petal_width"] + irisData["petal_length"])
Here the values will be added row-wise, i.e. the value in the n-th row of the petal_width column is added to the value in the n-th row of the petal_length column.

Similarly we can do the same for other math operations such as subtraction (-), multiplication (*) and division (/).
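The column operations above can be sketched end-to-end with a tiny, made-up DataFrame (the values below are illustrative, not the real iris data):

```python
import pandas as pd

# A small illustrative stand-in for the iris data.
irisData = pd.DataFrame({
    "sepal_length": [5.1, 4.9, 4.7],
    "petal_length": [1.4, 1.4, 1.3],
    "petal_width":  [0.2, 0.2, 0.2],
})

# Add a constant to each value of a column.
shifted = irisData["sepal_length"] + 5

# Row-wise addition of two columns.
total = irisData["petal_width"] + irisData["petal_length"]

print(shifted.tolist())
print(total.tolist())
```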

Supun Sethunga - Setting up a Fully Distributed Hadoop Cluster

Here I will discuss how to set up a fully distributed hadoop cluster with 1 master and 2 slaves. The three nodes are set up on three different machines.

Updating Hostnames

To start things off, let's first give hostnames to the three nodes. Edit the /etc/hosts file with the following command.
sudo gedit /etc/hosts

Add the following hostnames against the IP addresses of the three nodes. Do this on all three nodes.
<ip-of-master>     hadoop.master
<ip-of-slave-1>    hadoop.slave.1
<ip-of-slave-2>    hadoop.slave.2

Once you do that, update the /etc/hostname file to include hadoop.master/hadoop.slave.1/hadoop.slave.2 as the hostname of each of the machines respectively.


For security reasons, one might prefer to have a separate user for Hadoop. In order to create a separate user, execute the following commands in the terminal:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
Give a desired password.

Then restart the machine.
sudo reboot

Install SSH

Hadoop needs to copy files between the nodes. For that, it should be able to access each node with ssh, without having to give a username/password. Therefore, first we need to install the ssh client and server.
sudo apt install openssh-client
sudo apt install openssh-server

Generate a key
ssh-keygen -t rsa -b 4096

Copy the key for each node
ssh-copy-id -i $HOME/.ssh/ hduser@hadoop.master
ssh-copy-id -i $HOME/.ssh/ hduser@hadoop.slave.1
ssh-copy-id -i $HOME/.ssh/ hduser@hadoop.slave.2

Try sshing to all the nodes. eg:
ssh hadoop.slave.1

You should be able to ssh to all the nodes without providing the user credentials. Repeat this step on all three nodes.

Configuring Hadoop

To configure hadoop, change the following configurations:

Define hadoop master url in <hadoop_home>/etc/hadoop/core-site.xml , in all nodes.
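As a sketch, the core-site.xml entry might look like the following (the hostname comes from the /etc/hosts step above; the port is an assumption, adjust it to your setup):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop.master:9000</value>
  </property>
</configuration>
```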

Create two directories /home/wso2/Desktop/hadoop/localDirs/name and /home/wso2/Desktop/hadoop/localDirs/data (and make hduser the owner, if you created a separate user for hadoop). Give read/write permissions to those folders.

Modify the <hadoop_home>/etc/hadoop/hdfs-site.xml as follows, in all nodes.
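A minimal hdfs-site.xml sketch, assuming the name/data directories created above and a replication factor of 2 (one copy per slave):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <value>file:///home/wso2/Desktop/hadoop/localDirs/name</value>
  </property>
  <property>
    <value>file:///home/wso2/Desktop/hadoop/localDirs/data</value>
  </property>
</configuration>
```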

<hadoop_home>/etc/hadoop/mapred-site.xml (all nodes)

Add the hostname of master node, to <hadoop_home>/etc/hadoop/masters file, in all nodes.

Add hostname of slave nodes  to <hadoop_home>/etc/hadoop/slaves file, in all nodes.

(Only in Master) We need to format the namenode before we start hadoop. For that, in the master node, navigate to the <hadoop_home>/bin/ directory and execute the following.
./hdfs namenode -format

Finally, start the hadoop server by navigating to the <hadoop_home>/sbin/ directory, and execute the following:
./

If everything goes well, HDFS should be started, and you can browse the web UI of the namenode at: http://localhost:50070/dfshealth.jsp.

Supun Sethunga - Setting up a Fully Distributed HBase Cluster

This post will discuss how to set up a fully distributed hbase cluster. Here we will not run zookeeper as a separate server, but will be using the zookeeper embedded in hbase itself. Our setup will consist of 1 master node and 2 slave nodes.


  • Update /etc/hostname file to include hadoop.master, hadoop.slave1, hadoop.slave2 respectively, as the hostnames of the machines.
  • Download hbase 1.2.1
  • Setup a fully distributed Hadoop cluster [1].
  • Start the hadoop server.

Configure HBase

First, create a directory for hbase in the hadoop file system. For that, navigate to <hadoop_home>/bin and execute:
hadoop fs -mkdir /hbase

Do the following configurations in <hbase_home>/conf/hbase-site.xml. Note that the host and port of "hbase.rootdir" should be the same host and port as hadoop's, which we gave at the prerequisites step.

Here, hbase.rootdir should be on the namenode; in our case, master is the namenode. The zookeeper quorum should be the slave nodes; this tells which nodes should run zookeeper, and its client-port property (from ZooKeeper's config zoo.cfg) is the port at which the clients will connect. It is preferred to have an odd number of nodes for zookeeper.
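Putting the points above together, a hbase-site.xml sketch could look like the following (hostnames and ports are assumptions based on the hadoop setup in [1]):

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop.master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop.slave1,hadoop.slave2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
    <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
  </property>
</configuration>
```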

Add the hostnames of the slave nodes to <hbase_home>/conf/regionservers, in all nodes except the master/namenode.

Finally, set the following jvm properties in the <hbase_home>/conf/ file. The property HBASE_MANAGES_ZK indicates that hbase manages the zookeeper, and no external zookeeper server is running.

# To use built in zookeeper
export HBASE_MANAGES_ZK=true

# set java class path
export JAVA_HOME=your_java_home

# Add hadoop-conf directory to hbase class path:
export HBASE_CLASSPATH=$HBASE_CLASSPATH:<hadoop_home>/etc/hadoop

Now all the configurations are complete, and we can start the server by navigating to the <hbase_home>/bin directory and executing the following:
./

Once the hbase server is up, you can navigate to its master web UI from http://hadoop.master:16010/



Supun Sethunga - Obtain a Heap/Thread Dump

Heap Dump:
jmap -dump:live,format=b,file=<filename>.hprof <PID>

Thread Dump:
jstack <PID> > <filename>

Supun Sethunga - Connecting an IBM MQ to WSO2 ESB

In this article I will discuss how to configure WSO2 ESB 4.8.1 to both read and write from/to JMS queues on IBM WebSphere MQ. The following is the scenario.


You need to have IBM MQ installed in you machine (or to access remotely). Reference [1] contains detailed information on how to install and setup IBM MQ.

Once IBM MQ is installed, create two queues: InputQueue and OutputQueue.

Configuring WSO2 ESB

Copy jta.jar and jms.jar to the repository/components/lib directory, and fscontext_1.0.0.jar to the repository/components/dropins directory. (The .jars can be found in article [1].)

Next, we need to enable the JMS transport on the ESB side. For that, add the following to the <ESB_HOME>/repository/conf/axis2/axis2.xml file.

<transportReceiver class="org.apache.axis2.transport.jms.JMSListener" name="jms">
    <parameter locked="false" name="default">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
    <parameter locked="false" name="myQueueConnectionFactory1">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
</transportReceiver>

Here, java.naming.provider.url is the location where IBM MQ's binding file is located. "transport.jms.UserName" and "transport.jms.Password" refer to the username and password of the login account of the machine on which IBM MQ is installed. Similarly, add the following JMS sender details to the same axis2.xml file.

<transportSender class="org.apache.axis2.transport.jms.JMSSender" name="jms">
    <parameter locked="false" name="default">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
    <parameter locked="false" name="ConnectionFactory1">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
</transportSender>

Deploying Proxy to Read/Write from/to JMS Queue

Save the above configurations and start the ESB server. Create a new custom proxy as follows.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="" name="JMSProducerProxy" transports="jms" startOnLoad="true" trace="disable">
   <target>
      <inSequence>
         <property name="OUT_ONLY" value="true"/>
         <property name="startTime" expression="get-property('SYSTEM_TIME')" scope="default" type="STRING"/>
         <property name="messagId" expression="//senderInfo/common:messageId" scope="axis2" type="STRING"/>
         <property name="messageType" value="application/sampleFormatReceive" scope="axis2"/>
         <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
         <property name="JMS_IBM_PutApplType" value="2" scope="transport" type="INTEGER"/>
         <property name="JMS_IBM_Encoding" value="785" scope="transport" type="INTEGER"/>
         <property name="JMS_IBM_Character_Set" value="37" scope="transport" type="INTEGER"/>
         <property name="JMS_IBM_MsgType" value="8" scope="transport" type="INTEGER"/>
         <property name="Accept-Encoding" scope="transport" action="remove"/>
         <property name="Content-Length" scope="transport" action="remove"/>
         <property name="User-Agent" scope="transport" action="remove"/>
         <property name="JMS_REDELIVERED" scope="transport" action="remove"/>
         <property name="JMS_DESTINATION" scope="transport" action="remove"/>
         <property name="JMS_TYPE" scope="transport" action="remove"/>
         <property name="JMS_REPLY_TO" scope="transport" action="remove"/>
         <property name="Content-Type" scope="transport" action="remove"/>
         <send>
            <endpoint>
               <address uri="jms:/OutputQueue?transport.jms.ConnectionFactoryJNDIName=MyQueueConnFactory&amp;java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory&amp;java.naming.provider.url=file:///home/supun/Supun/JNDIDirectory&amp;transport.jms.DestinationType=queue&amp;transport.jms.ConnectionFactoryType=queue&amp;transport.jms.Destination=OutputQueue"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">InputQueue</parameter>
</proxy>

This proxy service will read messages from the "InputQueue", and will write them out to the "OutputQueue". As you can see, here I have set some custom properties and removed some other properties from the JMS message. This is done because IBM MQ expects only certain properties; if those are not available, or if there are unexpected properties, it will throw an error.

If you need to change the priority of a message using the proxy, add the following property to the inSequence.
      <property name="JMS_PRIORITY" value="2" scope="axis2"/>
Here "value" is the priority you want to set to the message.



Supun Sethunga - Search for file(s) inside zipped folders in Linux

Simply execute the following command in the Linux Terminal.
find . -name "*.zip" -exec less {} \; | grep <file_name>

Supun Sethunga - Adding a WSSE Header to a SOAP Request

Sample SOAP Message with WSSE Header:

<soapenv:Envelope xmlns:soapenv="" xmlns:echo="">
   <soapenv:Header>
      <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="">
         <wsu:Timestamp wsu:Id="Timestamp-13" xmlns:wsu=""/>
         <wsse:UsernameToken wsu:Id="UsernameToken-14" xmlns:wsu="">
            <wsse:Username>admin</wsse:Username>
            <wsse:Password Type="">admin</wsse:Password>
         </wsse:UsernameToken>
      </wsse:Security>
   </soapenv:Header>
   <soapenv:Body/>
</soapenv:Envelope>

Supun Sethunga - Building a Random Forest Model with R

Here I will talk about how to build a Random Forest model for a classification problem with R. There are two libraries in R which we can use to train Random Forest models; here I'll be using the package "randomForest" (the other is the "caret" package). Further, we can export our built model as a PMML file, which is a standard (XML-based) way of exporting models from R. I will be using the famous iris dataset for the demonstration.


You need to have R installed. If you are using Linux, you may install RStudio as well: on Windows R comes with a GUI, but on Linux R has only a console to execute commands, so it is much more convenient to also have RStudio, which provides a GUI for R operations.

Install Necessary Packages.

We need to install some external libraries for our requirement. For that, start R and execute the following commands in the console (you can also create a new script and execute all the commands at once).
Once the libraries are installed, we need to load them into the runtime.

Prepare Data:

Next, let's import/load the dataset (the iris dataset) into R.
data <- read.csv("/home/supun/Desktop/intersects4.csv",header = TRUE, sep = ",")
Now we need to split this dataset into two proportions. One set is to train the model, and the remaining is to test/validate the model we built. Here I'll be using 70% for training and the remaining 30% for testing.
#take a partition of 70% from the data
split <- createDataPartition(data$Species, p = 0.7)[[1]]
#create train data set from 70% of data
trainset <- data[ split,]
#create test data set from remaining 30% of data
testset <- data[-split,]
Now we need to pay some special attention to the data types of our dataset. The feature Species of the iris dataset is categorical, and the remaining four features are numerical; the Species column has been encoded with numerical values. When R imports the data, since these are numerical values, it will treat all the columns as numerical data. But we need R to treat Species as categorical data, and the rest as numerical. For that, we need to convert the Species column into factors, as follows. Remember to do this only for categorical columns, not for numerical columns. Here I'm creating a new train set and a new test set using the converted data.
trainset2 <- data.frame(as.factor(trainset$Species), trainset$Sepal.Length, trainset$Sepal.Width, trainset$Petal.Length, trainset$Petal.Width)
testset2<- data.frame(as.factor(testset$Species), testset$Sepal.Length, testset$Sepal.Width, testset$Petal.Length, testset$Petal.Width)
# Rename the column headers
colnames(trainset2) <- c("Species","Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")
colnames(testset2) <- c("Species","Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")

Train Model:

Before we start training the model, we need to look at the inputs needed for the randomForest model. Here I'm going to give five input parameters, which are as follows, in order.
  • Formula - formula defining the relationship between the response variable and the predictor variables. (here y~. means variable y is the response variable and everything else are predictor variables)
  • Dataset - dataset to be used for training. This dataset should contain the variables defined in the above equation.
  • Boolean value indicating whether to calculate feature importance or not.
  • ntree - Number of trees to grow. This should not be set to too small a number, to ensure that every input row gets predicted at least a few times. That said, a large number here (say, about 100) would result in a very large output model (a few GBs), and eventually it would take a lot of time to export the model as PMML.
  • mtry - Number of variables randomly sampled as candidates at each split. Note that the default values are different for classification (sqrt(p) where p is number of variables in x) and regression (p/3)
If you need further details on randomForest, execute the following command in R, which will open the help page of the respective function.
?randomForest
Now we need to find the best mtry value for our case. For that, execute the following command, and in the resulting graph, pick the mtry value which gives the lowest OOB error.
bestmtry <- tuneRF(trainset[-1],factor(trainset$Species), ntreeTry=10, stepFactor=1.5,improve=0.1, trace=TRUE, plot=TRUE, dobest=FALSE)
According to the graph, the OOB error is minimized at mtry=2. Hence I will be using that value for the model-training step. To train the model, execute the following command. Here I'm training the Random Forest with 10 trees.
model <- randomForest(Species~.,data=trainset2, importance=TRUE, ntree=10, mtry=2)
Let's see how important each feature is to our output model.
importance(model)
This will result in the following output.
0 1 2 MeanDecreaseAccuracy MeanDecreaseGini
Sepal.Length 1.257262 1.579965 1.985794 2.694172 8.639374
Sepal.Width 1.083289 0 -1.054093 1.085028 2.917022
Petal.Length 6.455716 4.398722 4.412181 6.071185 39.194641
Petal.Width 2.213921 2.045408 3.581145 3.338613 18.181343

Evaluate Model:

Now that we have a model, we need to check how good it is. This is where our test dataset comes into play. We are going to predict the response variable "Species" using the data in the test set, and then compare the actual values with the predicted values.
prediction <- predict(model, testset2)
Let's calculate the confusion matrix, to evaluate how accurate our model is.
confMatrix <- confusionMatrix(prediction,testset2$Species)
You will get an output like the following:
Confusion Matrix and Statistics

Prediction 0 1 2
0 13 0 0
1 0 17 0
2 0 0 15

Overall Statistics

Accuracy : 1
95% CI : (0.9213, 1)
No Information Rate : 0.3778
P-Value [Acc > NIR] : < 2.2e-16

Kappa : 1
Mcnemar's Test P-Value : NA

Statistics by Class:

Class: 0 Class: 1 Class: 2
Sensitivity 1.0000 1.0000 1.0000
Specificity 1.0000 1.0000 1.0000
Pos Pred Value 1.0000 1.0000 1.0000
Neg Pred Value 1.0000 1.0000 1.0000
Prevalence 0.2889 0.3778 0.3333
Detection Rate 0.2889 0.3778 0.3333
Detection Prevalence 0.2889 0.3778 0.3333
Balanced Accuracy 1.0000 1.0000 1.0000

As you can see in the confusion matrix, all the values that are NOT on the diagonal of the matrix are zero. This is the best model we can get for a classification problem, with 100% accuracy. In most real-world scenarios it is pretty hard to get this kind of highly accurate output, but it all depends on the dataset.
Now that we know our model is highly accurate, let's export this model from R, so that it can be used in other applications. Here I'm using the PMML [2] format to export the model.
RFModel <- pmml(model);
write(toString(RFModel),file = "/home/supun/Desktop/RFModel.pmml");


Supun Sethunga - Transferring files with vfs and file-connector in WSO2 ESB

Here I will discuss how to transfer a set of files, defined in a document, to another location using a proxy in WSO2 ESB 4.8.1. Suppose the names of the files are defined in the file-names.xml file, which has the following xml structure.
<files>
    <files-set name="files-set-1">
        <file name="img1">image1.png</file>
        <file name="img2">image2.png</file>
    </files-set>
    <files-set name="files-set-2">
        <file name="img3">image6.png</file>
        <file name="img4">image4.png</file>
        <file name="img5">image5.png</file>
    </files-set>
</files>
The procedure I'm going to follow is: read the file names defined in file-names.xml using VFS, then transfer the files using the file connector. First we need to enable VFS in the proxy. Please refer to [1] on how to enable the VFS transport for a proxy. Once VFS is enabled, the proxy should look like the following.
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FileProxy" transports="vfs" statistics="disable" trace="disable" startOnLoad="true">
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.PollInterval">15</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">file:///some/path/success/</parameter>
   <parameter name="transport.vfs.FileURI">file:///some/path/in/</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">file:///some/path/failure</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
   <parameter name="transport.vfs.ContentType">application/xml</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
   <description />
</proxy>

Here "FileURI" is the directory location of file-names.xml. "MoveAfterProcess" is the directory to which file-names.xml is moved after it is read successfully. "MoveAfterFailure" is the location to which file-names.xml is moved if a failure occurs while reading it.
Now we need to extract the file names defined in file-names.xml. For that I'm going to iterate through the file names using the Iterate mediator [2].
 <iterate preservePayload="true" attachPath="//files/files-set" expression="//files/files-set/file">
    <target>
       <sequence>
          <property name="image" expression="//files/files-set/file"/>
       </sequence>
    </target>
 </iterate>
Finally, move the files using the file connector [3], as follows.
   <property name="fileName" expression="//files/files-set/file"/>
   <property name="fileLocation" value="ftp://some/path/in/"/>
   <property name="newFileLocation" value="ftp://some/path/success/"/>
Here, "fileLocation" refers to the directory in which the files we need to move are located, and "newFileLocation" refers to the directory to which the files should be moved. You can use either a local file-system location or an FTP location for both of these properties.



Supun SethungaUseful xpath expressions in WSO2 ESB

Reading part of the Message:

String Concatenating:

Reading values from Secure Vault:

Reading entities from ESB Registry:
         expression="get-property('registry', 'gov://_system/config/some/path/abc.txt')"

Base64 encoding:

Supun SethungaEnabling Mutual SSL between WSO2 ESB and Tomcat

Import Tomcat's public key to ESB's TrustStore

First we need to create a key-store for tomcat. For that execute the following:
keytool -genkey -alias tomcat -keyalg RSA -keysize 1024 -keystore tomcatKeystore.jks

Export public key certificate from tomcat's key-store:
keytool -export -alias tomcat -keystore tomcatKeystore.jks -file tomcatCert.cer
Import the above exported tomcat's public key certificate into ESB's trust-store:
keytool -import -alias tomcat -file tomcatCert.cer -keystore <ESB_HOME>/repository/resources/security/client-truststore.jks

Import ESB's public key to Tomcat's TrustStore

Export the public key certificate from ESB's key-store (the default alias in wso2carbon.jks is "wso2carbon"):
keytool -export -alias wso2carbon -keystore <ESB_HOME>/repository/resources/security/wso2carbon.jks -file wso2carbon.cer
Import the above exported ESB's public key certificate into tomcat's trust-store. (Here we create a new trust-store for tomcat.)
keytool -import -alias wso2carbon -file wso2carbon.cer -keystore tomcatTrustStore.jks

Enable SSL in ESB

In the <ESB_HOME>/repository/conf/axis2/axis2.xml file, uncomment the following parameter in the <transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener"> block.
<parameter name="SSLVerifyClient">require</parameter>

Enable SSL in Tomcat

We need to enable the HTTPS port in tomcat. By default it's commented out. Hence modify the <Tomcat_Home>/conf/server.xml as follows, and point it to the key-store and trust-store we created. (Note: for mutual SSL, clientAuth must be set to "true" so that Tomcat requests the client's certificate.)
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
           maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
           clientAuth="true" sslProtocol="TLS"
           keystoreFile="<path/to>/tomcatKeystore.jks"
           keystorePass="<keystore_password>"
           truststoreFile="<path/to>/tomcatTrustStore.jks"
           truststorePass="<truststore_password>"
           truststoreType="JKS" />

Supun SethungaSeasonal TimeSeries Modeling with Gradient Boosted Tree Regression

Seasonal time series data can be easily modeled with methods such as Seasonal ARIMA, GARCH and Holt-Winters. These are readily available in statistical packages like R and STATA. But if you want to model a seasonal time series using Java, there are only limited options available. Thus, as a solution, here I will be discussing a different approach, where the time series is modeled in Java using regression. I will be using Gradient Boosted Tree (GBT) regression from the Spark ML package.


I will use two datasets:

  • Milk-Production data [1]
  • Fancy data [2].

Milk-Production Data

Lets first have a good look at our dataset.

As we can see, the time series is not stationary, as the mean of the series increases over time. In simpler terms, it has a clear upward trend. Therefore, before we start modeling it, we need to make it stationary.

For this, I'm going to use a mechanism called "differencing". Differencing simply creates a new series from the difference of the t-th term and the (t-m)-th term of the original series. This can be denoted as follows:
   diff(m):             X'_t = X_t - X_(t-m)

We need to use the lowest m which makes the series stationary. Hence I will start with m=1. So the first difference becomes:
   diff(1):             X'_t = X_t - X_(t-1)

This results in a series with (n-1) data points. Let's plot it against time, and see if it has become stationary.

As we can see, there isn't any trend left in the series (only the repeating pattern). Therefore we can conclude that it is stationary. If diff(1) still shows some trend, then use a higher-order differencing, say diff(2). Further, after differencing, if the series has a stationary mean (no trend) but a non-stationary variance (the range of the data changes with time, e.g. dataset [2]), then we need to apply a transformation to get rid of the non-stationary variance. In such a scenario, a logarithmic (log10, ln or similar) transformation would do the job.
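The differencing step described above is simple enough to sketch in a few lines. This is an illustrative Python snippet only (the modeling itself is done in Java/Spark later in the post):

```python
def difference(series, m=1):
    """Return the order-m differenced series: X'_t = X_t - X_(t-m).
    The result has len(series) - m points."""
    return [series[t] - series[t - m] for t in range(m, len(series))]

# A toy series with a clear upward trend:
trend = [10, 13, 17, 22, 28]
print(difference(trend))        # [3, 4, 5, 6] -- the trend is removed
print(difference(trend, m=2))   # [7, 9, 11]
```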

For our case, we don't need a transformation. Hence we can train a GBT regression model on this differenced data. Before we fit a regression model, we are going to need a set of predictor/independent variables. For the moment we only have the response variable (milk production). Therefore, I introduce four more features to the dataset, which are as follows:
  • index - Instance number (Starts from 0 for both training and testing datasets)
  • time - Instance number from the start of the series (Starts from 0 for training set. Starts from the last value of training set +1, for the test set )
  • month - Encoded value representing the month of the year. (Ranges from 1 to 12). Note: this is because it is monthly data; you would need to add a "week" feature too if it were weekly data.
  • year - Encoded value representing the year. (Starts from 1 and continues for training data. Starts from the last value of year of training set +1, for the test set). 
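As a rough illustration of how these four features can be generated for monthly data, here is a Python sketch (the function and its arguments are my own illustration, not code from the post):

```python
def make_features(n_points, start_time=0, start_year=1):
    """Build the four features described above for monthly data.
    start_time/start_year are 0/1 for the training set, and continue from
    the last training values for the test set."""
    rows = []
    for i in range(n_points):
        t = start_time + i
        rows.append({
            "index": i,                  # restarts from 0 for each set
            "time": t,                   # continues across train and test
            "month": (t % 12) + 1,       # 1..12
            "year": start_year + t // 12,
        })
    return rows

train = make_features(24)                            # two years of training data
test = make_features(12, start_time=24, start_year=1)  # the following year
print(train[0])   # {'index': 0, 'time': 0, 'month': 1, 'year': 1}
print(test[0])    # {'index': 0, 'time': 24, 'month': 1, 'year': 3}
```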

Once these new features are created, let's train a model:
SparkConf sparkConf = new SparkConf();
sparkConf.set("spark.driver.allowMultipleContexts", "true");
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(javaSparkContext);

// ======================== Import data ====================================
DataFrame trainDataFrame ="com.databricks.spark.csv")
        .option("inferSchema", "true")
        .option("header", "true")
        .load(TRAINING_DATA_PATH); // assumed constant pointing to the training CSV

// Get predictor variable names
String [] predictors = trainDataFrame.columns();
predictors = ArrayUtils.removeElement(predictors, RESPONSE_VARIABLE);

VectorAssembler vectorAssembler = new VectorAssembler();
vectorAssembler.setOutputCol("features"); // assemble predictors into a single vector column

GBTRegressor gbt = new GBTRegressor().setLabelCol(RESPONSE_VARIABLE)
        .setFeaturesCol("features")
        .setMaxIter(50); // illustrative value; tune against validation data

Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] {vectorAssembler, gbt});
PipelineModel pipelineModel =;

Here I have tuned the hyper-parameters to get the best-fitting line. Next, with the trained model, let's use the testing set (which contains only the set of newly introduced variables/features), and predict the future.
DataFrame predictions = pipelineModel.transform(testDataFrame);"prediction").show(300);

Following is the prediction result:

It shows a very good fit. But the prediction results we get here are for the differenced series. We need to convert them back to the original series to get the actual prediction. Hence, let's inverse-difference the results:

X'_t = X_t - X_(t-1)
X_t = X'_t + X_(t-1)

where X'_t is the differenced value and X_(t-1) is the original value at the previous step. Here we have an issue for the very first data point in the testing series, as its t-1 value is unknown (X_0 does not exist). Therefore I made an assumption that, in the testing series, X_0 is equivalent to the last value (X_n) of the original training set. With that assumption, following is the result of our prediction:
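That inverse-differencing step, including the stated assumption that the last training value seeds the reconstruction, can be sketched as follows (illustrative Python, not the post's Java code):

```python
def inverse_difference(diffs, seed):
    """Rebuild the original scale from diff(1) predictions:
    X_t = X'_t + X_(t-1), seeded with the last value of the training series."""
    values = []
    prev = seed
    for d in diffs:
        prev = prev + d      # each reconstructed value becomes the next X_(t-1)
        values.append(prev)
    return values

# Round-trip check against a toy series:
original = [100, 104, 109, 115]
diffs = [original[t] - original[t - 1] for t in range(1, len(original))]
print(inverse_difference(diffs, seed=original[0]))  # [104, 109, 115]
```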

Fancy dataset:

I followed the same procedure for the "Fancy" dataset [2] too. If we look at the data, we can see that the series is stationary in neither mean nor variance. Therefore, unlike with the previous dataset, we need to do a transformation in addition to differencing. Here I used a log10 transformation after differencing. Similarly, when converting the predictions back to actual values, I first took the inverse log of the predicted value, and then did the inverse-differencing, in order to get the actual prediction. Following is the result:

As we can see, it fits the validation data very closely, though we may be able to increase the accuracy by further tuning the hyper-parameters.



Supun SethungaConnect to MySQL Database Remotely

By default, access to MySQL databases is bound to the server which is running MySQL itself. Hence, if we need to log in to the MySQL console, or need to use a database from a remote server, we need to enable the relevant configs.

Open /etc/mysql/my.cnf file.
    sudo vim /etc/mysql/my.cnf

Then uncomment "bind-address ="

*Note:  if you cannot find bind-address in the my.cnf file, it can be found in /etc/mysql/mysql.conf.d/mysqld.cnf file.

Restart mysql using:
   service mysql restart;

This will allow logging in to the MySQL server from a remote machine. To test this, go to the remote server, and execute the following.
   mysql -u root -p -h <ip_of_the_sever_running_mysql> 

This will open up the mysql console of the remote sql server.

But this only allows logging in to the MySQL server, whereas we want to use/access the databases from (a client app in) our remote machine. To enable access to databases, we need to grant access for each database.
    grant all on *.* to <your_username>@<ip_of_server_uses_to_connect> identified by '<your_password>'; 

If you want to allow any ip address to connect to the database, use '%' instead of  <ip_of_server_uses_to_connect>.

Supun SethungaBuilding Your First Predictive Model with WSO2 Machine Learner

WSO2 Machine Learner is a powerful tool for predictive analytics on big data. Its outstanding feature is the step-by-step wizard, which makes it easy for anyone to build even advanced models with just a few clicks. (You can refer to the excellent article at [1] for a good read on the abilities and features of WSO2 ML.)

In this article I will be addressing how to deal with a classification problem with WSO2 Machine Learner. Here I will discuss using WSO2 ML to build a simple Random Forest model for the well-known "Iris" flower dataset. My ultimate goal is to train a machine learning model to predict the flower "Type" using the rest of the features of the flowers.


  • Download and extract WSO2 Machine Learner (WSO2_ML) 1.0.0 from here.
  • Download the Iris Dataset from here. Make sure to save it in ".csv" format.

Navigate to the <WSO2_ML>/bin directory and start the server using the "./" command (or wso2server.bat on Windows). Then locate the URL of the ML web UI (not the carbon console) in the terminal log, and open the ML UI (which is https://localhost:9443/ml by default). You will be directed to the following window.

Then log in to the ML UI using the "admin" as both username and password.

Import Dataset

Once you have logged in to the ML UI, you will see the following window. Here, since we haven't uploaded any datasets or created any projects yet, it will show Datasets/Projects as "0".

First thing we need to do is to import the Iris dataset to the WSO2 ML server. For that, click on "Add Dataset", and you will get the following dataset upload form. 

Give a desired name and a description for the dataset. Since we are uploading the dataset file from our local machine, select the "Source Type" as "File". Then browse and select the file. Choose the data format as "CSV", and select "No" for the column headers, as the csv file does not contain any headers. Once everything is filled in, select "Create Dataset". You will be navigated to a new page where the newly created dataset is listed. Refresh the page and click on the dataset name to expand the view.

If the dataset got uploaded correctly, you will see the Green tick.

Explore Dataset

Now let's visually explore the dataset to see what characteristics it holds. Click on the "Explore" button, which can be found in the same tab as the dataset name. You will be navigated to the following window.

WSO2 ML provides various visualization tools such as the scatter plot, parallel sets, trellis chart and cluster diagram. I will not be discussing in detail what each of these charts can be used for, and will address that in a separate article. But for now, let's have a look at the scatter plot. Here I have plotted PL (Petal Length) against PW (Petal Width), and have colored the points by "Type". As we can see, "Type" is clearly separated into three clusters of points when plotted against PL and PW. This is good evidence that both PW and PL are important factors in deciding (predicting) the flower "Type". Hence, we should include those two features as inputs to our model. Similarly, you can try plotting different combinations of variables, and see which features/variables are important and which ones are not.

Create a Project

Now that we have imported our data, we need to create a project using the uploaded data, to start working on it. For that, click the "Create Project" button which is next to the dataset name.

Enter a desired name and a description for the project, and make sure the dataset we uploaded earlier is selected as the dataset (it is selected by default if you create the project as mentioned in the previous step). Then click on "Create Project" to complete the action, which will show you the following window.

Here you will see a yellow exclamation mark next to the project name, mentioning that no analyses are available. This is because, to start analysing our dataset, we need to create an analysis.

Create Analysis

To create an analysis, type the name of the analysis you want to create in the text box under the project, and click "Create Analysis". Then you will be directed to the following Preprocess window, which shows the overall summary of the dataset.

Here we can see the data type (categorical/numerical) and the overall distribution of each feature. For numerical features a histogram is displayed, and for categorical features a pie chart is shown. We can also decide which features should be included in our analysis ("Include") and how we should treat the missing values of each feature/variable.

As per the data-explore step we did earlier, I will be including all the features in my analysis, so all the features will be kept ticked in the "Include" column. And for simplicity, I will be using the "Discard" option for missing values. (This means that if there is a missing value somewhere, that complete row will be discarded.) Once we are done selecting the options we need, let's proceed to the next step by clicking the "Next" button in the top-right corner. We will be redirected again to the data-explore step, which is optional at this stage since we are already done with our data exploration. Therefore let's skip this step and proceed to the model building phase by clicking "Next" again.

Build Model

In this step, we can choose which algorithm we are going to use to train our model. As I stated at the beginning, I will be using "Random Forest" as the algorithm, and the flower "Type" as the variable that I want to predict (the response variable) using my model.

We can also define what proportion of the data should be used to train our model and what proportion should be used for validation/evaluation of the built model. As is common practice, let's use a 0.7 proportion of the data (70%) to train the model. Yet again, click "Next" to proceed to the next step, where we can set the hyper-parameters to be used by the algorithm to build the model.

These hyper-parameters are specific to the Random Forest algorithm. Each of the hyper-parameters represents the following:
  • Num Trees -  Number of trees in the random forest. Increasing the number of trees will decrease the variance in predictions, improving the model’s test-time accuracy. Also training time increases roughly linearly in the number of trees. This parameter value should be an integer greater than 0.
  • Max Depth - Maximum depth of the tree. This helps you to control the size of the tree to prevent overfitting. This parameter value should be an integer greater than 0.
  • Max Bins - Number of bins used when discretizing continuous features. This must be at least the maximum number of categories M for any categorical feature. Increasing this allows the algorithm to consider more split candidates and make fine-grained split decisions. However, it also increases computation and communication. This parameter value should be an integer greater than 0.
  • Impurity -  A measure of the homogeneity of the labels at the node. This parameter value should be either 'gini' or 'entropy' (without quotes).
  • Feature Subset Strategy - Number of features to use as candidates for splitting at each tree node. The number is specified as a fraction or function of the total number of features. Decreasing this number will speed up training, but can sometimes impact performance if too low. This parameter value can take values 'auto', 'all', 'sqrt', 'log2' and 'onethird' (without quotes).
  • Seed - Seed for the random number generator.

Those hyper-parameters have default values in WSO2 ML. Even though they are not optimized for the Iris dataset, they do a decent enough job for most datasets. Hence I will be using the default values, to keep it simple. Click "Next" once more to proceed to the last step of our model building phase, where we will be asked to select a dataset version. This is useful when there are multiple versions of the same dataset, but for now, since we have only one version (the default version), we can keep it as it is.

We are now done with all the processing needed to build our model. Finally, click "Run" in the top-right corner to build the model. Then you will be directed to a page where all the models built under an analysis are listed. Since we built only one model for now, there will be only one model listed. It also states the current status of the model building process. Initially it will be shown as "In Progress". After a couple of seconds, refresh the page to update the status. Then it should say "Completed" with a green tick next to it.

Evaluate Model

Our next and final step is to evaluate our model to see how well it performs. For that, click the "View" button on the model, which will take you to a page with the following results.

In this page we can see the model's overall accuracy, the confusion matrix, and a Scatter plot where data points are marked according to their classification status (Correctly classified/ Incorrectly classified). These are generated from the 30% of the data we kept aside at the beginning of our analysis. 

As we can see from the above output, our model has a 95.74% overall accuracy, which is extremely high. The confusion matrix shows a breakdown of this accuracy. In the ideal scenario, the accuracy would be 100% and all the non-diagonal cells in the confusion matrix would be zero. But that ideal case is far from real-world scenarios, and the accuracy we have got here is very much towards the higher side. Therefore, looking at the evaluation results, we can conclude that our model is good enough to predict for future data.


Optionally, you can use this model to predict for new data points from within WSO2 ML itself. This is primarily for testing purposes only. For that, navigate back to the model listing page, and click on "Predict".

Here we have two options: either upload a file and predict for all the rows in the file, or enter a set of values (which represent a single row) and get the prediction for that single instance. I will be using the latter option here. As in the above figure, select "Feature Values", enter the desired values for the features, and click on "Predict". The output value will be shown right below the Predict button. In this case, I got the predicted output as "1" for the values I entered, which means that the flower belongs to Type 1.

For more information on WSO2 Machine Learner, please refer to the official documentation found at [1].


Supun SethungaCreating a Log Dashboard with WSO2 DAS - Part I

WSO2 Data Analytics Server (DAS) can be used to do various kinds of batch data analytics and to create dashboards out of those data. In this blog, I will be discussing how you can create a simple dashboard using data read from a log file. As the first part, let's read the log file, and load and store the data inside WSO2 DAS.


  • Download WSO2 DAS 3.0.1 [1].
  • A log file. (I will be using a log dumped from WSO2 ESB)

Unzip the WSO2 DAS server, and start the server by running the <DAS_HOME>/bin/ script. Then log in to the management console (default URL: https://localhost:9443/carbon) using "admin" as both username and password.

Create Event Stream

First we need to create an event stream to which we push the data we get from the log files. To create an event stream, in the management console, navigate to Manage→Event→Streams. Then click on Add Event Stream. You will get a form like the one below.

Give a desired name, version and a description for your event stream. Under "Stream Attributes" section add payload attributes, which you wish to populate with the data coming from the logs.
For example, I want to store the time-stamp, class name, and the log message. Therefore, I create three attributes to store the above three information.

Once the attributes are added, we need to persist this stream to the database; otherwise the data coming to this stream will not be available for later use. To do that, click on the "Next[Persist Event]" icon. In the resulting form, tick Persist Event Stream and also tick all three attributes, as below.

Once done, click on "Save Event Stream".

Create Event Receiver

Our next step would be to create an event receiver to read the log file and push the data in to the event stream we just created. For that, navigate to Manage→Event→Receivers and click on Add Event Receiver .

Give a desired name to the receiver. Since we are reading a log file, select the "Input Event Adapter Type" as "file-tail" and set "false" for "Start From End" property. For "Event Stream", pick the event stream "LogEventsStream" we created earlier, and message format as "text".

Next we need to do a custom mapping for the events. In order to do so, click on Advanced. Then create three regex expressions to match the fields you want to extract from the log, and assign those three regex expressions to the three fields we defined in the event stream (see below).

Here the first regex matches the timestamp in the log file, the second regex matches the class name in the log file, and so on. Once done, click on "Add Event Receiver". Then DAS will read your file and store the fields we defined earlier in the database.

View the Data

To see the data stored by DAS, navigate to Manage→Interactive Analytics→Data Explorer in management Console. Pick "LOGEVENTSSTREAM" (the event stream we created earlier),  and click on "search". This should show a table with the information from the log file, as below.

The first phase of creating a log dashboard is done. In the next post I will discuss how you can create the dashboard out of this data.



Supun SethungaCustom Transformers for Spark Dataframes

In Spark, a transformer is used to convert a Dataframe into another. But due to the immutability of Dataframes (i.e., existing values of a Dataframe cannot be changed), if we need to transform the values in a column, we have to create a new column with those transformed values and add it to the existing Dataframe.

To create a transformer we simply need to extend the class, and write our transforming logic inside the transform() method. Below are a couple of examples:

A simple transformer

This is a simple transformer, to get the given power, of each value of any column.

public class CustomTransformer extends Transformer {
    private static final long serialVersionUID = 5545470640951989469L;
    private String column;
    private int power = 1;

    CustomTransformer(String column, int power) {
        this.column = column;
        this.power = power;
    }

    @Override
    public String uid() {
        return "CustomTransformer" + serialVersionUID;
    }

    @Override
    public Transformer copy(ParamMap arg0) {
        return null;
    }

    @Override
    public DataFrame transform(DataFrame data) {
        // Add a new column holding each value of the given column, raised to 'power'.
        return data.withColumn("power", functions.pow(data.col(this.column), this.power));
    }

    @Override
    public StructType transformSchema(StructType arg0) {
        return arg0;
    }
}

You can refer [1]  for another similar example.

UDF transformer

We can also register some custom logic as a UDF in the Spark SQL context, and then transform the Dataframe with Spark SQL, within our transformer.

Refer [2] for a sample which uses a UDF to extract part of a string in a column.



Supun SethungaAnalytics for WSO2 ESB : Architecture in a Nutshell

ESB Analytics Server is the analytics distribution for WSO2 ESB, built on top of WSO2 Data Analytics Server (DAS). Analytics for ESB includes an inbuilt dashboard for statistics and tracing visualization for proxy services, APIs, endpoints, sequences and mediators. Here I will discuss the architecture of the Analytics Server, and how it operates behind the scenes to provide this comprehensive dashboard.

The Analytics Server can operate in three modes:

  • Statistics Mode
  • Tracing Mode
  • Offline Mode

In all three modes, data are published from the ESB server to the Analytics server via the data bridge. In doing so, the ESB server uses the "publisher" component/feature, while the Analytics server uses the "receiver" component/feature of the data bridge. The ESB sends one event per message flow to the Analytics Server. Each of these events contains information about all the components that were invoked during the message flow.

If statistics are enabled for a given Proxy/API on the ESB side, then the Analytics server operates in "Statistics Mode". If tracing and capturing of Synapse properties are also enabled on the ESB side, then the Analytics server operates in "Tracing Mode". The Analytics server will switch between these modes on the fly, depending on the configurations set on the ESB side.

Statistics Mode

In this mode, the ESB server sends information regarding each mediator, for each message, to the Analytics Server. The Analytics Server calculates summary statistics out of this information, and stores only the summary statistics, not any raw data coming from the ESB. This is a hybrid solution of both Siddhi (WSO2 CEP) and Apache Spark. This mode generates statistics in real time.

Pros: Can handle much higher throughput. Statistics are available in real time.
Cons: No tracing available. Hence any message related info will not be available in the dashboard.

Tracing Mode

Similar to the Statistics Mode, the ESB server sends information regarding each mediator, for each message, to the Analytics Server, which calculates the summary statistics out of this information. But unlike the previous case, it stores both the statistics as well as component-wise data. This enables the user to trace any message using the dashboard. More importantly, this mode also allows a user to view statistics and trace messages in real time.

Pros: Statistics and Tracing info are available in real time. Message level details are also available.
Cons: Throughput is limited. Can handle up to around 7000/n events per second, where n is the number of mediators/components in the message flow of the event sent from the ESB.

Offline Tracing

This mode also allows a user to get statistics as well as tracing, similar to the previous "Tracing Mode". But it operates in an offline/batch analytics mode, unlike the previous scenario. More precisely, the Analytics Server stores all the incoming events/data from the ESB, but does not process them on the fly. Rather, a user can collect data for any period of time, and then run a predefined Spark script in order to get the statistics and tracing details.

Pros: Users can trace messages, and message-level details are available. A much higher throughput can be achieved compared to the "Tracing Mode".
Cons: No realtime processing.

Supun SethungaCheck Database size in MySQL

Log in to mysql with your username and password.
eg: mysql -u root -proot

Then execute the following command:
SELECT table_schema "DB Name", ROUND(SUM(data_length + index_length)/1024/1024, 2) "Size in MBs"
FROM information_schema.tables
GROUP BY table_schema;

Here SUM(data_length + index_length) is in bytes. Hence we divide it twice by 1024 to convert it to megabytes.
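As a quick sanity check of that conversion, here is the same arithmetic in Python (illustration only):

```python
def bytes_to_mb(n_bytes):
    """Divide twice by 1024, as in the SQL above (bytes -> KB -> MB)."""
    return round(n_bytes / 1024 / 1024, 2)

print(bytes_to_mb(5 * 1024 * 1024))  # 5.0
print(bytes_to_mb(1536000))          # 1.46
```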

Supun SethungaStacking in Machine Learning

What is stacking?

Stacking is one of the three widely used ensemble methods in machine learning. The overall idea of stacking is to train several models, usually with different algorithm types (aka base learners), on the train data, and then, rather than picking the best model, aggregate/front all the models using another model (the meta learner) to make the final prediction. The inputs for the meta learner are the prediction outputs of the base learners.

Figure 1

How to Train?

Training a stacking model is a bit tricky, but not as hard as it sounds. All it requires are steps similar to k-fold cross validation. First of all, divide the original data set into two sets: a train set and a test set. We won't even be touching the test set during the training process of the stacking model. Now we need to divide the train set into k (say 10) folds. If the original dataset contains N data points, then each fold will contain N/k data points. (It is not mandatory to have equal-size folds.)

Figure 2

Keep one of the folds aside, and train the base models, using the remaining folds. The kept-aside fold will be treated as the testing data for this step.

Figure 3

Then, predict the values for the kept-out fold (the 10th fold), using all the M models trained. This will result in M predictions for each data point in the 10th fold. Now we have N/10 prediction sets, each with M fields (the predictions coming from the M models), i.e. an (N/10) x M matrix.

Figure 4

Now, iterate the above process by changing the kept-out fold (from 1 to 10). At the end of all the iterations, we will have N prediction result sets, one corresponding to each data point in the original training set, along with the actual value of the field we predict.

Data Point # | prediction from base learner 1 | prediction from base learner 2 | prediction from base learner 3 | ... | prediction from base learner M | actual

This will be the input data set for our meta-learner. Now we can train the meta-learner, using any suitable algorithm, with each base learner's prediction as an input field and the original value as the output field.


Once all the base learners and the meta-learner are trained, prediction follows the same idea as training, except without the k folds. For a given input data point, all we need to do is pass it through the M base learners to get M predictions, and then send those M predictions through the meta-learner as inputs, as in Figure 1.
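The whole procedure can be sketched in a few lines of plain Python. This is a minimal illustration, not a production pipeline: the toy base learners and the averaging meta-learner below are invented for demonstration.

```python
def k_fold_indices(n, k):
    """Deterministically split indices 0..n-1 into k folds."""
    return [list(range(n))[i::k] for i in range(k)]

def train_stack(X, y, base_learners, train_meta, k=5):
    """Train base learners plus a meta-learner on out-of-fold predictions.

    base_learners: list of functions (X, y) -> model, where model(x) -> prediction
    train_meta:    function (meta_X, y) -> meta model, meta(predictions) -> prediction
    """
    n = len(X)
    meta_X = [[0.0] * len(base_learners) for _ in range(n)]
    for fold in k_fold_indices(n, k):
        held = set(fold)
        Xtr = [X[i] for i in range(n) if i not in held]
        ytr = [y[i] for i in range(n) if i not in held]
        for m, fit in enumerate(base_learners):
            model = fit(Xtr, ytr)          # train on the other k-1 folds
            for i in fold:                 # predict the held-out fold
                meta_X[i][m] = model(X[i])
    fitted = [fit(X, y) for fit in base_learners]  # refit on all training data
    meta = train_meta(meta_X, y)
    return fitted, meta

def predict_stack(x, fitted, meta):
    return meta([model(x) for model in fitted])

# Two toy base learners: one predicts the training mean, one doubles the input.
mean_learner = lambda X, y: (lambda x, c=sum(y) / len(y): c)
double_learner = lambda X, y: (lambda x: 2 * x[0])
# Toy meta-learner: simply averages the base predictions.
avg_meta = lambda meta_X, y: (lambda preds: sum(preds) / len(preds))

X = [[i] for i in range(10)]
y = [2 * i for i in range(10)]
fitted, meta = train_stack(X, y, [mean_learner, double_learner], avg_meta, k=5)
print(predict_stack([3], fitted, meta))  # (9.0 + 6) / 2 = 7.5
```

A real deployment would plug proper learners (trees, SVMs, logistic regression, etc.) in place of the toy closures, but the out-of-fold bookkeeping stays the same.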

Supun SethungaConnect iPython/Jupyter Notebook to pyspark


  • Install jupyter
  • Download and uncompress spark 1.6.2 binary.
  • Download pyrolite-4.13.jar

Set Environment Variables

open ~/.bashrc and add the following entries: 
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
export PYSPARK_PYTHON=/home/supun/Supun/Softwares/anaconda3/bin/python
export SPARK_HOME="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6"
export PATH="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6/bin:$PATH"
export SPARK_CLASSPATH=/home/supun/Downloads/pyrolite-4.13.jar

If you are using external third-party libraries such as spark-csv, then add each jar's absolute path to the Spark classpath, separated by colons (:) as below.
export SPARK_CLASSPATH=<path/to/third/party/jar1>:<path/to/third/party/jar2>:..:<path/to/third/party/jarN>

To make the changes take effect, run:
source ~/.bashrc 

Get Started

Create a new directory, to be used as the python workspace (say "python-workspace"). This directory will be used to store the scripts we create in the notebook. Navigate to that created directory, and run the following to start the notebook:

pyspark

Here spark will start in local mode. You can check the Spark UI at http://localhost:5001

If you need to connect to a remote spark cluster, then specify the master URL of the remote spark cluster as below, when starting the notebook.
pyspark --master spark://<master-host>:<port>

Finally navigate to http://localhost:8888/ to access the notebook.

Use Spark within python

To do spark operations with python, we are going to need the Spark Context and SQLContext. When we start jupyter with pyspark, it will create a spark context by default. This can be accessed using the object 'sc'.
We can also create our own spark context, with any additional configurations as well. But to create a new one, we need to stop the existing spark context first.
from pyspark import SparkContext, SparkConf, SQLContext

# Set any additional properties.
sparkConf = (SparkConf().set(key="spark.driver.allowMultipleContexts", value="true"))

# Stop the default SparkContext created by pyspark, and create a new
# SparkContext using the above SparkConf.
sc.stop()
sparkCtx = SparkContext(conf=sparkConf)

# Check the Spark master.
print(sparkCtx.master)

# Create a SQL context.
sqlCtx = SQLContext(sparkCtx)

df = sqlCtx.sql("SELECT * FROM table1")

'df' is a spark dataframe. Now you can do any spark operation on top of that dataframe. You can also use the spark-mllib and spark-ml packages to build machine learning models.

Imesh Gunaratne"Containerizing WSO2 Middleware" in Container Mind


Deploying WSO2 Middleware on Containers in Production

A heavier car may need more fuel to reach higher speeds than a car of the same spec with less weight [1]. Sports car manufacturers always adhere to this concept and use lightweight materials such as aluminum [2] and carbon fiber [3] to improve fuel efficiency. The same theory applies to software systems. The heavier the software components, the more computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications. This operating system instance needs additional memory, disk and processing power on top of the computation power needed by the applications. Linux containers solved this problem by reducing the weight of the isolated unit of execution: the host operating system kernel is shared among hundreds of containers. The following diagram illustrates a sample scenario of how many resources containers can save compared to virtual machines:

Figure 1: A resource usage comparison between a VM and a container

The Containerization Process

The process of deploying WSO2 middleware on a container is quite straightforward; it's a matter of following a few steps:

  1. Download a Linux OS container image.
  2. Create a new container image from the above OS image by adding Oracle JDK, product distribution and configurations required.
  3. Export JAVA_HOME environment variable.
  4. Make the entry point invoke the wso2server.sh bash script found in the $CARBON_HOME/bin folder.

That’s it! Start a container from the above container image on a container host with a set of host port mappings, and you will have a WSO2 server running on a container in no time. Use the container host IP address and the host ports for accessing the services. If HA is needed, create a few more container instances following the same approach on multiple container hosts and front them with a load balancer. If clustering is required, either use the Well Known Address (WKA) membership scheme [4], with a few limitations, or implement your own membership scheme for automatically discovering the WSO2 server cluster, similar to [5]. The main problem with the WKA membership scheme is that if all well-known member containers get terminated, the entire WSO2 server cluster may fail. While this is a major drawback for high-intensity, medium and large scale deployments, it works well for low-intensity, small scale deployments which are not mission critical. If containers go down for some reason, either a human or an automated process can bring them back to the proper state. This has been the practice for many years with VMs.

Nevertheless, if the requirement is the other way around and a high-intensity, completely automated, large scale deployment is needed, a container cluster manager and a configuration manager (CM) provide additional advantages. Let's see how they can improve the overall productivity of the project and the final outcome:

Configuration Management

Ideally software systems that run on containers need to have two types of configuration parameters according to twelve-factor app [6] methodology:

  1. Product specific and global configurations
  2. Environment specific configurations

The product-specific configurations can be burned into the container image itself, and environment-specific values can be passed via environment variables. That way, a single container image can be used in all environments. Currently, however, Carbon 4 based WSO2 products can only be configured via configuration files and Java system properties; they cannot be configured via environment variables. Nevertheless, if anyone is willing to put some effort into this, an init script can be written to read environment variables at container startup and update the required configuration files. Carbon 4 based WSO2 middleware generally has a considerable number of config files and parameters, so this might be a tedious task. According to the current design discussions, in Carbon 5 there will be only one config file per product, and environment variables will be supported OOB.
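Such an init script could look like the following sketch. All paths, placeholder tokens and environment variable names here are hypothetical; Carbon 4 products do not ship anything like this out of the box.

```shell
#!/bin/bash
# Hypothetical container entry point: substitute environment-specific
# values into a config file at startup, then hand off to the server.
set -e

CARBON_HOME="${CARBON_HOME:-/opt/wso2carbon}"      # assumed install path
CARBON_XML="$CARBON_HOME/repository/conf/carbon.xml"

apply_config() {
    # Replace placeholder tokens baked into the image with runtime values.
    sed -i "s|_HOSTNAME_|${WSO2_HOSTNAME:-localhost}|g" "$1"
    sed -i "s|_PORT_OFFSET_|${WSO2_PORT_OFFSET:-0}|g" "$1"
}

if [ -f "$CARBON_XML" ]; then
    apply_config "$CARBON_XML"
    exec "$CARBON_HOME/bin/wso2server.sh"
fi
```

The image build would bake tokens such as _HOSTNAME_ into carbon.xml, and each environment would only differ in the variables passed to `docker run -e`.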

What if a CM is run in the Container similar to VMs?

Yes, technically that would work, and most people would tend to do this based on their experience with VMs. However, IMO containers are designed to work slightly differently from VMs. For example, if we compare the time it takes to apply a new configuration or a software update by running a CM inside a container versus starting a new container with the new configuration or update, the latter is extremely fast. It may take around 20 to 30 seconds to configure a WSO2 product using a CM, whereas it takes only a few milliseconds to bring up a new container. The server startup/restart time would be the same in both approaches. Since the container image build process with layered container images is efficient and fast, this works very well in most scenarios. Therefore the total config and software update propagation time is much shorter with the second approach.

Choosing a Configuration Manager

There are many configuration management systems available for applying configurations at container build time; to name a few: Ansible [7], Puppet [8], Chef [9] and Salt [10]. WSO2 has been using Puppet for many years and currently uses the same for containers. We have simplified the way we use Puppet by incorporating Hiera to separate configuration data from manifests. Most importantly, the container image build process does not use a Puppet master; instead it runs Puppet in masterless mode (puppet apply). Therefore these can be used easily even without much knowledge of Puppet.

Container Image Build Automation

WSO2 has built an automation process for building container images for WSO2 middleware, based on the standard Docker image build process and Puppet provisioning. This has simplified managing configuration data with Puppet, executing the build, defining ports and environment variables, applying product-specific runtime configurations and, finally, optimizing the container image size. WSO2 ships these artifacts [11] with each product release, so it is much more productive to use them than to create Dockerfiles on your own.

Choosing a Container Cluster Manager

Figure 2: Deployment architecture for a container cluster manager based deployment

The above figure illustrates how most software products are deployed on a container cluster manager in general. The same applies to WSO2 middleware at a high level. At the time this article was written, there were only a few container cluster managers available: Kubernetes, DC/OS, Docker Swarm, Nomad, AWS ECS, GKE and ACE. Out of these, WSO2 has used Kubernetes and DC/OS in production for many deployments.

Figure 3: Container cluster management platforms supported for WSO2 middleware

Strengths of Kubernetes

Kubernetes was born out of Google’s experience of running containers at scale for more than a decade. It covers almost all the key requirements of container cluster management in depth. Therefore it is my first preference as of today:

  • Container grouping
  • Container orchestration
  • Container to container routing
  • Load balancing
  • Auto healing
  • Horizontal autoscaling
  • Rolling updates
  • Mounting volumes
  • Distributing secrets
  • Application health checking
  • Resource monitoring and log access
  • Identity and authorization
  • Multi-tenancy

Please read this article for more detailed information on Kubernetes.

Reasons to Choose DC/OS

AFAIU, at the moment DC/OS has fewer features compared to Kubernetes. However, it is a production-grade container cluster manager which has been around for some time now. The major advantage I see with DC/OS is the custom scheduler support for Big Data and Analytics platforms; this feature is still not available in Kubernetes. Many major Big Data and Analytics platforms such as Spark, Kafka and Cassandra can be deployed on DC/OS with a single CLI command or via the UI.

The Deployment Process

Figure 4: Container deployment process on a container cluster manager

WSO2 has released artifacts required for completely automating containerized deployments on Kubernetes and DC/OS. This includes Puppet Modules [12] for configuration management, Dockerfiles [11] for building docker images, container orchestration artifacts [13], [14] for each platform and WSO2 Carbon membership schemes [5], [15] for auto discovering the clusters. Kubernetes artifacts include replication controllers for container orchestration and services for load balancing. For DC/OS we have built Marathon applications for orchestration and parameters for the Marathon load balancer for load balancing. These artifacts are used in many WSO2 production deployments.

Please refer to the following presentations for detailed information on deploying WSO2 middleware on each platform:

Documentation on WSO2 Puppet Modules, Dockerfiles, Kubernetes and Mesos Artifacts can be found at [16].


Containerizing WSO2 Middleware was originally published in Container Mind on Medium, where people are continuing the conversation by highlighting and responding to this story.



Nipuni PereraJava Collections Framework

This post gives a brief overview of the Java Collections framework. It may not cover every implementation, but it covers the most commonly used classes.

Before the Java Collections framework was introduced, arrays, Hashtables and Vectors were the standard ways of grouping and manipulating collections of objects. But these implementations used different methods and syntax for accessing members (arrays used [], Vector used elementAt(), while Hashtable used get() and put(), so you can't easily swap one for another). They were too static (changing the size and the type was hard) and had few built-in functions. Further, most of the methods in the Vector class were marked final (so you can't extend its behavior to implement a similar sort of collection), and most importantly, none of them implemented a standard interface. As programmers develop algorithms (such as sorting) to manipulate collections, they have to decide what object to pass to the algorithm: should they pass an array or a Vector, or support both?

The Java Collections framework provides a set of classes and interfaces to handle collections of objects. Most of the implementations can be found in the java.util package. The Collection interface extends the Iterable interface (java.lang.Iterable), which makes Iterable one of the root interfaces of the Java collection classes. Thus all subtypes of Collection also implement the Iterable interface. It has only one method: iterator().

There are mainly 2 groups in the framework: Collections and Maps.


We have the Collection interface at the top, which is used to pass collections around and manipulate them in the most generic way. All other interfaces, such as List and Set, extend it and define more specialized behavior.

Figure 1:Overview of Collection
List, Set, SortedSet, NavigableSet, Queue and Deque extend the Collection interface. The Collection interface just defines a set of basic behaviors (methods for adding, removing etc.) that each of these collection sub-types share.
Both Set and List extend the Collection interface. The Set interface is essentially identical to the Collection interface, except that it prohibits duplicate elements. The List interface additionally provides accessors that use an integer index into the List. For instance, get(), remove(), and set() all take an integer that refers to the indexed element in the list.


The Map interface is not derived from Collection, but provides an interface similar to the methods in java.util.Hashtable.

Figure 2:Map interface
Keys are used to put and get values. This interface comes with two commonly used concrete implementations, TreeMap and HashMap. TreeMap is a balanced tree implementation that sorts elements by key.


  • The main advantage is that it provides a consistent and regular API. This reduces the effort required to design and implement APIs. 
  • Reduces programming effort - by providing data structures and algorithms so you don't have to write them yourself. 
  • Increases performance -  by providing high performance implementations of data structures and algorithms. As various implementations of each interface are interchangeable, programs can be tuned by switching implementations.  


Two commonly used implementations are LinkedList and ArrayList. Lists allow duplicates in the collection and use indexes to access elements. When deciding between these two implementations, we should consider whether the list is volatile (grows or shrinks often) and whether access to items is random or ordered.

Following are 2 ways of initializing an ArrayList. Which method is correct?

List<String> values = new ArrayList<String>();

ArrayList<String> values = new ArrayList<String>();

Both approaches are acceptable, but the first is preferable. The main reason to use the first form is to decouple your code from a specific implementation of the interface; it allows you to switch between different implementations of the List interface with ease. E.g., a method public List getList() can return any type that implements the List interface. An interface gives you more abstraction and makes the code more flexible and resilient to change, since you can swap in different implementations.

ArrayList - a widely used collection class in Java. We can specify the initial capacity when initializing; if not, a default capacity is used. We can add and retrieve items using the add() and get() methods. Removing items, however, has to be done with care. Internally it maintains an array to store items, so removing an item at the end just removes the last element, but removing the first element is very slow (it removes the first element and then copies all the following items back to fill the gap).

LinkedList - internally maintains a doubly linked list, so each element has a reference to the previous and next elements. Retrieving an item by index is therefore slow compared to ArrayList, as it has to traverse elements from the beginning.

The rule of thumb: if you mostly add or remove items at the end of your list, ArrayList is efficient; to add or remove items anywhere else, LinkedList is efficient.

The sample below shows the List implementations in use.
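A minimal sketch along those lines (the class name and values are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class ListExample {
    public static void main(String[] args) {
        // Program to the List interface, not to a concrete class.
        List<String> names = new ArrayList<>();
        names.add("WSO2");
        names.add("ESB");
        names.add("ESB");                      // Lists allow duplicates
        System.out.println(names.get(1));      // index-based access: prints ESB

        // A LinkedList supports cheap inserts/removals away from the end.
        List<String> linked = new LinkedList<>(names);
        linked.add(0, "Apache");               // constant-time insert at the front
        System.out.println(linked);            // [Apache, WSO2, ESB, ESB]
    }
}
```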


A Set does not allow duplicates in the collection; adding a duplicate adds nothing to a Set. Sets allow mathematical operations such as intersection, union and difference.
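For instance (a small sketch with invented values; retainAll computes the intersection):

```java
import java.util.HashSet;
import java.util.Set;

class SetExample {
    public static void main(String[] args) {
        Set<String> tags = new HashSet<>();
        tags.add("alpha");
        tags.add("beta");
        boolean added = tags.add("alpha");   // duplicate: ignored
        System.out.println(added);           // false
        System.out.println(tags.size());     // 2

        // Intersection of two sets via retainAll.
        Set<String> other = new HashSet<>();
        other.add("beta");
        other.add("gamma");
        Set<String> intersection = new HashSet<>(tags);
        intersection.retainAll(other);
        System.out.println(intersection);    // [beta]
    }
}
```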


Maps store elements as key-value pairs and can store any kind of object. If you need to use custom objects as keys, you need to implement your own hashCode() (and equals()). Keys have to be unique; the built-in map.keySet() method returns a Set and thus has no duplicates. If you add different values with the same key, the older value is replaced by the new one.
If you iterate and retrieve items from a plain HashMap, it does not maintain any order, so you get an arbitrary order when iterating.

Two other commonly used implementations are LinkedHashMap and TreeMap. LinkedHashMap returns values in the order they were inserted, while TreeMap sorts entries by key.
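The difference in iteration order can be seen in a small sketch (keys and values invented):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

class MapExample {
    public static void main(String[] args) {
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("banana", 2);
        linked.put("apple", 1);
        linked.put("apple", 5);        // same key: the old value is replaced
        System.out.println(linked);    // {banana=2, apple=5} -- insertion order

        Map<String, Integer> sorted = new TreeMap<>(linked);
        System.out.println(sorted);    // {apple=5, banana=2} -- sorted by key
    }
}
```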


[1] This is a very useful set of video tutorials that I found on the Java Collections Framework.

Nipuni PereraClaim Management with WSO2 IS

WSO2 Carbon supports different claim dialects. A claim dialect can be thought of as a group of claims. A claim carries information from the underlying user store.

Claim attributes in user profile info page:

In WSO2 IS each piece of user attribute is mapped as a claim. If you visit the user profile page for a specific user (Configure --> Users and Roles --> Users --> User Profile), you can view the user profile data (see figure 1 below).

Figure 1

As you can see there are mandatory fields (eg: Profile Name), optional fields (eg: Country) and read only fields (eg: Role).
You can add a new user profile field to the above page. If you visit the Claim Management list (Configure --> Claim Management), there is a set of default claim dialects listed in WSO2 IS. Among them is the default dialect for WSO2 Carbon. You can follow the steps below to add a new field to the user profile info page:
  1. Click on the dialect. This will list a set of claim attributes.
  2. Let's say you need to add the attribute "Nick Name" to the user profile page. 
  3. Click on the attribute "Nick Name", then "Edit". There are a set of fields you can edit. Some important ones are:
    1. Supported by Default - This will add the attribute to the user profile page
    2. Required - This will make the attribute mandatory to fill when updating user profile
    3. Read-only - This will make the attribute read-only 
  4. You can try actions listed above and add any attribute listed in the dialect (or add a new claim attribute using "Add new Claim Mapping" option)
There are some more useful dialects defined in WSO2 IS.

One such dialect is the one defined for OpenID attribute exchange. Attributes defined in this dialect will be used when retrieving claims for user info requests (as I have described in my previous post on "Accessing WSO2 IS profile info with curl").

How to add a value to a claim defined in OpenID dialect?

(This mapping is currently valid for WSO2 IS 5.0.0 and will get changed in a later release)
You can follow the steps below when adding a value to a claim attribute in the OpenID dialect.
  1.  Start WSO2 IS and login.
  2. Go to the WSO2 OpenID claim dialect.
  3. Find a claim attribute that you need to add a value to. (eg: Given Name)
  4. Go to User Profile page. This will not display an entry to add Given Name attribute. 
  5. As I have described in the first section of this post, add a new claim mapping to the default dialect for WSO2 Carbon with the same name and "Mapped Attribute (s)". (E.g., add a new claim with the following details:)
    1.  Display Name : Given Name
    2.  Claim Uri : given_name
    3.  Mapped Attribute (s) : cn   ----> add the same Mapped Attribute as in your OpenID claim attribute
    4.  Supported by Default : check
    5. Required : check
  6. Now you have a new claim attribute added to the default dialect for WSO2 Carbon
  7. If you visit the user profile page of a user you can add a value to the newly added attribute. 
  8. If you retrieve user info as in "Accessing WSO2 IS profile info with curl" you can see the newly added value is retrieved in the format {<Claim Uri > : <given value>} eg: ({given_name : xxxxx})
Please note that if you still can't see the newly added value when retrieving user info, you may have to restart the server or retry after the cache invalidates (after 15 min).

This claim mapping operates as follows:
 > When you add a value to a user profile field via the UI (e.g. adding a value to "Full Name"), the value is stored against the mapping attribute ("cn") of that claim.
 > Hence, if there is any other claim attribute in the OpenID dialect that has the same mapping attribute "cn", it will also get the value added above.
 > (E.g., if "Mapping Attribute"="cn" in the claim attribute "Full Name" of the OpenID dialect, you will get the value you entered in the "Full Name" entry of the user profile.)

Nipuni PereraCreating Docker Image for WSO2 ESB

Docker is a container platform, used for running an application so that the container is isolated from others and runs safely. Docker has a straightforward CLI that allows you to do almost everything you could want to do with a container.
Most of these commands use the image id, image name, or the container id depending on the requirement. The Docker daemon always runs as the root user. Docker has a concept of "base images", which you use to build your containers. After making changes to a container started from a base image, you can save those changes and commit them.
One of Docker's most basic images is called "ubuntu" (which I have used in the sample described in this post).
A Dockerfile provides a set of instructions for Docker to run on a container.
For each line in the Dockerfile, a new image layer is produced if that line results in a change to the image. You can create your own images and push them to Docker Hub so that you can share them with others. The Docker Hub is a public registry maintained by Docker, Inc. that contains images you can download and use to build containers.

 With this blog post, I am creating a Docker image to start a WSO2 ESB server. I have created the Dockerfile below to create my image. 

FROM       ubuntu:14.04

RUN apt-get update

RUN apt-get install -y zip unzip

# File names below are assumed from the paths used in the rest of this Dockerfile.
COPY wso2esb-4.8.1.zip /opt

COPY jdk1.8.0_60.zip /opt

WORKDIR "/opt"

RUN unzip wso2esb-4.8.1.zip

RUN unzip /opt/jdk1.8.0_60.zip

ENV JAVA_HOME /opt/jdk1.8.0_60

RUN chmod +x /opt/wso2esb-4.8.1/bin/wso2server.sh

EXPOSE 9443 9763 8243 8280

CMD ["/opt/wso2esb-4.8.1/bin/wso2server.sh"]

FROM ------------>tells Docker what image to base this one on.
RUN   ------------->runs the given command (as user "root") using sh -c "your-given-command"
COPY   ------------->copies a file from the host machine into the container
WORKDIR ------ >sets the location from which subsequent commands are run
EXPOSE ---------->exposes a port to the host machine. You can expose multiple ports.
CMD -------------- >runs a command (not using sh -c). This is usually your long-running process. In this case, we are running the wso2server.sh script. 

Following are the possible errors that you may face while building the docker image with a dockerfile: 

Step 1 : FROM ubuntu:14.04

---> 1d073211c498

Step 2 : MAINTAINER Nipuni

---> Using cache

---> c368e39cc306

Step 3 : RUN unzip wso2esb-4.8.1.zip

---> Running in ade0ad7d1885

/bin/sh: 1: unzip: not found

As you can see, the build has encountered an issue in step 3: "unzip: not found". This is because we need to install all dependencies in the Dockerfile before using them. The Dockerfile creates an image based on the base image "ubuntu:14.04", which is just a plain Ubuntu image. You need to install all the required dependencies (in my case, unzip) before using them.

Step 5 : RUN unzip wso2esb-4.8.1.zip
---> Running in e8433183014c

unzip: cannot find or open wso2esb-4.8.1, wso2esb-4.8.1.zip or wso2esb-4.8.1.ZIP.

The issue is that Docker cannot find the zip file. I had placed my zip file in the same location as my Dockerfile; while building the image, you need to copy your resources into the image with the "COPY" command.

Step 9 : RUN /opt/wso2esb-4.8.1/bin/wso2server.sh

Error: JAVA_HOME is not defined correctly.

CARBON cannot execute java

We need the JAVA_HOME environment variable to be set properly when running WSO2 products. Docker supports setting environment variables with the "ENV" command. You can copy your JDK zip file in the same way as the ESB zip and set JAVA_HOME. I added the commands below to my Dockerfile.

COPY jdk1.7.0_65.zip /opt

RUN unzip jdk1.7.0_65.zip

ENV JAVA_HOME /opt/jdk1.7.0_65

 After successfully creating the Dockerfile, save it with the name "Dockerfile" in your preferred location. Add the ESB zip and the JDK zip to the same location.

You can then run the saved  dockerfile with command below:

 sudo docker build -t wso2-esb .

 As a result, you can see the commands listed in the Dockerfile running one by one, ending with the line "Successfully built <image-ID>".

 You can view the newly created image with "sudo docker images".

 You can then run your image with command "sudo docker run -t <Image-ID>". You should be able to see the logs while starting the wso2 server.

 You also can access the server logs with "sudo docker logs <container-ID>".

Nipuni PereraUsing NoSQL databases

Databases play a vital role when it comes to managing data in applications. RDBMSs (Relational Database Management Systems) are commonly used to store and manage data and transactions in application programming.
Due to the design of RDBMSs, there are some limitations when applying them to big, dynamic or unstructured data:
  • RDBMSs use tables, join operations and references/foreign keys to make connections among tables. It can be costly to handle complex operations that involve multiple tables.
  • It is hard to restructure a table (e.g. each entry/row in the table has the same set of fields; if the data structure changes, the table has to be changed).
In contrast, there are applications that process large-scale, dynamic data (e.g. geospatial data, data used in social networks). Due to the limitations above, an RDBMS may not be the ideal choice. 

What is No-SQL?

No-SQL (Not only SQL) is a non-relational database management system that has some significant differences from RDBMSs. No-SQL, as the name suggests, does not use SQL as the querying language (JavaScript is commonly used instead), and JSON is frequently used when storing records. 

No-SQL databases have some key features that make them more flexible than RDBMSs:
  1. The database, tables and fields need not be pre-defined when inserting records. If the data structure is not present, the database will create it automatically when inserting data. 
  2. Each record/entry (or row in RDBMS terms) need not have the same set of fields. We can create fields when creating the records.
  3. Nested data structures are allowed (e.g. arrays, documents)
Different types of No-SQL data:

  1. Key-Value:
    1. A simple way of storing records with a key (from which we can look up the data) and a value (a simple string or a JSON value). For example:
       Key: 1345
       Value: "{Name: Nipuni, Surname: Perera, Occupation: Software Engineer}"

  2. Graph:
    1. Used when data can be represented as interconnected nodes.     
  3. Column:
    1. Uses a flat table structure similar to an RDBMS, but stores data by column rather than by row. 

  4. Document:
    1. Stored in a format like JSON or XML.
    2. Each document can have a unique structure. (Document type is used when storing objects and support OOP)
    3. Each document usually has a specific key, which can use to retrieve the document quickly.
    4. Users can query data by the tagged elements. The result can be a String, array, object etc. (I have highlighted some of the tags in the sample document below.)
    5. A sample document that stores personal details may look like below:
      1. {
           "Name": "Nipuni",
           "Education": [
             { "secondary-education": "University of Moratuwa" },
             { "primary-education": "St. Pauls Girls School" }
           ]
         }

Possible applications for No-SQL
  1. No-SQL is commonly used in web applications that involve dynamic data. As per the data type descriptions above, No-SQL is capable of storing unstructured data, and can be a powerful candidate for handling big data. 
  2. There are many implementations of No-SQL available (e.g. CouchDB, MongoDB) that serve different types of data structures.
  3. No-SQL can be used to retrieve a full record (which may involve multiple tables when using an RDBMS). E.g., the details of a customer in a financial company may have different levels of information (personal details, transaction details, tax/income details). No-SQL can save all this data in a single entry with a nested data type (e.g. a document), and then retrieve the complete data set without any complex join operation. 
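Such a single-entry customer record can be pictured as one nested document that the application walks directly (the field names and values here are invented for illustration):

```python
# A hypothetical customer record stored as a single nested document,
# rather than rows spread across several relational tables.
customer = {
    "id": 1345,
    "personal": {"name": "Nipuni", "surname": "Perera"},
    "transactions": [
        {"date": "2016-01-10", "amount": 250.0},
        {"date": "2016-02-14", "amount": 90.5},
    ],
    "tax": {"bracket": "B"},
}

def total_spent(doc):
    """Walk the nested structure directly; no join operation needed."""
    return sum(t["amount"] for t in doc["transactions"])

print(total_spent(customer))  # 340.5
```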
The decision on which scheme to use depends on the requirements of the application. Generally, 

  1. Structured, predictable data can be handled with →  RDBMS
  2. Unstructured, big, complex and rapidly changing data can be managed with → No-SQL (but different No-SQL implementations provide different capabilities; No-SQL is just a concept for database management systems.)

No-SQL with ACID properties

Relational databases usually guarantee ACID properties. ACID provides a rule set that guarantees transactions are handled while keeping data safe. How far these properties are guaranteed depends on which No-SQL implementation you choose.

  • Atomicity - when you change a database, the change should work or fail as a whole. Atomicity is guaranteed in document-wide transactions; writes cannot be partially applied to an inserted document.
  • Consistency - the database should remain consistent. Support for this depends on your chosen No-SQL implementation. As No-SQL databases mainly support distributed systems, consistency and availability may not be compatible.
  • Isolation - if multiple transactions are processing at the same time, they shouldn't be able to see each other's mid-state. There are No-SQL implementations that support read/write locks for isolation, but this too depends on the implementation.
  • Durability - if there is a failure (hardware or software), the database needs to be able to pick itself back up. No-SQL implementations support different mechanisms (e.g. MongoDB supports journaling: with journaling, when you do an insert operation, MongoDB keeps it in memory and writes it to a journal).

Limitations of No-SQL

  1. There are different databases available that use No-SQL; you need to evaluate and find out which fits your requirements the most.
  2. Possibility of duplication of data.
  3. ACID properties may not be supported by all implementations.

I have mainly worked with RDBMS, and have a general idea about the No-SQL concept. There are significant differences between RDBMS and No-SQL database management systems. The choice depends on the requirements of the application and the No-SQL implementation to use. IMHO the decision should be taken after a proper evaluation of the requirements, and the limitations that the system can afford.

Nipuni PereraBehavior parameterization in Java 8 with lambda expressions and functional interfaces

Java 8 is packed with some new features at the language level. In this blog post I hope to give an introduction to behavior parameterization with samples using lambda expressions. I will first describe a simple scenario, give a solution with java 7 features, and then improve that solution with java 8 features.

What is behavior parameterization?

Behavior parameterization is a technique that improves the ability to handle changing requirements by allowing the caller of a method to pass custom behavior as a parameter. Put simply, you can pass a block of code as an argument to another method, which then parameterizes its behavior based on the passed code block.

Sample scenario:

Assume a scenario of a Company with a set of Employees. The management of the company needs to analyze the details of the employees to identify/categorize them into groups (eg: grouped by age, gender, position etc).

Below is a sample code for categorizing employees based on age and gender with java 7.

Solution 1 - Using java 7

There are 2 methods, filterByAge() and filterByGender(), which follow the same pattern except for one piece of logic: the condition inside the if statement. If we can parameterize the behavior inside the if block, we could use a single method to perform both filtering options. This lets a method take multiple strategies as parameters and apply them internally as required.
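The original code listing is not reproduced here; a minimal java 7 sketch of the two methods might look like the following (the Employee class, its getters, and the class name EmployeeFilters are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Assumed Employee class (the post's original listing is not shown).
class Employee {
    private final String name;
    private final int age;
    private final String gender;

    Employee(String name, int age, String gender) {
        this.name = name;
        this.age = age;
        this.gender = gender;
    }

    String getName()   { return name; }
    int getAge()       { return age; }
    String getGender() { return gender; }
}

class EmployeeFilters {
    // Both methods share the same loop; only the if condition differs.
    static List<Employee> filterByAge(List<Employee> inventory, int ageLimit) {
        List<Employee> result = new ArrayList<Employee>();
        for (Employee e : inventory) {
            if (e.getAge() < ageLimit) {          // the only logic that differs
                result.add(e);
            }
        }
        return result;
    }

    static List<Employee> filterByGender(List<Employee> inventory, String gender) {
        List<Employee> result = new ArrayList<Employee>();
        for (Employee e : inventory) {
            if (gender.equals(e.getGender())) {   // the only logic that differs
                result.add(e);
            }
        }
        return result;
    }
}
```

Everything except the if condition is duplicated between the two methods, which is exactly what behavior parameterization removes.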

Let's try to reduce the code to a single method using anonymous classes. We are still on java 7; no java 8 features are used yet.

Solution 2 - Improved with anonymous classes

Instead of maintaining two methods, we have introduced a new method, filterEmployee(), which takes 2 arguments: an employee inventory and an EmployeePredicate. EmployeePredicate is a customized interface that has a single abstract method test(), which takes an Employee object and returns a boolean. Then we have used 2 implementations of the EmployeePredicate interface as anonymous classes to pass the behavior as per the requirement.
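A sketch of solution-2, again assuming a simple Employee class (the original listing is not shown here):

```java
import java.util.ArrayList;
import java.util.List;

// Assumed Employee class for illustration.
class Employee {
    private final String name;
    private final int age;
    private final String gender;

    Employee(String name, int age, String gender) {
        this.name = name;
        this.age = age;
        this.gender = gender;
    }

    String getName()   { return name; }
    int getAge()       { return age; }
    String getGender() { return gender; }
}

// The customized interface with a single abstract method test().
interface EmployeePredicate {
    boolean test(Employee employee);
}

class EmployeeFiltering {
    // One method now covers both filtering options: the behavior
    // inside the if block is passed in as an EmployeePredicate.
    static List<Employee> filterEmployee(List<Employee> inventory, EmployeePredicate p) {
        List<Employee> result = new ArrayList<Employee>();
        for (Employee e : inventory) {
            if (p.test(e)) {
                result.add(e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Employee> inventory = new ArrayList<Employee>();
        inventory.add(new Employee("Ann", 25, "F"));
        inventory.add(new Employee("Bob", 40, "M"));

        // Behavior 1: filter by age, passed as an anonymous class.
        List<Employee> young = filterEmployee(inventory, new EmployeePredicate() {
            public boolean test(Employee employee) {
                return employee.getAge() < 30;
            }
        });

        // Behavior 2: filter by gender, same method, different strategy.
        List<Employee> male = filterEmployee(inventory, new EmployeePredicate() {
            public boolean test(Employee employee) {
                return "M".equals(employee.getGender());
            }
        });

        System.out.println(young.size() + " " + male.size()); // prints "1 1"
    }
}
```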

We have changed our program from solution-1 to solution-2 with following steps:
  1. We have reduced 2 methods to a single method and improved that method to accept a behavior. (This is an improvement)
  2. We had to introduce a new custom interface EmployeePredicate and use anonymous classes to pass the behaviour. (This is not good enough and is verbose. We need to improve this.)

Functional Interfaces

A functional interface is an interface that has only one abstract method (similar to the EmployeePredicate interface we introduced in the previous solution). A functional interface can also have default methods (another new feature introduced in java 8) as long as it includes only a single abstract method.

Sample functional interfaces:

As per our latest solution with anonymous classes, we needed to create our own interface that accepts an object of our preference and returns some output. But java 8 newly introduces some generic functional interfaces. We can reuse them to pass different behaviors without creating our own interfaces. I have listed some of them below:

  1. The java.util.function.Predicate<T> interface has one abstract method test() that takes an object of type T and returns a boolean.
    1. Eg: as per our scenario, we take an Employee object and return a boolean indicating whether the employee's age is less than 30.
  2. The java.util.function.Consumer<T> interface has one abstract method accept() that takes an object of type T and does not return anything.
    1. Eg: assume we need to print all the details of a given Employee, but not return anything. We can use the Consumer interface.
  3. The java.util.function.Function<T,R> interface has one abstract method apply() that takes an object of type T and returns an object of type R.
    1. Eg: assume we need to take an Employee object and return the employee ID as an integer.
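The three built-in interfaces above can be sketched against our scenario as follows (the Employee class here, with a getID() accessor, is an assumption for illustration; anonymous classes are used since that is where the article stands at this point):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;

// Assumed Employee class used only to illustrate the three interfaces.
class Employee {
    private final int id;
    private final String name;
    private final int age;

    Employee(int id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    int getID()      { return id; }
    String getName() { return name; }
    int getAge()     { return age; }
}

class BuiltInInterfaces {
    public static void main(String[] args) {
        Employee ann = new Employee(7, "Ann", 25);

        // Predicate<T>: Employee -> boolean (is the employee under 30?)
        Predicate<Employee> isYoung = new Predicate<Employee>() {
            public boolean test(Employee e) { return e.getAge() < 30; }
        };

        // Consumer<T>: Employee -> void (print details, return nothing)
        Consumer<Employee> printer = new Consumer<Employee>() {
            public void accept(Employee e) {
                System.out.println(e.getName() + " / " + e.getAge());
            }
        };

        // Function<T,R>: Employee -> Integer (extract the employee ID)
        Function<Employee, Integer> toId = new Function<Employee, Integer>() {
            public Integer apply(Employee e) { return e.getID(); }
        };

        System.out.println(isYoung.test(ann)); // true
        printer.accept(ann);
        System.out.println(toId.apply(ann));   // 7
    }
}
```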

What should be the functional interface that we need to use to improve our existing solution in the employee categorizing scenario?
Lets try to use Predicate functional interface and improve the solution.

Solution 3 - Using java 8 functional interfaces

So far we have changed our program from solution-2 to solution-3 with the steps below:
  1. We have removed the customized interface EmployeePredicate and used an existing Predicate functional interface from java 8 - (This is an improvement and we have reduced an interface)
  2. We still use anonymous classes. (Still verbose and not good enough.)
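A sketch of solution-3, with the custom EmployeePredicate replaced by java.util.function.Predicate (the Employee class and the class name EmployeeFilteringV3 remain assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Assumed Employee class for illustration.
class Employee {
    private final String name;
    private final int age;
    private final String gender;

    Employee(String name, int age, String gender) {
        this.name = name;
        this.age = age;
        this.gender = gender;
    }

    String getName()   { return name; }
    int getAge()       { return age; }
    String getGender() { return gender; }
}

class EmployeeFilteringV3 {
    // The signature now uses the built-in Predicate<Employee>;
    // no custom interface is needed any more.
    static List<Employee> filterEmployee(List<Employee> inventory, Predicate<Employee> p) {
        List<Employee> result = new ArrayList<Employee>();
        for (Employee e : inventory) {
            if (p.test(e)) {
                result.add(e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Employee> inventory = new ArrayList<Employee>();
        inventory.add(new Employee("Ann", 25, "F"));
        inventory.add(new Employee("Bob", 40, "M"));

        // Still an anonymous class: this is the remaining verbosity.
        List<Employee> young = filterEmployee(inventory, new Predicate<Employee>() {
            public boolean test(Employee employee) {
                return employee.getAge() < 30;
            }
        });

        System.out.println(young.get(0).getName()); // prints "Ann"
    }
}
```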

Lambda expressions:

We can use a lambda expression in any place where a functional interface is expected. Lambda expressions represent behavior, or pass code, similarly to anonymous classes. A lambda expression has a list of parameters, a body and a return type.
We can describe a lambda expression as a combination of 3 parts as below:

  1. List of parameters
  2. An arrow
  3. Method body

Consider the sample lambda expression below:

(Employee employee) -> employee.getAge() < 30

This sample shows how we can pass a behavior to the Predicate interface that we used in solution-3. Let's first analyze an anonymous class we have used.

We are implementing the Predicate functional interface, which has a single abstract method. This abstract method takes an object as a parameter and returns a boolean value. In the lambda expression we have used:

  1. (Employee employee) : parameters for the abstract method of the Predicate interface
  2. -> : the arrow separates the list of parameters from the body of the lambda
  3. employee.getAge() < 30 : the body of the abstract method of the predicate. The result of the body is a boolean value; hence the above lambda expression returns a boolean value.

Sample lambda expressions:
  1. (Employee employee) -> System.out.println("Employee name: " + employee.getName() + "\nEmployee ID: " + employee.getID())
    1. This is a possible implementation of the Consumer functional interface, whose single abstract method accepts an object and returns void.
  2. (String s) -> s.length()
    1. This is a possible implementation of the Function functional interface, whose single abstract method accepts an object and returns another object.
  3. () -> new Integer(10)
    1. This lambda expression is for a functional interface whose single abstract method takes no arguments and returns an integer.
  4. (Employee employee, Department dept) -> {
                              if (dept.getEmployeeList().contains(employee.getID())) {
                                          System.out.println("Employee : " + employee.getName());
                              }
                          }
    1. This lambda expression is for a functional interface whose single abstract method takes 2 arguments of type object and returns void.

Let's rewrite the solution using lambda expressions.
Solution - 4 (Using lambda expressions)
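A sketch of solution-4, assuming the same Employee class and filterEmployee() method as in the earlier solutions (names remain illustrative assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Assumed Employee class for illustration.
class Employee {
    private final String name;
    private final int age;
    private final String gender;

    Employee(String name, int age, String gender) {
        this.name = name;
        this.age = age;
        this.gender = gender;
    }

    String getName()   { return name; }
    int getAge()       { return age; }
    String getGender() { return gender; }
}

class EmployeeFilteringV4 {
    static List<Employee> filterEmployee(List<Employee> inventory, Predicate<Employee> p) {
        List<Employee> result = new ArrayList<Employee>();
        for (Employee e : inventory) {
            if (p.test(e)) {
                result.add(e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Employee> inventory = new ArrayList<Employee>();
        inventory.add(new Employee("Ann", 25, "F"));
        inventory.add(new Employee("Bob", 40, "M"));

        // The anonymous classes of solution-3 shrink to one-line lambdas.
        List<Employee> young = filterEmployee(inventory, employee -> employee.getAge() < 30);
        List<Employee> male  = filterEmployee(inventory, employee -> "M".equals(employee.getGender()));

        System.out.println(young.get(0).getName()); // prints "Ann"
        System.out.println(male.get(0).getName());  // prints "Bob"
    }
}
```

The behavior is still parameterized exactly as before; only the syntax for passing it has become concise.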

Solution-4 can be further improved using method references. I have not discussed method references in this blog post and will not include that solution here.

Kalpa WelivitigodaWUM — WSO2 Update Manager

WUM is a CLI tool from WSO2 which enables you to keep your WSO2 products up to date. WUM sounds like yum and yes, WUM has somewhat similar…

Himasha GurugeXSLT stylesheet template to add a namespace with namespace prefix

If you need to write an XSLT stylesheet and you need to add a namespace to a certain element with a namespace prefix, you can write a template like the one below. It will add the namespace to the <UserRequest> node.

<xsl:template match="UserRequest">
        <!--Define the namespace with namespace prefix ns0 -->
        <xsl:element name="ns0:{local-name()}" namespace="">
            <!--apply to above selected node-->
            <xsl:apply-templates select="node()|@*"/>
        </xsl:element>
</xsl:template>

If you need to add this namespace to <UserRequest> and its child elements, the template match should change as below.

<xsl:template match="UserRequest | UserRequest/*">

Himasha GurugeHandling namespaces in xpath expressions of WSO2 ESB payload mediator

You can check out the payload factory mediator of WSO2 ESB. If you need to provide an XML input that has namespaces (other than the default namespace) included, and you need to access some node of it in the <args> of the payloadFactory mediator, you can do it with xpath like this:

 <payloadFactory media-type="xml">
      <format key="conf:resources/output.xml"/>
      <arg xmlns:ns0="" expression="//ns0:UserRequest"/>
 </payloadFactory>

Rajendram KatheesEmail OTP Two Factor Authentication through Identity Server

In this post, I will explain how to use the Email OTP two-factor authenticator with WSO2 Identity Server. In this demonstration, I am using the SMTP mail transport, which is used to send the OTP code via email at the time authentication happens.

Add the authenticator configuration to the <IS_HOME>/repository/conf/identity/application-authentication.xml file under the <AuthenticatorConfigs> section.

<AuthenticatorConfig name="EmailOTP" enabled="true">
          <Parameter name="GmailClientId">gmailClientIdValue</Parameter>
          <Parameter name="GmailClientSecret">gmailClientSecretValue</Parameter>
          <Parameter name="SendgridAPIKey">sendgridAPIKeyValue</Parameter>    
          <Parameter name="EMAILOTPAuthenticationEndpointURL">emailotpauthenticationendpoint/emailotp.jsp</Parameter>
          <Parameter name="GmailRefreshToken">gmailRefreshTokenValue</Parameter>
          <Parameter name="GmailEmailEndpoint">[userId]/messages/send</Parameter>
          <Parameter name="SendgridEmailEndpoint"></Parameter>
          <Parameter name="accessTokenRequiredAPIs">Gmail</Parameter>
          <Parameter name="apiKeyHeaderRequiredAPIs">Sendgrid</Parameter>
          <Parameter name="SendgridFormData">sendgridFormDataValue</Parameter>
          <Parameter name="SendgridURLParams">sendgridURLParamsValue</Parameter>
          <Parameter name="GmailAuthTokenType">Bearer</Parameter>
          <Parameter name="GmailTokenEndpoint"></Parameter>
          <Parameter name="SendgridAuthTokenType">Bearer</Parameter>
</AuthenticatorConfig>

Configure the Service Provider and Identity Provider as we normally do for two-factor authentication. Now we will configure the EmailOTP Identity Provider for the SMTP transport.

SMTP transport sender configuration.
   Add the SMTP transport sender configuration in the <IS_HOME>/repository/conf/axis2/axis2.xml file.
  Here you need to replace {USERNAME}, {PASSWORD} and {SENDER'S_EMAIL_ID} with real values.

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
       <parameter name=""></parameter>
       <parameter name="mail.smtp.port">587</parameter>
       <parameter name="mail.smtp.starttls.enable">true</parameter>
       <parameter name="mail.smtp.auth">true</parameter>
       <parameter name="mail.smtp.user">{USERNAME}</parameter>
       <parameter name="mail.smtp.password">{PASSWORD}</parameter>
       <parameter name="mail.smtp.from">{SENDER'S_EMAIL_ID}</parameter>
</transportSender>

Comment <module ref="addressing"/> module from axis2.xml in <IS_HOME>/repository/conf/axis2.
Email Template configuration.
Add the email template in the <IS_HOME>/repository/conf/email/email-admin-config.xml file.

    <configuration type="EmailOTP">
           <subject>WSO2 IS EmailOTP Authenticator One Time Password</subject>
           <body>
       Please use this OTP {OTPCode} to go with EmailOTP authenticator.
       Best Regards,
       WSO2 Identity Server Team
           </body>
    </configuration>
When authentication happens in the second step, the code is sent to the email address saved in the email claim of the user's profile.
If the user applies the code, WSO2 IS validates it and lets the user sign in accordingly.

Rajendram KatheesSMS OTP Two Factor Authentication through Identity Server

In this post, I will explain how to use the SMS OTP multi-factor authenticator with WSO2 Identity Server. In this demonstration, I am using the Twilio SMS provider, which is used to send the OTP code via SMS at the time authentication happens.

SMS OTP Authentication Flow

The SMS OTP authenticator of WSO2 Identity Server allows users to authenticate with multi-factor authentication. It authenticates with user name and password as a first step, and then sends a one-time password to the user's mobile via SMS as a second step. WSO2 IS validates the code and lets the user sign in accordingly.

Add the authenticator configuration to the <IS_HOME>/repository/conf/identity/application-authentication.xml file under the <AuthenticatorConfigs> section.

<AuthenticatorConfig name="SMSOTP" enabled="true">
    <Parameter name="SMSOTPAuthenticationEndpointURL">smsotpauthenticationendpoint/smsotp.jsp</Parameter>
    <Parameter name="BackupCode">false</Parameter>
</AuthenticatorConfig>

Configure the Service Provider and Identity Provider as we normally do for two-factor authentication. Now we will configure the SMS OTP Identity Provider for the Twilio SMS provider.

Go to Twilio and create a Twilio account.

While registering the account, verify your mobile number and click on console home to get free credits (Account SID and Auth Token).

Twilio uses a POST method with headers, and the text message and phone number are sent as the payload. So the fields would be as follows.

SMS URL         04-01/Accounts/{AccountSID}/SMS/Messages.json
HTTP Method     POST
HTTP Headers    Authorization: Basic base64{AccountSID:AuthToken}
HTTP Payload    Body=$ctx.msg&To=$ctx.num&From={FROM_NUM}

You can go to SMS OTP Identity Provider and configure to send the SMS using Twilio SMS Provider.

Twilio SMS Provider Config

When authentication happens in the second step, the code is sent to the mobile number saved in the mobile claim of the user's profile.
If the user applies the code, WSO2 IS validates it and lets the user sign in accordingly.

Chanaka CoorayImplement a pax exam test container for your OSGi environment

Pax Exam is a widely used OSGi testing framework. It provides many features to test your OSGi components with JUnit or TestNG tests.

Dilshani SubasingheHow to enable Linux Audit Daemon in hosts where WSO2 carbon run times are deployed

Please find the post from:

Amila MaharachchiMake a fortune of WSO2 API Cloud

For those who do not know about WSO2 API Cloud:

WSO2 API Cloud is the API management solution in cloud, hosted by WSO2. In other words, this solution is WSO2's API Manager product as a service. You can try it for free after reading this post.

What you can do with it:

Of course, what you can do is manage your APIs. i.e. if you have a REST or a SOAP service which you want to expose as a properly authenticated/authorized service, you can create an API in WSO2 API Cloud and proxy the requests to your back-end service with proper authentication/authorization. There are many other features which you can read about here.

HOW and WHO can make a fortune of it:

There are many entities who can make a fortune out of API Cloud. But, in this post, I am purely concentrating on the system integrators. They undertake projects to combine multiple components to achieve some objective. These components might involve databases, services, APIs, web UIs etc.

Now, let's pay attention to publishing a managed API to expose an existing service in the above mentioned solution. We all know no one will write an API management solution from scratch when there are API management solutions readily available. If an SI decides to go ahead with WSO2 API Cloud, they

1. Can create, publish and test the APIs within hours. If their scenario is complex, it might take a day or two, provided they know what they are doing, with some help from the WSO2 Cloud team.

2. Don't need to worry about hosting the API, its availability and scalability.

3. Can subscribe to a paid plan starting from 100 USD per month. See the pricing details.

Now, let's say the SI decided to go ahead with API Cloud and subscribed to a paid plan which costs 100 USD per month. If the SI charges 10,000 USD for the solution, you can see the profit margin. You pay very little and get a great API management solution in return. If the SI does a couple of such projects, they can make a fortune of it :)

Rajjaz MohammedCustom Window extension for siddhi

The Siddhi Window Extension allows events to be collected and expired without altering the event format, based on the given input parameters, like the Window operator. In this post, we are going to look at how to write a custom window extension for Siddhi and a test case to test the function. By default the window extension archetype 2.0.0 generates the code for a length window, so let's go deep into

Rajjaz MohammedUse siddhi Try it for Experimentation

This blog post explains how to use the Siddhi Try It tool which comes with WSO2 CEP 4.2.0. Siddhi Try It is a tool for experimenting with event sequences through Siddhi Query Language (SiddhiQL) statements on a real-time basis. You can define an execution plan to store the event processing logic and input an event stream to test the Siddhi query processing functionality. Follow the

Chandana NapagodaJava - String intern() Method

The String intern() method returns a canonical representation for the given String object. When intern() is invoked on a String object, it looks up the other interned strings, and if a String object with the same content already exists in the pool, the existing reference is returned. Otherwise, a new reference is added to the pool and returned.

Example usage of String intern:

Think about a web application with a caching layer. If the cache misses, it goes to the database. When the application is running with a high level of concurrency, we should not send all the requests to the database. In such a situation we can check whether multiple calls refer to the same value by checking the String intern.


String name1 = "Value";
String name2 = "Value";
String name3 = new String("Value");
String name4 = new String("Value").intern();

if ( name1 == name2 ){
    System.out.println("name1 and name2 are same");
}
if ( name1 == name3 ){
    System.out.println("name1 and name3 are same");
}
if ( name1 == name4 ){
    System.out.println("name1 and name4 are same");
}
if ( name3 == name4 ){
    System.out.println("name3 and name4 are same");
}


Output:

name1 and name2 are same
name1 and name4 are same

You can see that name1, name2 and name4, objects have the same reference and name3 reference is different.

Lakshani GamageHow to Receive Emails to WSO2 ESB

  1. Uncomment the line below in <ESB_HOME>/repository/conf/axis2/axis2.xml to enable the email transport listener.
    <transportReceiver name="mailto" class="org.apache.axis2.transport.mail.MailTransportListener">

  3. Restart WSO2 ESB if it has already started.
  4. Log in to the Management console and add the proxy below. 
  5. Here, the proxy transport type is mailto. The mailto transport supports sending messages (email) over SMTP and receiving messages over POP3 or IMAP.
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns=""
    <log level="custom">
    <property expression="$trp:Subject" name="Subject"/>
    <parameter name="transport.PollInterval">5</parameter>
    <parameter name=""></parameter>
    <parameter name="mail.pop3.password">wso2pass</parameter>
    <parameter name="mail.pop3.user">wso2user</parameter>
    <parameter name="mail.pop3.socketFactory.port">995</parameter>
    <parameter name="transport.mail.ContentType">text/plain</parameter>
    <parameter name="mail.pop3.port">995</parameter>
    <parameter name="mail.pop3.socketFactory.fallback">false</parameter>
    <parameter name="transport.mail.Address"></parameter>
    <parameter name="transport.mail.Protocol">pop3</parameter>
    <parameter name="mail.pop3.socketFactory.class"></parameter>

  6. Send an email to the address configured above. You can see the email being received by the ESB in the logs.
Note: If you are using Gmail to receive emails, you have to allow external apps access in your Google account as mentioned here.

Sameera JayasomaKeep your WSO2 products up-to-date with WUM

We at WSO2 continuously improve our products with bug fixes, security fixes, and various other improvements. Every major release of our…

Nandika JayawardanaWhats new in Business Process Server 3.6.0

With a release of BPS 3.6.0, we have a whole set of new features added to the business process server.

User substitution capability

One of the main features of this release is the user substitution capability provided by BPS. It allows users to define a substitute for a period of absence (for example, a task owner going on vacation). When the substitution period starts, all the tasks assigned will be transferred to the substitute. Any new user tasks created against the user will be automatically assigned to the substitute as well.

See more at

JSON and XPath-based data manipulation capability.
When writing a business process, it is necessary to manipulate the data we are dealing with in various forms. These data manipulations include extracting data, concatenating, conversions etc. Often we would be dealing with either XML or JSON messages in our workflows. Hence we are introducing JSON and XML data manipulation capabilities with this release.

See more at

Instance data audit logs
With BPS 3.6.0, we are introducing the ability to search and view BPMN process instances from the BPMN explorer UI. In addition, it will show comprehensive audit information with respect to process instance data.

See more at

Enhanced BPEL process visualiser.
In addition to that, we are introducing enhanced BPEL process visualiser with BPS 3.6.0.

Human Tasks Editor
We are also introducing a WS-Human Tasks editor with Developer Studio. With this editor, you will be able to implement a human tasks package for Business Process Server with minimum effort and time.

See more at

In addition to the above main features, there are many bug fixes and security fixes included in the BPS 3.6.0 release.

Lakshani Gamage[WSO2 ESB]How to Aggregate 2 XMLs Based on a Element Value Using XSLT Mediator

The XSLT Mediator applies a specified XSLT transformation to a selected element of the current message payload.

Suppose you are getting 2 XML responses from 2 service endpoints like below.

<?xml version="1.0" encoding="UTF-8"?>
<policyList xmlns="">
<holderName>Ann Frank</holderName>
<holderName>Shane Watson</holderName>

<?xml version="1.0" encoding="UTF-8"?>
<policyList xmlns="">

The above two responses are dynamic. So the 2 XMLs should be aggregated into one XML before applying the XSLT mediator. For that you can use the PayloadFactory mediator, and get an aggregated XML like below.
<?xml version="1.0" encoding="UTF-8"?>
<policyList xmlns="">
<holderName>Ann Frank</holderName>
<holderName>Shane Watson</holderName>

Assume you want to get the response as follows.
<?xml version="1.0" encoding="UTF-8"?>
<ns:policyList xmlns:ns="">
<policy xmlns="">
<holderName>Ann Frank</holderName>
<policy xmlns="">
<holderName>Shane Watson</holderName>

You can use the XSLT mediator for the above transformation. Use the XSL file below with the XSLT mediator, after uploading it into the registry.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="" xmlns:sn="" version="1.0">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
<xsl:strip-space elements="*" />
<xsl:key name="policy-by-id" match="sn:details/sn:policy" use="sn:policyId" />
<!-- identity transform -->
<xsl:template match="@*|node()">
<xsl:apply-templates select="@*|node()" />
<xsl:template match="/sn:policyList">
<xsl:apply-templates select="sn:summary/sn:policy" />
<xsl:template match="sn:policy">
<xsl:apply-templates />
<xsl:copy-of select="key('policy-by-id', sn:policyId)/sn:claims" />
<xsl:copy-of select="key('policy-by-id', sn:policyId)/sn:startedDate" />
<xsl:copy-of select="key('policy-by-id', sn:policyId)/sn:validityPeriod" />

Define the XSLT mediator like this in your sequence:
<xslt key="gov:/xslt/merger.xslt"/>

Dimuthu De Lanerolle

Useful OSGI Commands

1. Finding which jar a package is inside:
     -  p

     - Reply :; version="0.0.0"< [24]>
          ; version="0.0.0"<jmqi_7.5.0.3_1.0.0 [82]>

2. Getting the actual jar name:
      - diag 24

Lakshani GamageHow to Send an Email From WSO2 ESB

  1. Configure the email transport in <ESB_HOME>/repository/conf/axis2/axis2.xml (there is a transportSender commented out in the default axis2.xml; you can uncomment it and change the parameter values as you want).
    <transportsender class="org.apache.axis2.transport.mail.MailTransportSender" name="mailto">
    <parameter name=""></parameter>
    <parameter name="mail.smtp.port">587</parameter>
    <parameter name="mail.smtp.starttls.enable">true</parameter>
    <parameter name="mail.smtp.auth">true</parameter>
    <parameter name="mail.smtp.user">sender</parameter>
    <parameter name="mail.smtp.password">password</parameter>
    <parameter name="mail.smtp.from"></parameter>

  3. If ESB has already started, restart the server.
  4. Log in to the Management console and add the proxy below.
    <transportsender class="org.apache.axis2.transport.mail.MailTransportSender" name="mailto">
    <parameter name=""></parameter>
    <parameter name="mail.smtp.port">587</parameter>
    <parameter name="mail.smtp.starttls.enable">true</parameter>
    <parameter name="mail.smtp.auth">true</parameter>
    <parameter name="mail.smtp.user">myemail</parameter>
    <parameter name="mail.smtp.password">password</parameter>
    <parameter name="mail.smtp.from"></parameter>

  6. Invoke the proxy, and then you can see a mail from the sender address in the recipient's inbox.
Note: If you are using Gmail to send the above mail, you have to allow external apps access in your Google account as mentioned here.

Amani SoysaiPad apps help autistic children to spread their wings

Today there is an app for everything: maths, science, history, music and many more. Can these apps be useful for children with special needs? Kids with special needs cannot use the traditional approaches to learning; kids with autism especially need extra help compared to other children. They find it a bit hard to focus on the things you teach and to process the information that is targeted. For kids like this, apps can be a life saver.

There is an app for every skill you want to impart. For example there are apps targeted at:

      • Improve Attention Span
      • Communication
      • Eye Contact
      • Social Skills
      • Motor co-ordination
      • Language and literacy
      • Time Management
      • Emotional Skills and Self Care

There are many apps in the app store that facilitate Augmentative and Alternative Communication (AAC), which can be highly useful for autistic kids who are non-verbal or partially speech impaired. These apps use text-to-speech technology, where kids can type words, or drag and drop images, to communicate their needs to others.

Here are some of the reasons why iPad apps are so attractive for kids with autism and why they simply love them.
  • Visual Learning - Children with autism tend to learn more visually or by touch. In most cases they struggle with instructions given in the traditional school setting. Therefore, the iPad provides an interactive educational environment to help them learn.
  • Structured and Consistent - Autistic children prefer devices compared to humans mostly because they are structured and consistent. The voice is constant, the pace is constant, they like things to be orderly, and the iPad waits, which is something most human educators lack.
  • Promote independent learning and give immediate feedback - Some apps built for ASD give sensory reinforcement to the child, and when giving negative feedback, they give it gently, without any loud noises to distract the child.
  • Diverse Therapeutic Resource - Technology evolves and expands rapidly, so every day you see new apps with a lot of creativity. Kids will not get bored, and the apps they use get updated from time to time.
  • Cost Effective - Compared to other therapeutic resources the iPad is inexpensive. For example, assistive technology devices such as Saltillo, Dynavox and Tobii cost much more than the iPad.
  • Socially Desirable and Portable - Many therapeutic devices can lead to bullying in school for kids with special needs. But if you give a child an iPad, it makes the child popular at school (because every other kid likes to use an iPad).
  • Apps can be easily customised - Most apps developed targeting autism can be customised: you can add your own videos, images and voice to these apps, especially the language and communication apps, so the child can learn from the things they see in their day-to-day life.

Here are some of the common iPad apps which target Communication, Social, Emotional and Linguistic Skills.
    • Communication - Tap to talk, iComm, Look2Learn, voice4u, My Talk Tools, iCommunicate, Proloquo2Go, Inner Voice.
    • Social Skills - Model Me Going Places, Ubersense, TherAd, iMove (these apps mostly do video self-modelling of social stories and give step-by-step guidance for routines such as brushing teeth, going for a haircut, etc.).
    • Linguistic Skills - First Phrases, iLanguage/ Language Builder, Rainbow Sentences, Interactive Alphabet, Touch and Write.
    • Emotional Skills - Smarty Pants.
Special Note for Educators, Teachers and Parents

Even though there is a vast number of apps available today, the iPad is not a babysitter; you should always be present when educating your child. The educational apps are there to help your child develop cognitive processes or enhance their skills in reading, writing, and spatial reasoning, or simply provide a way of communicating. However, for your child to grow, an educator should be present while the child is learning. The iPad can be a great tool and resource for educational development, but all things should come in moderation and under supervision. It's actually the quality of teaching and support that gives a positive outcome, not just the device.

Dimuthu De Lanerolle

Testing WebsphereMQ 8.0 (Windows server 2012) together with WSO2 ESB 4.8.1 message stores and message processors - Full Synapse Configuration

For configuring IBM WebsphereMQ with WSO2 ESB please click on this link:

WSO2 ESB Full Synapse Configuration
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://ws.apache.org/ns/synapse">
   <registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
      <parameter name="cachableDuration">15000</parameter>
   </registry>
   <!-- NOTE: several values (proxy target sequences, store/processor names, the mock
        endpoint URI, and a few parameter names) were lost from the original post;
        the standard Synapse equivalents are assumed below. Adjust to your setup. -->
   <proxy name="MySample2" transports="https http" startOnLoad="true">
      <target>
         <inSequence>
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2" type="STRING"/>
            <property name="OUT_ONLY" value="true" scope="default" type="STRING"/>
            <store messageStore="JMSStore2"/>
         </inSequence>
      </target>
   </proxy>
   <proxy name="MySample" transports="https http" startOnLoad="true">
      <target>
         <inSequence>
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2" type="STRING"/>
            <property name="OUT_ONLY" value="true" scope="default" type="STRING"/>
            <store messageStore="JMSStore1"/>
         </inSequence>
      </target>
   </proxy>
   <proxy name="MyMockProxy" transports="https http" startOnLoad="true">
      <target>
         <inSequence>
            <log level="custom">
               <property name="it" value="** Its Inline Sequence of MockProxy"/>
            </log>
            <payloadFactory media-type="xml">
               <format>
                  <Response xmlns=""/>
               </format>
            </payloadFactory>
            <header name="To" action="remove"/>
            <property name="RESPONSE" value="true" scope="default" type="STRING"/>
            <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
            <property name="messageType" value="application/xml" scope="axis2"/>
            <send/>
         </inSequence>
      </target>
   </proxy>
   <endpoint name="MyMockProxy">
      <address uri=""/>
   </endpoint>
   <sequence name="fault">
      <log level="full">
         <property name="MESSAGE" value="Executing default 'fault' sequence"/>
         <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
         <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
      </log>
      <drop/>
   </sequence>
   <sequence name="main">
      <log level="full"/>
      <filter source="get-property('To')" regex="http://localhost:9000.*">
         <send/>
      </filter>
      <description>The main sequence for the message mediation</description>
   </sequence>
   <messageStore name="JMSStore1"
                 class="org.apache.synapse.message.store.impl.jms.JmsStore">
      <parameter name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
      <parameter name="store.jms.password">wso2321#qa</parameter>
      <parameter name="java.naming.provider.url">file:\C:\Users\Administrator\Desktop\jndidirectory</parameter>
      <parameter name="store.jms.connection.factory">MyQueueConnectionFactory</parameter>
      <parameter name="store.jms.username">Administrator</parameter>
      <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
      <parameter name="store.jms.destination">LocalQueue1</parameter>
   </messageStore>
   <messageStore name="JMSStore2"
                 class="org.apache.synapse.message.store.impl.jms.JmsStore">
      <parameter name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
      <parameter name="store.jms.password">wso2321#qa</parameter>
      <parameter name="java.naming.provider.url">file:\C:\Users\Administrator\Desktop\jndidirectory</parameter>
      <parameter name="store.jms.connection.factory">MyQueueConnectionFactory5</parameter>
      <parameter name="store.jms.username">Administrator</parameter>
      <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
      <parameter name="store.jms.destination">LocalQueue5</parameter>
   </messageStore>
   <messageProcessor name="Processor1"
                     class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                     messageStore="JMSStore1" targetEndpoint="MyMockProxy">
      <parameter name="client.retry.interval">1000</parameter>
      <parameter name="max.delivery.attempts">5</parameter>
      <parameter name="interval">10000</parameter>
      <parameter name="is.active">true</parameter>
   </messageProcessor>
   <messageProcessor name="Processor2"
                     class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                     messageStore="JMSStore2" targetEndpoint="MyMockProxy">
      <parameter name="max.delivery.attempts">2</parameter>
      <parameter name="client.retry.interval">1000</parameter>
      <parameter name="interval">10000</parameter>
      <parameter name="is.active">true</parameter>
   </messageProcessor>
</definitions>

Important Points :

[1] Identify the message flow ...

 SoapRequest --> MySampleProxy --> MessageStore --> MessageQueue --> MessageProcessor --> MyMockProxy
   (SoapUI)          (ESB)            (ESB)        (WebsphereMQ)         (ESB)              (ESB)

[2] Remember to set additional MS / MP settings depending on your requirement.
      eg: Maximum delivery attempts , Retry period etc.
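As a rough mental model of the flow in [1], here is a minimal store-and-forward sketch (Python, illustrative only; in the real deployment the JMS message store and the ScheduledMessageForwardingProcessor do this work):

```python
from collections import deque

message_queue = deque()          # stands in for the WebsphereMQ local queue

def proxy_store(msg):
    """MySample proxy: accept the request (OUT_ONLY) and store it."""
    message_queue.append(msg)

def processor_forward(send, max_attempts=5):
    """Forwarding processor: retry each message, deactivate on repeated failure."""
    while message_queue:
        msg = message_queue[0]
        for _attempt in range(max_attempts):
            if send(msg):                 # delivery succeeded
                message_queue.popleft()
                break
        else:
            return False                  # max delivery attempts exhausted
    return True

proxy_store("<quote/>")
delivered = []
print(processor_forward(lambda m: delivered.append(m) or True))  # True
```

The key property this models is that the client gets a 202 Accepted immediately, while delivery to the back end happens asynchronously with retries.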

Thushara RanawakaHow to Create a New Artifacts Type in WSO2 Governance Registry(GReg) - Registry Extensions(RXT)

Hi everybody, today I'm going to explain how to write an RXT from scratch and the new features we have included in GReg 5.2.0 RXTs. This article will help you write an RXT from scratch and modify the out-of-the-box RXTs for your needs. You can find the full RXT I'm explaining below from here. At the end of the document, I will explain how to upload this RXT to WSO2 GReg.

For this example, I will be creating the RXT fields using an actual use case. First, let's define the header of the RXT. For that, I'm using the following parameters.

mediaType : application/vnd.dp-restservice+xml
shortName : dprestservice

<artifactType type="application/vnd.dp-restservice+xml" shortName="dprestservice" singularLabel="DP REST Service" pluralLabel="DP REST Services" hasNamespace="false" iconSet="20">

  • type - Defines the media type of the artifact. The type format should be application/vnd.[SOMENAME]+xml. SOMENAME can contain any alphanumeric character, "-" (hyphen), or "." (period).
  • shortName - Short name for the artifact
  • singularLabel - Singular label of the artifact
  • pluralLabel - Plural label of the artifact
  • hasNamespace - Defines whether the artifact has a namespace (boolean)
  • iconSet - Icon set number used for the artifact icons

We can name type (mediaType) and shortName as the most important parameters, so please keep those in mind at all times. You can't change shortName once you have defined it (saved the artifact type), but you can work around this by reapplying the whole RXT with a new shortName. You can change mediaType later on, but we do not recommend it since there is a risk of losing all the assets: if you want to change the mediaType, you have to change the mediaType of all the assets already in GReg.
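Since an invalid mediaType is painful to change later, it can help to sanity-check the format up front. A minimal sketch (Python; this helper is hypothetical, not part of GReg):

```python
import re

# application/vnd.[SOMENAME]+xml, where SOMENAME may contain
# alphanumerics, "-" (hyphen), or "." (period), per the rules above.
MEDIA_TYPE_PATTERN = re.compile(r"^application/vnd\.[A-Za-z0-9.-]+\+xml$")

def is_valid_rxt_media_type(media_type: str) -> bool:
    """Return True if the string follows the RXT media type format."""
    return MEDIA_TYPE_PATTERN.match(media_type) is not None

print(is_valid_rxt_media_type("application/vnd.dp-restservice+xml"))  # True
print(is_valid_rxt_media_type("application/dp-restservice"))          # False
```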

The next big thing is to create the storage path. This path is used to store the metadata of this RXT type. In the example, I'm using /trunk/dprestservices/@{overview_version}/@{overview_name} to store the assets created using this RXT. The @{...} syntax denotes dynamic data that users are going to input. overview is the table name that we are going to use for the name and version data of every asset created using the DP REST Service type.


For example, if we create an asset using overview_name=testdp and overview_version=1.0.0, then the storage path for that asset will be /trunk/dprestservices/1.0.0/testdp. The nameAttribute is specially used to denote the asset name.
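The placeholder substitution can be sketched like this (Python, illustrative only; GReg performs this expansion internally):

```python
import re

def expand_storage_path(template: str, fields: dict) -> str:
    """Replace @{field} placeholders in an RXT storage path."""
    return re.sub(r"@\{(\w+)\}", lambda m: fields[m.group(1)], template)

path = expand_storage_path(
    "/trunk/dprestservices/@{overview_version}/@{overview_name}",
    {"overview_name": "testdp", "overview_version": "1.0.0"},
)
print(path)  # /trunk/dprestservices/1.0.0/testdp
```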


Likewise, you can add a namespaceAttribute to denote the namespace, if and only if you use namespaces.


Defining nameAttribute and namespaceAttribute is not necessary if you're using the default RXT table values, which are overview_name and overview_namespace.

If you want to attach a lifecycle at asset creation, you have to add the below tag with the lifecycle name. For this example, I'm using ServiceLifeCycle, which is available out of the box with GReg 5.2.0.

<lifecycle>ServiceLifeCycle</lifecycle>


Now we are done with the upper section of the RXT definition. Let's start creating the asset listing page. This section is specially created to list the assets in the management console (https://localhost:9443/carbon). Creating the listing page is straightforward. If you want to display the name and version on the list page, simply add the below lines:

            <column name="Name">
                <data type="path" value="overview_name" href="@{storagePath}"/>
            </column>
            <column name="Version">
                <data type="path" value="overview_version" href="/trunk/dprestservices/@{overview_version}"/>
            </column>

There are two types of data: path and text. path is a clickable hyperlink that directs you to the point mentioned by href, while text is not clickable and will just be a label. As per the above example, if you click on the name you will be directed to the metadata file, and if you click on the version you will be directed to the collection (directory). Let's add another column called "Service Namespace" and set its type to text.

<column name="Service Namespace">
        <data type="text" value="overview_namespace"/>
</column>

Add the above tag somewhere within the list tag.

Well, now we have come halfway through, and it's time to create the form that gathers the user data. Let's start with the overview table.
overview is the table name we have used in this example to store name, version, and namespace (additional). Likewise, you can use any table name you prefer. However, if you're using something other than overview, we recommend that you create publisher and store extensions accordingly. I will be writing another article on this in the near future.
Let's create the content:
        <table name="Overview">
            <field type="text" required="true" readonly="true">
                <name label="Name">name</name>
            </field>
            <field type="text" required="true" validate="\/[A-Za-z0-9]*">
                <name label="Context">context</name>
            </field>
            <field type="text" url="true">
                <name label="URL">url</name>
            </field>
            <field type="text" required="true" default="1.0.0">
                <name label="version">version</name>
            </field>
        </table>

Let's start from the simplest explanations and move to the more complex ones later.

  • default="1.0.0"

First, let's talk about the default attribute. This attribute initializes a specific field with the given value. In this example, the version field gets the initial value of 1.0.0, which is editable at any time.

  • url="true"
This will wrap the value as a clickable link. Users can link another asset simply by storing its asset ID. Paste the bolded part of another asset's URL as the value, and users can make simple links.

Example : https://localhost:9443/publisher/assets/dprestservice/details/50631a7c-646e-4156-88af-dad46be2f428
  • <name>context</name>

The value defined in the name tag is the reference name of a specific field. The user has to omit spaces in this value, and use of camel case is preferred. In WSO2 GReg the reference name is created by concatenating the table name and the value in the name tag; therefore the reference names for the above three fields will be overview_name, overview_context, and overview_version.
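The reference-name rule can be sketched as follows (Python, illustrative only; GReg does this internally):

```python
def reference_name(table: str, field_name: str) -> str:
    """GReg builds reference names as <table>_<field>, with the table lowercased."""
    return f"{table.lower()}_{field_name}"

for f in ("name", "context", "version"):
    print(reference_name("Overview", f))
# overview_name
# overview_context
# overview_version
```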

  • validate="\/[A-Za-z0-9]*"

The validate attribute is an inline regex validation for the field; in this case it is context. Likewise, you can add any regex you want.

  • label="Name"

The label is the display name for the field, so the user can use any kind of characters and sentences here.

  • readonly="true"

readonly means that once you save the field for the first time, users are not allowed to change it.

  • required="true"

Makes the field mandatory or not.

  • name="Overview"
This denotes the table name for the set of fields inside it.
  • type

There are 6 different field types available in GReg 5.2.0. Kindly find all the types with examples below.


<field type="text" url="true">
    <name label="URL">url</name>
</field>



<field type="options">
    <name label="Transport Protocols">transportProtocols</name>
</field>


<field type="text-area">
    <name label="Document Comment">documentComment</name>
</field>


<field type="checkbox">
    <name label="BasicAuth">BasicAuth</name>
</field>



<field type="date">
    <name label="From Date">FromDate</name>
</field>


     <subheading>
         <heading>Contact Type</heading>
         <heading>Contact Name/Organization Name/Email Address</heading>
     </subheading>
     <field type="option-text" maxoccurs="unbounded">
         <name label="Contact">Contact</name>
         <values>
             <value>Technical Owner</value>
             <value>Technical Owner Email</value>
             <value>Business Owner</value>
             <value>Business Owner Email</value>
         </values>
     </field>

maxoccurs="unbounded" makes the field repeatable without limit. If you want to make a set of different field types unbounded like this, you can use this attribute on a table, as below, and achieve that task.

<table name="Doc Links" columns="3" maxoccurs="unbounded">
    <subheading>
        <heading>Document Type</heading>
    </subheading>
    <field type="options">
        <name label="Document Type">documentType</name>
    </field>
    <field type="text" url="true" validate="(https?:\/\/([-\w\.]+)+(:\d+)?(\/([\w/_\.]*(\?\S+)?)?)?)">
        <name label="URL">url</name>
    </field>
    <field type="text-area">
        <name label="Document Comment">documentComment</name>
    </field>
</table>

How to load dynamic content into a field:

<field type="options">
    <name label="WADL">wadl</name>
    <values class="org.wso2.sample.rxt.WSDLPopulator"/>
</field>


For that, please refer to this blog post.

To deploy this in GReg, please follow the steps below.

1. Login to the carbon console: https://localhost:9443/carbon/

2. Find Extensions on the left vertical bar and click it.

3. Click on add new extension.

4. First, remove the default sample by selecting all, and paste the new RXT content from here.
5. Finally, as a best practice, make sure to upload dprestservice.rxt to the <GREG_HOME>/repository/resources/rxts/ directory as well.

Please add a comment if you have any clarifications regarding this.

Thushara RanawakaGood way to monitor carbon servers - Netdata v1.2.0

Netdata is a real-time monitoring tool for Linux systems that allows us to visualize the core metrics of a system such as CPU, memory, disk speed, network traffic, applications, etc. It basically shows most of the details we can get from the Linux performance tools, and is similar to Netflix Vector. Netdata focuses on real-time visualization only: currently the data is stored only in memory, and there is a plan to store historical data in a data store in a future version. The biggest strength of Netdata is ease of use; it only takes about 3 minutes to start using it. Let me show you how easy it is to get started with it on an Ubuntu server.

      1. First, prepare your Ubuntu server by installing the required packages.

$ apt-get install zlib1g-dev uuid-dev libmnl-dev gcc make git autoconf autogen automake pkg-config

      2. Clone/download the Netdata source and go to the netdata directory.

$ git clone --depth=1
cd netdata

      3. Then install netdata.

$ ./

      4. After the installation completes, kill the running Netdata instance.

$ killall netdata

      5. Start Netdata again.

$ /usr/sbin/netdata

      6. Now open a new browser tab and enter the below URL,


If you're running this on your local machine the URL should be http://localhost:19999.

That's it! Now Netdata is up and running on your machine/server/VM.

Netdata is developed in the C language. The installation is simple and the resources used are extremely small; it actually takes less than 2% CPU.

After going through all the pros and cons, I believe this will be a good way to monitor Carbon servers remotely.

Thushara RanawakaRetrieving Associations Using WSO2 G-Reg Registry API Explained

This was a burning issue I had while implementing a client to retrieve association-related data. In this post, I will be rewriting the WSO2 official documentation for the Association registry REST API. Without further ado, let's send some requests and get some responses :).

The following terms explain the meaning of the query parameters passed with the following REST URIs.
  • path - Path of the resource (a.k.a. registry path).
  • type - Type of association. By default, Governance Registry has 8 types of association, such as usedBy, ownedBy...
  • start - Start page number.
  • size - Number of associations to be fetched.
  • target - Target resource path (a.k.a. registry path).

Please note that the { start page } and { number of records } parameters can take any value greater than or equal to 0. Page numbering begins at 1. If both of them are 0, then all the associations are retrieved.
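One plausible reading of these start/size semantics, sketched in Python (illustrative only; the server does the actual paging):

```python
def paginate(items, start: int, size: int):
    """start/size as in the API above; start=0 and size=0 retrieves everything."""
    if start == 0 and size == 0:
        return list(items)
    offset = (start - 1) * size            # pages are 1-based
    return list(items)[offset:offset + size]

assocs = [f"assoc-{i}" for i in range(1, 8)]
print(paginate(assocs, 1, 3))  # ['assoc-1', 'assoc-2', 'assoc-3']
print(paginate(assocs, 0, 0))  # all seven associations
```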

Get all the Associations on a Resource

HTTP Method: GET
Request URI: /resource/1.0.0/associations?path={ resource path }&start={ start page }&size={ number of records }
HTTP Request Header: Authorization: Basic { base64encoded(username:password) }
Description: Retrieves all the associations posted on the specific resource.
Response: HTTP 200 OK
Response Type: application/json

Sample Request and Response
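As a quick illustration, the request above can be assembled in Python (the host, registry path, and admin/admin credentials below are hypothetical; the endpoint path and header are from the table above):

```python
import base64
from urllib.parse import urlencode

BASE = "https://localhost:9443/resource/1.0.0"  # assumed G-Reg host

def basic_auth(username: str, password: str) -> dict:
    """Build the Authorization: Basic header described above."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def associations_url(path: str, start: int = 0, size: int = 0) -> str:
    """start=0 and size=0 retrieves all associations (per the note above)."""
    query = urlencode({"path": path, "start": start, "size": size})
    return f"{BASE}/associations?{query}"

print(associations_url("/_system/governance/trunk/services/mysvc", 1, 10))
print(basic_auth("admin", "admin")["Authorization"])  # Basic YWRtaW46YWRtaW4=
```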

Get Associations of Specific Type on a Given Resource

HTTP Method: GET
Request URI: /resource/1.0.0/associations?path={ resource path }&type={ association type }
HTTP Request Header: Authorization: Basic { base64encoded(username:password) }
Description: Retrieves all the associations of the specific type on the given resource.
Response: HTTP 200 OK
Response Type: application/json

Sample Request and Response

Add Associations to a Resource

  1. Using a JSON payload

HTTP Method: POST
Request URI: /resource/1.0.0/associations?path={resource path}
HTTP Request Headers:
Authorization: Basic { base64encoded(username:password) }
Content-Type: application/json
Payload: [{ "type":"<type of the association>","target":"<valid resource path>"}]
Description: Adds the array of associations passed as the payload for the source resource.
Response: HTTP 204 No Content
Response Type: application/json

    2. Using Query Params

HTTP Method: POST
Request URI: /resource/1.0.0/associations?path={resource path}&targetPath={target resource}&type={association type}
HTTP Request Headers:
Authorization: Basic { base64encoded(username:password) }
Content-Type: application/json
Response: HTTP 204 No Content
Response Type: application/json

Delete Associations on a Given Resource

HTTP Method: DELETE
Request URI: /resource/1.0.0/association?path={resource path}&targetPath={target path}&type={association type}
Description: Deletes the association between the source and target resources for the given association type.
Response: HTTP 204 No Content
Response Type: application/json

Again, this is a detailed version of the WSO2 official documentation. This concludes the post.

Thushara RanawakaHow to remotely debug WSO2 product which is in a docker container.

This post describes how to debug a WSO2 product running in a Docker container. As prerequisites, you will have to clone two WSO2 repos along the way, and you should have a good knowledge of remote debugging. Without further ado, let's start the process.

First clone and download the Puppet modules for WSO2 products.
Move to the v2.1.0 tag
Add the product pack and JDK to the following locations:
 Download and copy the JDK 1.7 (jdk-7u80-linux-x64.tar.gz) pack to the below directory.
 Download the relevant product pack and copy it to the below directory.
 Then clone the WSO2 Dockerfiles repository.
Now set the Puppet home using the below terminal commands.
export PUPPET_HOME=<puppet_modules_path>
export PUPPET_HOME=/Users/thushara/Documents/greg/wso2/puppet-modules
Update the following lines in
echo "Starting ${SERVER_NAME} with [Startup Args] ${STARTUP_ARGS}, [CARBON_HOME] ${CARBON_HOME}"
${CARBON_HOME}/bin/ start
sleep 10s
tail -500f ${CARBON_HOME}/repository/logs/wso2carbon.log

Let's install Vim in the Docker container for debugging purposes. For that, add the following lines to the Dockerfile, before the last 2 lines.


RUN apt-get install -y vim
Now update the following line to open the remote debug ports.
bash ${common_folder}/ -n ${product_name} -p 9763:9763 -p 9443:9443 -p 5005:5005 $*
That's it with the configuration; now you can build and start the Docker container to start debugging.
First, build the Docker image,
<dockerfiles_home>/wso2greg $ bash -v 5.2.0 -r puppet
Now run the docker container,
<dockerfiles_home>/wso2greg $ bash -v 5.2.0
To find the Docker container name, list all the running Docker processes using the below command,
<dockerfiles_home>/wso2greg $ docker ps
Let's log in to the Docker instance using the image name (wso2greg-default), which we got from the docker ps command.
<dockerfiles_home>/wso2greg $ docker exec -it wso2greg-default /bin/bash
Now you can start remote debugging as you are used to :)
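One caveat: mapping port 5005 only exposes it; the JVM inside the container must also be started with JDWP enabled, or the debugger will have nothing to attach to. A minimal sketch (the Carbon start scripts accept a -debug flag, and the JAVA_OPTS line uses the standard JDWP agent syntax; adjust paths to your pack):

```shell
# Option 1: start the server in debug mode via the Carbon start script, e.g.
#   <CARBON_HOME>/bin/wso2server.sh -debug 5005
# Option 2: export the standard JDWP agent options before starting the server
export JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
echo "$JAVA_OPTS"
```

With either option, attach your IDE's remote debugger to port 5005 of the container.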
Below I have added additional commands you might need. To see the list of Docker machines up and running, use the below command,
<dockerfiles_home>/wso2greg $ docker-machine ls
That's it from here. If you have any questions, please create a StackOverflow question and attach it as a comment below.

Thushara RanawakaHow to Schedule a Task Using WSO2 ESB 4.9.0

Scheduled tasks are one of the very useful hidden features that come with WSO2 ESB 4.9.0. This is a much improved and more reliable version of scheduled tasks compared to previous versions of ESB. The task scheduler also works in clustered environments such as 1 manager and 2 workers, etc. Let's start hacking WSO2 ESB.

First we have to create a sample back-end service. The back-end sample services come with a pre-configured Axis2 server, and demonstrate in-only and in-out SOAP/REST or POX messaging over HTTP/HTTPS and JMS transports using WS-Addressing, WS-Security, and WS-Reliable Messaging. They also handle binary content using MTOM and SwA.
1. Each back-end sample service can be found in a separate folder in the <ESB_HOME>/samples/axis2Server/src directory. They can be built and deployed using Ant from each service directory. You can do this by typing "ant" (without quotes) on a console from a selected sample directory. For example,
user@host:/tmp/wso2esb-2.0/samples/axis2Server/src/SimpleStockQuoteService$ ant
Buildfile: build.xml
      [jar] Building jar: /tmp/wso2esb-2.0/samples/axis2Server/repository/services/SimpleStockQuoteService.aar
Total time: 3 seconds
2. Next, start the Axis2 server. Go to the <ESB_HOME>/samples/axis2Server directory and execute either the shell script (for Linux) or axis2server.bat (for Windows). For example,
This starts the Axis2 server with the HTTP transport listener on port 9000 and HTTPS on port 9002.
3. Now add the sample sequence by following the steps below.
i. Click on Sequences under Service Bus in the left pane.
ii. Click on Add Sequence.
iii. Then click on "switch to source view" on the tab.

iv. Delete everything in that box and add the below sequence.
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="iterateSequence" xmlns="http://ws.apache.org/ns/synapse">
    <iterate attachPath="//m0:getQuote"
        expression="//m0:getQuote/m0:request" preservePayload="true"
        xmlns:m0="http://services.samples"
        xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd">
        <target>
            <sequence>
                <send>
                    <endpoint>
                        <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                    </endpoint>
                </send>
                <log level="custom">
                    <!-- the expression attributes of these log properties were lost from the
                         original post; typical response-field XPaths are assumed below -->
                    <property expression="//ax21:lastTradeTimestamp"
                        name="Stock_Quote_on" xmlns:ax21="http://services.samples/xsd"/>
                    <property expression="//ax21:name"
                        name="For_the_organization" xmlns:ax21="http://services.samples/xsd"/>
                    <property expression="//ax21:last"
                        name="Last_Value" xmlns:ax21="http://services.samples/xsd"/>
                </log>
            </sequence>
        </target>
    </iterate>
</sequence>

Note: you have to change the endpoint address if you're running SimpleStockQuoteService on some other endpoint.

v. Save & Close the view.

4. Let's add the scheduled task.
Click on Scheduled Tasks under Service Bus in the left pane. Select Add Task and fill in the fields as below.

i. Task Name - CheckQuote
ii. Task Group - synapse.simple.quartz
iii. Task Implementation - org.apache.synapse.startup.tasks.MessageInjector
iv. Set the below properties:
sequenceName - Literal - iterateSequence
injectTo - Literal - sequence
message - XML -
<m0:getQuote xmlns:m0="http://services.samples">
    <m0:request>
        <m0:symbol>IBM</m0:symbol>
    </m0:request>
</m0:getQuote>

v. Trigger type - Simple
vi. Count - 100 
Note: This means the task will run 100 times. If you want an infinite task, simply set the count to -1.
vii. Interval (in seconds) - 10

That's it! Now click on the Schedule button and the task will start executing according to the interval. In this example, the task will start in 10 seconds.
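The count/interval trigger semantics can be sketched as follows (Python, illustrative only; count=-1 models an infinite task):

```python
def fire_times(count: int, interval: int, horizon: int = 120):
    """Return the offsets (in seconds) at which a simple trigger fires.

    The first fire happens one interval after scheduling; count=-1 means
    run forever, capped here at `horizon` seconds so the sketch terminates.
    """
    times, t, fired = [], interval, 0
    while (count == -1 or fired < count) and t <= horizon:
        times.append(t)
        fired += 1
        t += interval
    return times

print(fire_times(count=3, interval=10))  # [10, 20, 30]
```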

Kindly find the scheduled task XML code below:

<task class="org.apache.synapse.startup.tasks.MessageInjector"
        group="synapse.simple.quartz" name="CheckQuote">
    <trigger count="100" interval="10"/>
    <property name="sequenceName" value="iterateSequence" xmlns:task=""/>
    <property name="message" xmlns:task="">
        <m0:getQuote xmlns:m0="http://services.samples">
            <m0:request>
                <m0:symbol>IBM</m0:symbol>
            </m0:request>
        </m0:getQuote>
    </property>
    <property name="injectTo" value="sequence" xmlns:task=""/>
</task>


Thushara RanawakaHow to Write a Simple G-Reg Lifecycle Executor to send mails when triggered using WSO2 G-Reg 5 Series.

When it comes to SOA governance, lifecycle management (aka LCM) is a very useful feature. Recently I wrote a post about lifecycle checkpoints, which is another useful feature that comes with the WSO2 G-Reg 5 series. Before starting to write an LC executor you need basic knowledge of G-Reg LCM and lifecycle syntax, and a good knowledge of Java.
In this sample, I will create a simple lifecycle that has 3 states: Development, Tested, and Production. On each state change, I will call the LC executor and send mails to the list defined in the lifecycle.

To get a basic idea about WSO2 G-Reg lifecycles, please go through the Lifecycle Sample documentation. If you have fair knowledge of LC tags and attributes, you can straight away add the first checkpoint to the LC. You can find the official G-Reg documentation for this here.

First let's start off by writing a simple lifecycle called EmailExecutorLifeCycle. In the LC, the below things are all you need to consider if you have basic knowledge of WSO2 LCs.

<state id="Development">
    <datamodel>
        <data name="transitionExecution">
            <execution forEvent="Promote" class="">
                <parameter name="email.list" value=""/>
            </execution>
        </data>
    </datamodel>
    <transition event="Promote" target="Tested"/>
</state>

Let's explain the bold syntax above.
<datamodel> : This is where we define additional data models that need to be executed on a state change.

<data name="transitionExecution"> : Within this tag we define general things that need to be executed subsequent to the state change. Likewise, we have transitionValidation, transitionPermission, transitionScripts, transitionApproval, etc.

forEvent="Promote" : This defines which event this execution happens for. In this case, it is Promote.

class="" : Class path of the executor; we will discuss this file later. For the record, this jar file needs to be added to the dropins or libs directory located in repository/components/.

name="email.list" : This is a custom parameter that we define for this sample. Just add some valid emails for the sake of testing.

<transition event="Promote" target="Tested"/> : The transition actions available for the state; there can be zero to many transitions for a state. In this case, it is the Promote event, which moves the asset to the Tested state.
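To make the flow concrete, here is a generic sketch of how a promote event runs the configured execution before the state change (Python, illustrative only; the real executor is a Java class, and the email address below is hypothetical):

```python
# Hypothetical, minimal model of an LC state machine with transition executions.
LIFECYCLE = {
    "Development": {"Promote": {"target": "Tested",
                                "params": {"email.list": "dev-team@example.com"}}},
    "Tested":      {"Promote": {"target": "Production", "params": {}}},
}

def promote(state: str, event: str, executor) -> str:
    """Run the execution for the event; move to the target state only on success."""
    transition = LIFECYCLE[state][event]
    if executor(transition["params"]):        # e.g. send mail to email.list
        return transition["target"]
    return state                              # a failed execution blocks the transition

sent = []
new_state = promote("Development", "Promote",
                    lambda params: sent.append(params.get("email.list")) or True)
print(new_state, sent)  # Tested ['dev-team@example.com']
```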

Please download the EmailExecutorLifeCycle.xml and apply it using G-Reg mgt console.

Since we are sending a mail from our executor, we need to fill in the mail admin's email settings. Please go to axis2.xml, which is located in repository/conf/axis2/. In that XML file, uncomment the mail transport sender configuration. Please find sample settings for a Gmail transport sender below.

<!-- To enable the mail transport sender, uncomment the following and change the parameters -->
<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
        <parameter name="mail.smtp.from">EMAIL_ADDRESS</parameter>
        <parameter name="mail.smtp.user">USERNAME</parameter>
        <parameter name="mail.smtp.password">PASSWORD</parameter>
        <parameter name="mail.smtp.host">smtp.gmail.com</parameter>

        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>
</transportSender>

Please replace the placeholder values with correct values.

If you have enabled 2-step verification in your Gmail account, you have to disable it first. Please follow these steps:

 1. Login to Gmail. 
 2. Go to Gmail security page.
 3. Find 2 step verification from there. open it.
 4. Select "Turn off".

Let's download the file and save it in <GREG_HOME>/repository/components/libs/. Then restart the server.

Now it's time to see the executor in action,

1. Attach the EmailExecutorLifeCycle to any metadata type RXT, such as soapservice, restservice, etc., using the mgt console.
2. Login to the publisher and from the lifecycle tab promote the LC state to next state.

3. Now check the inbox of any mail ID that you included in email.list.

You can download the source code from here.

Yashothara ShanmugarajahIntroduction to WSO2 BPS

We can use WSO2 BPS for efficient business process management; it allows easy deployment of business processes written using either the WS-BPEL standard or the BPMN 2.0 standard.

Business Process: A collection of related and structured activities or tasks that serves a business use case and produces a specific service or output. A process may have zero or more well-defined inputs and an output.

Process Initiator: The person or app that initiates the business process.

Human task: Here, human interaction is involved in the business process.

BPEL: An XML-based language used for the definition and execution of business processes, composing and orchestrating web services.

BPMN: A graphical notation for business processes.

Simple BPEL Process Modeling
  • Download BPS product.
  • Go to <BPS_HOME> -> bin through terminal and execute sh ./ command.
  • Install the plug-in with pre-packaged Eclipse - This method uses a complete plug-in installation with pre-packaged Eclipse, so that you do not have to install Eclipse separately. On the WSO2 BPS product page, click Tooling and then download the distribution according to your operating system under the Eclipse JavaEE Mars + BPS Tooling 3.6.0 section.
  • Then we need to create a Carbon Composite Application Project. These steps are clearly mentioned in

sanjeewa malalgodaLoad balance data publishing to multiple receiver groups - WSO2 API Manager /Traffic Manager

In previous articles we discussed the Traffic Manager and different deployment patterns. In this article we will further discuss the different Traffic Manager deployments we can use across data centers. Cross-data-center deployments must use the publisher group concept, as each event needs to be sent to all data centers if we need a global count across DCs.

In this scenario there are two groups of servers, referred to as Group A and Group B. You can send events to both groups. You can also carry out load balancing for both sets, as mentioned in load balancing between a set of servers. This scenario is a combination of load balancing between a set of servers and sending an event to several receivers.
An event is sent to both Group A and Group B. Within Group A, it will be sent either to Traffic Manager 01 or Traffic Manager 02. Similarly, within Group B, it will be sent either to Traffic Manager 03 or Traffic Manager 04. In the setup, you can have any number of groups and any number of Traffic Managers within a group, as required, by mentioning them accurately in the server URL. For this scenario it is mandatory to publish events to each group, but within a group we can do it in two different ways.

  1. Publishing to multiple receiver groups with load balancing within group
  2. Publishing to multiple receiver groups with failover within group

Now let's discuss both of these options in detail. This pattern is the recommended approach for multi-data-center deployments when we need unique counters across data centers. Each group resides within a data center, and within each data center two Traffic Manager nodes are deployed to handle high-availability scenarios.

Publishing to multiple receiver groups with load balancing within group

As you can see in the diagram below, the data publisher pushes events to both groups. But since we have multiple nodes within each group, it sends each event to only one node at a given time, in round-robin fashion. That means within Group A the first request goes to Traffic Manager-01, the next goes to Traffic Manager-02, and so on. If Traffic Manager-01 is unavailable, all traffic goes to Traffic Manager-02, which addresses failover scenarios.
[Figure: Traffic Manager deployment - load-balancing publisher, failover receiver]

Similar to the other scenarios, you can describe this as a receiver URL. The groups should be mentioned within curly braces, separated by commas. Furthermore, each receiver belonging to a group should be within the curly braces, with the receiver URLs in a comma-separated format. The receiver URL format is given below.

          Binary    {tcp://,tcp://},{tcp://,tcp://}
          Secure    {ssl://,ssl://},{ssl://,ssl://}
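To make the format concrete, here is a hypothetical load-balanced receiver URL. The host names are invented for illustration, and 9611/9711 are the usual WSO2 binary and SSL data-receiver ports — verify both against your own setup:

```
{tcp://tm1-dc-a:9611,tcp://tm2-dc-a:9611},{tcp://tm1-dc-b:9611,tcp://tm2-dc-b:9611}
{ssl://tm1-dc-a:9711,ssl://tm2-dc-a:9711},{ssl://tm1-dc-b:9711,ssl://tm2-dc-b:9711}
```

Here the comma within each brace group load-balances between the two nodes, while the comma between the brace groups duplicates every event to both groups.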

Publishing to multiple receiver groups with failover within group

As you can see in the diagram below, the data publisher pushes events to both groups. But since we have multiple nodes within each group, it sends events to only one node at a given time. If that node goes down, the event publisher sends events to the other node within the same group. This model guarantees message publishing to each server group.

[Figure: Traffic Manager deployment - load-balancing publisher, failover receiver]
According to the following configuration, the data publisher sends events to both Group A and Group B. Within Group A, events go to either Traffic Manager-01 or Traffic Manager-02. If events go to Traffic Manager-01, they continue to go to that node until it becomes unavailable; once it is unavailable, events go to Traffic Manager-02.

          Binary    {tcp://|tcp://},{tcp://|tcp://}
          Secure    {ssl://|ssl://},{ssl://|ssl://}
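As a concrete illustration with invented host names (ports as in the previous example), a failover-within-group receiver URL separates the members of each group with | instead of a comma:

```
{tcp://tm1-dc-a:9611|tcp://tm2-dc-a:9611},{tcp://tm1-dc-b:9611|tcp://tm2-dc-b:9611}
```

Every event still goes to both brace groups, but within a group the second node receives events only while the first is unavailable.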

Thushara RanawakaSOA Governance with WSO2 Governance Registry(G-Reg)

The WSO2 Governance Registry (G-Reg) has gone through a major transformation, from a good product to a best-in-class SOA governance platform. Starting from the G-Reg 5 series, WSO2 Governance Registry has shown great improvement compared to its previous versions. G-Reg now comes with multiple views for different roles, i.e. publishers, consumers/subscribers (aka store users) and administrators. This is a significant change from the previous versions, which included just one view, the good old Carbon console, for all users. Before looking at G-Reg, let's first understand what an SOA registry is and its purpose.
A service-oriented architecture registry (SOA registry) is a resource that sets access rights for data that is necessary for service-oriented architecture projects. An SOA registry allows service providers to discover and communicate with consumers efficiently, creating a link between service providers and service consumers. This is where the registry comes into the picture to facilitate SOA governance. The registry can act as a central database that includes artifacts for all services planned for development, in use, and retired. Essentially, it's a catalog of services searchable by service consumers and providers. The WSO2 Governance Registry is more than just an SOA registry because, in addition to providing end-to-end SOA governance, it can also store and manage any kind of enterprise asset, including but not limited to services, APIs, policies, projects, applications and people.
Let's talk about some features WSO2 G-Reg offers.
  • Seamless integration with the WSO2 product stack, which contains the award-winning ESB, APIM, DAS, IS and much more.
  • Various integration techniques with other 3rd party applications or products.
  • Ability to store different types of resources out of the box (content type and metadata type)
 - Content type: WSDL, WADL, Schema, Policy and Swagger
 - Metadata type: REST Service, SOAP Service.
  • Add resources from the file system and from URL.
  • CRUD operations using UI for resources and capable of doing all the governance-related tasks.

  • Strong Taxonomy and filtering search capabilities.
  • Complex Searchability and Quick Search using Solr-based search engine.
  • Reuse stored searches using search history.
  • Great social features such as review, rate and much more...

  • XML-based graphical Lifecycle designers.
  • D3 Based Lifecycle Management UI.
  • Strong life-cycle management(LCM) capabilities.
  • Ability to automate actions using LCM executors and G-Reg Handlers.
  • Strong role-based visualizations.
  • Visualize dependencies and Associations.

  • UI or API capability for subscriptions and notifications.
  • Lifecycle subscriptions

  • Set of strong REST APIs that can do all the CRUD operations including governing an asset.
  • Use as an embedded repository.
  • Use as a service repository with UDDI.

  • Configure static and custom reports using admin console.
  • Ability to produce static snapshot reports(search) using admin console.
  • Report Generation Scheduling.

If you want to know more about G-Reg and its capabilities, we recommend downloading and running G-Reg on your local machine. It will take just five minutes. Please make sure to follow the getting started guide, which walks you through all the major features that G-Reg offers out of the box.

Lakshani Gamage[WSO2 App Manager] How to Add a Custom Radio Button Field to a Webapp

In WSO2 App Manager, when you create a new web app, you have to fill in a set of predefined fields. If you want to add any custom fields to an app, you can easily do so.

Suppose you want to add a custom radio button field to the web app create page. Say the custom radio button field's name is "App Category".

First, let's see how to add a custom field to the UI (Jaggery APIs).
  1. Modify <APPM_HOME>/repository/resources/rxt/webapp.rxt.
    <field type="text" required="true">
    <name>App Category</name>
    </field>

    Note: If you don't want to add the custom field as mandatory, the required="true" part is not necessary.
  3. Log in to the Management Console, navigate to Home > Extensions > Configure > Artifact Types and delete "webapp.rxt".
  4. Add the below code snippet to the required place in <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs
    <div class="form-group">
    <label class="control-label col-sm-2">App Category: </label>
    <div class="col-sm-10">
    <div class="radio">
    <input type="radio" data-value="free" class="appCategoryRadio" name="appCategoryRadio"> Free
    </div>
    <div class="radio">
    <input type="radio" data-value="premium" class="appCategoryRadio" name="appCategoryRadio"> Premium
    </div>
    <input type="hidden" class="col-lg-6 col-sm-12 col-xs-12" name="overview_appCategory" id="overview_appCategory"/>
    </div>
    </div>

  6. Add the below code snippet to <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs.
    <div class="form-group">
    <label class="control-label col-sm-2">App Category: </label>
    <div class="col-sm-10">
    <div class="radio">
    <input type="radio" data-value="free" class="appCategoryRadio" name="appCategoryRadio"> Free
    </div>
    <div class="radio">
    <input type="radio" data-value="premium" class="appCategoryRadio" name="appCategoryRadio"> Premium
    </div>
    <input type="hidden"
    value="{{{snoop "fields(name=overview_appCategory).value" data}}}"
    class="col-lg-6 col-sm-12 col-xs-12" name="overview_appCategory" id="overview_appCategory"/>
    </div>
    </div>

  8. To save the selected radio button value in the registry, you need to add the below function inside $(document).ready(function() {}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-add.js
    var output = [];
    $(".appCategoryRadio").each(function (index) {
        var categoryValue = $(this).data('value');
        if ($(this).is(':checked')) {
            output.push(categoryValue);
        }
    });
    $('#overview_appCategory').val(output.join(','));

  10. To preview the selected radio button value in the app edit page, add the below code snippet inside $(document).ready(function() {}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-edit.js.
    var output = [];
    $(".appCategoryRadio").each(function (index) {
        var categoryValue = $(this).data('value');
        if ($(this).is(':checked')) {
            output.push(categoryValue);
        }
    });
    $('#overview_appCategory').val(output.join(','));

    var appCategoryValue = $('#overview_appCategory').val().split(',');
    $(".appCategoryRadio").each(function (index) {
        var value = $(this).data('value');
        if ($.inArray(value, appCategoryValue) >= 0) {
            $(this).prop('checked', true);
        }
    });

  12. When you create a new version of an existing web app, to copy the selected radio button value to the new version, add the below line:
    <input type='text' value="{{{snoop "fields(name=overview_appCategory).value" data}}}" name="overview_appCategory" id="overview_appCategory"/>

    Now, let's see how to add customized fields to the REST APIs.
  14. Go to Main -> Browse in the Management Console, navigate to /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json and click "Edit As Text". Add the custom fields you want to add.

  16. Restart App Manager.
  17. The web app create page with the newly added radio button will be shown as below.

Lakshani Gamage[WSO2 App Manager] How to Add a Custom Checkbox to a Webapp

WSO2 App Manager allows users to customize the server as they need. Here, I'm going to talk about how to add a custom checkbox to the web app create page.

First, let's see how to add a custom checkbox field to the UI (Jaggery APIs).

For example, let's take the checkbox attribute name as "Free App".
  1. Modify <APPM_HOME>/repository/resources/rxt/webapp.rxt.
    <field type="text">
    <name label="Free App">Free App</name>
    </field>

  3. Log in to the Management Console, navigate to Home > Extensions > Configure > Artifact Types and delete "webapp.rxt".
  4. Add the below code snippet to the required place in <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs
    <div class="form-group">
    <label class="control-label col-sm-2">Free App: </label>
    <div class="col-sm-10 checkbox-div">
    <input type="checkbox" class="freeApp_checkbox">
    <input type="hidden" required="" value="FALSE" class="col-lg-6 col-sm-12 col-xs-12" name="overview_freeApp" id="overview_freeApp">
    </div>
    </div>

  6. Add the below code snippet to <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs as well.
    <div class="form-group">
    <label class="control-label col-sm-2">Free App: </label>
    <div class="col-sm-10 checkbox-div">
    <input type="checkbox" class="freeApp_checkbox" value="{{{snoop "fields(name=overview_freeApp).value" data}}}">
    <input type="hidden" required="" value="{{{snoop "fields(name=overview_freeApp).value" data}}}"
    class="col-lg-6 col-sm-12 col-xs-12" name="overview_freeApp" id="overview_freeApp">
    </div>
    </div>

  8. To save the newly added value in the registry, add the below function inside $(document).ready(function() {}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-add.js
    var output = [];
    $(".freeApp_checkbox").each(function (index) {
        if ($(this).is(':checked')) {
            output.push("TRUE");
        } else {
            output.push("FALSE");
        }
    });
    $('#overview_freeApp').val(output.join(','));
  10. To preview the checkbox value in the app edit page, add the below code snippet inside $(document).ready(function() {}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-edit.js.
    var freeAppValue = $('#overview_freeApp').val();
    $(".freeApp_checkbox").each(function (index) {
        if (freeAppValue == "TRUE") {
            $(this).prop('checked', true);
        } else {
            $(this).prop('checked', false);
        }
    });

    var output = [];
    $(".freeApp_checkbox").each(function (index) {
        if ($(this).is(':checked')) {
            output.push("TRUE");
        } else {
            output.push("FALSE");
        }
    });
    $('#overview_freeApp').val(output.join(','));

  12. When you create a new version of an existing web app, to copy the checkbox value to the new version, add the below line:
    <input type='text' value="{{{snoop "fields(name=overview_freeApp).value" data}}}" name="overview_freeApp" id="overview_freeApp"/>

    Now, let's see how to add customized fields to the REST APIs.
  14. Go to Main -> Browse in the Management Console, navigate to /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json and click "Edit As Text". Add the custom fields you want to add.

  16. Restart App Manager.
  17. The web app create page with the newly added checkbox will be shown as in the image below.

Amani SoysaAutism... the good, the bad and the ugly

Autism, or Autism Spectrum Disorder, is a brain-based disorder which makes the brain function in a different way compared to the rest of the world.

Autism affects three major areas of a person:

  1. Social Interaction - Autistic people do not like to socialise and tend to be in their own world; they find it very hard to keep eye contact, and rarely show empathy or compassion towards other human beings.

  2. Communication - Most kids on the spectrum have delayed speech or are non-verbal (40% of autistic kids are non-verbal), which can be an early sign of autism. Kids with autism have trouble with speech and communication; they find it hard to express what they need, and to understand what you are saying to them. This can be because they find it hard to focus on you, as they are mostly distracted by their surroundings.

  3. Behaviours and Interests - The behaviours and interests of autistic people differ from person to person. Most children and grownups with autism have repetitive movements with objects, or repeated body movements such as rocking and hand flapping.

These are a few of the characteristics of autism:
  • Difficulty with verbal communication, including problems using and understanding language
  • Inability to participate in a conversation, even when the child has the ability to speak
  • Difficulty with non-verbal communication, such as gestures and facial expressions
  • Difficulty with social interaction, including relating to people and to his or her surroundings
  • Difficulty making friends and preferring to play alone
  • Unusual ways of playing with toys and other objects, such as only lining them up a certain way
  • Difficulty adjusting to changes in routine or familiar surroundings, or an unreasonable insistence on following routines in detail
  • Repetitive body movements, or patterns of behavior, such as hand flapping, spinning, and head banging
  • Preoccupation with unusual objects or parts of objects
However, autism is a spectrum disorder; therefore, children and adults with autism behave very uniquely, and the challenges they go through differ from person to person. Furthermore, the severity of the autism may change from time to time.

Autism can be a problem if it's not identified at an early age; autism can be identified within 18 months of birth, or even earlier. Infants with autism may show no big smiles or other warm, joyful expressions by six months or later, and no back-and-forth gestures such as pointing, showing, reaching or waving by 12 months. Some kids with autism have no speech, babbling or social skills at any age. These are a very few early signs of autism; if you see these red flags, it's better not to delay asking your pediatrician or family doctor for an evaluation.

It's always good to identify autism at a very early age so you can get the necessary help with communication, language development, emotional skills and education. Kids with autism will have a hard time dealing with social interaction and traditional approaches to learning; however, they can be very smart, knowledgeable and gifted children. If they are given the right set of tools for their learning, and if educators can help them understand how they can learn, they can do wonders.

Autistic children are gifted and special; they have a really good focus on things and they see what most people miss. Due to this, they have incredible skills in areas such as mathematics, art, music and many more. They see the world in a different manner and can help improve the world greatly. Here are a few famous people thought to have been autistic.

Albert Einstein - Very intelligent physicist with many discoveries
Amadeus Mozart - Talented musician
Sir Isaac Newton - Scientist with many discoveries
Charles Darwin - Naturalist and geologist
Thomas Jefferson - President of the United States
Michelangelo - Artist

As mentioned above, autistic people have unimaginable talents, and they make our world a better place.

The ugly part of autism is not the autistic people; it's society and how it looks at them. Due to their lack of social skills, there are many instances where autistic children get bullied and misunderstood. Most of them cannot understand sarcasm or phrasal expressions, and only understand the literal meanings of words. Their behaviour and how they perceive things are different compared to the majority of people; because of that, society tends to make autistic people feel less important or weird. According to the Centers for Disease Control and Prevention, 1 in 68 people have autism. This means there is a large population of autistic people in the world today; if we can be more patient and understanding when dealing with them, we can make a better world for them as well as for us.

Imesh Gunaratne"Is Mesos DC/OS really a Data Center Operating System?" in Container Mind


A walk through of DC/OS container cluster manager

Would it be possible for an entire datacenter to run on a single operating system? Generally, the role of an operating system is to provide access to hardware resources and system software for running software applications. Technically, DC/OS is a container cluster manager which provides a platform for deploying software applications on containers. More importantly, it can run on physical or virtual machines by abstracting out underlying infrastructure from the application layer. DC/OS can be installed on any datacenter on hundreds and thousands of machines, and act as an operating system for its applications. Due to these reasons, DC/OS is considered as a datacenter operating system.

Nevertheless, at a glance, it may sound more like a marketing term than what it actually offers. DC/OS is very similar to other container cluster managers such as Kubernetes and Docker Swarm, which provide almost the same set of features. However, there is a clear difference between DC/OS and other container cluster managers when deploying Big Data and Analytics solutions: its ability to extend the scheduler to provide dedicated container scheduling capabilities. For example, systems such as Apache Spark, Apache Storm, Hadoop and Cassandra have implemented Mesos schedulers for running their workloads on Mesos, specifically optimizing container scheduling for their needs. More importantly, such complex distributed systems can be deployed on Mesos with a few clicks. Additional information on that can be found here.

Apache Mesos the DC/OS Kernel

Figure 1: Apache Mesos architecture

Mesos, the core of DC/OS, is a cluster manager initially developed at the University of California, Berkeley around 2009 and later donated to Apache. It is now used by many large organizations, including Twitter, Airbnb and Apple. As shown in the above figure, Mesos provides an extension point for plugging in task schedulers as Mesos frameworks. The schedulers receive resource offers for scheduling tasks for end user applications. Tasks get scheduled on Mesos slave nodes via executors. These tasks can be executed using either the Mesos or the Docker containerizer. The Mesos containerizer is the first container runtime supported by Mesos, and it uses Linux kernel features such as cgroups and namespaces. Later, with the introduction of Docker, Mesos added support for running Docker containers on its cluster manager.

Marathon the PaaS Framework

Figure 2: Mesos Marathon Framework

Marathon comes bundled with DC/OS. It's the core Mesos framework, providing platform-as-a-service features for DC/OS. End user applications can be deployed on DC/OS as Marathon applications. DC/OS also provides a store of industry well-known software systems, called the DC/OS Universe. A Marathon application specifies the resource requirements (CPU, memory), the Docker image id, container ports, service ports (ports to be exposed by the load balancer), networking model (bridge/host), startup parameters, labels and health checks of the software system.
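To make that concrete, a minimal Marathon application definition might look like the following sketch; the app id, image, ports and label values are illustrative, not part of the original article:

```json
{
  "id": "/myapp",
  "instances": 2,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:1.10",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 10000, "protocol": "tcp" }
      ]
    }
  },
  "labels": { "HAPROXY_GROUP": "external" },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/", "gracePeriodSeconds": 60, "intervalSeconds": 10, "maxConsecutiveFailures": 3 }
  ]
}
```

The HAPROXY_GROUP label is what the Marathon load balancer uses to pick up the app, and servicePort is the port the load balancer exposes for it.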

Once an application is deployed, Marathon will first check the availability of resources against the requirement and then schedule containers accordingly. Afterwards it will make use of the given health checks to verify the status of the containers and auto-heal them if the system is not functioning properly.

Figure 3: Marathon Application Deployment Architecture

Marathon applications will use the Marathon load balancer (which is HAProxy-based) for exposing service ports and load balancing containers. It is deployed as a separate Marathon application and makes use of the Marathon API for dynamically updating the load balancer configuration. Marathon applications can define hostnames for their clusters to enable hostname-based routing. Marathon also provides DNS names for Marathon applications via the Mesos-DNS server.


Figure 4: DC/OS High Level Architecture

The above diagram illustrates the high-level architecture of DC/OS. Initially, the DC/OS solution was implemented as a commercial offering; later, in April 2016, it was open sourced. I believe this was a critical business decision taken by Mesosphere for competing with Kubernetes. Had it not been open sourced, it may not have received as much attention and traction as an open source project.

Current Limitations

As of DC/OS 1.7, the following limitations were identified:

  • No overlay network support. As a result, each container port needs to be exposed via host ports, and internal communication gets proxied.
  • No service concept similar to Kubernetes services. Therefore, Marathon applications need to be load balanced via the Marathon load balancer even when session affinity is not required. Kubernetes services do network-level routing using iptables rules, which is much faster than traditional load balancers.


I worked on designing and implementing several production-grade middleware deployments on DC/OS 1.7, and they were quite stable. DC/OS provides a Vagrant deployment script and many other installation guides for setting it up without much effort. One of the key reasons for choosing DC/OS over Kubernetes in the above projects was the support for Big Data and Analytics platforms. At the time this article was written, Mesosphere was implementing a layer 4 load balancer called Minuteman for DC/OS, which would overcome the limitations mentioned above in future DC/OS releases.

Is Mesos DC/OS really a Data Center Operating System? was originally published in Container Mind on Medium, where people are continuing the conversation by highlighting and responding to this story.



Amalka SubasingheAdd a new app type to the WSO2 App Cloud

This blog post explains how you can add a new app type to the WSO2 App Cloud.

Step 1: Get a clone of WSO2 app cloud code base
git clone

Step 2: Create the Docker files for the runtime
Please refer to our existing Docker files when creating new Docker files for a particular runtime.

The following blog post gives you some details about the structure of the Docker files.

Step 3: Required database changes
When adding a new app type you need to add some database records; the following diagram gives you an idea of the database schema.

AC_CLOUD defines the cloud types
AC_APP_TYPE defines app types
AC_RUNTIME defines runtimes
AC_CONTAINER_SPECIFICATIONS defines container specs
AC_TRANSPORT defines the ports we expose for end users

-- insert app type
-- insert app type, cloud mapping
-- insert runtime
-- insert app type, runtime mapping
-- insert container spec if required
-- insert runtime, container spec mapping
-- insert transport if required
-- insert runtime, transport mapping
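As a sketch only, the outline above could translate into SQL like the following. All ids, the app type name, and the column lists are hypothetical illustrations; only AC_APP_TYPE_RUNTIME's columns appear elsewhere in this series, so check the actual schema before running anything:

```sql
-- Hypothetical example: ids, names and column lists are illustrative.
INSERT INTO `AC_APP_TYPE` (`id`, `name`) VALUES (7, 'myapptype');

-- Map the new app type to a cloud and to an existing runtime
-- (mapping-table names assumed to follow the pattern used later in this series).
INSERT INTO `AC_APP_TYPE_CLOUD` (`app_type_id`, `cloud_id`) VALUES (7, 1);
INSERT INTO `AC_APP_TYPE_RUNTIME` (`app_type_id`, `runtime_id`) VALUES (7, 1);
```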

Step 4: Specify app type metadata in the app-types-properties.json file

We use this JSON file to load the app type details into the App Cloud UI.
Step 5: Add a sample
We need to add a sample to implement the deploy sample option.

Commit your sample archive here:

Specify the sample location here with the property <app_type>_sample_artifact_url

Step 7: Implement the endpoints section loading on the app home page
Please refer to the following blog post to see how we developed the "Endpoints" section per app type.

Amalka SubasingheLoading endpoints to the app home page in WSO2 App Cloud

Currently we have six app types in WSO2 App Cloud, and when we load the app home page of a created application we can see a section called "Endpoints". This blog post is about how we load endpoints for each and every app type, and its implementation.

Loading endpoints for the Java web application, Jaggery and PHP app types
For these three app types, the user has to define the application context when creating an application, as shown in the image below.

Then we append that context to the app version URL/default URL and display it on the app home page as below.

Loading endpoints for the Microservices app type
For the Microservices app type, the Microservices 2.0.0 runtime itself provides a Swagger URL; we just display that Swagger URL here.

Loading endpoints for the WSO2 Data Services app type
For the Data Services app type, we need to invoke the ServiceAdmin SOAP service to get the endpoints. So we developed an Axis2 service and deployed it in the DSS runtime; it invokes ServiceAdmin and returns the endpoints to the App Cloud.

You can see the axis2Service here:

Loading endpoints for the WSO2 ESB app type
For the ESB app type, we have developed an API and deployed it in the ESB runtime to get the SOAP and REST endpoints.

You can see the code of the API here:

You can find the implementation code we use to load endpoints per app type here.

Per app type, we need to define which mechanism to use to load endpoints to the app home page; this is defined in the app-types-properties.json file as the endpointExtractorObject property.

Enabling or disabling the appContext property will show/hide the Application Context field on the Create Application page.

Providing a value for defaultContext will show a default context in the Application Context field on the Create Application page.
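As a purely hypothetical sketch (not the actual App Cloud code), the endpointExtractorObject idea can be pictured as a map from app type to an endpoint-building function; the app type names, fields and URLs below are invented for illustration:

```javascript
// Hypothetical sketch of the endpointExtractorObject idea: each app type
// maps to a function that builds its endpoint list from app details.
var endpointExtractors = {
    // Web app types: append the user-defined application context to the URL
    "webapp": function (app) { return [app.defaultURL + app.context]; },
    // Microservices app type: expose the Swagger URL provided by the runtime
    "microservices": function (app) { return [app.defaultURL + "/swagger"]; }
};

// Pick the extractor for an app type; unknown types yield no endpoints.
function getEndpoints(appType, app) {
    var extract = endpointExtractors[appType];
    return extract ? extract(app) : [];
}
```

Each new app type then only needs to register one extractor function, which is the point of the abstraction layer described above.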

Amalka SubasingheAdd a new runtime to an existing app type in WSO2 App Cloud

This blog post explains how you can add a new runtime to an existing app type. Recently the WSO2 Carbon team released WSO2 AS 6.0.0 M3. I'm going to explain how we can add WSO2 AS 6.0.0 M3 as a new runtime for the war app type.

Step 1: Get a clone of the app-cloud code base
git clone

Step 2: Create the required Docker files
When creating Docker files, please refer to the existing Docker files we have created for other runtimes to get an idea.

You can find the existing Docker files here.

You can find the wso2as Docker files here. We need to add the Dockerfiles related to WSO2 AS 6.0.0 M3 here.

This folder contains the Docker files required to build the wso2as base images.

This folder contains the Docker files required to build an image with the wso2as base image and the WAR file we upload when creating an application.

This folder contains the Docker files required to build an image with the wso2as base image and the WAR file we upload via URL when creating an application.

Step 3: Database changes required
When adding a new runtime, you will need to update the database; the following diagram explains the relationships with the AC_RUNTIME table.

AC_RUNTIME defines the runtimes
AC_CONTAINER_SPECIFICATIONS defines the container specs we allow in our App Cloud setup
AC_TRANSPORT defines the ports we expose for end users to access
AC_APP_TYPE defines the app types

Database queries required to add WSO2 AS 6.0.0 M3 runtime

-- add WSO2 AS 6.0.0 M3 runtime to the AC_RUNTIME table
INSERT INTO `AC_RUNTIME` (`id`, `name`, `repo_url`, `image_name`, `tag`, `description`) VALUES
(10, 'Apache Tomcat 8.0.28 / WSO2 Application Server 6.0.0-M3','', 'wso2as', '6.0.0-m3', 'OS:Debian, JAVA Version:8u72');

-- add app type-runtime mapping
INSERT INTO `AC_APP_TYPE_RUNTIME` (`app_type_id`, `runtime_id`) VALUES
(1, 10);

-- add relevant container spec mappings
-- (table and column names below are assumed to mirror the transport mapping that follows)
INSERT INTO AC_RUNTIME_CONTAINER_SPECIFICATIONS (`runtime_id`, `container_spec_id`) VALUES
(10, 3),
(10, 4);

-- add relevant transport mappings
INSERT INTO AC_RUNTIME_TRANSPORT (`transport_id`, `runtime_id`) VALUES
(3, 10),
(4, 10);

Step 4: Integrate the above changes into App Cloud
Once you complete the above changes, you can send us a pull request :)

Dimuthu De Lanerolle

Understanding different WSO2 ESB Scopes 


1. Synapse scope
When the scope of a property mediator is synapse, its value is available throughout both the in sequence and the out sequence.

2. axis2 scope
When the scope of a property mediator is axis2, its value is available only throughout the sequence for which the property is defined (e.g., if you add the property to an in sequence, its value will be available only throughout the in sequence).
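The two scopes can be sketched in a Synapse configuration like this; the property names and values are illustrative only:

```xml
<inSequence>
    <!-- synapse scope: available in both the in sequence and the out sequence -->
    <property name="sharedProp" value="visible-everywhere" scope="synapse"/>
    <!-- axis2 scope: available only within the sequence where it is defined -->
    <property name="localProp" value="in-sequence-only" scope="axis2"/>
</inSequence>
```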

Senduran BalasubramaniyamHow to setup a local SVN server

The following commands guide you through setting up a local SVN server. This is helpful when you want a temporary SVN server (e.g., for testing a cluster setup).

Creating the repository

(Go to the desired path where you want to set up the repository.)
svnadmin create <repository-name>

Setting up username and password

uncomment the following in <repository-name>/conf/svnserve.conf
  • anon-access = none
  • auth-access = write
  • password-db = passwd 
add the users in the <repository-name>/conf/passwd in the following format 
username = password 
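After those edits, the relevant part of <repository-name>/conf/svnserve.conf should read as follows (the [general] section header is already present in the stock file):

```
[general]
anon-access = none
auth-access = write
password-db = passwd
```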

Starting the SVN server 

svnserve -d -r <repository-name>

The SVN server will start on the default port 3690.

Checking out the repository 

svn co svn://localhost/

Committing a file 

svn ci -m "adding hello file" hello.txt --username username --password password

Stopping the svn server

ps -ef | grep [s]vnserve | awk '{print $2}' | xargs kill -9

Amalka SubasingheGeneralising WSO2 App Cloud to Implement another cloud perspective (view)

Currently, WSO2 App Cloud follows a container-based approach to provide different runtimes for deploying application artifacts. Likewise, we can follow the same approach to deploy an ESB CAR file as an app type in WSO2 App Cloud. But our requirement is to provide a separate cloud perspective (view) for end users for integration solutions, so we decided to generalize the App Cloud to operate in two modes (as App Cloud and Integration Cloud) on a single App Cloud deployment.

1. We need two different URLs ( and ) to log in to the separate clouds.

How SSO works:
Currently, when a user logs in, the request redirects to WSO2 IS for SSO and then comes back to the App Cloud with the App Cloud URL. Similarly, when a user requests the Integration Cloud, it should redirect to WSO2 IS and then come back to the App Cloud with the Integration Cloud URL. For that we use two different issuers and two SPs configured on the IS side. When a request first comes to the App Cloud, we find which cloud the user requested based on the host name, and then redirect the request to IS with the correct issuer.

Loading the relevant cloud view with valid titles, breadcrumbs, navigation buttons, etc.:
Once SSO happens, we put the requested cloud in the session and then load the UI with the correct UI elements by reading the following JSON file.

{
    "app-cloud" : {
      "pageTitle": "WSO2 App Cloud",
      "cloudTitle" : "Application Cloud",
      "properties" : {
          "documentationUrl": "AppCloud.Documentation.Url",
          "supportUrl": "AppCloud.Support.Url"
      }
    },
    "integration-cloud" : {
      "pageTitle": "WSO2 Integration Cloud",
      "cloudTitle" : "Integration Cloud",
      "properties" : {
          "documentationUrl": "IntegrationCloud.Documentation.Url",
          "supportUrl": "IntegrationCloud.Support.Url"
      }
    }
}
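Selecting the cloud view at login time can be sketched as follows. This is a minimal Python sketch, not the actual App Cloud implementation; the host names and the default cloud are illustrative assumptions:

```python
import json

# Hypothetical mapping from request host name to cloud key (names illustrative).
CLOUD_BY_HOST = {
    "apps.cloud.wso2.com": "app-cloud",
    "integration.cloud.wso2.com": "integration-cloud",
}

# The per-cloud UI configuration, mirroring the JSON file above.
CLOUD_CONFIG = json.loads("""
{
  "app-cloud": {
    "pageTitle": "WSO2 App Cloud",
    "cloudTitle": "Application Cloud",
    "properties": {
      "documentationUrl": "AppCloud.Documentation.Url",
      "supportUrl": "AppCloud.Support.Url"
    }
  },
  "integration-cloud": {
    "pageTitle": "WSO2 Integration Cloud",
    "cloudTitle": "Integration Cloud",
    "properties": {
      "documentationUrl": "IntegrationCloud.Documentation.Url",
      "supportUrl": "IntegrationCloud.Support.Url"
    }
  }
}
""")


def view_for_host(host):
    """Resolve the UI configuration for the cloud the user requested."""
    cloud = CLOUD_BY_HOST.get(host, "app-cloud")  # default assumed here
    return CLOUD_CONFIG[cloud]
```

The same lookup result can then drive the page title, breadcrumbs, and navigation rendering described above.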

2. Based on the selected cloud, app cloud should operate as follows.

- We want to differentiate app types per cloud.
- Application home page should list only the applications created in the selected cloud.
- Application home page search should cover only applications created in the selected cloud.
- Separate subscription plans are required per cloud (max number of applications and databases per cloud).
- Separate whitelisting is required per cloud.

So we changed the App Cloud database table structure as shown in the diagram below and updated the implementation to fetch per-cloud data.
With these changes we can deploy the app cloud as a separate deployment if required in future.

3. Unified UI design

Per app type, we need to load different UI components on the app home page.
As an example, consider how we display endpoints per app type. Different types of applications provide different types of endpoints: ESB app types give SOAP and REST endpoints, Web/PHP gives just a web URL, JAX-WS gives a SOAP endpoint, etc. Likewise, we will need to add more UI components per app type. So we decided to go with a unified UI design approach per app type, with a JavaScript abstraction layer.

This is how we render endpoints per app type:
When the user navigates to the app home page, we make a call to the container, get the URLs, and generate the UI component displayed on the page.
We don’t persist these endpoints in the database, so the user can’t see the endpoints when the container is not up and running.

Imesh Gunaratne: "Evolution of Linux Containers and Future" in Container Mind


Linux containers are an operating-system-level virtualization technology for providing multiple isolated Linux environments on a single Linux host. Unlike virtual machines (VMs), containers do not run dedicated guest operating systems; rather, they share the host operating system kernel and make use of the guest operating system's libraries to provide the required OS capabilities. Since there is no dedicated operating system, containers start much faster than VMs.

Image credit: Docker Inc.

Containers make use of Linux kernel features such as namespaces, AppArmor and SELinux profiles, chroot, and cgroups to provide an isolated environment similar to VMs. Linux security modules guarantee that access to the host machine and the kernel from the containers is properly managed to avoid any intrusion. In addition, containers can run a different Linux distribution from their host operating system, as long as both operating systems can run on the same CPU architecture.

In general containers provide means of creating container images based on various Linux distributions, an API for managing the lifecycle of the containers, client tools for interacting with the API, features to take snapshots, migrating container instances from one container host to another, etc.

Container History

Below is a short summary of container history extracted from Wikipedia and other sources:

1979 — chroot

The container concept started way back in 1979 with UNIX chroot, a UNIX operating-system system call for changing the root directory of a process and its children to a new location in the filesystem which is only visible to that process. The idea of this feature is to provide an isolated disk space for each process. It was added to BSD in 1982.

2000 — FreeBSD Jails

FreeBSD Jails is one of the early container technologies, introduced by Derrick T. Woolworth at R&D Associates for FreeBSD in 2000. It is an operating-system system call similar to chroot, but with additional process sandboxing features for isolating the filesystem, users, networking, etc. As a result, it could provide means of assigning an IP address to each jail, custom software installations and configurations, etc.

2001 — Linux VServer

Linux VServer is another jail mechanism that can be used to securely partition resources on a computer system (file system, CPU time, network addresses and memory). Each partition is called a security context, and the virtualized system within it is called a virtual private server.

2004 — Solaris Containers

Solaris Containers was introduced for x86 and SPARC systems, first released publicly in February 2004 in build 51 beta of Solaris 10, and subsequently in the first full release of Solaris 10 in 2005. A Solaris Container is a combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance.

2005 — OpenVZ

OpenVZ is similar to Solaris Containers and makes use of a patched Linux kernel to provide virtualization, isolation, resource management, and checkpointing. Each OpenVZ container has an isolated file system, users and user groups, a process tree, network, devices, and IPC objects.

2006 — Process Containers

Process Containers was implemented at Google in 2006 for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. It was later renamed Control Groups to avoid confusion with the multiple meanings of the term “container” in the Linux kernel context, and was merged into Linux kernel 2.6.24. This shows how early Google was involved in container technology and how it has contributed back.

2007 — Control Groups

As explained above, Control Groups (aka cgroups) was implemented by Google and added to the Linux kernel in 2007.

2008 — LXC

LXC stands for LinuX Containers, and it is the first complete implementation of a Linux container manager. It was implemented using cgroups and Linux namespaces. LXC was delivered in the liblxc library and provided language bindings for the API in Python 3, Python 2, Lua, Go, Ruby, and Haskell. In contrast to other container technologies, LXC works on the vanilla Linux kernel without requiring any patches. Today the LXC project is sponsored by Canonical Ltd.

2011 — Warden

Warden was implemented by Cloud Foundry in 2011, using LXC at the initial stage and later replacing it with their own implementation. Unlike LXC, Warden is not tightly coupled to Linux; rather, it can work on any operating system that provides ways of isolating environments. It runs as a daemon and provides an API for managing the containers. Refer to the Warden documentation for more detailed information.

2013 — LMCTFY

lmctfy stands for “Let Me Contain That For You”. It is the open source version of Google’s container stack, which provides Linux application containers. Google started this project with the intention of providing guaranteed performance, high resource utilization, shared resources, over-commitment, and near-zero overhead with containers (ref: lmctfy presentation). The cAdvisor tool used by Kubernetes today started as a result of the lmctfy project. The initial release of lmctfy was made in October 2013, and in 2015 Google decided to contribute the core lmctfy concepts and abstractions to libcontainer. As a result, lmctfy is no longer actively developed.

The libcontainer project was initially started by Docker and now it has been moved to Open Container Foundation.

2013 — Docker

Docker is the most popular and widely used container management system as of January 2016. It was developed as an internal project at a platform-as-a-service company called dotCloud and was later renamed Docker. Like Warden, Docker also used LXC at the initial stages and later replaced it with its own library called libcontainer. Unlike any other container platform, Docker introduced an entire ecosystem for managing containers. This includes a highly efficient, layered container image model, global and local container registries, a clean REST API, a CLI, etc. At a later stage Docker also took the initiative to implement a container cluster management solution called Docker Swarm.

2014 — Rocket

Rocket is an initiative much like Docker, started by CoreOS to fix some of the drawbacks they found in Docker. CoreOS has mentioned that their aim is to meet more rigorous security and production requirements than Docker. More importantly, it is implemented on the App Container specification to be a more open standard. In addition to Rocket, CoreOS also develops several other container-related products used by Docker and Kubernetes: the CoreOS operating system, etcd, and flannel.

2016 — Windows Containers

Microsoft also took the initiative to add container support to the Microsoft Windows Server operating system in 2015, for Windows-based applications; this is called Windows Containers. It is to be released with Microsoft Windows Server 2016. With this implementation, Docker will be able to run Docker containers on Windows natively, without having to run a virtual machine (earlier, Docker ran on Windows using a Linux VM).

The Future of Containers

As of today (January 2016) there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons are the flexibility and low cost that containers provide compared to VMs. Google has used container technology for many years, with the Borg and Omega container cluster management platforms, for running Google applications at scale. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google may have gained huge benefits in performance, resource utilization, and overall efficiency using containers during the past years. Very recently Microsoft, which did not have operating-system-level virtualization on the Windows platform, took immediate action to implement native support for containers on Windows Server.

Docker, Rocket, and other container platforms should not run on a single host in a production environment, because they would then be exposed to a single point of failure. With a collection of containers running on a single host, if the host fails, all the containers running on it fail as well. To avoid this, a cluster of container hosts needs to be used. Google took a step toward this by implementing an open source container cluster management system called Kubernetes, drawing on its experience with Borg. Docker also started a solution called Docker Swarm. Today these solutions are at a very early stage, and it may take several months, maybe another year, for them to complete their full feature set, become stable, and be widely used in production environments.

Microservices are another groundbreaking technology, or rather a software architecture, that uses containers for deployment. A microservice is nothing new; it is a lightweight implementation of a web service which can start extremely fast compared to a standard web service. This is done by packaging a unit of functionality (maybe a single service/API method) in one service and embedding it into a lightweight web server binary.

Considering the above, we can predict that in the next few years containers may overtake virtual machines, and in some cases replace them completely. Last year I worked with a handful of enterprises implementing container-based solutions at the POC level. There were a few who wanted to take on the challenge and put them in production. This may change very quickly as container cluster management systems mature.

Evolution of Linux Containers and Future was originally published in Container Mind on Medium, where people are continuing the conversation by highlighting and responding to this story.



Farasath Ahamed: Configuring WSO2 Identity Server to return Attribute Profile claims in SAML SSO Response

Configuring SAML SSO for an external service provider with WSO2 Identity Server is probably one of the most common use cases I have heard of since my day 1 at WSO2.

Setting it up is quite easy; just follow the docs.

Now let me start from there: what if someone wants to retrieve certain claims of a user in the SAML response? How easy is it to configure that?

Well, Let me show you :)

Step 1

Assuming that you have set up a service provider in Identity Server by following the docs, you should have a configuration like the one below.

The most important part of this config is the "Enable Attribute Profile" tick, which allows you to get a set of pre-configured claims in the SAML response. Be sure to have it ticked.

Step 2

Now that you are done with Step 1, in Step 2 you simply configure the claims that you want returned in the SAML response. To do this:

Go to the "Claim Configuration" section of the service provider.

Now click on "Add Claim URI" and select the claims that you want returned.

Step 3

In my opinion, this is the most important step. Doing it right will save you a lot of time scratching your head and hating your life. :)

Just make sure that the claims you configured are filled in in the user profile. :)
You can go to the user profile by navigating to Users and Roles -> List -> Users

Now you are done with the configuration. Time to get into action. I tried this using the travelocity sample that ships with IS; you can use your own sample.

At the end of the SAML SSO flow, we should end up with a SAML Response that includes the requested claims of the user. You can find a sample Response below. Pay attention to the AttributeStatement section.

<saml2p:Response xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:xs="" Destination="http://localhost:8080/" ID="_03e9e7b4edc91c5dd1bcfb75b72d845b" InResponseTo="lhlohefjlfaonknijcjnipgdenkocehhhpgcnbdl" IssueInstant="2016-09-02T17:24:37.136Z" Version="2.0">
  <saml2:Issuer xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">localhost</saml2:Issuer>
  <ds:Signature xmlns:ds="">
    <ds:SignedInfo>
      <ds:CanonicalizationMethod Algorithm=""/>
      <ds:SignatureMethod Algorithm=""/>
      <ds:Reference URI="#_03e9e7b4edc91c5dd1bcfb75b72d845b">
        <ds:Transforms>
          <ds:Transform Algorithm=""/>
          <ds:Transform Algorithm="">
            <ec:InclusiveNamespaces xmlns:ec="" PrefixList="xs"/>
          </ds:Transform>
        </ds:Transforms>
        <ds:DigestMethod Algorithm=""/>
      </ds:Reference>
    </ds:SignedInfo>
  </ds:Signature>
  <saml2p:Status>
    <saml2p:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
  </saml2p:Status>
  <saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="" ID="_fe2af7a470b2c48202017a12a49ae190" IssueInstant="2016-09-02T17:24:37.138Z" Version="2.0">
    <saml2:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">localhost</saml2:Issuer>
    <ds:Signature xmlns:ds="">
      <ds:SignedInfo>
        <ds:CanonicalizationMethod Algorithm=""/>
        <ds:SignatureMethod Algorithm=""/>
        <ds:Reference URI="#_fe2af7a470b2c48202017a12a49ae190">
          <ds:Transforms>
            <ds:Transform Algorithm=""/>
            <ds:Transform Algorithm="">
              <ec:InclusiveNamespaces xmlns:ec="" PrefixList="xs"/>
            </ds:Transform>
          </ds:Transforms>
          <ds:DigestMethod Algorithm=""/>
        </ds:Reference>
      </ds:SignedInfo>
    </ds:Signature>
    <saml2:Subject>
      <saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"></saml2:NameID>
      <saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
        <saml2:SubjectConfirmationData InResponseTo="lhlohefjlfaonknijcjnipgdenkocehhhpgcnbdl" NotOnOrAfter="2016-09-02T17:29:37.136Z" Recipient="http://localhost:8080/"/>
      </saml2:SubjectConfirmation>
    </saml2:Subject>
    <saml2:Conditions NotBefore="2016-09-02T17:24:37.138Z" NotOnOrAfter="2016-09-02T17:29:37.136Z"/>
    <saml2:AuthnStatement AuthnInstant="2016-09-02T17:24:37.143Z" SessionIndex="a5dca970-a8c5-4d9c-a522-9c7556f0db39"/>
    <saml2:AttributeStatement>
      <saml2:Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string">Sri Lanka</saml2:AttributeValue>
      </saml2:Attribute>
      <saml2:Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string"></saml2:AttributeValue>
      </saml2:Attribute>
      <saml2:Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string">admin_lastname</saml2:AttributeValue>
      </saml2:Attribute>
    </saml2:AttributeStatement>
  </saml2:Assertion>
</saml2p:Response>

Yashothara Shanmugarajah: Java Flight Recorder with WSO2 Products

Java Flight Recorder in Brief

Java Flight Recorder (JFR) is a profiling and event collection framework built into the Oracle JDK. It gathers low-level information about the JVM and application behavior with minimal performance impact (less than 2%). With Java Flight Recorder, system administrators and developers have a new way to diagnose production issues. JFR provides a way to collect events from a Java application, from the OS layer through the JVM all the way up to the application. The collected events include thread latency events such as sleep, wait, lock contention, I/O, and GC, as well as method profiling. JFR can be enabled by default to continuously collect low-level data from Java applications in production environments, allowing a much faster turnaround time when a production issue occurs. Rather than turning on data gathering after the fact, the continuously collected JFR data is simply written to disk, and the analysis can be done on data collected leading up to the issue rather than data collected afterwards. We can use JFR to do long-run testing effectively.

Here we will see how to use JFR for long-run testing of WSO2 products through JMC (Java Mission Control). I am taking WSO2 CEP as the WSO2 product.
  • Go to <CEP_HOME>/bin and open the server startup script using gedit.
  • In it you can see $JAVACMD. Below that, add these flags to enable JFR:
    •  -XX:+UnlockCommercialFeatures \
    •  -XX:+FlightRecorder \
  • Start WSO2 CEP by running the startup script in a terminal.
  • Then start JMC. You can see org.wso2.carbon.bootstrap.Bootstrap in the UI as below.
  • Right-click on org.wso2.carbon.bootstrap.Bootstrap and click Start Flight Recording. The following window will appear.
  • Then go through the next steps, make your relevant changes, and click Finish.
  • After the time you specified, you will get a .jfr file in which you can see the memory growth, CPU usage, and so on.
  • You can open this .jfr file at any time.
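For unattended long-run tests, a recording can also be started at server startup instead of from the JMC UI. A sketch of the flags as they would appear below $JAVACMD in the startup script; the duration and file name are illustrative, and -XX:StartFlightRecording requires the Oracle JDK:

```
-XX:+UnlockCommercialFeatures \
-XX:+FlightRecorder \
-XX:StartFlightRecording=duration=60m,filename=cep-longrun.jfr \
```

With this, the .jfr file is written automatically when the recording duration elapses, without any interaction with JMC.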

    Thanks all :)

Charini Nanayakkara: Setting JAVA_HOME environment variable in Ubuntu

This post assumes that you have already installed JDK in your system.

Setting JAVA_HOME is important for certain applications. This post guides you through the process to be followed to set JAVA_HOME environment variable.

  • Open a terminal
  • Open "profile" file using following command: sudo gedit /etc/profile
  • Find the java path in /usr/lib/jvm. If it's JDK 7 the java path would be something similar to /usr/lib/jvm/java-7-oracle
  • Insert the following lines at the end of the "profile" file
          JAVA_HOME=/usr/lib/jvm/java-7-oracle
          export JAVA_HOME
          PATH=$JAVA_HOME/bin:$PATH
          export PATH
  • Save and close the file. 
  • Type the following command: source /etc/profile
  • You may have to restart the system
  • Check whether JAVA_HOME is properly set with following command: echo $JAVA_HOME. If it's properly set, /usr/lib/jvm/java-7-oracle would be displayed on the terminal.


Prakhash Sivakumar: Overview: Cross-Site Request Forgery (CSRF) - Recommended Approach for WSO2 Products

A CSRF attack forces a logged-on victim’s browser to send a forged HTTP request, which allows the attacker to trick a victim and perform…

Tharindu Edirisinghe: SAML Multi Valued Attributes in WSO2 Servers - Retrieving Role Claim of Users as a Single or Multi Valued Attribute

In my previous article [1], I explained how to retrieve user claims [2] in the SAML response when a relying party application uses SAML authentication with WSO2 Identity Server. In this article, I explain single and multi valued attributes in the SAML protocol and how to deal with them in WSO2 servers.

In the SAML authentication flow, the Identity Provider sends the SAML response to the relying party upon successful authentication. In order to identify the logged in user, the Subject of the SAML assertion can be used by the relying party. If the relying party needs to know more information about the logged in user (first name, last name, roles etc.), Identity Provider can include those attributes in the SAML AttributeStatement in the assertion.

There are user attributes with single values and others with multiple values. For example, the lastname of a user is a single value, represented as given below in the SAML response.

<saml2:Attribute Name="lastname">
   <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
</saml2:Attribute>

A user, however, may be granted multiple roles. Therefore, the roles can be included in the SAML response as follows.

<saml2:Attribute Name="role">
   <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
   <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
</saml2:Attribute>

Now since you know the difference between single and multi valued attributes, let’s see how it is used for the role claim of WSO2 Identity Server.

In WSO2 Identity Server 5.0.0, by default all the user roles were sent in a single-valued attribute, as shown below, with the roles separated by a comma (,).

<saml2:Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
   <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string"></saml2:AttributeValue>
</saml2:Attribute>

In WSO2 Identity Server 5.1.0, this behavior changed: by default the roles are treated as a multi-valued attribute, as given below.

<saml2:Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
   <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string"></saml2:AttributeValue>
   <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string"></saml2:AttributeValue>
   <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string"></saml2:AttributeValue>
   <saml2:AttributeValue xmlns:xsi="" xsi:type="xs:string"></saml2:AttributeValue>
</saml2:Attribute>

If you had been using Identity Server 5.0.0 and migrated to 5.1.0, the relying party would not be able to identify the role claim properly when processing the SAML response sent by the Identity Server.

In that case, you can restore the previous behavior and retrieve all user roles in a single-valued attribute. This can be done through a property.

You need to add the property MultiAttributeSeparator in the userstore configuration where the user is in. If the user is in the PRIMARY userstore, you need to configure this in SERVER_HOME/repository/conf/user-mgt.xml file under the particular userstore configuration. If the user is located in a secondary userstore [3], you need to add the property in the particular configuration file located in SERVER_HOME/repository/deployment/server/userstores directory.

Role claim in a Multi Valued attribute
<Property name="MultiAttributeSeparator">,</Property>
Role claim in a Single Valued attribute
<Property name="MultiAttributeSeparator">,,</Property>
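On the relying party side, a single-valued role attribute then has to be split back into individual roles on the configured separator. A minimal Python sketch; the role names below are illustrative, not values from the article:

```python
def parse_roles(attribute_value, separator=","):
    """Split a single-valued role attribute into the individual roles,
    dropping empty entries produced by stray separators."""
    return [role for role in attribute_value.split(separator) if role]


# A 5.0.0-style single-valued attribute carrying comma-separated roles
roles = parse_roles("admin,Internal/everyone,manager")
```

If the Identity Server instead sends a multi-valued attribute, each AttributeValue element is already one role and no splitting is needed.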

This question was raised on Stack Overflow [4], and I hope this article will be useful for those who face the same problem.


Tharindu Edirisinghe
Platform Security Team

Dimuthu De Lanerolle

Proxy with enrich mediator sends HTTP 200 together with a custom message

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="..." transports="https,http">
   <target>
      <inSequence>
         <log level="custom">
            <property name="it" value="** Its Inline Sequence ****"/>
         </log>
         <property name="HTTP_SC" value="200" scope="axis2"/>
         <enrich>
            <source type="inline" clone="true">
               <soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope">
                  <soapenv:Header/>
                  <soapenv:Body/>
               </soapenv:Envelope>
            </source>
            <target type="envelope"/>
         </enrich>
         <property name="messageType" value="application/soap+xml" scope="axis2"/>
         <header name="To" action="remove"/>
         <property name="RESPONSE" value="true" scope="default" type="STRING"/>
         <send/>
      </inSequence>
   </target>
</proxy>

Thilini Ishaka: How to write a SaaS app on Stratos

Wikipedia defines software as a service (SaaS) as "on-demand software provided by an application service provider" and "a software delivery model in which software and associated data are centrally hosted in the cloud."

Gartner defines SaaS as; "Software that is owned, delivered and managed remotely by one or more providers. The provider delivers software based on one set of common code and data definitions that is consumed in a one-to-many model by all contracted customers at anytime on a pay-for-use basis or as a subscription based on use metrics."

Typical Application Vs SaaS


SaaS Technical Requirements
1. Elastic
       Scales up and down as needed.
       Works with the underlying IaaS.
2. Self-service
       De-centralized creation and management of tenants.
       Automated Governance across tenants.
3. Multi-tenant
       Virtual isolated instances with near zero incremental cost.
       Implies you have a proper identity model.
4. Granularly Billed and Metered
       Allocate costs to exactly who uses them.
5. Distributed/Dynamically Wired
       Supports deploying in a dynamically sized cluster.
       Finds services across applications even when they move.
6. Incrementally Deployed and Tested
       Supports continuous update, side-by-side operation, in-place testing and
       incremental production.

A cloud platform offers,

  • Automated Provisioning.
  • Versioning.
  • Lifecycle management.
  • Infrastructure as a Service.
  • Virtualization.
  • Federated Identity Management.
  • Clustering.
  • Caching.
  • Billing and Metering.

WSO2 Stratos


  • WSO2 Stratos is WSO2’s Cloud Middleware Platform (CMP).
  • A complete SOA platform.
  • In private cloud or public cloud.
  • 100% Open Source under Apache licence.
  • Can run on top of any Cloud IaaS.
  • WSO2 StratosLive is the PaaS offering by WSO2.
  • Significantly ahead of the competition.
  • Stratos is the only 100% Open Source, Open Standards option.
  • Based on OSGi - modular, componentized, standard.

Cloud Native features supported in Stratos
  • Elasticity.
  • Multi-tenancy.
  • Billing and Metering.
  • Self Provisioning.
  • Incremental Testing.

Super Tenant SaaS Applications Vs Tenant SaaS Applications

Tenant SaaS applications do not have certain permissions and cannot access or modify other tenants' data, whereas super tenant applications have full control and permissions.

Tenant SaaS Web Applications
 Configure security in web.xml
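A minimal sketch of such a web.xml security configuration, using the standard Servlet descriptor elements; the resource name, URL pattern, and role name are illustrative:

```xml
<!-- Restrict the whole application to authenticated users in a given role -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>SaaSApp</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
</login-config>
<security-role>
  <role-name>admin</role-name>
</security-role>
```

The container then enforces authentication and role checks for every request before the application code runs.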


A super tenant SaaS application can access tenant-level user information. It uses org.wso2.carbon.context.PrivilegedCarbonContext to access tenant information such as the registry, cache, tenant manager, queues, etc.



Thilini Ishaka: BPM Use Cases

Industry: Cross-Industry
Automating the investment decision and expenditure approval process

Business Problem:
    Replace the manual, e-mail based process with an automated process
    Provide managers with insight into ongoing and completed projects

Business Solution:
    WSO2 BPS is capable of creating business processes to automate the investment procedures and expenditure approval processes

This particular use case can very easily be leveraged for other investment approval processes, e.g. marketing spend requests, travel requests, equipment purchase requests, etc.

Business Benefits
  •     Increase business efficiency and agility
  •     Increase reliability
  •     Increase accountability
  •     Improve process visibility by providing real-time statistics, and analytics on investments through dashboards

Health-care industry : Managing patient cases

Business Problem
  •     Manage patient cases
  •     Patient care levels are degrading
  •     Pressures to reduce cost of health
  •     Unable to link treatments to outcomes
Business Solution

You can create processes that facilitate patient care and coordinate resources across multiple treatment centers to optimize health.

Business Benefit

  • Improve quality of health care
  • Streamline communication
  • Determine treatment automatically based on previous data in the system
  • Provide consistent and timely access to patient information throughout the continuum of patient treatment, which is otherwise labor intensive and manual


Furthermore, streamlining order management in the telecommunications industry, streamlining the procurement process, managing purchases for promotions, and managing claims and returns in retail, as well as claims management in the financial sector, are some other real-world examples.

sanjeewa malalgoda: Failover throttle data receiver pattern for API Gateway (WSO2 API Manager Traffic Manager)

In this pattern we connect gateway workers to two traffic managers. If one goes down, the other can act as the traffic manager for the gateway. So we need to configure the gateway to push throttle events to both traffic managers. Please see the diagram below to understand how this deployment works. As you can see, the gateway node pushes events to both traffic manager node01 and node02, and the gateway also receives throttle decision updates from both traffic managers using the failover data receiver pattern.

Traffic Managers are fronted with a load balancer as shown in the diagram.

Then the admin dashboard/publisher server communicates with the traffic manager through the load balancer. When a user creates a new policy from the admin dashboard, that policy is stored in the database and published to one traffic manager node through the load balancer. Since we have a deployment synchronization mechanism for traffic managers, one traffic manager can update the other with the latest changes. So it's sufficient to publish the throttle policy to one node in an active/passive pattern (if one node is active, keep sending requests to it; if it's unavailable, send to the other node). If we plan to use an SVN-based deployment synchronizer, created throttle policies should always be published to the manager node (traffic manager instance), and workers need to synchronize with it.

The idea behind the failover data receiver endpoint is avoiding a single point of failure in the system. As the broker deployed in each traffic manager stores and forwards the messages, if that server goes down the entire message flow of the system goes down, no matter what other servers and functions are involved. Thus, in order to build a robust messaging system, it is mandatory to have a failover mechanism.

When we have a few Traffic Manager instances up and running in the system, each of these servers has a broker. If one broker goes down, the gateway automatically switches to the other broker and continues receiving throttle messages. If that one also fails, it tries the next, and so on. Thus the system as a whole has no downtime.

So in order to achieve high availability on the data receiving side, we need to configure JMSConnectionParameters to connect to multiple brokers, one running within each traffic manager. For that we need to add the following configuration to each gateway. If a single gateway communicates with multiple traffic managers, this is the easiest way to configure it.
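A sketch of such a failover connection, assuming the AMQP failover URL syntax of the embedded Message Broker; the element nesting, host names, ports, and credentials below are illustrative and should be checked against your product's configuration reference:

```xml
<!-- Gateway-side JMS connection with a failover broker list (values illustrative) -->
<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>
        amqp://admin:admin@clientid/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://tm1.local:5672?retries='5'%26connectdelay='50';tcp://tm2.local:5672?retries='5'%26connectdelay='50''
    </connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>
```

With a broker list like this, the gateway cycles through tm1 and tm2 with the configured retries and delays, which is exactly the failover behavior described above.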


Amani Soysa: Social Cognitive Approaches to Learning

The second theory in educational psychology I am going to discuss, and how it can be applied in teaching, is the social cognitive approach to learning. It mainly focuses on how learning happens via observation. The social learning theory was proposed by Albert Bandura, whose Bobo doll experiment demonstrated that children learn and imitate behaviours they have observed in other people. This theory can be used in educational psychology, where you can influence children and improve their learning and development via observation. In my earlier post I spoke about how behavioural theories treat learning as the result of associations formed by conditioning, reinforcement, and punishment. In this section I would like to discuss social learning theory, where learning can also occur simply by observing the actions of others, and how this can be used in classroom activities to educate children.

Social Cognitive Theory 

Social cognitive theory explains that social and cognitive factors, as well as behaviour, play an important role in learning. This matters greatly in schools and educational environments, because cognitive factors involve students' brain development and success in their education, and social factors include students observing their teachers and their behaviour.

In this example, we will look at how Bandura's social cognitive theory can be applied in a classroom environment to achieve success:

  1. Cognition influences behaviour: At school, a student named Alex develops strategies to solve a maths problem by thinking logically about how to solve it.
  2. Behaviour influences cognition: Working on the maths problem has led Alex to achieve good grades, and the resulting self-satisfaction feeds back into cognition.
  3. Environment influences behaviour: Alex enrolls in a higher maths program that gives tips on solving problems efficiently, and this improves Alex's maths skills.
  4. Behaviour influences environment: The higher maths class succeeds, and many students in Alex's class join the same program so that they too get tips to succeed in maths.
  5. Cognition influences environment: The teachers and principal feel the need and expectation to run the higher maths program.
  6. Environment influences cognition: The school establishes a resource centre where students can find books with tips on higher mathematics to improve their maths skills.
Bandura further discusses self-efficacy, also known as confidence in one's ability to do something, which is also very important in social cognitive theory: even if students have all the resources and models, they might not be motivated to observe, pay attention, or study hard if they lack confidence in learning.

Observational Learning 

There are three main concepts in observational (social) learning, and I will discuss each of them from a teacher's point of view, focusing on the educational environment: 1) students can learn from observation; 2) students' internal mental states matter in observational learning; 3) just because a student learns some behaviour does not mean his or her behaviour will change.

1. Students can learn via observation 

Observational learning involves acquiring skills, strategies, and beliefs by watching others, and it happens in classrooms through students observing models. There are three different types of models that can influence observational learning:

 a) A live model: a teacher or a peer student in the classroom, an actual individual demonstrating or acting out the behaviour.
 b) A verbal instruction model: a verbal instructor, lecturer, or announcer who describes the behaviour; this person does not have to be present while giving the instruction.
 c) A symbolic model: a real or fictional character displaying the behaviour in books or documentary videos; in educational settings, for example, characters from history books or scientists on TV programs.

2. Mental states are important to learning 

In observational learning, the mental state of the student is also very important: if the student is not confident about learning, he or she will not learn at all. Bandura calls this intrinsic reinforcement. For a student to engage in observational learning, there should be an internal reward, such as pride, satisfaction, or a sense of accomplishment. This emphasis on internal thoughts and cognition helps connect learning theories to cognitive developmental theories.

3. Learning does not necessarily lead to a change in behaviour

Unlike in behavioural learning, in observational learning, learning does not necessarily lead to a permanent change in behaviour; new information can be learned without any new behaviour being demonstrated.

Bandura described this with a modeling process, which highlights the factors that make learning via observation successful. These factors are attention, retention, reproduction, and motivation.

In order for a student to learn, he or she must pay attention; anything that distracts their attention will have a negative effect on learning. Therefore, teachers, as models, must make sure students are paying attention while they teach, and keep the lessons interesting so that students pay more attention to observing the teacher as the model.

For a student to observe and learn, the student must be able to store the information. Therefore, teachers must ensure that students have the capacity to retain what they learn, and give enough breaks when modeling.

Once the student has paid attention to the teacher or the model and retained the information, teachers should make sure students actually perform the behaviour. Whether it is a language class or any other observed skill, students must practice what they learn in order to improve.

Finally, in order for observational learning to be successful, the student must be motivated to pay attention, retain, and reproduce. If students are not motivated enough, they will not learn properly. Reinforcement and punishment play an important role in motivation: experiencing these motivators directly can be highly effective, but so can observing others experience some type of reinforcement or punishment. For example, if you see another student rewarded with extra credit for getting to class on time, you might start to show up a few minutes early each day.

How do you use observational Learning in your Classroom?

Examples of Social Learning at classroom 

Even though most observational learning is not done on purpose, most children learn by observing their teachers, especially kindergarten and preschool children. Therefore, it is always important that teachers be good role models for their students. These are some of the outcomes of observational learning:
  1. Students learn how to solve maths problems by watching their teacher solve them, and they learn to solve similar problems using the same approach. 
  2. Students learn how to speak a foreign language by listening to their teacher; as a result, the student picks up the teacher's accent. 
  3. Students learn how to write by watching how the teacher writes. 
  4. Students can learn arts and crafts by watching their art teacher and learning handwork. 
  5. Students learn to keep their desks clean and tidy by looking at their teacher's desk. 

These are a few examples of social learning that happen in the classroom; teachers can also deliberately use this theory to educate children.

● For example, teachers can invite a popular athlete to the classroom to give the students a talk, so that students are influenced by the athlete, get motivated, and learn skills from them; the athlete can also become a role model for students who are into athletics. 

● Another example: teachers can help weaker students (students who are behind schedule) by having them work with brighter peers, so that the weaker students can observe and learn from them. 

● Albert Bandura's social learning theory can also be applied in the classroom with the use of technology. Through educational computer programs and educational documentaries, children learn and change their behaviour by observing the characters in movies and educational games. Good examples are the popular children's shows "Dora the Explorer" and "Sesame Street": many preschoolers learn language and other soft skills by watching how Dora plays, learning through observation. 

These are a few techniques that can be used in the classroom with observational learning. However, it is important that teachers make sure the models for these observations are appropriate and genuinely influential. 

Advantages of Observational Learning 

There are many advantages to observational learning, as most children learn the basics at a very early age through observation. Below is a list of advantages you can gain in the classroom through observational learning. Its main advantage is that it mostly lacks explicit objectives and a traditional formal learning structure: students learn without realising it, through practical, real-life experience, and gain life skills at a very young age. 

● Learning becomes fun, and students can learn things without effort.
     ○ Since observational learning is unlike traditional learning, students learn things more naturally and it does not take much effort to learn them. It further creates a flexible learning environment when applied in classrooms, and unlike behavioural approaches, children are able to explore and develop creativity. As a result, children learn observation skills, problem-solving skills, and creativity. 

● Encourages social interactions
    ○ Since observational learning is about observing others and learning from them, children acquire language and communication skills, which are the building blocks of social interaction. Observational learning also encourages pretend play, which improves developmental skills in early childhood. Further, it builds character and self-esteem so that children can interact with society through diverse activities. 

● Improve their behaviour and quality of life
   ○ Teachers can discipline their students via observational learning. Especially when there are students with behavioural issues, teachers can improve their behaviour by presenting models the students appreciate; the students will eventually fix their behaviour by looking at the models they adore (these can be fictional or real-life models). 

● Improve memory and brain development
   ○ Observational learning exercises retention, and retention improves memory and cognition. When students learn from their environment by retaining the information they observe, it enhances their memory capacity. 

● Expands and exchange knowledge
    ○ Observational learning helps students expand their knowledge by observing different roles. For example, if a classroom supports observational learning, children learn via various models, including their peers. This helps them gain new information and also exchange it with their peers. Further, if technology is used in observational learning, students can gain knowledge at an international level. 

Disadvantages of Observational Learning 

Even though observational learning has its benefits, there are some drawbacks when it is done poorly in the classroom. 

● Poor Role Models Demonstrate Poor Behavior
    ○ If role models set bad examples, such as teachers showing aggression in front of children, or if teachers show war-related documentaries, children learn aggression via observation (just like in Bandura's Bobo doll experiment). Therefore, teachers must be very careful how they behave in front of students, as students tend to learn a lot from them. 

● Undesirable Models May Reinforce Behavior
    ○ When students work with their peers, they can pick up bad behaviours from them and come to see those behaviours as socially acceptable. This can happen via technology as well as through school staff. Bad behaviour and bad practices can seem very desirable to children and influence them in a negative way. 

● Evidence of Learning is Not Always Visible
    ○ In observational learning, students learn by observing, retaining, and reproducing. However, the outcome of learning is not always visible. Whether the learned behaviour is good or bad, it may not show until something triggers it. 

● Observational Learning Requires Motivation 
   ○ Unlike behavioural tools, observational learning requires motivation from within. If students are not motivated to reproduce what they learn, they will stop at the attention stage, and for some students the available role models may simply not be compelling. Furthermore, for students with disabilities such as ADHD or autism, observational learning may not be a good tool to use. 

In conclusion, observational learning can be a powerful tool for educating children. Most children learn from observation; therefore, it is important for teachers to make sure students are exposed to strong role models who can influence them in a positive manner. Teachers, as educators, must also encourage collaborative learning, incentives, and a supportive environment, since learning happens via the environment. Finally, teachers must not misuse this observational tool: both good and bad can be observed by children, and careless use can lead to socially unacceptable behaviours.

Amani SoysaBehavioural approaches to Learning

Behavioural theories of educational psychology focus on learning via behavioural changes: as a person learns, he or she alters the way they perceive their environment, which helps them process new information. Behavioural studies focus on two main areas in educational psychology: 1. classical conditioning, founded by John B. Watson, and 2. operant conditioning, founded by B. F. Skinner. Classical conditioning concepts are not widely used in classroom environments; therefore, I will focus more on operant conditioning concepts and how they can be used for educational purposes to motivate students and develop their capabilities to the maximum.

Operant Conditioning used as a teaching technique

Discipline is key to the success of a child's development and learning, and one of the most popular approaches to disciplining a child is the use of operant conditioning. Operant conditioning, also known as instrumental learning, was developed by B. F. Skinner. It is a method of learning via reward and punishment. Operant conditioning can be used to reward good behaviour through reinforcement and to minimize bad behaviour through punishment. In operant conditioning, if an action is followed by reinforcement (either positive or negative), it is more likely to occur again in the future. For example, if a student cleans the classroom on time and you give him a gold star (a token, as a positive reinforcement), the student is likely to clean the classroom again in order to be praised by the teacher. Conversely, if an action has bad consequences and results in punishment, the action is likely to be weakened. For example, if a student does not do his or her homework and you give them extra hours of school (positive punishment), it is more likely that the student will do the homework on time next time in order to avoid punishment. In this way you can motivate your students towards better behaviour in classrooms using operant conditioning.

Here are some behaviours which can be improved by reinforcement: 

Behavior | Consequence | Future Behavior

Positive Reinforcement
  Student asks a good question | Teacher praises the student | Student will continue asking more questions
  Student does homework on time | Teacher gives a gold star | Student will always do the homework on time

Negative Reinforcement
  Student writes a good report | Teacher gives less homework | Student will continue writing good reports
  Student comes to the classroom on time | Teacher finishes the lesson early | Students will continue to come to class on time

Here are some student behaviours that can be decreased by punishment:

Behavior | Consequence | Future Behavior

Positive Punishment
  Student interrupts the class | Teacher yells at the student | Student will minimize interruptions
  Student hands in a bad report | Teacher gives a black star | Student will try to do a better report next time

Negative Punishment
  Student is late for class | Teacher takes away his play time for classroom work | Student will try to come to class on time
  Student texts during class | Teacher takes away his mobile phone | Student will not use his mobile in class

Those are some of the common uses of operant conditioning in the classroom. However, when applying Skinner's theory, it is important to consider the age of the student when rewarding or punishing behaviour. There are various types of reinforcers and punishments that can be used to increase or decrease behaviour, according to the behaviour the teacher desires in a classroom setting. They can appear in different forms: 1. consumable (e.g., candy), 2. social (e.g., praise or yelling), 3. activity (more time on the playground), 4. exchangeable (tokens or stars), and 5. tangible (getting the best seat). Among these, the most educationally valuable and relevant are activity reinforcers, as they can carry educational value, such as playing an educational game or watching an educational video. However, whatever the method, it is important to give the consequence right after the behaviour.

There are various types of reinforcement schedules that can be used in classrooms, namely fixed ratio, fixed interval, variable ratio, and variable interval. In an educational setting, the two variable schedules are the best for maintaining the desired behaviour, as they are unpredictable. For example, if students are given the chance to watch an educational movie after handing in their assignments on time, they will be more motivated if the outcome is not always the same (variable ratio), or if the gap between handing in the assignment and receiving the reward is unpredictable (variable interval).

Advantages of using operant conditioning in classroom
  • When a child is rewarded for good behaviour, the child gets motivated, keeps up that behaviour, and eventually absorbs the good behaviour. 
  • When a child is punished for bad behaviour, he or she will stop doing it. 
  • It is effective for shaping and disciplining preschool and kindergarten children. 
  • Internal reinforcement gives students self-satisfaction: getting a maths problem right motivates them to do more and more maths, and eventually the student becomes proficient at it (it provides students an incentive to learn). 
  • Deliberate use by teachers lets students respond to antecedents without fully realising their behaviour is being influenced. 
  • It can easily be used as a tool to discipline and gain the attention of kids with disabilities such as ADHD, autism, or conduct disorder.
Disadvantages of using operant conditioning in classrooms
  • It can decrease a child's creativity and independent thinking, as he or she may be scared of punishment. 
  • It will not work for everyone, and some children might take punishment as reinforcement (e.g., a student who does not get enough attention may do bad things to get attention from the teacher and enjoy being punished). 
  • A student can pretend to stop the behaviour just to get the reward or avoid the punishment; once the reward is given, they may go back to the bad behaviour. 
  • If rewards or punishments are misused, the outcomes will be bad. 
  • Operant conditioning does not take cognitive factors into account. 

In conclusion, operant conditioning is a very effective tool for teachers, especially for shaping children towards good behaviour and helping them learn. It can also be used with kids with disabilities such as autism or ADHD, to discipline them and gain their attention. However, it must be used cautiously and consistently. Since behaviour management differs from student to student, punishment and rewarding must be done at the individual level, in individual circumstances.

Thilini IshakaStart a Process Instance using the BPMN REST API

Request endpoint to start a BPMN process instance

Request type: 

Request Body:  

Request body (start by process definition Id):

   "variables": [
        "value":"This is a variable"

Request body (start by process definition key):

   "variables": [
        "value":"This is a variable"
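Written out in full, a start-by-key request body would look like the sketch below; the definition key "myProcess" and variable name "myVar" are illustrative placeholders, not values from the original post:

```json
{
  "processDefinitionKey": "myProcess",
  "variables": [
    { "name": "myVar", "value": "This is a variable" }
  ]
}
```

To start by process definition id instead, replace "processDefinitionKey" with "processDefinitionId" and supply the id of the deployed definition.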

Starting the process instance via an ESB proxy service (with PayloadFactory Mediator)

<?xml version="1.0" encoding="UTF-8"?>

<proxy xmlns=""
       transports="https http"


<!-- Calling the request endpoint -->
         <http method="POST"

<!-- The PayloadFactory mediator transforms or replaces the contents of a message, configuring the format of the request/response and mapping it to the arguments provided. -->

         <payloadFactory media-type="json">
   "variables": [
        "value":"This is a variable"

<!-- Passing basic-auth credentials to invoke the secured service in BPS. Another way is to get this information from the secure vault. -->

         <property name="Authorization"
                   expression="fn:concat('Basic ', base64Encode('admin:admin'))"

         <property name="Content-Type"
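Since the proxy configuration above has been truncated, here is a minimal, self-contained sketch of the same idea: a proxy that builds the JSON body with a PayloadFactory mediator, sets the Basic Auth and Content-Type headers, and POSTs to the BPMN REST endpoint. The endpoint URL and port, process definition key, variable name, and admin:admin credentials are all assumptions to adapt to your environment:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="" name="StartProcessProxy" transports="https http" startOnLoad="true">
   <target>
      <inSequence>
         <!-- Build the JSON request body (definition key and variable name are placeholders) -->
         <payloadFactory media-type="json">
            <format>{"processDefinitionKey":"myProcess","variables":[{"name":"myVar","value":"This is a variable"}]}</format>
            <args/>
         </payloadFactory>
         <!-- Basic Auth for the secured BPS service; could also come from the secure vault -->
         <property name="Authorization" expression="fn:concat('Basic ', base64Encode('admin:admin'))" scope="transport"/>
         <property name="Content-Type" value="application/json" scope="transport"/>
         <!-- POST to the BPMN REST endpoint (host/port are assumptions) -->
         <send>
            <endpoint>
               <http method="POST" uri-template="https://localhost:9445/bpmn/runtime/process-instances"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <respond/>
      </outSequence>
   </target>
</proxy>
```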

Ayesha DissanayakaEnable debug logs for a jaggery app

Go to [Product-Home]/repository/deployment/server/jaggeryapps/<app-folder>
    e.g., to enable debug logs for the WSO2 G-Reg Publisher app: [GREG-HOME]/repository/deployment/server/jaggeryapps/publisher

Open the jaggery.conf file

Modify the "logLevel": "info" entry to "logLevel": "debug"

Restart the server and now you can see debug logs in the console.
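For example, after the change the relevant part of jaggery.conf would read as below (the displayName entry is illustrative; keep the rest of your file as it is):

```json
{
    "displayName": "publisher",
    "logLevel": "debug"
}
```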


Amani SoysaTechnology For Education ..

For the next few years, I will be focusing more on technology for education and going to discuss how technology can play a major role in early childhood education. Also I will be talking about education psychology, technology for  children with special needs and a whole lot about Autism.
The main reason for this change is because, as educators/care givers/human beings, we need to be aware of  children with special needs and see the world from their point of view and always help them to achieve their best. For the next couple of years I will be researching more about educational platforms that can improve cognition, social and  communication, emotional intelligence and linguistic skills in special needs child. Also how they can learn how to learn and see how we can improve the traditional education system for the betterment of our children.

Kamidu Sachith PunchihewaConfigure Device communication when using an existing SSL Certificated with Enterprise Mobility Manager

The two main components of WSO2 Enterprise Mobility Manager are mobile device management and mobile application management. Setting up WSO2 EMM can be done by following the "Getting Started" guide in the documentation. This article mainly focuses on how to obtain and configure certificates for your own domain.

Enrolled devices and WSO2 Enterprise Mobility Manager communicate using the HTTPS protocol. This ensures that the private and sensitive data stored on the mobile device cannot be retrieved by a third party or unauthorized personnel. All the communication carried out between devices, APNS, and the EMM server is based on certificates included in the key-store files with the extension "jks". These security features are critical since EMM supports both corporate-owned (COPE) and personal (BYOD) device management. The "Configuring the product" guide provides the steps to configure the EMM server for use in your local subnet, where the server and the devices use an SSL certificate issued by the built-in Certificate Authority of the EMM server.

Communications between devices and EMM server

WSO2 EMM server consists of the following components:
  • SCEP server component.
  • CA server component.
  • Device Management Component.

The iOS device acts as a SCEP client and sends the SCEP request to the server. For enrollment purposes, this communication requires a certificate, which is generated by the CA server component of EMM. The iOS device generates a private/public key pair and sends a certificate signing request to the CA, where the CA component generates the public key certificate and stores the public key for encryption to be used later.

There is also communication between iOS devices and APNS, and between Android devices and GCM, for policy monitoring and performing operations. All devices communicate with the EMM server using the agent applications, and all of these communications must be secured using certificates.
You can see the communication flow in Figure 1 below.

In order to secure the communication between the components represented in Figure 1, you have to obtain an SSL certificate for your domain from a Certificate Authority. When the server is hosted under a public domain, the obtained SSL certificates need to be included in the key stores.

Obtaining an SSL Certificate for your domain

You can follow the “Get SSL on my website” guide for more information on how to obtain SSL certification.

Configuring for IOS device management

Configuring iOS device management and communication is a three-step process:
  1. Obtaining a signed CSR from WSO2.
  2. Configuring EMM server for IOS device management.
  3. Configuring the IOS client.

Obtaining a signed CSR from WSO2

Create a Certificate Signing Request (CSR) file from the EMM server using your private key. You can use commands given below to generate the CSR file:

openssl genrsa -des3 -out <Your_Private_Key_File> 2048
openssl req -new -key <Your_Private_Key_File> -out <You_CSR_File>

Make sure to create both the Your_Private_Key_File and Your_CSR_File files with the .pem extension.

Provide correct information for the prompted questions about your organization and the project. Make sure to provide the actual organization name, as this is a required field. The email address provided should be valid, as it will act as the identifier of your CSR request, used to identify you if the CSR expires. The common name is the fully qualified domain name of your server. Make sure the information you provide is accurate, since the artifacts provided will be bound to the given domain name: iOS devices can only be managed by the server hosted under the provided host name.
Submit the CSR request through the "Obtain the signed CSR file" form. Make sure to enter the same information as in the CSR request when filling in the form.
You will be provided with the following artifacts, which are required to configure the EMM server to manage iOS devices:
  1. The signed CSR file in .plist format.
  2. Agent source code.
  3. P2 repository, which contains the feature list.

Please refer “Obtaining the Signed CSR File” guide for more information on obtaining a signed CSR file.

Configuring EMM server for IOS device management

iOS server configuration is a complex and prolonged process, which can be broken into the following steps. By following these steps in order you can easily configure the EMM server for iOS device management:

  1. Install the iOS features on the EMM server.
  2. Configure general iOS server settings.
  3. Generate the MDM APNS certificate.

Installing the iOS features on the EMM server

Start the EMM server in order to install the features from the P2 repository obtained via the CSR request.
Navigate to the Carbon console at <YOUR_DOMAIN>/carbon, go to the Configure tab, and select the Features option from the list.
The iOS-related features are available in the P2 repository provided to you with the signed CSR. Install all three features given. After the installation of the features is complete, stop the EMM server and proceed to the following location: <EMM_HOME>/repository/conf
You will find a new configuration file, "ios-config.xml", in the directory. Modify the "iOSEMMConfigurations" accordingly. Please refer to the "Installing WSO2 EMM iOS Features via the P2 Repository" guide for more information.
Configure general iOS server settings.
In order to set up your server for iOS, follow the instructions given in the "General iOS Server Configurations" guide up to Step 5.
After completing Step 5, follow the instructions below:
  • Convert the downloaded SSL certificates from your vendor to .pem files.
openssl x509 -in <RA CRT> -out <RA CERT PEM>
openssl x509 -in your-domain-com-apache.crt -out your-domain-com-apache.pem
openssl x509 -in your-domain-com-ee.crt -out your-domain-com-ee.pem
  • Create a certificate chain with the root and intermediate certifications.
cat your-domain-com-apache.pem your-domain-com-ee.pem >> clientcertchain.pem
cat your-domain-com-apache.crt your-domain-com-ee.crt >> clientcertchain.crt
  • Export the SSL certificate chain file as a PKCS12 file with "wso2carbon" as the alias.
openssl pkcs12 -export -out <KEYSTORE>.p12 -inkey <RSA_key>.key -in ia.crt -CAfile clientcertchain.pem -name "<alias>"
openssl pkcs12 -export -out KEYSTORE.p12 -inkey ia.key -in ia.crt -CAfile clientcertchain.pem -name "wso2carbon"

After following the steps above, resume the configuration from Step 7.b in the "General iOS Server Configurations" guide.
Note that Step 6 and Step 7.a should be skipped, since the server configuration mentioned in those steps is for a public domain with already-obtained SSL certificates.

Generate the MDM APNS certificate.
Go to the Apple Push Certificate Portal, upload the .plist file provided with the signed CSR from WSO2, and generate the MDM certificate. Follow the instructions given in the "Generate MDM APNS Certificate" guide in order to convert the downloaded certificate to .pfx format.

After completing the instructions given, you can proceed with the IOS platform configuration as instructed in the “IOS Platform Configuration” guide.

Configuring Android device management

To enable secure communication between Android devices and your EMM server, please follow the "Android Configurations" guide. You can skip the certificate generation described in Step 1 under "Generating a BKS File" and move directly to Step 2, since you already completed that when configuring iOS device communication.

Configuring Windows device management

There are no additional configurations needed to enable Windows device management.

Lakshani Gamage[WSO2 App Manager] How to Publish Webapp to Multiple Tenant Stores.

In WSO2 App Manager, by default, web apps are published to its own app store. But you can configure it to publish apps to external stores as well. 
In this post, we look at how to do that.
  1. Log in to Management Console.
  2. Go to Main -> Browse ->, navigate to /_system/governance/appmgt/applicationdata/external-app-stores.xml, and click "Edit As Text". Add each external app store to which you want to publish apps inside the <ExternalAPPStores> element.
    <ExternalAPPStore id="Store1" type="wso2" className="org.wso2.carbon.appmgt.impl.publishers.WSO2ExternalAppStorePublisher">

    <ExternalAPPStore id="Store2" type="wso2" className="org.wso2.carbon.appmgt.impl.publishers.WSO2ExternalAppStorePublisher">

  4. Create a webapp and publish it. Then go to the webapp overview page. All the external app stores added in step 2 are shown on the "External Stores" tab.
  5. Select the stores and click Save to publish the webapp to the selected stores.  
  6. From WSO2 App Manager 1.2.1 onwards, if you want to publish web apps to an external store, you need to follow this step too: go to Main -> Browse -> in the Management Console, navigate to /_system/config/store/configs/store.json, click "Edit As Text", and set "publicVisibility" to true.
  7. Then go to the relevant external store. You will see the app there with an "Ad" label; here "Ad" stands for "Advertised".
  8. This app's life cycle status can be changed by any publisher, but the app itself can only be edited by the original publisher.
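The <ExternalAPPStore> entries in step 2 above have lost their child elements. A complete entry typically also carries the store's display name, endpoint, and credentials, along the lines of the sketch below; the child element names and values here are assumptions, so verify them against the external-app-stores.xml template shipped with your App Manager version:

```xml
<ExternalAPPStore id="Store1" type="wso2"
                  className="org.wso2.carbon.appmgt.impl.publishers.WSO2ExternalAppStorePublisher">
    <DisplayName>Store1</DisplayName>
    <Endpoint>https://localhost:9444/store</Endpoint>
    <Username>admin</Username>
    <Password>admin</Password>
</ExternalAPPStore>
```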

Prabath AriyarathnaEnable SecureVault Support for - WSO2 ESB - MB 3.0

We cannot use ciphertool to automate the encryption process for selected elements in the file, because we can only specify XPath notation there; however, we can still use the manual process.

Sample [ESB_home]/repository/conf/ file
# register some connection factories
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist

# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue

# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
1. Enable secure vault in the ESB
sh -Dconfigure

2. Go to [ESB_home]/bin and execute the following command to generate the encrypted value for the clear text password.
3. It will prompt for the following input value. (Answer: wso2carbon)
[Please Enter Primary KeyStore Password of Carbon Server : ]
4. It will then prompt for a second input value.
     (Answer: According to our property file, the plain text is "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'".)

Encryption is done Successfully
Encrypted value is :cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZDxVeoa6xS6CFtU

5. Open the file, which is under [ESB_home]/repository/conf/security, and add the following entry.
6. Open the [ESB_home]/repository/conf/ file and update the key/value of the connection factory field.
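The entry itself did not survive in this post. Assuming the file in question is cipher-text.proper­ties (the standard secure-vault alias file; verify against your version's documentation), the entry maps an alias to the encrypted value produced in step 4, e.g.:

```
# [ESB_home]/repository/conf/security/cipher-text.proper­ties
# alias = encrypted value (as printed by the ciphertool)
connectionfactory.QueueConnectionFactory=cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZDxVeoa6xS6CFtU
```

The plain-text connection URL in the properties file is then replaced so that it is resolved through this alias at runtime.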

sanjeewa malalgodaDeploy multiple traffic managers with load balance data publisher failover data receiver pattern.

Sending all the events to several receivers

[Image: traffic-manager-deployment-LBPublisher-failoverReciever.png]
This setup involves sending all the events to more than one Traffic Manager receiver. This approach is mainly used when other servers analyze events together with the Traffic Manager servers. You can use this functionality to publish the same event to both servers at the same time. This is useful for performing real-time analytics with CEP while also persisting the data and performing complex, near-real-time analysis with DAS on the same data.

Similar to load balancing between a set of servers, in this scenario you need to modify the Data Agent URL. You should include all DAS/CEP receiver URLs within curly braces ({}), separated by commas, as shown below.

<ReceiverUrlGroup>{tcp://},{tcp://} </ReceiverUrlGroup>
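The host and port values were lost from the snippet above. A filled-in sketch, using hypothetical hosts tm1/tm2 and assuming the default binary receiver port 9611, would look like:

```
<ReceiverUrlGroup>{tcp://tm1:9611},{tcp://tm2:9611}</ReceiverUrlGroup>
```

Each brace group here is a separate destination, so every event is published to both receivers.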

sanjeewa malalgodaDeploy multiple traffic managers with load balance data publisher failover data receiver pattern.

Load balancing events to sets of servers  

[Image: traffic-manager-deployment-LBPublisher-failoverReciever(3).png]

In this setup there are two groups of servers, referred to as Group A and Group B. You can send events to both groups. You can also carry out load balancing within each set, as described in load balancing between a set of servers. This scenario is a combination of load balancing between a set of servers and sending an event to several receivers. An event is sent to both Group A and Group B. Within Group A, it is sent either to Traffic Manager-01 or Traffic Manager-02; similarly, within Group B, it is sent either to Traffic Manager-03 or Traffic Manager-04. In this setup you can have any number of groups and any number of Traffic Managers within a group, as required, by mentioning them accurately in the server URL.

Similar to the other scenarios, you describe this as a receiver URL. The groups should be mentioned within curly braces, separated by commas. Furthermore, each receiver belonging to a group should be within the braces, with the receiver URLs in a comma-separated format. The receiver URL format is given below.

           <Type>Binary</Type>
           <ReceiverUrlGroup>{tcp://,tcp://},{tcp://,tcp://}</ReceiverUrlGroup>
           <AuthUrlGroup>{ssl://,ssl://},{ssl://,ssl://}</AuthUrlGroup>
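As an illustration with hypothetical hosts tm1..tm4 (and assuming the default 9611 receiver / 9711 auth ports), the two-group configuration might look like:

```
<Type>Binary</Type>
<ReceiverUrlGroup>{tcp://tm1:9611,tcp://tm2:9611},{tcp://tm3:9611,tcp://tm4:9611}</ReceiverUrlGroup>
<AuthUrlGroup>{ssl://tm1:9711,ssl://tm2:9711},{ssl://tm3:9711,ssl://tm4:9711}</AuthUrlGroup>
```

Here each brace group receives every event, while the comma-separated receivers inside a group share the load.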

sanjeewa malalgodaDeploy multiple traffic managers with load balance data publisher failover data receiver pattern.

Load balancing events to a set of servers

[Image: traffic-manager-deployment-LBPublisher-failoverReciever(1).png]

This setup load balances published events across all Traffic Manager receivers. Load-balanced publishing is done in a round-robin manner, sending each event to each receiver in circular order without any priority. It also handles failover: if Traffic Manager Receiver-1 is marked as down, the Data Agent sends the data only to Traffic Manager Receiver-2 (and, if there are more nodes, round-robins across all of them). When Traffic Manager Receiver-1 becomes active again after some time, the Data Agent automatically detects it, adds it back into operation, and again load balances between all receivers. This functionality significantly reduces data loss and provides more concurrency.

For this functionality, include the server URLs in the Data Agent as a general DAS/CEP receiver URL, entered in a comma-separated format.
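The example URL did not survive in this post. A sketch with hypothetical hosts tm1/tm2 (ports assumed to be the default 9611 receiver / 9711 auth ports) would be:

```
<ReceiverUrlGroup>tcp://tm1:9611,tcp://tm2:9611</ReceiverUrlGroup>
<AuthUrlGroup>ssl://tm1:9711,ssl://tm2:9711</AuthUrlGroup>
```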


sanjeewa malalgodaDeploy multiple traffic managers with fail over data publisher fail over data receiver pattern.

As discussed earlier, the gateway data receiver needs to be configured as a failover data receiver, while the data publisher can be configured in either a load-balance or failover pattern. In this section we will see how we can publish throttling events to the Traffic Manager in the failover pattern.

Failover configuration


When using the failover configuration for publishing events to the Traffic Manager, events are sent to multiple Traffic Manager receivers in a sequential order based on priority. You can specify multiple Traffic Manager receivers so that events can be sent to the next server in the sequence if they were not successfully sent to the first server. In the scenario depicted in the above image, events are first sent to Traffic Manager Receiver-1. If it is unavailable, events will be sent to Traffic Manager Receiver-2. Further, if that is also unavailable, events will be sent to Traffic Manager Receiver-3.

           <ReceiverUrlGroup>tcp:// | tcp://</ReceiverUrlGroup>
           <AuthUrlGroup>ssl:// | ssl://</AuthUrlGroup>

sanjeewa malalgodaFail over Traffic Manager data receiver pattern for API Gateway.

The idea behind the failover data receiver endpoint is to avoid a single point of failure in the system. Since the broker deployed in each Traffic Manager stores and forwards the messages, if that server goes down the entire message flow of the system goes down, no matter what other servers and functions are involved. Thus, in order to build a robust messaging system, it is mandatory to have a failover mechanism.

When a few instances of Traffic Manager servers are up and running in the system, each of these servers generally has a broker. If one broker goes down, the gateway automatically switches to another broker and continues receiving throttle messages. If that one also fails, it tries the next, and so on. Thus the system as a whole will not have downtime.

So in order to achieve high availability on the data receiving side, we need to configure JMSConnectionParameters to connect to the multiple brokers running within each traffic manager. If a single gateway communicates with multiple traffic managers, this is the easiest way to configure it.

To configure this, add the following configuration to each gateway worker. The gateway will then pick up updates from any of the traffic managers, even if some of them are not functioning.
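The configuration block itself did not survive in this post. Based on the standard APIM 2.0 gateway throttling configuration, a failover JMS connection sketch (hostnames traffic-manager-1/2 and ports are placeholders) looks roughly like:

```
<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientid/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://traffic-manager-1:5672;tcp://traffic-manager-2:5672'</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>
```

The failover='roundrobin' and brokerlist parameters in the AMQP URL are what make the gateway cycle through the listed brokers when one is unreachable.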


Sohani Weerasinghe

Using a Class Mediator to process JSON payload


This blog post describes how to use a class mediator to process a JSON payload.

sample JSON:

{
  "query": "InformationSystem[@id\u003d\"22174\"]",
  "result": [
    {
      "id": [],
      "description": ["test 123"],
      "Application Type": ["Sample Application"],
      "{Sample} ID": []
    }
  ]
}

You can use WSO2 Developer Studio to create a mediator project. Since I have used Jackson as the library, I have added the below dependencies to the pom.xml of the mediator project.
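The dependency list did not survive in this post. A plausible minimal entry for the Jackson calls used below (the version number is an assumption; align it with your project) would be:

```
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.10</version>
</dependency>
```

jackson-databind pulls in jackson-core and jackson-annotations transitively.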




The code is as follows:

package org.wso2.sample;

import java.util.Iterator;
import java.util.Map;

import org.apache.synapse.MessageContext;
import org.apache.synapse.commons.json.JsonUtil;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.fasterxml.jackson.databind.node.TextNode;

public class JSONClassMediator extends AbstractMediator {

    public static final String RESULT_NODE = "result";
    public static final String QUERY_NODE = "query";

    public boolean mediate(MessageContext context) {
        try {
            // Get the JSON payload as a string
            String jsonPayloadToString = JsonUtil.jsonPayloadToString(
                    ((Axis2MessageContext) context).getAxis2MessageContext());

            JsonFactory factory = new JsonFactory();
            ObjectMapper mapper = new ObjectMapper(factory);

            JsonNode rootNode = mapper.readTree(jsonPayloadToString);
            // Create a new root object
            ObjectNode rootObject = mapper.createObjectNode();
            // Process the query node
            TextNode queryNodeValue = (TextNode) rootNode.get(QUERY_NODE);
            rootObject.put(QUERY_NODE, queryNodeValue.asText());
            // Process the result node
            ArrayNode resultNode = (ArrayNode) rootNode.get(RESULT_NODE);
            JsonNode resultChildNode = resultNode.get(0);
            Iterator<Map.Entry<String, JsonNode>> fieldsIterator = resultChildNode.fields();
            ObjectNode newNode = mapper.createObjectNode();
            ArrayNode resultArray = mapper.createArrayNode();
            while (fieldsIterator.hasNext()) {
                Map.Entry<String, JsonNode> field =;
                // Sanitize the element name: replace spaces with dashes and drop {, } and %
                String newKey = field.getKey().replace(" ", "-").replace("{", "").replace("}", "").replace("%", "");
                newNode.set(newKey, field.getValue());
            }
            resultArray.add(newNode);
            rootObject.set(RESULT_NODE, resultArray);
            // Set the new JSON payload
            String transformedJson = rootObject.toString();
            JsonUtil.newJsonPayload(((Axis2MessageContext) context).getAxis2MessageContext(), transformedJson,
                    true, true);
        } catch (Exception e) {
            handleException("error in processing the json payload", e, context);
            return false;
        }
        return true;
    }
}

Proxy configuration is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="" name="sampleProxy" transports="https,http" startOnLoad="true">
   <target>
      <inSequence>
         <class name="org.wso2.sample.JSONClassMediator"/>
         <log level="full">
            <property name="************RESULT************" expression="$body"/>
         </log>
      </inSequence>
      <endpoint key="conf:/repository/sampleEP"/>
   </target>
</proxy>

The transformed payload contains the sanitized element names, for example:

      <description>test 123</description>
      <Application-Type>Sample Application</Application-Type>

Sohani Weerasinghe

Using a Script Mediator to process JSON payload

This blog post describes how to use a script mediator to process a JSON payload.


When the JSON payload has spaces or special characters in element names, you can replace those spaces and characters as required.

sample JSON:

{
  "result": [
    {
      "id": [],
      "description": ["test 123"],
      "Application Type": ["Sample Application"],
      "{Sample} ID": []
    }
  ]
}

Please follow the steps below:

1. In the proxy service configuration, first you need to assign the JSON payload to a property as below, (this will assign the array inside the 'result')

       <property name="JSONPayload" expression="json-eval($.result[0])" />
2. Then define the script mediator as below

       <script language="js" key="conf:/repository/transform.js" function="transform" />
Based on this, you need to create a registry resource (.js) in the registry (e.g. at conf:/repository/) and add the JS script to that file.

3. In this sample we are going to remove the spaces and curly braces from the element names, hence we have used the below script.

function transform(mc) {
    var obj = mc.getProperty('JSONPayload'); // gets the property from the message context
    var jsonObject = JSON.parse(obj); // parse the JSON string into an object

    var jsonArray = {
        result: [] // initializing the array
    };
    var item = {};
    for (var prop in jsonObject) {
        var att = prop.replace(/ /g, ''); // replacing white spaces
        att = att.replace(/[{}]/g, ''); // replacing curly braces
        item[att] = jsonObject[prop];
    }
    jsonArray.result.push(item); // adds the sanitized object to the defined array
    mc.setPayloadJSON(jsonArray); // sets the created json payload to the message context
}
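The sanitization logic itself can be tried outside the ESB. The following standalone sketch (plain JavaScript, no Synapse APIs; the sample object is illustrative) applies the same replacements:

```javascript
// Standalone version of the key-sanitizing step (no ESB required)
function sanitizeKeys(jsonObject) {
    var item = {};
    for (var prop in jsonObject) {
        // remove white spaces and curly braces from the element name
        var att = prop.replace(/ /g, '').replace(/[{}]/g, '');
        item[att] = jsonObject[prop];
    }
    return { result: [item] };
}

var sample = { "description": ["test 123"], "Application Type": ["Sample Application"], "{Sample} ID": [] };
console.log(JSON.stringify(sanitizeKeys(sample)));
// → {"result":[{"description":["test 123"],"ApplicationType":["Sample Application"],"SampleID":[]}]}
```

Running it with Node.js shows "Application Type" becoming "ApplicationType" and "{Sample} ID" becoming "SampleID", which is exactly what the mediator does to the message payload.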

Please find the below complete proxy configuration as a reference

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="" name="sampleProxy" transports="https,http" statistics="disable" trace="disable" startOnLoad="true">
   <target>
      <inSequence>
         <property name="JSONPayload" expression="json-eval($.result[0])" />
         <script language="js" key="conf:/repository/transform.js" function="transform" />
         <log level="full">
            <property name="************RESULT************" expression="$body" />
         </log>
      </inSequence>
      <endpoint key="conf:repository/sampleEP" />
   </target>
   <description />
</proxy>

Final result will be as below:

<jsonObject xmlns="">
      <description>test 123</description>
      <ApplicationType>Sample Application</ApplicationType>
</jsonObject>

References: Script Mediator, JSON support in ESB

Yasassri RatnayakeHow to Record Jmeter Scripts - Tips and Tricks

When using JMeter I have come across two easy ways to record JMeter scripts. I will explain these methods and some tips to make your recording more useful.

1. Using the BlazeMeter plugin

This is a browser plugin that can be used to record JMeter scripts. Note that you need an account on the BlazeMeter website for this, and account creation is free. You can get the plugin for Chrome from here.

After installing the plugin, set it up as shown below, start recording, perform the actions you want to record in the browser, and stop the recording. After stopping the recording you can download a .jmx file and open it with JMeter.

2. Using Recording Controller

This is a built-in JMeter option available for recording. To record with the Recording Controller, follow the instructions below.

1. Configure a proxy server in the browser. To do this, open the browser options and apply the following configuration. Note that I'm adding localhost as my proxy host, so I should access the website that needs to be recorded through the same domain, "localhost".

2. Now right-click on the WorkBench and add an "HTTP(S) Test Script Recorder", which can be found under Non-Test Elements, and set the following configs. Make sure that the proxy port matches the proxy port that was set in step 1.

3. [OPTIONAL] Now add URL exclude patterns as shown below. This prevents JMeter from recording unnecessary HTTP calls (e.g. loading JavaScript files, server pings, etc.).

4. [OPTIONAL] Now add a User Defined Variables element and add any variable you might want to parameterize. For whatever variable is added here, if the value defined for the variable matches any content of the recording, JMeter will automatically parameterize it. This is quite handy when dealing with complex recordings. Make sure this resides in the same Thread Group where the Recording Controller is located.

5. Now add a Recording Controller to the project. You can observe this in the above screenshot.

6. Now start the proxy server, open the browser, and navigate to the website you want to record. Then expand the Recording Controller and observe the magic happening.

So that's it. Hope this helps someone; please drop a comment if you have any other queries.

Anupama PathirageHow to setup a Cluster using wso2 Message Broker

It is possible to install multiple instances of WSO2 products in a cluster to ensure that if one instance becomes unavailable or is experiencing high traffic, another instance will seamlessly handle the requests. A cluster consists of multiple instances of a product that act as a single instance and divide up the work.
WSO2 provides Hazelcast Community Edition as its default clustering engine.

For this setup I have used two WSO2 MB nodes with a MySQL database. For this sample setup I have used the same server for both MB nodes and the database, so host configurations need to be changed in a real environment.

NOTE: I will be using Node 1 with default ports and Node 2 with a port offset of 3, as I'm running both instances on the same server. So first change the port offset of the second MB instance via the following property in the carbon.xml file.
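The property itself was lost from this post. A sketch of the offset entry, which sits under the <Ports> element in a standard carbon.xml:

```
<Ports>
    <!-- Node 2: shift all default ports by 3 -->
    <Offset>3</Offset>
</Ports>
```

With this in place, Node 2's management console, Thrift, and AMQP ports all move up by 3, avoiding conflicts with Node 1 on the same host.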


Creating and sharing the Database

Create the following four databases.

  • WSO2_USER_DB - JDBC user store and authorization manager
  • REGISTRY_DB - Shared database for config and governance registry mounts in the product's nodes
  • REGISTRY_LOCAL1 - Local registry space in the manager node
  • REGISTRY_LOCAL2 - Local registry space in the worker node
For most production environments, it is recommended to externalize the databases to a JDBC database of your choice and split the registry space to manage registry resources in a better way.