WSO2 Venus

Dimuthu De Lanerolle

JSON Notes:

JSON Objects:

var me = {
"age" : "25",
"address" : "Kottawa",
"gender" : "male"
};

JSON Arrays:

var family = [{
   "name" : "Dimuthu",
   "age" : "25",
   "gender" : "male"
}, {
   "name" : "Nadeesha",
   "age" : "25",
   "gender" : "male"
}];

JSON Nested Objects:

var family = {
   "Dimuthu" : {
       "sirname" : "Lanerolle",
       "age" : "25",
       "gender" : "male"
   },
   "Nadeesha" : {
       "sirname" : "Lanerolle",
       "age" : "25",
       "gender" : "male"
   }
};

Rukshan Premathunga

WSO2 APIM publishing custom statistics

  1. Introduction

  2. In this blog I will explain possible approaches to extend the runtime statistics published by APIM to DAS. Custom event publishing is important when you want to publish custom events that are not provided by APIM by default. It also helps handle more advanced business requirements, such as message-body-based analytics, which is also not provided by APIM out of the box.

  3. Using Class mediator

  4. A Class mediator[1] is one approach to extending the APIM runtime statistics with custom events. You can create a Java class that uses the DAS data publisher to publish an event whenever the Class mediator is invoked. A sample Java class for the class mediator is shown below. Before you start implementing, make sure to import all the required Java dependencies into the project; see the sample section for the source code.

    package com.rukspot.sample.eventpublisher;

    import org.apache.synapse.MessageContext;
    import org.apache.synapse.mediators.AbstractMediator;
    import org.apache.synapse.rest.RESTConstants;
    import org.wso2.carbon.apimgt.gateway.APIMgtGatewayConstants;
    import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityUtils;
    import org.wso2.carbon.apimgt.gateway.handlers.security.AuthenticationContext;
    import org.wso2.carbon.apimgt.impl.utils.APIUtil;
    import org.wso2.carbon.databridge.agent.DataPublisher;

    public class SimpleClassMediator extends AbstractMediator {

        public boolean mediate(MessageContext mc) {

            String context = (String) mc.getProperty(RESTConstants.REST_API_CONTEXT);
            String apiPublisher = (String) mc.getProperty(APIMgtGatewayConstants.API_PUBLISHER);
            String apiVersion = (String) mc.getProperty(RESTConstants.SYNAPSE_REST_API);
            String api = APIUtil.getAPINamefromRESTAPI(apiVersion);

            AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(mc);
            String username = "";
            String applicationName = "";
            if (authContext != null) {
                username = authContext.getUsername();
                applicationName = authContext.getApplicationName();
            }
            try {
                // EventPublisher is the helper class from the sample zip that holds a configured DataPublisher
                DataPublisher publisher = EventPublisher.getPublisher();
                String id = "org.wso2.apimgt.statistics.custom:1.0.0";
                Object[] payload = new Object[]{api, applicationName, username};
                publisher.publish(id, null, null, payload);
            } catch (Exception e) {
                System.out.println("error Sending event " + e.getMessage());
            }
            return true;
        }
    }
  5. Using handler

  6. Another way is to implement the data publisher in a Synapse handler[2]. It is similar to the class mediator, but it is implemented as a handler and placed in the handlers section of the API's Synapse configuration. A sample handler class is shown below.

    package com.rukspot.sample.eventpublisher;

    import org.apache.synapse.MessageContext;
    import org.apache.synapse.rest.AbstractHandler;
    import org.apache.synapse.rest.RESTConstants;
    import org.wso2.carbon.apimgt.gateway.APIMgtGatewayConstants;
    import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityUtils;
    import org.wso2.carbon.apimgt.gateway.handlers.security.AuthenticationContext;
    import org.wso2.carbon.apimgt.impl.utils.APIUtil;
    import org.wso2.carbon.databridge.agent.DataPublisher;

    public class SimpleEventPublisherHandler extends AbstractHandler {

        public boolean handleRequest(MessageContext mc) {
            String context = (String) mc.getProperty(RESTConstants.REST_API_CONTEXT);
            String apiPublisher = (String) mc.getProperty(APIMgtGatewayConstants.API_PUBLISHER);
            String apiVersion = (String) mc.getProperty(RESTConstants.SYNAPSE_REST_API);
            String api = APIUtil.getAPINamefromRESTAPI(apiVersion);

            AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(mc);
            String username = "";
            String applicationName = "";
            if (authContext != null) {
                username = authContext.getUsername();
                applicationName = authContext.getApplicationName();
            }
            try {
                // EventPublisher is the helper class from the sample zip that holds a configured DataPublisher
                DataPublisher publisher = EventPublisher.getPublisher();
                String id = "org.wso2.apimgt.statistics.custom:1.0.0";
                Object[] payload = new Object[]{api, applicationName, username};
                publisher.publish(id, null, null, payload);
            } catch (Exception e) {
                System.out.println("error Sending event " + e.getMessage());
            }
            return true;
        }

        public boolean handleResponse(MessageContext messageContext) {
            return true;
        }
    }
  7. Using Event mediator

    • The Event mediator[3] is another way to publish events, integrated into your mediation flow as a mediator. It is a simple way to add a custom event stream without touching any Java-level code.
    • APIM 2.0.0 does not ship with the Event mediator by default, but it is possible to install the feature in APIM 2.0.0 and configure it.
    • To install the feature, first add the relevant feature repository to APIM according to this[4].
    • Then find the PublishEvent Mediator Aggregate 4.6.1 feature, install it, and restart once done.
    • To configure event publishing you need to define an EventSink configuration and add the PublishEvent mediator to the API.
    • The EventSink[5] configuration defines the DAS connection-related information; a sample configuration follows. The configuration needs to be deployed in the /home/rukshan/apim/2.0.0/test/wso2am-2.0.0/repository/deployment/server/event-sinks/ directory with the name testSink.xml

    • <?xml version="1.0" encoding="UTF-8"?>
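    • The rest of this file was lost above; a minimal event sink definition might look like the following (the element names and the default DAS receiver ports are my recollection of the event-sink format, so treat them as assumptions and verify against [5]):

      <eventSink>
          <receiverUrlSet>tcp://localhost:7611</receiverUrlSet>
          <authenticationUrlSet>ssl://localhost:7711</authenticationUrlSet>
          <username>admin</username>
          <password>admin</password>
      </eventSink>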
    • Next we need to define the Event mediator in the Synapse API as follows.

    • <?xml version="1.0" encoding="UTF-8"?>
      <api xmlns=""
      <resource methods="GET" url-mapping="/one" faultSequence="fault">
      <property name="api.ut.backendRequestTime"
      <filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">

      <attribute name="atr1" type="STRING" defaultValue="" value="testValue"/>
      <attribute name="atr2" type="STRING" defaultValue="" value="Test Region"/>

      <endpoint name="admin--sample_APIproductionEndpoint_0">
      <http uri-template="http://localhost:8080/simple-service-webapp/webapi/myresource"/>
      <property name="ENDPOINT_ADDRESS"
      <sequence key="_sandbox_key_error_"/>
      <class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
      <resource methods="GET" url-mapping="/two" faultSequence="fault">
      <property name="api.ut.backendRequestTime"
      <filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
      <endpoint name="admin--sample_APIproductionEndpoint_1">
      <http uri-template="http://localhost:8080/simple-service-webapp/webapi/myresource"/>
      <property name="ENDPOINT_ADDRESS"
      <sequence key="_sandbox_key_error_"/>
      <class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.common.APIMgtLatencyStatsHandler"/>
      <handler class="">
      <property name="apiImplementationType" value="ENDPOINT"/>
      <handler class=""/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.ThrottleHandler">
      <property name="productionMaxCount" value="2"/>
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
      <property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
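    • The publishEvent mediator element itself was lost in the truncated configuration above. A rough sketch of how it would sit inside the resource's in-sequence, referring to the testSink defined earlier (element names follow the PublishEvent mediator syntax as I recall it, so verify against [3]):

      <publishEvent>
          <eventSink>testSink</eventSink>
          <streamName>org.wso2.apimgt.statistics.custom</streamName>
          <streamVersion>1.0.0</streamVersion>
          <attributes>
              <meta/>
              <correlation/>
              <payload>
                  <attribute name="atr1" type="STRING" defaultValue="" value="testValue"/>
                  <attribute name="atr2" type="STRING" defaultValue="" value="Test Region"/>
              </payload>
          </attributes>
      </publishEvent>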
  8. Using BAM mediator

    • The BAM mediator[6] is similar to the Event mediator, but it is now deprecated. It can still be useful if you are using an older version of APIM, since older APIM versions support the BAM mediator by default.
    • Here too, there are two components to configure for event publishing. First we need to define the BAM server profile[7]. In the server profile we define the connection settings as well as the stream definition. Once these definitions are in place we can refer to them directly in the mediator to send the event.
    • To configure the mediator, open your Synapse API file and define the mediator in the appropriate place.

    • <bam xmlns="">
          <serverProfile name="test-bam-profile">
              <streamConfig name="org.wso2.apimgt.statistics.custom" version="1.0.0"/>
          </serverProfile>
      </bam>
    • The complete Synapse file would look like this.

    • <?xml version="1.0" encoding="UTF-8"?>
      <api xmlns=""
      <resource methods="GET" url-mapping="/*" faultSequence="fault">
      <filter source="$ctx:AM_KEY_TYPE" regex="PRODUCTION">
      <property name="api.ut.backendRequestTime"

      <bam xmlns="">
      <serverProfile name="test-bam-profile">
      <streamConfig name="org.wso2.apimgt.statistics.custom" version="1.0.0"></streamConfig>

      <endpoint name="admin--sample_APIproductionEndpoint_0">
      <http uri-template="http://localhost:8080/simple-service-webapp/webapi/myresource"/>
      <sequence key="_sandbox_key_error_"/>
      <class name="org.wso2.carbon.apimgt.usage.publisher.APIMgtResponseHandler"/>
      <handler class="">
      <property name="apiImplementationType" value="ENDPOINT"/>
      <handler class=""/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
      <property name="id" value="A"/>
      <property name="policyKeyResource"
      <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
      <property name="policyKeyApplication"
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>
      <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler">
      <property name="configKey" value="gov:/apimgt/statistics/ga-config.xml"/>
      <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
  9. Sample

  10. Here you can find the samples mentioned above. The Class mediator and handler implementations can be found in the jar inside the zip. Download the sample from here. Please see the readme for more about the configuration.

  11. References


Tanya Madurapperuma

Adding a new XML element to the payload - WSO2 ESB

In this blog post I'm going to explain how we can insert a single new XML element into the payload in WSO2 ESB.

Why not payloadFactory Mediator ?
If you are familiar with WSO2 ESB you may know that when we want to change the entire payload into a different one, or build a new payload by extracting properties from the existing payload, we can use the payloadFactory mediator. But for the requirement I'm going to describe, the payloadFactory mediator is not ideal, for several reasons. One major reason is that if the current payload is a lengthy one, you would have to rebuild all the other parts of the payload even though you don't need to change them at all.

If not payloadFactory then what?
This requirement can be handled very conveniently with the Enrich mediator. The Enrich mediator takes the desired OMElement (XML element) according to the configuration you state in the source section and inserts it at the target location you specify.

I will explain the configuration using a sample use case. Say I receive a request in the ESB and, depending on some properties in the payload, I want to set an element in the payload. For example, assume I need to insert a new element into the payload if the received payload is not valid.

In the above example I have stored the incoming request to the ESB in a property (INCOMING_REQUEST) during a previous mediation step, and so with the first Enrich mediator I replace the body of the current payload with that property.
The second Enrich mediator is the one that actually does the job.
It takes the OMElement given under source
and inserts it as a child element at the /BookingRequest/Booking XPath location in the INCOMING_REQUEST XML.
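The original post embeds the mediator configuration as a gist; a minimal sketch of the two Enrich mediators described above could look like this (INCOMING_REQUEST and the Booking XPath come from the post, while the <ErrorFlag> element is purely a hypothetical illustration of the inserted child):

<!-- Replace the current body with the payload saved earlier in INCOMING_REQUEST -->
<enrich>
    <source clone="true" type="property" property="INCOMING_REQUEST"/>
    <target type="body"/>
</enrich>
<!-- Insert a new element as a child of /BookingRequest/Booking -->
<enrich>
    <source clone="true" type="inline">
        <ErrorFlag>true</ErrorFlag>
    </source>
    <target action="child" xpath="//BookingRequest/Booking"/>
</enrich>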

Incoming Request

After enrich mediator
In this example I modified the payload by adding only one XML element; if you need to add more elements you can follow the same approach.

Many thanks to Rajith from WSO2 for the tip.

Hariprasath Thanarajah

Working with WSO2 ESB Salesforce REST Connector

First, we need to know what ESB connectors are.

ESB Connectors - Connectors allow you to interact with a third-party product's functionality and data from your ESB message flow. This allows ESB to connect with disparate cloud APIs, SaaS applications, and on-premise systems.

WSO2 has over 150 connectors in the store. In this blog post, I am going to explain how to configure the WSO2 Salesforce REST connector.

The Salesforce REST connector allows you to work with records in Salesforce, a web-based service that allows organizations to manage customer relationship management (CRM) data. You can use the Salesforce connector to create, query, retrieve, update, and delete records in your organization's Salesforce data. The connector uses the Salesforce REST API to interact with Salesforce.

These sections provide step-by-step instructions on how to set up your web services environment in Salesforce REST and start using the connector.


  • WSO2 ESB
  • Salesforce REST connector
  • Salesforce developer account
First, you can follow this to get an accessToken and refreshToken to call the Salesforce API. The accessToken validity period differs between organizations. To change it, go to Setup -> Security Controls -> Session Settings and select the accessToken expiration time.

The connector takes a 1-hour period as the default expiration time for the accessToken. There is no problem if your organization's timeout is greater than 1 hour, because the API won't issue a new accessToken before the expiration time configured in the organization. If your organization's session timeout is less than 1 hour, then you must pass an intervalTime matching your organization's setting in the request you use to access the Salesforce organization through the Salesforce REST connector.

Configuring Salesforce REST connector

Let's use the Salesforce REST connector via WSO2 ESB for the first time. For that, you must get an accessToken and refreshToken using the steps here.

Assume that you are going to create a record in the Account object in Salesforce.

From the steps above, you have an accessToken and refreshToken. For the first request, you must provide a valid accessToken that is still within the intervalTime (if you specify an intervalTime in the request).

Setup the WSO2 ESB with Salesforce REST Connector

  • Download the ESB from here and start the server.
  • Download the Salesforce REST connector from WSO2 Store.
  • Add and enable the connector via ESB Management Console.
  • We need to create a proxy service that creates a record in the Account object through the Salesforce REST connector, and invoke the proxy with the following request.

Proxy Service

<proxy xmlns="" name="create"
       statistics="disable" trace="disable" transports="https http">
            <property name="accessToken" expression="json-eval($.accessToken)"/>
            <property name="apiUrl" expression="json-eval($.apiUrl)"/>
            <property name="sobject" expression="json-eval($.sobject)"/>
            <property name="sObjectName" expression="json-eval($.sObjectName)"/>
            <property name="clientId" expression="json-eval($.clientId)"/>
            <property name="refreshToken" expression="json-eval($.refreshToken)"/>
            <property name="clientSecret" expression="json-eval($.clientSecret)"/>
            <property name="hostName" expression="json-eval($.hostName)"/>
            <property name="apiVersion" expression="json-eval($.apiVersion)"/>
            <property name="registryPath" expression="json-eval($.registryPath)"/>
            <property name="intervalTime" expression="json-eval($.intervalTime)"/>
            <log category="INFO" level="full" separator=","/>
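The target section with the actual connector calls did not survive in the excerpt above. Roughly, the properties are fed into the connector's init operation and then a create operation is invoked; the operation and parameter names below follow the Salesforce REST connector documentation as I remember it, so treat this sketch as an assumption to check against the connector docs:

<target>
   <inSequence>
      <salesforcerest.init>
         <accessToken>{$ctx:accessToken}</accessToken>
         <apiUrl>{$ctx:apiUrl}</apiUrl>
         <clientId>{$ctx:clientId}</clientId>
         <clientSecret>{$ctx:clientSecret}</clientSecret>
         <refreshToken>{$ctx:refreshToken}</refreshToken>
         <hostName>{$ctx:hostName}</hostName>
         <apiVersion>{$ctx:apiVersion}</apiVersion>
         <registryPath>{$ctx:registryPath}</registryPath>
         <intervalTime>{$ctx:intervalTime}</intervalTime>
      </salesforcerest.init>
      <salesforcerest.create>
         <sObjectName>{$ctx:sObjectName}</sObjectName>
         <fieldAndValue>{$ctx:sobject}</fieldAndValue>
      </salesforcerest.create>
      <respond/>
   </inSequence>
</target>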


  "clientId": "3MVG9ZL0ppGP5UrBrnsanGUZRgHqc8gTV4t_6tfuef8Zz4LhFPipmlooU6GBszpplbTzVXXWjqkGHubhRip1s",
  "refreshToken": "5Aep861TSESvWeug_xvFHRBTTbf_YrTWgEyjBJo7Xr34yOQ7GCFUN5DnNPxzDIoGoWi4evqOl_lT1B9nE5dAtSb",
  "clientSecret": "9104967092887676680",
  "intervalTime" : "100000",
  "apiVersion": "v32.0",
  "registryPath": "connectors/SalesforceRest",
  "sObjectName": {
    "name": "wso2",
    "description":"This Account belongs to WSO2"

For the first request the accessToken should still be valid within the value of intervalTime, and you must give the registryPath value as "connectors/<any-value>". After that you don't need to worry about accessToken expiration: the connector itself does the refresh and gets a new accessToken if the current one has expired.

Anupama Pathirage

How to Limit the WSO2 Service Log File Size

Limit the file size of wso2carbon.log
By default the wso2carbon.log file is rotated on a daily basis. If we want to rotate the log files based on file size instead, we can do the following.

  • Change the log4j.appender.CARBON_LOGFILE appender in the /repository/conf/ file as follows. 

  • Add the following two properties under the RollingFileAppender. You can change MaxFileSize and MaxBackupIndex as required.

MaxFileSize = Maximum allowed file size (in bytes) before rolling over. Suffixes "KB", "MB" and "GB" are allowed.
MaxBackupIndex  = Maximum number of backup files to keep
  • Now the configuration looks like below
# CARBON_LOGFILE is set to be a DailyRollingFileAppender using a PatternLayout.
# Log file will be overridden by the configuration setting in the DB
# This path should be relative to WSO2 Carbon Home
# ConversionPattern will be overridden by the configuration setting in the DB
log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m {%c}%n
log4j.appender.CARBON_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
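The lines that actually switch the appender class and set the size limits were dropped from the excerpt above; restoring them, the appender block would look roughly like this (the file path and limits are illustrative values in what is assumed to be log4j.properties):

log4j.appender.CARBON_LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.CARBON_LOGFILE.File=${carbon.home}/repository/logs/wso2carbon.log
log4j.appender.CARBON_LOGFILE.MaxFileSize=10MB
log4j.appender.CARBON_LOGFILE.MaxBackupIndex=2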

If the size of the log file exceeds the value defined in the MaxFileSize property, the content is copied to a backup file and logging continues into a new, empty log file. For example, if MaxBackupIndex is two you will see at most 3 files: the wso2carbon.log.2 file contains the oldest logs and wso2carbon.log contains the latest logs.


Disable Http Access Logs

HTTP Requests/Responses are logged in the access log(s) and are helpful to monitor your application's usage activities, such as the persons who access it, how many hits it receives, what the errors are etc. This information is useful for troubleshooting. As the runtime of WSO2 products are based on Apache Tomcat, you can use the AccessLogValve variable in Tomcat to configure HTTP access logs in WSO2 products.
The /repository/conf/tomcat/catalina-server.xml contains the valve related to the access logs as below. If you want to disable access logs comment out the below configuration.

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs"
 prefix="http_access_" suffix=".log"


Hariprasath Thanarajah

Working with WSO2 ESB Gmail Connector

First, we need to know what ESB connectors are.

ESB Connectors - Connectors allow you to interact with a third-party product's functionality and data from your ESB message flow. This allows ESB to connect with disparate cloud APIs, SaaS applications, and on-premise systems.

WSO2 has over 150 connectors in the store. In this blog post, I am going to explain how to configure the WSO2 Gmail connector.

The Gmail connector allows you to access the Gmail REST API through WSO2 ESB. Gmail is a free Web-based e-mail service provided by Google. Users may access Gmail as a secure webmail. The Gmail program also automatically organizes successively related messages into a conversational thread.

These sections provide step-by-step instructions on how to set up your web services environment in Gmail and start using the connector.


  • WSO2 ESB
  • Gmail connector
  • Gmail account
Connect with Gmail API

  • After creating the project, go to the Credentials tab and select Create credentials -> OAuth client ID. Before that you must configure the consent screen: go to the OAuth consent screen tab under Credentials and give a name in "Product name shown to users".

  • Select Web application -> Create, give a name and a redirect URI (where you will receive the code used to request an accessToken from the Gmail API), and click Create.

  • When you create the app you will get the Client ID and Client Secret.

  • After that

copy the above URL (replacing the redirect_uri and client_id according to your app) into a browser and press Enter -> log in with your email -> Allow

After that, you will get a code with your redirect URI like,

  • For successful responses, you can exchange the code for an access token and a refresh token. To make this token request, send an HTTP POST request to the token endpoint and include the following parameters; the content type should be application/x-www-form-urlencoded.

Replace the code, clientId, clientSecret, and redirectUri according to your app and send the request. From the response, you will get the accessToken and refreshToken.
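The screenshots with the exact URLs did not carry over; for reference, a typical Google OAuth 2.0 code-for-token exchange looks roughly like this (endpoints and parameter names are from Google's OAuth documentation as I recall them, so verify against the current docs):

# Authorization request: open in a browser, log in, allow, and copy the returned ?code=... value
https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=<CLIENT_ID>&redirect_uri=<REDIRECT_URI>&scope=https://mail.google.com/&access_type=offline&approval_prompt=force

# Exchange the code for an accessToken and refreshToken
curl -X POST https://www.googleapis.com/oauth2/v3/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "code=<CODE>&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&redirect_uri=<REDIRECT_URI>&grant_type=authorization_code"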

Setup the WSO2 ESB with Gmail Connector

  • Download the ESB from here and start the server.
  • Download the Gmail connector from WSO2 Store.
  • Add and enable the connector via ESB Management Console.
  • Create a proxy service to send a mail through Gmail and invoke the proxy with the following request.
Proxy Service 
<proxy xmlns=""
<property name="to" expression="json-eval($.to)"/>
<property name="subject" expression="json-eval($.subject)"/>
<property name="from" expression="json-eval($.from)"/>
<property name="messageBody" expression="json-eval($.messageBody)"/>
<property name="cc" expression="json-eval($.cc)"/>
<property name="bcc" expression="json-eval($.bcc)"/>
<property name="id" expression="json-eval($.id)"/>
<property name="threadId" expression="json-eval($.threadId)"/>
<property name="userId" expression="json-eval($.userId)"/>
<property name="refreshToken" expression="json-eval($.refreshToken)"/>
<property name="clientId" expression="json-eval($.clientId)"/>
<property name="clientSecret" expression="json-eval($.clientSecret)"/>
<property name="accessToken" expression="json-eval($.accessToken)"/>
<property name="apiUrl" expression="json-eval($.apiUrl)"/>
<property name="accessTokenRegistryPath" expression="json-eval($.accessTokenRegistryPath)"/>
<parameter name="serviceType">proxy</parameter>
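The target section with the connector calls was lost in the excerpt above. A rough sketch of how the properties would be wired into the Gmail connector's init and sendMail operations (operation and parameter names follow the connector documentation as I remember it, so treat them as assumptions):

<target>
   <inSequence>
      <gmail.init>
         <userId>{$ctx:userId}</userId>
         <accessToken>{$ctx:accessToken}</accessToken>
         <refreshToken>{$ctx:refreshToken}</refreshToken>
         <clientId>{$ctx:clientId}</clientId>
         <clientSecret>{$ctx:clientSecret}</clientSecret>
         <apiUrl>{$ctx:apiUrl}</apiUrl>
         <accessTokenRegistryPath>{$ctx:accessTokenRegistryPath}</accessTokenRegistryPath>
      </gmail.init>
      <gmail.sendMail>
         <to>{$ctx:to}</to>
         <from>{$ctx:from}</from>
         <cc>{$ctx:cc}</cc>
         <bcc>{$ctx:bcc}</bcc>
         <subject>{$ctx:subject}</subject>
         <messageBody>{$ctx:messageBody}</messageBody>
      </gmail.sendMail>
      <respond/>
   </inSequence>
</target>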


  "clientId": "",
  "refreshToken": "1/3e68t0-PStjwMYDVR4zgUx8QxXkR51xKcWjubEIq5PI",
  "clientSecret": "qlCLwJN9mBxStMmP0iw8s9i2",
  "userId": "",
  "messageBody":"Hi hariprasath",

There is no need to get a new accessToken for every request; the connector itself obtains a fresh accessToken by using the refreshToken.

Dimuthu De Lanerolle

WSO2 Server Error Tips

1. Starting wso2server gives below error ...

ERROR {org.opensaml.xml.XMLConfigurator} -  Can not create instance of org.opensaml.xml.schema.impl.XSAnyMarshaller
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
at java.lang.reflect.Constructor.newInstance(
at org.opensaml.xml.XMLConfigurator.createClassInstance(
at org.opensaml.xml.XMLConfigurator.initializeObjectProviders(
at org.opensaml.xml.XMLConfigurator.load(
at org.opensaml.xml.XMLConfigurator.load(
at org.opensaml.xml.XMLConfigurator.load(
at org.opensaml.DefaultBootstrap.initializeXMLTooling(
at org.opensaml.DefaultBootstrap.initializeXMLTooling(
at org.opensaml.DefaultBootstrap.bootstrap(
at org.apache.rahas.Rahas.init(
at org.apache.axis2.context.ConfigurationContextFactory.initModules(
at org.apache.axis2.context.ConfigurationContextFactory.init(
at org.apache.axis2.context.ConfigurationContextFactory.createConfigurationContext(
at org.wso2.carbon.core.CarbonConfigurationContextFactory.createNewConfigurationContext(
at org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(
at org.wso2.carbon.core.init.CarbonServerManager.start(
at org.wso2.carbon.core.internal.CarbonCoreServiceComponent.activate(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(
at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(
at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(
at org.eclipse.equinox.internal.ds.Resolver.getEligible(
at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(

Solution :

Download these 2 jars from Oracle and place inside Java\jdk1.7.0_10\jre\lib\security


Anupama Pathirage

Change the Log File Location of WSO2 ESB

Let's say we need to change the log file location to "/home/services/wso2logs" instead of the default /repository/logs folder.
  • Change the following file locations in the /repository/conf/ log4j configuration file as follows.
log4j.appender.ATOMIKOS.File = /home/services/wso2logs/tm.out
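The remaining appender locations were trimmed from the excerpt above; the other file-based appenders in the same log4j configuration can be pointed at the new directory in the same way (appender names follow the stock ESB log4j.properties as I recall it, so check them against your file):

log4j.appender.CARBON_LOGFILE.File=/home/services/wso2logs/wso2carbon.log
log4j.appender.AUDIT_LOGFILE.File=/home/services/wso2logs/audit.log
log4j.appender.CARBON_TRACE_LOGFILE.File=/home/services/wso2logs/wso2carbon-trace-messages.log
log4j.appender.SERVICE_APPENDER.File=/home/services/wso2logs/wso2-esb-service.log
log4j.appender.TRACE_APPENDER.File=/home/services/wso2logs/wso2-esb-trace.log
log4j.appender.ERROR_LOGFILE.File=/home/services/wso2logs/wso2-esb-errors.log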

  • Change the below value in /repository/conf/tomcat/catalina-server.xml as follows to change the http_access_management_console.log file location.
Valve className="org.apache.catalina.valves.AccessLogValve" directory="/home/services/wso2logs/"

  • We still have the patches.log file and empty wso2carbon.log and wso2carbon-trace-messages.log files in the default location. Since the patch application process runs even before the Carbon server is started, the corresponding log configuration file is located inside the org.wso2.carbon.server-.jar file. So, as a workaround to move these log files, we can open the file inside /lib/org.wso2.carbon.server-.jar and change the below properties accordingly.

  • Make sure the new location has write permissions and restart the service. Now all the log files are created in the new location.

Anupama Pathirage

WSO2 ESB Log Files Explained

WSO2 ESB generates several log files during its operation. This post describes the usage of these logs along with some example entries. I am running ESB 4.9.0 with the below proxy service to generate these sample logs. By default all these log files are created in the /repository/logs/ folder.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns=""
<log level="custom">
<property name="TEST" value="RECV"/>
<payloadFactory media-type="xml">
<test xmlns="">5</test>
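Most of the proxy's markup was lost above; based on the trace log further down (log, payloadFactory, loopback, and send mediators on a proxy named TestProxy), it would have looked roughly like the sketch below:

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="TestProxy" transports="http,https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- log a custom property so it shows up in the service log -->
            <log level="custom">
                <property name="TEST" value="RECV"/>
            </log>
            <!-- build a trivial response payload -->
            <payloadFactory media-type="xml">
                <format>
                    <test xmlns="">5</test>
                </format>
            </payloadFactory>
            <loopback/>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>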

WSO2 Carbon log

This is the main log file of the server, named wso2carbon.log. It contains all the server-level and service-level logs. The default log level is INFO, and it can be changed to the desired level by changing the property below in the log4j configuration file.
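The property itself was dropped from the excerpt; in log4j.properties the line that controls this is typically the appender threshold, for example (a sketch, assuming the stock Carbon log4j.properties):

log4j.appender.CARBON_LOGFILE.threshold=DEBUG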

Trace log

This log offers a way to monitor mediation execution and is named wso2-esb-trace.log. Trace logs are written only if tracing is enabled on a particular proxy or sequence. If we enable tracing for the above proxy using the management console, we get entries like the following.

13:54:03,490 [-] [PassThroughMessageProcessor-1]  INFO TRACE_LOGGER Proxy Service TestProxy received a new message from :
13:54:03,491 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Message To: /services/TestProxy.TestProxyHttpEndpoint/mediate
13:54:03,491 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER SOAPAction: null
13:54:03,491 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER WSA-Action: null
13:54:03,492 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Setting default fault-sequence for proxy
13:54:03,493 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Using the anonymous in-sequence of the proxy service for mediation
13:54:03,493 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Start : Sequence <anonymous>
13:54:03,494 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Sequence <SequenceMediator> :: mediate()
13:54:03,494 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Mediation started from mediator position : 0
13:54:03,494 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Start : Log mediator
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER TEST = RECV
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER End : Log mediator
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Start : Loopback Mediator
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Setting default fault-sequence for proxy
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Start : Sequence <anonymous>
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Sequence <SequenceMediator> :: mediate()
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Mediation started from mediator position : 0
13:54:03,496 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Building message. Sequence <SequenceMediator> is content aware
13:54:03,518 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Start : Send mediator
13:54:03,519 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER Sending response message using implicit message properties..
Sending To: null
SOAPAction: null
13:54:03,529 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER End : Send mediator
13:54:03,529 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER End : Sequence <anonymous>
13:54:03,529 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER End : Loopback Mediator
13:54:03,529 [-] [PassThroughMessageProcessor-1] INFO TRACE_LOGGER End : Sequence <anonymous>
13:54:45,755 [-] [http-nio-9443-exec-18] INFO TRACE_LOGGER Building Axis service for Proxy service : TestProxy
13:54:45,755 [-] [http-nio-9443-exec-18] INFO TRACE_LOGGER Loading the WSDL : <Inlined>
13:54:45,755 [-] [http-nio-9443-exec-18] INFO TRACE_LOGGER Did not find a WSDL. Assuming a POX or Legacy service
13:54:45,755 [-] [http-nio-9443-exec-18] INFO TRACE_LOGGER Exposing transports : [http, https]
13:54:45,757 [-] [http-nio-9443-exec-18] INFO TRACE_LOGGER Adding service TestProxy to the Axis2 configuration
13:54:45,770 [-] [http-nio-9443-exec-18] INFO TRACE_LOGGER Successfully created the Axis2 service for Proxy service : TestProxy

Patches log

All patch related logs are recorded in the Patches.log file. 

[2016-10-19 13:38:25,951]  INFO {org.wso2.carbon.server.util.PatchUtils} -  Checking for patch changes ...
[2016-10-19 13:38:25,953] INFO {org.wso2.carbon.server.util.PatchUtils} - New patch available - patch0049
[2016-10-19 13:38:25,957] INFO {org.wso2.carbon.server.extensions.PatchInstaller} - Patch changes detected
[2016-10-19 13:38:31,949] INFO {org.wso2.carbon.server.util.PatchUtils} - Backed up plugins to patch0000
[2016-10-19 13:38:31,949] INFO {org.wso2.carbon.server.util.PatchUtils} - Applying patches ...
[2016-10-19 13:38:31,951] INFO {org.wso2.carbon.server.util.PatchUtils} - Applying - patch0049
[2016-10-19 13:38:31,966] INFO {org.wso2.carbon.server.util.PatchUtils} - Patched org.wso2.carbon.inbound.endpoint_4.4.10.jar(MD5:6f76773b9d10b200e8b7786a2787fc96)
[2016-10-19 13:38:31,996] INFO {org.wso2.carbon.server.util.PatchUtils} - Patched synapse-nhttp-transport_2.1.3-wso2v11.jar(MD5:907846d87999fbbc4ba8ac0381e97822)
[2016-10-19 13:38:31,997] INFO {org.wso2.carbon.server.util.PatchUtils} - Patch verification started
[2016-10-19 13:38:32,019] INFO {org.wso2.carbon.server.util.PatchUtils} - Patch verification successfully completed

Audit log

Audit logs provide information about the users who have tried to access the server and make changes. It is recommended to enable audit logs in production environments; the file is named audit.log.

[2016-10-19 13:44:43,522]  INFO -  'admin@carbon.super [-1234]' logged in at [2016-10-19 13:44:43,520-0500] 
[2016-10-19 13:45:14,190] INFO - 'admin@carbon.super [-1234]' logged out at [2016-10-19 13:45:14,0188]
[2016-10-19 13:46:14,976] WARN - Failed Administrator login attempt 'testuser[-1234]' at [2016-10-19 13:46:14,971-0500]
[2016-10-19 13:47:49,084] INFO - 'admin@carbon.super [-1234]' logged in at [2016-10-19 13:47:49,081-0500]
[2016-10-19 13:48:29,501] INFO - Initiator : null@carbon.super | Action : Change Password by User | Target : null@carbon.super | Data : { } | Result : Failed

Error log

All errors and warnings raised by the server can be found in this log file, named wso2-esb-errors.log.

2016-10-19 13:38:36,504 [-] [Start Level Event Dispatcher]  WARN ValidationResultPrinter Carbon is configured to use the default keystore (wso2carbon.jks). To maximize security when deploying to a production environment, configure a new keystore with a unique password in the production server profile.
2016-10-19 13:38:54,768 [-] [localhost-startStop-1] WARN DefaultSchemaGenerator We don't support method overloading. Ignoring [validateAudienceRestriction]
2016-10-19 13:46:14,971 [-] [http-nio-9443-exec-32] WARN CarbonAuthenticationUtil Failed Administrator login attempt 'tet[-1234]' at [2016-10-19 13:46:14,971-0500]
2016-10-19 13:48:29,515 [-] [http-nio-9443-exec-16] ERROR UserAdminClient Error while updating password. Wrong old credential provided
org.wso2.carbon.user.mgt.stub.UserAdminUserAdminException: UserAdminUserAdminException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
at java.lang.reflect.Constructor.newInstance(
at java.lang.Class.newInstance(

ESB Service log

Information related to service invocations (which services were called, etc.) and all logs produced with the Log mediator in Synapse proxies or sequences can be found here. By default the INFO log level is used, and the file is named wso2-esb-service.log.

2016-10-19 14:50:09,209 [-] [localhost-startStop-1]  INFO TestProxy Building Axis service for Proxy service : TestProxy
2016-10-19 14:50:09,212 [-] [localhost-startStop-1] INFO TestProxy Adding service TestProxy to the Axis2 configuration
2016-10-19 14:50:09,226 [-] [localhost-startStop-1] INFO TestProxy Successfully created the Axis2 service for Proxy service : TestProxy
2016-10-19 14:55:06,491 [-] [PassThroughMessageProcessor-1] INFO TestProxy TEST = RECV

Access Logs

Service and REST API invocation access log

This log tracks when a service or REST API is invoked. By default, the service/API invocation access logs are disabled for performance reasons; they can be enabled with the configuration below. The log file is created with the name http_access_YYYY_MM_DD.log.

If you invoke the service using the below URL, you can see logs like the following.

- fe80:0:0:0:d84a:683c:68a5:209%11 - - [19/Oct/2016:12:59:03 -0600] "GET /services/TestProxy.TestProxyHttpEndpoint/mediate HTTP/1.1" - - "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"
- fe80:0:0:0:d84a:683c:68a5:209%11 - [19/Oct/2016:12:59:03 -0600] "- - " 200 - "-" "-"

Management Console Access Log

This tracks usage of the Management Console. By default it is created as the http_access_management_console_YYYY_MM_DD.log file and is rotated on a daily basis. To change the configuration of this log, change the below valve in /repository/conf/tomcat/catalina-server.xml

<Valve className="org.apache.catalina.valves.AccessLogValve">

0:0:0:0:0:0:0:1 - - [19/Oct/2016:13:29:14 -0500] "GET /carbon/admin/js/jquery.ui.tabs.min.js HTTP/1.1" 200 3594 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"
0:0:0:0:0:0:0:1 - - [19/Oct/2016:13:29:14 -0500] "GET /carbon/admin/js/cookies.js HTTP/1.1" 200 1136 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"
0:0:0:0:0:0:0:1 - - [19/Oct/2016:13:29:14 -0500] "GET /carbon/admin/js/jquery.ui.core.min.js HTTP/1.1" 200 1996 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"

Trace Messages log

By default, trace message logs are not enabled in the ESB. You can enable them for any required component by changing the log4j configuration file. For example, tracing for org.apache.synapse.transport.http.headers can be enabled by adding the entries below. The logs will be created in the wso2carbon-trace-messages.log file.

log4j.appender.CARBON_TRACE_LOGFILE.layout.ConversionPattern=[%d] %P%5p {%c} - %x %m %n
log4j.appender.CARBON_TRACE_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
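The logger and appender definition lines were trimmed from the excerpt above; a sketch of the missing pieces, assuming the CARBON_TRACE_LOGFILE appender name used in the layout lines:

log4j.logger.org.apache.synapse.transport.http.headers=DEBUG, CARBON_TRACE_LOGFILE
log4j.appender.CARBON_TRACE_LOGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.CARBON_TRACE_LOGFILE.File=${carbon.home}/repository/logs/wso2carbon-trace-messages.log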

The generated log file contains below.

[2016-10-19 14:43:47,286] DEBUG {org.apache.synapse.transport.http.headers} -  http-incoming-1 >> POST /services/TestProxy.TestProxyHttpEndpoint/mediate HTTP/1.1 
[2016-10-19 14:43:47,288] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 >> Content-Type: application/xml; charset=UTF-8
[2016-10-19 14:43:47,289] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 >> Cookie: region3_registry_menu=visible; region1_configure_menu=none; region4_monitor_menu=none; region5_tools_menu=none; MSG14769029100080.6387579241161663=true; menuPanel=visible; menuPanelType=main; requestedURI="../../carbon/service-mgt/index.jsp?region=region1&item=services_list_menu"; current-breadcrumb=manage_menu%2Cservices_menu%2Cservices_list_menu%23; JSESSIONID=9AEF469AF1960BA436D77E2B6F9BADEC
[2016-10-19 14:43:47,297] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 >> User-Agent: Axis2
[2016-10-19 14:43:47,305] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 >> Host: OMISLABL9:8280
[2016-10-19 14:43:47,309] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 >> Transfer-Encoding: chunked
[2016-10-19 14:43:47,423] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 << HTTP/1.1 200 OK
[2016-10-19 14:43:47,424] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 << Cookie: region3_registry_menu=visible; region1_configure_menu=none; region4_monitor_menu=none; region5_tools_menu=none; MSG14769029100080.6387579241161663=true; menuPanel=visible; menuPanelType=main; requestedURI="../../carbon/service-mgt/index.jsp?region=region1&item=services_list_menu"; current-breadcrumb=manage_menu%2Cservices_menu%2Cservices_list_menu%23; JSESSIONID=9AEF469AF1960BA436D77E2B6F9BADEC
[2016-10-19 14:43:47,427] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 << Host: OMISLABL9:8280
[2016-10-19 14:43:47,434] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 << Content-Type: application/xml; charset=UTF-8; charset=UTF-8
[2016-10-19 14:43:47,440] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 << Date: Wed, 19 Oct 2016 19:43:47 GMT
[2016-10-19 14:43:47,446] DEBUG {org.apache.synapse.transport.http.headers} - http-incoming-1 << Transfer-Encoding: chunked

Transaction logs

A transaction is a set of operations executed as a single unit. The WSO2 Carbon platform integrates the Atomikos transaction manager, which is an implementation of the Java Transaction API (JTA). Some products, like WSO2 DSS and WSO2 ESB, ship this transaction manager by default. Information related to Atomikos is logged in the tm.out file.

INFO Start Level Event Dispatcher com.atomikos.logging.LoggerFactory - Using Slf4J for logging.
WARN Start Level Event Dispatcher com.atomikos.icatch.config.UserTransactionServiceImp - Using init file: /C:/Users/apathira/Desktop/WEST/Runtime/LOGGIN~1/WSO2ES~1.0/WSO2ES~1.0/bin/../lib/
INFO Start Level Event Dispatcher com.atomikos.persistence.imp.FileLogStream - Starting read of logfile C:\Users\apathira\Desktop\WEST\Runtime\LOGGIN~1\WSO2ES~1.0\WSO2ES~1.0\repository\data\tmlog12.log
INFO Start Level Event Dispatcher com.atomikos.persistence.imp.FileLogStream - Done read of logfile
INFO Start Level Event Dispatcher com.atomikos.persistence.imp.FileLogStream - Logfile closed: C:\Users\apathira\Desktop\WEST\Runtime\LOGGIN~1\WSO2ES~1.0\WSO2ES~1.0\repository\data\tmlog12.log
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING core version: 3.8.0
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.console_file_name = tm.out
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.console_file_count = 1
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.automatic_resource_registration = true
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.client_demarcation = false
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.threaded_2pc = false
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.serial_jta_transactions = true
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.log_base_dir = repository/data
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.console_log_level = INFO
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.max_actives = 50
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.checkpoint_interval = 500
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.enable_logging = true
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.output_dir = repository/logs
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.log_base_name = tmlog
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.console_file_limit = 1073741824
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.max_timeout = 8000000
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.tm_unique_name =
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING java.naming.factory.initial = com.sun.jndi.rmi.registry.RegistryContextFactory
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING java.naming.provider.url = rmi://localhost:1099
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.service = com.atomikos.icatch.standalone.UserTransactionServiceFactory
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.force_shutdown_on_vm_exit = true
INFO Start Level Event Dispatcher com.atomikos.icatch.config.imp.AbstractUserTransactionService - USING com.atomikos.icatch.default_jta_timeout = 5000000

Aruna Sujith Karunarathna

Spring Boot Application Live Reload (Hot Swap) with Intellij IDEA

While developing Spring Boot applications using IntelliJ IDEA, it was so annoying to restart the Spring Boot app after each and every change. Spring Boot provides live reload (hot swap) of the application out of the box using the following dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
</dependency>

But it didn't live

Samitha Chathuranga

How to browse the "Activity" H2 database of WSO2 BPS and typical H2 databases in any WSO2 product

This guide will show you how to browse the activity database of WSO2 BPS, and also the typical WSO2CARBON_DB of any WSO2 product if you read the latter part.

All WSO2 products are embedded with an H2 database, so there is no need to download the H2 database separately and run it to browse a WSO2 H2 database. (But remember that it can be done too.)

H2 databases come with a web console application which lets you access and browse them using a browser.

Browse the "Activity" H2 database of WSO2 BPS

WSO2 BPS uses a separate database for its activity (Activiti) engine, which relates to BPMN process management; if you are working with BPMN and want to see the activity database, this is what you should do.

1. Open carbon.xml located in <PRODUCT_HOME>/repository/conf

2. Uncomment the <H2DatabaseConfiguration> element. (only uncomment the first 3 property elements of it as follows)

        <property name="web" />
        <property name="webPort">8082</property>
        <property name="webAllowOthers" />
        <!--property name="webSSL" />
        <property name="tcp" />
        <property name="tcpPort">9092</property>
        <property name="tcpAllowOthers" />
        <property name="tcpSSL" />
        <property name="pg" />
        <property name="pgPort">5435</property>
        <property name="pgAllowOthers" />
        <property name="trace" />
        <property name="baseDir">${carbon.home}</property-->

 3. Open the <WSO2_BPS_HOME>/repository/conf/datasources/activiti-datasources.xml file.

Now copy the <url> element's first part, before the first semicolon (;). It would be as follows:

jdbc:h2:repository/database/activiti

4. Start the Product Server.
5. Open a browser and enter "http://localhost:8082/" (if you changed the <webPort> in carbon.xml in step 2, you have to change the URL accordingly)
6. Now you will see the H2 Database Console in the browser.

7. Here you have to give the JDBC URL, User Name, and Password.
  •  JDBC URL : jdbc:h2:repository/database/activiti
        (the one you copied in step 3; note that "activiti" is the name of the database in H2 that you want to access)

  • User Name : wso2carbon
  • Password : wso2carbon
(The default user name and password are as above, but if you have edited them in the activiti-datasources.xml file they should be changed accordingly.)

8. Now press "Connect" and you are done. If you have successfully connected "jdbc:h2:repository/database/activiti" database will be displayed and there will be a list of database tables such as, "ACT_EVT_LOG" , "ACT_GE_BYTEARRAY", "ACT_GE_PROPERTY", etc. which are created by the BPS activity engine. You can run queries there too.

Now, in the terminal where you started the WSO2 BPS server, you can see long log traces related to the H2 database. If you want to get rid of them, just comment out or remove the <property name="trace" /> entry in carbon.xml from step 2. Restart the server, connect to the H2 console, and you will see that they are gone.

Browse typical H2 database of any wso2 product

All WSO2 products have the typical "WSO2CARBON_DB", and if you want to connect to it the only difference is that you have to change the JDBC URL given in step 7 to the following: jdbc:h2:repository/database/WSO2CARBON_DB

(You can skip step 3 here as you already know the WSO2CARBON_DB's JDBC URL. If you want to check, this URL is also defined in the repository/conf/master-datasources.xml file; if its default configuration has been edited, you will have to consult that file.)

Browse H2 database of any WSO2 product using an external H2 client.

If you don't want to edit any WSO2 product configuration file, you can also use an external H2 client, so you do not have to edit carbon.xml as described in step 2.

But there is a trade-off with this method: we cannot access the H2 database via this external client while the WSO2 product is running, so we have to shut down the server before accessing it this way. Note that the reverse also fails (trying to start the WSO2 product server while connected to the WSO2 H2 database via this external client), because the external client locks the H2 database so that the WSO2 product cannot access it.

Anyway keeping these facts in mind let's see how to do the task.

1. Download H2 Database from here
   (a latest version is preferred)
2. Extract the zip file and execute the h2.sh script in the bin directory from a terminal (start a terminal inside the bin directory and run sh h2.sh).
3. The H2 web console will be automatically started in your browser.
4. Then give the,
  •  JDBC URL : jdbc:h2:/<WSO2_PRODUCT_HOME>/repository/database/WSO2CARBON_DB

    eg: jdbc:h2:/home/samithac/Repositories/product-bps-2/product-bps/modules/distribution/target/wso2bps-3.6.0-SNAPSHOT/repository/database/WSO2CARBON_DB

    Note that you have to include the full path of the database directory here (not just a relative path as we did before).
    (If you want to connect to WSO2 BPS's activiti database, just change the above URL's "WSO2CARBON_DB" part to "activiti".)
  • User Name : wso2carbon
  • Password : wso2carbon
5. Then press Connect and you are done!

Lakshani Gamage

How to Set up WSO2 App Manager with Oracle

These steps can be used to configure App Manager standalone with Oracle.
  1. Log in to the Oracle server and create the required tables.
  2. Edit master-datasources.xml in <AppM_HOME>/repository/conf/datasources as follows (see the datasource sketch after these steps).
    Note: the <defaultAutoCommit>false</defaultAutoCommit> property is mandatory for the "jdbc/WSO2AM_DB" datasource.
  3. Edit social.xml in <AppM_HOME>/repository/conf as follows.
  4. Copy the Oracle JDBC driver that is compatible with your Oracle database to <AppM_HOME>/repository/components/lib.
  5. Start the server with the -Dsetup option: sh wso2server.sh -Dsetup
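The datasource snippet for step 2 was not captured above; a minimal Oracle datasource entry for master-datasources.xml might look like this (URL, credentials, and pool settings are illustrative; note the defaultAutoCommit flag mentioned in the note above):

<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:oracle:thin:@localhost:1521/ORCL</url>
            <username>appm_user</username>
            <password>appm_password</password>
            <driverClassName>oracle.jdbc.OracleDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <defaultAutoCommit>false</defaultAutoCommit>
        </configuration>
    </definition>
</datasource>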

Suhan Dharmasuriya

Where did everything go in Puppet 4.x?

This is a summary of [1].

  • New All-in-One puppet-agent package - On managed *nix systems, you’ll now install puppet-agent instead of puppet. (This package also provides puppet apply, suitable for standalone Puppet systems.) It includes tools (private versions) like Facter, Hiera, and Ruby; also MCollective.
  • *nix executables are in /opt/puppetlabs/bin/ - On *nix platforms, the main executables moved to /opt/puppetlabs/bin. This means Puppet and related tools aren’t included in your PATH by default. You have to add it to your PATH or use full path when running puppet commands. 
    1. Private bin directories - The executables in /opt/puppetlabs/bin are just the “public” applications that make up Puppet. Private supporting commands like ruby and gem are in /opt/puppetlabs/puppet/bin
  • *nix confdir is Now /etc/puppetlabs/puppet - Puppet’s system confdir (used by root and the puppet user) is now /etc/puppetlabs/puppet, instead of /etc/puppet. Open source Puppet now uses the same confdir as Puppet Enterprise.
    1. ssldir is inside confdir - The default location is in the $confdir/ssl on all platforms.
    2. Other stuff in /etc/puppetlabs - Other configs are in /etc/puppetlabs directory. Puppet Server now uses /etc/puppetlabs/puppetserver, and MCollective uses /etc/puppetlabs/mcollective.
  • New codedir holds all modules(1), manifests(1) and data(2) - The default codedir location is /etc/puppetlabs/code on *nix. It has environments, modules directories and hiera.yaml config file.
    1. Directory environments are always on - The default environmentpath is $codedir/environments and a directory is created at install for the default production environment. Modules are in $codedir/environments/production/modules and main manifest is in $codedir/environments/production/manifests. However you can still use global modules in $codedir/modules and a global manifest.
    2. Hiera data goes in environments by default - Hiera’s default settings now use an environment-specific datadir for the YAML and JSON backends. So the production environment’s default Hiera data directory would be /etc/puppetlabs/code/environments/production/hieradata
  • Some other directories have moved - The system vardir for puppet agent has moved, and is now separate from Puppet Server’s vardir. For *nix: /opt/puppetlabs/puppet/cache. The rundir, where the service PID files are kept, (on *nix) has moved to /var/run/puppetlabs. (Puppet Server has a puppetserver directory in this directory.)


Gobinath Loganathan

JPA - Hello, World! using Hibernate 5

The article JPA - Hello, World! using Hibernate explains how to use Hibernate v4.x. This tutorial introduces using JPA with the latest Hibernate v5.2.3. Like the previous article, it targets beginners who are new to Hibernate and JPA. You need Java 1.8, Eclipse for Java EE developers, and a MySQL server on your system in order to try this tutorial. Step 1: Create a database “javahelps

Lakshani Gamage

Start Multiple WSO2 IoTS Instances on the Same Computer

If you want to run multiple WSO2 IoTS on the same machine, you have to change the default ports with an offset value to avoid port conflicts. The default HTTP and HTTPS ports (without offset) of a WSO2 product are 9763 and 9443 respectively.

Here are the steps to offset ports. Let's assume you want to increase all ports by 1.

  1. Set Offset value to 1 in <IoTS_HOME>/repository/conf/carbon.xml (see the carbon.xml sketch after these steps).

  3. Change the hostURL under <authenticator class="org.wso2.carbon.andes.authentication.andes.OAuth2BasedMQTTAuthenticator"> in <IoTS_HOME>/repository/conf/broker.xml according to the port offset.
    <authenticator class="org.wso2.carbon.andes.authentication.andes.OAuth2BasedMQTTAuthenticator">
    <property name="hostURL">https://<IoTS_HOST>:<IoTS_PORT>/services/OAuth2TokenValidationService</property>
    <property name="username">admin</property>
    <property name="password">admin</property>
    <property name="maxConnectionsPerHost">10</property>
    <property name="maxTotalConnections">150</property>
    </authenticator>

  5. Start the Server.
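For reference on step 1, the offset lives in the <Ports> section of carbon.xml; a minimal sketch (the remaining port entries are omitted):

<Ports>
    <!-- All transport ports defined below are shifted by this offset -->
    <Offset>1</Offset>
</Ports>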

Dumidu Handakumbura

How to find more needles

If you're someone like me who needs to find needles in log haystacks then bash and grep are surely your friends. Here are two tricks I've come to use to make my work easier.

Making errors scream!

Okay, so you have four or five terminal tabs tailing different but related logs and the logs keep piling up. A script like the one below should be useful (given that you have headphones on, unless you enjoy getting weird looks from people around you).

The script tails a log periodically and checks for the existence of a word (or phrase). If a match is made it plays an audio file (wave) to get your attention.
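The embedded script did not survive here; a minimal sketch of the behaviour described above (log path, phrase, and wav file are placeholders, and aplay can be swapped for any command-line audio player):

#!/bin/bash
# Watch a log file and play a sound whenever a phrase appears in new lines.
LOG_FILE="$1"      # e.g. wso2carbon.log
PHRASE="$2"        # e.g. "ERROR"
ALERT_WAV="$3"     # e.g. alert.wav

tail -n0 -F "$LOG_FILE" | while read -r line; do
    if echo "$line" | grep -q "$PHRASE"; then
        aplay "$ALERT_WAV" >/dev/null 2>&1 &
    fi
done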



Redirect errors to a file

You have a server with the debug log level enabled at the root. The logs are putting on MBs instead of KBs. Grep out what you need into a separate file to make analysis more manageable.
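Again the snippet itself is missing; something along these lines does the trick (file names are placeholders):

# Keep only the lines of interest in a smaller, separate file while the log keeps growing.
tail -F wso2carbon.log | grep --line-buffered "ERROR" > errors-only.log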



Dumidu Handakumbura

Thoughts on the 6th Colombo IoT meetup

I suppose I have to leave this post here for the lack of a better place.

The meetup was titled Rise with Cloud Intelligence and, staying with the theme, the talks revolved around IoT PaaS offerings. The first couple of speakers (employed at Persistent Systems and Virtusa Polariz respectively) talked about their experiences working with IBM Bluemix; their segments leaned more towards demonstrations and explanations of the work they had done with the platform. Considering the time constraints of the event, I felt it would have been more useful had some of these talks been kept at a conceptual level, because at times it seemed like time was wasted explaining simple nitty-gritty details. Rounding up, the segment gave a holistic picture of the capabilities of IBM Bluemix.

The segment on Bluemix was followed by an interesting talk on an IoT-enabled PaaS solution by two gentlemen with thick American accents. The first speaker (one Dean Hamilton) started off sharing some statistics on the IoT space (published by an organization going by the name of ABI Research). The space, according to the speaker, can be divided into 3 subcategories: the device manufacturers (hardware), connectivity (the business of getting the sensor data to the servers), and the server side of things (control and analytics). Common sense and the statistics shown to us say most of the big IoT money (that everyone is talking about) is expected to be made in the software and services category.

Interestingly, the solution Dean presented to us was aimed at the first category: an IoT-enabled PaaS that device manufacturers can use to monetize sensor data pushed back to them, to get a piece of the IoT pie. He took the example of how a manufacturer of agricultural machinery (harvesters and such) could push the sensor data onto the PaaS, have it enriched with other services (aligning with the manufacturer's interests), and sell it to any interested party (such as fertilizer manufacturers). Their solution tries to cater to this need.

As an engineer with the aim of attending such events as a way to keep in beat with happenings of the local community I walked away satisfied.          

Anupama PathirageHow to use Class Mediator in WSO2 ESB

The Class Mediator creates an instance of a custom-specified class and sets it as a mediator. Use the Class mediator for user-specific, custom developments only when there is no built-in mediator that already provides the required functionality.

Creating a Class Mediator

There are two ways you can create a class mediator:
  •     Implement the mediator interface
  •     Extend AbstractMediator class

We can use WSO2 Developer Studio to create a class mediator easily.

Create a new Mediator Project by selecting File -> New Project -> WSO2 -> Extensions -> Project Types -> Mediator Project.

It will create the class mediator by extending AbstractMediator. We then need to implement the mediate method, which is invoked when the mediator is executed in the mediation flow. The return value of the mediate method decides whether the mediation flow should continue further or not.

MessageContext is the representation of a message within the ESB mediation flow. In the mediate method we get access to the message context, through which we can read the message payload, message headers, properties, ESB configuration, etc. We can also pass parameters to the mediator from its configuration.

A sample class mediator implementation is as follows.

package org.wso2.test;

import java.util.Enumeration;
import java.util.Map;
import java.util.Properties;

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class SampleClassMediator extends AbstractMediator {

    private String classProperty;

    public boolean mediate(MessageContext context) {
        // Mediator property (set from the <class> mediator configuration)
        System.out.println("------Mediator Property-------");
        System.out.println("Using Property :  " + classProperty);

        // Accessing message (Synapse) properties
        System.out.println("------Message Properties-------");
        String prop = (String) context.getProperty("StringProp");
        System.out.println("Using msg Prop : " + prop);

        // Accessing Axis2 properties
        System.out.println("------Axis2 Properties-------");
        Map<String, Object> mapAxis2Properties =
                ((Axis2MessageContext) context).getAxis2MessageContext().getProperties();
        for (Map.Entry<String, Object> entry : mapAxis2Properties.entrySet()) {
            System.out.println("AXIS:" + entry.getKey() + ", " + entry.getValue());
        }

        // Accessing transport headers
        System.out.println("------Transport Headers-------");
        Map<String, Object> mapTransportProperties = (Map<String, Object>)
                ((Axis2MessageContext) context).getAxis2MessageContext().getProperty("TRANSPORT_HEADERS");
        for (Map.Entry<String, Object> entry : mapTransportProperties.entrySet()) {
            System.out.println("TRANS:" + entry.getKey() + ", " + entry.getValue());
        }

        // Accessing Synapse configuration properties
        System.out.println("------Synapse Config Properties-------");
        Properties p = ((Axis2MessageContext) context).getEnvironment()
                .getSynapseConfiguration().getProperties();
        Enumeration keys = p.keys();
        while (keys.hasMoreElements()) {
            String key = (String) keys.nextElement();
            String value = (String) p.get(key);
            System.out.println("PRP: " + key + " | " + value);
        }

        // Accessing the SOAP envelope
        System.out.println("------Soap envelop-------");
        System.out.println("MSG : " + context.getEnvelope());
        return true;
    }

    public String getClassProperty() {
        return classProperty;
    }

    public void setClassProperty(String propValue) {
        this.classProperty = propValue;
    }
}

After implementing the method, export the project as a .jar file, copy it to the <ESB_HOME>/repository/components/lib folder, and restart the ESB.

Creating the Proxy service

You can use the implemented class mediator within a proxy service as follows.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns=""
         <log level="custom">
            <property name="BEGIN" value="Begin Sequence"/>
         <property name="StringProp"
         <class name="org.wso2.test.SampleClassMediator">
            <property name="classProperty" value="5"/>

Then send a request to the proxy service and you will see the following output in the console, as we are printing to standard output.

[2016-10-18 07:03:27,414]  INFO - LogMediator BEGIN = Begin Sequence
------Mediator Property-------
Using Property :  5
------Message Properties-------
Using msg Prop : "TestString"
------Axis2 Properties-------
AXIS:addressing.validateAction, false
AXIS:local_WSO2_WSAS, org.wso2.carbon.core.init.CarbonServerManager@182ab504
AXIS:CARBON_TASK_JOB_METADATA_SERVICE, org.wso2.carbon.task.JobMetaDataProviderServiceHandler@4d72eb52, {}
AXIS:CARBON_TASK_MANAGEMENT_SERVICE, org.wso2.carbon.task.TaskManagementServiceHandler@fdccc49, org.wso2.carbon.event.core.internal.CarbonEventBroker@51c8cdb1
AXIS:CARBON_TASK_MANAGER, org.wso2.carbon.task.TaskManager@1c87c019
AXIS:wso2tracer.msg.seq.buff, org.wso2.carbon.utils.logging.CircularBuffer@467590a0
AXIS:ContainerManaged, true
AXIS:WORK_DIR, /home/anupama/Anupama/Workspace/WSO2Products/ESB/wso2esb-4.9.0/tmp/work
AXIS:PASS_THROUGH_TRANSPORT_WORKER_POOL, org.apache.axis2.transport.base.threads.NativeWorkerPool@2fa2b2a3
AXIS:CARBON_TASK_REPOSITORY, org.apache.synapse.task.TaskDescriptionRepository@389047a3
AXIS:local_current.server.status, RUNNING
AXIS:tenant.config.contexts, {}
AXIS:CONFIGURATION_MANAGER, org.wso2.carbon.mediation.initializer.configurations.ConfigurationManager@1a35cfa0, {}
AXIS:primaryBundleContext, org.eclipse.osgi.framework.internal.core.BundleContextImpl@2506f4
AXIS:GETRequestProcessorMap, {info=org.wso2.carbon.core.transports.util.InfoProcessor@70afacd6, wsdl=org.wso2.carbon.core.transports.util.Wsdl11Processor@4a223fca, wsdl2=org.wso2.carbon.core.transports.util.Wsdl20Processor@7035d472, xsd=org.wso2.carbon.core.transports.util.XsdProcessor@44c95d1b, tryit=org.wso2.carbon.tryit.TryitRequestProcessor@490d9048, wadltryit=org.wso2.carbon.tryit.WADLTryItRequestProcessor@120a3acc, stub=org.wso2.carbon.wsdl2form.StubRequestProcessor@4c42eabb, wsdl2form=org.wso2.carbon.wsdl2form.WSDL2FormRequestProcessor@42ed57ad, wadl2form=org.wso2.carbon.tryit.WADL2FormRequestProcessor@67c70068}
------Transport Headers-------
TRANS:Content-Type, application/soap+xml; charset=UTF-8; action="urn:mediate"
TRANS:Cookie, menuPanel=visible; menuPanelType=main; region1_manage_menu=visible; JSESSIONID=4B3870641F979576800AB05FAEB9360D; requestedURI=../../carbon/admin/index.jsp; region1_configure_menu=none; region3_registry_menu=none; region4_monitor_menu=none; region5_tools_menu=none; current-breadcrumb=manage_menu%2Cservices_menu%2Cservices_list_menu%23
TRANS:Host, anupama:8280
TRANS:Transfer-Encoding, chunked
TRANS:User-Agent, Axis2
------Synapse Config Properties-------
PRP: synapse.xpath.func.extensions |
PRP: statistics.clean.interval | 1000
PRP: synapse.commons.json.preserve.namespace | false
PRP: resolve.root | ./.
PRP: synapse.home | .
PRP: __separateRegDef | true
PRP: synapse.global_timeout_interval | 120000
PRP: synapse.temp_data.chunk.size | 3072
PRP: statistics.clean.enable | true
PRP: synapse.observers | org.wso2.carbon.mediation.dependency.mgt.DependencyTracker
PRP: synapse.sal.endpoints.sesssion.timeout.default | 600000
PRP: | org.wso2.carbon.mediation.initializer.handler.CarbonTenantInfoConfigurator
------Soap envelop-------
MSG : <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv=""><soapenv:Body><test>Test Data</test></soapenv:Body></soapenv:Envelope>

Dhananjaya jayasingheConfigure WSO2 products for Continuous JFR Recordings

Production systems are exposed to heavy traffic every day, and sometimes they do not serve requests as expected. In those kinds of scenarios, we need a way to figure out what went wrong in the past few hours.

With Java Flight Recorder, if we have already enabled it, it is very easy to figure out what went wrong, using information on:

  • Memory Usage
  • CPU Usage 
  • Thread Usage 
  • Etc.

In order to configure a WSO2 server to continuously record this information, you can change the startup script of the WSO2 server as follows.

For that, open the startup script (wso2server.sh) located in the bin directory of the WSO2 server and add the following JVM options to it.

 -XX:+UnlockCommercialFeatures \  
-XX:+FlightRecorder \
-XX:FlightRecorderOptions=defaultrecording=true,disk=true,maxage=60m,repository=./tmp,dumponexit=true,dumponexitpath=./ \

Once you add them, it will look like the following.
  -Dfile.encoding=UTF8 \   
-XX:+UnlockCommercialFeatures \
-XX:+FlightRecorder \
-XX:FlightRecorderOptions=defaultrecording=true,disk=true,maxage=60m,repository=./tmp,dumponexit=true,dumponexitpath=./ \
org.wso2.carbon.bootstrap.Bootstrap $

Then you can save the file and restart the server. It will automatically dump recordings to the tmp directory of your WSO2 server.

Dhananjaya jayasingheAcquire Heap Dump of Java Process

When it comes to production troubleshooting with Java servers, analyzing memory consumption is a key task. Normally, if we have configured JVM options such as -XX:+HeapDumpOnOutOfMemoryError (optionally with -XX:HeapDumpPath to set the location), a heap dump is created automatically when an OutOfMemoryError occurs.

There are some OOM exceptions that do not produce a heap dump, since they are not related to heap memory. You can read about the various OutOfMemoryError types in the article [1].

But if you need to get a heap dump at a time when the server is consuming memory but has not yet thrown an OOM exception, you can use the following command.

Assuming that you have set JAVA_HOME, and <PID> is the process ID of the Java process:

In a Linux based system.
 jmap -dump:format=b,file=./heap.hprof <PID>  

In a windows based system

 <JAVA_HOME>/bin/jmap -dump:format=b,file=c:\temp\heap.hprof <PID>  
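
As a side note (not part of the original post), if you need to trigger a heap dump from inside the JVM itself, for example from a diagnostic endpoint, the HotSpot diagnostic MXBean offers the same capability as jmap. A minimal Java sketch:

import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    // Writes a heap dump to the given file path; the file must not already exist.
    public static void dumpHeap(String filePath, boolean liveObjectsOnly) throws Exception {
        HotSpotDiagnosticMXBean diagnosticBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveObjectsOnly = true dumps only objects reachable from GC roots.
        diagnosticBean.dumpHeap(filePath, liveObjectsOnly);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("./heap.hprof", true);
    }
}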

Once you obtain the heap dump, You can analyze it using tools like MAT, VisualVM.


Yashothara ShanmugarajahRabbitMQ SSL connection Using RabbitMQ Event Adapter in WSO2 CEP - Part 2

Hi all,

This is a continuation of the previous blog. In the previous blog we looked at the RabbitMQ broker side configuration. Here we are going to look at the CEP side configuration. The RabbitMQ receiver is an extension for CEP, so we need to download it and add it to the CEP. The steps are given below.

  1.  Please visit this link to download RabbitMQ Receiver extension.
  2. Download rabbitmq client jar from this link. 
  3. Place the extension .jar file into the <CEP_HOME>/repository/components/dropins directory which you downloaded in step 1.
  4. Place the rabbitmq client .jar file into the <CEP_HOME>/repository/components/lib directory which you downloaded in step 2.
  5. Start the CEP server by going to the <CEP_HOME>/bin directory in a terminal and running the command: sh ./wso2server.sh
  6. Create a Stream in CEP. Refer more here.
  7. Go to Main => Manage => Receiver to open the Available Receivers page and then click Add Event Receiver in the management console. You can reach the management console at https://localhost:9443/carbon (with the default port configuration).
  8. Select Input Event Adapter Type as rabbitmq.
  9. Then you will get a page like below.

   10. Then you need to specify values for the properties shown above. Here I have mentioned the important and mandatory property values.

  • Host Name: localhost
  • Host Port: 5671
  • Username: guest
  • Password: guest
  • Queue Name: testsslqueue
  • Exchange Name: testsslexchange
  • SSL Enabled: True
  • SSL Keystore Location: ../client/keycert.p12
  • SSL Keystore Type: PKCS12
  • SSL Keystore Password: MySecretPassword
  • SSL Truststore Location: ../rabbitstore (These are the values we took from the part 1 blog step 5)
  • SSL Truststore Type: JKS
  • SSL Truststore Password: rabbitstore (This is the password we used with the keytool command in step 5 of part 1)
  • Connection SSL Version: SSL
After specifying the above values, select the stream you created earlier and save it.

Then, if there is a message in the queue you specified on the RabbitMQ broker, it will be received by CEP. The data from the queue can then be used in an execution plan for further processing, and we can publish the results.

Yashothara ShanmugarajahRabbitMQ SSL connection Using RabbitMQ Event Adapter in WSO2 CEP - Part 1

Hi all,

I am writing this blog assuming you are familiar with the RabbitMQ broker. Here I mainly focus on securing the connection between the RabbitMQ message broker and WSO2 CEP, i.e. how the WSO2 CEP receiver receives messages from the RabbitMQ broker securely. In the use case explained here, CEP acts as a consumer and consumes messages from the RabbitMQ server; simply put, CEP is the client and the RabbitMQ server is the server.

 Introduction to RabbitMQ SSL connection

Over a normal connection we send messages unsecured, but confidential information such as credit card numbers cannot be sent that way. For that purpose we use SSL (Secure Sockets Layer). SSL allows sensitive information to be transmitted securely and ensures that all data passed between the server and client remains private and intact. SSL is an industry-standard security protocol; it determines the variables of the encryption for both the link and the data being transmitted.


  1.  As the first step, we need to create our own certificate authority.
    • For that, open a terminal and go to a specific folder (location) using the cd command.
    • Then use below commands.
      • $ mkdir testca
      • $ cd testca
      • $ mkdir certs private
      • $ chmod 700 private
      • $ echo 01 > serial
      • $ touch index.txt
    • Then create a new file using the following command, inside the testca directory.
      • $ gedit openssl.cnf

        When we use this command, a file will open in gedit. Copy and paste the following content into it and save it.

        [ ca ]
        default_ca = testca

        [ testca ]
        dir = .
        certificate = $dir/cacert.pem
        database = $dir/index.txt
        new_certs_dir = $dir/certs
        private_key = $dir/private/cakey.pem
        serial = $dir/serial

        default_crl_days = 7
        default_days = 365
        default_md = sha256

        policy = testca_policy
        x509_extensions = certificate_extensions

        [ testca_policy ]
        commonName = supplied
        stateOrProvinceName = optional
        countryName = optional
        emailAddress = optional
        organizationName = optional
        organizationalUnitName = optional

        [ certificate_extensions ]
        basicConstraints = CA:false

        [ req ]
        default_bits = 2048
        default_keyfile = ./private/cakey.pem
        default_md = sha256
        prompt = yes
        distinguished_name = root_ca_distinguished_name
        x509_extensions = root_ca_extensions

        [ root_ca_distinguished_name ]
        commonName = hostname

        [ root_ca_extensions ]
        basicConstraints = CA:true
        keyUsage = keyCertSign, cRLSign

        [ client_ca_extensions ]
        basicConstraints = CA:false
        keyUsage = digitalSignature
        extendedKeyUsage = 1.3.6.1.5.5.7.3.2

        [ server_ca_extensions ]
        basicConstraints = CA:false
        keyUsage = keyEncipherment
        extendedKeyUsage = 1.3.6.1.5.5.7.3.1
        • Now we can generate the key and certificates that our test Certificate Authority will use. Still within the testca directory:
          $ openssl req -x509 -config openssl.cnf -newkey rsa:2048 -days 365 -out cacert.pem -outform PEM -subj /CN=MyTestCA/ -nodes
          $ openssl x509 -in cacert.pem -out cacert.cer -outform DER

  2.  Generating certificate and key for the Server
    • Apply following commands. (Assuming you are still in testca folder)
      • $ cd ..
        $ ls
        $ mkdir server
        $ cd server
        $ openssl genrsa -out key.pem 2048
        $ openssl req -new -key key.pem -out req.pem -outform PEM -subj /CN=$(hostname)/O=server/ -nodes
        $ cd ../testca
        $ openssl ca -config openssl.cnf -in ../server/req.pem -out ../server/cert.pem -notext -batch -extensions server_ca_extensions
        $ cd ../server
        $ openssl pkcs12 -export -out keycert.p12 -in cert.pem -inkey key.pem -passout pass:MySecretPassword
  3.  Generating certificate and key for the client
    •  Apply following commands.
      • $ cd ..
        $ ls
        server testca
        $ mkdir client
        $ cd client
        $ openssl genrsa -out key.pem 2048
        $ openssl req -new -key key.pem -out req.pem -outform PEM -subj /CN=$(hostname)/O=client/ -nodes
        $ cd ../testca
        $ openssl ca -config openssl.cnf -in ../client/req.pem -out ../client/cert.pem -notext -batch -extensions client_ca_extensions
        $ cd ../client
        $ openssl pkcs12 -export -out keycert.p12 -in cert.pem -inkey key.pem -passout pass:MySecretPassword
  4. Configuring RabbitMQ Server
    To enable SSL support in RabbitMQ, we need to provide RabbitMQ with the location of the root certificate, the server's certificate file, and the server's key. We also need to tell it to listen on a socket for SSL connections, whether it should ask clients to present certificates, and whether it should accept a client certificate when a chain of trust to it cannot be established.

    For that we need to create a file named "rabbitmq.config" inside "/etc/rabbitmq" and copy the following configuration into it.

    [
      {rabbit, [
        {ssl_listeners, [5671]},
        {ssl_options, [{cacertfile,"/path/to/testca/cacert.pem"},
                       {certfile,"/path/to/server/cert.pem"},
                       {keyfile,"/path/to/server/key.pem"},
                       {verify,verify_peer},
                       {fail_if_no_peer_cert,false}]}
      ]}
    ].
  5. Trust the Client's Root CA
    Use the following command.
    $ cat testca/cacert.pem >> all_cacerts.pem
    To create the Java trust store (rabbitstore) that the client uses to validate the server certificate, use the command below.
    keytool -import -alias server1 -file /path/to/server/cert.pem -keystore /path/to/rabbitstore

    If you want to study more about this configuration, go to this link. Now we have finished the server-side configuration and created the certificates. In my next blog, I will continue by describing the CEP-side configuration. :)

Pubudu Priyashan[WSO2 ESB] Switch mediator example.

Anyone with a little bit of programming knowledge understands the behavior of a switch statement.

Suhan DharmasuriyaAccessing Google APIs using OAuth 2.0

Here is my use case.
I recently wanted to use the WSO2 ESB Google Spreadsheet connector. Before attempting any other spreadsheet operation in the connector, I have to call the googlespreadsheet.init element first. This configuration authenticates with Google Spreadsheets by configuring the user credentials using OAuth 2.0 for accessing the Google account that contains the spreadsheets.

I couldn't initially find any simple instructions on the web on how to obtain the following four elements from my Google account, so I am sharing the process here hoping it will save a lot of your valuable time.
1. Client ID
2. Client Secret
3. Refresh Token
4. Access Token
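
Before the step-by-step walkthrough, a quick note on how these four values fit together at runtime (this sketch is mine and is not part of the connector's code): API calls carry the short-lived access token, and when it expires a new one is fetched from Google's OAuth 2.0 token endpoint using the client ID, client secret and refresh token. The connector performs this refresh for you; the Java code below only illustrates the exchange, with placeholder credential values.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RefreshAccessToken {

    public static void main(String[] args) throws Exception {
        // Replace with the values retrieved from the Google developer console / OAuth playground.
        String clientId = "<CLIENT_ID>";
        String clientSecret = "<CLIENT_SECRET>";
        String refreshToken = "<REFRESH_TOKEN>";

        String body = "grant_type=refresh_token"
                + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
                + "&refresh_token=" + URLEncoder.encode(refreshToken, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(
                "https://www.googleapis.com/oauth2/v4/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The JSON response contains a fresh access_token and its expiry time.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}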

1. Go to the Google API Console (console.developers.google.com).
2. Under Credentials -> Create Credentials -> OAuth client ID

3. Select Application Type: Web Application
    Name: <appName>
   Authorized redirect URIs: https://developers.google.com/oauthplayground (the OAuth 2.0 Playground, which we will use to generate the tokens)

4. Click Create.
Google will immediately show your Client ID and Client Secret as follows (you can get these details later also).

Following is how the edit view shows you the required information.

5. Go to the Google OAuth 2.0 Playground (developers.google.com/oauthplayground).
6. Click the settings icon.
7. Select the Use your own OAuth credentials check box
8. Fill two fields OAuth Client ID and OAuth Client secret with the values you retrieved in step 4.

9. Now, in the playground's Step 1, select all four Google Sheets API v4 scopes as shown below.
10. Click on Authorize APIs.
11. When asked, approve permission to access the selected scopes.

 12. In the playground's Step 2, click the Exchange authorization code for tokens button.
13. These are your Refresh and Access Tokens

Now all four fields required are retrieved.
Following is how you will be using googlespreadsheet.init element inside your proxy/sequence configuration.


Following is a sample JSON payload to invoke googlespreadsheet.init method in ESB google spreadsheet connector. Note that the registry path is optional here.

  "googlespreadsheetRefreshToken": "1/KSKKDKDK_Isdf_dsalkfdffafdskfdsafdksfj_sdfSWE",
  "googlespreadsheetClientId": "",
  "googlespreadsheetClientSecret": "0GeUJ4KK3Flk8C9S0Ua51jZS",
  "googlespreadsheetApiUrl": ""

Note: As of today, the WSO2 ESB Google Spreadsheet connector is compatible with the Google Spreadsheets API v3 (Legacy). For authentication purposes, however, there is not much of a difference. I have written my own connector for the Google Spreadsheets API v4 to use v4-specific methods like spreadsheets.create.
Refer to [1] for more information on writing your own connector.


Suhan DharmasuriyaBuilding/Uploading/Using the Sample ESB Connector in few minutes

I have used WSO2 ESB 5.0.0 in this article.
Go to an empty folder and run the following command to generate the maven project sample connector code.

mvn archetype:generate -DarchetypeGroupId=org.wso2.carbon.extension.archetype -DarchetypeArtifactId=org.wso2.carbon.extension.esb.connector-archetype -DarchetypeVersion=2.0.0 -DgroupId=org.wso2.carbon.connector -DartifactId=org.wso2.carbon.connector.sample -Dversion=1.0.0 -DarchetypeRepository=

You will be asked to specify a name for the connector.
Define value for property 'connector_name': : Sample
When asked to confirm the properties, type Y and press Enter.

Issue the following command to build the connector zip file.

mvn clean install -Dmaven.test.skip=true

Download/Start WSO2 ESB 4.9.0 or ESB 5.0.0 Server.

Log in to the management console (default username/password: admin/admin) and upload the created connector zip file.
(The zip file will be in the ../org.wso2.carbon.connector.sample/target/ directory.)

Add the following proxy service. (Home -> Manage -> Services -> Add -> Proxy Service)
Select custom proxy service -> view source -> paste following code -> Save

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns=""
         <log level="custom">
            <property name="**********" value="Inside sampleConnectorProxy **********"/>

Following will be printed in the server log file

TID: [-1234] [] [2016-10-11 13:33:36,755]  INFO {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker} -  Proxy service : sampleConnectorProxy was added to the Synapse configuration successfully {org.wso2.carbon.mediation.dependency.mgt.DependencyTracker}
TID: [-1234] [] [2016-10-11 13:33:36,755]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Building Axis service for Proxy service : sampleConnectorProxy {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-10-11 13:33:36,757]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Adding service sampleConnectorProxy to the Axis2 configuration {org.apache.synapse.core.axis2.ProxyService}
TID: [-1234] [] [2016-10-11 13:33:36,758]  INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} -  Deploying Axis2 service: sampleConnectorProxy {super-tenant} {org.wso2.carbon.core.deployment.DeploymentInterceptor}
TID: [-1234] [] [2016-10-11 13:33:36,759]  INFO {org.apache.synapse.core.axis2.ProxyService} -  Successfully created the Axis2 service for Proxy service : sampleConnectorProxy {org.apache.synapse.core.axis2.ProxyService}
Go to services -> Try this service -> Send

Following will be printed in the server log file

TID: [-1234] [] [2016-10-11 13:33:47,314]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  ********** = Inside sampleConnectorProxy ********** {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-10-11 13:33:47,316]  INFO {org.apache.synapse.mediators.builtin.LogMediator} -  To: /services/sampleConnectorProxy.sampleConnectorProxyHttpSoap12Endpoint, WSAction: urn:mediate, SOAPAction: urn:mediate, MessageID: urn:uuid:e3b5f31c-37b8-4ca8-b483-9bbc4a6daf04, Direction: request, *******template_param******** = Hi, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv=""><soapenv:Body/></soapenv:Envelope> {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1234] [] [2016-10-11 13:33:47,316]  INFO {org.wso2.carbon.connector.sampleConnector} -  sample sample connector received message :Hi {org.wso2.carbon.connector.sampleConnector}

To modify the sample connector and experiment with it, you can set up your development environment. I'm using IntelliJ IDEA Community Edition; the following command generates the IDEA project files.

mvn clean install idea:idea

Learn more about the components inside the connector.

Creating a third party connector and publishing it in WSO2 store.

Lakshani Gamage[WSO2 App Manager]Registry Extension (RXT) Files

All data related to any application you create in WSO2 App Manager is stored in the registry that is embedded in the server. That data is stored in a format defined in a special set of files called "Registry Extensions (RXTs)" [1]. When you save a web application, the format in which it is saved in the registry is given in "webapp.rxt", and that of mobile applications in "mobileapp.rxt". You can see these files in the file system under the <APPM_HOME>/repository/resources/rxts folder. When you want to add a new field to an application, you need to edit these RXT files.

These RXT files can also be found in Home > Extensions > Configure > Artifact Types in App Manager management console like below.

But when you want to edit these files, it is better to edit them in the file system, because every time a new tenant is created, the relevant RXTs are picked from the file system to populate the data in the registry.

App Manager reads RXT files from the file system and populates them in the management console only if they are not already populated. So, whenever you edit RXTs from the file system, you have to delete the rxt files from the management console and restart the server to populate updated RXT files in the management console. If you have multiple tenants, you need to delete RXT files of each tenant from the management console.

There are "field" tags in every RXT file. Each field tag contains the field type and whether it is a required field or not. See the two examples below.

eg :
<field type="text" required="true">

<field type="text-area">
 <name>Terms and Conditions</name>

In RXT files, there are two types of fields: "text" and "text-area". "text" is used for normal text fields, and "text-area" is used for large text content. If you want a field to hold a double, integer, etc., you have to use a "text" field and perform type validation in the application code.

Gobinath LoganathanNever Duplicate A Window Again - WSO2 Siddhi Event Window

A new feature known as Event Window is introduced in WSO2 Siddhi 3.1.1 version, which is quite similar to the named window of Esper CEP in some aspects. This article presents the application and the architecture of Event Window using a simple example. According to Siddhi version 3.1.0, a window can be defined on a stream inside a query and the output can be used in the same query itself. For

sanjeewa malalgodaSimple auto scaling logic for software scaling

In this post I will list sample code (not exact code, more like pseudo code) to explain how auto-scaling components work. We can use this logic in scalable load balancers to make decisions based on the number of in-flight requests.

required_instances = request_in_fly / number_of_max_requests_per_instance;

if (required_instances > current_active_instance) {
    // This needs a scale up
    if (required_instances < max_allowed) {
        spawn_instances(required_instances - current_active_instance);
    } else {
        // Cannot handle the load with the allowed maximum number of instances
    }
} else {
    // This is a scale down decision
    if (required_instances > min_allowed) {
        terminate_instances(current_active_instance - required_instances);
    }
}
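
For readers who prefer compilable code, here is the same decision logic as a small Java sketch; the helper methods (spawnInstances, terminateInstances) are hypothetical stand-ins for whatever your load balancer or IaaS layer actually exposes.

public class SimpleAutoScaler {

    private final int maxRequestsPerInstance;
    private final int minAllowed;
    private final int maxAllowed;

    public SimpleAutoScaler(int maxRequestsPerInstance, int minAllowed, int maxAllowed) {
        this.maxRequestsPerInstance = maxRequestsPerInstance;
        this.minAllowed = minAllowed;
        this.maxAllowed = maxAllowed;
    }

    // Called periodically with the current load and instance count.
    public void scale(int requestsInFly, int currentActiveInstances) {
        // Ceiling division, so that a partial instance worth of load still counts.
        int requiredInstances =
                (requestsInFly + maxRequestsPerInstance - 1) / maxRequestsPerInstance;

        if (requiredInstances > currentActiveInstances) {
            if (requiredInstances < maxAllowed) {
                spawnInstances(requiredInstances - currentActiveInstances);
            } else {
                System.out.println("Cannot handle the load within the allowed maximum of "
                        + maxAllowed + " instances");
            }
        } else if (requiredInstances < currentActiveInstances && requiredInstances > minAllowed) {
            terminateInstances(currentActiveInstances - requiredInstances);
        }
    }

    // Hypothetical hooks into the load balancer / IaaS layer.
    private void spawnInstances(int count) {
        System.out.println("Spawning " + count + " new instance(s)");
    }

    private void terminateInstances(int count) {
        System.out.println("Terminating " + count + " instance(s)");
    }
}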

Pubudu Priyashan[ESB] What does it mean by an anonymous endpoint or a sequence

While I was mapping automated test cases with manual test cases in WSO2 test case repository I came across the terms anonymous endpoint and…

Himasha GurugeCreating a custom mode and worker files in Ace editor

[1] and [2] contain a good example of how to create your own worker code and integrate the worker with the mode through a worker client. The problem I faced was that, even though this content was added to two new files, syntax validation was not working, because the worker file was lacking some initial methods that come from the Ace editor itself. So, in order to create your custom worker with all those basic methods included, this is what you can follow.

Let us assume your custom language is called 'lang1'.
1. Create the mode and worker content files with the naming convention 'lang1.js' and 'lang1_worker.js'.
2. Download the Ace source code from the Ace GitHub repository.
3. In the Ace source code, go to the '/lib/ace/mode' folder and place your lang1.js and lang1_worker.js files there.
4. Build the Ace source code with the command 'node ./Makefile.dryice.js'. (Check the GitHub page for instructions.)
5. Once it is successfully built, go to the 'build/src' folder, where you can find your mode file as 'lang1.js' and your worker file created as 'worker-lang1.js'.

Now you can place these files in your source code location and work with them. Do remember to use the new worker name when creating the worker client.
var worker = new WorkerClient(["lang1"], "lang1/mode/worker-lang1", "Lang1Module");


Lakshani GamageHow to Send an Email From WSO2 ESB

  1. Configure the mail transport in <ESB_HOME>/repository/conf/axis2/axis2.xml. (There is a mailto transportSender commented out in the default axis2.xml; you can uncomment it and change the parameter values as you want.)
    <transportsender class="org.apache.axis2.transport.mail.MailTransportSender" name="mailto">
    <parameter name="mail.smtp.host">smtp.gmail.com</parameter>
    <parameter name="mail.smtp.port">587</parameter>
    <parameter name="mail.smtp.starttls.enable">true</parameter>
    <parameter name="mail.smtp.auth">true</parameter>
    <parameter name="mail.smtp.user">sender</parameter>
    <parameter name="mail.smtp.password">password</parameter>
    <parameter name="mail.smtp.from"></parameter>
    </transportsender>

  3. If ESB has already started, restart the server.
  4. Log in to Management console and add below proxy.
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns=""
    <address uri=""/>
    <property name="messageType"
    <property name="ContentType" scope="axis2" value="text/html"/>
    <property name="Subject" scope="transport" value="ESB"/>
    <property name="OUT_ONLY" value="true"/>
    <property name="FORCE_SC_ACCEPTED" scope="axis2" value="true"/>
    <address uri=""/>

  6. Invoke the proxy, and you will see a mail from the sender's address in the receiver's inbox.
Note: If you are using Gmail to send the above mail, you have to allow external app access in your Google account, as mentioned here.

Gobinath LoganathanJPA - Hello, World! using Hibernate

Java Persistence API (JPA) is a Java application programming interface specification that describes the management of relational data in applications using Java Platform, Standard Edition and Java Platform, Enterprise Edition. Various frameworks like Hibernate, EclipseLink and Apache OpenJPA provide object relational mapping according to JPA standards. This tutorial guides you to develop a

Dimuthu De Lanerolle

Accessing H2 DB from the browser

1. Navigate to  [Product_Home]/repository/conf/carbon.xml
2. Uncomment below.
        <property name="web" />
        <property name="webPort">8082</property>
        <property name="webAllowOthers" />
        <property name="webSSL" />
        <property name="tcp" />
        <property name="tcpPort">9092</property>
        <property name="tcpAllowOthers" />
        <property name="tcpSSL" />
        <property name="pg" />
        <property name="pgPort">5435</property>
        <property name="pgAllowOthers" />
        <property name="trace" />
        <property name="baseDir">${carbon.home}</property>

3. Check your primary datasource configuration, which points to the H2 datasource, to identify the username and password.


            <description>The datasource used for registry and user manager</description>
            <definition type="RDBMS">
                    <validationQuery>SELECT 1</validationQuery>

4. Type the following URL in the browser: http://localhost:8082 (the webPort configured above)

Gobinath LoganathanBest Download Manager For Linux

Those who switched to Linux from Windows mostly have this question: “What is the alternative for IDM in Linux?”. I accept the truth that…

Nifras IsmailCreate Your First Rails Application

All Rails applications are created in the same way. To create a Rails application, create a directory and run the rails command inside it:



mkdir rails_application_directory
cd rails_application_directory
rails new my_first_app


As seen in the above animation, a skeleton application is created. The skeleton contains the following directories and files.

Directory    Purpose
app/         The core of the application; contains models, views and controllers
app/assets   JavaScript, CSS and images
config/      Application configuration such as database configuration and routing
db/          Database files
doc/         Documentation files
lib/         Library modules
log/         Log files
public/      Assets accessible to the public (error pages, etc.)
test/        Application test files
tmp/         Temporary files
vendor/      Third-party plugins and gems
README.rdoc  A brief description of the application
Gemfile      Gem requirements
config.ru    Configuration file for Rack middleware

Bundle Install

When we finish creating the application, we use the bundle command to install the required gems for the app. The command bundle install is responsible for the installation, and it runs automatically at creation time.

Note: If you make any changes to the Gemfile, you should run the bundle install command to update/install the gems for your application.


In the Gemfile you may notice that some gems are specified with a version number and some are not. Unless you specify a version number, the latest version of the specified gem is fetched. You can pin the version explicitly by specifying it:

gem 'jquery-rails'

To specify the version, pass it as the second parameter:

gem 'jquery-rails', '2.0.0'

You can also restrict a particular gem to a particular group (environment). Let's say we are going to use SQLite only in the development environment; you specify the gem as follows.

group :development do
  gem 'sqlite3', '1.3.5'
end

Once you assemble the Gemfile, don't forget to install the gems for the application.

bundle install

Rails Server

So far we have learned two important Rails commands: rails new and bundle install. Now we are going to see how to run our application. Rails comes with a local web server for your development environment.

The following command shows you how to run your rails application on your local machine.

rails server

After running your application, go to http://localhost:3000/ in your browser. You will see the Rails welcome page as below.


Chathurika Erandi De SilvaReading an XML file and Exposing it using a REST API: WSO2 ESB

In this post, I will explain how to expose the content of an XML file as a REST API via WSO2 ESB.

In order to get this done, we need to read the XML file first. WSO2 ESB does not have a built-in mediator that reads an XML file directly, so I have used a class mediator to read the content and map it to an OM element.

Class mediator [1]
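
The class mediator referenced above appears in the original post as an image. As a rough illustration of the approach (not the author's exact code), a mediator along these lines reads the file whose path is set by the property mediator (the property name FILE_PATH below is just a placeholder) and sets its content into the message body:

package org.wso2.sample;

import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMXMLBuilderFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class XmlFileReaderMediator extends AbstractMediator {

    public boolean mediate(MessageContext context) {
        // Placeholder property name; the sequence sets this with a property mediator.
        String filePath = (String) context.getProperty("FILE_PATH");
        try (InputStream in = new FileInputStream(filePath)) {
            // Parse the file into an OMElement and fully build it.
            OMElement fileContent = OMXMLBuilderFactory.createOMBuilder(in).getDocumentElement();
            fileContent.build();
            fileContent.detach();
            // Replace the current body content with the file content so that
            // the respond mediator can send it back to the caller.
            OMElement existing = context.getEnvelope().getBody().getFirstElement();
            if (existing != null) {
                existing.detach();
            }
            context.getEnvelope().getBody().addChild(fileContent);
        } catch (Exception e) {
            handleException("Error reading XML file: " + filePath, e, context);
        }
        return true;
    }
}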

Thereafter, I have used this class mediator in a sequence and simply returned the content it sets into the message using the respond mediator. A property mediator is used to pass in the file path, as given below.

Finally I have used the above sequence in an API as below


Tharindu EdirisingheOWASP Dependency Check CLI - Analyzing Vulnerabilities in 3rd Party Libraries

When developing software, we have to use 3rd party libraries in many cases. For example, if we want to send an email from our application, we would use a well-known email sending library. If we want to make an HTTP call to an external website or an API from our application, we would use the Apache httpclient library. However, before using a 3rd party library, it is important to check whether there are any known security vulnerabilities reported against it. For that, you can search the National Vulnerability Database [1] and make sure it is safe to use the library in your application. However, it is difficult to do this manually when you have many external dependencies in your application. In such cases, you can use a tool to do the search for you. OWASP Dependency Check [2], which I am going to explain in this blog post, is an example of such a tool.

Here I will explain how to use the command line tool of OWASP Dependency Check to analyze external dependencies and generate a report based on the known vulnerabilities detected. First, download the command line tool from the official website. At the time of this writing, the latest version is 1.4.3.

In the bin directory of the dependency-check tool, you can find the executable scripts. The dependency-check.bat file is for running the tool on Windows and the dependency-check.sh file is for running it on Linux.

If you just execute the script without providing any parameters, you can see the list of parameters that you need to provide for performing the vulnerability analysis and generating reports.

Following are the basic parameters that are required when running a vulnerability analysis.

  • --project : a name for the project, which will appear in the report
  • --scan : the folder that contains the 3rd party dependency libraries
  • --out : the folder where the vulnerability analysis reports should be generated
  • --suppression : an XML file that contains the known vulnerabilities that should be hidden from the report (false positives)

Now let’s do an analysis using OWASP Dependency Check. First I download  commons-httpclient-3.1 and httpclient-4.5.2 libraries and put them in a folder.

Following is the sample command to run for performing the vulnerability analysis.

./dependency-check.sh  --project "<myproject>" --scan <folder containing 3rd party libraries> --out <folder to generate reports> --suppression <xml file containing suppressions>

Here I skip the suppression and just do the analysis. I have put the above downloaded 2 libraries to /home/tharindu/dependencies/mydependencies folder in my filesystem and I would generate the reports to the /home/tharindu/dependencies/reports folder.

./  --project "myproject" --scan /home/tharindu/dependencies/mydependencies --out /home/tharindu/dependencies/reports
When you run the OWASP Dependency Check for the very first time, it would download the known vulnerabilities from the National Vulnerability Database (NVD) and it would maintain these information in a local database. So, when running this for the very first time, it would take some time as it has to download all the vulnerability details.

By default the duration for syncing the local database and NVD is 4 hours. If you have run the Dependency Check within 4 hours, it will just use the data in local database without trying to update the local database with NVD.

Once you run the Dependency Check against the folder where your project dependencies are, it would generate the vulnerability analysis report.

Based on the analysis, I can see that the commons-httpclient-3.1.jar is having 3 known security vulnerabilities, but httpclient-4.5.2.jar is not having any reported security vulnerability.

Following are the 3 known security vulnerabilities reported against commons-httpclient-3.1.jar.

A known security vulnerability has a unique identification number (CVE [3]), a score (CVSS [4], a scale from 0 to 10) and a severity. The severity is decided based on the CVSS score. The identification number follows the format CVE-<reported year>-<sequence number>.

When we identify that there is a known security vulnerability in a 3rd party library, we can check if that library has a higher version where this issue is fixed. In above example httpclient 3.1’s vulnerabilities are fixed in its latest version.

If the latest version of a 3rd party library is also having known vulnerabilities, you can try to use an alternative which has no reported vulnerabilities so you can make sure your project is not dependent on any external library that is not safe to use.

However, there are situations where you have no option other than using a particular 3rd party library even though it has some known vulnerabilities. In such a case, you can go through each vulnerability and check whether it has any impact on your project. For example, the 3 known issues in httpclient 3.1 are related to SSL, hostname and certificate validation. If your project uses this library just to call some URL (API) via HTTP (not HTTPS), then your project is not affected by the reported vulnerabilities of httpclient, so these issues become false positives in your case.

In such a situation, you might want a vulnerability analysis report for 3rd party dependencies that clearly reflects the actual vulnerabilities and hides the false positives. For that you can use the suppress option in Dependency Check.

When you get the Dependency Check report, next to each vulnerability there is a button named suppress. If you want to hide that vulnerability from the report, click this button and it will pop up a message that contains some XML content.

You can create an XML file with the content below and put the XML content you got by clicking the suppress button as child elements under the <suppressions> tag.
<?xml version="1.0" encoding="UTF-8"?>
<suppressions xmlns="">

<!-- add each vulnerability suppression here-->

</suppressions>
A sample is below.

<?xml version="1.0" encoding="UTF-8"?>
<suppressions xmlns="">
     <notes><![CDATA[file name: commons-httpclient-3.1.jar]]></notes>

Then you can run the Dependency Check again with the --suppression parameter where you provide the file path to the XML file that contains the suppressions.

./  --project "myproject" --scan /home/tharindu/dependencies/mydependencies --out /home/tharindu/dependencies/reports --suppression /home/tharindu/dependencies/suppress.xml

Then the report would show how many vulnerabilities are suppressed.

Also the report would contain a new section called Suppressed Vulnerabilities and you can make this section hidden or visible in the report.

In conclusion, before using external libraries as dependencies of your project, it is important to know that they are safe to use. You can simply use a tool like OWASP Dependency Check to do a vulnerability analysis of the 3rd party dependencies, and you can follow this as a process in your organization to ensure that you do not use components with known vulnerabilities in your projects. Using components with known vulnerabilities is one of the major software security risks listed in the OWASP Top 10 [5] as well.


Tharindu Edirisinghe
Platform Security Team

Samitha ChathurangaMerging Traffic Manager and Gateway Profiles - WSO2 APIM

This guide explains how to configure a WSO2 API Manager 2.0.0 cluster, highlighting the specific scenario of merging the Traffic Manager profile into the Gateway. I will describe how to configure a sample API Manager setup that demonstrates merging the Traffic Manager and Gateway profiles.

I will configure the publisher, store and key manager components in a single server instance, as the goal is to illustrate merging the gateway and traffic manager and running that merged instance separately from the other components.

This sample setup consists of following 3 nodes,
  1. Publisher + Store + Key Manager (P_S_K)                       ; offset = 0
  2. Gateway Manager/Worker + Traffic Manager (GM_T)     ; offset = 1
  3. Gateway Worker + Traffic Manager (GW_T)                    ; offset = 2
  • We will refer the 3 nodes as P_S_K , GM_T , GW_T for convenience.
  • There is a cluster of gateways, one node is acting as the manager/worker node and other one is as a simple worker node.
  • Traffic managers are configured with high availability.
  • Port offset is configured as mentioned above. To set the port offsets edit the <Offset> element in <APIM_HOME>/repository/conf/carbon.xml , in each of the nodes.

Figure 1 : Simple architectural diagram of the setup

1. Configuring datasources

We can configure the databases according to the APIM documentation on installing and configuring the databases.
[Please open these documentation links in Chrome or any browser other than Firefox, as Firefox has a bug with Confluence (Atlassian) document links that prevents the page from opening at the expected position.]

Follow the steps in it carefully. Assume that the names of the databases we created are as follows:

       API Manager Database - apimgtdb
       User Manager Database - userdb
       Registry Database    - regdb

In the above-mentioned doc, apply all the steps defined for the store, publisher and key manager nodes to our P_S_K node, because in that documentation the publisher, store and key manager exist on different nodes, whereas in this setup we are using a single node that acts as all 3 components (publisher, store and key manager).

Following is a summary of the configured datasources for each node.
         P_S_K node : apimgtdb, userdb, regdb
         GM_T node / GW_T node : Not required

2. Configuring the connections among the components

You will now configure the inter-component relationships of the distributed setup by modifying their <APIM_HOME>/repository/conf/api-manager.xml files.
This section includes the following sub-topics.
  1. Configuring P_S_K node
  2. Configuring GM_T node & GW_T node

2.1 Configuring P_S_K node

Here we have to configure this node for all the 3 components, publisher, store, key manager related functionalities.

For this follow this steps mentioned in the given docs. (it is recommended not to open these links in Firefox browser ) In them, the setup is created as the publisher, store and key manager that are in separate nodes in a cluster. So follow the steps as per the requirement of yours, considering the port offsets too.

2.1.1 Configurations related to  publisher

Follow all the steps in the above WSO2 documentation except the configuration of the jndi.properties file; the configuration for that file should be changed as follows.

This is for configuring failover for publishing custom templates and block conditions to the Gateway node.
In the <APIM_HOME>/repository/conf/jndi.properties file, the lines

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<Traffic-Manager-host>:5676'
topic.throttleData = throttleData

should be changed as following.

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientID/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://<IP_Of_GM_T_Node>:5673?retries='5'%26connectdelay='50';tcp://<IP_Of_GW_T_Node>:5674?retries='5'%26connectdelay='50''

In the above config,
5673 => 5672+offset of GM_T node
5674 => 5672+offset of GW_T node

2.1.2 Configurations related to  store

Follow all the steps appropriately in the below wso2 documentation link

2.1.3 Configurations related to key manager

Follow all the steps appropriately in the below wso2 documentation link

Note : In the above docs, setup is created as the publisher, store and key manager that are in separate nodes in a cluster. So follow the steps as per the requirement of yours, keeping in mind that you are configuring them into a single node considering the port offsets too.

2.2 Configuring GM_T node & GW_T node

Configurations for the two Gateway + Traffic Manager nodes are very similar, so follow each of the steps below for both nodes. I will mention the steps that differ when required.

Please note that when starting these nodes you have to start them with the default profile, as there is no customized profile for gateway + traffic manager.

2.2.1 Gateway component related configurations

This section involves setting up the gateway component related configurations to enable it to work with the other components in the distributed setup.

I will use G_T as shorthand for the GM_T or GW_T node. Use the node's own IP address in the configurations below.

  1. Open the <APIM_HOME>/repository/conf/api-manager.xml file in the GM_T/GW_T node.   
  2. Modify the api-manager.xml file as follows. This configures the connection to the Key Manager component.

3. Configure key management related communication. (both nodes)

In a clustered setup if the Key Manager is fronted by a load balancer, you have to use WSClient as KeyValidatorClientType in <APIM_HOME>/repository/conf/api-manager.xml. (This should be configured in all Gateway and Key Manager components and so as per our setup configure this in GM_T and GW_T nodes)


4. Configure throttling for the Traffic Manager component. (both nodes). 
Modify the api-manager.xml file as follows

In the above configs, the <connectionfactory.TopicConnectionFactory> element is used to configure the JMS topic URL that the worker node uses to listen for throttling-related events. In this case, each Gateway node has to listen to the topics in both Traffic Managers, because if one node goes down, the throttling procedures should continue without interruption; the throttling-related counters will still be synced with the other node. Hence we have configured a failover JMS connection URL as pointed out above.

2.2.2 Clustering Gateway + Traffic Manager nodes

In our sample setup we are using two nodes.
1. Manager/Worker
2. Worker

We have followed the steps below for Traffic Manager related clustering. If you want to do the configurations for Gateway clustering with a load balancer, follow the documentation, configure the host names in carbon.xml appropriately, add an SVN deployment synchronizer, etc.
Follow the steps below on both nodes for Traffic Manager related clustering.

Open the <AM_HOME>/repository/conf/axis2/axis2.xml file

1. Scroll down to the 'Clustering' section. To enable clustering for the node, set the value of "enable" attribute of the "clustering" element to "true", in each of 2 nodes.

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"

2. Change the 'membershipScheme' parameter to 'wka'.
<parameter name="membershipScheme">wka</parameter>
3. Specify the host used to communicate cluster messages. (in both nodes)

 name="localMemberHost"><ip of this GM_T node></parameter>

4. Specify the port used to communicate cluster messages.

        Let’s give port 4001 in GM_T node.

              <parameter name="localMemberPort">4001</parameter>

        Let’s give port 4000 in the GW_T node.

              <parameter name="localMemberPort">4000</parameter>

5. Specify the name of the cluster this node will join. (for both nodes)

<parameter name="domain">wso2.carbon.domain</parameter>

6. Change the members listed in the <members> element. This defines the WKA members. (for both nodes)

2.2.3 Traffic manager related configurations

This section involves setting up the Traffic manager component related configurations to enable it to work with the other components in the distributed setup. 

1. Delete the contents of <APIM_HOME>/repository/conf/registry.xml file and copy the contents of the <APIM_HOME>/repository/conf/registry_TM.xml file, into the registry.xml file (in both nodes)

2. Remove all the existing webapps and jaggeryapps from the <APIM_HOME>/repository/deployment/server directory.
(in both nodes)

3. High Availability configuration for traffic manager component (in both nodes)
  • Open <APIM_HOME>/repository/conf/event-processor.xml and enable HA mode as below.
    <mode name="HA" enable="true">
  • Set ip for event synchronization
        <hostName>ip     of this node</hostName>

2.2.4 Configuring JMS TopicConnectionFactories

Here we are configuring TopicConnectionFactories so that data can be sent to the traffic managers. In this cluster configuration we use 2 TopicConnectionFactories (one for each node), and each node is configured to send data to both TopicConnectionFactories (its own and the other node's).
So open the <APIM_HOME>/repository/conf/jndi.properties file (in both nodes) and change the

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5672'

line to the following:
connectionfactory.TopicConnectionFactory1 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GM_T>:5673'

And add new line as,
connectionfactory.TopicConnectionFactory2 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GW_T>:5674'

Finally that section would be as below.

# register some connection factories
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.TopicConnectionFactory1 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GM_T>:5673'
connectionfactory.TopicConnectionFactory2 = amqp://admin:admin@clientid/carbon?brokerlist='tcp://<ip of GW_T>:5674'

5673 => 5672 + portOffset of GM_T

5674 => 5672 + portOffset of GW_T

2.2.5 Add event publishers to publish data to related JMS Topics.

(Do this for both the nodes.) We have to publish data from Traffic manager component to TopicConnectionFactory in its own and other node too. So there should be 2 jmsEventPublisher files in the <APIM_HOME>/repository/deployment/server/eventPublishers/ for that.

There is already a,<APIM_HOME>/repository/deployment/server/eventPublishers/jmsEventPublisher.xml file in the default pack. In it, update the ConnectionFactoryJNDIName as below.

And that is it. You are done. :-)

Lakshman UdayakanthaSend user attributes to web app using SAML SSO in WSO2 Application Server 6.0.0

WSO2 Application Server 6.0.0 is pure Tomcat plus libraries to support the WSO2 platform. It includes a utility component to host web apps with SAML SSO using WSO2 Identity Server. You can find more information on how to set up SAML SSO in the Accessing Applications using Single Sign-On [1] article. Suppose a developer wants to send some user attributes to web apps for authorization purposes; the aforementioned utility component facilitates that as well. Follow the steps below to send user attributes to web apps.

1. Follow the article[1] to setup SAML SSO between web apps.

2. I am working with the samples provided in the SAML SSO setup tutorial [1]. Let's say I want to send some user attributes to the music store app.
For that, go to the Identity Server carbon console, go to the created musicstore-app service provider configuration, expand Inbound Authentication Configuration and click the edit button of the musicstore-app issuer.


3. Click the Enable Attribute Profile tick box and the Include Attributes in the Response Always tick box.

4. Click update button at bottom.

5. If you are redirected to the issuer listing page, click the musicstore-app edit button again and expand the Claim Configuration section.

6. Click the Add Claim URI link and add the user attributes you want. I have added the organization, country, email address and role attributes.

7. Now make sure your user has these claims (i.e. user attributes). If the user doesn't have these claims, those attributes will not be sent to the web app. I have the admin user here, so I can check whether the admin user has these claims by browsing the admin user profile. To browse the user profile, follow this menu sequence:

Main -> Users And Roles -> List -> Users

Now click the user profile link of relevant user to browse the profile. In my case I click the user profile button of admin user.

8. I have added details in profile as below. These are the needed attribute values.

The admin user has the default roles; if you want, you can change those roles as well. Now click the Update button at the bottom.

9. Now everything is set up properly, and you should be able to access those attributes in the music store web app.
ex : If you want to access the roles the user has, use the code below.


By calling the above code you can get a JSON string like this.
{ "saml2SSO": { "subjectId": "admin@carbon.super", "responseString": ..., "assertionString": ..., "sessionIndex": "3f2c29c4-6346-4824-a7cf-128b978f2e7a", "subjectAttributes": { "": ["wso2"], "": ["Sri Lanka"], "": [""], "": ["Application/musicstore-app", "Application/bookstore-app", "Internal/everyone", "admin"] } }}

As you can see, there is an element called subjectAttributes, and it is filled with the user attribute data we entered earlier. You can parse this JSON and utilize those values.
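
As an illustration of that last point (this is my own sketch, not from the post), the org.json library can be used to pull out, say, the role list, assuming org.json is on the classpath; the claim URI used as the key below is an assumption and should match the role claim you configured:

import org.json.JSONArray;
import org.json.JSONObject;

public class SsoAttributeReader {

    // Assumed claim URI for roles; use the claim URI you added in the claim configuration.
    private static final String ROLE_CLAIM = "http://wso2.org/claims/role";

    public static void printRoles(String ssoJson) {
        // Navigate saml2SSO -> subjectAttributes in the JSON returned by the SSO agent.
        JSONObject subjectAttributes = new JSONObject(ssoJson)
                .getJSONObject("saml2SSO")
                .getJSONObject("subjectAttributes");
        JSONArray roles = subjectAttributes.getJSONArray(ROLE_CLAIM);
        for (int i = 0; i < roles.length(); i++) {
            System.out.println("Role: " + roles.getString(i));
        }
    }
}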


Thusitha Thilina Dayaratne

Build MSF4J with WSO2 Carbon Kernel 5.1.0

MSF4J supports standalone mode as well as OSGi mode. Unlike other WSO2 products, MSF4J is not released as a typical server, but you can build your own server by installing the MSF4J feature into the Carbon Kernel.
You can use the pom-based approach to install the MSF4J feature into Kernel 5.1.0.


Copy the following pom.xml and carbon.product files to some directory.
Run “mvn clean install” inside the directory.
If the build is successful, you can find the built product inside the target folder. You can run the server by executing the wso2carbon-kernel-5.1.0/bin/ file in the target directory.

Add a Microservice to the Server

You can build the stockquote bundle sample as mentioned in the sample documentation, copy the bundle to the wso2carbon-kernel-5.1.0/osgi/dropins directory, and start the server.
To invoke the deployed service, use the curl command below:
curl http://localhost:9090/stockquote/IBM

Thusitha Thilina Dayaratne

MSF4J will Support JAX-RS Sub-Resource and Running Multiple Microservice Runners

WSO2 MSF4J will support JAX-RS sub-resources and running multiple microservice runners in its next release, along with some other great features.

JAX-RS Sub-Resource

You can try out sub-resources with the latest MSF4J snapshot version. The MSF4J sub-resource sample demonstrates how to use sub-resources with MSF4J.
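To give an idea of the feature, here is a minimal JAX-RS sub-resource sketch (the class and path names are made up for illustration; refer to the official sample for the exact supported usage). A resource method without an HTTP method annotation acts as a sub-resource locator, and the remainder of the request path is dispatched to the object it returns:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/store")
public class StoreResource {

    // No HTTP method annotation: this is a sub-resource locator.
    // A request such as GET /store/items/10 is dispatched to ItemResource.
    @Path("/items/{id}")
    public ItemResource item(@PathParam("id") int id) {
        return new ItemResource(id);
    }
}

class ItemResource {

    private final int id;

    ItemResource(int id) {
        this.id = id;
    }

    @GET
    public String get() {
        return "item-" + id;
    }
}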

Running Multiple MicroservicesRunners

In previous MSF4J versions, only a single MicroservicesRunner was supported per JVM. With the next release you can run several MicroservicesRunners in a single JVM.
The following code segment shows a simple sample of the usage.
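Since the release is not out yet, the snippet below is only a sketch of what that usage could look like, based on the existing MicroservicesRunner API (the two services here are trivial placeholders):

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.wso2.msf4j.MicroservicesRunner;

public class Application {

    @Path("/hello")
    public static class HelloService {
        @GET
        public String hello() {
            return "hello";
        }
    }

    @Path("/bye")
    public static class ByeService {
        @GET
        public String bye() {
            return "bye";
        }
    }

    public static void main(String[] args) {
        // Two independent runners in the same JVM, each bound to its own port.
        new MicroservicesRunner(8080).deploy(new HelloService()).start();
        new MicroservicesRunner(8081).deploy(new ByeService()).start();
    }
}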
Try out the new features in MSF4J and provide your feedback. Also, please raise issues in JIRA/GitHub if you encounter anything.

Isura KarunaratnePassword History Extension for WSO2 Identity Server 5.2.0

WSO2 Identity Server 5.2.0 was released last month (September 2016). You can download Identity Server 5.2.0 from here.

It supports a lot of Identity and Access Management features OOTB, and you can find them here.
Currently, Identity Server 5.2.0 does not support a password history validation feature OOTB. (This feature will be supported OOTB in the next release, which is planned for December 2016.)
Although this feature is not supported OOTB, it can be added easily through an extension. In this blog, I have implemented a sample which supports the following features for IS 5.2.0.

  • Password cannot have been used in previous 'n' password changes
  • Password cannot have been previously used in past 'm' hours. 

Here the 'n' and 'm' should be configurable parameters. 
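To make the two rules concrete, here is a rough, illustrative sketch of the kind of check such an extension performs when a user updates the password. This is not the actual extension code; the history store, its ordering and the hashing are simplified assumptions that mirror the 'n', 'm' and hashing algorithm parameters described here.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

public class PasswordHistoryCheck {

    // Illustrative history entry: the stored hash plus the time the password was set.
    public static class HistoryEntry {
        final String hash;
        final long changedAtMillis;

        public HistoryEntry(String hash, long changedAtMillis) {
            this.hash = hash;
            this.changedAtMillis = changedAtMillis;
        }
    }

    /**
     * Returns true if the new password is allowed, i.e. its hash does not match any of the
     * last 'count' entries (the 'n' above) or any entry used within the last 'timeHours'
     * hours (the 'm' above). The history list is assumed to be ordered newest-first.
     */
    public static boolean isAllowed(String newPassword, List<HistoryEntry> history,
                                    int count, int timeHours, String algorithm) throws Exception {
        String newHash = hash(newPassword, algorithm);
        long cutoff = System.currentTimeMillis() - timeHours * 3600_000L;
        for (int i = 0; i < history.size(); i++) {
            HistoryEntry entry = history.get(i);
            boolean inLastN = i < count;
            boolean inTimeWindow = entry.changedAtMillis >= cutoff;
            if ((inLastN || inTimeWindow) && entry.hash.equals(newHash)) {
                return false;
            }
        }
        return true;
    }

    private static String hash(String value, String algorithm) throws Exception {
        byte[] digest = MessageDigest.getInstance(algorithm)
                .digest(value.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}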

You can go through following steps to add password history feature in IS 5.2.0.
  1. Download Identity Server 5.2.0 from here
  2. Go through the installation guide and install Java and Maven.
  3. Download the extension source code from here.
  4. Go into the password_history folder and run the command "mvn clean install".
  5. Copy the password_history/target/org.wso2.custom-1.0.0.jar file to the <IS_HOME>/repository/components/dropins folder.
  6. The password_history/src/main/resources/dbScripts directory contains the following DB script files. Run the relevant script based on the database configured in your identity.xml file.
    • db2.sql  
    • informix.sql 
    • mysql.sql    
    • oracle.sql
    • h2.sql   
    • mssql.sql     
    • oracle_rac.sql  
    • postgresql.sql
  7. Copy the password_history/src/main/resources/ file into the <IS_HOME>/repository/conf/Identity directory. It has the following configurable parameters; configure them according to your requirements.
    • #If true, password history feature will be enabled
    • PasswordHistory.Enable=true

    • #Password cannot have been used in the previous 'X' password changes
    • PasswordHistory.Count=5

    • #Password cannot have been previously used in the past 24 hours
    • PasswordHistory.Time=24

    • #Password Digest Algorithm
    • PasswordHistory.hashingAlgorithm=SHA-256

    • #Password History data store extension point
  8. Start Identity Server
  9. Then you are done. You can try the feature by adding a user and updating credentials.

Kamidu Sachith PunchihewaConfigure Device Communication when using an existing SSL Certificate with Enterprise Mobility Manager

The two main components of WSO2 Enterprise Mobility Manager are mobile device management and mobile application management. Setting up WSO2 EMM can be done by following the “Getting Started” guide mentioned in the documentation. This article mainly focuses on how to set up the certificate configuration for your own domain.

Enrolled devices and WSO2 Enterprise Mobility Manager communicate using the HTTPS protocol. This is to make sure that the private and sensitive data stored on the mobile device cannot be retrieved by a third party or unauthorized persons. All the communication carried out between devices, APNS and the EMM server is based on certificates included in the keystore files with the extension “jks”. These security features are critical since EMM supports both corporate-owned (COPE) and personal (BYOD) device management. The “Configuring the product” guide provides the steps to configure the EMM server for use in your local subnet, where the server and the devices use an SSL certificate issued by the built-in Certificate Authority of the EMM server.

Communications between devices and EMM server

WSO2 EMM server consists of the following components:
  • SCEP server component.
  • CA server component.
  • Device Management Component.

The iOS device acts as a SCEP client and sends the SCEP request to the server. For enrollment purposes, this communication requires a certificate, which will be generated by the CA server component of EMM. The iOS device generates a private/public key pair and sends a certificate signing request to the CA, where the CA component generates the public key certificate and stores the public key for encryption, to be used later.

There is communication between iOS devices and APNS, as well as between Android devices and GCM, for policy monitoring and performing operations. All the devices communicate with the EMM server using the agent applications. All these communications must be secured using certificates.
You can see the communication flow in Figure 1 given below.

In order to provide secure communication between the components represented in Figure 1, you have to obtain an SSL certificate for your domain from a Certificate Authority. When hosted under a public domain, the obtained SSL certificates need to be included in the keystores.

Obtaining an SSL Certificate for your domain

You can follow the “Get SSL on my website” guide for more information on how to obtain SSL certification.

Configuring for iOS device management

Configuring iOS device management and communication is a three-step process:
  1. Obtaining a signed CSR from WSO2.
  2. Configuring the EMM server for iOS device management.
  3. Configuring the iOS client.

Obtaining a signed CSR from WSO2

Create a Certificate Signing Request (CSR) file from the EMM server using your private key. You can use commands given below to generate the CSR file:

openssl genrsa -des3 -out <Your_Private_Key_File> 2048
openssl req -new -key <Your_Private_Key_File> -out <You_CSR_File>

Make sure to create both the Your_Private_Key_File and Your_CSR_File files with the .pem extension.

Provide correct answers to the prompted questions related to your organization and the project. Make sure to provide the actual organization name, as this is a required field. The email address provided should be valid, as it acts as the identification of your CSR request and is used to identify you when the CSR expires. Common Name stands for the fully qualified domain name of your server. Make sure that the information you provide is accurate, since the artifacts provided are bound to the given domain name. iOS devices can only be managed by the server hosted under the provided host name.
You can submit the CSR request using the “Obtain the signed CSR file” form. Make sure to enter the same information as you entered in the CSR request when filling in the form.
You will be provided with the following artifacts, which are required to configure the EMM server to manage iOS devices:
  1. The signed CSR file in .plist format.
  2. Agent source code.
  3. P2 repository, which contains the feature list.

Please refer “Obtaining the Signed CSR File” guide for more information on obtaining a signed CSR file.

Configuring the EMM server for iOS device management

iOS server configuration is a complex and prolonged process which can be described by the following steps. By following these steps in order, you can easily configure the EMM server for iOS device management:

  1. Installing the iOS feature on the EMM server.
  2. Configuring general iOS server settings.
  3. Generating the MDM APNS certificate.

Installing the iOS feature on the EMM server

Start the EMM server in order to install the features from the P2 repository obtained via the CSR request.
Navigate to the Carbon console using <YOUR_DOMAIN>/carbon, go to the Configure tab, and select the Features option from the list.
The iOS-related features will be available in the P2 repository provided to you with the signed CSR. Install all three features given. After the installation of the features is complete, stop the EMM server and proceed to the following location: <EMM_HOME>/repository/conf
You will find a new configuration file, “ios-config.xml”, in that directory. Modify the “iOSEMMConfigurations” accordingly. Please refer to the “Installing WSO2 EMM iOS Features via the P2 Repository” guide for more information.
Configure general iOS server settings.
In order to set up your server for iOS, follow the instructions given in the “General iOS Server Configurations” guide until Step 5.
After completing Step 5, follow the instructions given below:
  • Convert the downloaded ssl certificates from your vendor to .pem files.
openssl x509 -in <RA CRT> -out <RA CERT PEM>
openssl x509 -in your-domain-com-apache.crt -out your-domain-com-apache.pem
openssl x509 -in your-domain-com-ee.crt -out your-domain-com-ee.pem
  • Create a certificate chain with the root and intermediate certifications.
cat your-domain-com-apache.pem your-domain-com-ee.pem >> clientcertchain.pem
cat your-domain-com-apache.crt your-domain-com-ee.crt >> clientcertchain.crt
  • Export the SSL certificate chain file as a PKCS12 file with "wso2carbon" as the alias.
openssl pkcs12 -export -out <KEYSTORE>.p12 -inkey <RSA_key>.key -in clientcertchain.crt -CAfile clientcertchain.pem -name "<alias>"
openssl pkcs12 -export -out KEYSTORE.p12 -inkey ia.key -in clientcertchain.crt -CAfile clientcertchain.pem -name "wso2carbon"

After following the steps above, resume the configuration from Step 7.b of the “General iOS Server Configurations” guide.
Note that Steps 6 and 7.a need to be skipped, since the server configuration mentioned in those steps is for a public domain with already obtained SSL certificates.

Generate the MDM APNS certificate.
Go to the Apple Push Certificates Portal, upload the .plist file provided with the signed CSR from WSO2, and generate the MDM certificate. Follow the instructions given in the “Generate MDM APNS Certificate” guide in order to convert the downloaded certificate to .pfx format.

After completing those instructions, you can proceed with the iOS platform configuration as instructed in the “iOS Platform Configuration” guide.

Configuring Android device management

To enable secure communication between Android devices and your EMM server, please follow the “Android Configurations” guide. You can skip the certificate generation described in Step 1 under “Generating a BKS File” and move to Step 2 directly, since you have already completed the above when configuring the iOS device communication.

Configuring Windows device management

There are no additional configurations needed to enable Windows device management.

Rukshan PremathungaWSO2 APIM Error codes

WSO2 APIM Error codes

  • WSO2 APIM maintains a list of error codes associated with different faulty scenarios, which is very helpful for handling erroneous situations in your environment. In this document I will list the possible error codes that can occur. APIM has APIM-specific error codes for API management related faults.
  • It can also throw Synapse-related error codes for transport-related exceptions. When the APIM gateway faces an erroneous situation, it notifies the client using these error codes by embedding them in the response message.
  • Also, if you have enabled runtime statistics, you will be able to get those errors as events, and in the APIM dashboard you can see the fault-related invocations. Note that in statistics you only get the transport-related errors.
  • APIM Specific error codes and reasons

  • Error code | Error Message | Description
    900900 | Unclassified Authentication Failure | An unspecified error has occurred
    900901 | Invalid Credentials | Invalid authentication information provided
    900902 | Missing Credentials | No authentication information provided
    900905 | Incorrect Access Token Type is provided | The access token type used is not supported when invoking the API. The supported access token types are application and user access tokens. See Access Tokens.
    900906 | No matching resource found in the API for the given request | A resource with the name in the request cannot be found in the API.
    900907 | The requested API is temporarily blocked | The status of the API has been changed to an inaccessible/unavailable state.
    900908 | Resource forbidden | The user invoking the API has not been granted access to the required resource.
    900909 | The subscription to the API is inactive | Happens when the API user is blocked.
    900910 | The access token does not allow you to access the requested resource | Cannot access the required resource with the provided access token. Check the valid resources that can be accessed with this token.
    900800 | Message throttled out | The maximum number of requests that can be made to the API within a designated time period is reached and the API is throttled for the user.
    700700 | API blocked | This API has been blocked temporarily. Please try again later or contact the system administrators.
  • Synapse related error codes and reasons

  • Error code Description
    101000 Receiver input/output error sending
    101001 Receiver input/output error receiving
    101500 Sender input/output error sending
    101501 Sender input/output error receiving
    101503 Connection failed
    101504 Connection timed out (no input was detected on this connection over the maximum period of inactivity)
    101505 Connection closed
    101506 NHTTP protocol violation
    101507 Connection canceled
    101508 Request to establish new connection timed out
    101509 Send abort
    101510 Response processing failed
  • Some of the following Synapse error codes depend on the API's Synapse configuration

  • Error code Description
    303000 Load Balance endpoint is not ready to connect
    303000 Recipient List Endpoint is not ready
    303000 Failover endpoint is not ready to connect
    303001 Address Endpoint is not ready to connect
    303002 WSDL Address is not ready to connect
    309001 Session aware load balance endpoint, No ready child endpoints
    309002 Session aware load balance endpoint, Invalid reference
    309003 Session aware load balance endpoint, Failed session
    303100 A failover occurred in a Load balance endpoint
    304100 A failover occurred in a Failover endpoint
    305100 Indirect endpoint not ready
    401000 Callout operation failed (from the callout mediator)

sanjeewa malalgodaWSO2 API Manager 2.0.0 New Throttling logic Execution Order

In this post I would like to discuss how throttling happens within the throttle handler with the newly added complex throttling for API Manager. This order is very important, and we used this order to optimize runtime execution. Here is the order of execution of the different kinds of policies.

01. Blocking conditions
Blocking conditions will be evaluated first as they are the least expensive check. All blocking conditions are evaluated on a per-node basis. Blocking conditions are just checks of certain conditions, and we don't need to maintain counters across all gateway nodes.

02. Advanced Throttling
If the request is not blocked, then we move to API-level throttling. Here we do throttling at the API level and the resource level. The API-level throttle key will always be the API name, which means we can control requests per API.

03. Subscription Throttling with burst controlling
The next thing is subscription-level API throttling. When you have an API in the store, subscribers will come and subscribe to that API. Whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request comes to the API gateway, we take the application id (which identifies the application uniquely) and the API context + version (which identify the API uniquely) to create the key for subscription-level throttling. That means when subscription-level throttling happens, it always counts requests for the API subscribed to the given application.

04. Application Throttling
Application-level throttling happens at the application level, and users can control the total number of requests coming to all APIs subscribed to a given application. In this case counters are maintained against the application and user combination.

05.Custom Throttling Policies
Users are allowed to define dynamic rules according to specific use cases. This feature will be applied globally across all tenants. System administrative users should define these rules and it will be applied across all the users in the system. When you create a custom throttling policy you can define any policy you like. Users need to write a Siddhi query to address their use case. The specific combination of attributes we are checking in the policy have to be defined as the key (which is called the key template). Usually the key template will include a predefined format and a set of predefined parameters.

Please see the diagram below (drawn by Sam Baheerathan) to understand this flow clearly.


sanjeewa malalgodaHow newly added Traffic Manager place in WSO2 API Manager distributed deployment

In this post I would like to add deployment diagrams for the API Manager distributed deployment and show how it changes after adding the Traffic Manager to it. If you are interested in complex Traffic Manager deployment patterns, you can go through my previous blog posts. Here I will list only the deployment diagrams.

Please see the distributed API Manager deployment diagram below.


Now here is how it looks after adding traffic manager instances to it.


Here is how the distributed deployment looks after adding high availability for the Traffic Manager instances.


Malith MunasingheRunning WSO2 Update Manager periodically through a script

WSO2 Update Manager (WUM) ships a feature that most of the WSO2 community was anticipating: determining the relevant updates and preparing a deployment-ready pack has become easier. Although the client has automated the process, devops still have to trigger the updates manually and prepare the pack. Checking for updates manually can be automated with a simple cron job. What is required is a script which initializes wum and then executes the update.

source /etc/environment
wum init -u <wso2useremail> -p <password>
wum update <product-name>

Running this script through a cron job will connect to the server, check for the latest updates and create a pack in $HOME/.wum-wso2/products/. Also, a summary of the updates will be sent to the email address you have subscribed to WSO2 with.

sanjeewa malalgodaWSO2 API Manager - How Custom Throttling Policies work?

Users are allowed to define dynamic rules according to specific use cases. This feature will be applied globally across all tenants. System administrative users should define these rules and it will be applied across all the users in the system. When you create a custom throttling policy you can define any policy you like. Users need to write a Siddhi query to address their use case. The specific combination of attributes we are checking in the policy have to be defined as the key (which is called the key template). Usually the key template will include a predefined format and a set of predefined parameters.
With the new throttling implementation using WSO2 Complex Event Processor as the global throttling engine, users will be able to create their own custom throttling policies by writing custom Siddhi queries. A key template can contain a combination of allowed keys separated by a colon ":" and each key should start with the "$" prefix. In WSO2 API Manager 2.0.0, users can use the following keys to create custom throttling policies.
  • apiContext,
  • apiVersion,
  • resourceKey,
  • userId,
  • appId,
  • apiTenant,
  • appTenant

Sample custom policy

FROM RequestStream
SELECT userId, ( userId == 'admin@carbon.super'  and apiKey == '/pizzashack/1.0.0:1.0.0') AS isEligible ,
str:concat('admin@carbon.super',':','/pizzashack/1.0.0:1.0.0') as throttleKey
INSERT INTO EligibilityStream;
FROM EligibilityStream[isEligible==true]#window.time(1 min)
SELECT throttleKey, (count(throttleKey) >= 5) as isThrottled group by throttleKey
INSERT ALL EVENTS into ResultStream;
As shown in the above Siddhi query, the throttle key should match the key template format. If there is a mismatch between the key template format and the throttle key, requests will not be throttled.
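To make that matching concrete: if the key template registered with the policy were, say, $userId:$apiContext, the gateway would have to build the throttle key in exactly the same shape. The snippet below is purely illustrative (it is not the gateway code) and mirrors the str:concat() call in the sample query above.

public class ThrottleKeyExample {

    // Illustrative only: builds a throttle key in the same shape as a $userId:$apiContext key template.
    public static String buildThrottleKey(String userId, String apiContext) {
        return userId + ":" + apiContext;
    }

    public static void main(String[] args) {
        String key = buildThrottleKey("admin@carbon.super", "/pizzashack/1.0.0:1.0.0");
        // Prints admin@carbon.super:/pizzashack/1.0.0:1.0.0, matching the throttleKey built in the query.
        System.out.println(key);
    }
}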

Prakhash SivakumarDynamic Scanning with OWASP ZAP for Identifying Security Threats


Danushka FernandoSetting up Single node Kubernetes Cluster with Core OS bare metal

You might already know that there is official documentation to follow to set up a Kubernetes cluster on CoreOS bare metal. But when doing that, especially for a single node cluster, I found some gaps in that documentation [1]. Another reason for this blog post is to get everything into one place. So this blog post will describe how to overcome the issues of setting up a single node cluster.

Installing Core OS bare metal.

You can refer to doc [2] to install CoreOS.

The first thing is about users. Documentation [2] tells you how to create a user without a password; to log in as that user you will need SSH keys. To create a user with a username and password, you can use a cloud-config.yaml file. Here is a sample.

users:
  - name: user
    passwd: $6$SALT$3MUMz4cNIRjQ/Knnc3gXjJLV1vdwFs2nLvh//nGtEh/.li04NodZJSfnc4jeCVHd7kKHGnq5MsenN.tO6Z.Cj/
    groups:
      - sudo
      - docker

Here the value for passwd is a hash. One of the methods below can be used to hash a password [3].

 # On Debian/Ubuntu (via the package "whois")  
mkpasswd --method=SHA-512 --rounds=4096
# OpenSSL (note: this will only make md5crypt. While better than plaintext it should not be considered fully secure)
openssl passwd -1
# Python (change password and salt values)
python -c "import crypt, getpass, pwd; print crypt.crypt('password', '\$6\$SALT\$')"
# Perl (change password and salt values)
perl -e 'print crypt("password","\$6\$SALT\$") . "\n"'

If you are installing this inside a private network (an office or university network) then you may need to set the IP, DNS and so on. Especially for DNS: since it resolves using resolv.conf and that file always gets replaced, you may need to set it up as below.

Create a file in /etc/systemd/network/ with the content below. Replace the values with your network values.


Then restart the network with command below.

 sudo systemctl restart systemd-networkd  

Now your core os installation is ready to install Kubernetes.

Installing Kubernetes on Core OS

The official documentation [1] describes how to install a cluster, but what I will explain is how to create a single node cluster. You can follow the same documentation. When you create the certs, create what's needed for the master node, and then go on and deploy the master node. You will not need the Calico-related steps if you don't specifically need to use Calico with Kubernetes.

On CoreOS, Kubernetes is installed as a service named kubelet. So what you define is the service definition and the supporting manifest files for the service. There are four components of Kubernetes which are configured as manifests inside /etc/kubernetes/manifests/:

  1. API server
  2. Proxy
  3. Controller Manager
  4. Scheduler
All these four components will start as pods / containers inside the cluster.

Apart from these four configuration you have configured the kubelet service as well.

But with only these configurations, if you try to create a pod it will not get created; it will actually fail to schedule, because you don't have a node available to schedule on in the cluster. Usually masters don't schedule pods, which is why in this documentation scheduling on the master is set to false. To turn on scheduling, just edit the service definition file /etc/kubernetes/system/kubelet.service and change --register-schedulable=false to --register-schedulable=true.

Now you will be able to schedule the pods in this node.

Configuring to use registry.

The next step is configuring a registry. If you have already used Docker on another OS, then you should know that adding an insecure registry is done using DOCKER_OPTS. One way to configure DOCKER_OPTS on CoreOS is to add it to the /run/flannel_docker_opts.env file, but it would be overridden when the server is restarted. For both insecure and proper registries, use the method explained in [4].


sanjeewa malalgodaWSO2 API Manager - How Subscription Throttling with burst controlling works?

The next thing is subscription-level API throttling. When you have an API in the store, subscribers will come and subscribe to that API. Whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request comes to the API gateway, we take the application id (which identifies the application uniquely) and the API context + version (which identify the API uniquely) to create the key for subscription-level throttling. That means when subscription-level throttling happens, it always counts requests for the API subscribed to the given application.

Up to API Manager 1.10, this subscription-level throttling was allowed on a per-user basis. That means if multiple users use the same subscription, each of them can have their own copy of the allowed quota, and it becomes unmanageable at some point as the user base grows.

Also, when you define advanced throttling policies, you can define a burst control policy as well. This is very important because otherwise one user can consume all the allocated requests within a short period of time, and the rest of the users cannot use the API in a fair way.


sanjeewa malalgodaWSO2 API Manager - How Application Level Throttling Works?

Application-level throttling happens at the application level, and users can control the total number of requests coming to all APIs subscribed to a given application. In this case counters are maintained against the application.


sanjeewa malalgodaWSO2 API Manager - How advanced API and Resource level throttling works?

If the request is not blocked, then we move to API-level throttling. Here we do throttling at the API level and the resource level. The API-level throttle key will always be the API name, which means we can control requests per API.

Advanced API-level policies are applicable at 2 levels (this is not supported from the UI at the moment, but the runtime supports it):
  1. Per user level - All API request counts happen against the user (per user + API combination).
  2. Per API/Resource level - Without considering the user, all counts are maintained per API.

For the moment let's only consider the per-API count, as it's supported OOB. First, API-level throttling will happen, which means if you added some policy when you defined the API, then it will be applicable at the API level.

Then you can also add throttling tiers at the resource level when you create the API. That means for a given resource you will be allowed a certain quota. Even if the same resource is accessed by different applications, it still allows the same amount of requests.


When you design a complex policy you will be able to define it based on multiple parameters such as transport headers, IP addresses, the user agent or any other header-based attribute. When we evaluate this kind of complex policy, the API or resource ID is always picked as the base key. Then it will create multiple keys based on the number of conditional groups in your policy.


sanjeewa malalgodaWSO2 API Manager new throttling - How Blocking conditions work?

Blocking conditions will be evaluated first as they are the least expensive check. All blocking conditions are evaluated on a per-node basis. Blocking conditions are just checks of certain conditions, and we don't need to maintain counters across all gateway nodes. For blocking conditions, we evaluate requests against the following attributes. All these blocking conditions can be added and will be evaluated at the tenant level; that means one tenant cannot block another tenant's requests, etc.
apiContext - If users need to block all requests coming to a given API, then they may use this blocking condition. Here the API context will be the complete context of the API URL.
appLevelBlockingKey - If users need to block all requests coming to some application, then they can use this blocking condition. Here the throttle key is constructed by combining the subscriber name and the application name.
authorizedUser - If we need to block requests coming from a specific user, then this blocking condition can be used. The blocking key will be the authorized user present in the message context.
ipLevelBlockingKey - IP-level blocking can be used when we need to block a specific IP address from accessing our system. This also applies at the tenant level, and the blocking key is constructed using the IP address of the incoming message and the tenant id.
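Purely as an illustration of how those four keys are shaped (this is not the actual gateway code, and the exact internal formats may differ), the descriptions above translate to something like this:

public class BlockingKeyExamples {

    // Block every request to an API: the key is the complete API context.
    public static String apiContextKey(String apiContext) {
        return apiContext;                              // e.g. /pizzashack/1.0.0
    }

    // Block an application: subscriber name combined with the application name.
    public static String appLevelBlockingKey(String subscriberName, String applicationName) {
        return subscriberName + ":" + applicationName;  // e.g. admin:DefaultApplication
    }

    // Block a specific user: the authorized user taken from the message context.
    public static String authorizedUserKey(String username) {
        return username;                                // e.g. admin@carbon.super
    }

    // Block an IP address: applied per tenant, so the tenant id is part of the key.
    public static String ipLevelBlockingKey(String tenantId, String clientIp) {
        return tenantId + ":" + clientIp;               // illustrative combination only
    }
}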


Imesh GunaratneA Reference Architecture for Deploying WSO2 Middleware on Kubernetes

A WSO2 white paper I recently wrote

Kubernetes is the result of over a decade and a half of experience managing production workloads on containers at Google. Google has been contributing to Linux container technologies such as cgroups, lmctfy, and libcontainer for many years, and has been running almost all Google applications on them. As a result, Google started the Kubernetes project with the intention of implementing an open source container cluster management system similar to the one they use in-house, called Borg. Kubernetes provides deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. It can run on any infrastructure and can be used for building public, private, hybrid, and multi-cloud solutions. Kubernetes provides support for multiple container runtimes: Docker, Rocket (rkt), and appc.

Please find it at:

Thanks to Prabath for providing the idea of publishing this on Medium!

Dimuthu De Lanerolle

Publishing API Runtime Statistics Using WSO2 DAS & WSO2 APIM

Please note that for this tutorial I am using APIM 1.10.0 and DAS 3.0.1 versions.

WSO2 API Manager is a complete solution for designing and publishing APIs, creating and managing a developer community, and for securing and routing API traffic in a scalable way. It leverages proven components from the WSO2 platform to secure, integrate and manage APIs. In addition, it integrates with the WSO2 Analytics Platform, and provides out of the box reports and alerts, giving you instant insight into APIs' behavior.

In order to download WSO2 APIM, click here.

WSO2 Data Analytics Server is a comprehensive enterprise data analytics platform; it fuses batch and real-time analytics of any source of data with predictive analytics via machine learning. It supports the demands of not just business, but Internet of Things solutions, mobile and Web apps.

In order to download WSO2 DAS, click here.

Configuring WSO2 APIM

1. Navigate to [APIM_HOME]/repository/conf/api-manager.xml and enable the below section.

<!-- For APIM implemented Statistic client for RDBMS -->


2. Start APIM server. 

3. Log in to the admin dashboard, e.g. https://localhost:9443/admin-dashboard. Select "Configure Analytics" and tick the Enable checkbox. Configure as shown below.

Configuring WSO2 DAS

1. In order to prevent server startup conflicts, we will start DAS with a port offset value of 1.
To do so, navigate to [DAS_HOME]/repository/conf, open the carbon.xml file, and set the <Offset> element (under <Ports>) to 1.


2. Navigate to [DAS_HOME]/repository/conf/datasources/master-datasources.xml and add the below.

  <description>The datasource used for setting statistics to API Manager</description>
  <definition type="RDBMS">
      <validationQuery>SELECT 1</validationQuery>



Prior to adding this config, you need to create a DB called 'TestStatsDB' in your MySQL database and create the required tables. To do so, follow the instructions below.

[1] Navigate to [APIM_HOME]/dbscripts/stat/sql from your Ubuntu console and locate the mysql.sql script. Then, from the MySQL console (with the 'TestStatsDB' database selected), type:

source [APIM_HOME]/dbscripts/stat/sql/mysql.sql

- which will create the required tables for our scenario.

[2] Now navigate to [APIM_HOME]/repository/components/lib and add the required MySQL connector JAR from here.

Create / Publish / Invoke API

1. Navigate to APIM Publisher and create an API. Publish it.

2. Navigate to APIM Store and subscribe to the API.

3. In the Publisher instance select created API and navigate to 'Implement' section and select

            Destination-Based Usage Tracking: enable

4. Invoke the API several times.

View API Statistics

Navigate to APIM publisher instance. Select "Statistics" section. Enjoy !!!

PS:

If you want to do a load test scenario or just play with JMeter, for your easy reference I have attached herewith a sample JMeter script for the 'Calculator API'.

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="2.8" jmeter="2.13 r1665067">
    <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
      <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      <stringProp name="HTTPSampler.domain">localhost</stringProp>
      <stringProp name="HTTPSampler.port">8280</stringProp>
      <stringProp name="HTTPSampler.connect_timeout"></stringProp>
      <stringProp name="HTTPSampler.response_timeout"></stringProp>
      <stringProp name="HTTPSampler.protocol"></stringProp>
      <stringProp name="HTTPSampler.contentEncoding"></stringProp>
      <stringProp name="HTTPSampler.path">/calc/1.0/subtract?x=33&amp;y=9</stringProp>
      <stringProp name="HTTPSampler.method">GET</stringProp>
      <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
      <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
      <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
      <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
      <stringProp name="HTTPSampler.implementation">HttpClient4</stringProp>
      <boolProp name="HTTPSampler.monitor">false</boolProp>
      <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
      <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager" testname="HTTP Header Manager" enabled="true">
        <collectionProp name="HeaderManager.headers">
          <elementProp name="" elementType="Header">
            <stringProp name="">Authorization</stringProp>
            <stringProp name="Header.value">Bearer 2e431777ce280f385c30ac82c1e1f21c</stringProp>

Supun SethungaRunning Python on WSO2 Analytics Servers (DAS)

In the previous post we discussed how to connect a Jupyter notebook to PySpark. Going forward, in this post I will discuss how you can run Python scripts, and analyze and build machine learning models, on top of data stored in WSO2 Data Analytics servers. You may use a vanilla Data Analytics Server (DAS) or any other analytics server such as the ESB Analytics, APM Analytics or IS Analytics servers for this purpose.


  • Install jupyter
  • Download WSO2 Data Analytics Server (DAS) 3.1.0
  • Download and uncompress the Spark 1.6.2 binary.
  • Download pyrolite-4.13.jar

Configure the Analytics Server

In this scenario, the Analytics Server will act as the external Spark cluster, as well as the data source. Hence it is required to start the Analytics server in cluster mode. For that, open <DAS_HOME>/repository/conf/axis2/axis2.xml and enable clustering as follows:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

When the Analytics server starts in a cluster, it creates a Spark cluster as well. (Or, if it is pointed to an external Spark cluster, it will join that external cluster.) The Analytics server also creates a Spark app, which will accumulate all the existing cores in the cluster. But when we connect python/pyspark to the same cluster, it also creates a Spark app, and since no cores are available to run it, it will be in the "waiting" state and will not run. Therefore, to avoid that, we need to limit the amount of resources allocated to the CarbonAnalytics Spark app. In order to do that, open the <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf file, and set/change the following parameters.
carbon.spark.master.count  1

# Worker
spark.worker.cores 4
spark.worker.memory 4g

spark.executor.memory 2g

spark.cores.max 2
Note that, here "spark.worker.cores" (4) is the number of total cores we allocate for spark. And "spark.cores.max" (2) is the number of maximum  cores allocate for each spark application.

Since we are not using a minimum HA cluster in DAS/Analytics Server, we need to set the following property in  <DAS_HOME>/repository/conf/etc/tasks-config.xml file.

Now start the server by navigating to <DAS_HOME> and executing ./bin/wso2server.sh

Once the server is up, to check whether the spark cluster is correctly configured, navigate to the spark master web UI on: http://localhost:8081/. It should show something similar to below.

Note that the number of cores allocated for the worker is 4 (2 used), and the number of cores allocated for the CarbonAnalytics application is 2. Here you can also see the Spark master URL in the top-left corner (spark://...). This URL is used by pyspark and other clients to connect/submit jobs to this Spark cluster.

Now to run a python script on top of this Analytics Server, we have two options:
  • Connect an ipython/jupyter notebook and execute the Python script from the UI.
  • Execute the raw Python script using spark-submit.

Connect Jupyter Notebook (Option I)

Open ~/.bashrc and add the following entries:
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
export PYSPARK_PYTHON=/home/supun/Supun/Softwares/anaconda3/bin/python
export SPARK_HOME="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6"
export PATH="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6/bin:$PATH"
export SPARK_CLASSPATH=/home/supun/Downloads/pyrolite-4.13.jar

When we run a Python script on top of Spark, pyspark will submit it as a job to the Spark cluster (the Analytics server, in this case). Therefore we need to add all the external jars to Spark's classpath, so that the Spark executor knows where to look for the classes at runtime. Thus, we need to add the absolute paths of the jars located in the <DAS_HOME>/repository/libs directory as well as the <DAS_HOME>/repository/components/plugins directory to the Spark classpath, separated by colons (:) as below.
export SPARK_CLASSPATH=/home/supun/Downloads/wso2das-3.1.0/repository/components/plugins/abdera_1.0.0.wso2v3.jar:/home/supun/Downloads/wso2das-3.1.0/repository/components/plugins/ajaxtags_1.3.0.beta-rc7-wso2v1.jar.......

To make the changes take effect, run:
source ~/.bashrc 

Create a new directory to be used as the Python workspace (say "python-workspace"). This directory will be used to store the scripts we create in the notebook. Navigate to that directory and start the notebook, specifying the master URL of the remote Spark cluster as below.
pyspark --master spark:// --conf "spark.driver.extraJavaOptions=-Dwso2_custom_conf_dir=/home/supun/Downloads/wso2das-3.1.0/repository/conf"

Finally navigate to http://localhost:8888/ to access the notebook, and create a new python script by New --> Python 3.
Then check the spark master UI (http://localhost:8081) again. You should see a second application named "PySparkShell" has been started too, and is using the remaining 2 cores. (see below)

Retrieve Data

In the new Python script we created in Jupyter, we can use any Spark Python API. To do Spark operations with Python, we are going to need the SparkContext and the SQLContext. When we start Jupyter with pyspark, it creates a SparkContext by default, which can be accessed using the object 'sc'.
We can also create our own SparkContext with any additional configurations. But to create a new one, we need to stop the existing SparkContext first.
from pyspark import SparkContext, SparkConf, SQLContext

# Set the additional propeties.
sparkConf = (SparkConf().set(key="spark.driver.allowMultipleContexts",value="true").set(key="spark.executor.extraJavaOptions", value="-Dwso2_custom_conf_dir=/home/supun/Downloads/wso2das-3.1.0/repository/conf"))

# Stop the default SparkContext created by pyspark, and create a new SparkContext using the above SparkConf.
sc.stop()
sparkCtx = SparkContext(conf=sparkConf)

# Check the spark master.
print(sparkCtx.master)

# Create a SQL context.
sqlCtx = SQLContext(sparkCtx)

df = sqlCtx.sql("SELECT * FROM table1")

'df' is a spark dataframe. Now you can do any spark operation on top of that dataframe. You can also use spark-mllib and spark-ml packages and build machine learning models as well. You can refer [1] for such a sample on training a Random Forest Classification model, on top of data stored in WSO2 DAS.

Running Python script without jupyter Notebook (Option II)

Other than running Python scripts with the notebook, we can also run a raw Python script directly on top of Spark, using pyspark. For that we can use the same Python script as above, with a slight modification. In the above scenario, there is a default SparkContext ("sc") created by the notebook. But in this case there won't be any such default SparkContext, hence we do not need the sc.stop() snippet (or else it will give errors). Once we remove that line of code, we can save the script with a .py extension. Then run the saved script as below:
<SPARK_HOME>./bin/spark-submit --master spark:// --conf "spark.driver.extraJavaOptions=-Dwso2_custom_conf_dir=/home/supun/Downloads/wso2das-3.1.0/repository/conf"

You can refer [2] for a python script which does the same as the one we discussed earlier.



Lakshani GamageHow to Enable Wire Logs in WSO2 ESB/APIM

You can use the steps below on WSO2 ESB or APIM to enable wire logs.
  1. Uncomment the wire log line in <PRODUCT_HOME>/repository/conf/
  2. Restart Server.

Supun SethungaGenerating OAuth2 Token in WSO2 APIM

First we need to create and publish an API in the APIM Publisher. Then go to the Store, subscribe an application to the API, and get the consumer key and consumer secret. Then execute the following cURL command to generate the OAuth2 token.
curl -k -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic OXFhdUhUSjZoX0pkQUg2aDluOFZXWTV6NXQwYTpNZ0ZqTzQ4bW5OdzhVZHRkd1Bodkx1TDh5bWth" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

Where OXFhdUhUSjZoX0pkQUg2aDluOFZXWTV6NXQwYTpNZ0ZqTzQ4bW5OdzhVZHRkd1Bodkx1TDh5bWth is the base64 encoded value of <consumer_key>:<consumer_secret>
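If you would rather do this from Java than cURL, a small sketch with HttpURLConnection follows. The consumer key/secret values are placeholders, and the snippet assumes the gateway's certificate is already trusted by the JVM (otherwise the HTTPS call will fail):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenClient {

    public static void main(String[] args) throws Exception {
        String consumerKey = "<consumer_key>";       // placeholder
        String consumerSecret = "<consumer_secret>"; // placeholder

        // Build the Basic auth header from base64(<consumer_key>:<consumer_secret>).
        String credentials = Base64.getEncoder()
                .encodeToString((consumerKey + ":" + consumerSecret).getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) new URL("https://localhost:8243/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String body = "grant_type=password&username=admin&password=admin";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            // The response is a JSON payload containing the access token.
            System.out.println(reader.readLine());
        }
    }
}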

The received token can then be used to invoke the published API, passing it in the header in the following format.
                  Authorization : Bearer <token>

Supun SethungaEnabling Mutual SSL for Admin Services in WSO2 IS

When there is a requirement to call secured web services/admin services without using user credentials, mutual SSL can come in handy. What happens here is that the authentication is done using the public certificates/keys. The following steps can be used to enable mutual SSL in WSO2 Identity Server 5.0.0.

  1. Copy org.wso2.carbon.identity.authenticator.mutualssl_4.2.0.jar which is available under resources/dropins directory of the SP1 (WSO2-IS-5.0.0-SP01/resources/dropins/org.wso2.carbon.identity.authenticator.mutualssl_4.2.0.jar) to <IS_HOME>/repository/components/dropins directory.

  2. Open the <IS_Home>/repository/conf/tomcat/catalina-server.xml file. Then set the connector property "clientAuth" to "want".

  3. To enable the Mutual SSL Authenticator, add the following to <IS_HOME>/repository/conf/security/authenticators.xml file.
    <Authenticator name="MutualSSLAuthenticator" disabled="false">
            <Parameter name="UsernameHeader">UserName</Parameter>
            <Parameter name="WhiteListEnabled">false</Parameter>
            <Parameter name="WhiteList"/>
    Note: If you have enable SAML SSO for IS, you need to set a higher priority for MutualSSLAuthenticator than to SAML2SSOAuthenticator.

  4. Extract the WSO2 public certificate from <IS_Home>/repository/resources/security/wso2carbon.jks and add it to the client’s trust store. Then add the client’s public certificate to the Carbon trust store, which can be found in <IS_Home>/repository/resources/security/client-truststore.jks.
    To extract a certificate from wso2carbon.jks
    keytool -export -alias wso2carbon -file carbon_public.crt -keystore wso2carbon.jks -storepass wso2carbon
    To import client's certificate to carbon trust store:
    keytool -import -trustcacerts -alias <client_alias> -file <client_public.crt> -keystore client-truststore.jks -storepass wso2carbon

  5. Now you can call the service by adding the username to either SOAP header or HTTP header as follows.
    Add Soap header:
    <m:UserName soapenv:mustUnderstand="0" xmlns:m="">admin</m:UserName>
    Add HTTP Header:
    UserName : <Base64-encoded-username>
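As a rough illustration of the client side (a SOAP client such as Axis2 would normally be used to call the admin service itself; the keystore paths, passwords and the service URL below are placeholders/assumptions), the moving parts are the client keystore and truststore for the TLS handshake, plus the UserName HTTP header:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MutualSslClient {

    public static void main(String[] args) throws Exception {
        // Client keystore holding the client's private key/certificate (trusted by IS),
        // and truststore holding the IS public certificate extracted in step 4. Paths are placeholders.
        System.setProperty("javax.net.ssl.keyStore", "client-keystore.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "wso2carbon");
        System.setProperty("javax.net.ssl.trustStore", "client-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");

        // Admin service endpoint is a placeholder; fetching the WSDL is just a handshake check.
        URL url = new URL("https://localhost:9443/services/RemoteUserStoreManagerService?wsdl");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // The username goes in the UserName HTTP header, base64 encoded, instead of credentials.
        String user = Base64.getEncoder().encodeToString("admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("UserName", user);

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
    }
}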

Supun SethungaProfiling with Java Flight Recorder

Java profiling can help you to assess the performance of your program, improve your code and identify defects such as memory leaks, high CPU usage, etc. Here I will discuss how to profile your code using the Java built-in utility jcmd and Java Mission Control.

Getting a Performance Profile

A profile can be obtained using both the jcmd and Mission Control tools. jcmd is a command-line-based tool, whereas Mission Control comes with a UI. jcmd is lightweight compared to Mission Control and hence has less effect on the performance of the program/code you are going to profile, so jcmd is preferable for taking a profile. In order to get a profile:

First, find the process id of the running program you want to profile.

Then, unlock commercial features for that process:
jcmd <pid> VM.unlock_commercial_features

Once the commercial features are unlocked, start the recording.
jcmd <pid> JFR.start delay=20s duration=1200s name=rec_1 filename=./rec_1.jfr settings=profile

Here 'delay', 'name' and 'filename' all are optional. To check the status of the recording:
jcmd <pid> JFR.check

Here I have set the recording for 20 mins (1200 sec.). But you can take a snapshot of the recording at any point within that duration, without stopping the recording. To do that:
jcmd <pid> JFR.dump recording=rec_1 filename=rec_1_dump_1.jfr

Once the recording is finished, it will automatically write the output jfr to the file we gave at the start. But  if you want to stop the recording in the middle and get the profile, you can do that by:
jcmd <pid> JFR.stop recording=rec_1 filename=rec_1.jfr  

Analyzing the Profile

Now that we have the profile, we need to analyze it. For that, jcmd itself is not going to be enough; we are going to need Java Mission Control. You can simply open Mission Control and then open your .jfr file with it (drag and drop the .jfr file onto the Mission Control UI). Once the file is open, it will navigate you to the overview page, which usually looks as follows:

Here you can find various options to analyze your code. You can drill down to thread level, class level and method level, and see how the code have performed during the time we record the profile. In the next blog I will discuss in detail how we can identify any defects of the code using the profile we just obtained.

Supun SethungaBasic DataFrame Operations in python


  • Install python
  • Install ipython notebook

Create a directory as a workspace for the notebook, and navigate to it. Start python jupyter by running:
jupyter notebook

Create a new Python notebook. To use Pandas DataFrames in this notebook script, we first need to import the pandas library as follows.
import numpy as np
import pandas as pd

Importing a Dataset

To import a csv file from local file system:
filePath = "/home/supun/Supun/MachineLearning/data/Iris/train.csv"
irisData = pd.read_csv(filePath)

Output will be as follows:
     sepal_length  sepal_width  petal_length  petal_width
0 NaN 3.5 1.4 0.2
1 NaN 3.0 1.4 0.2
2 NaN 3.2 1.3 0.2
3 NaN 3.1 1.5 0.2
4 NaN 3.6 1.4 0.2
5 NaN 3.9 1.7 0.4
6 NaN 3.4 1.4 0.3
7 NaN 3.4 1.5 0.2
8 NaN 2.9 1.4 0.2
9 NaN 3.1 1.5 0.1
10 NaN 3.7 1.5 0.2
11 NaN 3.4 1.6 0.2
12 NaN 3.0 1.4 0.1

Basic Retrieve Operations

Get a single column of the dataset. Say we want to get all the values of the column "sepal_length":
print(irisData["sepal_length"])

Get a multiple column of the dataset. Say we want to get all the values of the column "sepal_length" and "petal_length":
print(irisData[["sepal_length", "petal_length"]])
#Note there are two square brackets.
Get a subset of rows of the dataset. Say we want to get the first 10 rows of the dataset:
print(irisData[0:10])

Get a subset of rows of a column of the dataset. Say we want to get the first 10 rows of the column "sepal_length" of the dataset:
print(irisData["sepal_length"][0:10])

Basic Math Operations

Add a constant to each value of a column in the dataset:
print(irisData["sepal_length"] + 5)

Add two (or more) columns in the dataset:
print(irisData["petal_width"] + irisData["petal_length"])
Here values will be added row-wise. i.e: value in the n-th row of petal_width column, is added to the value in the n-th row of petal_length column.

Similarly, we can do the same for other math operations such as subtraction (-), multiplication (*) and division (/) as well.

Supun SethungaSetting up a Fully Distributed Hadoop Cluster

Here I will discuss how to set up a fully distributed Hadoop cluster with 1 master and 2 slaves. The three nodes are set up on three different machines.

Updating Hostnames

To start things off, let's first give hostnames to the three nodes. Edit the /etc/hosts file with the following command.
sudo gedit /etc/hosts

Add the following hostnames against the IP addresses of all three nodes. Do this on all three nodes.
<ip-of-master>    hadoop.master
<ip-of-slave-1>    hadoop.slave.1
<ip-of-slave-2>    hadoop.slave.2

Once you do that, update the /etc/hostname file to include hadoop.master/hadoop.slave.1/hadoop.slave.2 as the hostname of each of the machines respectively.


For security concerns, one might prefer to have a separate user for Hadoop. In order to create a separate user execute the following command in the terminal:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
Give a desired password..

Then restart the machine.
sudo reboot

Install SSH

Hadoop needs to copy files between the nodes. For that, it should be able to access each node with SSH without having to give a username/password. Therefore, first we need to install the SSH client and server.
sudo apt install openssh-client
sudo apt install openssh-server

Generate a key
ssh-keygen -t rsa -b 4096

Copy the key for each node
ssh-copy-id -i $HOME/.ssh/ hduser@hadoop.master
ssh-copy-id -i $HOME/.ssh/ hduser@hadoop.slave.1
ssh-copy-id -i $HOME/.ssh/ hduser@hadoop.slave.2

Try sshing to all the nodes. eg:
ssh hadoop.slave.1

You should be able to ssh to all the nodes, without proving the user credentials. Repeat this step in all three nodes.

Configuring Hadoop

To configure hadoop, change the following configurations:

Define the Hadoop master URL in <hadoop_home>/etc/hadoop/core-site.xml, on all nodes.

Create two directories, /home/wso2/Desktop/hadoop/localDirs/name and /home/wso2/Desktop/hadoop/localDirs/data (and make hduser the owner, if you created a separate user for Hadoop). Give read/write rights to those folders.

Modify <hadoop_home>/etc/hadoop/hdfs-site.xml as follows, on all nodes.

<hadoop_home>/etc/hadoop/mapred-site.xml (all nodes)

Add the hostname of the master node to the <hadoop_home>/etc/hadoop/masters file, on all nodes.

Add the hostnames of the slave nodes to the <hadoop_home>/etc/hadoop/slaves file, on all nodes.

(Only on the master) We need to format the namenode before we start Hadoop. For that, on the master node, navigate to the <hadoop_home>/etc/hadoop/bin/ directory and execute the following:
./hdfs namenode -format

Finally, start the Hadoop server by navigating to the <hadoop_home>/etc/hadoop/sbin/ directory and executing ./start-dfs.sh

If everything goes well, HDFS should be started, and you can browse the web UI of the namenode at the URL http://localhost:50070/dfshealth.jsp.

Supun SethungaSetting up a Fully Distributed HBase Cluster

This post will discuss how to set up a fully distributed HBase cluster. Here we will not run ZooKeeper as a separate server, but will be using the ZooKeeper which is embedded in HBase itself. Our setup will consist of 1 master node and 2 slave nodes.


  • Update the /etc/hostname file to include hadoop.master, hadoop.slave1, hadoop.slave2 respectively, as the hostnames of the machines.
  • Download HBase 1.2.1.
  • Set up a fully distributed Hadoop cluster [1].
  • Start the Hadoop server.

Configure HBase

First, create a directory for HBase in the Hadoop file system. For that, navigate to <hadoop_home>/bin and execute:
hadoop fs -mkdir /hbase

Do the following configurations in <hbase_home>/conf/hbase-site.xml. Note that here the host and port of "hbase.rootdir" should be the same host and port as Hadoop's, which we gave at the prerequisites step.

<!-- Note that the above should be the same host and port as hadoop's -->

    <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>

Here, hbase.rootdir should be on the namenode; in our case, the master is the namenode. The ZooKeeper quorum should be the slave nodes; this tells which nodes should run ZooKeeper. It is preferred to have an odd number of nodes for ZooKeeper.

Add the hostnames of the slave nodes to <hbase_home>/conf/regionservers, on all nodes except the master/namenode.

Finally, set the following JVM properties in the <hbase_home>/conf/ file. The property HBASE_MANAGES_ZK indicates that HBase manages ZooKeeper, and that no external ZooKeeper server is running.

# To use built in zookeeper
export HBASE_MANAGES_ZK=true

# set java class path
export JAVA_HOME=your_java_home

# Add hadoop-conf directory to hbase class path:
export HBASE_CLASSPATH=$HBASE_CLASSPATH:<hadoop_home>/etc/hadoop

Now all the configurations are complete. We can start the server by navigating to the <hbase_home>/bin directory and executing ./start-hbase.sh

Once the HBase server is up, you can navigate to its master web UI at http://hadoop.master:16010/



Supun SethungaObtain a Heap/Thread Dump

Heap Dump:
jmap -dump:live,format=b,file=<filename>.hprof <PID>

Thread Dump:
jstack <PID> > <filename>
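If you want to trigger the same dumps from inside the JVM (for example from a small debug utility), the standard management APIs can be used as well; a short sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

import javax.management.MBeanServer;

import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpHelper {

    // Writes a heap dump (live objects only) to the given .hprof file.
    public static void heapDump(String file) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(file, true);
    }

    // Prints a thread dump of the current JVM to stdout.
    public static void threadDump() {
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)) {
            System.out.print(info.toString());
        }
    }

    public static void main(String[] args) throws Exception {
        heapDump("heap.hprof");
        threadDump();
    }
}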

Supun SethungaConnecting an IBM MQ to WSO2 ESB

In this article I will discuss how to configure WSO2 ESB 4.8.1 to both read from and write to JMS queues on IBM WebSphere MQ. The following is the scenario.


You need to have IBM MQ installed on your machine (or accessible remotely). Reference [1] contains detailed information on how to install and set up IBM MQ.

Once IBM MQ is installed, create two queues: InputQueue and OutputQueue.

Configuring WSO2 ESB

Copy jta.jar and jms.jar to the repository/components/lib directory, and fscontext_1.0.0.jar to the repository/components/dropins directory. (The .jars can be found in article [1].)

Next, we need to enable the JMS transport on the ESB side. For that, add the following to the <ESB_HOME>/repository/conf/axis2/axis2.xml file.

<transportReceiver class="org.apache.axis2.transport.jms.JMSListener" name="jms">
    <parameter locked="false" name="default">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
    <parameter locked="false" name="myQueueConnectionFactory1">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
</transportReceiver>

Here, java.naming.provider.url is the location where IBM MQ's binding file is located. "transport.jms.UserName" and "transport.jms.Password" refer to the username and password of the login account of the machine on which IBM MQ is installed. Similarly, add the following JMS sender details to the same axis2.xml file.

<transportSender class="org.apache.axis2.transport.jms.JMSSender" name="jms">
    <parameter locked="false" name="default">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
    <parameter locked="false" name="ConnectionFactory1">
        <parameter locked="false" name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter locked="false" name="java.naming.provider.url">file:///home/supun/Supun/JNDIDirectory</parameter>
        <parameter locked="false" name="transport.jms.ConnectionFactoryJNDIName">MyQueueConnFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter locked="false" name="transport.jms.UserName">Supun</parameter>
        <parameter locked="false" name="transport.jms.Password">supun</parameter>
    </parameter>
</transportSender>

Deploying Proxy to Read/Write from/to JMS Queue

Save the above configurations and start the ESB server. Create a new custom proxy as follows.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="JMSProducerProxy" transports="jms" startOnLoad="true" trace="disable">
   <target>
      <inSequence>
         <property name="OUT_ONLY" value="true"/>
         <property name="startTime" expression="get-property('SYSTEM_TIME')" scope="default" type="STRING"/>
         <property name="messagId" expression="//senderInfo/common:messageId" scope="axis2" type="STRING"/>
         <property name="messageType" value="application/sampleFormatReceive" scope="axis2"/>
         <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
         <property name="JMS_IBM_PutApplType" value="2" scope="transport" type="INTEGER"/>
         <property name="JMS_IBM_Encoding" value="785" scope="transport" type="INTEGER"/>
         <property name="JMS_IBM_Character_Set" value="37" scope="transport" type="INTEGER"/>
         <property name="JMS_IBM_MsgType" value="8" scope="transport" type="INTEGER"/>
         <property name="Accept-Encoding" scope="transport" action="remove"/>
         <property name="Content-Length" scope="transport" action="remove"/>
         <property name="User-Agent" scope="transport" action="remove"/>
         <property name="JMS_REDELIVERED" scope="transport" action="remove"/>
         <property name="JMS_DESTINATION" scope="transport" action="remove"/>
         <property name="JMS_TYPE" scope="transport" action="remove"/>
         <property name="JMS_REPLY_TO" scope="transport" action="remove"/>
         <property name="Content-Type" scope="transport" action="remove"/>
         <send>
            <endpoint>
               <address uri="jms:/OutputQueue?transport.jms.ConnectionFactoryJNDIName=MyQueueConnFactory&amp;java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory&amp;java.naming.provider.url=file:///home/supun/Supun/JNDIDirectory&amp;transport.jms.DestinationType=queue&amp;transport.jms.ConnectionFactoryType=queue&amp;transport.jms.Destination=OutputQueue"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">InputQueue</parameter>
</proxy>

This proxy service will read messages from the "InputQueue", and will write them out to the "OutputQueue". As you can see, here I have set some custom properties and removed some other properties from the JMS message. This is done because IBM MQ expects only certain properties; if those are not available, or if there are unexpected properties, it will throw an error.

If you need to change the priority of a message using the proxy, add the following property to the inSequence.
      <property name="JMS_PRIORITY" value="2" scope="axis2"/>
Here "value" is the priority you want to set to the message.



Supun SethungaSearch for file(s) inside zipped folders in Linux

Simply execute the following command in the Linux Terminal.
find . -name "*.zip" -exec less {} \; | grep <file_name>

Supun SethungaAdding WSSE Header to a SOAP Request

Sample SOAP Message with WSSE Header:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:echo="">
   <soapenv:Header>
      <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
         <wsu:Timestamp wsu:Id="Timestamp-13" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"/>
         <wsse:UsernameToken wsu:Id="UsernameToken-14" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
            <wsse:Username>admin</wsse:Username>
            <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">admin</wsse:Password>
         </wsse:UsernameToken>
      </wsse:Security>
   </soapenv:Header>
</soapenv:Envelope>

Supun SethungaBuilding a Random Forest Model with R

Here I will be talking about how to build a Random Forest model for a classification problem with R. There are two packages in R that we can use to train Random Forest models. Here I'll be using the package "randomForest" for my purpose (the other is the "caret" package). Further, we can export our built model as a PMML file, which is a standard way (an XML format) of exporting models from R. I will be using the famous iris data-set for the demonstration.


You need to have R installed. If you are using Linux, then you may install RStudio as well. In Windows, R comes with a GUI, but in Linux, R has only a console to execute commands, so it is much more convenient to have RStudio installed, which provides a GUI for R operations.

Install Necessary Packages.

We are going to need to install some external standard libraries for our requirement. For that, start R and execute the following commands in the console. (You can also create a new script and execute all the commands at once.)
install.packages("randomForest")
install.packages("caret")
install.packages("pmml")
Once the libraries are installed, we need to load them into the run-time:
library(randomForest)
library(caret)
library(pmml)

Prepare Data:

Next, let's import/load the dataset (the iris dataset) into R.
data <- read.csv("/home/supun/Desktop/intersects4.csv",header = TRUE, sep = ",")
Now, we need to split this dataset into two proportions. One set is to train the model, and the remaining is to test/validate the model we built. Here I'll be using 70% for training and the remaining 30% for testing.
#take a partition of 70% from the data
split <- createDataPartition(data$Species, p = 0.7)[[1]]
#create train data set from 70% of data
trainset <- data[ split,]
#create test data set from remaining 30% of data
testset <- data[-split,]
Now we need to pay some special attention to the data types of our dataset. The feature Species of the iris data set is categorical, and the remaining four features are numerical. The Species column has been encoded with numerical values. When R imports the data, since these are numerical values, it will treat all the columns as numerical data. But we need R to treat Species as categorical data, and the remaining columns as numerical. For that, we need to convert the Species column into factors, as follows. Remember to do this only for categorical columns, not for numerical columns. Here I'm creating a new train set and a test set, using the converted data.
trainset2 <- data.frame(as.factor(trainset$Species), trainset$Sepal.Length, trainset$Sepal.Width, trainset$Petal.Length, trainset$Petal.Width)
testset2<- data.frame(as.factor(testset$Species), testset$Sepal.Length, testset$Sepal.Width, testset$Petal.Length, testset$Petal.Width)
# Rename the column headers
colnames(trainset2) <- c("Species","Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")
colnames(testset2) <- c("Species","Sepal.Length","Sepal.Width","Petal.Length","Petal.Width")

Train Model:

Before we start training the model, we need to pay attention to the inputs needed for the randomForest function. Here I'm going to give five input parameters, which are as follows, in order.
  • Formula - formula defining the relationship between the response variable and the predictor variables. (here y~. means variable y is the response variable and everything else are predictor variables)
  • Dataset - dataset to be used for training. This dataset should contain the variables defined in the above equation.
  • importance - Boolean value indicating whether to calculate feature importance or not.
  • ntree - Number of trees to grow. This should not be set to too small a number, to ensure that every input row gets predicted at least a few times. That said, a large number here (say about 100) would result in a very large output model - a few GBs - and it would eventually take a lot of time to export the model as PMML.
  • mtry - Number of variables randomly sampled as candidates at each split. Note that the default values are different for classification (sqrt(p), where p is the number of variables in x) and regression (p/3).
If you need further details on randomForest, execute the following command in R, which will open the help page of the respective function:
?randomForest
Now we need to find the best mtry value for our case. For that, execute the following command, and in the resulting graph, pick the mtry value which gives the lowest OOB error.
bestmtry <- tuneRF(trainset[-1],factor(trainset$Species), ntreeTry=10, stepFactor=1.5,improve=0.1, trace=TRUE, plot=TRUE, dobest=FALSE)
According to the graph, the OOB error is minimized at mtry=2. Hence I will be using that value for the model training step. To train the model, execute the following command. Here I'm training the Random Forest with 10 trees.
model <- randomForest(Species~.,data=trainset2, importance=TRUE, ntree=10, mtry=2)
Let's see how important each feature is to our output model, using the importance() function:
importance(model)
This will produce the following output.
0 1 2 MeanDecreaseAccuracy MeanDecreaseGini
Sepal.Length 1.257262 1.579965 1.985794 2.694172 8.639374
Sepal.Width 1.083289 0 -1.054093 1.085028 2.917022
Petal.Length 6.455716 4.398722 4.412181 6.071185 39.194641
Petal.Width 2.213921 2.045408 3.581145 3.338613 18.181343

Evaluate Model:

Now that we have a model with us, we need to check how good our model is. This is where our test data set comes into play. We are going to make the prediction for the response variable "Species" using the data in the test set, and then compare the actual values with the predicted values.
prediction <- predict(model, testset2)
Let's calculate the confusion matrix to evaluate how accurate our model is.
confMatrix <- confusionMatrix(prediction,testset2$Species)
You will get an output like the following:
Confusion Matrix and Statistics

Prediction 0 1 2
0 13 0 0
1 0 17 0
2 0 0 15

Overall Statistics

Accuracy : 1
95% CI : (0.9213, 1)
No Information Rate : 0.3778
P-Value [Acc > NIR] : < 2.2e-16

Kappa : 1
Mcnemar's Test P-Value : NA

Statistics by Class:

Class: 0 Class: 1 Class: 2
Sensitivity 1.0000 1.0000 1.0000
Specificity 1.0000 1.0000 1.0000
Pos Pred Value 1.0000 1.0000 1.0000
Neg Pred Value 1.0000 1.0000 1.0000
Prevalence 0.2889 0.3778 0.3333
Detection Rate 0.2889 0.3778 0.3333
Detection Prevalence 0.2889 0.3778 0.3333
Balanced Accuracy 1.0000 1.0000 1.0000

As you can see in the confusion matrix, all the values that are NOT on the diagonal of the matrix are zero. This is the best model we can get for a classification problem, with 100% accuracy. In most real world scenarios, it is pretty hard to get this kind of a highly-accurate output. But it all depends on the data-set.
Now that we know our model is highly accurate, let's export this model from R, so that it can be used in other applications. Here I'm using the PMML [2] format to export the model.
RFModel <- pmml(model);
write(toString(RFModel),file = "/home/supun/Desktop/RFModel.pmml");


Supun SethungaTransferring files with vfs and file-connector in WSO2 ESB

Here I will be discussing how to transfer a set of files defined in a document to another location, using a proxy in WSO2 ESB 4.8.1. Suppose the names of the files are defined in the file-names.xml file, which has the following XML structure.
<files>
    <files-set name="files-set-1">
        <file name="img1">image1.png</file>
        <file name="img2">image2.png</file>
    </files-set>
    <files-set name="files-set-2">
        <file name="img3">image6.png</file>
        <file name="img4">image4.png</file>
        <file name="img5">image5.png</file>
    </files-set>
</files>
The procedure I'm going to follow is: read the file names defined in file-names.xml using vfs, and transfer the files using the file connector. First we need to enable vfs in the proxy. Please refer to [1] on how to enable the vfs transport for a proxy. Once vfs is enabled, the proxy should look as follows.
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="FileProxy" transports="vfs" statistics="disable" trace="disable" startOnLoad="true">
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.PollInterval">15</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">file:///some/path/success/</parameter>
   <parameter name="transport.vfs.FileURI">file:///some/path/in/</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">file:///some/path/failure</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*.xml</parameter>
   <parameter name="transport.vfs.ContentType">application/xml</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
   <description/>
</proxy>

Here "FileURI" is the directory location of the file-names.xml. "MoveAfterProcess" is the directory where the file-names.xml should be moved after successfully reading. "MoveAfterFailure" is the location where the file-names.xml file should be moved if some failure occurs during reading it.
Now we need to extract the file names defined in this file-names.xml. For that i'm going to iterate through the file names using iterator mediator [2].
 <iterate preservePayload="true" attachPath="//files/files-set" expression="//files/files-set/file">
    <target>
      <sequence>
        <property name="image" expression="//files/files-set/file"/>
      </sequence>
    </target>
 </iterate>
Finally, move the files using the file connector [3], as follows.
   <property name="fileName" expression="//files/files-set/file"/>
   <property name="fileLocation" value="ftp://some/path/in/"/>
   <property name="newFileLocation" value="ftp://some/path/success/"/>
Here, "fileLocation" refers to the directory in which the files we need to move are located, and "newFileLocation" refers to the directory to which the files should be moved. You can use either a localfile system location as well as ftp location for both these properties.



Supun SethungaUseful xpath expressions in WSO2 ESB

Reading part of the Message:

String Concatenating:

Reading values from Secure Vault:

Reading entities from ESB Registry:
         expression="get-property('registry', 'gov://_system/config/some/path/abc.txt')"

Base64 encoding:

Supun SethungaEnabling Mutual SSL between WSO2 ESB and Tomcat

Import Tomcat's public key to ESB's TrustStore

First we need to create a key-store for tomcat. For that execute the following:
keytool -genkey -alias tomcat -keyalg RSA -keysize 1024 -keystore tomcatKeystore.jks

Export public key certificate from tomcat's key-store:
keytool -export -alias tomcat -keystore tomcatKeystore.jks -file tomcatCert.cer
Import the above exported tomcat's public key to ESB's trust-store:
keytool -import -alias tomcat -file tomcatCert.cer -keystore <ESB_HOME>/repository/resources/security/client-truststore.jks

Import ESB's public key to Tomcat's TrustStore

Export public key certificate from ESB's key-store:
keytool -export -alias wso2carbon -keystore <ESB_HOME>/repository/resources/security/wso2carbon.jks -file wso2carbon.cer
Import the above exported ESB's public key to Tomcat's trust-store. (Here we create a new trust-store for Tomcat.)
keytool -import -alias wso2carbon -file <ESB_HOME>/repository/resources/security/wso2carbon.cer -keystore tomcatTrustStore.jks

Enable SSL in ESB

In the <ESB_HOME>/repository/conf/axis2/axis2.xml file, uncomment the following property in the "<transportReceiver name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">" block.
<parameter name="SSLVerifyClient">require</parameter>

Enable SSL in Tomcat

We need to enable the HTTPS port in Tomcat. By default it is commented out. Hence modify <Tomcat_Home>/conf/server.xml as follows, and point it to the key-store and trust-store.
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"

        maxThreads="150" SSLEnabled="true" scheme="https" secure="true"

        clientAuth="false" sslProtocol="TLS"









               truststoreType="JKS" />

Supun SethungaSeasonal TimeSeries Modeling with Gradient Boosted Tree Regression

Seasonal time series data can be easily modeled with methods such as Seasonal-ARIMA, GARCH and HoltWinters. These are readily available in statistical packages like R and STATA. But if you want to model a seasonal time series using Java, there are only very limited options available. Thus, as a solution, here I will be discussing a different approach, where the time series is modeled in Java using regression. I will be using Gradient Boosted Tree (GBT) Regression from the Spark ML package.


I will use two datasets:

  • Milk-Production data [1]
  • Fancy data [2].

Milk-Production Data

Let's first have a good look at our dataset.

As we can see, the time series is not stationary, as the mean of the series increases over time. In simpler terms, it has a clear upwards trend. Therefore, before we start modeling it, we need to make it stationary.

For this, I'm going to use the mechanism called "differencing". Differencing simply creates a new series from the difference of the t-th term and the (t-m)-th term of the original series. This can be denoted as follows:
   diff(m):             X'(t) = X(t) - X(t-m)

We need to use the lowest m which makes the series stationary. Hence I will start with m=1. So the 1st difference becomes:
diff(1):             X'(t) = X(t) - X(t-1)

This results in a series with (n-1) data points. Let's plot it against time, and see if it has become stationary.

As we can see, there isn't any trend in the series (only the repeating pattern). Therefore we can conclude that it is stationary. If the diff(1) series still shows some trend, then use a higher order differencing, say diff(2). Further, after differencing, if the series has a stationary mean (no trend) but a non-stationary variance (the range of the data changes with time, e.g. dataset [2]), then we need to do a transformation to get rid of the non-stationary variance. In such a scenario, a logarithmic (log10, ln or similar) transformation would do the job.

For our case, we don't need a transformation. Hence we can train a GBT regression model on this differenced data. Before we fit a regression model, we are going to need a set of predictor/independent variables. For the moment we only have the response variable (milk production). Therefore, I introduce four more features to the dataset, which are as follows (a small sketch of generating them is shown after the list):
  • index - Instance number (Starts from 0 for both training and testing datasets)
  • time - Instance number from the start of the series (Starts from 0 for training set. Starts from the last value of training set +1, for the test set )
  • month - Encoded value representing the month of the year. (Ranges from 1 to 12). Note: this is because it is monthly data; you need to add a "week" feature too if it is weekly data.
  • year - Encoded value representing the year. (Starts from 1 and continues for training data. Starts from the last value of year of training set +1, for the test set). 

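As a rough illustration of how these features could be generated (this is my own sketch, not code from the original post; the class, method name and parameters are hypothetical), a plain Java version might look like this:

public class TimeFeatureBuilder {
    // Columns: index, time, month (1..12), encoded year.
    static double[][] buildTimeFeatures(int numRows, int timeOffset, int startMonth, int startYearCode) {
        double[][] features = new double[numRows][4];
        for (int i = 0; i < numRows; i++) {
            features[i][0] = i;                                         // index: starts from 0 in each set
            features[i][1] = timeOffset + i;                            // time: continues from the training set
            features[i][2] = ((startMonth - 1 + i) % 12) + 1;           // month of the year
            features[i][3] = startYearCode + (startMonth - 1 + i) / 12; // encoded year
        }
        return features;
    }
}

For the training set this would be called with timeOffset = 0 and startYearCode = 1; for the test set, with both offsets continued from where the training set ended.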
Once these new features are created, let's train a model:
SparkConf sparkConf = new SparkConf();
sparkConf.set("spark.driver.allowMultipleContexts", "true");
JavaSparkContext javaSparkContext = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(javaSparkContext);

// ======================== Import data ====================================
DataFrame trainDataFrame = sqlContext.read().format("com.databricks.spark.csv")
        .option("inferSchema", "true")
        .option("header", "true")
        .load("<path/to/training/dataset.csv>");

// Get predictor variable names
String[] predictors = trainDataFrame.columns();
predictors = ArrayUtils.removeElement(predictors, RESPONSE_VARIABLE);

// Assemble the predictor columns into a single feature vector
VectorAssembler vectorAssembler = new VectorAssembler();
vectorAssembler.setInputCols(predictors).setOutputCol("features");

GBTRegressor gbt = new GBTRegressor().setLabelCol(RESPONSE_VARIABLE)
        .setFeaturesCol("features");

Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] {vectorAssembler, gbt});
PipelineModel pipelineModel = pipeline.fit(trainDataFrame);

Here I have tuned the hyper-parameters to get the best fitting line. Next, with the trained model, let's use the testing set (which only contains the set of newly introduced variables/features), and predict for the future.
DataFrame predictions = pipelineModel.transform(testDataFrame);
predictions.select("prediction").show(300);

Following is the prediction result:

It shows a very good fit. But the prediction results we get here are for the differenced series. We need to convert them back to the original series to get the actual prediction. Hence, let's inverse-difference the results:

X'(t) = X(t) - X(t-1)
X(t) = X'(t) + X(t-1)

Here X'(t) is the differenced value and X(t-1) is the original value at the previous step. We have an issue for the very first data point in the testing series, as we do not have its t-1 value (X(0) is unknown / doesn't exist). Therefore, I made the assumption that, for the testing series, X(0) is equivalent to the last value (X(n)) of the original training set. With that assumption, following is the result of our prediction:
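A minimal Java sketch of this inverse-differencing step (my own illustration, with the assumption stated above that the first previous value is the last training value X(n)):

public class InverseDifference {
    static double[] invert(double[] diffPredictions, double lastTrainingValue) {
        double[] actual = new double[diffPredictions.length];
        double previous = lastTrainingValue;            // X(t-1) for the very first test point
        for (int i = 0; i < diffPredictions.length; i++) {
            actual[i] = diffPredictions[i] + previous;  // X(t) = X'(t) + X(t-1)
            previous = actual[i];                       // reuse the reconstructed value at the next step
        }
        return actual;
    }
}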

Fancy dataset:

I followed the same procedure for the "Fancy" dataset [2] too. If we look at the data, we can see that the series is not stationary in both the mean and the variance. Therefore, unlike with the previous dataset, we need to do a transformation as well, in addition to differencing. Here I used a log10 transformation after differencing. Similarly, when converting the predictions back to actual values, first I took the inverse log of the predicted value, and then did the inverse-differencing, in order to get the actual prediction. Following is the result:

As we can see, it fits the validation data very closely, though we may increase the accuracy by further tuning the hyper-parameters.



Supun SethungaConnect to MySQL Database Remotely

By default, access to MySQL databases is bound to the server which is running MySQL itself. Hence, if we need to log in to the MySQL console or use a database from a remote server, we need to enable those configurations.

Open /etc/mysql/my.cnf file.
    sudo vim /etc/mysql/my.cnf

Then uncomment "bind-address ="

*Note:  if you cannot find bind-address in the my.cnf file, it can be found in /etc/mysql/mysql.conf.d/mysqld.cnf file.

Restart mysql using:
   service mysql restart;

This will allow logging in to the MySQL server from a remote machine. To test this, go to the remote server, and execute the following.
   mysql -u root -p -h <ip_of_the_sever_running_mysql> 

This will open up the MySQL console of the remote SQL server.

But this will only allow logging in to the MySQL server; here we also want to use/access the databases from (a client app in) our remote machine. To enable access to the databases, we need to grant access for each database.
    grant all on *.* to <your_username>@<ip_of_server_uses_to_connect> identified by '<your_password>'; 

If you want to allow any ip address to connect to the database, use '%' instead of  <ip_of_server_uses_to_connect>.

Supun SethungaBuilding Your First Predictive Model with WSO2 Machine Learner

WSO2 Machine Learner is a powerful tool for predictive analytics on big data. Its outstanding feature is the step-by-step wizard, which makes it easy for anyone to use and to build even advanced models with just a few clicks. (You can refer to the excellent article at [1] for a good read on the capabilities and features of WSO2 ML.)

In this article I will be addressing how to deal with a classification problem with WSO2 Machine Learner. Here I will discuss using WSO2 ML to build a simple Random Forest model for the well known "Iris" flower data set. My ultimate goal is to train a machine learning model to predict the flower "Type" using the rest of the features of the flowers.


  • Download and extract WSO2 Machine Learner (WSO2_ML) 1.0.0 from here.
  • Download the Iris Dataset from here. Make sure to save it in ".csv" format.

Navigate to the <WSO2_ML>/bin directory and start the server using the "./wso2server.sh" command (or wso2server.bat in Windows). Then locate the URL for the ML web UI (not the carbon console) in the terminal log, and open the ML UI (which is https://localhost:9443/ml by default). You will be directed to the following window.

Then log in to the ML UI using "admin" as both the username and password.

Import Dataset

Once you have logged in to the ML UI, you will see the following window. Here, since we haven't uploaded any datasets or created any projects yet, it will show Datasets/Projects as "0".

First thing we need to do is to import the Iris dataset to the WSO2 ML server. For that, click on "Add Dataset", and you will get the following dataset upload form. 

Give a desired name and a description for the dataset. Since we are uploading the dataset file from our local machine, select the "Source Type" as "File". Then browse the file and select it. Choose the data format as "CSV", and select "No" for the column headers, as the csv file does not contain any headers. Once everything is filled, select "Create Dataset". You will be navigated to a new page where the newly created dataset is listed. Refresh the page and click on the dataset name to expand the view.

If the dataset got uploaded correctly, you will see the Green tick.

Explore Dataset

Now let's visually explore the dataset to see what characteristics our dataset holds. Click on the "Explore" button, which can be found in the same tab as the dataset name. You will be navigated to the following window.

WSO2 ML provides various visualization tools such as the Scatter plot, Parallel set, Trellis chart and Cluster diagram. I will not be discussing in detail what each of these charts can be used for, and will address that in a separate article. But for now, let's have a look at the Scatter plot. Here I have plotted PL (Petal Length) against PW (Petal Width), and have colored the points by the "Type". As we can see, "Type" is clearly separated into three clusters of points when plotted against PL and PW. This is good evidence that both PW and PL are important factors in deciding (predicting) the flower "Type". Hence, we should include those two features as inputs to our model. Similarly, you can try plotting different combinations of variables, and see which features/variables are important and which ones are not.

Create a Project

Now that we have imported our data, we need to create a project using the uploaded data, to start working on it. For that, click the "Create Project" button, which is next to the dataset name.

Enter a desired name and a description for the project, and make sure the dataset we uploaded earlier is selected as the dataset (this is selected by default if you create the project as mentioned in the previous step). Then click on "Create Project" to complete the action, which will show you the following window.

Here you will see a yellow exclamation mark next to the project name, mentioning that no analyses are available. This is because, to start analysing our dataset, we need to create an Analysis.

Create Analysis

To create an Analysis, type the name of the analysis you want to create in the text box under the project, and click "Create Analysis". Then you will be directed to the following Preprocess window, which shows the overall summary of the dataset.

Here we can see the data type (Categorical/Numerical) and the overall distribution of each feature. For numerical features a histogram will be displayed, and for categorical features a pie chart will be shown. We can also decide which features are to be included in our analysis ("Include") and how we should treat the missing values of each feature/variable.

As per the data explore step we did earlier, I will be including all the features in my analysis, so all the features will be kept ticked in the "Include" column. And for simplicity, I will be using the "Discard" option for missing values. (This means that if there is a missing value somewhere, that complete row will be discarded.) Once we are done selecting the options we need, let's proceed to the next step by clicking the "Next" button on the top-right corner. We will be redirected again to the data explore step, which is optional at this stage since we are already done with our data exploration. Therefore let's skip this step and proceed to the model building phase by clicking "Next" again.

Build Model

In this step, we can choose which algorithm we are going to use to train our model. As I stated at the beginning, I will be using "Random Forest" as the algorithm, and the flower "Type" as the variable that I want to predict (the response variable) using my model.

We can also define what proportion of the data should be used to train our model and what proportion should be used for validation/evaluation of the built model. As is a common standard, let's use a 0.7 proportion of the data (70%) to train the model. Yet again, click "Next" to proceed to the next step, where we can set the hyper-parameters to be used by the algorithm to build the model.

These hyper-parameters are specific to the Random Forest algorithm. Each of the hyper-parameters represents the following:
  • Num Trees -  Number of trees in the random forest. Increasing the number of trees will decrease the variance in predictions, improving the model’s test-time accuracy. Also training time increases roughly linearly in the number of trees. This parameter value should be an integer greater than 0.
  • Max Depth - Maximum depth of the tree. This helps you to control the size of the tree to prevent overfitting. This parameter value should be an integer greater than 0.
  • Max Bins - Number of bins used when discretizing continuous features. This must be at least the maximum number of categories M for any categorical feature. Increasing this allows the algorithm to consider more split candidates and make fine-grained split decisions. However, it also increases computation and communication. This parameter value should be an integer greater than 0.
  • Impurity -  A measure of the homogeneity of the labels at the node. This parameter value should be either 'gini' or 'entropy' (without quotes).
  • Feature Subset Strategy - Number of features to use as candidates for splitting at each tree node. The number is specified as a fraction or function of the total number of features. Decreasing this number will speed up training, but can sometimes impact performance if too low. This parameter value can take values 'auto', 'all', 'sqrt', 'log2' and 'onethird' (without quotes).
  • Seed - Seed for the random number generator.

Those hyper-parameters have default values in WSO2 ML. Even though they are not optimized for the Iris dataset, they can do a decent enough job for any dataset. Hence I will be using the default values as well, to keep it simple. Click "Next" once more to proceed to the last step of our model building phase, where we will be asked to select a dataset version. This is useful when there are multiple dataset versions for the same dataset. But for now, since we have only one version (the default version), we can keep it as it is.

We are now done with all the processing needed to build our model. Finally, click "Run" on the top-right corner to build the model. Then you will be directed to a page where all the models you built under an analysis are listed. Since we built only one model for now, there will be only one model listed. It also states the current status of the model building process. Initially it will be shown as "In Progress". After a couple of seconds, refresh the page to update the status. Then it should say "Completed" with a green tick next to it.

Evaluate Model

Our next and final step is to evaluate our model to see how well it performs. For that, click on the "View" button on the model, which will take you to a page with the following results.

In this page we can see the model's overall accuracy, the confusion matrix, and a Scatter plot where data points are marked according to their classification status (Correctly classified/ Incorrectly classified). These are generated from the 30% of the data we kept aside at the beginning of our analysis. 

As we can see from the above output, our model has a 95.74% overall accuracy, which is extremely high. The confusion matrix also shows a breakdown of this accuracy. In the ideal scenario, the accuracy should be 100% and all the non-diagonal cells in the confusion matrix should be zero. But this ideal case is far away from real world scenarios, and the accuracy we have got here is very much towards the higher side. Therefore, looking at the evaluation results, we can conclude that our model is a good enough model to predict for future data.


Optionally, you can use this model and predict for new data points from within WSO2 ML itself. This is primarily for testing purposes only. For that, navigate back to the model listing page, and click on "Predict".

Here we have two options: either upload a file and predict for all the rows in the file, or enter a set of values (which represent a single row) and get the prediction for that single instance. I will be using the latter option here. As in the above figure, select "Feature Values", then enter the desired values for the features, and click on "Predict". The output value will be shown right below the predict button. In this case, I got the predicted output as "1" for the values I entered, which means that the flower belongs to Type 1.

For more information on WSO2 Machine Learner, please refer to the official documentation found at [1].


Supun SethungaCreating a Log Dashboard with WSO2 DAS - Part I

WSO2 Data Analytics Server (DAS) can be used to do various kinds of batch data analytics and to create dashboards out of that data. In this blog, I will be discussing how you can create a simple dashboard using the data read from a log file. As the first part, let's read the log file, and load and store the data inside WSO2 DAS.


  • Download WSO2 DAS 3.0.1 [1].
  • A log file. (I will be using a log dumped from WSO2 ESB)

Unzip the WSO2 DAS server, and start the server by running the <DAS_HOME>/bin/wso2server.sh script. Then log in to the management console (default URL https://localhost:9443/carbon) using "admin" as both the username and password.

Create Event Stream

First we need to create an event stream to which we push the data we get from the log files. To create an event stream, in the management console, navigate to Manage→Event→Streams. Then click on Add Event Stream. You will get a form like the one below.

Give a desired name, version and description for your event stream. Under the "Stream Attributes" section, add the payload attributes which you wish to populate with the data coming from the logs.
For example, I want to store the time-stamp, class name, and the log message. Therefore, I create three attributes to store those three pieces of information.

Once the attributes are added, we need to persist this stream to the database; otherwise the data coming to this stream will not be usable in the future. To do that, click on the "Next[Persist Event]" icon. In the resulting form, tick Persist Event Stream and also tick all three attributes, as below.

Once done, click on "Save Event Stream".

Create Event Receiver

Our next step is to create an event receiver to read the log file and push the data into the event stream we just created. For that, navigate to Manage→Event→Receivers and click on Add Event Receiver.

Give a desired name to the receiver. Since we are reading a log file, select the "Input Event Adapter Type" as "file-tail" and set "false" for the "Start From End" property. For "Event Stream", pick the event stream "LogEventsStream" we created earlier, and set the message format to "text".

Next we need to do a custom mapping for the events. In order to do so, click on Advanced. Then create three regex expressions to match the fields you want to extract from the log, and assign those three regex expressions to the three fields we defined in the event stream (see below).

Here the first regex matches the timestamp in the log file, the second regex matches the class name in the log file, and so on. Once done, click on "Add Event Receiver". Then DAS will read your file and store the fields we defined earlier in the database.
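As an illustration only (the exact expressions depend on your log format; the wso2carbon.log style line below is an assumption), three such regexes could look like the following, tested here with plain Java:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogRegexCheck {
    public static void main(String[] args) {
        // A typical carbon-style log line (assumed format).
        String line = "[2016-03-01 10:15:30,123]  INFO - ProxyService Successfully created the proxy service : TestProxy";

        // One regex per event stream attribute: timestamp, class name, log message.
        Pattern timestamp = Pattern.compile("\\[(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3})\\]");
        Pattern className = Pattern.compile("-\\s+(\\S+)\\s");
        Pattern message = Pattern.compile("-\\s+\\S+\\s+(.*)$");

        for (Pattern pattern : new Pattern[] {timestamp, className, message}) {
            Matcher matcher = pattern.matcher(line);
            System.out.println(matcher.find() ? matcher.group(1) : "no match");
        }
    }
}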

View the Data

To see the data stored by DAS, navigate to Manage→Interactive Analytics→Data Explorer in the management console. Pick "LOGEVENTSSTREAM" (the event stream we created earlier), and click on "Search". This should show a table with the information from the log file, as below.

The first phase of creating a log dashboard is done. In the next post I will discuss how you can create the dashboard out of this data.



Supun SethungaCustom Transformers for Spark Dataframes

In Spark, a transformer is used to convert a Dataframe into another. But due to the immutability of Dataframes (i.e. existing values of a Dataframe cannot be changed), if we need to transform the values in a column, we have to create a new column with those transformed values and add it to the existing Dataframe.

To create a transformer we simply need to extend the org.apache.spark.ml.Transformer class, and write our transforming logic inside the transform() method. Below are a couple of examples:

A simple transformer

This is a simple transformer that raises each value of a given column to a given power.

import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.StructType;

public class CustomTransformer extends Transformer {
    private static final long serialVersionUID = 5545470640951989469L;
    String column;
    int power = 1;

    CustomTransformer(String column, int power) {
        this.column = column;
        this.power = power;
    }

    public String uid() {
        return "CustomTransformer" + serialVersionUID;
    }

    public Transformer copy(ParamMap arg0) {
        return null;
    }

    public DataFrame transform(DataFrame data) {
        // Add a new column holding each value of the given column raised to the given power.
        return data.withColumn("power", functions.pow(data.col(this.column), this.power));
    }

    public StructType transformSchema(StructType arg0) {
        return arg0;
    }
}
You can refer to [1] for another similar example.

UDF transformer

We can also register some custom logic as a UDF in the Spark SQL context, and then transform the Dataframe with Spark SQL within our transformer.

Refer to [2] for a sample which uses a UDF to extract part of a string in a column.
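For illustration, a minimal sketch of such a UDF-based transformer is shown below (this is not the exact sample referenced in [2]; the column handling and the toUpper logic are just assumptions):

import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class UpperCaseTransformer extends Transformer {
    private static final long serialVersionUID = 1L;
    private final String column;

    UpperCaseTransformer(String column) {
        this.column = column;
    }

    public String uid() {
        return "UpperCaseTransformer" + serialVersionUID;
    }

    public Transformer copy(ParamMap arg0) {
        return null;
    }

    public DataFrame transform(DataFrame data) {
        // Register the custom logic as a UDF on the DataFrame's SQLContext.
        data.sqlContext().udf().register("toUpper",
                (UDF1<String, String>) s -> s == null ? null : s.toUpperCase(),
                DataTypes.StringType);
        // Apply the registered UDF to create a new, transformed column.
        return data.withColumn(column + "_upper", functions.callUDF("toUpper", data.col(column)));
    }

    public StructType transformSchema(StructType arg0) {
        return arg0;
    }
}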



Supun SethungaAnalytics for WSO2 ESB : Architecture in a Nutshell

ESB Analytics Server is the analytics distribution for WSO2 ESB, which is built on top of WSO2 Data Analytics Server (DAS). Analytics for ESB consists of an inbuilt dashboard for statistics and tracing visualization for Proxy Services, APIs, Endpoints, Sequences and Mediators. Here I will discuss the architecture of the Analytics Server, and how it operates behind the scenes to provide this comprehensive dashboard.

The Analytics Server can operate in three modes:

  • Statistics Mode
  • Tracing Mode
  • Offline Mode

For all three modes, data are published from the ESB server to the Analytics server via the data bridge. In doing so, the ESB server uses the "publisher" component/feature, while the Analytics server uses the "receiver" component/feature of the data bridge. The ESB triggers one event per message flow to the Analytics Server. Each of these events contains the information about all the components that were invoked during the message flow.

If statistics are enabled for a given Proxy/API at the ESB side, then the Analytics server will operate in "Statistics Mode". If tracing and capturing of Synapse properties are also enabled at the ESB side, then the Analytics server will operate in "Tracing Mode". The Analytics server will switch between these modes on the fly, depending on the configurations set at the ESB side.

Statistics Mode

In this mode, the ESB server sends information regarding each mediator, for each message, to the Analytics Server. The Analytics Server calculates summary statistics out of this information, and stores only the summary statistics; it does not store any raw data coming from the ESB. This is a hybrid solution of both Siddhi (WSO2 CEP) and Apache Spark. This mode generates statistics in real time.

Pros: Can handle much higher throughput. Statistics are available in real time.
Cons: No tracing available. Hence any message related info will not be available in the dashboard.

Tracing Mode

Similar to the Statistics mode, the ESB server sends information regarding each mediator, for each message, to the Analytics Server, which calculates summary statistics out of this information. But unlike the previous case, it stores both the statistics as well as the component-wise data. This enables the user to trace any message using the dashboard. More importantly, this mode also allows a user to view statistics and trace messages in real time.

Pros: Statistics and Tracing info are available in real time. Message level details are also available.
Cons: Throughput is limited. It can handle up to around 7000/n events per second, where n is the number of mediators/components in the message flow of the event sent from the ESB (for example, a flow with 10 mediators can be traced at roughly 700 events per second).

Offline Tracing

This mode also allows a user to get statistics as well as tracing, similar to the previous "Tracing Mode". But it operates in an offline/batch analytics mode, unlike the previous scenario. More precisely, the Analytics Server will store all the incoming events/data from the ESB, but will not process them on the fly. Rather, a user can collect data for any period of time, and can run a predefined Spark script in order to get the statistics and tracing details.

Pros: Users can trace messages, and message level details are available. A much higher throughput can be achieved compared to the "Tracing Mode".
Cons: No realtime processing.

Supun SethungaCheck Database size in MySQL

Log in to MySQL with your username and password.
eg: mysql -u root -proot

Then execute the following command:
SELECT table_schema "DB Name", ROUND(SUM(data_length + index_length)/1024/1024, 2) "Size in MBs"
FROM information_schema.tables
GROUP BY table_schema;

Here SUM(data_length + index_length) is in bytes. Hence we have divided it twice by 1024 to convert to Megabytes.

Supun SethungaStacking in Machine Learning

What is stacking?

Stacking is one of the three widely used ensemble methods in Machine Learning and its applications. The overall idea of stacking is to train several models, usually of different algorithm types (aka base-learners), on the training data, and then, rather than picking the best model, aggregate/front all the models using another model (the meta-learner) to make the final prediction. The inputs for the meta-learner are the prediction outputs of the base-learners.

Figure 1

How to Train?

Training a stacking model is a bit tricky, but is not as hard as it sounds. All it requires is steps similar to k-fold cross validation. First of all, divide the original data set into two sets: a train set and a test set. We won't even be touching the test set during the training process of the stacking model. Now we need to divide the train set into k (say 10) folds. If the original dataset contains N data points, then each fold will contain N/k data points. (It is not mandatory to have equal size folds.)

Figure 2

Keep one of the folds aside, and train the base models, using the remaining folds. The kept-aside fold will be treated as the testing data for this step.

Figure 3

Then, predict the values for the remaining fold (the 10th fold), using all of the M models trained. This will result in M predictions for each data point in the 10th fold. Now we have N/10 prediction sets, each with M fields (the predictions coming from the M models), i.e. an (N/10) x M matrix.

Figure 4

Now, iterate the above process by changing the kept-out fold (from 1 to 10). At the end of all the iterations, we will have N prediction result sets, which correspond to each data point in the original training set, along with the actual value of the field we predict.

Data Point # | prediction from base learner 1 | prediction from base learner 2 | prediction from base learner 3 | ... | prediction from base learner M | actual

This will be the input data set for our meta-learner. Now we can train the meta-learner, using any suitable native algorithm, by sending each prediction as an input field and the original value as the output field.
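As a rough sketch of the training loop described above (the Learner interface and its train()/predict() methods are hypothetical placeholders, not a real library API), building the meta-learner's input matrix could look like this:

import java.util.ArrayList;
import java.util.List;

public class StackingTrainer {

    // Hypothetical base-learner abstraction.
    interface Learner {
        void train(double[][] features, double[] labels);
        double predict(double[] features);
    }

    // Build the meta-learner's training matrix: one row per data point,
    // one column per base learner, filled with out-of-fold predictions.
    static double[][] buildMetaFeatures(double[][] x, double[] y, List<Learner> baseLearners, int k) {
        int n = x.length;
        double[][] meta = new double[n][baseLearners.size()];
        for (int fold = 0; fold < k; fold++) {
            List<Integer> trainIdx = new ArrayList<>();
            List<Integer> heldOutIdx = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if (i % k == fold) heldOutIdx.add(i); else trainIdx.add(i);
            }
            for (int m = 0; m < baseLearners.size(); m++) {
                Learner learner = baseLearners.get(m);
                learner.train(rows(x, trainIdx), values(y, trainIdx));
                // Predict only the held-out fold with the model trained on the other folds.
                for (int i : heldOutIdx) {
                    meta[i][m] = learner.predict(x[i]);
                }
            }
        }
        return meta;   // train the meta-learner on 'meta' as inputs and 'y' as the target
    }

    private static double[][] rows(double[][] x, List<Integer> idx) {
        double[][] out = new double[idx.size()][];
        for (int i = 0; i < idx.size(); i++) out[i] = x[idx.get(i)];
        return out;
    }

    private static double[] values(double[] y, List<Integer> idx) {
        double[] out = new double[idx.size()];
        for (int i = 0; i < idx.size(); i++) out[i] = y[idx.get(i)];
        return out;
    }
}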


Once all the base-learners and the meta-learner are trained, prediction follows the same idea as training, except for the k-folds. Simply, for a given input data point, all we need to do is pass it through the M base-learners to get M predictions, and then send those M predictions through the meta-learner as inputs, as in Figure 1.

Supun SethungaConnect iPython/Jupyter Notebook to pyspark


  • Install jupyter
  • Download and uncompress spark 1.6.2 binary.
  • Download pyrolite-4.13.jar

Set Environment Variables

Open ~/.bashrc and add the following entries (PYSPARK_DRIVER_PYTHON=jupyter is assumed here so that pyspark starts a Jupyter notebook as its driver):
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
export PYSPARK_PYTHON=/home/supun/Supun/Softwares/anaconda3/bin/python
export SPARK_HOME="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6"
export PATH="/home/supun/Supun/Softwares/spark-1.6.2-bin-hadoop2.6/bin:$PATH"
export SPARK_CLASSPATH=/home/supun/Downloads/pyrolite-4.13.jar

If you are using external third-party libraries such as spark-csv, then add those jars' absolute paths to the Spark classpath, separated by colons (:) as below.
export SPARK_CLASSPATH=<path/to/third/party/jar1>:<path/to/third/party/jar2>:..:<path/to/third/party/jarN>

To make the changes take effect, run:
source ~/.bashrc 

Get Started

Create a new directory to be used as the python workspace (say "python-workspace"). This directory will be used to store the scripts we create in the notebook. Navigate to that created directory, and run the following to start the notebook.

pyspark

Here spark will start in local mode. You can check the Spark UI at http://localhost:5001

If you need to connect to a remote Spark cluster, then specify the master URL of the remote Spark cluster as below when starting the notebook.
pyspark --master spark://<master_host>:<master_port>

Finally navigate to http://localhost:8888/ to access the notebook.

Use Spark within python

To do Spark operations with python, we are going to need the SparkContext and SQLContext. When we start jupyter with pyspark, it will create a Spark context by default. This can be accessed using the object 'sc'.
We can also create our own Spark context, with any additional configurations. But to create a new one, we need to stop the existing Spark context first.
from pyspark import SparkContext, SparkConf, SQLContext

# Set the additional properties.
sparkConf = (SparkConf().set(key="spark.driver.allowMultipleContexts", value="true"))

# Stop the default SparkContext created by pyspark, and create a new SparkContext using the above SparkConf.
sc.stop()
sparkCtx = SparkContext(conf=sparkConf)

# Check the spark master.
print(sparkCtx.master)

# Create a SQL context.
sqlCtx = SQLContext(sparkCtx)

df = sqlCtx.sql("SELECT * FROM table1")

'df' is a spark dataframe. Now you can do any spark operation on top of that dataframe. You can also use spark-mllib and spark-ml packages and build machine learning models as well.

Imesh GunaratneContainerizing WSO2 Middleware


Deploying WSO2 Middleware on Containers in Production

A heavier car may need more fuel for reaching higher speeds than a car of the same spec with less weight [1]. Sports car manufacturers always adhere to this concept and use lightweight materials such as aluminum [2] and carbon fiber [3] for improving fuel efficiency. The same theory may apply to software systems. The heavier the software components, the higher the computation power they need. Traditional virtual machines use a dedicated operating system instance for providing an isolated environment for software applications. This operating system instance needs additional memory, disk and processing power in addition to the computation power needed by the applications. Linux containers solved this problem by reducing the weight of the isolated unit of execution by sharing the host operating system kernel with hundreds of containers. The following diagram illustrates, in a sample scenario, how many resources containers could save compared to virtual machines:

Figure 1: A resource usage comparison between a VM and a container

The Containerization Process

The process of deploying WSO2 middleware on a container is quite straightforward; it's a matter of following a few steps:

  1. Download a Linux OS container image.
  2. Create a new container image from the above OS image by adding Oracle JDK, product distribution and configurations required.
  3. Export JAVA_HOME environment variable.
  4. Make the entry point invoke the bash script found inside $CARBON_HOME/bin folder.

That’s it! Start a container from the above container image on a container host with a set of host port mappings and you will have a WSO2 server running on a container in no time. Use the container host IP address and the host ports for accessing the services. If HA is needed, create few more container instances following the same approach on multiple container hosts and front those with a load balancer. If clustering is required, either use Well Known Address (WKA) membership scheme [4] with few limitations or have your own membership scheme for automatically discovering the WSO2 server cluster similar to [5]. The main problem with WKA membership scheme is that, if all well known member containers get terminated, the entire WSO2 server cluster may fail. Even though this may look like a major drawback for most high intensity, medium and large scale deployments, it would work well for low intensity, small scale deployments which are not mission critical. If containers go down due to some reason, either a human or an automated process can bring them back to the proper state. This has been the practice for many years with VMs.

Nevertheless, if the requirement is the other way around, and a more high-intensity, completely automated, large scale deployment is needed, a container cluster manager and a configuration manager (CM) would provide additional advantages. Let's see how those can improve the overall productivity of the project and the final outcome:

Configuration Management

Ideally software systems that run on containers need to have two types of configuration parameters according to twelve-factor app [6] methodology:

  1. Product specific and global configurations
  2. Environment specific configurations

The product specific configurations can be burned into the container image itself and environment specific values can be passed via environment variables. In that way, a single container image can be used in all environments. However, currently, Carbon 4 based WSO2 products can only be configured via configuration files and Java system properties; they do not support being configured via environment variables. Nevertheless, if anyone is willing to put some effort into this, an init script can be written to read environment variables at container startup and update the required configuration files. Generally, Carbon 4 based WSO2 middleware have a considerable number of config files and parameters inside them, so this might be a tedious task. According to the current design discussions, in Carbon 5 there will be only one config file in each product and environment variables will be supported OOB.

What if a CM is run in the Container similar to VMs?

Yes, technically that would work, and most people would tend to do this with the experience they have with VMs. However, IMO containers are designed to work slightly differently than VMs. For example, if we compare the time it takes to apply a new configuration or a software update by running a CM inside a container vs starting a new container with the new configuration or update, the latter is extremely fast. It would take around 20 to 30 seconds to configure a WSO2 product using a CM, whereas it would only take a few milliseconds to bring up a new container. The server startup/restart time would be the same in both approaches. Since the container image creation process with layered container images is very efficient and fast, this would work very well in most scenarios. Therefore the total config and software update propagation time would be much faster with the second approach.

Choosing a Configuration Manager

There are many different configuration management systems available for applying configurations at container build time, to name a few: Ansible [7], Puppet [8], Chef [9] and Salt [10]. WSO2 has been using Puppet for many years now and currently uses the same for containers. We have simplified the way we use Puppet by incorporating Hiera for separating configuration data from manifests. Most importantly, the container image build process does not use a Puppet master; instead it runs Puppet in masterless mode (puppet apply). Therefore, even without having much knowledge of Puppet, these can be used easily.

Container Image Build Automation

WSO2 has built an automation process for building container images for WSO2 middleware based on the standard Docker image build process and Puppet provisioning. This has simplified the process of managing configuration data with Puppet, executing the build, defining ports and environment variables, product specific runtime configurations and, finally, optimizing the container image size. WSO2 ships these artifacts [11] for each product release. Therefore it would be much more productive to use these without creating Dockerfiles on your own.

Choosing a Container Cluster Manager

Figure 2: Deployment architecture for a container cluster manager based deployment

The above figure illustrates how most software products are deployed on a container cluster manager in general. The same applies to WSO2 middleware at a high level. At the time this article is being written, there are only a few container cluster managers available. They are Kubernetes, DC/OS, Docker Swarm, Nomad, AWS ECS, GKE and ACE. Out of these, WSO2 has used Kubernetes and DC/OS in production for many deployments.

Figure 3: Container cluster management platforms supported for WSO2 middleware

Strengths of Kubernetes

Kubernetes was born as a result of Google's experience of running containers at scale for more than a decade. It has covered almost all the key requirements of container cluster management in depth. Therefore it's my first preference as of today:

  • Container grouping
  • Container orchestration
  • Container to container routing
  • Load balancing
  • Auto healing
  • Horizontal autoscaling
  • Rolling updates
  • Mounting volumes
  • Distributing secrets
  • Application health checking
  • Resource monitoring and log access
  • Identity and authorization
  • Multi-tenancy

Please read this article for more detailed information on Kubernetes.

Reasons to Choose DC/OS

AFAIU, at the moment DC/OS has fewer features compared to Kubernetes. However, it's a production grade container cluster manager which has been around for some time now. The major advantage I see with DC/OS is the custom scheduler support for Big Data and Analytics platforms. This feature is still not there in Kubernetes. Many major Big Data and Analytics platforms such as Spark, Kafka and Cassandra can be deployed on DC/OS with a single CLI command or via the UI.

The Deployment Process

Figure 4: Container deployment process on a container cluster manager

WSO2 has released the artifacts required for completely automating containerized deployments on Kubernetes and DC/OS. This includes Puppet modules [12] for configuration management, Dockerfiles [11] for building Docker images, container orchestration artifacts [13], [14] for each platform and WSO2 Carbon membership schemes [5], [15] for auto-discovering the clusters. The Kubernetes artifacts include replication controllers for container orchestration and services for load balancing. For DC/OS we have built Marathon applications for orchestration and parameters for the Marathon load balancer for load balancing. These artifacts are used in many WSO2 production deployments.

Please refer to the following presentations for detailed information on deploying WSO2 middleware on each platform:

Documentation on WSO2 Puppet Modules, Dockerfiles, Kubernetes and Mesos Artifacts can be found at [16].


Nipuni PereraJava Collections Framework

This post gives a brief overview of the Java Collections framework. It may not cover each and every implementation, but it covers the most commonly used classes.

Before the Java Collections framework was introduced, arrays, Hashtables and Vectors were the standard ways of grouping and manipulating collections of objects. But these implementations used different methods and syntax for accessing members (arrays used [], Vector used elementAt(), while Hashtable used get() and put()), so you couldn't easily swap one for another. They were too static (changing the size and the type was hard) and had few built-in functions. Further, most of the methods in the Vector class were marked final (so you couldn't extend its behavior to implement a similar sort of collection), and most importantly none of them implemented a standard interface. As programmers developed algorithms (such as sorting) to manipulate collections, they had to decide what object to pass to the algorithm: should they accept an Array or a Vector, or implement both interfaces?

The Java Collections framework provides a set of classes and interfaces to handle collections of objects. Most of the implementations can be found in the java.util package. The Collection interface extends the Iterable interface, which makes Iterable (java.lang.Iterable) one of the root interfaces of the Java collection classes. Thus all subtypes of Collection also implement the Iterable interface. It has only one method: iterator().
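For example (a small sketch using only java.util, not from the original post), every Collection can therefore be walked through the Iterator returned by iterator():

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Iterator;

    public class IteratorSample {
        public static void main(String[] args) {
            Collection<String> items = new ArrayList<String>();
            items.add("first");
            items.add("second");

            // iterator() is the single method declared by Iterable
            Iterator<String> it = items.iterator();
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }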

There are mainly 2 groups in the framework: Collections and Maps.


We have the Collection interface at the top, which is used to pass collections around and manipulate them in the most generic way. All other interfaces such as List and Set extend this and define more specialized collection types.

Figure 1: Overview of Collection
List, Set, SortedSet, NavigableSet, Queue and Deque extend the Collection interface. The Collection interface just defines a set of basic behaviors (methods such as adding and removing) that all of these collection sub-types share.
Both Set and List extend the Collection interface. The Set interface is essentially identical to the Collection interface; it adds no new methods but adds the restriction that duplicate elements are prohibited. The List interface also extends the Collection interface but provides additional accessors that use an integer index into the List. For instance get(), remove(), and set() all take an integer that identifies the indexed element in the list.


The Map interface is not derived from Collection, but provides an interface similar to the methods in java.util.Hashtable.

Figure 2: Map interface
Keys are used to put and get values. The interface has two commonly used concrete implementations, TreeMap and HashMap. TreeMap is a balanced tree implementation that sorts entries by key.


  • The main advantage is that it provides a consistent and regular API. This reduces the effort required to design and implement APIs. 
  • Reduces programming effort - by providing data structures and algorithms so you don't have to write them yourself. 
  • Increases performance -  by providing high performance implementations of data structures and algorithms. As various implementations of each interface are interchangeable, programs can be tuned by switching implementations.  


The two commonly used implementations are LinkedList and ArrayList. Lists allow duplicates in the collection and use indexes to access elements. When deciding between these two implementations, we should consider whether the list is volatile (grows or shrinks often) and whether access to items is random or sequential.

Following are 2 ways of initializing an ArrayList. Which method is correct?

List<String> values = new ArrayList<String>();

ArrayList<String> values = new ArrayList<String>();

Both approaches are acceptable, but the first option is more suitable. The main reason to prefer the first form is that it decouples your code from a specific implementation of the interface and allows you to switch between different implementations of the List interface with ease. For example, a method declared as public List getList() can return any type that implements the List interface. Programming against the interface gives you more abstraction and makes the code more flexible and resilient to change, since you can plug in different implementations.

ArrayList - the most widely used List implementation in Java. We can specify the initial capacity when constructing it; if not, a default capacity is used. We can add and retrieve items using the add() and get() methods. But removing items has to be done with care: internally it maintains an array to store items, so removing the last element is cheap, but removing the first element is slow (the remaining items have to be copied back to fill the gap).

LinkedList - internally maintains a doubly linked list, so each element has a reference to the previous and next element. Retrieving an item by index is therefore slower than with an ArrayList, as the list has to be traversed from one end.

The rule of thumb is: if you mostly add or remove items at the end of your list, it is efficient to use ArrayList; to add or remove items anywhere else it is efficient to use LinkedList.

The sample below shows how the List implementations are used.
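(The original listing does not appear in this feed; the following is a minimal sketch using only java.util, illustrating the access patterns described above.)

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    public class ListSample {
        public static void main(String[] args) {
            // ArrayList: backed by an array, fast random access via get(index)
            List<String> names = new ArrayList<String>();
            names.add("Nipuni");
            names.add("Perera");
            System.out.println(names.get(0));       // cheap indexed read

            // LinkedList: doubly linked nodes, cheap insert/remove at either end
            List<String> queue = new LinkedList<String>();
            queue.add("first");
            queue.add(0, "inserted-at-head");       // cheap for a LinkedList
            queue.remove(0);                        // removes "inserted-at-head"
            System.out.println(queue);
        }
    }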


A Set does not allow duplicates in the collection; adding a duplicate element changes nothing. Sets support mathematical operations such as intersection, union and difference.
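A small illustrative sketch of those operations using HashSet and the bulk methods addAll(), retainAll() and removeAll():

    import java.util.HashSet;
    import java.util.Set;

    public class SetSample {
        public static void main(String[] args) {
            Set<Integer> a = new HashSet<Integer>();
            a.add(1); a.add(2); a.add(3);
            a.add(3);                                // duplicate, silently ignored

            Set<Integer> b = new HashSet<Integer>();
            b.add(2); b.add(3); b.add(4);

            Set<Integer> union = new HashSet<Integer>(a);
            union.addAll(b);                         // union contains 1, 2, 3, 4

            Set<Integer> intersection = new HashSet<Integer>(a);
            intersection.retainAll(b);               // intersection contains 2, 3

            Set<Integer> difference = new HashSet<Integer>(a);
            difference.removeAll(b);                 // difference contains 1

            System.out.println(union + " " + intersection + " " + difference);
        }
    }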


Maps store elements as key-value pairs and can store any kind of object. If you need to use custom objects as keys, you need to implement hashCode() (and equals()) for them. Keys have to be unique: the built-in map.keySet() method returns a Set, which has no duplicates, and if you add different values with the same key, the older value is replaced by the new one.
A plain HashMap does not maintain any iteration order, so you get an effectively arbitrary order when you iterate over it.

Two commonly used implementations are LinkedHashMap and TreeMap. A LinkedHashMap returns entries in the order they were inserted, while a TreeMap sorts its entries by key.
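A small sketch showing that ordering difference (the guaranteed iteration orders are noted in the comments):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.TreeMap;

    public class MapSample {
        public static void main(String[] args) {
            Map<String, Integer> linked = new LinkedHashMap<String, Integer>();
            linked.put("banana", 2);
            linked.put("apple", 5);
            linked.put("cherry", 1);
            System.out.println(linked.keySet());    // insertion order: [banana, apple, cherry]

            Map<String, Integer> tree = new TreeMap<String, Integer>(linked);
            System.out.println(tree.keySet());      // sorted by key: [apple, banana, cherry]
        }
    }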


[1] A very useful set of video tutorials that I found on the Java Collections Framework.

Nipuni PereraClaim Management with WSO2 IS

WSO2 Carbon supports different claim dialects. A claim dialect can be thought of as a group of claims. A claim carries information from the underlying user store.

Claim attributes in user profile info page:

In WSO2 IS, each user attribute is mapped to a claim. If you visit the user profile page for a specific user (Configure --> Users and Roles --> Users --> User Profile), you can view the user's profile data (see Figure 1 below).

Figure 1

As you can see there are mandatory fields (eg: Profile Name), optional fields (eg: Country) and read only fields (eg: Role).
You can add a new user profile field to the above page. If you visit the Claim Management list (Configure --> Claim Management), there is a set of default claim dialects listed in WSO2 IS. Among them is the default dialect for WSO2 Carbon. You can follow the steps below to add a new field to the user profile info page:
  1. Click on the dialect. This will list a set of claim attributes.
  2. Let's say you need to add the attribute "Nick Name" to the user profile page. 
  3. Click on the attribute "Nick Name" and then "Edit". There are a set of fields you can edit. Some important ones are:
    1. Supported by Default - This will add the attribute to the user profile page
    2. Required - This will make the attribute mandatory to fill when updating user profile
    3. Read-only - This will make the attribute read-only 
  4. You can try actions listed above and add any attribute listed in the dialect (or add a new claim attribute using "Add new Claim Mapping" option)
There are some more useful dialects defined in WSO2 IS.

One such dialect is the one defined for OpenID attribute exchange. Attributes defined in this dialect will be used when retrieving claims for user info requests (as I have described in my previous post on "Accessing WSO2 IS profile info with curl").

How to add a value to a claim defined in OpenID dialect?

(This mapping is currently valid for WSO2 IS 5.0.0 and will get changed in a later release)
You can follow the steps below when adding a value to a claim attribute in the OpenID dialect.
  1.  Start WSO2 IS and login.
  2. Go to the WSO2 OpenID claim dialect.
  3. Find a claim attribute that you need to add a value to. (eg: Given Name)
  4. Go to the User Profile page. It will not yet display an entry for the Given Name attribute. 
  5. As described in the first section of this post, add a new claim mapping to the default dialect for WSO2 Carbon with the same name and "Mapped Attribute (s)". (E.g. add a new claim with the following details:)
    1.  Display Name : Given Name
    2.  Claim Uri : given_name
    3.  Mapped Attribute (s) : cn   ----> use the same Mapped Attribute as in your OpenID claim attribute
    4.  Supported by Default : check
    5. Required : check
  6. Now you have a new claim attribute added to the default dialect for WSO2 Carbon
  7. If you visit the user profile page of a user you can add a value to the newly added attribute. 
  8. If you retrieve user info as in "Accessing WSO2 IS profile info with curl" you can see the newly added value retrieved in the format {<Claim Uri> : <given value>}, e.g. {given_name : xxxxx}
Please note that if you still can't see the newly added value when retrieving user info, you may have to restart the server or retry after the cache invalidates (after 15 minutes).

This claim mapping operates as follows:
 > When you add a value to a user profile field via the UI (e.g. adding a value to "Full Name"), the value is stored against the mapped attribute "cn" of the claim.
 > Hence, if any other claim attribute in the OpenID dialect has the same mapped attribute "cn", it will also pick up the value added above.
 > For example, if "Mapping Attribute" = "cn" in the claim attribute "Full Name" of the OpenID dialect, that claim will return the value you entered in the "Full Name" field of the user profile.

Nipuni PereraCreating Docker Image for WSO2 ESB

Docker is a container platform, used for running an application so that the container is isolated from others and runs safely. Docker has a straightforward CLI that allows you to do almost everything you could want to do with a container.
Most of these commands use the image id, image name or the container id, depending on the requirement. The Docker daemon always runs as the root user. Docker has a concept of "base images", which you use to build your containers. After making changes to a container built from a base image, you can save those changes and commit them.
One of Docker's most basic images is called "ubuntu" (which I have used in the sample described in this post).
A Dockerfile provides a set of instructions for Docker to run when building a container image.
For each line in the Dockerfile, a new image layer is produced if that line results in a change to the image. You can create your own images and push them to Docker Hub so that you can share them with others. Docker Hub is a public registry maintained by Docker, Inc. that contains images you can download and use to build containers.

With this blog post, I am creating a Docker image to start a WSO2 ESB server. I have created the Dockerfile below to build my image. 

# NOTE: the archive and script names below are assumed; use the file names of the
# ESB and JDK distributions that you place next to this Dockerfile.
FROM       ubuntu:14.04

RUN apt-get update

RUN apt-get install -y zip unzip

COPY wso2esb-4.8.1.zip /opt

COPY jdk1.8.0_60.zip /opt

WORKDIR "/opt"

RUN unzip wso2esb-4.8.1.zip

RUN unzip /opt/jdk1.8.0_60.zip

ENV JAVA_HOME /opt/jdk1.8.0_60

RUN chmod +x /opt/wso2esb-4.8.1/bin/wso2server.sh

EXPOSE 9443 9763 8243 8280

CMD ["/opt/wso2esb-4.8.1/bin/wso2server.sh"]

FROM ------------> tells Docker which image to base this one on.
RUN   -------------> runs the given command (as user "root") using sh -c "your-given-command"
COPY / ADD   -------------> copies a file from the host machine into the image
WORKDIR ------ > sets the directory from which subsequent commands are run
EXPOSE ----------> exposes a port to the host machine. You can expose multiple ports.
CMD -------------- > runs a command (not using sh -c). This is usually your long-running process; in this case, we are running the ESB startup script. 

Following are some possible errors that you may face while building the Docker image with a Dockerfile: 

Step 1 : FROM ubuntu:14.04

---> 1d073211c498

Step 2 : MAINTAINER Nipuni

---> Using cache

---> c368e39cc306

Step 3 : RUN unzip

---> Running in ade0ad7d1885

/bin/sh: 1: unzip: not found

As you can see, the build has encountered an issue in step 3: "unzip: not found". This is because we need to install all the dependencies in the Dockerfile before using them. The Dockerfile creates an image based on the base image "ubuntu:14.04", which is just a plain Ubuntu image, so you need to install all the required dependencies (in my case, unzip) before using them.

Step 5 : RUN unzip
---> Running in e8433183014c

unzip : cannot find or open, or

The issue is that Docker cannot find the zip file inside the image. I had placed my zip file in the same location as my Dockerfile, but while building the Docker image you need to copy your resources into the image with the "COPY" command.

Step 9 : RUN /opt/wso2esb-4.8.1/bin/

Error: JAVA_HOME is not defined correctly.

CARBON cannot execute java

We need the JAVA_HOME environment variable to be set properly when running WSO2 products. Docker supports setting environment variables with the "ENV" command. You can copy and unzip your JDK archive in the same way as the ESB archive and then set JAVA_HOME. I have added the commands below to my Dockerfile (the JDK archive name here is assumed).

COPY jdk1.7.0_65.zip /opt

RUN unzip /opt/jdk1.7.0_65.zip

ENV JAVA_HOME /opt/jdk1.7.0_65

 After creating the Dockerfile, save it with the name "Dockerfile" in your preferred location and place the ESB and JDK archives in the same location.

You can then build an image from the saved Dockerfile with the command below:

 sudo docker build -t wso2-esb .

 As a result you can see the instructions listed in the Dockerfile running one by one, ending with the line "Successfully built <image-ID>".

 You can view the newly created image with "sudo docker images".

 You can then run your image with the command "sudo docker run -t <Image-ID>". You should be able to see the logs while the WSO2 server starts.

 You can also access the server logs with "sudo docker logs <container-ID>".

Nipuni PereraUsing NoSQL databases

Databases play a vital role when it comes to managing data in applications. RDBMSs (Relational Database Management Systems) are commonly used to store and manage data and transactions in application programming.
As per the design of RDBMSs, there are some limitations when applying them to big, dynamic or unstructured data.
  • An RDBMS uses tables, join operations and references/foreign keys to make connections among tables. It can be costly to handle complex operations that involve multiple tables.
  • It is hard to restructure a table (e.g. each entry/row in the table has the same set of fields); if the data structure changes, the table has to be changed.
In contrast, there are applications that process large-scale, dynamic data (e.g. geospatial data, data used in social networks). Due to the limitations above, an RDBMS may not be the ideal choice for them. 

What is No-SQL?

No-SQL (Not only SQL) refers to non-relational database management systems, which have some significant differences from RDBMSs. No-SQL, as the name suggests, does not use SQL as the query language; JavaScript is commonly used instead, and JSON is frequently used when storing records. 

No-SQL databases have some key features that make them more flexible than an RDBMS:
  1. The database, tables and fields need not be pre-defined before inserting records. If the data structure is not present, the database will create it automatically when inserting data. 
  2. Each record/entry (or row, in RDBMS terms) need not have the same set of fields. We can create fields when creating the records.
  3. Nested data structures (e.g. arrays, documents) are allowed.
Different types of No-SQL data:

  1. Key-Value:
    1. A simple way of storing records with a key (from which we can look up the data) and a value (which can be a simple string or a JSON value), for example:
       1345 -> "{Name: Nipuni, Surname: Perera, Occupation: Software Engineer}"

  2. Graph:
    1. Used when data can be represented as interconnected nodes.     
  3. Column:
    1. Uses a flat table structure similar to RDBMSs, but keys are used in columns rather than in rows. 

  4. Document:
    1. Stored in a format like JSON or XML.
    2. Each document can have a unique structure. (The document type is useful when storing objects and supports OOP.)
    3. Each document usually has a specific key, which can be used to retrieve the document quickly.
    4. Users can query data by the tagged elements. The result can be a String, array, object etc. (I have highlighted some of the tags in the sample document below.)
    5. A sample document that stores personal details may look like this:
       {
         "Name": "Nipuni",
         "Education": [
           { "secondary-education": "University of Moratuwa" },
           { "primary-education": "St. Pauls Girls School" }
         ]
       }
Possible applications for No-SQL
  1. No-SQL is commonly used in web applications that involve dynamic data. As the data type descriptions above show, No-SQL is capable of storing unstructured data, which makes it a powerful candidate for handling big data. 
  2. There are many No-SQL implementations available (e.g. CouchDB, MongoDB) that serve different types of data structures.
  3. No-SQL can be used to retrieve a full record set (which may involve multiple tables when using an RDBMS). E.g. the details of a customer in a financial company may span different levels of information (personal details, transaction details, tax/income details). No-SQL can save all this data in a single entry with a nested data type (e.g. a document), which can then be retrieved as a complete data set without any complex join operation. 
The decision on which scheme to use depends on the requirements of the application. Generally, 

  1. Structured, predictable data can be handled with →  RDBMS
  2. Unstructured, big, complex and rapidly changing data can be managed with → No-SQL (but there are different implementations of No-SQL that provide different capabilities; No-SQL is just a concept for database management systems.)

No-SQL with ACID properties

Relational databases usually guarantee ACID properties. ACID provides a rule set that guarantees transactions are handled while keeping the data safe. How far these properties hold depends on which No-SQL implementation you choose and how much that implementation guarantees the ACID properties.

  • Atomicity - when you make a change to a database, the change should work or fail as a whole. Atomicity is typically guaranteed within document-wide transactions; writes cannot be partially applied to an inserted document.
  • Consistency - the database should remain consistent. Support for this depends on your chosen No-SQL implementation; since No-SQL databases mainly target distributed systems, consistency and availability may not be compatible.

  • Isolation - if multiple transactions are processing at the same time, they shouldn't be able to see each other's intermediate state. There are No-SQL implementations that support read/write locks as an isolation mechanism, but this too depends on the implementation.
  • Durability - if there is a failure (hardware or software), the database needs to be able to pick itself back up. No-SQL implementations support different mechanisms (e.g. MongoDB supports journaling: when you do an insert operation, MongoDB keeps it in memory and writes it to a journal).

Limitations of No-SQL

  1. There are different databases available that use No-SQL; you need to evaluate and find out which fits your requirements the most.
  2. Possibility of duplication of data.
  3. ACID properties may not be supported by all implementations.

I have mainly worked with RDBMSs and have a general idea about the No-SQL concept. There are significant differences between RDBMS and No-SQL database management systems. The choice depends on the requirements of the application and the No-SQL implementation to be used. IMHO the decision should be taken after a proper evaluation of the requirements and of the limitations the system can afford.

Nipuni PereraBehavior parameterization in Java 8 with lambda expressions and functional interfaces

Java 8 is packed with some new features at the language level. In this blog post I hope to give an introduction to behavior parameterization, with samples using lambda expressions. I will first describe a simple scenario, give a solution with Java 7 features, and then improve that solution with Java 8 features.

What is behavior parameterization?

Behavior parameterization is a technique to improve the ability to handle changing requirements by allowing the caller of a function to pass custom behavior as a parameter. Put simply, you can pass a block of code as an argument to another method, which will parameterize its behavior based on the passed code block.

Sample scenario:

Assume a scenario of a company with a set of employees. The management of the company needs to analyze the details of the employees in order to categorize them into a set of groups (e.g. grouped by age, gender, position etc.).

Below is a sample code for categorizing employees based on age and gender with java 7.

Solution 1 - Using java 7
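(The original listing is not included in this feed. Below is a minimal sketch of what the two Java 7 methods described next might look like; the Employee class shown here is an assumption, since the author's version is not shown.)

    import java.util.ArrayList;
    import java.util.List;

    // A minimal Employee bean, assumed for these sketches
    class Employee {
        private final String name;
        private final int age;
        private final String gender;

        Employee(String name, int age, String gender) {
            this.name = name;
            this.age = age;
            this.gender = gender;
        }

        public String getName()   { return name; }
        public int getAge()       { return age; }
        public String getGender() { return gender; }
    }

    public class EmployeeFilters {

        public static List<Employee> filterByAge(List<Employee> inventory) {
            List<Employee> result = new ArrayList<Employee>();
            for (Employee employee : inventory) {
                if (employee.getAge() < 30) {               // the only line that differs
                    result.add(employee);
                }
            }
            return result;
        }

        public static List<Employee> filterByGender(List<Employee> inventory, String gender) {
            List<Employee> result = new ArrayList<Employee>();
            for (Employee employee : inventory) {
                if (gender.equals(employee.getGender())) {  // the only line that differs
                    result.add(employee);
                }
            }
            return result;
        }
    }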

There are two methods, filterByAge() and filterByGender(), which follow the same pattern except for one piece of logic: the condition inside the if statement. If we could parameterize the behavior inside the if block, we could use a single method to perform both filtering operations. This improves the ability to tell a method to take multiple strategies as parameters and apply them internally as required.

Let's try to reduce the code to a single method using anonymous classes. We are still using Java 7; no Java 8 features are used yet.

Solution 2 - Improved with anonymous classes

Instead of maintaining two methods, we have introduced a new method, filterEmployee(), which takes two arguments: an employee inventory and an EmployeePredicate. EmployeePredicate is a custom interface with a single abstract method, test(), which takes an Employee object and returns a boolean. We then use two implementations of the EmployeePredicate interface, written as anonymous classes, to pass the behavior required in each case.
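(Again, the original listing is not shown here; a sketch of that refactoring, reusing the assumed Employee class from the previous sketch, might look like this.)

    import java.util.ArrayList;
    import java.util.List;

    // Custom interface with the single abstract method test()
    interface EmployeePredicate {
        boolean test(Employee employee);
    }

    public class EmployeeFiltersV2 {

        public static List<Employee> filterEmployee(List<Employee> inventory, EmployeePredicate predicate) {
            List<Employee> result = new ArrayList<Employee>();
            for (Employee employee : inventory) {
                if (predicate.test(employee)) {     // the behavior is supplied by the caller
                    result.add(employee);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<Employee> inventory = new ArrayList<Employee>();

            // age-based behavior passed as an anonymous class
            List<Employee> underThirty = filterEmployee(inventory, new EmployeePredicate() {
                public boolean test(Employee employee) {
                    return employee.getAge() < 30;
                }
            });

            // gender-based behavior passed as another anonymous class
            List<Employee> male = filterEmployee(inventory, new EmployeePredicate() {
                public boolean test(Employee employee) {
                    return "male".equals(employee.getGender());
                }
            });
        }
    }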

We have changed our program from solution-1 to solution-2 with following steps:
  1. We have reduced two methods to a single method and improved that method to accept a behavior. (This is an improvement.)
  2. We had to introduce a new custom interface, EmployeePredicate, and use anonymous classes to pass the behavior. (This is not good enough and is verbose. We need to improve it.)

Functional Interfaces

A functional interface is an interface that has only one abstract method (similar to the EmployeePredicate interface we introduced in the previous solution). A functional interface can also have default methods (another new feature introduced in Java 8) as long as it declares a single abstract method.
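As a small illustrative sketch (not from the original post), an interface like the following is still a functional interface even though it carries a default method:

    @FunctionalInterface
    interface Validator {
        boolean validate(String input);              // the single abstract method

        // default methods are allowed; the interface remains functional
        default Validator and(Validator other) {
            return input -> this.validate(input) && other.validate(input);
        }
    }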

Sample functional interfaces:

As per our latest solution with anonymous classes, we had to create our own interface with a method that accepts an object of our choice and returns some output. But Java 8 introduces some generic functional interfaces that we can reuse to pass different behaviors without creating our own interfaces. I have listed some of them below:

  1. The java.util.function.Predicate<T> interface has one abstract method, "test()", that takes an object of type T and returns a boolean.
    1. E.g. as per our scenario, we take an Employee object and return a boolean indicating whether the employee's age is less than 30.
  2. The java.util.function.Consumer<T> interface has one abstract method, "accept()", that takes an object of type T and does not return anything.
    1. E.g. assume we need to print all the details of a given Employee but not return anything. We can use the Consumer interface.
  3. The java.util.function.Function<T,R> interface has one abstract method, "apply()", that takes an object of type T and returns an object of type R.
    1. E.g. assume we need to take an Employee object and return the employee ID as an Integer.

Which functional interface should we use to improve our existing solution for the employee categorizing scenario?
Let's try to use the Predicate functional interface and improve the solution.

Solution 3 - Using java 8 functional interfaces
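(A sketch of the same filterEmployee() method rewritten against java.util.function.Predicate, still called with an anonymous class at this stage and still using the assumed Employee class.)

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    public class EmployeeFiltersV3 {

        public static List<Employee> filterEmployee(List<Employee> inventory, Predicate<Employee> predicate) {
            List<Employee> result = new ArrayList<Employee>();
            for (Employee employee : inventory) {
                if (predicate.test(employee)) {
                    result.add(employee);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            List<Employee> inventory = new ArrayList<Employee>();

            // the custom EmployeePredicate interface is gone, but this is still an anonymous class
            List<Employee> underThirty = filterEmployee(inventory, new Predicate<Employee>() {
                public boolean test(Employee employee) {
                    return employee.getAge() < 30;
                }
            });
        }
    }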

So far we have changed our program from solution-2 to solution-3 with the steps below:
  1. We have removed the customized interface EmployeePredicate and used the existing Predicate functional interface from Java 8. (This is an improvement; we have eliminated an interface.)
  2. We still use anonymous classes. (Still verbose and not good enough.)

Lambda expressions:

We can use a lambda expression anywhere we need to pass an instance of a functional interface. Lambda expressions can represent behavior or pass code similarly to anonymous classes. A lambda can have a list of parameters, a method body and a return type.
We can describe a lambda expression as a combination of 3 parts as below:

  1. List of parameters
  2. An arrow
  3. Method body

Consider the sample lambda expression below:

(Employee employee) -> employee.getAge() < 30

This sample shows how we can pass a behavior to the Predicate interface that we used in solution-3. Let's first analyze the anonymous class we used.

We are implementing the Predicate functional interface, which has a single abstract method. This abstract method takes an object as a parameter and returns a boolean value. In the lambda expression we have used:

  1. (Employee employee)   : the parameters of the abstract method of the Predicate interface
  2. ->                    : the arrow separates the list of parameters from the body of the lambda
  3. employee.getAge() < 30 : the body of the abstract method of the Predicate. The result of the body is a boolean value, hence the above lambda expression returns a boolean.

Sample lambda expressions:
  1. (Employee employee) -> System.out.println("Employee name : " + employee.getName() + "\n Employee ID: " + employee.getID())
    1. This is a possible implementation of the Consumer functional interface, which has a single abstract method that accepts an object and returns void.
  2. (String s) -> s.length()
    1. This is a possible implementation of the Function functional interface, which has a single abstract method that accepts an object and returns another object.
  3. () -> new Integer(10)
    1. This lambda expression fits a functional interface whose single abstract method takes no arguments and returns an Integer.
  4. (Employee employee, Department dept) -> {
         if (dept.getEmployeeList().contains(employee.getID())) {
             System.out.println("Employee : " + employee.getName());
         }
     }
    1. This lambda expression fits a functional interface whose single abstract method takes two object arguments and returns void.

Let's rewrite the solution using lambda expressions.
Solution - 4 (Using lambda expressions)
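(A sketch of the final step, where the anonymous classes are replaced by lambda expressions passed to the same filterEmployee() method from the previous sketch.)

    import java.util.ArrayList;
    import java.util.List;

    public class EmployeeFiltersV4 {
        public static void main(String[] args) {
            List<Employee> inventory = new ArrayList<Employee>();

            // same filterEmployee(List, Predicate) method as before,
            // but the behavior is now a lambda instead of an anonymous class
            List<Employee> underThirty =
                    EmployeeFiltersV3.filterEmployee(inventory, employee -> employee.getAge() < 30);
            List<Employee> male =
                    EmployeeFiltersV3.filterEmployee(inventory, employee -> "male".equals(employee.getGender()));
        }
    }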

Solution-4 can be further improved using method references. I have not discussed method references in this blog post and will not include that solution here.

Kalpa WelivitigodaWUM — WSO2 Update Manager

WUM is a CLI tool from WSO2 which enables you to keep your WSO2 products up to date. WUM sounds like yum and yes, WUM has somewhat similar…

Himasha GurugeXSLT stylesheet template to add a namespace with namespace prefix

If you need to write an XSLT stylesheet and you need to add a namespace to a certain element with a namespace prefix, you can write a template like the one below. It will add the namespace to the <UserRequest> node.

<xsl:template match="UserRequest">
        <!--Define the namespace with namespace prefix ns0 -->
        <xsl:element name="ns0:{local-name()}" namespace="">
            <!--apply to above selected node-->
            <xsl:apply-templates select="node()|@*"/>
        </xsl:element>
</xsl:template>

If you need to add this namespace to <UserRequest> and its child elements, the template match should change as below.

<xsl:template match="UserRequest | UserRequest/*">

Himasha GurugeHandeling namespaces in xpath expressions of WSO2 ESB payload mediator

You can check out the payload factory mediator of WSO2 ESB in the documentation. If you need to provide an XML input that has namespaces (other than the default namespace) and you need to access a node of it in the <args> of the payloadFactory mediator, you can do it like this with XPath.

 <payloadFactory media-type="xml">
   <format key="conf:resources/output.xml"/>
   <args>
      <arg xmlns:ns0="" expression="//ns0:UserRequest"/>
   </args>
 </payloadFactory>

Rajendram KatheesEmail OTP Two Factor Authentication through Identity Server

In this post, I will explain how to use the Email OTP two-factor authenticator with WSO2 Identity Server. In this demonstration, I am using the SMTP mail transport, which is used to send the OTP code via email when authentication happens.

Add the authenticator configuration to the <IS_HOME>/repository/conf/identity/application-authentication.xml file under the <AuthenticatorConfigs> section.

<AuthenticatorConfig name="EmailOTP" enabled="true">
          <Parameter name="GmailClientId">gmailClientIdValue</Parameter>
          <Parameter name="GmailClientSecret">gmailClientSecretValue</Parameter>
          <Parameter name="SendgridAPIKey">sendgridAPIKeyValue</Parameter>    
          <Parameter name="EMAILOTPAuthenticationEndpointURL">emailotpauthenticationendpoint/emailotp.jsp</Parameter>
          <Parameter name="GmailRefreshToken">gmailRefreshTokenValue</Parameter>
          <Parameter name="GmailEmailEndpoint">[userId]/messages/send</Parameter>
          <Parameter name="SendgridEmailEndpoint"></Parameter>
          <Parameter name="accessTokenRequiredAPIs">Gmail</Parameter>
          <Parameter name="apiKeyHeaderRequiredAPIs">Sendgrid</Parameter>
          <Parameter name="SendgridFormData">sendgridFormDataValue</Parameter>
          <Parameter name="SendgridURLParams">sendgridURLParamsValue</Parameter>
          <Parameter name="GmailAuthTokenType">Bearer</Parameter>
          <Parameter name="GmailTokenEndpoint">
          <Parameter name="SendgridAuthTokenType">Bearer</Parameter>

Configure the Service Provider and Identity Provider as we normally do for two-factor authentication. Now we will configure the EmailOTP identity provider for the SMTP transport.

SMTP transport sender configuration.
   Add the SMTP transport sender configuration in the <IS_HOME>/repository/conf/axis2/axis2.xml file.
  Here you need to replace {USERNAME}, {PASSWORD} and {SENDER'S_EMAIL_ID} with real values.

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
       <parameter name=""></parameter>
       <parameter name="mail.smtp.port">587</parameter>
       <parameter name="mail.smtp.starttls.enable">true</parameter>
       <parameter name="mail.smtp.auth">true</parameter>
       <parameter name="mail.smtp.user">{USERNAME}</parameter>
       <parameter name="mail.smtp.password">{PASSWORD}</parameter>
       <parameter name="mail.smtp.from">{SENDER'S_EMAIL_ID}</parameter>

Comment out the <module ref="addressing"/> module in the axis2.xml file in <IS_HOME>/repository/conf/axis2.
Email Template configuration.
Add the email template in the <IS_HOME>/repository/conf/email/email-admin-config.xml file.

    <configuration type="EmailOTP">
           <subject>WSO2 IS EmailOTP Authenticator One Time    Password</subject>
       Please use this OTP {OTPCode} to go with EmailOTP authenticator.
       Best Regards,
       WSO2 Identity Server Team

When authentication happens in the second step, the code is sent to the email address saved in the email claim of the user's user profile.
If the user applies the code, WSO2 IS will validate it and let the user sign in accordingly.

Rajendram KatheesSMS OTP Two Factor Authentication through Identity Server

In this post, I will explain how to use the SMS OTP multi-factor authenticator with WSO2 Identity Server. In this demonstration, I am using the Twilio SMS provider, which is used to send the OTP code via SMS when authentication happens.

SMS OTP Authentication Flow

The SMS OTP authenticator of WSO2 Identity Server allows users to authenticate using multi-factor authentication. This authenticator authenticates with user name and password as a first step, then sends a one-time password to the mobile via SMS as a second step. WSO2 IS validates the code and lets the user sign in accordingly.

Add the authenticator configuration to the <IS_HOME>/repository/conf/identity/application-authentication.xml file under the <AuthenticatorConfigs> section.

<AuthenticatorConfig name="SMSOTP" enabled="true">
<Parameter name="SMSOTPAuthenticationEndpointURL">smsotpauthenticationendpoint/smsotp.jsp</Parameter>
 <Parameter name="BackupCode">false</Parameter>

Configure the Service Provider and Identity Provider as we normally do for two-factor authentication. Now we will configure the SMS OTP identity provider for the Twilio SMS provider.

Go to the Twilio website and create a Twilio account.

While registering the account, verify your mobile number and click on console home to get your free credits (Account SID and Auth Token).

Twilio uses a POST method with headers, and the text message and phone number are sent as the payload. So the fields would be as follows.

SMS URL         https://api.twilio.com/2010-04-01/Accounts/{AccountSID}/SMS/Messages.json
HTTP Method     POST
HTTP Headers    Authorization: Basic base64{AccountSID:AuthToken}
HTTP Payload    Body=$ctx.msg&To=$ctx.num&From={FROM_NUM}

You can go to the SMS OTP identity provider configuration and configure it to send the SMS using the Twilio SMS provider.

Twilio SMS Provider Config

When authentication happens in the second step, the code is sent to the mobile number saved in the mobile claim of the user's user profile.
If the user applies the code, WSO2 IS will validate it and let the user sign in accordingly.

Chanaka CoorayImplement a pax exam test container for your OSGi environment

Pax Exam is a widely used OSGi testing framework. It provides many features to test your OSGi components with JUnit or TestNG tests.


Dilshani SubasingheHow to enable Linux Audit Daemon in hosts where WSO2 carbon run times are deployed

Please find the post from:

Amila MaharachchiMake a fortune of WSO2 API Cloud

For those who do not know about WSO2 API Cloud:

WSO2 API Cloud is the API management solution in the cloud, hosted by WSO2. In other words, this solution is WSO2's API Manager product offered as a service. You can try it for free after reading this post.

What you can do with it:

Of course, what you can do is manage your APIs. That is, if you have a REST or SOAP service which you want to expose as a properly authenticated/authorized service, you can create an API in WSO2 API Cloud and proxy the requests to your back-end service with proper authentication/authorization. There are many other features which you can read about from here.

HOW and WHO can make a fortune of it:

There are many entities who can make a fortune out of API Cloud. But in this post I am concentrating purely on system integrators (SIs). They undertake projects to combine multiple components to achieve some objective. These components might involve databases, services, APIs, web UIs etc.

Now, let's pay attention to publishing a managed API to expose an existing service to be used in the above-mentioned solution. We all know no one will write an API management solution from scratch to achieve this when there are API management solutions available. If an SI decides to go ahead with WSO2 API Cloud, they

1. Can create, publish and test the APIs within hours. If their scenario is complex, it might take a day or two, provided they know what they are doing, with some help from the WSO2 Cloud team.

2. Don't need to worry about hosting the API, its availability or its scalability.

3. Can subscribe for a paid plan starting from 100 USD per month. See the pricing details.

Now, let's say the SI decided to go ahead with API Cloud and subscribed to a paid plan which costs 100 USD per month. If the SI charges 10,000 USD for the solution, you can see the profit margin. You pay very little and get a great API management solution in return. If the SI does a couple of such projects, they can make a fortune out of it :)

Rajjaz MohammedCustom Window extension for siddhi

The Siddhi Window Extension allows events to be collected and expired without altering the event format, based on the given input parameters, just like the built-in Window operator. In this post, we are going to look at how to write a custom window extension for Siddhi and a test case to test the function. By default, window extension archetype 2.0.0 generates the code for a length window, so let's go deep into

Rajjaz MohammedUse siddhi Try it for Experimentation

This blog post explains how to use the Siddhi Try It tool which comes with WSO2 CEP 4.2.0. Siddhi Try It is a tool for experimenting with event sequences through Siddhi Query Language (SiddhiQL) statements in real time. You can define an execution plan to store the event processing logic and input an event stream to test the Siddhi query processing functionality. Follow the

Chandana NapagodaJava - String intern() Method

The String intern() method returns a canonical representation for the given String object. When intern() is invoked on a String object, it looks up the pool of interned strings, and if a String object with the same content already exists in the pool, it returns the existing reference. Otherwise, it adds this string to the pool and returns a reference to it.

Example usage of String intern:

Think about a web application with a caching layer: on a cache miss, the application goes to the database. When the application is running with a high level of concurrency, we should not send all of those requests to the database. In such a situation we can detect that multiple calls are arriving for the same value by checking the interned String reference.
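One way to illustrate that idea (a sketch only, not code from the original post; CachedLookup and loadFromDatabase() are hypothetical names): because intern() returns one shared String instance per distinct value, the interned key can serve as a per-key lock so that concurrent cache misses for the same key reach the database only once. In production a dedicated lock registry is usually preferred, since interned strings live in the JVM-wide string pool.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class CachedLookup {

        private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<String, String>();

        public String lookup(String key) {
            String value = cache.get(key);
            if (value == null) {
                // intern() returns the same String instance for equal contents,
                // so concurrent misses for the same key serialize on one lock object
                synchronized (key.intern()) {
                    value = cache.get(key);
                    if (value == null) {
                        value = loadFromDatabase(key);   // stand-in for the real DB call
                        cache.put(key, value);
                    }
                }
            }
            return value;
        }

        private String loadFromDatabase(String key) {
            return "value-for-" + key;
        }
    }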


String name1 = "Value";
String name2 = "Value";
String name3 = new String("Value");
String name4 = new String("Value").intern();

if ( name1 == name2 ){
    System.out.println("name1 and name2 are same");
}
if ( name1 == name3 ){
    System.out.println("name1 and name3 are same" );
}
if ( name1 == name4 ){
    System.out.println("name1 and name4 are same" );
}
if ( name3 == name4 ){
    System.out.println("name3 and name4 are same" );
}


name1 and name2 are same
name1 and name4 are same

You can see that name1, name2 and name4 share the same reference, while name3 has a different reference.

Lakshani GamageHow to Receive Emails to WSO2 ESB

  1. Uncomment the line below in <ESB_HOME>/repository/conf/axis2/axis2.xml to enable the email transport listener.
    <transportReceiver name="mailto" class="org.apache.axis2.transport.mail.MailTransportListener">

  3. Restart WSO2 ESB if it has already started.
  4. Log in to the Management Console and add the proxy below. 
  5. Here, the proxy transport type is mailto. The mailto transport supports sending messages (email) over SMTP and receiving messages over POP3 or IMAP.
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns=""
    <log level="custom">
    <property expression="$trp:Subject" name="Subject"/>
    <parameter name="transport.PollInterval">5</parameter>
    <parameter name=""></parameter>
    <parameter name="mail.pop3.password">wso2pass</parameter>
    <parameter name="mail.pop3.user">wso2user</parameter>
    <parameter name="mail.pop3.socketFactory.port">995</parameter>
    <parameter name="transport.mail.ContentType">text/plain</parameter>
    <parameter name="mail.pop3.port">995</parameter>
    <parameter name="mail.pop3.socketFactory.fallback">false</parameter>
    <parameter name="transport.mail.Address"></parameter>
    <parameter name="transport.mail.Protocol">pop3</parameter>
    <parameter name="mail.pop3.socketFactory.class"></parameter>

  6. Send an email to the address configured as transport.mail.Address in the proxy. You can see the email being received by the ESB in the logs.
Note : If you are using Gmail to receive emails, you have to allow external apps access in your Google account as mentioned here.

Sameera JayasomaKeep your WSO2 products up-to-date with WUM

We at WSO2 continuously improve our products with bug fixes, security fixes, and various other improvements. Every major release of our…

Nandika JayawardanaWhats new in Business Process Server 3.6.0

With the release of BPS 3.6.0, we have added a whole set of new features to the Business Process Server.

User substitution capability

One of the main features of this release is the user substitution capability provided by BPS. It allows users to define a substitute for a period of absence (for example, a task owner going on vacation). When the substitution period starts, all the tasks assigned to the user are transferred to the substitute. Any new user tasks created against the user will be automatically assigned to the substitute as well.

See more at

JSON and XPath-based data manipulation capability.
When writing a business process, it is necessary to manipulate the data we are dealing with in various forms. These data manipulations include extracting data, concatenation, conversions etc. Often we are dealing with either XML or JSON messages in our workflows. Hence we are introducing JSON and XML data manipulation capabilities with this release.

See more at

Instance data audit logs
With BPS 3.6.0, we are introducing the ability to search and view BPMN process instances from the BPMN explorer UI. In addition, it shows comprehensive audit information with respect to process instance data.

See more at

Enhanced BPEL process visualiser.
In addition to that, we are introducing an enhanced BPEL process visualiser with BPS 3.6.0.

Human Tasks Editor
We are also introducing a WS-Human Tasks editor with Developer Studio. With this editor, you will be able to implement a human tasks package for the Business Process Server with minimum effort and time.

See more at

In addition to the above main features, there are many bug fixes and security fixes included in the BPS 3.6.0 release.

Lakshani Gamage[WSO2 ESB]How to Aggregate 2 XMLs Based on a Element Value Using XSLT Mediator

The XSLT Mediator applies a specified XSLT transformation to a selected element of the current message payload.

Suppose you are getting 2 XML responses from 2 service endpoints like below.

<?xml version="1.0" encoding="UTF-8"?>
<policyList xmlns="">
<holderName>Ann Frank</holderName>
<holderName>Shane Watson</holderName>

<?xml version="1.0" encoding="UTF-8"?>
<policyList xmlns="">

The above two responses are dynamic. So the two XMLs should be aggregated into one XML before applying the XSLT mediator. For that you can use the PayloadFactory mediator and get an aggregated XML like the one below.
<?xml version="1.0" encoding="UTF-8"?>
<policyList xmlns="">
<holderName>Ann Frank</holderName>
<holderName>Shane Watson</holderName>

Assume you want to get the response as follows.
<?xml version="1.0" encoding="UTF-8"?>
<ns:policyList xmlns:ns="">
<policy xmlns="">
<holderName>Ann Frank</holderName>
<policy xmlns="">
<holderName>Shane Watson</holderName>

You can use the XSLT mediator for the above transformation. Use the XSL file below with the XSLT mediator, and upload the XSL file into the registry.

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="" xmlns:sn="" version="1.0">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
<xsl:strip-space elements="*" />
<xsl:key name="policy-by-id" match="sn:details/sn:policy" use="sn:policyId" />
<!-- identity transform -->
<xsl:template match="@*|node()">
<xsl:apply-templates select="@*|node()" />
<xsl:template match="/sn:policyList">
<xsl:apply-templates select="sn:summary/sn:policy" />
<xsl:template match="sn:policy">
<xsl:apply-templates />
<xsl:copy-of select="key('policy-by-id', sn:policyId)/sn:claims" />
<xsl:copy-of select="key('policy-by-id', sn:policyId)/sn:startedDate" />
<xsl:copy-of select="key('policy-by-id', sn:policyId)/sn:validityPeriod" />

Define the XSLT mediator like this in your sequence:
<xslt key="gov:/xslt/merger.xslt"/>

Dimuthu De Lanerolle

Useful OSGI Commands

1. Finding which bundle (jar) a package resides in:
     -  p <package-name>

     - Reply :; version="0.0.0"< [24]>
          ; version="0.0.0"<jmqi_7.5.0.3_1.0.0 [82]>

2. Getting the actual jar name for a bundle id (e.g. bundle 24 from the reply above):
      - diag 24

Amani SoysaiPad Apps helps autistic children to spread their wings

Today there is an app for everything: for maths, science, history, music and many more. Can these apps be useful for children with special needs? Kids with special needs cannot use the traditional approaches to learning; in particular, kids with autism need extra help compared to the rest of these children. They find it a bit hard to focus on the things you teach and to process the information that is targeted. For kids like this, apps can be a life saver.

There is an app for every skill you want to impart. For example there are apps targeting:

      • Improve Attention Span
      • Communication
      • Eye Contact
      • Social Skills
      • Motor co-ordination
      • Language and literacy
      • Time Management
      • Emotional Skills and Self Care

There are many apps in the App Store that facilitate Augmentative and Alternative Communication (AAC), which can be highly useful for autistic kids who are non-verbal or partially speech impaired. These apps use text-to-speech technology, where kids can type words, or drag and drop images, to communicate their needs to others.

Here are some of the reasons why iPad apps are so attractive to kids with autism and why they simply love them.
  • Visual Learning - Children with autism tend to learn more visually or by touch. In most cases they struggle with instructions given in the traditional school setting. Therefore, the iPad provides an interactive educational environment to help them learn.
  • Structured and Consistent - Autistic children often prefer devices to humans mostly because they are structured and consistent. The voice is constant, the pace is constant, they like things to be orderly, and the iPad waits, which is something most human educators lack.
  • Promote independent learning and give immediate feedback - Some apps built for ASD give sensory reinforcement to the child, and when giving negative feedback they choose to do it gently, without any loud noises to distract the child.
  • Diverse Therapeutic Resource - Technology evolves and expands rapidly, so every day you see new apps with a lot of creativity. Kids will not get bored, and the apps they use get updated from time to time.
  • Cost Effective - Compared to other therapeutic resources, the iPad is inexpensive. For example, assistive technology devices such as Saltillo, Dynavox and Tobii cost much more than the iPad.
  • Socially Desirable and Portable - Most therapeutic devices can lead to bullying in school for kids with special needs. But if you give a child an iPad, it makes the child popular at school (because every other kid likes to use an iPad).
  • Apps can be easily customised - Most apps developed targeting autism can be customised; you can add your own videos, images and voice to them, especially the language and communication apps, so the child can learn from the things they see in their day-to-day life.

Here are some of the common iPad apps which target Communication, Social, Emotional and Linguistic Skills.
    • Communication - Tap to talk, iComm, Look2Learn, voice4u, My Talk Tools, iCommunicate, Proloqu2go, Inner Voice.
    • Social Skills - Model Me Going Places, Ubersense, TherAd, iMove (these apps will mostly do video self modelling social stories and give step by step  guidance for social stories such as brushing their teeth, going for a hair cut etc).
    • Linguistic Skills - First Phrases, iLanguage/ Language Builder, Rainbow Sentences, Interactive Alphabet, Touch and Write.
    • Emotional Skills - Smarty Pants.
Special Note for Educators, Teachers and Parents

Even though there is a vast number of apps available today, the iPad is not a babysitter; you should always be present when educating your child. The educational apps are there to help your child develop cognitive processes or enhance their skills in reading, writing, spatial reasoning, or simply as a way of communication. However, for your child to grow, an educator should be present when the child is learning. It can be a great tool and a resource for educational development, but all things should come in moderation and under supervision. It is actually the quality of teaching and support that gives a positive outcome, not just the device.

Dimuthu De Lanerolle

Testing WebsphereMQ 8.0 (Windows server 2012) together with WSO2 ESB 4.8.1 message stores and message processors - Full Synapse Configuration

For configuring IBM WebsphereMQ with WSO2 ESB please click on this link:

WSO2 ESB Full Synapse Configuration
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="">
   <registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
      <parameter name="cachableDuration">15000</parameter>
   <proxy name="MySample2"
          transports="https http"
            <property name="FORCE_SC_ACCEPTED"
            <property name="OUT_ONLY" value="true" scope="default" type="STRING"/>
            <store messageStore="JMSStore2"/>
   <proxy name="MySample"
          transports="https http"
            <property name="FORCE_SC_ACCEPTED"
            <property name="OUT_ONLY" value="true" scope="default" type="STRING"/>
            <store messageStore="JMSStore1"/>
   <proxy name="MyMockProxy"
          transports="https http"
            <log level="custom">
               <property name="it" value="** Its Inline Sequence of MockProxy"/>
            <payloadFactory media-type="xml">
                  <Response xmlns="">
            <header name="To" action="remove"/>
            <property name="RESPONSE" value="true" scope="default" type="STRING"/>
            <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
            <property name="messageType" value="application/xml" scope="axis2"/>
   <endpoint name="MyMockProxy">
      <address uri=""/>
   <sequence name="fault">
      <log level="full">
         <property name="MESSAGE" value="Executing default 'fault' sequence"/>
         <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
         <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
   <sequence name="main">
         <log level="full"/>
         <filter source="get-property('To')" regex="http://localhost:9000.*">
      <description>The main sequence for the message mediation</description>
   <messageStore class=""
      <parameter name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
      <parameter name="store.jms.password">wso2321#qa</parameter>
      <parameter name="java.naming.provider.url">file:\C:\Users\Administrator\Desktop\jndidirectory</parameter>
      <parameter name="store.jms.connection.factory">MyQueueConnectionFactory</parameter>
      <parameter name="store.jms.username">Administrator</parameter>
      <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
      <parameter name="store.jms.destination">LocalQueue1</parameter>
   <messageStore class=""
      <parameter name="java.naming.factory.initial">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
      <parameter name="store.jms.password">wso2321#qa</parameter>
      <parameter name="java.naming.provider.url">file:\C:\Users\Administrator\Desktop\jndidirectory</parameter>
      <parameter name="store.jms.connection.factory">MyQueueConnectionFactory5</parameter>
      <parameter name="store.jms.username">Administrator</parameter>
      <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
      <parameter name="store.jms.destination">LocalQueue5</parameter>
   <messageProcessor class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
      <parameter name="client.retry.interval">1000</parameter>
      <parameter name="">5</parameter>
      <parameter name="interval">10000</parameter>
      <parameter name="">true</parameter>
   <messageProcessor class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
      <parameter name="">2</parameter>
      <parameter name="client.retry.interval">1000</parameter>
      <parameter name="interval">10000</parameter>
      <parameter name="">true</parameter>

Important Points :

[1] Identify the message flow ...

 SoapRequest (SoapUI) --> MySampleProxy (ESB) --> MessageStore (ESB) --> MessageQueue (WebsphereMQ) --> MessageProcessor (ESB) --> MyMockProxy (ESB)

[2] Remember to set additional message store (MS) / message processor (MP) settings depending on your requirement,
      e.g. maximum delivery attempts, retry period etc.

Thushara RanawakaHow to Create a New Artifacts Type in WSO2 Governance Registry(GReg) - Registry Extensions(RXT)

Hi everybody, today I'm going to explain how to write an RXT from scratch and the new features we have included in GReg 5.2.0 RXTs. This article will help you to write an RXT from scratch and to modify the out-of-the-box RXTs for your needs. You can find the full RXT I'm explaining below from here. At the end of the document, I will explain how to upload this RXT to WSO2 GReg.

For this example, I will be creating the RXT fields using an actual use case. First, let's define the header of the RXT. For that, I'm using the following parameters.

mediaType : application/vnd.dp-restservice+xml
shortName : dprestservice

<artifactType type="application/vnd.dp-restservice+xml" shortName="dprestservice" singularLabel="DP REST Service" pluralLabel="DP REST Services" hasNamespace="false" iconSet="20">

  • type - Defines the media type of the artifact. The type format should be application/vnd.[SOMENAME]+xml. SOMENAME can contain any alphanumeric character, "-" (hyphen), or "." (period).
  • shortName - Short name for the artifact
  • singularLabel - Singular label of the artifact
  • pluralLabel - Plural label of the artifact
  • hasNamespace - Defines whether the artifact has a namespace (boolean)
  • iconSet - Icon set number used for the artifact icons

We can name type (mediaType) and shortName as the most important parameters, so please keep those in mind at all times. You can't change shortName once you have defined it (saved the artifact type), but you can work around this by reapplying the whole RXT with a new shortName. You can change mediaType later on, but we do not recommend it since there is a risk of losing all the assets: if you want to change the mediaType, you have to change the mediaType of every asset that is already in GReg.

The next big thing is to create the storage path. This path is used to store the metadata of this RXT type. In the example, I'm using /trunk/dprestservices/@{overview_version}/@{overview_name} to store the assets created using this RXT. The @{...} placeholders denote the dynamic data that users are going to input, and overview is the table name under which we include the name and version data of every asset created using the DP REST Service type.


For example, if we create an asset using overview_name=testdp and overview_version=1.0.0, then the storage path for that asset would be /trunk/dprestservices/1.0.0/testdp. The nameAttribute is specially used to denote the asset name.


Likewise, you can add namespaceAttribute to denote the namespace, if and only if you use namespaces.


Defining nameAttribute and namespaceAttribute is only necessary if you're using something other than the default RXT table values. The default RXT table values are overview_name and overview_namespace.
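To make this concrete, the relevant elements of the RXT, using the path and the default field names discussed above, would look roughly like the sketch below. Treat this as an illustration of the standard RXT elements rather than a copy of the full definition:

<storagePath>/trunk/dprestservices/@{overview_version}/@{overview_name}</storagePath>
<nameAttribute>overview_name</nameAttribute>
<namespaceAttribute>overview_namespace</namespaceAttribute>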

If you want to attach a lifecycle at asset creation, you have to add the below tag with the lifecycle name. For this example, I'm using ServiceLifeCycle, which is also available out of the box with GReg 5.2.0.
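Assuming the standard RXT lifecycle element, that tag would be:

<lifecycle>ServiceLifeCycle</lifecycle>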


Now we are done with the upper section of the RXT definition. Let's start creating the asset listing page. This section is specifically created to list the assets in the management console (https://localhost:9443/carbon). Creating the listing page is straightforward. If you want to display the name and version in the list page, simply add the below lines:

            <column name="Name">
                <data type="path" value="overview_name" href="@{storagePath}"/>
            <column name="Version">
                <data type="path" value="overview_version" href="/trunk/dprestservices/@{overview_version}"/>

There are two data types: path and text. path is a clickable hyperlink that will direct you to the location given by href, while text is not clickable and will just be a label. As per the above example, if you click on the name you will be directed to the metadata file, and if you click on the version you will be directed to the collection (directory). Let's add another column called namespace and set the type as text.

<column name="Service Namespace">
        <data type="text" value="overview_namespace"/>

Add the above tag anywhere within the list tag.

Well, now we have come halfway through, and it's time to create the form that collects the user data. Let's start with the overview table.
overview is the table name we have used in this example to store name, version, and namespace (additional). Likewise, you can use any table name that you prefer. However, if you're using something other than overview, we recommend that you create the publisher and store extensions accordingly. I will be writing another article on this in the near future.
Let's create the content:
        <table name="Overview">
            <field type="text" required="true" readonly="true">
                <name label="Name">name</name>
            <field type="text" required="true" validate="\/[A-Za-z0-9]*">
            <field type="text" url="true">
  <name label="URL">url</name>

            <field type="text" required="true" default="1.0.0">
                <name label="version">version</name>

Let's start from the simplest explanation and move to the more complex ones later.

  • default="1.0.0"

Let's first talk about the default attribute. It initializes a specific field with the given value. In this example, the version field gets the initial value 1.0.0, which is editable at any time.

  • url="true"
This will render the field value as a clickable link. Users can link to another asset simply by storing its asset ID: paste the bolded part of the example URL below (the asset's UUID) from another asset, and users get a simple link.

Example : https://localhost:9443/publisher/assets/dprestservice/details/50631a7c-646e-4156-88af-dad46be2f428
  • <name>context</name>

The value defined in the name tag is the reference name of a specific field. The user has to omit spaces in this value, and the use of camel case is preferred. In WSO2 GReg, the reference name is created by concatenating the table name and the value in the name tag; therefore the reference names for the above three fields will be overview_name, overview_context and overview_version.

  • validate="\/[A-Za-z0-9]*"

The validate attribute is an inline regex validation for the field; in this case, it is applied to context. Likewise, you can add any regex you want.

  • label="Name"

The label is the display name for the field, so the user can use any kind of characters and sentences here.

  • readonly="true"

readonly means that once you save the field for the first time, users are not allowed to change it.

  • required="true"

Makes the field mandatory or not.

  • name="Overview"
This denotes the table name for the set of fields inside it.
  • type

There are six different field types available in GReg 5.2.0. Kindly find all the types below, each with an example.


<field type="text" url="true">
       <name label="URL">url</name>



<field type="options">
                <name label="Transport Protocols">transportProtocols</name>


<field type="text-area">


<field type="checkbox">
   <name label="BasicAuth">BasicAuth</name>



<field type="date">
 <name label="From Date">FromDate</name>


     <heading>Contact Type</heading>
     <heading>Contact Name/Organization Name/Email Address</heading>

<field type="option-text" maxoccurs="unbounded">
                <name label="Contact">Contact</name>
                    <value>Technical Owner</value>
                    <value>Technical Owner Email</value>
                    <value>Business Owner</value>
                    <value>Business Owner Email</value>

maxoccurs="unbounded" makes the field infinite.  let's say if you want to make a set of different field types unbounded like this you can use this attribute in a table like below and achieve that task.

<table name="Doc Links" columns="3" maxoccurs="unbounded">
                <heading>Document Type</heading>
            <field type="options">
                <name label="Document Type">documentType</name>
            <field type="text" url="true" validate="(https?:\/\/([-\w\.]+)+(:\d+)?(\/([\w/_\.]*(\?\S+)?)?)?)">
                <name label="URL">url</name>
            <field type="text-area">
                <name label="Document Comment">documentComment</name>

How to load dynamic content into a field:

<field type="options" >
    <name label="WADL">wadl</name>
    <values class="org.wso2.sample.rxt.WSDLPopulator"/>


For that, please refer to this blog post.

To deploy this RXT in GReg, please follow the steps below.

1. Log in to the carbon console: https://localhost:9443/carbon/

2. Find Extensions in the left vertical bar and click it.

3. Click on Add new extension.

4. First, remove the default sample by selecting all of it, then paste the new RXT content from here.
5. Finally, as a best practice, make sure to upload dprestservice.rxt to the <GREG_HOME>/repository/resources/rxts/ directory as well.

Please add a comment if you have any clarifications regarding this.

Thushara RanawakaGood way to monitor carbon servers - Netdata v1.2.0

Netdata is a real-time monitoring tool for Linux systems that allows us to visualize the core metrics of a system such as CPU, memory, hard disk speed, network traffic, applications, etc. It basically shows most of the details we can get from the Linux performance tools, and is similar to Netflix Vector. Netdata focuses on real-time visualization only; currently the data is stored only in memory, and there is a plan to store historical data in some data store in a future version. The biggest strength of Netdata is its ease of use: it only takes about 3 minutes to start using it. Let me show you how easy it is to get started with it on an Ubuntu server.

      1. First, prepare your Ubuntu server by installing the required packages.

$ apt-get install zlib1g-dev uuid-dev libmnl-dev gcc make git autoconf autogen automake pkg-config

      2. Clone/download the netdata source and go to the netdata directory.

$ git clone --depth=1
cd netdata

      3. Then install netdata.

$ ./

      4. After the installation completes, kill the running Netdata instance.

$ killall netdata

      5. Start Netdata again.

$ /usr/sbin/netdata

      6. Now open a new browser tab and enter the below URL.


If you're running this on your local machine the URL should be http://localhost:19999.

That's it! Now Netdata is up and running on your machine/server/VM.

Netdata is developed in the C language. The installation is simple and the resources it uses are extremely small; it actually takes less than 2% CPU.

After going through all the pros and cons, I believe this will be a good way to monitor carbon servers remotely.

Thushara RanawakaRetrieving Associations Using WSO2 G-Reg Registry API Explained

This was a burning issue I had while implementing a client to retrieve association-related data. In this post, I will be rewriting the WSO2 official documentation for the association registry REST API. Without further ado, let's send some requests and get some responses :).

The following terms explain the meaning of the query parameters passed with the following REST URIs.
  • path - Path of the resource (a.k.a. registry path).
  • type - Type of association. By default, Governance Registry has 8 types of association, such as usedBy, ownedBy, etc.
  • start - Start page number.
  • size - Number of associations to be fetched.
  • target - Target resource path (a.k.a. registry path).

Please note that the { start page } and { number of records } parameters can take any value greater than or equal to 0. Paging of { start page } and { number of records } begins at 1; if both of them are 0, then all the associations are retrieved.

Get all the Associations on a Resource

HTTP Method:
Request URI: /resource/1.0.0/associations?path={ resource path }&start={ start page }&size={ number of records }
HTTP Request Header: Authorization: Basic { base64encoded(username:password) }
Description: Retrieves all the associations posted on the specific resource.
Response: HTTP 200 OK
Response Type: application/json

Sample Request and Response
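The sample request and response from the original post are not reproduced here, but as a rough sketch, such a request could be sent with curl as follows, assuming the API is exposed on https://localhost:9443 and using a placeholder registry path:

curl -k -X GET \
  -H "Authorization: Basic $(echo -n 'admin:admin' | base64)" \
  "https://localhost:9443/resource/1.0.0/associations?path=/_system/governance/trunk/restservices/1.0.0/sampleservice&start=1&size=10"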

Get Associations of Specific Type on a Given Resource

HTTP Method:
Request URI: /resource/1.0.0/associations?path={ resource path }&type={ association type }
HTTP Request Header: Authorization: Basic { base64encoded(username:password) }
Description: Retrieves all the associations of the specific type for the given resource.
Response: HTTP 200 OK
Response Type: application/json

Sample Request and Response

Add Associations to a Resource

  1. Using a JSON payload

HTTP Method:
Request URI: /resource/1.0.0/associations?path={resource path}
HTTP Request Headers:
Authorization: Basic { base64encoded(username:password) }
Content-Type: application/json
Payload: [{ "type":"<type of the association>","target":"<valid resource path>"}]
Description: Adds the array of associations passed as the payload for the source resource.
Response: HTTP 204 No Content
Response Type: application/json
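A hedged curl sketch of this call, assuming the add operation is a POST on https://localhost:9443 and using placeholder paths and a placeholder association type:

curl -k -X POST \
  -H "Authorization: Basic $(echo -n 'admin:admin' | base64)" \
  -H "Content-Type: application/json" \
  -d '[{"type":"usedBy","target":"/_system/governance/trunk/restservices/1.0.0/otherservice"}]' \
  "https://localhost:9443/resource/1.0.0/associations?path=/_system/governance/trunk/restservices/1.0.0/sampleservice"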

    2. Using Query Params

HTTP Method:
Request URI: /resource/1.0.0/associations?path={resource path}&targetPath={target resource}&type={association type}
HTTP Request Headers:
Authorization: Basic { base64encoded(username:password) }
Content-Type: application/json
Response: HTTP 204 No Content
Response Type: application/json

Delete Associations on a Given Resource

HTTP Method:
Request URI: /resource/1.0.0/association?path={resource path}&targetPath={target path}&type={association type}
Description: Deletes the association between the source and target resources for the given association type.
Response: HTTP 204 No Content
Response Type: application/json

Again this is a detailed version of WSO2 official documentation. This concludes the post. 

Thushara RanawakaHow to remotely debug a WSO2 product running in a Docker container

This post explains how to debug a WSO2 product running in a Docker container. As per the prerequisites, you have to clone two WSO2 repos along the way, and you should also have a good knowledge of remote debugging. Without further ado, let's start the process.

First, clone and download the Puppet modules for WSO2 products.
Move to the v2.1.0 tag.
Add the product pack and JDK to the following locations:
 Download and copy the JDK 1.7 (jdk-7u80-linux-x64.tar.gz) pack to the below directory.
 Download the relevant product pack and copy it to the below directory.
 Then clone the WSO2 Dockerfiles.
Now set the Puppet home using the below terminal command:
export PUPPET_HOME=<puppet_modules_path>
export PUPPET_HOME=/Users/thushara/Documents/greg/wso2/puppet-modules
Update the following lines in
echo "Starting ${SERVER_NAME} with [Startup Args] ${STARTUP_ARGS}, [CARBON_HOME] ${CARBON_HOME}"
${CARBON_HOME}/bin/ start
sleep 10s
tail -500f ${CARBON_HOME}/repository/logs/wso2carbon.log

Let's install Vim in the Docker container for debugging purposes. For that, add the following lines to the Dockerfile, before the last 2 lines.


RUN apt-get install -y vim
Now update the following line to open the remote debug port.
bash ${common_folder}/ -n ${product_name} -p 9763:9763 -p 9443:9443 -p 5005:5005 $*
That's it with the configuration; now you can build and start the Docker container to start debugging.
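One note before building: for a remote debugger to actually attach through the mapped port 5005, the Carbon server inside the container has to be started in debug mode. A minimal sketch, assuming the standard Carbon start script (whose name is not shown in the snippets above) and a plain JDWP socket attach from your IDE:

# Inside the container: start the server with the remote debug agent listening on 5005
${CARBON_HOME}/bin/wso2server.sh -debug 5005

# From your IDE: create a "Remote" debug configuration pointing at the Docker host IP and port 5005.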
First, build the Docker image:
<dockerfiles_home>/wso2greg $ bash -v 5.2.0 -r puppet
Now run the Docker container:
<dockerfiles_home>/wso2greg $ bash -v 5.2.0
To find the Docker container name, list all the running Docker processes using the below command:
<dockerfiles_home>/wso2greg $ docker ps
Let's log in to the Docker instance using the image name (wso2greg-default), which we got from the docker ps command:
<dockerfiles_home>/wso2greg $ docker exec -it wso2greg-default /bin/bash
Now you can start remote debugging as you are used to :)
Below I have added additional commands you might need. To see the list of Docker machines up and running, use the below command:
<dockerfiles_home>/wso2greg $ docker-machine ls
That's it from here. If you have any questions, please create a StackOverflow question and attach it as a comment below.

Thushara RanawakaHow to Schedule a Task Using WSO2 ESB 4.9.0

The scheduled task is one of the very useful hidden features that comes with WSO2 ESB 4.9.0. It is a much improved and more reliable version of the scheduled task compared to previous versions of the ESB. The task scheduler also works in clustered environments, such as 1 manager and 2 workers, etc. Let's start hacking the WSO2 ESB.

First, we have to create a sample back-end service. The back-end sample services come with a pre-configured Axis2 server and demonstrate in-only and in-out SOAP/REST or POX messaging over HTTP/HTTPS and JMS transports using WS-Addressing, WS-Security, and WS-Reliable Messaging. They also handle binary content using MTOM and SwA.
1. Each back-end sample service can be found in a separate folder in the <ESB_HOME>/samples/axis2Server/src directory. They can be built and deployed using Ant from each service directory. You can do this by typing "ant" without quotes on a console from a selected sample directory. For example,
user@host:/tmp/wso2esb-2.0/samples/axis2Server/src/SimpleStockQuoteService$ ant
Buildfile: build.xml
      [jar] Building jar: /tmp/wso2esb-2.0/samples/axis2Server/repository/services/SimpleStockQuoteService.aar
Total time: 3 seconds
2. Next, start the Axis2 server. Go to the <ESB_HOME>/samples/axis2Server directory and execute either the shell script (for Linux) or axis2server.bat (for Windows). For example,
This starts the Axis2 server with the HTTP transport listener on port 9000 and HTTPS on 9002.
3. Now add the sample sequence. Please follow the steps below.
i. Click on Sequences under Service Bus in the left pane.
ii. Click on Add Sequence.
iii. Then click on "switch to source view" from the tab.

iv. Delete everything in that box and add the below sequence.
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="iterateSequence" xmlns="">
    <iterate attachPath="//m0:getQuote"
        expression="//m0:getQuote/m0:request" preservePayload="true"
        xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd">
                        <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                <log level="custom">
                        name="Stock_Quote_on" xmlns:ax21="http://services.samples/xsd"/>
                        name="For_the_organization" xmlns:ax21="http://services.samples/xsd"/>
                        name="Last_Value" xmlns:ax21="http://services.samples/xsd"/>

Note: you have to change the endpoint address if you're running SimpleStockQuoteService on some other endpoint.

v. Save & Close the view.

4. Let's add the scheduling task.
Click on Scheduled Tasks under Service Bus in the left pane. Select Add Task and fill in the fields as shown below.

i. Task Name - CheckQuote
ii. Task Group - synapse.simple.quartz
iii. Task Implementation - org.apache.synapse.startup.tasks.MessageInjector
iv. Set below properties 
sequenceName - Literal - iterateSequence
injectTo - Literal - sequence
message - XML - 
<m0:getQuote xmlns:m0="http://services.samples">

v. Trigger type - Single
vi. Count - 100 
Note: This means the task will run 100 times. If you want the task to run indefinitely, simply set the count to -1.
vii. Interval (in seconds) - 10

That's it! Now click on the Schedule button and the task will start executing according to the interval. In this example, the task will start in 10 seconds.

Kindly find the scheduled task XML code below:

<task class="org.apache.synapse.startup.tasks.MessageInjector"
        group="synapse.simple.quartz" name="CheckQuote">
        <trigger count="100" interval="10"/>
        <property name="sequenceName" value="iterateSequence" xmlns:task=""/>
        <property name="message" xmlns:task="">
            <m0:getQuote xmlns:m0="http://services.samples">
        <property name="injectTo" value="sequence" xmlns:task=""/>


Thushara RanawakaHow to Write a Simple G-Reg Lifecycle Executor to send mails when triggered using WSO2 G-Reg 5 Series.

When it comes to SOA governance, lifecycle management (aka LCM) is a very useful feature. Recently I wrote a post about lifecycle checkpoints, which is another useful feature that comes with the WSO2 G-Reg 5 series. Before you start writing an LC executor, you need to have basic knowledge of G-Reg LCM and lifecycle syntax, and a good knowledge of Java.
In this sample, I will create a simple lifecycle that has 3 states: Development, Tested, and Production. In each state transition, I will call the LC executor and send mails to the list defined in the lifecycle.

To get a basic idea about WSO2 G-Reg Lifecycles please go through Lifecycle Sample documentation. If you have a fair knowledge about LC tags and attributes you can straightaway add the first checkpoint to LC. You can find the official G-Reg documentation for this from here.

First, let's start off by writing a simple lifecycle called EmailExecutorLifeCycle. In the LC, there are only the below things that you need to consider if you have basic knowledge of WSO2 LCs:

<state id="Development">
     <data name="transitionExecution">
       <execution forEvent="Promote" class="">                             <parameter name="email.list" value=""/>                  </execution>
                    <transition event="Promote" target="Tested"/>   

Let's explain the bold syntax above.
<datamodel> : This is where we define additional data models that need to be executed on a state change.

<data name="transitionExecution"> : Within this tag we define the general things that need to be executed subsequent to the state change. Likewise we have transitionValidation, transitionPermission, transitionScripts, transitionApproval, etc.

forEvent="Promote" : This defines for which event this execution needs to happen. In this case, it is Promote.

class="" : Class path of the executor; we will be discussing this file later. For the record, this jar file needs to be added to the dropins or libs directory, which is located in repository/components/.

name="email.list" : This is a custom parameter that we define for this sample. Just add some valid emails for the sake of testing.

<transition event="Promote" target="Tested"/> : The transition actions available for the state; there can be 0 to many transitions for a state. In this case, it is the Promote event, which moves the asset to the Tested state.

Please download the EmailExecutorLifeCycle.xml and apply it using G-Reg mgt console.
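The executor class itself is only linked from the original post, but a minimal sketch of what such an executor could look like is shown below. It assumes the standard G-Reg executor interface (org.wso2.carbon.governance.registry.extensions.interfaces.Execution) and plain JavaMail for sending the notification; the package name, SMTP settings, and credentials are placeholders, not the author's actual code.

package org.example.greg.executor; // hypothetical package, not the author's

import java.util.Map;
import java.util.Properties;

import javax.mail.Message;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

import org.wso2.carbon.governance.registry.extensions.interfaces.Execution;
import org.wso2.carbon.registry.core.jdbc.handlers.RequestContext;

public class EmailLifecycleExecutor implements Execution {

    private String emailList;

    // Receives the <parameter> values declared under <execution> in the lifecycle XML.
    public void init(Map parameterMap) {
        emailList = (String) parameterMap.get("email.list");
    }

    // Invoked when the forEvent (Promote) fires; returning true lets the transition proceed.
    public boolean execute(RequestContext context, String currentState, String targetState) {
        try {
            sendMail("Lifecycle state changed",
                    "Asset moved from " + currentState + " to " + targetState);
            return true;
        } catch (Exception e) {
            // Returning false would block the state transition.
            return false;
        }
    }

    private void sendMail(String subject, String body) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com"); // placeholder SMTP settings
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.auth", "true");

        Session session = Session.getInstance(props, new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("sender@gmail.com", "password"); // placeholders
            }
        });

        Message message = new MimeMessage(session);
        message.setFrom(new InternetAddress("sender@gmail.com"));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(emailList));
        message.setSubject(subject);
        message.setText(body);
        Transport.send(message);
    }
}

The fully qualified name of such a class is what goes into the class attribute of the <execution> element shown above, and the email.list parameter value is handed to it through init().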

Since we are sending a mail from our executor, we need to fill in the mail admin's email settings. Please go to axis2.xml, which is located in repository/conf/axis2/, and uncomment the mail transport sender configuration in that XML file. Please find the sample settings for a Gmail transport sender below.

<!-- To enable the mail transport sender, uncomment the following and change the parameters
<transportSender name="mailto"
        <parameter name="mail.smtp.from"><></parameter>
        <parameter name="mail.smtp.user"><></parameter>
        <parameter name="mail.smtp.password"><password></parameter>
        <parameter name=""></parameter>

        <parameter name="mail.smtp.port">587</parameter>
        <parameter name="mail.smtp.starttls.enable">true</parameter>
        <parameter name="mail.smtp.auth">true</parameter>

Please fill the bold fields with correct values.

If you have enabled 2-step verification in your Gmail account, you have to disable it first. Please follow these steps:

 1. Log in to Gmail.
 2. Go to the Gmail security page.
 3. Find 2-step verification there and open it.
 4. Select "Turn off".

Let's download the executor jar and save it in <GREG_HOME>/repository/components/libs/. Then do a restart.

Now it's time to see the executor in action,

1. Add the EmailExecutorLifeCycle to any metadata type RXT, such as soapservice, restservice, etc., using the mgt console.
2. Log in to the publisher and, from the lifecycle tab, promote the LC state to the next state.

3. Now check the inbox of any mail ID that you included in email.list.

You can download the source code from here.

Yashothara ShanmugarajahIntroduction to WSO2 BPS

We can use WSO2 BPS for efficient business process management; it allows easy deployment of business processes written using either the WS-BPEL standard or the BPMN 2.0 standard.

Business Process: A collection of related and structured activities or tasks that serves a business use case and produces a specific service or output. A process may have zero or more well-defined inputs and an output.

Process Initiator: The person or application that initiates the business process.

Human task: Here, human interaction is involved in the business process.

BPEL: An XML-based language used for the definition and execution of business processes, composing and orchestrating web services.

BPMN: A graphical notation for business processes.

Simple BPEL Process Modeling
  • Download the BPS product.
  • Go to <BPS_HOME> -> bin in a terminal and execute the sh ./ command to start the server.
  • Install the plug-in with pre-packaged Eclipse - This method uses a complete plug-in installation with pre-packaged Eclipse, so that you do not have to install Eclipse separately. On the WSO2 BPS product page, click Tooling and then download the distribution according to your operating system under the Eclipse JavaEE Mars + BPS Tooling 3.6.0 section.
  • Then we need to create a Carbon Composite Application Project. These steps are clearly explained in the BPS documentation.

sanjeewa malalgodaLoad balance data publishing to multiple receiver groups - WSO2 API Manager /Traffic Manager

In previous articles we discussed the Traffic Manager and different deployment patterns. In this article we will further discuss the different Traffic Manager deployments we can use across data centers. Cross-data-center deployments must use the publisher group concept, because each event needs to be sent to all data centers if we need global counts across DCs.

In this scenario there are two groups of servers, referred to as Group A and Group B. You can send events to both groups. You can also carry out load balancing for both sets, as mentioned in load balancing between a set of servers. This scenario is a combination of load balancing between a set of servers and sending an event to several receivers.
An event is sent to both Group A and Group B. Within Group A, it will be sent either to Traffic Manager-01 or Traffic Manager-02. Similarly, within Group B, it will be sent either to Traffic Manager-03 or Traffic Manager-04. In the setup, you can have any number of groups and any number of Traffic Managers within a group, as required, by mentioning them accurately in the server URL. For this scenario it is mandatory to publish events to each group, but within a group we can do it in two different ways:

  1. Publishing to multiple receiver groups with load balancing within group
  2. Publishing to multiple receiver groups with failover within group

Now let's discuss both of these options in detail. This pattern is the recommended approach for multi-data-center deployments when we need to have unique counters across data centers. Each group resides within a data center, and within each data center there are 2 Traffic Manager nodes to handle high-availability scenarios.

Publishing to multiple receiver groups with load balancing within group

As you can see in the diagram below, the data publisher will push events to both groups. But since we have multiple nodes within each group, it will send an event to only one node at a time, in round-robin fashion. That means within Group A the first request goes to Traffic Manager 01 and the next goes to Traffic Manager 02, and so on. If Traffic Manager node 01 is unavailable, then all traffic will go to Traffic Manager node 02, which addresses failover scenarios.
[Diagram: load-balanced publishing to two Traffic Manager receiver groups]

Similar to the other scenarios, you can describe this as a receiver URL. The groups should be mentioned within curly braces, separated by commas. Furthermore, each receiver that belongs to a group should be within the curly braces, with the receiver URLs in comma-separated format. The receiver URL format is given below.

          Binary    {tcp://,tcp://},{tcp://,tcp://}
{ssl://,ssl://}, {ssl://,ssl://}
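With hypothetical hosts filled in (the host names and ports below are placeholders, not values from the original post), a load-balancing receiver URL for two groups of two Traffic Managers each might look like:

{tcp://tm1-dc1:9611,tcp://tm2-dc1:9611},{tcp://tm1-dc2:9611,tcp://tm2-dc2:9611}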

Publishing to multiple receiver groups with failover within group

As you can see in the diagram below, the data publisher will push events to both groups. But since we have multiple nodes within each group, it will send an event to only one node at a given time. If that node goes down, the event publisher will send events to the other node within the same group. This model guarantees message publishing to each server group.

[Diagram: failover publishing within each Traffic Manager receiver group]
According to the following configuration, the data publisher will send events to both Group A and Group B. Within Group A, events will go to either Traffic Manager 01 or Traffic Manager 02. If events go to Traffic Manager 01, they will keep going to that node until it becomes unavailable; once it is unavailable, events will go to Traffic Manager 02.

Binary{tcp:// | tcp://},{tcp:// | tcp://}
{ssl://,ssl://}, {ssl://,ssl://}
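Again with placeholder hosts (not from the original post), a failover-within-group receiver URL uses the pipe separator inside each group:

{tcp://tm1-dc1:9611|tcp://tm2-dc1:9611},{tcp://tm1-dc2:9611|tcp://tm2-dc2:9611}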

Thushara RanawakaSOA Governance with WSO2 Governance Registry(G-Reg)

The WSO2 Governance Registry (G-Reg) has gone through a major transformation, from a good SOA governance platform to the best ever. Starting from the G-Reg 5 series, WSO2 Governance Registry has shown great improvement compared to its previous versions. G-Reg now comes with multiple views for different roles, i.e. publishers, consumers/subscribers (aka store users) and administrators. This is a significant change from the previous versions, which included just one view, the good old carbon console, for all users. Before looking at G-Reg, let's first understand what an SOA registry is and its purpose.
A service-oriented architecture registry (SOA registry) is a resource that sets access rights for data that is necessary for service-oriented architecture projects. An SOA registry allows service providers to discover and communicate with consumers efficiently, creating a link between service providers and service consumers. This is where the registry comes into the picture to facilitate SOA governance. The registry can act as a central database that includes artifacts for all services planned for development, in use, and retired. Essentially, it's a catalog of services which are searchable by service consumers and providers. The WSO2 Governance Registry is more than just an SOA registry because, in addition to providing end-to-end SOA governance, it can also store and manage any kind of enterprise asset, including but not limited to services, APIs, policies, projects, applications, and people.
Let's talk about some features WSO2 G-Reg Offers.
  • Seamless integration with great WSO2 product stack which contains award winning ESB, APIM, DAS, IS and much more.
  • Various integration techniques with other 3rd party applications or products.
  • Ability to store different types of resources out of the box(Content type and metadata type)
 - Content type:- WSDL, WADL, Schema, Policy and Swagger
 - Metadata type:- Rest Service, SOAP Service.
  • Add resources from the file system and from URL.
  • CRUD operations using UI for resources and capable of doing all the governance-related tasks.

  • Strong Taxonomy and filtering search capabilities.
  • Complex Searchability and Quick Search using Solr-based search engine.
  • Reuse stored searches using search history.
  • Great social features such as review, rate and much more...

  • XML-based graphical Lifecycle designers.
  • D3 Based Lifecycle Management UI.
  • Strong life-cycle management(LCM) capabilities.
  • Ability to automate actions using LCM executors and G-Reg Handlers.
  • Strong role-based visualizations.
  • Visualize dependencies and Associations.

  • UI or API capability for subscriptions and notifications.
  • Lifecycle subscriptions

  • Set of strong REST APIs that can do all the CRUD operations including governing an asset.
  • Use as an embedded repository.
  • Use as a service repository with UDDI.

  • Configure static and custom reports using admin console.
  • Ability to produce static snapshot reports(search) using admin console.
  • Report Generation Scheduling.

If you want to know more about G-Reg and its capabilities, we recommend that you download and run G-Reg on your local machine; it will take just 5 minutes. Please make sure to follow the getting started guide, which will walk you through all the major features that G-Reg offers out of the box.

Lakshani Gamage[WSO2 App Manager] How to Add a Custom Radio Button Field to a Webapp

In WSO2 App Manager, when you create a new web app, you have to fill in a set of predefined fields. If you want to add any custom fields to an app, you can easily do so.

Suppose you want to add a custom radio button field to the web app create page. Say the custom radio button field name is "App Category".

First, let's see how to add a custom field to the UI (Jaggery APIs).
  1. Modify <APPM_HOME>/repository/resources/rxt/webapp.rxt and add the new field:
    <field type="text" required="true">
    <name>App Category</name>
    </field>

    Note: If you don't want to make the custom field mandatory, the required="true" part is not necessary.
  3. Login to Management console and navigate to Home > Extensions > Configure > Artifact Types and delete "webapp.rxt"
  4. Add the below code snippet to the required place in <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/add-asset.hbs
    <div class="form-group">
    <label class="control-label col-sm-2">App Category: </label>
    <div class="col-sm-10">
    <div class="radio">
    <input type="radio" data-value="free" class="appCategoryRadio" name="appCategoryRadio">
    <div class="radio">
    <input type="radio" data-value="premium" class="appCategoryRadio" name="appCategoryRadio">

    <input type="hidden" class="col-lg-6 col-sm-12 col-xs-12" name="overview_appCategory"

  6. Add below code snippet to <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/partials/edit-asset.hbs.
    <div class="form-group">
    <label class="control-label col-sm-2">App Category: </label>
    <div class="col-sm-10">
    <div class="radio">
    <input type="radio" data-value="free" class="appCategoryRadio" name="appCategoryRadio">
    <div class="radio">
    <input type="radio" data-value="premium" class="appCategoryRadio" name="appCategoryRadio">

    <input type="hidden"
    value="{{{snoop "fields(name=overview_appCategory).value" data}}}"
    class="col-lg-6 col-sm-12 col-xs-12" name="overview_appCategory"

  8. To save the selected radio button value in the registry, you need to add the below function inside $(document).ready(function() {...}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-add.js
    var output = [];
    $(".appCategoryRadio").each(function (index) {
        var categoryValue = $(this).data('value');
        if ($(this).is(':checked')) {
            output.push(categoryValue); // collect the checked category
        }
    });
    $('#overview_appCategory').val(output.join(',')); // store in the hidden overview_appCategory field

  10. To preview the selected radio button value in the app edit page, add the below code snippet inside $(document).ready(function() {...}) of <APPM_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/appm/js/resource-edit.js.
    var output = [];
    $(".appCategoryRadio").each(function (index) {
        var categoryValue = $(this).data('value');
        if ($(this).is(':checked')) {
            output.push(categoryValue); // collect the checked category
        }
    });
    $('#overview_appCategory').val(output.join(','));

    var appCategoryValue = $('#overview_appCategory').val().split(',');
    $(".appCategoryRadio").each(function (index) {
        var value = $(this).data('value');
        if ($.inArray(value, appCategoryValue) >= 0) {
            $(this).prop('checked', true); // re-check the stored category
        }
    });

  12. When you create a new version of an existing web app, to copy the selected radio button value to the new version, add the below line to the corresponding page:
    <input type='text' value="{{{snoop "fields(name=overview_appCategory).value" data}}}" name="overview_appCategory" id="overview_appCategory"/>

    Now, let's see how to add the customized fields to the REST APIs.
  14. Go to Main -> Browse in the Management console, navigate to /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json, and click on "Edit As Text". Add the custom fields which you want to add.

  16. Restart App Manager.
  17. The web app create page with the newly added radio button will be shown as below.