WSO2 Venus

Manula Chathurika Thantriwatte: How to write a subscriber and publisher to a JBoss MQ topic

This post explains topics in JBoss MQ, with subscribing and publishing. For this we will write two Java clients:

  • one to subscribe for messages
  • one to publish messages

First you have to download the JBoss Application Server from here. In this sample I'm using jboss-4.2.2.GA. Before starting the JBoss application server you have to create a topic in it. To do that, create a myTopic-service.xml file (you can use whatever name you want) under <JBOSS_SERVER>/server/default/deploy and enter the following XML into it.

<mbean code="org.jboss.mq.server.jmx.Topic"
       name="jboss.mq.destination:service=Topic,name=topicA">
  <depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
</mbean>

After that you can start the JBoss application server. From the console log you can verify that topicA was created.

Now create the sample project in the IDE you prefer, and make sure to add the jars from the client and lib directories of the JBoss application server to the project's classpath. Then create the two sample programs as follows. First, the TopicSubscriber sample program:

package simple;

import java.util.Properties;

import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TopicSubscriber {

    private String topicName = "topic/topicA";

    private boolean messageReceived = false;

    private static Context mContext = null;
    private static TopicConnectionFactory mTopicConnectionFactory = null;
    private TopicConnection topicConnection = null;

    public static void main(String[] args) {
        TopicSubscriber subscriber = new TopicSubscriber();
        subscriber.subscribeWithTopicLookup();
    }

    public void subscribeWithTopicLookup() {

        Properties properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        properties.put(Context.PROVIDER_URL, "jnp://localhost:1099");
        properties.put("topic." + topicName, topicName);

        try {
            mContext = new InitialContext(properties);
            mTopicConnectionFactory = (TopicConnectionFactory) mContext.lookup("ConnectionFactory");

            topicConnection = mTopicConnectionFactory.createTopicConnection();

            System.out.println("Create Topic Connection for Topic " + topicName);

            TopicSession topicSession =
                    topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

            Topic topic = (Topic) mContext.lookup(topicName);

            // create a topic subscriber and register the message listener
            javax.jms.TopicSubscriber topicSubscriber = topicSession.createSubscriber(topic);
            topicSubscriber.setMessageListener(new TestMessageListener());

            // start the connection so that messages are delivered
            topicConnection.start();

            // wait until the listener receives a message
            while (!messageReceived) {
                Thread.sleep(1000);
            }
        } catch (NamingException e) {
            throw new RuntimeException("Error in initial context lookup", e);
        } catch (JMSException e) {
            throw new RuntimeException("Error in JMS operations", e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            if (topicConnection != null) {
                try {
                    topicConnection.close();
                } catch (JMSException e) {
                    throw new RuntimeException("Error in closing topic connection", e);
                }
            }
        }
    }

    public class TestMessageListener implements MessageListener {
        public void onMessage(Message message) {
            try {
                System.out.println("Got the Message : "
                        + ((TextMessage) message).getText());
                messageReceived = true;
            } catch (JMSException e) {
                throw new RuntimeException("Error reading message", e);
            }
        }
    }
}
Next, the TopicPublisher sample program:

package simple;

import java.util.Properties;

import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TopicPublisher {

    private String topicName = "topic/topicA";

    private static Context mContext = null;
    private static TopicConnectionFactory mTopicConnectionFactory = null;

    public static void main(String[] args) {
        TopicPublisher publisher = new TopicPublisher();
        publisher.publishWithTopicLookup();
    }

    public void publishWithTopicLookup() {
        Properties properties = new Properties();
        TopicConnection topicConnection = null;
        properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        properties.put(Context.PROVIDER_URL, "jnp://localhost:1099");
        properties.put("topic." + topicName, topicName);

        try {
            mContext = new InitialContext(properties);
            mTopicConnectionFactory = (TopicConnectionFactory) mContext.lookup("ConnectionFactory");
            topicConnection = mTopicConnectionFactory.createTopicConnection();

            TopicSession topicSession =
                    topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

            // look up the topic
            System.out.println("Use the Topic " + topicName);
            Topic topic = (Topic) mContext.lookup(topicName);

            javax.jms.TopicPublisher topicPublisher = topicSession.createPublisher(topic);

            String msg = "Hi, I am Test Message";
            TextMessage textMessage = topicSession.createTextMessage(msg);

            System.out.println("Publishing message " + textMessage);
            topicPublisher.publish(textMessage);
        } catch (JMSException e) {
            throw new RuntimeException("Error in JMS operations", e);
        } catch (NamingException e) {
            throw new RuntimeException("Error in initial context lookup", e);
        } finally {
            if (topicConnection != null) {
                try {
                    topicConnection.close();
                } catch (JMSException e) {
                    throw new RuntimeException("Error in closing topic connection", e);
                }
            }
        }
    }
}


Note that PROVIDER_URL stands for the JNDI property "java.naming.provider.url" and INITIAL_CONTEXT_FACTORY for "java.naming.factory.initial"; you can use either the constants or the literal property names.
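As a minimal sketch of that equivalence (the class and method names here are mine, not from the original post), the same JNDI environment can be built with the literal property names instead of the Context constants:

```java
import java.util.Properties;
import javax.naming.Context;

public class JndiProps {

    // build the same JNDI environment as the samples above,
    // but using the literal property names instead of the constants
    public static Properties buildEnv() {
        Properties properties = new Properties();
        properties.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        properties.put("java.naming.provider.url", "jnp://localhost:1099");
        return properties;
    }

    public static void main(String[] args) {
        // the javax.naming.Context constants resolve to exactly those strings
        System.out.println(Context.INITIAL_CONTEXT_FACTORY); // java.naming.factory.initial
        System.out.println(Context.PROVIDER_URL);            // java.naming.provider.url
    }
}
```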

First run TopicSubscriber and then run TopicPublisher. Here is the output of each.

Create Topic Connection for Topic topic/topicA
Got the Message : Hi, I am Test Message

Use the Topic topic/topicA
Publishing message SpyTextMessage {
Header {
   jmsDestination  : TOPIC.topicA
   jmsDeliveryMode : 2
   jmsExpiration   : 0
   jmsPriority     : 4
   jmsMessageID    : ID:2-13977171929621
   jmsTimeStamp    : 1397717192962
   jmsCorrelationID: null
   jmsReplyTo      : null
   jmsType         : null
   jmsRedelivered  : false
   jmsProperties   : {}
   jmsPropReadWrite: true
   msgReadOnly     : false
   producerClientId: ID:2
}
Body {
   text            :Hi, I am Test Message
}
}


Nirmal Fernando: Apache Stratos as a single distribution

In Apache Stratos, we have recently worked on merging 3 of our main products (Stratos Manager, Cloud Controller and Auto-scaler), along with WSO2 CEP (Stratos uses WSO2 CEP as its complex event processing engine), into a single distribution using the power of the WSO2 Carbon Framework. By doing so, we expect to gain the following main benefits:

1. Reduce the barrier for a new developer getting started with Stratos.

2. Make Stratos developers' lives easier by reducing the number of JVMs.

3. Allow deploying a distributed set-up using a single distribution in a production environment.

Earlier, in order to run Stratos, one needed to configure the 3 Stratos products mentioned above, in addition to a message broker (Stratos uses a message broker as its main inter-component communication channel), WSO2 CEP and WSO2 BAM (if you need monitoring capability). This resulted in six JVMs (assuming the MB is also a JVM process) and consumed a considerable amount of your machine's memory. Hence, a lot of new blood would rather give up even without thinking of starting Stratos.

With the introduction of this single distribution, as a developer you can get started with Stratos with only two JVMs, namely the Stratos JVM and the MB (assuming it's a JVM), which in turn will help us attract more people to the project.

Reducing the number of JVMs makes it easier to check logs and debug, and makes your life way easier as a contributor to Stratos.

Further, you can use this single Stratos distribution to start Stratos Manager, Cloud Controller and Auto-scaler in 3 separate JVMs, which is useful in a real production deployment. In this case, of course, you need to deploy WSO2 CEP and WSO2 BAM separately, in addition to a message broker.

Other than this, a single-JVM Stratos deployment is also capable of writing the data published by Stratos into a file (repository/logs/aggregate.log), so that you do not need an external business activity monitor in order to have a developer environment.

Try it out by following our wiki documentation here.

Nirmal Fernando: Building your own PaaS using Apache Stratos (Incubator) PaaS Framework - 2

This is a continuation of this post, where I explained the basic steps you need to follow in order to build your own PaaS using the Apache Stratos PaaS Framework. There, I explained the very first step that you would perform on the PaaS Framework using our REST API. In this post, I am going to explain how to perform the same step via the Apache Stratos UI.

1. You need to access the Stratos Manager console via the URL that can be found once the set-up is done, e.g. https://{SM-IP}:{SM-PORT}/console

Here you need to log in to the Stratos Manager as super-admin (user name: admin, password: admin).

2. Once you have logged in as super-admin, you will be redirected to the My Cartridges page of the Stratos UI. This page shows the Cartridge subscriptions you have made. Since we have not made any subscriptions yet, we see a page like the one below.

3. Navigate to the 'Configure Stratos' tab.

This page is the main entry point to configure the Apache Stratos PaaS Framework. We have implemented a Configuration Wizard which will walk you through a set of well-defined steps and ultimately help you to configure Stratos.

4. Click the 'Take the configuration wizard' button to begin the wizard.

The first step of the wizard is Partition Deployment, which is the intention of this blog post, if you can recall. We have provided a sample JSON file too, in the right hand corner, to get you started quickly.

5. You can copy the sample Partition JSON file I used in post 1 and paste it into the 'Partition Configuration' text box. The text box has built-in validation of the JSON format, so you cannot proceed by providing invalid JSON.

6. Once you have correctly pasted your Partition json, you can click 'Next' to proceed to the next step of the configuration wizard.

Once you have clicked 'Next', Stratos will validate your Partition configuration and then deploy it, if it is valid. You will see a message on top with a yellow background if it is successful; in case your Partition is not valid, you will see the error message on a red background.

That's it for now. If you would like to explore more, please check out our documentation. See you in the next post.

Sivajothy Vanjikumaran: How to list the admin services in WSO2 Carbon products

If you want to find out the admin services available in a WSO2 Carbon product, you can do so in two simple steps.

Step 1
Start the WSO2 server with the OSGi console enabled.

Step 2
Once the server has started, type the "listAdminServices" command in the OSGi console to list the admin services.

Manoj Kumara: Carbon 5.0.0 [C5] Milestone 03 - Architecture

Carbon 5 [C5] is the next generation of the WSO2 Carbon platform. The existing Carbon platform has served as a modular middleware platform for more than 5 years now. We've built many different products and solutions based on this platform. All the previous major releases of Carbon shared the same high-level architecture, even though we've changed certain things from time to time.

The base architecture of Carbon is modeled on Apache Axis2's kernel architecture. Apache Axis2 is a Web services engine, but it also introduced a rich, extensible server framework with a configuration and runtime model, a deployment engine, a clustering API and an implementation, etc. We extended this architecture and built an OSGi-based modular server development framework called Carbon Kernel. It is tightly coupled with Apache Axis2. But now Apache Axis2 is becoming a dead project; we don't see enough active development on the trunk. Therefore we thought of getting rid of this tight coupling to Apache Axis2.

The Carbon kernel has gained weight over time, and there are many unwanted modules in it. When there are more modules, the rate of patching, and thus of patch releases, increases. This is why we had to release many patch releases of the Carbon kernel in the past. This can become a maintenance nightmare for developers as well as for users. We need to minimize Carbon kernel releases.

The other reason for C5 is to make Carbon kernel a general purpose OSGi runtime, specialized in hosting servers. We will implement the bare minimal features required for server developers in the Carbon kernel.

Our primary goal of C5 is to re-architect the Carbon platform from the ground up with the latest technologies and patterns to overcome the existing architectural limitations as well as to get rid of the dependencies to the legacy technologies like Apache Axis2. We need to build a next generation middleware platform that will last for the next 10 years.

The current WSO2 Carbon architecture consists of the following components:

  • Artifact Deployment Engine 
  • Centralized Logging 
  • Pluggable Runtime Framework 
  • Clustering Framework 
  • Configuration and Context Model (Experimental) 
  • Hierarchical Tenancy Model (Experimental) 
The diagram below depicts the architecture with its components.

  • Artifact Deployment Engine

    The Carbon Artifact Deployment Framework is responsible for managing the deployment and undeployment of artifacts in a Carbon server. You can find the internal design architecture of this module, and how to implement a new deployer and plug it into the Artifact Deployment Framework, here.
  • Centralized Logging

    The Carbon Logging Framework integrates Pax Logging, a well-known open source project, to implement a strong logging backend inside the Carbon server. You can find more about the supported logging APIs and configuration information here.
  • Pluggable Runtime Framework

    The Carbon Pluggable Runtime Framework is responsible for handling and managing integrated 3rd-party runtimes on the Carbon server. You can find the internal design architecture of this module, and how to plug a new 3rd-party runtime into the Pluggable Runtime Framework, here.
  • Clustering Framework

    The Clustering Framework provides the clustering feature for the Carbon kernel, which adds support for High Availability, Scalability and Failover. The overall architecture of the clustering implementation in a single node, and implementation details, can be found here.
  • Configuration and Context

    Carbon Configuration and Context models the CarbonConfiguration and allows CarbonRuntime implementations to retrieve the configuration instance. CarbonRuntime represents the complete server space, and CarbonContext is the entity that provides Carbon runtime related contextual information for the currently executing thread.
  • Hierarchical Tenancy Model

    The Carbon Hierarchical Tenancy Model will create an isolated OSGi region for each individual tenant. Each and every tenant deployed in a single JVM will get its own OSGi region. This ensures class-space, bundle and OSGi-service-level isolation for tenants, and also provides application-level isolation.

    You can try out our Milestone 03 release here.

    How To Contribute

    You can find more instructions on how to contribute on our documentation site. If you have any suggestions or are interested in C5 discussions, please reach out via our mailing lists.

Nirmal Fernando: Adding support for a new IaaS provider in Apache Stratos

I have recently added a new wiki page to the Apache Stratos documentation on providing support for a new IaaS provider. You can check it here.

A prerequisite for this is a good knowledge of the IaaS you are going to support, as well as some basic understanding of the corresponding Apache jclouds APIs.

Manoj Kumara: How to invoke a secured API using HttpClient

Today I faced difficulty when trying to invoke a published API on WSO2 API Manager using the https protocol. The reason was that I hadn't set the truststore definition in my HttpClient before invoking. So after some effort and Googling I wrote a sample client project, and I believe it will be useful to others as well.

So better publish it :)

To set up the sample to test this client, you can use the Deploying and Testing YouTube API sample in our API Manager docs.

In the client code you need to update the Authorization Bearer token and the URL according to your published API. In this sample I have used client-trust.jks inside my client, but it is always recommended that the client generate his own keystore, export the public key to client-trust.jks, and then invoke the API using his own keystore.
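As a sketch of the idea (not the exact project shared above), the snippet below shows the two things the client must do: point the JVM at a truststore that contains the server's certificate, and send the Authorization Bearer header. The endpoint URL, token placeholder and truststore password are illustrative assumptions, and plain JDK HttpURLConnection is used here instead of Apache HttpClient:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class SecureApiClient {

    // point the JVM at the truststore holding the server's certificate;
    // the path and password here are illustrative
    static void configureTrustStore(String path, String password) {
        System.setProperty("javax.net.ssl.trustStore", path);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    // prepares a GET request carrying the OAuth bearer token;
    // nothing is sent over the wire until the response is read
    static HttpURLConnection openAuthorizedConnection(String endpoint, String accessToken)
            throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(endpoint).openConnection();
        connection.setRequestMethod("GET");
        connection.setRequestProperty("Authorization", "Bearer " + accessToken);
        return connection;
    }

    public static void main(String[] args) throws Exception {
        configureTrustStore("client-trust.jks", "wso2carbon");
        // hypothetical gateway URL and token; replace with your published API's values
        HttpURLConnection connection = openAuthorizedConnection(
                "https://localhost:8243/youtube/1.0.0/most_popular", "<your-access-token>");
        System.out.println("HTTP " + connection.getResponseCode());
    }
}
```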

You can download the sample client maven project here.

Manoj Kumara: How to set a MySQL data-source on a WSO2 product

Today I'm going to explain how to set up a MySQL datasource in a WSO2 product. You can do this in a few simple steps.

  1. Create a new database in your MySQL server, say 'userstore'
      • CREATE DATABASE userstore;
  2. Copy  your mysql connector jar (ex: mysql-connector-java-5.1.12-bin.jar) to PRODUCT_HOME/repository/components/lib directory.
  3. Now open the master-datasources.xml file at PRODUCT_HOME/repository/conf/datasources
  4. Replace the datasource tag with the following,
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/userstore</url>
            <username>your-username</username>
            <password>your-password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>

Set your MySQL url, username and password accordingly.

Once you start your server with sh ./ -Dsetup, all the required scripts will be run and your database will be updated.

Now you are ready to go :)

Chathurika Mahaarachchi: How the Message Translator EIP Works

If you need to connect two applications, the most important point is that both applications should use a common data type compatible with each other. To facilitate this, some intermediary translation should happen between the sender and receiver. The translator in between should change the content of the message from one data type to another.

This blog post explains how the ESB Message Translator EIP works.

To explain this EIP pattern we use the echo service hosted in an application server, connected to an ESB proxy service. Here the sender sends the request in one format, and the translator transforms the message to be compatible with the back-end service. We used the payload factory mediator with media type "xml" to format the message coming into the in-sequence so that it is compatible with the back-end service hosted in the application server. The echo service deployed in the application server then receives a message compatible with it.
Now let's see how to implement this pattern.

1. Create a proxy service in ESB as follows.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="MessageTranslatorProxy" startOnLoad="true">
   <!-- the proxy name above is illustrative; the stripped namespaces are restored -->
   <target>
      <inSequence>
         <log level="full"/>
         <payloadFactory media-type="xml">
            <format>
               <p:echoString xmlns:p="http://echo.services.core.carbon.wso2.org">
                  <in>$1</in>
               </p:echoString>
            </format>
            <args>
               <arg evaluator="xml" expression="//in"/>
            </args>
         </payloadFactory>
         <log level="full"/>
         <send>
            <endpoint>
               <address uri="http://localhost:9764/services/echo"/>
            </endpoint>
         </send>
      </inSequence>
      <faultSequence>
         <log level="full">
            <property name="MESSAGE" value="Executing default &#34;fault&#34; sequence"/>
            <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
            <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
         </log>
      </faultSequence>
   </target>
</proxy>

2. Open the "SOAP UI"/"Try it" client and send the message below to the ESB proxy.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:p="http://echo.services.core.carbon.wso2.org">
   <soapenv:Body>
      <in>testString</in>
   </soapenv:Body>
</soapenv:Envelope>

3. After sending the request, the payload factory mediator in the ESB proxy converts the request to be compatible with the receiver.
The message format compatible with the receiver is as follows:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <p:echoString xmlns:p="http://echo.services.core.carbon.wso2.org">
         <in xmlns="">testString</in>
      </p:echoString>
   </soapenv:Body>
</soapenv:Envelope>

Pushpalanka Jayawardhana: Signing SOAP Messages - Generation of Enveloped XML Signatures

Digital signing is a widely used mechanism to make digital content authentic. By producing a digital signature for some content, we enable another party to validate that content; the validation provides a guarantee that the content has not been altered since we signed it. In this sample I share how to generate a signature for a SOAP envelope, but of course this is valid for any other content signing as well.

Here, I will:
  • sign the SOAP envelope itself
  • sign an attachment
  • place the signature inside the SOAP header
Since the signature is placed inside the SOAP header, which is itself covered by the signature, this becomes a demonstration of an enveloped signature.

I am using the Apache Santuario library for signing. Following is the code segment I used. I have shared the complete sample here to be downloaded.

public static void main(String unused[]) throws Exception {

        org.apache.xml.security.Init.init();

        String keystoreType = "JKS";
        String keystoreFile = "src/main/resources/PushpalankaKeystore.jks";
        String keystorePass = "pushpalanka";
        String privateKeyAlias = "pushpalanka";
        String privateKeyPass = "pushpalanka";
        String certificateAlias = "pushpalanka";
        File signatureFile = new File("src/main/resources/signature.xml");
        String BaseURI = signatureFile.toURI().toURL().toString();
        //SOAP envelope to be signed
        File attachmentFile = new File("src/main/resources/sample.xml");

        //get the private key used to sign, from the keystore
        KeyStore ks = KeyStore.getInstance(keystoreType);
        FileInputStream fis = new FileInputStream(keystoreFile);
        ks.load(fis, keystorePass.toCharArray());
        PrivateKey privateKey =
                (PrivateKey) ks.getKey(privateKeyAlias, privateKeyPass.toCharArray());

        //create basic structure of signature
        DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
        dbFactory.setNamespaceAware(true);
        DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
        Document doc = dBuilder.parse(attachmentFile);
        XMLSignature sig =
                new XMLSignature(doc, BaseURI, XMLSignature.ALGO_ID_SIGNATURE_RSA_SHA1);

        //place the signature element inside the document
        Element element = doc.getDocumentElement();
        element.appendChild(sig.getElement());

        //an enveloped transform, so the signature itself is excluded from the digest
        Transforms transforms = new Transforms(doc);
        transforms.addTransform(Transforms.TRANSFORM_ENVELOPED_SIGNATURE);

        //Sign the content of the SOAP Envelope
        sig.addDocument("", transforms, Constants.ALGO_ID_DIGEST_SHA1);

        //Adding the attachment to be signed
        sig.addDocument("../resources/attachment.xml", null, Constants.ALGO_ID_DIGEST_SHA1);

        //Signing procedure
        X509Certificate cert =
                (X509Certificate) ks.getCertificate(certificateAlias);
        sig.addKeyInfo(cert);
        sig.addKeyInfo(cert.getPublicKey());
        sig.sign(privateKey);

        //write the signed document to file
        FileOutputStream f = new FileOutputStream(signatureFile);
        XMLUtils.outputDOMc14nWithComments(doc, f);
        f.close();
}

First it reads in the private key to be used in signing. To create a key pair of your own, this post will be helpful. Then it creates the signature and adds the SOAP message and the attachment as the documents to be signed. Finally it performs the signing and writes the signed document to a file.

The signed SOAP message looks as follows.

<soap:Envelope xmlns:dsig="" xmlns:pj=""
        <pj:MessageHeader pj:version="1.0" soap:mustUnderstand="1">
                <pj:PartyId pj:type="ABCDE">FUN</pj:PartyId>
                <pj:PartyId pj:type="ABCDE">PARTY</pj:PartyId>
            <pj:ConversationId>FUN PARTY FUN 59c64t0087fg3kfs000003n9</pj:ConversationId>
                <pj:MessageId>FUN 59c64t0087fg3kfs000003n9</pj:MessageId>
        <pj:Via pj:id="59c64t0087fg3ki6000003na" pj:syncReply="False" pj:version="1.0"
                soap:actor="" soap:mustUnderstand="1">
        <ds:Signature xmlns:ds="">
                <ds:SignatureMethod Algorithm=""></ds:SignatureMethod>
                <ds:Reference URI="">
                    <ds:DigestMethod Algorithm=""></ds:DigestMethod>
                <ds:Reference URI="../resources/attachment.xml">
                        <ds:Transform Algorithm=""></ds:Transform>
                    <ds:DigestMethod Algorithm=""></ds:DigestMethod>
            <ds:SignatureValue>d0hBQLIvZ4fwUZlrsDLDZojvwK2DVaznrvSoA/JTjnS7XZ5oMplN9  THX4xzZap3+WhXwI2xMr3GKO................x7u+PQz1UepcbKY3BsO8jB3dxWN6r+F4qTyWa+xwOFxqLj546WX35f8zT4GLdiJI5oiYeo1YPLFFqTrwg==
   <ds:X509Certificate>                MIIDjTCCAnWgAwIBAgIEeotzFjANBgkqhkiG9w0BAQsFADB3MQswCQYDVQQGEwJMSzEQMA4GA1UE...............qXfD/eY+XeIDyMQocRqTpcJIm8OneZ8vbMNQrxsRInxq+DsG+C92b
        <pr:GetPriceResponse xmlns:pr="">

In the next post we will see how to verify this signature, so that we can guarantee signed documents are not changed (in other words, guarantee that the integrity of the content is preserved).


Chamila Wijayarathna: Adding Auto Completing Text Fields to WSO2 Enterprise Store Publisher

In my last post, I wrote about adding client-side validation to the Enterprise Store, which increases the quality of the user experience. In this blog, I am going to write about adding auto-completing text fields, which is another way to improve the user experience.
You may have seen that the Enterprise Store contains an auto-completing tag field by default. We can take advantage of the tools that have been used there and add more auto-completing fields with a little customization.

First, let's look at how auto completion has been added to 'Tag' field to get an idea.

Here, the Enterprise Store uses the token-input jQuery library[1]. In the <ES_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/default/js directory, we can see a folder named token.input which contains the files related to the library. The <ES_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/default/js/common/form-plugins/tag-plugin.js file defines a plugin called TagPlugin which calls this library. That file contains 3 functions: init, getData and fetchInitTags. In the add-asset.hbs and edit-asset.hbs files, this plugin is attached to the 'tag' field.

<div class="control-group">
  <b><label class="control-label">Technologies: </label></b>
  <div class="controls">
    <input type='text' id='tag-container' class='fm-managed' name='tag-container' data-field-name='tags' data-use-plugins="TagPlugin" data-tag-api='/publisher/api/tag/'>
  </div>
</div>

Because of this, when the add asset or edit asset page loads, it executes TagPlugin as well.
The TagPlugin.init function attaches a URL to which a request is sent to load the set of suggestions shown in auto completion. It then calls the fetchInitTags function, which sends an HTTP GET request to the attached URL, receives the set of tags as a JSON object, parses it, saves the tags into an array, and then loads that array into the 'tags' input box using the token.input library. So it does not send a request and receive data per keystroke; it connects to the server and receives the data only at page load time.

TagPlugin.prototype.init = function (element) {
        //TODO: Replace where we get the asset type from
        var type = $('#meta-asset-type').val();
        this.tagUrl = element.meta.tagApi + type;
        this.tagContainer = '#' + element.id;
        if (!this.tagUrl) {
            console.log('Unable to locate tag api url');
            return;
        }
        fetchInitTags(this.tagUrl, this.tagContainer);
};

var fetchInitTags = function (tagUrl, tagContainer) {
        //Obtain all of the tags for the given asset type
        $.ajax({
            url: tagUrl,
            type: 'GET',
            success: function (response) {
                var tags = JSON.parse(response);
                $(tagContainer).tokenInput(tags, {
                    theme: DEFAULT_TAG_THEME,
                    allowFreeTagging: ALLOW_FREE_TAG
                });
            },
            error: function () {
                console.log('unable to fetch tag cloud for ' + tagUrl);
            }
        });
};

Now let's see how the tags are fetched when the HTTP GET request is sent. When loading tags for the ebooks asset type, for example, the URL attached will be /publisher/api/tag/ebooks/. In the jaggery.conf file of the publisher, this URL is mapped to a file, so when a request is sent to this URL, that file will be executed.


So this will call the 'apis/v1/tags_api_router.jag' file, which has the logic to be executed when a request is sent to this URL.

routeManager.register('GET', 'publisher', '/publisher/api/tag/{type}', function (context) {

    var type = context.params.type;

    var tags = rxtManager.registry.query(TAG_QUERY);

    log.debug('tags retrieved: ' + tags);

    var tagManager = new tagModule.TagManager();

    tags = tagManager.get(type);

    print(tags); //send the tags back as the response
});


Now let's see how we can add auto completion to a new field of our own. Let's add auto completion to the Product field of the Presentation asset which we considered in my last blog.
First we need to attach the tag plugin to the product field at page load time. But in add-asset.hbs (or edit-asset.hbs) we can't find a separate entry for the product field; all text fields are declared together. So we have to attach the tag plugin to all text fields.

<input id='{{}}' name='{{}}' type='text' value='{{this.value}}'  class="span8 fm-managed"  data-use-plugins='RequiredField,TextFieldValueExtractor,AssetStoreValidationField,TagPlugin' data-ready-only='{{this.isReadOnly}}' data-required='{{this.isRequired}}' data-tag-api='/publisher/api/asset-tag/'/> 

We can add attributes like data-tag-api if required. Now the plugin will be attached to all text fields, but we only need it on the 'product' field. We can limit it to the product field in tag-plugin.js.

TagPlugin.prototype.init = function (element) {
        //TODO: Replace where we get the asset type from
        var val = $('#' + element.id);
        var type = $('#meta-asset-type').val();
        var limit;
        if (element.id == 'tag-container') {
            this.tagUrl = element.meta.tagApi + type;
            limit = null;
        } else if (element.id == 'overview_productname') {
            this.tagUrl = element.meta.tagApi + 'products';
            limit = 1;
        }
        this.tagContainer = '#' + element.id;

        if (!this.tagUrl) {
            console.log('Unable to locate tag api url');
            return;
        }
        fetchInitTags(this.tagUrl, this.tagContainer, limit);
};

So other fields will not be attached to the plugin. We also need to change the getData method in the same manner.

TagPlugin.prototype.getData = function (element) {
        var data = {};
        var tags = [];
        var selectedTags;
        if (element.id == 'tag-container' || element.id == 'overview_productname') {
            selectedTags = $(this.tagContainer).tokenInput('get');

            for (var index in selectedTags) {
                tags.push(selectedTags[index].name);
            }

            data[element.name] = tags;
        }
        return data;
};

Now the 'product' field will be attached to the plugin, but it will get a '404 Not Found' response for the HTTP request. So we need to write an API that returns the required values and bind it to the 'publisher/api/asset-tag/products' URL.
First let's create the /apis/v1/asset_tags_api_router.jag file and write the logic to load the data and return a JSON array of it.

routeManager.register('GET', 'publisher', '/publisher/api/asset-tag/{type}', function (context) {
    var type = context.params.type;

    var tags = [];
    if (context.params.type == 'products') {
        var fileread = require('fileread');
        var list = fileread.getList();
        tags = list.split('\n');
    }

    var counter = 0;
    var tagArray = [];

    for (var index in tags) {
        tagArray.push({id: counter, name: tags[index]});
        counter++;
    }

    print(tagArray); //send the tag array back as the response
});

Finally we have to map this file to the above URL in the jaggery.conf file.
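The mapping follows jaggery.conf's urlMappings convention; the url pattern and path below mirror the file created above, but treat the exact fragment as an assumption, since the original snippet was not preserved:

```json
{
    "urlMappings": [
        {
            "url": "/api/asset-tag/*",
            "path": "/apis/v1/asset_tags_api_router.jag"
        }
    ]
}
```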


Now we will get auto-completion support in our 'product' field.


John Mathon: The Enterprise Store – App, API, Mobile

It started with Apple

It seems like it all started about 15 years ago, when Apple introduced the iPhone. It feels that way, anyway. The date was actually 2007, a mere 6 years ago plus a few months. Since then the world has changed so dramatically that it is almost impossible to imagine the world before, and to realize that it was a mere 6 years ago that life was so different.

People are now spending 3 hours a day on their smartphones, and 84% of that time using apps, mobile apps.


The Emergence of the Mobile App Store

There are many things I could reference that have changed, but I am focusing on one relatively invisible change that is incredibly significant. By introducing the App Store, Apple created a central place people could go to find cool functionality for these new devices. We have always done this, i.e. offered add-ons, but Apple, through the App Store, made it possible for us to see comments from others who had used the app, ratings, and pictures of the apps in operation. They gave us (like Amazon did) a way to evaluate and choose to use the app. They gave us (like Amazon) information that we trusted because it came from other people like us. It gave us transparency, which made us able to purchase.

There were many factors contributing to the app store's success. The low price of the apps and the fact Apple enabled third parties to build apps created a massive community of people willing to spend an inordinate amount of time building these apps on the off-chance they might become rich. There was even a social reference to this in Big Bang Theory just 4 years later, in 2011.

The App Store was also popular because it made it easy to track our applications, update them and delete them. In one fell swoop this solved a problem we had struggled with in the enterprise for many years: the proliferation and management of applications on desktops. Even though desktops were infinitely more powerful than a cell phone and had more storage, apps had not taken off with nearly the rapidity, or the incredible capitalistic rush to profits, seen with the App Store – because there was no App Store. Do you see what I mean? There were 600,000 apps in the mobile app store within a few years, and people had a hundred apps on their phones when they had maybe a half dozen on their desktops. This was transformative, and its full significance has not been appreciated.

There was a synergistic element to this success that is at the heart of how Apple succeeded in the past. They built the App Store, but also the phone to house the apps; they made everything cheap and easy to manage and kept the quality high so people didn't have bad experiences, and the market exploded. Does Tesla look like a company doing the same thing in the car space? I think so. See my Tesla article for more info.

The Emergence of the API Store

When mobile apps became popular they needed APIs to work. Almost all apps had to phone home (an API), and the way they did this was simple, because many developers were not Enterprise Architects and preferred simple. (Not a denigration of Enterprise Architects or mobile developers... it's just the facts, ma'am.) So they used HTTP (REST) to talk to simple servers they built to store data and access other information and services. The growth of APIs was also fueled by the demand on social sites for data and by the drive to leverage information in the cloud.

What has emerged over the last 6 years is an even MORE powerful trend to APIs in enterprises. Salesforce, Netflix, Stubhub, Twitter, Facebook, Google and others found they could make BILLIONS from APIs. Not $0.99 like apps – the revenues were enormous – and so companies all over the world and entrepreneurs (similar to the wave of mobile app building) started building APIs. Since the emergence of this megatrend the movement to REST has solidified, and rationale has been provided in numerous ways for the simple way to talk to services rather than the more complex way services were defined in the past. That has also led to more proliferation of services (or APIs).

The number of published APIs is far less than mobile apps, but the movement is facilitated by the social community which builds around the API. Companies have created API stores where, if you want to use an API, you go to the store in question and "subscribe" to the API along a usage tier which charges you according to your QOS. The subscription allows the API provider to track usage and billing by application. It is part of the security of the app, but the app almost always has to provide credentials like any user of any service. An API is simply another term for "service." More specifically it refers to the description of the service; the service behind the description is simply the service.
There are now tens of thousands of APIs, and the number is growing faster every year. Every company I am talking to is building APIs for external consumption of their services, by mobile apps or by other companies, for integration into enterprise applications. This is driving a sea change in Enterprise Architecture, as companies now look to refactor their enterprises along API lines. Realizing the value of services has companies realizing they can build their own products out of services inside the company. This is leading to a spinoff megatrend, with businesses doing what I call

Enterprise Refactoring.

Enterprise refactoring is a second phase of SOA, or replaces SOA, depending on your point of view. Service-oriented architecture tried to get enterprises to build software as services that could be reused to build other services or applications in the enterprise. The idea of reuse was a powerful motivation for SOA. However, since the description of services in SOA was usually a technical description of the parameters and the place to find the service, rather than the "transparency" around the service (i.e. pictures, references from other users of the service, ease-of-use tips), few people ever reused services in the SOA era compared to what was imagined.

So what?

I’ve now shown you 2 examples of megatrends that have been creating massive disruption in our world. In personal life and business we are seeing massive consequences from Apple’s initial introduction of this device and the App Store. We are seeing operating systems becoming more like the store experience for desktops: Ubuntu, Mac Mavericks and Microsoft have already incorporated the store concept in their desktop operating systems. The growth of tablet and mobile devices, along with the proliferation of mobile interfaces, will lead to the complete domination of the store concept for managing applications. Similarly for APIs, the store is a center for community, and to help get developer interest and experience, companies are rapidly trying to build out the social aspects of their API stores.

This tidal wave of change and productivity has not hit enterprises in terms of how they operate internally, because the enterprise has to take things sequentially:

1. Build APIs
2. Develop a community for these APIs inside and outside the company
3. Refactor existing enterprise applications into services and mobile applications
4. Become successful with new services and mobile apps
5. Deal with the consequences of this success

Most companies are somewhere between steps 1 and 4, and have not yet had the problem of a large number of APIs, mobile applications or applications in general that they would like to manage better (i.e. step 5 above). So most companies are so focused on doing steps 1-4 that they aren’t thinking about what comes AFTER they build all these applications and APIs and need to manage all this new technology.

The birth of the Enterprise Store


Well, the obvious thing is a combined Enterprise App, API and Mobile store that allows you to see all of a company's applications, APIs and mobile apps in one place. There are several reasons to believe that consolidating enterprise APIs, mobile apps and enterprise apps in one store is transformative:

1. Easier for the end user: Apps, APIs and mobile apps are all related. Someone may want an app, a mobile app and an API for your service. Making them go to 3 different places and use 3 different means to leverage these is stupid and hard.

2. Security: For enterprise IT, having all enterprise assets in one store makes it easier to manage which users and roles can see which APIs and apps. You can imagine that certain people in your company can see certain sensitive APIs and apps, and some people inside the company can see more. Some partners can see certain APIs and apps and get certain service level agreements, while customers, outside developers and other parties may get a different view. This gives IT control over the availability of these enterprise assets, and over who gets them, in a better, more transparent way.


3. Manageability: These things are related. An app depends on APIs, and mobile apps may depend on apps and APIs. If one is down it affects the others; if one API has a performance problem it may affect lots of different things. Having a single place to see the relationships between entities, their SLAs, and their respective performance and usage makes managing all these things easier. It also means that if you change an entity X, you can find the subscribers to it (whether a mobile app, API or enterprise app) and notify them of outages, changes in APIs, changes in service, or more. Like the mobile app store, this makes it easier to manage large numbers of APIs, apps and mobile apps, both for the consumer and for the enterprise having to manage those entities.

4. Intelligence: A single store provides a single place to find information about usage of an enterprise entity and to collect bigdata, so that you can see how your APIs, apps and mobile apps are being used, and by whom. Relating the usage of those things to each other lets you estimate growth more easily and see where you may need to increase or change service for poorly performing or underused services. It enables you to understand how different apps are affecting which APIs and other enterprise assets, and possibly improve service more rapidly.

5. Lower cost: Centralizing management and administration, simplifying subscription and communication, and being able to see how to share resources better all reduce cost.

6. Foster adoption: A store for these enterprise assets that is social, and that lets people transparently see others' comments, tips, best practices and plans for the future in one easy place, will create the holistic experience that drives adoption and reuse.

7. Innovation: By providing a place for people to see all the enterprise assets, they can see how to improve them and how to communicate their ideas for improvement. They may see that they can benefit by improving something, by suggesting something, by doing something.

So you may see, as I do, that eventually enterprises will want to do this. As they get their heads above water and realize they've built services, built mobile apps, and redesigned their enterprise applications, they will want to manage all of this better. An integrated, combined Enterprise Store that lets you start down this path is available today: WSO2 Enterprise Store is the only combined enterprise store on the market. If I'm right, within the next few years we will see enterprises taking up this idea, and we will see more enterprise stores. Gartner projects that within 4 years 25% of enterprises will have an enterprise store. Today WSO2 Enterprise Store is the only available product, open source or proprietary, that addresses this vision. This shows me that WSO2 is not just producing high-quality middleware and technology to replace existing SOA stack components, but also leading-edge, enterprise-quality technology for API management, bigdata, social and cloud, to help you manage it all better than any open source or proprietary software company out there.

For further information, please subscribe to my twitter feed @john_mathon and read the following articles:


CIO: Enterprise Refactoring and Technology Renewal: How do you become a profit center to the Enterprise?

API centric – The evolution of reuse and API Management

David Bressler about Enterprise App Store - it looks like he’s on the right track

Enterprise 3.0 - I am giving a talk in Rome on this topic in June

Enterprise Refactoring - The new movement in enterprises

Introducing WSO2 Enterprise Asset Store


Building an Enterprise App Store with WSO2 Gadget Server

Infosys whitepaper on Enterprise Stores


Udara Liyanage: Creating a cartridge for Apache Stratos

Apache Stratos is a polyglot PaaS framework, providing developers a cloud-based environment for developing, testing, and running scalable applications. You can plug any service such as PHP, Tomcat, Ruby, MySQL, etc. as a cartridge at runtime. This tutorial will give you a step-by-step approach on how to create a PHP cartridge. We use OpenStack as the underlying Infrastructure as a Service (IaaS). The steps are similar in other IaaSs, such as Amazon EC2.


Before creating the cartridge, the puppet master has to be configured by following this link: Apache Stratos Puppet Master setup guide

A live terminal recording of configuring a puppet master can be found through this link: live terminal recording of setting up

Let’s assume the puppet master was configured with the following values:



Creating the cartridge for Stratos

  1. Create an Ubuntu VM in OpenStack
  • Select the image with the OS and click Launch


  • This will lead to a popup window called Launch instance where the Flavor has to be selected


  • Choose a key pair


2. Log in to the created machine via SSH

ssh -i udara.pem ubuntu@


3. Install unzip

sudo apt-get install unzip

unzip is needed to extract the Stratos Cartridge Agent archive before installing the Cartridge Agent.


4. Create the configuration script in the path /root/bin/ with the content copied from this configuration file.

vim /root/bin/

Copy the content.

Make the script executable

chmod +x /root/bin/

This script installs the service you want (PHP in this case) and installs and starts the Puppet agent. The puppet agent then connects to the puppet master and installs the other required software, such as Apache2, PHP, Java and the Stratos cartridge agent.
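The script itself is linked rather than inlined above; as a rough illustration only (the package names and the puppetinstall path are assumptions, not the actual linked script), its shape is something like:

```shell
# Write an illustrative sketch of the configuration script to a temp file.
# This is NOT the real script from the linked configuration file; package
# names and paths below are assumptions.
cat > /tmp/config-sketch.sh <<'EOF'
#!/bin/bash
# Install the service this cartridge provides (PHP here)
apt-get update
apt-get install -y php5
# Install and start the puppet agent; it then pulls Apache2, Java,
# the Stratos cartridge agent, etc. from the puppet master
/root/bin/puppetinstall/puppetinstall
EOF
chmod +x /tmp/config-sketch.sh
```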


5. Create the init script in /root/bin/ with the content at this init script.

Copy and paste the content.

Make the script executable

chmod +x /root/bin/init.s


6. Create the puppet-install script in /root/bin/puppetinstall/puppetinstall with the content at this script.

vim /root/bin/puppetinstall/puppetinstall

Copy and paste the content

Make the script executable

chmod +x /root/bin/puppetinstall/puppetinstall


7. Create the configuration file /root/bin/stratos_sendinfo.rb with the content of stratos_sendinfo.rb

8. Now execute the

cd /root/bin



The questions prompted in this step should be answered with the following values.

Since we are going to make a PHP cartridge


As we set up the puppet master on the machine which has the IP address 192.168.18.134:

puppet master IP=

As we set up the puppet master with the hostname:

puppet master hostname =

root@udara-cartridge:~/bin# ./

This script will install and configure puppet agent, do you want to continue [y/n]y

–2014-04-01 04:04:52–

Connecting to… connected.

HTTP request sent, awaiting response… 404 Not Found

2014-04-01 04:04:52 ERROR 404: Not Found.

Please provide stratos service-name:php

Please provide puppet master IP:

Please provide puppet master hostname []

      This process may take some time; you may have to wait till the process completes.

Once the above process is completed, a log similar to the one below will be seen.

Info: Creating state file /var/lib/puppet/state/state.yaml

Notice: Finished catalog run in 2367.76 seconds.


9. Create the Snapshot

Now the machine is ready to become a cartridge. Required software is installed and configurations have been completed.


Click on Create Snapshot button




Save snapshot with a preferred name


Snapshot is waiting to be created



When the snapshot creation is completed, click on the snapshot to find details about it. The most important information is the ID of the image (snapshot).

When a PHP runtime is required, the user subscribes to a PHP cartridge in Stratos. When a PHP cartridge is subscribed to, behind the scenes Stratos creates an instance from the image ID retrieved earlier. Since PHP was already installed and configured at the snapshot-creation stage, the PHP runtime will be available when Stratos spawns an instance from that snapshot.


Deploy the cartridge JSON in Stratos

Below is the cartridge JSON which needs to be deployed in Stratos. Note that the image ID is the same as the ID of the snapshot created above.

     "type": "php",
     "provider": "apache",
     "host": "",
     "displayName": "PHP",
     "description": "PHP Cartridge",
     "version": "5.0",
     "multiTenant": "false",
     "portMapping": [
           "protocol": "http",
           "port": "80",
           "proxyPort": "8280"
           "protocol": "https",
           "port": "443",
           "proxyPort": "8243"
     "deployment": {
      "iaasProvider": [
         "type": "openstack",
         "imageId": "RegionOne/ac859fc9-d0dc-4f4fb084-bada02ad5129",
         "maxInstanceLimit": "5",
         "property": [
            "name": "instanceType",
            "value": "RegionOne/4"
            "name": "keyPair",
            "value": "stratos-s4"
Please refer to the cartridge deployment guide of Stratos for more information.
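Deployment itself is typically done against the Stratos Manager REST API. The sketch below only prints the command to run; the endpoint path, port and file name are assumptions based on Stratos 4.0 conventions, so verify them against the deployment guide:

```shell
# Assumed values; replace with your Stratos Manager host and port.
SM_HOST=localhost
SM_PORT=9443
# Print the deployment command (run the printed command against a live
# Stratos Manager; the endpoint path is an assumption).
echo curl -X POST -H "Content-Type: application/json" \
     -d @php-cartridge.json -k -u admin:admin \
     "https://${SM_HOST}:${SM_PORT}/stratos/admin/cartridge/definition"
```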

Lakmal Warusawithana: Scalable and dynamic load balancing with Apache Stratos

Load balancers play a major role in scaling applications in the cloud. A load balancer distributes incoming traffic across several server nodes, clusters, or other resources to maximize throughput and minimize response time, and provides a mechanism to scale the application based on the workload.

Apache Stratos is capable of provisioning scalable load balancers for any IaaS cloud, as well as integrating any third-party load balancer. This gives more flexibility and extensibility, while providing more efficient load balancing based on the deployment environment.

Stratos load balancing features and benefits

Provisioning - Load balancers can be spun up dynamically. Because load balancers are modeled as cartridges, Stratos will spin up the defined minimum number of load balancers when the first application subscription arrives. It is also capable of spinning up load balancers in every cloud and region, enabling effective multi-cloud deployments, especially for geographically distributed applications.

Flexibility - Dedicated load balancing can be defined per service. It is easy to define service-level load balancers via the REST API, while a single load balancer can also balance multiple services.

Expandability - Any third-party load balancer can be integrated. With the message broker and topology-based model, it is easy to integrate load balancers such as HAProxy, nginx, AWS ELB, etc. This gives optimized load balancing based on the deployment (e.g. use AWS ELB on an EC2 deployment).

Monitoring - Stratos monitors health stats such as load average, memory consumption and in-flight request counts per service.

Scalability - Based on the published health stats, Stratos can scale the load balancers themselves and update the DNS.

Recovery - Based on health stats, Stratos can identify faulty load balancers and re-provision a fully configured load balancer to replace the failed one.

Any cloud - Stratos can deploy load balancers on top of any cloud for which load balancer cartridges have been created.

Effective cloud bursting - Stratos is capable of creating load balancers and updating DNS while bursting into another cloud, which gives efficient load balancing.

Madhuka Udantha: WSO2 greg/product with IBM DB2

Here I am connecting to an IBM DB2 database.


Looking at our privileges.


Filtering user/group level



Via our client




The following statement retrieves all authorization names with privileges.

All authorization names that have been directly granted DBADM authority
Here are the permissions I have for the db2test1 user.
Here I am using a user; you can even try a role for this.
You will not need to grant further rights here; CONNECT access is enough.
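The statements themselves appear as screenshots in the original. As a sketch (not necessarily the exact statements used), the standard DB2 catalog view SYSCAT.DBAUTH can answer such questions; for example, listing authorization names directly granted DBADM authority:

```
db2 => SELECT DISTINCT GRANTEE FROM SYSCAT.DBAUTH WHERE DBADMAUTH = 'Y'
```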
Starting WSO2 with DB2

Here is the user I am going to use to connect the WSO2 product to DB2.

1. Update master-datasources.xml in the WSO2 product as below.

<description>The datasource used for registry and user manager</description>
<definition type="RDBMS">

<validationQuery>SELECT 1 FROM SYSIBM.SYSDUMMY1</validationQuery>
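The snippet above is only a fragment; a fuller datasource definition might look like the following sketch (the JDBC URL, credentials and pool settings are assumptions for illustration; com.ibm.db2.jcc.DB2Driver is DB2's standard JDBC driver class, and DB2 requires a FROM clause in the validation query):

```xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Host, port and database name are assumptions; adjust to your DB2 setup -->
            <url>jdbc:db2://localhost:50000/regdb7</url>
            <username>db2test1</username>
            <password>password</password>
            <driverClassName>com.ibm.db2.jcc.DB2Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1 FROM SYSIBM.SYSDUMMY1</validationQuery>
        </configuration>
    </definition>
</datasource>
```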

2. Add the DB2 JDBC driver libraries to ‘repository\components\lib’


3. Now start the WSO2 server


4. Here are my DB2 tables (created without -Dsetup)

Here are the tables


Here is User mgt


Reg tables check


274 records were added



To verify that we are on the correct user with greg (do not try this at production level), I stop my DB2 user's CONNECT privilege as below.

Using the ‘madhuka’ (DBADM) account, stop the CONNECT privilege.


You will see greg go off.


You will see a ‘UM_DIALECT’ table SQL error and ‘Error in getting the tenants.’, as this is the table we try to access first.

You can mount an ESB to this and play around (ESB with offset 1).


greg offset 0


WSO2 is OK to go with DB2.

Madhuka Udantha: DB2 database and User Privileges

IBM DB2 is a family of database server products developed by IBM. Here I was connecting DB2 and WSO2 products. I will show creating a user and granting privileges for a DB2 user.

Here I have two users: the first is 'madhuka' and the second is 'db2test1'. Madhuka has all the rights (admin); db2test1 is a standard user. I have a DB2 database called 'regdb7'.

Here I try to connect to regdb7 as user 'db2test1':

db2 => connect to regdb7 user db2test1
Enter current password for db2test1:
SQL1060N  User "DB2TEST1" does not have the CONNECT privilege.  SQLSTATE=08004



The db2test1 user does not have the privilege to connect to regdb7.

Now user 'madhuka' (admin) connects to the db and gives privileges to the db2test1 user.

db2 => connect to regdb7 user madhuka using <password>

   Database Connection Information

Database server        = DB2/NT64 10.5.1
SQL authorization ID   = MADHUKA
Local database alias   = REGDB7


Here we grant the user CONNECT privilege:

DB20000I  The SQL command completed successfully.
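The grant itself appears as a screenshot in the original; in DB2 syntax it would be along these lines (a sketch, not necessarily the exact statement used):

```
db2 => grant connect on database to user db2test1
```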


then quit

Now try connecting as db2test1:

db2 => connect to regdb7 user db2test1
Enter current password for db2test1:

   Database Connection Information

Database server        = DB2/NT64 10.5.1
SQL authorization ID   = DB2TEST1
Local database alias   = REGDB7


Yup, you are in.

You can revoke the permission as follows:


Here is what we tried out



Here is the full use case that we tried


Here is privilege list to try


Nirmal Fernando: Building your own PaaS using Apache Stratos (Incubator) PaaS Framework

This is the start of a series of blog posts I am planning to write on the topic "Building your own PaaS using Apache Stratos (Incubator) [1] PaaS Framework".

PaaS, wondering what it is? It stands for Platform as a Service. It is the layer on top of the Infrastructure as a Service (IaaS) layer in the cloud computing stack. Rackspace has published a white paper on the cloud computing stack, which you may like to read [2].

With the evolution of cloud computing technologies, people have realized the benefits these technologies can bring to their organizations. A few years back they were happy to use an existing PaaS and develop/deliver their SaaS (Software as a Service) applications on top of it. But now the industry has come to a state where they would like to customize and build their own PaaS, without having to wait until PaaS vendors deliver the customizations they need.

This gives rise to the need for a framework where you have the freedom to customize and build the PaaS you wish. In this sense, having a pluggable, extensible and, more importantly, free and open source PaaS framework would be ideal. Hard to believe such a framework exists? No worries, Apache Stratos (Incubator) is there for you!

Before going into the details of the topic I am going to discuss, it is worth understanding what Apache Stratos looks like. Apache Stratos consists of a set of core components, and the diagram below depicts them.

Currently Apache Stratos internally uses 3 main communication protocols, namely AMQP, HTTP and Thrift. 

The AMQP protocol is mainly used to exchange topology information across core components. The 'topology' describes the run-time state of the PaaS at a given time, such as existing services, service clusters, members, etc.

The HTTP protocol is used to perform SOAP service calls among components.

The Thrift protocol is used to publish various statistics to the Complex Event Processing Engine.

What are the Apache Stratos (Incubator) core components capable of doing? Lakmal has explained this in [3].

In this first post of the series, I will roughly go through the major workflows you need to perform in order to bring up your own PaaS using Apache Stratos. Have a look at the diagram below.

As the sequence diagram explains, to build your own PaaS you need, at minimum, to follow the steps up to the 'PaaS is ready!' state. Here I am going to discuss the very first step you need to follow: 'Deploy Partitions'.

Let's understand the terminology first. What you deploy via a partition is a reference to a place in an IaaS (e.g. Amazon EC2, OpenStack, etc.) which is capable of giving birth to a new instance (machine/node). Still not quite clear? Don't panic, let me explain via a sample configuration.

"id": "AWSEC2AsiaPacificPartition1",
"provider": "ec2",
"property": [
"name": "region",
"value": "ap-southeast-1"
"name": "zone",
"value": "ap-southeast-1a"

The above JSON defines a partition. A partition has a globally unique (among partitions) identifier ('id') and an essential element 'provider', which points to the corresponding IaaS provider type. This sample has two properties, called 'region' and 'zone'. The properties you define here should be meaningful in the context of the relevant provider. For example, in Amazon EC2 there are regions and zones, hence you can define your preferred region and zone for this partition. So, in a nutshell, what this partition references is the ap-southeast-1a zone in the ap-southeast-1 region of Amazon EC2. Similarly, if you take OpenStack, it has regions and hosts.

The above sequence diagram explains the steps that get executed when you deploy a partition. You can use the Stratos Manager REST API, the Apache Stratos CLI tool or the Stratos Manager UI to deploy partitions. Partition deployment succeeds only if the partitions are validated against their IaaS providers at the Cloud Controller. The Autoscaler is where these partitions get persisted, and it is responsible for selecting a partition when it decides to start up a new instance.

The following is a sample cURL command to deploy partitions via the REST API:

 curl -X POST -H "Content-Type: application/json" -d @request -k -v -u admin:admin https://{SM_HOST}:{SM_PORT}/stratos/admin/policy/deployment/partition

@request should point to the partition json file. More information on the partition deployment can be found at [4].
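Before POSTing, it can help to sanity-check the partition JSON. The sketch below writes the sample partition shown earlier to a file (the file name is an assumption) and validates it with Python's stdlib JSON parser:

```shell
# Write the sample partition definition to a file and check it parses as JSON.
cat > partition.json <<'EOF'
{
  "id": "AWSEC2AsiaPacificPartition1",
  "provider": "ec2",
  "property": [
    { "name": "region", "value": "ap-southeast-1" },
    { "name": "zone",   "value": "ap-southeast-1a" }
  ]
}
EOF
# json.tool exits non-zero on invalid JSON
python3 -m json.tool partition.json > /dev/null && echo "partition.json is valid JSON"
```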

That concludes the first post; await the second!



Manula Chathurika Thantriwatte: How to run Apache Stratos, subscribe to cartridges, and see how autoscaling happens

In this video I'm going to show,

  • How do I run Stratos on my laptop
  • PaaS story video on how to get it done
  • How to deploy apps - PHP and Tomcat 
  • JMeter to load up and instances popping up 

Damitha Kumarage: Apache Stratos on Openstack/Docker - Part One

In this and the next article I explain how to set up Apache Stratos using OpenStack/Docker as the underlying IaaS. This is the third article in a step-by-step series (see [1] and [2]). In the previous articles I discussed how to set up the IaaS environment, that is, OpenStack Havana with the Docker driver. In this article I discuss how to build a Tomcat cartridge for Stratos on OpenStack/Docker. In the next article I will discuss how to install Apache Stratos to use this OpenStack/Docker setup as the IaaS.

This series takes a bottom-up approach rather than a top-down approach to the Apache Stratos architecture [3]. In the block diagram describing the architecture, you can see that Apache Stratos can run its cartridges on many IaaSes (concurrently or separately). The communication between Stratos and the IaaS happens through the jclouds APIs. jclouds supports more than fifty IaaSes, so theoretically Apache Stratos can be made to support all of them. We started from the OpenStack/Docker IaaS and first learnt how to make that IaaS ready for Stratos to work with it. Before going further, here is a brief description of the Stratos architecture, focused mainly on how cartridges connect to the main picture.

Apache Stratos is a fast-growing open source PaaS where users can deploy their applications in various runtime environments. A user in Stratos is a tenant, a tenant admin or a Stratos admin. Let's understand this with the following scenario. Company X has selected Apache Stratos as its private PaaS environment. The Stratos admin's role is to set up Stratos and any IaaS environment used by Stratos. Setting up Stratos means installing its various middleware servers, such as the Stratos Manager, Stratos Controller and Stratos Autoscaler. Then he needs to set up the IaaS, for example OpenStack/Docker, and then set up the runtime environments in Stratos, called cartridges. Now suppose company X has a set of customers who need to use a web application deployed on Tomcat, and there is an administrator managing this application. The customers are the tenants and the administrator is the tenant admin. The tenant admin subscribes to the Tomcat cartridge already created by the Stratos admin. When he subscribes, he provides a git repository URL where he has uploaded his Tomcat web application. This user interface and tenant management functionality is provided by the Apache Stratos Manager server. When the tenants use this application, the scaling decisions are taken by the Autoscaler server. Communication between the underlying IaaS and the Stratos PaaS is managed by the Stratos Controller server.

What is meant by the Tomcat cartridge mentioned above? It is simply a cluster of Apache Stratos-aware virtual machines running the Tomcat web services engine, usually fronted by a load balancer. This cluster runs in the underlying IaaS; in our case it is a cluster of Docker containers running in OpenStack.

So how does the Stratos admin create the Tomcat cartridge? It is just creating OS images with the necessary software for the Tomcat web server installed, for the underlying IaaSes on which the cartridge is supposed to run, plus some configuration settings on the Stratos Controller. In our case this is equivalent to creating an OpenStack/Docker image, uploading it to the OpenStack glance repository, and setting the Tomcat cartridge configuration in the Stratos Controller with the image ID information. That is the scope of this article.

So what we are doing here is, first, creating an Ubuntu OS Docker image with the necessary software and configuration to run the Tomcat web server, based on the Stratos base image we introduced in the previous article. Then we upload this image to the OpenStack glance repository and test it in the OpenStack environment. That will complete the IaaS part of our Tomcat cartridge. In the next article I will explain how to complete the next step of introducing our Tomcat cartridge to Apache Stratos.

First we need to build Apache Stratos from source. For that I suggest you have a separate VirtualBox VM node. The reason is that if you are a developer, your development/testing environment (the creation of which is the aim of this article series) has to be regularly updated with the latest code, so having a separate build machine is always good. Of course you can later integrate an automated build environment like Jenkins into your setup as well. But for now let's do it this way. Create such a VM node and log into it.

git clone

cd incubator-stratos
Now we will build Stratos using Maven version 3. Before building, set the following Maven options; otherwise out-of-memory errors could occur.

export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m"

mvn clean install

This will build Stratos Server, Load balancer, cartridge agent and other artifacts which will be needed as we proceed.

Second, we should install the Puppet master onto our VirtualBox VM node where OpenStack/Docker runs. I will explain later why we need Puppet. I suggest you follow the Configure Puppet Master section of the Apache Stratos documentation. The documentation gives a sample PUPPETMASTER-DOMAIN; in my setup I gave my own value, and you have the choice of your own. Also see my changes to the /etc/puppet/manifests/nodes.pp file:

$package_repo         = ''
#following directory is used to store binary packages
$local_package_dir    = '/mnt/packs'
# Stratos message broker IP and port
$mb_ip                = ''
$mb_port              = '61616'
# Stratos CEP IP and port
$cep_ip               = ''
$cep_port             = '7611'
# Stratos Cartridge Agent’s trust store password
$truststore_password  = 'wso2carbon'

Since in our single-node setup all servers run on the same node, both the mb ip and cep ip have the same value, our Virtualbox IP. We have changed the package_repo value as well. This directive points to the server from which the Tomcat tar.gz archive is downloaded by Puppet. In our Virtualbox VM node we already have a running Apache server, installed by the Openstack installation. We configure it as follows to have an Apache Virtualhost running on port 8080 so that Puppet can download Tomcat from there.

Create /var/www/ with the following content:

<h1>Download required software for Stratos cartridges from here</h1>


chown -R wso2:wso2 /var/www/

Create file /etc/apache2/sites-available/ with following content

Listen 8080
NameVirtualHost *:8080

<VirtualHost *:8080>
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www/
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
ln -s /etc/apache2/sites-available/ /etc/apache2/sites-enabled/


service apache2 restart

Now typing the Virtualhost address in a browser should show the content of the index.html page.

Download the Tomcat 7.0.52 archive into /var/www/.
Now we should be able to download this file from our Apache server Virtualhost with wget.

For the Stratos installation we also need a database. We already have a Mysql database in our Virtualbox VM, created while installing Openstack.
Create a Mysql database user; we will later need this user when configuring Stratos.

grant all privileges on *.* TO 'wso2'@'localhost' identified by 'g' with grant option;
grant all privileges on *.* TO 'wso2'@'%' identified by 'g' with grant option;
flush privileges;

In the last article we created the Stratos base cartridge image using a base Dockerfile. Now we will create the Tomcat cartridge Dockerfile using our previous image as the base.
# stratosbase
# VERSION 0.0.1
FROM stratosbase1
MAINTAINER Damitha Kumarage ""
MAINTAINER Lakmal Warusawithana ""

RUN apt-get install -q -y puppet
RUN apt-get install -q -y ruby

RUN mkdir /root/bin
ADD /root/bin/
ADD puppet.conf /etc/puppet/
RUN chmod +x /root/bin/
ADD stratos_sendinfo.rb /root/bin/

ENTRYPOINT /usr/local/bin/ | /usr/sbin/sshd -D

See what additions we make here to the Dockerfile. First we install Puppet. The reason is that Puppet handles everything related to installing Tomcat and the Apache Stratos agent. In each cartridge there should be a Stratos agent to coordinate the cartridge instance with the Stratos server. When the cartridge instance loads, the Puppet client communicates with the Puppet master to install the software. Of course, we could get rid of Puppet by doing everything related to installing Tomcat and the Stratos agent ourselves within the Dockerfile; if we did, we could reduce the load time of cartridges. However, the real strength of Puppet is unavoidable in a production-ready cartridge, where we need to periodically update the cartridges with patches and maintenance code. You can download, puppet.conf and stratos_sendinfo.rb from [4]. You need to add the following line as the last line to your previously created file.

/root/bin/ > /var/log/stratos_init.log

When the above line executes while the cartridge is loading, it installs and configures the Stratos agent and the necessary software in the cartridge.

Now you can build, tag and push the above image into the glance repository:

docker build -t tomcat .
docker tag tomcat
docker push
This ends the third article in the series. In the next article I will explain how to install Apache Stratos in the same Virtualbox VM where we created our Tomcat cartridge image. Then we will explain how to make Stratos aware of our Tomcat cartridge. You will be able to use this setup as a development and testing environment for Apache Stratos.


Sivajothy VanjikumaranQuick Note on How to install java in Ubuntu/Mint

In the good old days we had a simple way to install Sun Java on our Ubuntu machines! Since Oracle has taken over the proprietorship of Oracle Java (aka Sun Java), and due to Oracle's software redistribution policy, it is no longer as simple as an "apt-get install".

Nevertheless, there are some good documents that show how to install Java without any struggle :).

The first three commands below install Oracle Java for you, and the last command sets the installed Java as the default Java version.

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
sudo apt-get install oracle-java7-set-default


Yumani RanaweeraWhat happens to HTTP transport when service level security is enabled in carbon 4.2.0

Got to know this from Evanthika yesterday; thought of blogging about it because I was under the impression it was a bug, and I'm sure many of us would assume the same.

In Carbon 4.2.0 products, for example WSO2 AS 5.2.0, when you apply security the HTTP endpoint is disabled and disappears from the service dashboard as well.

Service Dashboard


In earlier Carbon versions this did not happen; both endpoints used to appear even if you had enabled security.

Knowing this, I tried accessing the HTTP endpoint, and when that failed I tried:
- restarting the server,
- dis-engaging security
but neither helped.

The reason: this is not a bug, but by design. The HTTP transport is disabled when you enable security, and to activate it again you need to enable HTTP from the service-level transport settings.

Transport management view - HTTP disabled

Change above as this;

Transport management view - HTTP enabled

Lali DevamanthriSecure Sockets Layer (SSL) and SSL Certificates

What Is SSL?

SSL is a security protocol. To establish an encrypted link between a website (server) and a browser (client), a standard security technology is needed. SSL (Secure Sockets Layer) provides that standard technology.

Sensitive information such as credit card numbers, social security numbers, and login credentials can be transmitted securely with SSL. Normally, data sent between browsers and web servers is sent in plain text, leaving you vulnerable to eavesdropping. If an attacker is able to intercept the data being sent between a browser and a web server, they can see and use that information.

The SSL protocol states a few parameters to be shared between server and client in order to verify the server and encrypt the session. All browsers have the capability to interact with secured web servers using the SSL protocol. However, the browser and the server need what is called an SSL Certificate to be able to establish a secure connection.

What is an SSL Certificate and How Does it Work?

SSL Certificates have a key pair: a public and a private key. Using these two keys, an encrypted connection can be established. The certificate also contains what is called the “subject,” which is the identity of the certificate/website owner.

To get a certificate, you must create a Certificate Signing Request (CSR) on your server. This process creates the private key and a CSR data file that you send to the SSL Certificate issuer (called a Certificate Authority or CA). The CA uses the CSR data file to create an SSL Certificate matching your private key without compromising the key itself. The CA never sees the private key.

Once you receive the SSL Certificate, you install it on your server. You also install a pair of intermediate certificates that establish the credibility of your SSL Certificate by tying it to your CA’s root certificate. The instructions for installing and testing your certificate will be different depending on your server.



1. Browser connects to a web server (website) secured with SSL (https). Browser requests that the server identify itself.

2. Server sends a copy of its SSL Certificate, including the server’s public key.

3. Browser checks the certificate root against a list of trusted CAs and that the certificate is unexpired, unrevoked, and that its common name is valid for the website that it is connecting to. If the browser trusts the certificate, it creates, encrypts, and sends back a symmetric session key using the server’s public key.

4. Server decrypts the symmetric session key using its private key and sends back an acknowledgement encrypted with the session key to start the encrypted session.

5. Server and Browser now encrypt all transmitted data with the session key.
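On the client side, the checks in step 3 are what a TLS library enforces by default. A minimal sketch using Python's standard `ssl` module shows the verification settings a default client context carries:

```python
import ssl

# A default client context performs the checks described in step 3:
# it requires the server to present a certificate, validates it against
# the system's trusted CAs, and checks the hostname against the
# certificate's common name / subject alternative names.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate must be presented and valid
print(ctx.check_hostname)                    # name must match the site being visited
```

Disabling either setting reopens the eavesdropping risk described above, which is why both are on by default and require an explicit opt-out.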


Why Do I Need SSL?

One of the most important components of online business is creating a trusted environment where potential customers feel confident in making purchases. Browsers give visual cues, such as a lock icon or a green bar, to help visitors know when their connection is secured.

In the below image, you can see the green address bar that comes with extended validation (EV) SSL Certificates.

Yumani RanaweeraFine Grained Authorization scenario

This is actually a common scenario, which I will also be posting on my blog.

A request coming from the client will be authenticated at the WSO2 ESB proxy, which acts as the XACML PEP and authorizes the request to access the back-end service by processing the request at WSO2 IS, which acts as the XACML PDP.

So the actors in the scenario are;
PEP - Policy Enforcement Point - WSO2 ESB
PDP - Policy Decision Point - WSO2 IS
BE - echo service in WSO2AS
client - SoapUI

Let's try it step by step:

1. Configure Entitlement proxy (ESB-4.8.0)
a) Create a custom proxy, giving echo service as the wsdl;
WSDL URI - http://localhost:9765/services/echo?wsdl

b) In-Sequence
- Select Entitlement mediator and add entitlement information
Entitlement Server - https://localhost:9444/services/
Username - admin
Password - admin
Entitlement Callback Handler - UT
Entitlement Service Client Type - SOAP - Basic Auth

- Add results sequences for OnAccept and OnReject nodes.

OnReject as below;

OnAccept as below - send mediator to BE service;

c) OutSequence
-Add a send mediator

My complete proxy service is built like this;
<?xml version="1.0" encoding="UTF-8"?>

<proxy xmlns=""
         <entitlementService remoteServiceUrl="https://localhost:9444/services/"
               <makefault version="soap11">
                  <code xmlns:soap11Env=""
                  <reason value="Wrong Value"/>
                     <address uri="https://localhost:9445/services/echo"/>
   <publishWSDL uri="http://localhost:9765/services/echo?wsdl"/>
   <policy key="conf:/repository/axis2/service-groups/EntitlementProxy/services/EntitlementProxy/policies/UTOverTransport"/>

2) Start the back-end service.
In my scenario it is the echo service in WSO2AS-5.2.0

3) Configure XACML Policy using IS-4.5.0
a) Go to Policy Administration > Add New Entitlement Policy > Simple Policy Editor
b) Give a name to the policy and fill in other required data.
This policy is based on - Resource
Resource which is equal to - {echo} ---> a wildcard entry for the BE service name.
Action - read

 <Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" PolicyId="TestPolicy" RuleCombiningAlgId="urn:oasis:names:tc:xacml:1.0:rule-combining-algorithm:first-applicable" Version="1.0">
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-regexp-match">
<AttributeValue DataType="">echo</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource" DataType="" MustBePresent="true"/>
<Rule Effect="Permit" RuleId="Rule-1">
<Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
<AttributeValue DataType="">read</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action" DataType="" MustBePresent="true"/>
<Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:any-of">
<Function FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal"/>
<AttributeValue DataType="">admin</AttributeValue>
<AttributeDesignator AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject" DataType="" MustBePresent="true"/>
<Rule Effect="Deny" RuleId="Deny-Rule"/>


c) After creating the policy, Click 'Publish To My PDP' link.

d) Go to 'Policy View' and press 'Enable'

e) To validate the policy, create a request and tryit. Click on the 'TryIt' link of the policy (in the 'Policy Administration' page) and give request information as below;

 <Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17" CombinedDecision="false" ReturnPolicyIdList="false">
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
<AttributeValue DataType="">read</AttributeValue>
<Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
<AttributeValue DataType="">admin</AttributeValue>
<Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
<Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id" IncludeInResult="false">
<AttributeValue DataType="">{echo}</AttributeValue>
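As an illustration of what the PDP does with such a request, here is a much-simplified Python sketch of the first-applicable evaluation of the policy above (the real PDP evaluates the full XACML semantics; the function name and return strings here are just illustrative):

```python
import re

def evaluate(resource: str, action: str, subject: str) -> str:
    """Simplified first-applicable evaluation of the policy above:
    the policy target matches resources containing 'echo'; Rule-1
    permits the 'read' action for the 'admin' subject; the trailing
    Deny-Rule catches everything else the target matches."""
    if re.search("echo", resource) is None:
        return "NotApplicable"          # policy target does not match
    if action == "read" and subject == "admin":
        return "Permit"                 # Rule-1
    return "Deny"                       # Deny-Rule

# The TryIt request above (resource {echo}, action read, subject admin):
print(evaluate("echo", "read", "admin"))   # Permit
```

With this logic, the same request with a different action or subject falls through to the Deny-Rule, which is exactly the behaviour the TryIt tool lets you verify.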

4) Send a request from client
a) Launch SoapUI and create a project using echo service wsdl.
 -  Add username/password in request properties
 -  Set the proxy url as the endpoint URL
 -  Send the request

Enable DEBUG logs in PDP and view the request and response as below;
a) Open IS_HOME/repository/conf/

b) Add following line

c) View IS logs as below;

Chathurika MahaarachchiHow the Message Router EIP can be implemented using WSO2 ESB

Introduction to Message Router.

A message router reads the content of a message and, based on that content, directs the incoming message to the correct recipient. A message router is useful for handling scenarios where a specific logical function is distributed among several physical systems and we need to deliver the message to the correct service.

The following diagram describes behavior of message router.


Message router by practical example

Now let's have a look at how the message router works through a practical scenario.

Assume that we have deployed the "Echo" service in WSO2 Application Server. Here we need to deliver the incoming message (string or integer) to different locations. What you need to do is first create a proxy service in WSO2 ESB, giving the endpoints of the services and the WSDL of the echo service deployed in the Application Server.

Create a proxy service as follows in WSO2 ESB

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns=""
         <log level="full"/>
         <switch source="local-name(/*/*/*)">
            <case regex="echoString">
               <log level="custom">
                  <property name="return" value="String"/>
                     <address uri="http://localhost:9763/services/echo"/>
            <case regex="echoInt">
               <log level="custom">
                  <property name="return" value="Int"/>
                     <address uri="http://localhost:9764/services/echo"/>
               <log level="custom">
                  <property name="return" value="default case"/>
                  <property name="return" expression="local-name(/*/*/*)"/>
         <log level="full">
            <property name="MESSAGE" value="Executing default 'fault' sequence"/>
            <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
            <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
   <publishWSDL uri="http://localhost:9763/services/echo?wsdl"/>

What happens in this proxy service is that the switch mediator observes the message and filters the content based on the given XPath expression. If the filtered content matches a case (string/integer), the message is routed to the corresponding endpoint. If no matching condition is found, the message is directed to the default case.
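The decision the switch mediator makes, matching `local-name(/*/*/*)` against each case, can be sketched in Python (the endpoint table and helper below are illustrative, not part of the ESB; the echo namespace is a placeholder):

```python
import xml.etree.ElementTree as ET

# Endpoint table mirroring the proxy configuration above.
ENDPOINTS = {
    "echoString": "http://localhost:9763/services/echo",
    "echoInt":    "http://localhost:9764/services/echo",
}

def route(soap_envelope: str) -> str:
    """Return the endpoint for a SOAP message, mimicking
    switch source="local-name(/*/*/*)" in the proxy service."""
    root = ET.fromstring(soap_envelope)
    operation = root[0][0]                   # /*/*/* -> first child of the Body
    local = operation.tag.split("}")[-1]     # local-name(): strip the namespace
    return ENDPOINTS.get(local, "default")   # unmatched content -> default case

msg = ('<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">'
       '<soapenv:Body><echo:echoInt xmlns:echo="http://echo.services.example"/>'
       '</soapenv:Body></soapenv:Envelope>')
print(route(msg))  # the echoInt case routes to the port-9764 endpoint
```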

Once you invoke the above proxy you will receive responses as follows.

case 1 - string

<soapenv:Envelope xmlns:soapenv="" xmlns:echo="">

[2014-04-06 00:01:46,824]  INFO - LogMediator return = String

case 2 - integer

<soapenv:Envelope xmlns:soapenv="" xmlns:echo="">

[2014-04-06 00:05:28,934]  INFO - LogMediator return = Int

Case - default

<soapenv:Envelope xmlns:soapenv=""><soapenv:Body><p:echoStringArrays xmlns:p="">
<!--0 or more occurrences--><a>aaa</a>
<!--0 or more occurrences--><b>bbbb</b>
<!--0 to 1 occurrence--><c>vvvvv</c>

[2014-04-06 00:33:55,953]  INFO - LogMediator return = default case, return = echoStringArrays

Ganesh PrasadThe End Of Ubuntu One - What It Means

Although I'm a big fan of Ubuntu Linux as a desktop OS, I've never been interested in their cloud storage platform Ubuntu One, and found it a bit of a nuisance to be asked to sign up for it every time I installed the OS.

Now Ubuntu One is being shut down. I'm 'meh' but still a bit surprised.

The linked article talks about mobile, and how new mobiles such as the Ubuntu-powered ones need cloud storage to succeed. If so, isn't it really bad timing for Canonical to walk away from a fully operational cloud platform just when its mobile devices are entering the market?

Ubuntu-powered smartphones
(Do you know what the time on the middle phone refers to?)

I think it's about economics.

Ubuntu's statement says:

If we offer a service, we want it to compete on a global scale, and for Ubuntu One to continue to do that would require more investment than we are willing to make. We choose instead to invest in making the absolute best, open platform and to highlight the best of our partners’ services and content.
Hmm. I read this as Canonical trying to build a partner ecosystem that will substitute for having a big cloud-and-mobile story like Google does, without the investment that such a proprietary ecosystem will require. Let's see if they succeed.

The other side-story in the linked article is about telcos and their role. Having worked at a telco over the last two years, I can confirm that the major fear in the telco industry is being reduced to commodity carriers by "over the top" services. The telcos are fighting to offer content, and will want willing mobile wannabe partners like Mozilla and Canonical to offer smartphone platforms that will work with networking infrastructure and make the telcos more attractive (through content that both players source from content providers). It will be interesting to see how this four-way, federated partnership (between multiple telcos, independent smartphone platform vendors like Mozilla and Canonical, smartphone device OEMs and content providers) will play out. Many of these companies will think of themselves as the centre of the Universe and the others as partners.

"Nothing runs like a fox" - Well, let's see if the Firefox Smartphone has legs

In the meantime, some good news for startup cloud providers ("startup" only with respect to the cloud, since they will still need deep pockets to set up the infrastructure!): Canonical is open-sourcing its Ubuntu One storage code “to give others an opportunity to build on this code to create an open source file syncing platform.” This should be interesting.

Yumani RanaweeraUsing operations scope to hold my values while iterating

The Iterate mediator breaks a message at the given XPath pattern and produces smaller messages. If you need to collect an attribute value throughout the iteration, how do you do it?

With a lot of help from IsuruU, I managed to work out this solution. Here I am iterating through this message [a], and breaking it into sub-messages using this [b] pattern. I need to collect the values of '//m0:symbol' in a property, send the processed values to the client, and send the values of failed messages to the failure sequence.

 [a]  - Request message
  <soapenv:Envelope xmlns:soapenv="">
<m0:getQuote xmlns:m0="http://services.samples">
[b] xpath expression

Here is my insquence:
         <iterate xmlns:m0="http://services.samples"
                  <property name="PID"
                            expression="fn:concat(get-property('operation','PID'),//m0:symbol,' ')"
                  <store messageStore="pid_store"/>
         <log level="custom">
            <property name="Processed_PIDs" expression="get-property('operation','PID')"/>
         <payloadFactory media-type="xml">
               <ax21:getQuoteResponse xmlns:ax21="http://services.samples/xsd">
               <arg xmlns:ns="http://org.apache.synapse/xsd"

Let me explain the above. With 'expression="//m0:getQuote/m0:request"' the request message is split into separate messages as described earlier. Since my scenario was to collect a given value from each split message and send them along the appropriate path as a single message, I have used continueParent="true" and sequential="true". This gives sequential processing instead of the default parallel processing behaviour of the iterator.

Then, in the target sequence within the iterator that mediates each split message, I have used a property mediator. With it, I collect the value of //m0:symbol and store it in a property named 'PID'.

The scope of the PID property was set to scope=operations to preserve the property within iterated message flow.

Later, as per the initial requirement, the message is sent to a message store. A log of the property is printed for ease of tracking. Then I prepared a payload to send the message with the PID as an attribute.
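The accumulation that `fn:concat(get-property('operation','PID'), //m0:symbol, ' ')` performs across iterations can be sketched in plain Python (the symbol values are made up for illustration):

```python
# Each split message appends its //m0:symbol value to a property that
# survives the whole operation, because PID is read back and re-written
# on every iteration.
symbols = ["IBM", "MSFT", "SUN"]   # //m0:symbol of each split message

pid = ""                            # operation-scoped 'PID' property
for symbol in symbols:              # one pass per iterated sub-message
    pid = pid + symbol + " "        # the concat expression

print(pid)  # "IBM MSFT SUN "
```

The trailing space is why the fault sequence below trims the last character with fn:substring before logging the failed PIDs.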

The fault sequence was built like this to capture faulty messages and pass their PIDs.

         <log level="full">
            <property name="MESSAGE"
                      value="--------Executing default &#34;fault&#34; sequence--------"/>
            <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
            <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
         <property xmlns:m0="http://services.samples"
                   expression="fn:substring(fn:concat(get-property('operation','PID'),//m0:symbol,' '),1,(fn:string-length(get-property('operation','PID'))-1))"
         <log level="custom">
            <property name="Failed_PIDs" expression="get-property('operation','PID')"/>
         <payloadFactory media-type="xml">
               <ax21:getQuoteResponse xmlns:ax21="http://services.samples/xsd">
               <arg xmlns:ns="http://org.apache.synapse/xsd"

The complete proxy configuration can be found here. TCPMon outputs are attached below for further clarity.

John MathonTesla : Case Study : Internet of Things

The Tesla automobile is a prominent example of the Internet of Things, and of the potential problems and benefits of the idea.



IOT properties of the Tesla 

1) The Tesla has a persistent 3G cell connection to the internet that is paid for gratis by Tesla. This makes it one of the “Internet of Things”. It also has Wifi, bluetooth and a garage door opener built in.
2) The car has an API! Tesla API
3) The car can provide geolocation information.
4) The car has attitude (angle) and various other sensors for acceleration.
5) The car has a camera (in the back).
6) The car could be lethal to both passengers and others if it were hacked inappropriately.
7) Tesla has reportedly formed an early warning threat detection and reward system to reward hackers who discover vulnerabilities in the car's security.
8) The car can detect the presence of a key fob within a small distance of the vehicle's perimeter.
9) Virtually all aspects of the car's functionality are digitized and theoretically available for inspection and utilization, including energy use, positions of wheels, brakes and emergency brake, climate system, seat positions, mirrors, door handles …
10) The car has an “App framework” that allows developers to build apps to run in the car. This will be enabled in late 2014 with the addition of a Chrome browser and android app compatibility. Currently there are only a couple of internet music apps built into the car.
11) The car has a browser with geolocation capability and a map application.
12) The car has a horn and lights for external signaling, as expected.
13) The car can be woken from a low-energy sleep state over the air and booted up at any time needed to query or operate the car.
14) The car can take in new versions of its software and firmware and upgrade itself automatically.
15) The car has a 17″ touchscreen console for controlling all functions and a video console for the driver's speedometer and other driving information.
16) The car has a smart charging system that can adapt to almost any electrical source that is plugged into the car. Adapters for things like RV hookups and clothes-dryer plugs, as well as standard 110V and 220V configurations, are supported. The car can also accept up to 500 Amps of current in DC mode for quick charging. It has the ability to regulate the power consumed to the capacity of the line it is connected to, and to reduce consumption as the batteries allow, as well as letting the user designate lower power levels and even a timer to control when charging starts.

Functionality of the Car Itself



The functionality of a car is well known. The Tesla is a fully functional car with extraordinary performance. Even the slowest version of the Tesla is faster than most high-performance sedans from luxury car makers, and the performance version is at least as fast as any sports car costing five times its price.


It is 3-5 times less expensive to drive per mile than even fuel-efficient ICE cars (internal combustion engine cars), and the maintenance is minimal: the number of moving parts, and of components that could wear down, is a small fraction of an ICE's. There are no oil changes, belts to replace, spark plugs or wearable components other than brakes and tires. There is some concern that millions of ICE car workers could be put out of a job because the car simply doesn't need maintenance like ICE cars do, and if at-home charging takes off then many thousands of service stations might go out of business as well. That's disruptive. Since the car has so few moving parts, the body can be built with no-compromise aerodynamic and safety considerations.


The Tesla was ranked by the NHTSA and Consumer Reports as the safest car ever built and the most efficient. The NHTSA had their “car breaking” machine broken by the Tesla: after applying a force equivalent to four Teslas stacked on the roof of the car, the machine ran out of steam. The NHTSA was also unable to flip the car, since the Tesla battery is positioned in the floor. The car had a safety rating higher than any other car ever tested, both overall and in all five categories tested. As far as I can tell, no passenger in a Tesla has been seriously injured or killed in its short history so far, and Elon has said this as well. Recent reports of fires have not been life threatening or injurious to anyone.

Battery Life

The biggest consideration with the car, like almost any IoT device, is battery life. The Tesla will drive between 150 and 400 miles depending on the battery configuration, the speed you drive and the various options you use while the car is on. It has smart configuration options to reduce power consumption, and ways of telling you how you are using the car's power now and over various time periods, so you can learn how to optimize its performance. Tesla is building a network of hundreds initially (covering more than 95% of the US population in 2014) and possibly thousands of supercharging stations over the next several years worldwide, where you may charge the car from empty to half full in 20 minutes, or replace the battery with a fully charged lender battery in 90 seconds, twice as fast as filling an ICE car with liquid dinosaur even at a fast pumping station.


There are fewer than 25,000 Teslas on the road, but already its sales have impacted BMW and Mercedes, causing them to report declining 2013 sales for their luxury sedans. Every 2-4 weeks Tesla issues software upgrades to the car via the wireless connection. The latest upgrade improved the way the variable-height suspension system works, improved the car's handling on steep hills when at a stop, improved the bluetooth connectivity, and improved the way the car measures remaining range and energy usage, among a number of other things. Previous versions have fixed charging problems with faulty household wiring which had caused fires. The ability of Tesla to fix the car over the air, without a trip to a shop, is revolutionary.

The Future

The following video demonstrates features Tesla may plan for future versions:  Telsa Z version

When the Android app capability is turned on later this year, it will be possible to do many more things with the car. At first, standard android apps like Waze and various existing entertainment apps will undoubtedly be the big downloads.

However, if some functions of the car itself are exposed to apps, you can imagine many additional apps touching functions of the car that raise security and privacy concerns. There are also safety concerns, from things as simple as driver distraction to apps that actually modify the car's functionality in unpredictable ways that cause a crash. I want to say this is all very speculative. Tesla has not said what they will enable, whether all apps will be able to run in the Tesla, or what functions, if any, they might make available to app writers. In addition, Tesla has said they are going to do like Google, Microsoft and Apple and have a reward program for those who discover vulnerabilities. I hope so.

IOT Considerations

Network Effect

Once an IoT device is available and providing functionality, whether it produces data or is an actor able to do things or both, it becomes part of the network effect. The network effect is the multiplicative impact connected people or devices have; usually the effect increases non-linearly with the number of items connected. Some network effects can be negative: for instance, the ability to reach a large number of cars and hack them could mean making large numbers of cars fail on a highway at once.

However, what we don't know is what the network effects of having lots of IoT devices and people in the network will be. New applications we can't imagine today are possible and could be revolutionary in terms of their impact on our quality of life, efficiency, or whatever measure you choose. For instance, if we had lots of people being monitored for various biological parameters, could we figure out how to intervene before heart attacks occur, or diagnose people remotely for diseases or problems? Could we discover new combinations of drugs or supplements that could be beneficial? With devices such as cars, are there ways to revolutionize how we deliver transportation, or make transportation more efficient, with less traffic and lower cost? Can we discover new ways to deliver things to people?

The car IoT is a good example. One of the things you can do with a car API is to provide eventing: you can detect events based on any telemetry, let other people know things, or put in place business processes that automate activities. If I were in the construction business, I could use eventing to watch the delivery of components needed for the construction and plan the activity around that construction better. If it is my car, maybe things can be done automatically at my destination as I approach it.
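As a purely hypothetical sketch (none of these function or field names come from Tesla's actual API), telemetry-based eventing such as the "things happen automatically as I approach my destination" case might look like:

```python
import math

def near(pos, target, radius_km=1.0):
    """Rough distance check between two (lat, lon) points,
    using a flat-earth approximation good enough for a geofence."""
    dlat = (pos[0] - target[0]) * 111.0          # ~111 km per degree latitude
    dlon = (pos[1] - target[1]) * 111.0 * math.cos(math.radians(target[0]))
    return math.hypot(dlat, dlon) <= radius_km

def on_telemetry(sample, destination, handlers):
    """Fire the registered handlers when a position sample
    enters the destination geofence (hypothetical field names)."""
    if near(sample["position"], destination):
        for handler in handlers:
            handler(sample)

arrived = []
on_telemetry({"position": (37.3947, -122.1503)},   # sample inside the geofence
             destination=(37.3948, -122.1504),
             handlers=[arrived.append])
print(len(arrived))  # 1
```

A real deployment would subscribe handlers to a telemetry stream rather than poll single samples, but the event-on-telemetry shape is the same.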


Another example of network effect is with bigdata. Bigdata enables us to look smarter by accumulating lots of detailed information on activity and examining that detail to figure out patterns of interest. For a car, that could be used on the car itself to improve its performance. A company is already doing that with Tesla: Smart Trip Planner. It collects data about your car's performance and stores that data for analysis. Already they use this information to help you more accurately estimate the energy consumed by taking different routes. Boeing uses information like this from planes to fine-tune plane performance, reliability and efficiency. Eventually, when the Android app capability is turned on later this year, there will be the possibility to do lots more things with the car and the analytics we can produce from it.


Privacy concerns with the Tesla include the fact that you can know where it is, what you are doing, where you are going and how you get there. People could potentially learn your driving habits or even figure out that you went through a stop sign. The car links to your cell phone and therefore has access to contacts and other information. A virus-infected car could potentially deliver this information to unscrupulous people.


Privacy broaches security if, for instance, someone could use the API to find and open the car. If valuable things are in the car, this could compromise security as well. I believe, like others, that Tesla should implement a 2-factor authentication scheme for the car sooner rather than later.

Here is a video on various ways to help secure the IOT and the Tesla:


The biggest concern for the Tesla as an IOT is not privacy but safety. Right now the Tesla is the safest car ever built according to the NHTSA. However, there are numerous vectors for a Tesla to become an unconventional safety risk. I doubt there is a way to make the batteries explode, but it certainly may be possible to damage the car as well as the passengers inside. One demo of an app showed a passenger able to force the steering wheel to move without the driver having control. The ability to simply turn the car off in driving mode could be dangerous, as could doing anything to the brakes. There is even the possibility that an app causes the Tesla to reboot, or flashes the screen or driver console in such a way that the car becomes inoperable for a short time while driving. It's scary to think of all the ways something could go wrong. The ability to interact with the car's API while the car is in operation could be a serious problem, and the fact that nearly all functions of the car are digitized and controllable means the exposure could come through apps installed on the car, or over the air through the API, by inserting a virus or just using the API in ways that cause problems. I think Tesla needs to beef up its story around security, as do all the IOT vendors in general. Would I suggest someone not buy a Tesla because of these concerns? No! Of course not. These are the kinds of general problems that all car companies and all IOT vendors will have to deal with. Tesla is just on the leading edge. The Tesla is a fabulous vehicle and I believe they will address these issues in a timely manner to make it a viable IOT and light the way for others.

Supporting Articles:

sanjeewa malalgodaFix oracle database connecting issue due to missing time zone -java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1 ORA-01882: timezone region not found


When you connect a web server or application to an external Oracle database you need to add certain configurations. But sometimes you might get the following error due to a missing time zone value.

[2014-04-03 10:57:43,128] ERROR - DatabaseUtil Database Error - ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region  not found
java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-01882: timezone region  not found

We can fix this issue by setting the time zone as a JVM parameter. You can add the following to the script that runs your Java code.
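The original snippet is not shown here, so this is a minimal sketch; the Asia/Colombo zone and the JAVA_OPTS variable name are assumptions you should adapt to your own server startup script:

```shell
# Pass an explicit time zone to the JVM so the Oracle JDBC driver does not
# fail with ORA-01882; Asia/Colombo is just an example zone.
JAVA_OPTS="$JAVA_OPTS -Duser.timezone=Asia/Colombo"
export JAVA_OPTS
```

For a WSO2 server you would typically add this line to wso2server.sh; for Tomcat, to catalina.sh or setenv.sh.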


Then the issue is fixed, as an explicit time zone is passed to the JVM.

Udara LiyanageBRINGING THE CLOUD DOWN TO THE GROUND – Cloud computing explained


Earlier, when people were thirsty, they had no option other than digging a well. However, with population growth and urbanization, it was impractical to build wells to address each family's need. Instead people created tanks where a huge volume of water could be stored. The stored water is supplied to the citizens of many towns and villages. This method has many visible advantages over the traditional way: families don't have to put effort into digging wells for themselves, spend time maintaining them, or set aside a dedicated and suitable place. An established organization such as the Water Board does everything necessary to take water from a river, pump it to a tank, purify it and deliver it to the end consumers. Consumers only pay for the volume of water they consume, which covers the overhead costs of the producer as well.


The above scenario is an analogy to the base concept of cloud computing. Earlier, when computing facilities or computer systems were required by an organization or an individual, they had to buy computers and the network infrastructure. In addition, they had to buy licenses for the necessary software and install it, allocate dedicated space and employ personnel to maintain the system. This requires a considerable initial setup cost and recurring maintenance cost. Scaling up and down based on market and business needs is not easy, as it involves direct costs as well as the management overheads associated with it. When the power of the internet was practically proven, the concept of cloud computing emerged. As water is received via pipes, computing power is received via the internet. Thus cloud computing can be explained as "computing via the internet".


Cloud computing comes in different flavors. The following will be discussed in this article.

  • IAAS – Infrastructure As A Service
  • SAAS – Software As A Service
  • PAAS – Platform As A Service

IAAS – Infrastructure as a Service

Dedicated or shared computing infrastructure is provided with basic features such as the OS and other domain-specific features installed. Users see these services as virtual machines with comprehensive administrative powers, or as hosting services with limited administration capabilities. Amazon EC2 is the most popular IAAS provider, providing virtual computing power in the cloud, and there are also thousands of web hosting service providers.

For instance, let's consider Amazon EC2. After creating an account, users can create virtual machines with their preferred OS, disk size, virtual memory size, firewall etc. Thereafter we can access those machines just as we access normal physical machines in a remote location. We can stop or terminate the machines when we are done with them, or restart them later when we need them again. In addition, a snapshot of a machine can be created at any point so we can boot up other machines from the same state later. Following is a screenshot of the management console of EC2.
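Besides the console, the same lifecycle operations can be scripted. A rough sketch using the AWS CLI; the AMI id, instance id, instance type and key name below are all placeholders, not values from this article:

```shell
# Launch a new virtual machine (AMI id, type and key name are placeholders).
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name my-key

# Stop the machine when done with it, or terminate it entirely.
aws ec2 stop-instances --instance-ids i-0abcd1234
aws ec2 terminate-instances --instance-ids i-0abcd1234

# Create an image of the machine state, so other machines can be
# booted from the same state later.
aws ec2 create-image --instance-id i-0abcd1234 --name "my-snapshot"
```

Running these requires an AWS account and configured credentials, just as the console does.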


There are more services such as load balancing, scaling up and down etc. based on user needs. Users are charged according to the resources they consume, such as virtual memory and storage, and the time the machines were active. Another advantage of IAAS is that we can scale up the system instantly. We can easily add and remove hard disk space, memory, the machines themselves, or database instances; no ordering, no deployment, no baseline etc. Just play without even plugging. The coolest thing is that resources can be added in rush hours and removed when things go dull: dynamic resource management to its max! These systems also support automatic scaling up and down, so application developers don't have to think about the typical constraints of a shared computer architecture.

SAAS – Software as a Service

Adding another layer on top of the computing infrastructure, users are given a valuable service through a single piece of software or a collection of software. Google's collection of SAAS is the most famous free services stack among the general public; Gmail and Google Docs are popular examples of free SAAS. Even this article itself was started in the clouds, as a Google Doc, in favor of cloud computing. There are thousands of other paid services which provide services like online time management systems, online POS systems etc. Paid services charge in different ways, such as per year, per user and PAYG. PAYG, or pay as you go, is the cost model that charges for the resources (computing power, storage etc.) consumed. Users have the freedom to choose the cost model that best suits them. So you might ask how free services generate revenue. The answer is that it depends. Let's consider Gmail as an example. Its main revenue model is advertisements. Gmail has more than 500 million users, which implies Gmail displays more than 5 billion advertisements a day if we assume one average user sees 10 advertisements a day. Although it is impossible to know how much they make in a year, it should be plenty. Additionally, Google offers only a limited email and document space. Normal users can live with that, but large corporate users need to pay for additional email and document space when they grow. For example, there are numerous organizations in Sri Lanka that use Google services by paying subscription fees.

PAAS – Platform as a Service

There are organizations that provide platforms to build applications in the cloud, where other organizations can build software services and host them. PAAS facilitates building custom applications using a set of software platforms. This includes facilitating the development, maintenance and retiring stages of a typical software lifecycle. WSO2 Stratos, WSO2 App Factory, Heroku and Redhat OpenShift are high-standard PAAS vendors. Please read more about WSO2 App Factory in this introductory article by the same author.

As an example, WSO2 Stratos is a 100% open source, multi-tenant PAAS for private, public and hybrid cloud deployments. It provides a complete stack of middleware products as a service: mediation, governance, security, gadgets, monitoring and more, using the capabilities of WSO2 products. Third-party runtimes such as PHP, MySQL and Tomcat can be plugged in via a concept called a cartridge. StratosLive is the Stratos version hosted by WSO2. It is free for anyone interested.


Not every service hosted on the Internet is a cloud service. Cloud services have some common characteristics, listed below, which qualify a service as a cloud service.

Multi tenancy

A single piece of software is shared among multiple users or organizations, called tenants. The system and the data are partitioned among tenants, and different tenants can have different configurations. But the software service provided should be the same, with different features turned on or off based on the costing model etc.

Auto Scaling

The system automatically scales up (spawns serving instances) during rush times and automatically scales down (kills the additional instances) during free times.


The system supports multiple nodes/instances running concurrently on different machines in a network; the ability to save and transfer the overall state so that they work as a single servicing entity is also an attractive feature.





As every coin has two sides, cloud computing also has some inherent disadvantages. Organizations which handle confidential data, such as government organizations or mission-critical applications, may think twice before storing their information in a public cloud.

Though it seems cheaper at the beginning, the cloud may be expensive in the long run. Another problem is that organizations may have to tolerate cloud service providers' downtimes and bugs, which can hinder the overall quality of service. If you remember the recent service degradations of Outlook and Gmail, you know what I mean.

Looking at the current industry, more and more organizations tend to move to the cloud, since organizations need the competitive advantage of delivering their services or products to the market early. Almost all the popular companies, like Microsoft, Amazon, eBay, IBM and Redhat, have stepped into the cloud. Big companies like Google were born out of cloud computing, and all their services are available via the cloud. Through the cloud, companies can make their services available to customers faster. Cloud-native features help to build more distributed, auto-scaling services that support any number of users, letting companies address more users. For instance, Microsoft Office, which was a desktop application, became more collaborative and accessible when it reached the cloud as Office 365.

The cloud is the new computing model of the era for building applications quickly. However, moving to a cloud might be time consuming initially, as information may have to be separated and treated differently based on security needs.


The rise of cloud computing is remarkable, as it has achieved great success in this short time period. Almost every person who uses a computer or a modern mobile phone benefits from the power of the cloud. Gmail, Facebook, Dropbox, Google Drive and SkyDrive are a few of the cloud-backed systems an average person uses. A recent report by IBM states that businesses that moved to the cloud have doubled their revenue compared to their non-cloud counterparts. According to a survey, 75% of the surveyed businesses use the cloud, and the global market for the cloud is expected to be $158.8 billion by 2014, a 126.5% growth compared to 2013. So it is clear that the past of cloud computing was bright, but the future will be even brighter, and businesses will be forced to use it to survive in the industry.



Hasitha AravindaChange maven local repo in one command - Ubuntu/Linux

This will be useful for developers who work with multiple Maven local repositories.

Requirement: set the M2_HOME environment variable before you start.

Add the following bash function to the .bashrc in your home directory. Change M2_LOCATION if you need to.
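The author's original function is not shown here; below is one possible sketch. It uses MAVEN_OPTS to switch the local repository, and M2_LOCATION is an assumed parent directory that holds your named repository folders:

```shell
# Hypothetical changeM2 helper; M2_LOCATION is an assumed parent directory
# containing one folder per local Maven repository -- adjust to taste.
M2_LOCATION="$HOME/m2repos"

changeM2() {
    # Ask for the repository folder name and point Maven at it.
    printf "Enter the m2 repo name: "
    read repo
    export MAVEN_OPTS="-Dmaven.repo.local=$M2_LOCATION/$repo"
    echo "Maven local repository is now $M2_LOCATION/$repo"
}
```

After sourcing, every subsequent `mvn` invocation in that shell will use the chosen repository folder.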

Then reload the .bashrc using command,
$ source ~/.bashrc

How to use: 

Type changeM2 in the terminal and give the name of the m2 repo (folder) you wish to switch to.

Udara LiyanageIntroducing WSO2 AppFactory

By definition, WSO2 App Factory is a multi-tenant, elastic and self-service Enterprise DevOps platform that enables multiple project teams to collaboratively create, run and manage enterprise applications. Oh! Kind of confusing? Yes; as with most other definitions, only a few will grasp what App Factory means from a first look at it. In simpler words, WSO2 App Factory is a Platform as a Service (PAAS) which manages enterprise application development from the cradle of the application to the grave. (Still confusing? The figure below illustrates the move from traditional on-premise software to cloud based services. You can see Platform as a Service in the third column.)


Unless it is a university assignment or test, every real-world application has to undergo several phases until it is ready to go live. Applications have to be designed, developed and sent to QA for testing. Then QA has to test them rigorously before approving them for production. Then comes the bug fixing and stabilization phase. When the software is ready, it gets deployed. Finally, when the application has completed its job, it needs to be retired.

Organizations have to use a number of tools in each of the above phases. For instance, developers may be using SVN for creating code repositories, Maven or Ant for building the projects, JIRA for ticket tracking and various other tools for finding bugs in the application. The above tools are independent of each other, which means organizations have to put considerable effort into deploying them. If you are a developer, QA manager, system administrator, DevOps engineer or any other stakeholder involved in application development, there is no doubt that you have endured the pain of the above and might be wondering: "Is there one single tool which does the work of all of the above tools?". WSO2 App Factory does exactly that. By using App Factory you gain all the support for your application development, all under one roof.

The individual building blocks of App Factory are illustrated in the diagram below.


Diagram 1 depicts the components of App Factory. The management portal, which is the main interaction point with the system, is at the center. Source code management, issue trackers and other features are accessible via the portal. When a developer creates an app via the management portal, they are provided with a space in the repo, space in the build environment, a project in the issue tracker and so on. You clone the repository you are provided into your development machine, then develop the application with your favorite programming IDE and commit. WSO2 is planning to roll out a browser-based IDE in the future to make the complete lifecycle run on the cloud. The application you are developing is continuously built in the cloud using your build tool. If automatic build is enabled, the build process will be triggered automatically when you commit. If auto deploy is enabled, the app will be deployed in the development cloud automatically after the build. After development is completed, the apps will be promoted to the test cloud. This promotion retires the apps from the development cloud and deploys them in the test cloud. The QA department will test them, promote them to the production or staging cloud if the tests pass, or demote them back to the development cloud if they fail. The ultimate step is to send the apps to the app store, enabling users to discover them. The most interesting thing is that all the above tasks can be executed via a single management portal.



Features of App factory

    1. Self-provisioning of the workspace and resources such as code repository, issue tracking, build configuration and bug finding tools, etc.
    2. Support for a variety of application types

○     Web applications
○     PHP
○     Jaxrs
○     Jaxws
○     Jaggery
○     WSO2 ESB
○     WSO2 BPEL
○     WSO2 Data services

  3. Gather developers, QAs and DevOps of the organization into the application workspace
  4. Automate continuous builds, continuous tests and development activities
  5. One-click solutions for branching and versioning
  6. Deploy applications into the WSO2 rich middleware stack
  7. No need to change your way of doing things

○     App factory can be configured to integrate with your existing software development life cycle.
○     Integrate with your existing users via LDAP or Microsoft Active directory


Yes, WSO2 App Factory is customizable. For instance, organizations are not required to use the tools that App Factory supports; they can plug in a tool of their preference. It is a matter of integrating another tool. Different organizations have different workflows, and App Factory can be configured to suit each one.

In summary, WSO2 App Factory is a cloud-enabled DevOps PAAS for the enterprise which manages the entire life cycle of an application. It leverages application development, giving enterprises a competitive advantage in the cloud.

Enough talking; help yourself by visiting the live App Factory preview. It is free and open source.

This article is just a bird's-eye view of WSO2 App Factory. Visit its home page to broaden your knowledge. A good short video about the product is shown below:

Malintha AdikariCreate Apache Maven archetype from scratch

Archetype is a Maven project templating toolkit. An archetype is defined as an original pattern or model from which all other things of the same kind are made. The name fits, as we are trying to provide a system that gives a consistent means of generating Maven projects. Archetype helps authors create Maven project templates for users, and provides users with the means to generate parameterized versions of those project templates.

You have two options to create your own archetype.

  • Create an archetype from scratch
  • Create an archetype from an existing project

I browsed the internet to find a clue about creating a Maven archetype from scratch. I couldn't find many resources for it, though I found many for creating an archetype from an existing project.

In this post I will discuss how to create your own Maven archetype, host it in a remote repository and use it to generate a project.

Before starting to create the archetype, you have to determine the source files and resources which should be in the project once it is created using your archetype.

1. There is a standard file structure for a Maven archetype. You have to create the root folder, sub folders and a few files inside it (Diagram 1.0 shows the file structure inside a Maven archetype).


Our root folder is "archetype". Inside it we have an "src" folder, which includes the resources to be included in our archetype, and a "pom.xml" file. There is a "main" folder inside the "src" folder, which includes a "resources" folder. The "archetype-resources" folder inside that includes all resources to be included in the archetype project. "META-INF" includes a "maven" folder which includes the "archetype-metadata.xml" file; it has the configuration details for manipulating files when generating a project. You have to add the "pom.xml" file for your desired project into the "archetype-resources" folder, and all source scripts and test scripts into the "src" folder inside it. All the resource files/folders to be included in generated projects should go into the "resources" folder inside "src" as shown in the diagram. The "archetype-resources" folder has the exact file structure that will go into a project generated using your archetype.

Now let's have a closer look.
  • archetype/pom.xml

Note: The green colored components are optional. You have to include them when you are going to deploy your archetype to a remote repository.


There are some important attributes you have to configure in this pom.xml file:
  1. groupId - the group ID of your archetype
  2. artifactId - this should be unique
  3. version - different versions of the same artifact (archetype) can be available, so this attribute is used to distinguish them
The above three attributes together are used to identify your archetype. Let's see how things work under the hood. When you try to generate a project using your archetype (with the mvn archetype:generate command; this will be explained later in this tutorial), the archetype plugin looks at an XML file called the catalog (local or remote) where knowledge about archetypes is stored. The catalog distinguishes archetypes using the above three attributes.
  • archetype/src/main/resources/archetype-resources

This folder contains the exact file structure of the resultant project. You can make a simple Maven project (containing pom.xml, src, test and resources folders) and copy-paste it in here. You can include all the resource files and test scripts in this structure.
  •  archetype/src/main/resources/META-INF/maven/archetype-metadata.xml

The metadata about an archetype is stored in the archetype-metadata.xml file located in the META-INF/maven directory of its jar file. This is the archetype descriptor used to describe the archetype's metadata.

Note: The green colored component is optional. Here we have defined 3 attributes; users are forced to give those 3 attributes when creating projects using this archetype. Custom Maven parameters can be provided using this method.

2. After creating the file structure, you have to build the project. Go to the project root folder "archetype" and build it using the following command.
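The command itself was not preserved in this post; the standard way to build and install an archetype into the local repository is:

```shell
# Run from the "archetype" root folder; builds the archetype jar and
# installs it into your local Maven repository.
mvn clean install
```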

Then you can find a "target" folder inside the project root folder which includes the jar file. The name of the jar file is <artifactId>-<version>.jar (artifactId and version are taken from your pom.xml file).

3. Now your archetype has been added to your local catalog. If you want to add your archetype to a remote repository (a remote Maven Nexus repo, for example), you have to deploy it. Execute the following Maven command, giving the appropriate configuration details.
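The exact command is missing from this post; a hedged sketch, where the repository id and URL are placeholders for your own Nexus settings (credentials for the id would live in ~/.m2/settings.xml):

```shell
# Deploy the archetype jar to a remote repository; "my-nexus" and the URL
# are placeholders for your own repository configuration.
mvn deploy -DaltDeploymentRepository=my-nexus::default::http://nexus.example.com/content/repositories/releases
```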

4. If you have successfully deployed your archetype into a remote repository (or your local repository), you can use it to generate projects. Go to a desired location and execute the following Maven command.


  • Remote repository details
    ◦ -DarchetypeRepository - remote repository where the archetype is deployed
  • Archetype details
    ◦ -DarchetypeGroupId - group id of the archetype
    ◦ -DarchetypeVersion - version of the archetype to be used
    ◦ -DarchetypeArtifactId - artifact id of the archetype to be used
  • New project details
    ◦ -DartifactId - artifact id of the project to be created
    ◦ -DgroupId - group id of the project to be created
    ◦ -Dpackage - package name in the project to be created

If you have given the above parameters correctly, it will generate a new project in your location.
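Putting the parameters together, the invocation looks like the following sketch; every coordinate and URL below is a placeholder, not a value from this article:

```shell
# Generate a project from the archetype; replace all -D values with your
# archetype's coordinates and your new project's details.
mvn archetype:generate \
  -DarchetypeRepository=http://nexus.example.com/content/repositories/releases \
  -DarchetypeGroupId=com.example.archetypes \
  -DarchetypeArtifactId=my-archetype \
  -DarchetypeVersion=1.0.0 \
  -DgroupId=com.example.app \
  -DartifactId=my-new-app \
  -Dpackage=com.example.app
```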

Madhuka UdanthaReal time event tracking in Gmail

This is a really powerful feature, and it works properly and nicely in Gmail.

When someone tried to access my Gmail account, Gmail's event tracking noticed it as unusual activity and blocked it.

It also sent me an SMS at the same time.


It is real-time event monitoring, and it blocked the attempt to access my Gmail.

When I logged in to Gmail in the morning, it offered a nice recovery flow as well.



It shows the hacker's access attempt. This is a really nice and worthwhile feature to have in email, and in any product of ours that holds valuable information or data.


Chris HaddadBig Data Blocking and Tackling

Are you practicing big data blocking and tackling? Ron Bodkin (@ronbodkin) penned a good post describing the value of having information in one place: the mantra of data warehouses and data marts.

While Hadoop makes it easier to warehouse data (due to flexible schema model), effective analytics across disparate data sources still requires defining data semantics, data mapping, and master data sources. Don’t forget these important foundational building blocks.

The Path to Big Data Requires Little Data

In a recent workshop with industry IT practitioners, the focus was on little data problems. The following problems will inhibit scaling little data to big data:

  • Uneven data management maturity across the organization
    ◦ Emerging master data management practices
    ◦ Minimal identification of a single source of truth
    ◦ Little agreement on core data entity representation
  • Enterprise information sharing platform not in place
    ◦ Fragmented data silos and data repositories
    ◦ Ad hoc, project-level data integration
    ◦ Limited data virtualization and data services
    ◦ Proliferation of unknown Excel spreadsheets


In addition to copying legacy data, some BDP implementation roadmaps tie directly into business activity message streams and don’t wait for bulk copies.

Big Data Reading Recommendations

The WSO2 Big Data Story

Use Little Data Analytics

Improving Team Performance with Big Data



Denis WeerasiriWhen flight tickets get cheaper in advance

I wrote a small Google Apps Script with Google Spreadsheets to track when flight tickets get cheaper in advance. I tracked, weekly, the minimum return-ticket price from Sydney to Colombo for a fixed departure date (7th of April 2014). The answer was that ticket prices reach their minimum about 5 months in advance on average. Due to the disappearance of the Malaysian airliner MH370, I noticed a sudden fall in Malaysia Airlines ticket prices after the 8th of March 2014, so I added two charts. The first one doesn't include the deviations related to MH370; the second chart includes them.

John MathonDisruptive Forces : Disruptive Megatrends

A number of analyst firms and pundits have declared lists of megatrends. For examples, you can see my blogs:

IDC Emerging Trends


Gartner Top Megatrends

I have my own list of what I call "disruptive" technologies and trends. This comes from talking to people, observing what companies are doing and what people are saying, as well as my own intuition, based on what we have seen in the past, of what seems likely to happen.

I divide my list into 3 categories: 1) real world disruption, 2) enterprise disruption and 3) tech disruption. By real world disruption we are talking about things that will change the way we do things day to day, whether you are a plumber, a retired person or a high tech worker. By enterprise disruption I mean the trends that will change the way business operates, potentially causing some businesses to fail or leap to the front of the pack. By tech disruption I mean technologies that are going to change the way we technologists build, deploy and use technology, and that will lead to the growth of some tech companies.

I have 7 real world megatrends, 4 enterprise megatrends and 5 tech megatrends. Unlike other analysts, who take a short term and very qualitative approach to prediction, I am going out on a limb to say how big I think these trends are in terms of dollar impact on the economy, how fast each trend will move, and how far I estimate it is along its curve. We start with tech megatrends today; enterprise and real world megatrends will follow shortly.

Top Tech Megatrends:

1) PaaS (devops and the improvement of devops processes)

PaaS and DevOps. Gartner, Forrester and IDC all expect PaaS to grow by leaps and bounds in the next 5 years, and I too am a big enthusiast. Unlike the last 30 years, which were characterized by a wide plethora of ALM tools and processes adopted by companies, the IaaS, open source and other trends are right now driving a consolidation of tools and methods for PaaS, which will make adoption of PaaS and DevOps much faster and more complete than in previous cycles. The cloud has left all previous ALM and DevOps tools in the dust, so the few tools created in the last 10 years are dominating the space. Git, Maven, Jenkins, Chef, Puppet, Redmine, Jira, Java or Java-based languages, and container technologies of various types have become dominant in any modern enterprise development shop these days, and I see little resistance to people moving to them. This consolidation will, in my opinion, lead to wider and faster adoption of PaaS.

PaaS can be broken down into various flavors and subtypes: iPaaS, aPaaS, BPMaaS. There are also what I call application PaaSs, toy PaaSs, generic PaaSs and ecosystem PaaSs. Each of these represents a different type of platform technology used to build different types of applications. I have a blog describing some of these variants on PaaS:

A simple guide to PaaS, iPaaS, IaaS, …

I believe this is a 100 billion dollar market in 20 years. Gartner predicts 14 billion by 2017. I think PaaS is ultimately a $100B market; we are at 3% of that today and gaining 3% a year, so it will take a long time to attain its ultimate potential, but in 15 years we could see the majority of all applications running and being built in PaaS environments. What will slow down adoption is the existing legacy infrastructure and the need to train many people in PaaS technology.

Today PaaS comes in a number of delivery flavors. In a public PaaS, a service provider provides not only the technology but the services behind the technology. Private PaaS technology solutions, which integrate best practices, you have to run in a cloud infrastructure that you operate yourself, usually on-premise, though they could run in a public or private IaaS vendor. I think that over time the complexity of operating a PaaS will cause most companies to switch to the former model, but today about half of PaaS is being sold as private PaaS. The trend will eventually move to public PaaSs.

2) IaaS Commoditization and the end of Enterprise Premise Server Farms

I don’t think all companies will give up owning server farms, nor do I think IaaS is the cat’s meow: painless, cheaper, and perfect.  The reason I am convinced that enterprises will eventually give up owning so much of their own technology is incompetence.  I believe most organizations are simply not well equipped to own and manage this equipment and the technology that goes with it.  That is not a negative; they shouldn’t be.  If you sell shoes, concentrate on shoes.  The best shoes should win, not the shoe company with the best IT.

This movement away from enterprise farms will be slowed by many factors.  Enterprise management and workers as well as vendors may push back on too many companies moving too much to the cloud too fast.   Government regulation may inhibit some movement.

The lack of standardization among cloud IaaS vendors makes it difficult for some companies to decide on the best course and will lead to slower adoption.  For instance, Google has Google Compute Engine and Google App Engine: great ideas, but nothing like what Amazon does with its AWS service and the various services and tools it offers.  SoftLayer has a different set, and a number of companies have chosen OpenStack.  If standardization wins, then OpenStack will end up winning, but we frequently see that one proprietary, non-standard vendor usually keeps a large market share.  I am not sure if that is Amazon or Google; they are both very strong.

Other things will slow adoption of IaaS.    A large part of enterprise server farms can’t be easily transitioned due to lack of compatible hardware in IaaS vendors as well as prohibitive cost/benefit for doing this.   It will therefore take at least 30 years for this transition to happen.  However, the size of this is so great (1 trillion or more dollars worldwide in enterprise technology infrastructure) that as slow as this movement is it will be a huge economic and transformative thing.   I see that almost all new development will not be implemented on enterprise owned hardware within 5-10 years.

Contrast this with the fact that network effects of the cloud will create a massive new demand for computing resources and for applications so the size of this market is not just “the current IT infrastructure” but the vastly increased infrastructure needed to support bigdata and all the new devices and data about people and things and all the unanticipated new applications of all this data and connectivity.  This trend is huge and will stay huge for a long time.   I anticipate this could be a multi-trillion dollar industry in 20 years.

3) API Enterprise Refactoring

Enterprises are rebuilding their infrastructure along API-centric architectures.  Some are moving faster than others, but the trend is huge and pervasive; there is hardly a company I talk to that isn’t doing this or something like it.  Examples like Google, Netflix, Twitter, Salesforce and others are showing that you can make serious money from APIs ($billions).  More important is that we are seeing for the first time true reuse from APIs and services, meaning higher efficiency and faster time to market.  All these things are extremely compelling to any technologist and can be justified to the business on a revenue-generation or cost-savings basis.  Lastly, APIs are the tools for building mobile applications, and it is clear that mobile is a huge megatrend that will move applications away from the desktop more and more.  The API refactoring trend also supports social, because APIs make it easy to insert the data gathering and analysis that allow an organization to become a data-driven enterprise.  (Another megatrend I will talk about.)

Part of Enterprise Refactoring and the emergence of the services API economy is the Store.  The Enterprise Store will become the defacto way we manage technology in enterprises.  We have found the Store paradigm to be disruptive with Mobile and with APIs.  I expect we will see this happen with all enterprise Apps and Services.  The Store provides a place to see and manage all enterprise IT services, data, applications, infrastructure.  Check out Store as an early example of this.

The API refactoring / API Economy is a $200 billion market in 20 years.  We are 5% of the way into this market today and growing very fast at 3% a year.

4) Open Source

This megatrend is farther along than some of the others, but I believe it has legs.  I believe that this year and last year we saw the widespread acceptance of open source in enterprises at the highest levels.  It is true that open source has been in enterprises for more than a decade, but now we are seeing organizations as conservative and rich as banks and financial companies, health companies, and state governments making a commitment to open source as a framework for their enterprise architecture.  I see the cracking of the big enterprise players.  Salespeople I know from my past days of enterprise sales are seeing customers delay these proprietary software license purchase decisions.  I hear people in companies saying “I hate x” (replace x with your least favorite proprietary enterprise sales vendor) more and more, and looking for some way to avoid the millions and millions of these license costs.  Proprietary vendors are engaging in “openwashing,” a way of saying they are open source too, even when they aren’t.

This trend is not all cost based.  If people were only concerned about cost, the megatrend would have no legs.  The fact is that in many cases now the best software, the industry standard, is based on open source standards and open source solutions.  From operating systems to middleware and database vendors, every kind of enterprise software is being led by open source companies that are pushing the envelope faster than their proprietary competitors.

The reason for faster innovation at many open source companies and projects is a combination of cross-fertilization, with companies and inventors able to leverage open source and contribute back improvements faster, and a much lower cost of sales.  A typical enterprise software company will spend at least 30% or more of revenue on sales; open source companies in many cases (not all) spend less.

Overall, I am guessing that open source is a $20 billion business; we are 10% into this and will see a rapid increase at 5% a year, so it will take 10-15 years to reach that $20 billion market.

5) The iPaaS revolution

Integration is a massive business.   Most projects in companies have a large fraction which is “integration” combining one system with another.

When we have a service economy, when people have put services into the cloud by the tens of thousands, as we see happening today, and these grow as I expect in tech megatrend 3, we have a completely new way to integrate.  Applications become combinations of services, and integration is the combination of services.  We have new applications which are nothing more than integration, and new applications which depend on integrating things we never had before.

There is something called the network effect.  The network effect, sometimes called virality, is something like Moore’s law.  It says that as things and services and people become connected, new applications and new services become available, based on this ability to network all these things together to form ever more valuable services.  It is very hard to estimate the size of this, but I anticipate that the cloud is going to be mostly about network effects in 10-20 years, if not sooner.  That means the time to move is now!

The iPaaS revolution is about this network effect and the combination of services to create new services on top of existing services.  Uber, for instance, is an example of a network effect: smartphones have given the cab driver and the passenger a much more direct and simpler way to connect and do business.  The US economy is managed by the federal government based on statistics gathered from surveys and numerous delayed data sources.  Imagine if the government actually had accurate data on a minute-by-minute basis.  Could it manage the US economy better?  Could it make it easier to identify and help poor people, or otherwise provide services that make the economy function better?  What is that worth?

I suggest that the iPaaS revolution and the network effects are worth $1Trillion and that we are only 1% into this market growing at 3% a year.  Is that optimistic enough?   Am I smoking something or am I being pessimistic?


Some things are not on my disruptive technology list because they are on my other lists.  For instance, mobile and bigdata are part of Enterprise Disruption, so wait for those blogs before criticizing what’s not on this list.  However, feel free to comment on what you think is missing, or on any naivete I may have about which technologies are disruptive or the size of the markets.





The next blog on this topic will be the Enterprise Megatrends.

Kasun Dananjaya Delgolla: Let’s make our apps less battery consumptive

Major characteristics of a highly battery-consumptive app

1.) Apps that run without the user’s permission (Ex : Apps with administrator privileges).

2.) High usage of device functions which require high energy, such as GPS, accelerometer, camera and other sensors, or failure to turn off those functions when they are not in use.

3.) Apps which frequently wake the smartphone up or prevent it from going into sleep mode. (Handsets go into sleep mode when they are idle to enhance battery life, but apps can prevent this programmatically via wakelocks.)

4.) The “no-sleep energy bug” – a situation where at least one component of the device is woken up by our app and, due to a programming error, never put back to sleep, which also causes battery drain.

5.) Apps with high-end graphics such as games

Verizon rates apps by battery consumption [1]; if an app costs the device less than 30 minutes of battery time, it gets five stars.
  • 5 stars: current drain less than or equal to 5mA; up to 30 minutes battery life lost.
  • 4 stars: current drain 5-10mA; 30 minutes to 1 hour battery life lost.
  • 3 stars: current drain 10-15mA; 1-1.5 hours battery life lost.
  • 2 stars: current drain 15-20mA; 1.5-2 hours battery life lost.
  • 1 star: current drain over 20mA; over 2 hours battery life lost.
How do we optimize our app?

1.) Use the resource optimizer tool by AT&T [2] to test and optimize the app (this tool is free and open source).

2.) Use Wakelock Detector [3] to monitor our app's impact on waking the device up - This identifies all apps on the device that, with or without permission, override the sleep function.

3.) Performing updates while a device is charging causes minimal battery drain, so we can design our app to maximise its update rate when the device is charging, and reduce the update rate when the device is not connected, to conserve battery life. Read the Android developer guide [4] for more information.

4.) We can use BatteryManager [5] to broadcast battery and charging details. BatteryManager broadcasts whenever your device is connected or disconnected from power. These events can determine how often you start your app in a background state. To monitor changes in the charging state, register a BroadcastReceiver in your manifest to listen for these events by defining ACTION_POWER_CONNECTED and ACTION_POWER_DISCONNECTED. Generally, you only need to monitor significant battery changes. To do this, you can use the intents ACTION_BATTERY_LOW and ACTION_BATTERY_OKAY, so you can determine when to disable all your background updates. For the full tutorial, see Monitoring the Battery Level and Charging State [6].
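As a sketch, the manifest declaration for such a receiver could look like the following; the receiver class name PowerConnectionReceiver is an assumption for illustration (you would implement it as a BroadcastReceiver in your app):

```xml
<!-- Illustrative AndroidManifest.xml fragment; PowerConnectionReceiver is
     an assumed class name, not something provided by the platform. -->
<receiver android:name=".PowerConnectionReceiver">
    <intent-filter>
        <action android:name="android.intent.action.ACTION_POWER_CONNECTED"/>
        <action android:name="android.intent.action.ACTION_POWER_DISCONNECTED"/>
        <action android:name="android.intent.action.BATTERY_LOW"/>
        <action android:name="android.intent.action.BATTERY_OKAY"/>
    </intent-filter>
</receiver>
```

With a filter like this, the receiver fires only on charging-state transitions and significant battery changes, rather than on every battery-level tick.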

5.) The problem with using a BroadcastReceiver is that your app will wake up each time any of the receivers are triggered. However, you can disable or enable the broadcast receivers at runtime. This way the receivers you declared in the manifest are triggered by system events only when necessary. You can use the PackageManager to toggle the enabled state on any component defined in the manifest. - Blackberry developer blogs [7].

6.) If you determine that connectivity has been lost, you can disable all of your receivers except the connectivity-change receiver, which informs you when the internet connectivity status changes. Once you are connected, you can stop listening for connectivity changes. You can use this technique to delay larger downloads by enabling a broadcast receiver that initiates downloads only after you are connected to Wi-Fi.

7.) Stop view animations when they're not really necessary (including progress bars).

8.) Be very careful with location requests - unless your app needs to know the device's precise geographic coordinates at all times (Ex: a map or navigation app), you should carefully decide how often to send location requests.

9.) Push data to your apps (using GCM) instead of always making data calls to check for updates.

To read in-depth on mobile application quality improvement, read the article [8] by “The App Quality Alliance”


Chamila Wijayarathna: Adding Client Side Validation to WSO2 Enterprise Store Publisher

WSO2 Enterprise Store[1] provides a user-friendly experience to the enterprise for accessing and managing digital assets. It consists of 2 components, Publisher and Store. In Publisher, users can add assets to the enterprise store, publish assets, and edit existing assets. In Store, users can find assets and related metadata, search for assets, etc. 



In this blog, I am going to explain how to add client-side validations to the Publisher, to validate the details entered when adding new assets. I don't have much experience with the JavaScript, HTML, CSS, jQuery, Jaggery, etc. used in Enterprise Store, but even with little experience I found it very easy to add validations to the Publisher.
The initial version of Enterprise Store includes some validations; one of them checks whether mandatory fields have been filled or not.

First let's see how this validation happens; we can add the custom validations our use cases need in the same manner.
In the <ES_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/default/js/common/form-plugins folder, you can find a file named common-plugins.js. This file includes the jQuery plugin definitions which are used to add various validations. The 'RequiredField' form manager does the validation for mandatory fields.

    function RequiredField() {
    }

    RequiredField.prototype.isHandled = function (element) {
        var isRequired = element.meta.required ? element.meta.required : false;
        return isRequired;
    };

    RequiredField.prototype.init = function (element) {
        // Mark the field as required in the UI
        $('#' + element.id).after('<span class="label-required">*</span>');
        $('#' + element.id).attr('required', 'true');
    };

    RequiredField.prototype.validate = function (element) {
        var value = $('#' + element.id).val();
        if (value == '') {
            return {msg: 'Field: ' + element.id + ' is a required field.'};
        }
    };

    FormManager.register('RequiredField', RequiredField);

The validate method contains the logic which checks whether the field is filled with valid values. Then, in the <ES_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/default/partials/add-asset.hbs file, it is declared which fields should be validated by this plugin. 

{{#if this.isTextBox}}
    {{#if isReadOnly}}
    <input type="hidden" name='{{this.name}}' value='{{this.value}}'/>
    <input id='{{this.name}}' type='text' value='{{this.value}}' class="span8 fm-managed" data-use-plugins="TextFieldValueExtractor,ReadOnlyField" data-ready-only="{{this.isReadOnly}}"/>
    {{else}}
    <input id='{{this.name}}' name='{{this.name}}' type='text' value='{{this.value}}' class="span8 fm-managed" data-use-plugins='RequiredField,TextFieldValueExtractor' data-ready-only='{{this.isReadOnly}}' data-required='{{this.isRequired}}' />
    {{/if}}
{{/if}}

{{#if this.isDate}}
    <input type='text' id='{{this.name}}' name='{{this.name}}' value='{{this.value}}' class='fm-managed' data-ready-only='{{this.isReadOnly}}' data-required="{{this.isRequired}}" data-use-plugins="RequiredField,DatePickerPlugin" data-date-format='yyyy-mm-dd' >
{{/if}}


The last step is that it should load the common-plugins.js file when the 'create' button is clicked. For this to happen, 'common/form-plugins/common-plugins.js' has been added to the resources loaded in the <ES_HOME>/repository/deployment/server/jaggeryapps/publisher/themes/default/helpers/add-asset.js file.

var resources = function (page, meta) {
    return {
        js: ['common/form-plugins/common-plugins.js', 'logic/asset/add-asset.js']
    };
};
So now, for each textBox field and date field, the above validation runs. The isHandled method checks whether the current field has 'isRequired' set to true, and the validation runs only for those fields.

Now let's see how we can add our own validation to the publisher add-asset form using the same method. I am going to check whether the version has the x.y.z format.
First I created a separate validate.js file in the same location as common-plugins.js, declaring a new form manager with the logic for the above validation. We could add this inside common-plugins.js as well, but then updating our Enterprise Store to a new version could cause trouble.

$(function () {

    function AssetStoreValidationField() {
    }

    AssetStoreValidationField.prototype.validate = function (element) {
        var value = $('#' + element.id).val();
        if (element.id == 'overview_version') {
            var arr = value.split(".");
            if (arr.length != 3) {
                return {msg: 'Field: Version is not valid.'};
            }
            if (isNaN(arr[0]) || isNaN(arr[1]) || isNaN(arr[2])) {
                return {msg: 'Field: Version is not valid.'};
            }
        }
    };

    FormManager.register('AssetStoreValidationField', AssetStoreValidationField);
});
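The x.y.z check at the heart of this plugin can also be exercised in isolation. Below is a minimal standalone sketch; the function name isValidVersion is illustrative and not part of the Enterprise Store API:

```javascript
// Standalone sketch of the x.y.z version check (illustrative, not ES API).
function isValidVersion(value) {
    var parts = String(value).split(".");
    if (parts.length !== 3) {
        return false;
    }
    // isNaN("") is false, so reject empty parts explicitly.
    return parts.every(function (part) {
        return part !== "" && !isNaN(part);
    });
}

console.log(isValidVersion("1.2.3")); // true
console.log(isValidVersion("1.2"));   // false (too few parts)
console.log(isValidVersion("1.2.x")); // false (non-numeric part)
```

Note that this accepts any string with three numeric dot-separated parts; tighten it (for example with a regular expression) if stricter semantic-versioning rules are needed.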

Version will be a textBox field. So we only need to check this for textBox fields. 

{{#if this.isTextBox}}
    {{#if isReadOnly}}
    <input type="hidden" name='{{this.name}}' value='{{this.value}}'/>
    <input id='{{this.name}}' type='text' value='{{this.value}}' class="span8 fm-managed" data-use-plugins="TextFieldValueExtractor,ReadOnlyField,AssetStoreValidationField" data-ready-only="{{this.isReadOnly}}"/>
    {{else}}
    <input id='{{this.name}}' name='{{this.name}}' type='text' value='{{this.value}}' class="span8 fm-managed" data-use-plugins='RequiredField,TextFieldValueExtractor,AssetStoreValidationField' data-ready-only='{{this.isReadOnly}}' data-required='{{this.isRequired}}' />
    {{/if}}
{{/if}}


Finally we need to add this new validate.js file to add-asset.js helper file.

var resources = function (page, meta) {
    return {
        js: ['common/form-plugins/common-plugins.js',
             'common/form-plugins/validate.js',
             'logic/asset/add-asset.js']
    };
};

Finally, there is one other important thing: when adding the validate.js file to the helper, we have to add it before 'logic/asset/add-asset.js'. Otherwise the validation will not work properly.


Chanaka Fernando: WSO2 ESB HTTP transport properties tutorial

WSO2 ESB uses the property mediator to change the behavior of the messages flowing through the ESB mediation engine. The HTTP transport-level properties mentioned below can be used to access and change HTTP-level behavior. You can get a general idea about ESB properties from this blog post.

HTTP Transport properties


POST_TO_URI

This property makes the outgoing URL of the ESB a complete URL. This is important when we talk through a proxy server. You can set this property as below.

<property name="POST_TO_URI" scope="axis2" value="true"/>

Here is an example in which we can use this property.

<?xml version="1.0" encoding="UTF-8"?>
<!-- the proxy name is illustrative -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy">
   <target>
      <inSequence>
         <property name="POST_TO_URI" value="true" scope="axis2"/>
      </inSequence>
      <endpoint>
         <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
      </endpoint>
   </target>
</proxy>
If you do not use the "POST_TO_URI" property, then ESB will send the POST request as below.

POST /services/SimpleStockQuoteService HTTP/1.1

With the above property set, the request changes as below.

POST http://localhost:9000/services/SimpleStockQuoteService HTTP/1.1

This property can be used in a scenario where WSO2 ESB talks through an HTTP proxy server, and therefore needs to send the request with the full URL.


FORCE_SC_ACCEPTED

When set to true, this property forces a 202 HTTP response to the client so that it stops waiting for a response. You can set this property as below.

<property name="FORCE_SC_ACCEPTED" scope="axis2" value="true"/>

This property can be used in scenarios where the client sends a message to the ESB and the ESB stores it in a persistent store such as a message store. In this scenario, the client would wait until the timeout if the ESB did not send any response. Here we can use this property to send a 202 Accepted response to the client. Below is an example configuration where the ESB stores a message in a message store.

   <sequence name="main">
      <log level="full"/>
      <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
      <store messageStore="MyStore"/>
      <description>The main sequence for the message mediation</description>
   </sequence>
   <messageStore name="MyStore"/>


DISABLE_CHUNKING

Disables HTTP chunking for outgoing messages. HTTP chunking is a transfer-encoding method introduced with the HTTP 1.1 specification. It has a number of advantages over the older transfer mechanisms; one of them is that you do not need to know the content length of the message before you send it. The message is sent as chunks, and a specific empty chunk marks the end of the message. But some applications may not understand chunked messages. In that kind of scenario, you need to set this property to true as below.

<property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
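On the wire, the difference looks roughly like this (simplified; other headers trimmed, and the Content-Length value is illustrative):

```
Default (chunked) request:
    POST /services/SimpleStockQuoteService HTTP/1.1
    Transfer-Encoding: chunked

With DISABLE_CHUNKING set to true:
    POST /services/SimpleStockQuoteService HTTP/1.1
    Content-Length: 321
```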

If the calling back end (BE) expects the Content-Length header of the message, you need to set the properties below as well.

<property name="FORCE_HTTP_CONTENT_LENGTH" scope="axis2" value="true"/>
<property name="COPY_CONTENT_LENGTH_FROM_INCOMING" scope="axis2" value="true"/>

You can refer to the HTTP 1.1 specification for more information about chunked encoding.


NO_ENTITY_BODY

This property is used to specify whether an HTTP request has a content body. For GET and DELETE requests, this property is set to true. This property should be removed if a user wants to generate a response from the ESB to a request without an entity body, for example a GET request. You can set this property as below.

<property name="NO_ENTITY_BODY" action="remove" scope="axis2"/>

Here is an example where you can use this property in a REST API defined in ESB.

<api xmlns="http://ws.apache.org/ns/synapse" name="SampleAPI" context="/sample">
   <resource methods="GET">
      <inSequence>
         <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
         <payloadFactory media-type="xml">
            <format>
               <!-- the m: namespace URI is illustrative; the original was not preserved -->
               <m:result xmlns:m="http://services.samples">true</m:result>
            </format>
         </payloadFactory>
         <respond/>
      </inSequence>
   </resource>
</api>

You will get the response as below.

<m:result xmlns:m="http://services.samples">true</m:result>


REST_URL_POSTFIX

In the case of GET requests through an address endpoint, this property contains the query string. The value of this property will be appended to the target URL when sending messages out in a RESTful manner through an address endpoint. This is useful when you need to append a context to the target URL in RESTful invocations. If you are using an HTTP endpoint instead of an address endpoint, specify variables in the format "uri.var.*" instead of using this property. You can set this property as below.

<property name="REST_URL_POSTFIX" value="/context" scope="axis2"/>

In the example given below, this property is used to append the customer id to the http url.

 <proxy xmlns="http://ws.apache.org/ns/synapse"
        name="CustomerServiceProxy"
        transports="http https">
    <target>
       <inSequence>
          <filter xpath="//getCustomer">
             <!-- the expression below is illustrative; the original was not preserved -->
             <property name="REST_URL_POSTFIX" expression="fn:concat('/', //id)" scope="axis2" type="STRING"/>
             <property name="HTTP_METHOD" value="GET" scope="axis2" type="STRING"/>
             <header name="Accept" scope="transport" value="application/xml"/>
          </filter>
       </inSequence>
       <endpoint>
          <address uri="http://localhost:9764/jaxrs_basic/services/customers/customerservice/customers"/>
       </endpoint>
    </target>
 </proxy>
In the above example, the actual request from the ESB will go to http://localhost:9764/jaxrs_basic/services/customers/customerservice/customers/1234, where 1234 is the customer id.

There might be situations where you need to make REST calls using the send mediator of WSO2 ESB. You might then have noticed that the endpoint URL you specified in the endpoint configuration gets suffixed by a URL fragment.

This happens whenever you do a REST call using the send mediator. In order to get rid of it, specify the following property before the send mediator.

<property name="REST_URL_POSTFIX" scope="axis2" action="remove"/>

Chanaka Fernando: WSO2 ESB properties tutorial

WSO2 ESB provides properties as a way to control different aspects of the messages flowing through the mediation engine. They do not change the content (payload) of the message, but they are used to change the behavior of the message flowing through the ESB. The property mediator is used to access or modify the properties defined in WSO2 ESB.

You can define a property mediator inside the ESB configuration language as below if you are setting a static value for the property.

<property name="TestProperty" value="Chanaka" scope="default" type="STRING"/>

If you are setting a dynamic value for the property, then you can use the following method.

<property name="TestProperty" expression="//m0:getQuote/m0:request/m0:symbol"
    xmlns:m0="http://services.samples/xsd" scope="default" type="STRING"/>

In the above property declaration, you can see that we define a scope for the property. The scope determines where the property is visible inside the ESB.

1. default (or Synapse)
Once you set a property in this scope, its value is available throughout both the in and out sequences.

2. axis2
Once you set a property in this scope, its value is available only throughout the sequence in which it has been set. If you set the property in the in-sequence, you cannot access it in the out-sequence.

3. axis2-client
This is similar to the Synapse scope. The difference is that you can access it in the following two ways inside a custom mediator.

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

// The class name is illustrative; any custom mediator extending AbstractMediator will do.
public class PropertyReaderMediator extends AbstractMediator {
    public boolean mediate(MessageContext mc) {
        org.apache.axis2.context.MessageContext axis2MsgContext =
                ((Axis2MessageContext) mc).getAxis2MessageContext();
        String propValue = (String) axis2MsgContext.getProperty("PropName");
        System.out.println("SCOPE_AXIS2_CLIENT - 1 : " + propValue);

        propValue = (String) axis2MsgContext.getOptions().getProperty("PropName");
        System.out.println("SCOPE_AXIS2_CLIENT - 2: " + propValue);
        return true;
    }
}

4. transport
Once you set a property in this scope, it will be added to the transport headers of the outgoing message from the ESB.
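For example, a property like the following (the header name and value are illustrative) would add a custom HTTP header to the outgoing request:

```
<property name="X-Custom-Header" value="my-value" scope="transport"/>
```

The outgoing message would then carry an "X-Custom-Header: my-value" transport header.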

Now we know how to define properties at different scopes. Let's see what properties are available in WSO2 ESB. You can find a good reference about properties in the WSO2 ESB documentation.

There you can find descriptive information about most of the properties. Here is an example of using the "messageType" property.


messageType
  • Possible values: a content type such as text/xml, application/xml, or application/json.
  • Default behavior: the content type of the incoming request.
  • Description: the message formatter for the outgoing message is selected based on this property.

<property name="messageType" value="text/xml" scope="axis2"/>

Let's say you need to convert the Content-Type of your message from application/xml to application/json. You can use this property to achieve that, as below.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="tojson">
   <target>
      <inSequence>
         <property name="messageType" value="application/json" scope="axis2"/>
         <respond/>
      </inSequence>
   </target>
</proxy>

To test this property, send a request to this proxy service like below.

curl -v -X POST -H "Content-Type:application/xml" -d@request1.xml "http://localhost:8280/services/tojson"

where your request1.xml is like below.

<coordinates>
   <location>
      <name>Bermuda Triangle</name>
      <n>25.0000</n>
      <w>71.0000</w>
   </location>
   <location>
      <name>Eiffel Tower</name>
      <n>48.8582</n>
      <e>2.2945</e>
   </location>
</coordinates>

You can get the response from the proxy service as below.

{"coordinates":{"location":[{"name":"Bermuda Triangle","n":25.0000,"w":71.0000},{"name":"Eiffel Tower","n":48.8582,"e":2.2945}]}}

Here you can see that the Content-Type of the message is converted to application/json.

Sohani Weerasinghe: SAML2.0 SSO with WSO2 Identity Server

This blog post describes the SAML2.0 SSO behavior using the WSO2 Identity Server. I am going to demonstrate a very simple scenario, just to understand the SAML behavior. 

When considering a single sign-on system, the two basic roles are the Service Provider and the Identity Provider, and there is a predefined trust between these two roles. When a user tries to access the Service Provider, the Identity Provider issues assertions after the user is authenticated and authorized, and the Service Provider trusts the assertions issued by the Identity Provider. 

Some advantages of SSO are that users need only one username and password to access many services, and that users are authenticated only once by the Identity Provider and then redirected automatically to the other services. The following diagram illustrates the scenario.

In this scenario I have used two Identity Servers, one as the Identity Provider and one as the Service Provider.

Applies To : WSO2 Identity Server 4.6.0

Configure Identity Provider

  • Start WSO2 Identity Server and access Management Console
  • Now click on the SAML SSO under the Manage section
  • Now you will get a window to configure Service Provider
  • Click on 'Register New Service Provider' and provide details as follows. 
eg: Issuer - carbonServer
      Assertion Consumer URL - https://localhost:9444/acs

  • Click on update and now you can see the service provider has successfully added.

Configure Service Provider

  • Change the offset to '1' by navigating to $IS_HOME/repository/conf/carbon.xml: <Offset>1</Offset>

  • Then, in order to include the authenticator configuration for SAML2SSOAuthenticator, go to $IS_HOME/repository/conf/security/authenticators.xml and include the code below.

 <Authenticator name="SAML2SSOAuthenticator" disabled="false">
            <Parameter name="LoginPage">/carbon/admin/login.jsp</Parameter>
            <Parameter name="ServiceProviderID">carbonServer</Parameter>
            <Parameter name="IdentityProviderSSOServiceURL">https://localhost:9443/samlsso</Parameter>
            <Parameter name="NameIDPolicyFormat">urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</Parameter>
            <Parameter name="AssertionConsumerServiceURL">https://localhost:9444/acs</Parameter>
 </Authenticator>

Note: The service provider ID should be the issuer name configured in the Identity Provider.

Testing the Sample 

Note: In the above configuration, enable the authenticator by setting 'disabled' to false.

  • Now add the SAML 2 Tracer plugin to the browser and start tracing
  • Then start the Identity Provider by navigating to $IS_HOME/bin in a command window and running the startup script (sh wso2server.sh, or wso2server.bat on Windows)
  • Then start the Service Provider and try to login by providing credentials.
  • Then in the SAML tracer you can see the request sent to the Identity Provider by the Service Provider as shown below
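The captured AuthnRequest will be roughly of the following shape; the ID and IssueInstant values here are illustrative, while the Issuer and AssertionConsumerServiceURL match the configuration above:

```xml
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                    ID="_illustrative0123456789"
                    Version="2.0"
                    IssueInstant="2014-04-01T00:00:00Z"
                    AssertionConsumerServiceURL="https://localhost:9444/acs">
    <saml:Issuer>carbonServer</saml:Issuer>
    <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"/>
</samlp:AuthnRequest>
```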

Hasini Gunasinghe: Global Cafe this week: Japan - a country of sushi-eating samurai

Global Cafe is a very interesting weekly event held at the International Center of Purdue University on Fridays from 5.30-7.30 PM, where students from a particular country present about their country and culture and, importantly, share an authentic dish from their country with the attendees.

Today, the Japanese students association did a session on Japan. I am writing down some of the interesting and new things I got to know here.

Japan is a small island nation with a very high population. Sushi is considered the favorite food among school children. Washoku is the traditional meal served in traditional restaurants, which includes a soup and about three side dishes; it is recognized as an intangible heritage by UNESCO.
Takikomi Gohan is another popular Japanese rice dish, which they shared with us today. It was delicious, and following is a picture I took before I started eating it. :)

They clarified the actual meaning of Otaku, which means a person who is dedicated to a certain hobby or favorite activity. I also heard for the first time that Japan is famous for anime; it might be my ignorance that I hadn't heard of it before. They showed some famous animations and also a video of real people who mimic cartoons. Anime Otaku are the people who are into animations.

A Geisha is a traditional female entertainer who entertains visitors in traditional restaurants. But they are not prostitutes, as interpreted by some movies. They wear traditional Japanese dress, and it takes a lot of practice to become a Geisha. Apprentice Geisha are called Maiko. Geisha are not seen by the general public, and their performances cannot be recorded or photographed, whereas Maiko can be seen in public. The cost of visiting the traditional restaurants where Geishas perform is very high.

Budo covers different kinds of Japanese martial arts such as Karate, Judo, etc. Some say that Budo descends from the Samurai, the warriors of ancient Japan, but the answer is both yes and no. Budo is not about hurting anyone else but about overcoming one's own self.

They also clarified the difference between Ninja and Samurai. Ninja were people employed as messengers and spies. They usually carried a small knife-like tool, whereas Samurai were the real military warriors who carried the traditional samurai sword. But since the Samurai were prohibited in Japan around 1867, people today can only dream of becoming a samurai, attractive as they are.

So the above is just a glimpse of what I learned about Japan from today's session, most of which was new to me, and I hope to explore more about certain aspects such as Japanese cuisine.
Looking forward to doing a session on Sri Lanka with the Sri Lankan friends at Purdue. :)

John MathonBigdata and Privacy

I believe we need new laws to deal with the accuracy of information being held about people and the duration for which that data can be held. For instance, no company should keep information about you for more than 3 years without your explicit permission: not permission buried in a 20 page "legal disclosure", but a separate acknowledgement that you are okay with someone keeping data longer than that. If you are under 21, it should be a 1 year limit by law. Any data kept after 3 years (1 year if under 21) must be kept in such a way that you can dispute it and find out who holds it by consulting a central registry. Disputes over the data should be resolved in the consumer's favor unless the holder of the data wants to fight and prove its legitimacy.

Every company I talk to is accumulating vast information about you and me. While I am a big fan of, and excited about, using bigdata to provide better, smarter, more intelligent services, I also worry that it is an invasion of privacy, or that improper or inaccurate data will cause people problems. My company WSO2 is trying to build secure solutions and bigdata solutions to enable companies to be intelligent. It is an awesome responsibility to have personal information about people. It's not just a legal responsibility but a personal and societal responsibility to make sure that everyone is treated fairly by the systems we build.

As an Open Source company, WSO2 has an obligation to promote transparency and responsible use of data. We provide the source code to everything we do; there are no "enterprise licenses." I believe our advocacy of open source is also a statement about transparency. Please let me know if you think my personal ideas about privacy above are reasonable and sound. I feel very passionately that, while a new cyber world is being built, that world shouldn't be something we fear or are hurt by. The goal of all this new technology is to make life better. We must find a way to build this new world so that we and our children want to live in it, and so that it is compassionate and fair.

Hasitha AravindaRecovering BPEL Activity Failures - WSO2 BPS

Read more about ODE activity failure and recovery here.


Method 1: Via WSO2 BPS Management Console 

In WSO2 BPS, failed BPEL activities can be recovered using the WSO2 BPS Management Console.

To do this, it is required to enable ODE activityLifecycle events for the BPEL process. To do that, modify the process-events configuration in deploy.xml as follows
(refer to the ODE documentation to see how you can enable ODE events for a BPEL process).

Users can view activity failures for a BPEL process instance in the instance view page. 

1) Go to Instances -> click on one of the instance IDs to open the instance view for that instance.

 2) In the instance view, you can find the failed activity/activities under Activity Information (see image). You can also retry or cancel (ignore) a failed activity using the two buttons listed under the Action column.

Method 2 - Via InstanceManagementService admin API

Users can also recover failed activities by using the BPS InstanceManagementService. Unfortunately there is no UI for this when ODE events are disabled (BPS 3.2.0 and older).

These are the steps to retry activities using the InstanceManagementService admin service.

1) Execute the following SQL query on the BPS database.

This will return the failed activities with the corresponding BPEL process instance IDs. You will need this information to retry a failed activity.

2) Set the HideAdminServiceWSDLs configuration to false in the /repository/conf/carbon.xml file.


3) Then start the BPS server.

4) Now create a SOAP-UI project using https://localhost:9443/services/InstanceManagementService?wsdl

5) Create a new request under the recoverActivity operation. A sample request will look like this.

6) Then authenticate the SoapUI request by configuring the authentication and security related settings.
For Basic Auth, select the Authorization type as Preemptive, and give the admin username and password.

7) Then, for each failed activity (from the results in step 1), send a recoverActivity request. Use the "retry" action to retry the activity, and the "cancel" action to cancel the retry and continue instance execution.

Nadeesha Cabralgridster-bootstrap 0.1

I've created gridster-bootstrap. It's a little adapter-esque library to convert gridster.js layouts to Twitter Bootstrap.


Gridster.js allows you to build layouts on a grid by drag and drop, with all the bells and whistles you need from a grid layout. Unfortunately, gridster layouts are not responsive.

What I did was create a plugin where you can build a layout using gridster and spew out HTML with responsive goodness. The HTML uses the tried and tested responsive grid framework of Twitter Bootstrap 3.

What's the use case?

Basically, people use gridster a lot to build dynamic UI layouts, which are, well, unresponsive. gridster-bootstrap tries to make that better by producing an alternate, responsive version of the same layout.

Under the hood?

It's a simple JS algorithm that pulls the gridster container positioning and sets the widths of the div containers accordingly. How simple? Just 1.4 KB minified, without gzipping. It has underscore.js as a dependency, which I will take out. Everything is bound together with the help of a single CSS media query.
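The core width-mapping idea can be sketched in a few lines. This is not gridster-bootstrap's actual code (which is JavaScript); it is an illustrative Python sketch of the same arithmetic: each widget's size_x, in gridster grid units, is scaled onto Bootstrap 3's 12-column grid. The function name, the widget dictionary keys, and the 6-column default are my assumptions, not the library's API.

```python
def gridster_to_bootstrap(widgets, grid_cols=6):
    # Group widgets by their gridster row, then emit one Bootstrap
    # .row per gridster row, scaling size_x onto the 12-column grid.
    rows = {}
    for w in widgets:
        rows.setdefault(w["row"], []).append(w)
    html = []
    for row in sorted(rows):
        cells = sorted(rows[row], key=lambda w: w["col"])
        spans = "".join(
            '<div class="col-sm-{}">{}</div>'.format(
                w["size_x"] * 12 // grid_cols, w["content"])
            for w in cells
        )
        html.append('<div class="row">{}</div>'.format(spans))
    return "\n".join(html)
```

So a widget spanning 3 of 6 gridster columns becomes a col-sm-6, i.e. half of a Bootstrap row.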

Demo here and source here. Contributions are more than welcome.

Madhuka UdanthaMutual SSL with WSO2 Identity Server

1. Build the mutual-ssl-authenticator source code from here.

2. Build the jar and put it in ‘wso2is-4.7.0\repository\components\dropins’

3. Open the ‘wso2is-4.7.0\repository\conf\tomcat\catalina-server.xml’ file and set clientAuth=”true” to make the server (always) expect two-way SSL authentication.

4. Extract the WSO2 public certificate from <IS_Home>/repository/resources/security/wso2carbon.jks
and add it to the client's trust store:

keytool -export -alias wso2carbon -file carbon_public2.crt -keystore wso2carbon.jks -storepass wso2carbon
keytool -import -trustcacerts -alias <Client_Alias> -file carbon_public2.crt -keystore client-truststore.jks -storepass wso2carbon

5. Start the server

For Client

6. Create new SoapUI project using  https://localhost:9443/services/RemoteUserStoreManagerService?wsdl

7. SSL setting for SOAP UI


8. Make a call to ‘isExistingUser’

Make sure you add the SOAP header:

<soapenv:Envelope xmlns:soapenv="" xmlns:ser="">
    <soapenv:Header>
        <m:UserName xmlns:m=""></m:UserName>
    </soapenv:Header>
</soapenv:Envelope>

Here the call is tested with no password; the certificate (crt) is used for authentication.

sanjeewa malalgodaFixing issues in WSO2 API Manager due to no matching resource found or API authentication failure for an API call with a valid access token


A "no matching resource found" error or an authentication failure can happen due to a few reasons. Here we will discuss errors that can happen due to resource definitions.

In this article we will see how resource mapping works in WSO2 API Manager. When you create an API with resources, we store them in the API Manager database and in the synapse configuration. When a request comes to the gateway, it will first look for a matching resource and then dispatch the message into it. For this scenario the resource is as follows.


In this configuration, * means you can have any string (in the request URL) after that point. If we take the first resource sample, then a matching request would be something like this.


The above request is the minimal matching request. In addition, the following requests will also map to this resource.



And the following requests will not map properly to this resource. The reason is that we are specifically expecting /resource1/ in the request (* means you can have any string after that point).


From the web service call you will get the following error response.

<am:fault xmlns:am="http://wso2.org/apimanager"><am:code>403</am:code><am:type>Status report</am:type><am:message>Runtime Error</am:message><am:description>No matching resource found in the API for the given request</am:description></am:fault>

If you send a request to /resource1 (without the trailing slash), it will not work, because unfortunately there is no matching resource for that: as explained earlier, the resource definition has /resource1/*. The request will not map to any resource, and you will get a "no matching resource found" error and an auth failure (because it tries to authenticate against a non-existing resource).

Solution for this issue would be something like this.

API Manager supports both uri-template and url-mapping based resource definitions. If you create an API from the API Publisher user interface, it will create a url-mapping based definition. From API Manager 1.7.0 onwards we support both options from the UI level. Normally, when we need to do some kind of complex pattern matching, we use uri-template. So here we will update the synapse configuration to use uri-template instead of url-mapping. For this, edit the API's synapse configuration file as follows.

Replace <resource methods="GET" url-mapping="/resource1/*"> with <resource methods="GET" uri-template="/resource1?*">
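The difference comes down to prefix matching. Below is a rough Python sketch of the url-mapping semantics described above; it is not API Manager's actual dispatcher, just an illustration of why /resource1 without the trailing slash fails against /resource1/*. The function name is my own.

```python
def url_mapping_matches(mapping, path):
    # Servlet-style url-mapping: a trailing * matches any remainder,
    # so "/resource1/*" demands the literal "/resource1/" prefix.
    if mapping.endswith("*"):
        return path.startswith(mapping[:-1])
    return path == mapping
```

With the uri-template form, the bare /resource1 path can also be matched, which is the fix suggested above.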

Hope this helps you to understand how resource mapping works. You will find more information at this link [1].


Harshan LiyanageMemory Sharing between Virtual Machines

Identical memory pages in virtualized environments

In any computing environment, "system performance" has been the most important factor for most users. The main factors affecting system performance are the processor speed and the capacity of main memory. In a virtualized environment (Fig 1), the processor can be multiplexed to support the execution of several virtual machines (VMs). Multiplexing main memory, however, means partitioning the memory among the VMs, where the amount of memory allocated to a VM is inversely proportional to the number of VMs on the host, thus reducing the performance of the entire system due to frequent paging. One solution to this problem is to increase physical memory, which is often expensive and sometimes difficult to achieve due to hardware limitations.

Fig .1. Virtualized Environment

But in virtualized environments, most users run VMs with identical operating system and software configurations. Because of that, the memory pages of each VM are very similar to one another (Fig 2). As a result, server virtualization enables numerous opportunities for sharing memory between VMs. For example, consider a host which is already running a VM with Ubuntu 13.10 installed. If there is a request to start another VM with Ubuntu 13.10, the same process has to be repeated by the VMM (Virtual Machine Manager) and the same amount of memory has to be allocated for the new VM, even though most of its memory pages may already be in main memory, brought there by the first VM. For example, the linux kernels used in both VMs might be 100% identical, provided they have not received any updates. Most interestingly, the amount of sharable memory increases with the number of VMs hosted on the server. Thus, removing these duplicated memory pages frees up a considerable amount of memory on the host.
Fig .2. Duplicated memory regions

Removing duplicate memory pages in virtualized environments has been the subject of a number of computer science research efforts. Researchers have come up with several solutions to tackle this, and the most successful method of sharing identical memory pages between VMs is "content-based page sharing (CBPS) [1]". This method has been widely used in the VMware ESX Server product, and it has been included as a tech preview in the Xen hypervisor since 2011.

Content-based Page Sharing

The fundamental idea of content-based page sharing is to use page contents to identify duplicate memory pages at run-time. The content-based page sharing approach has two major advantages.

  1. Pages could be shared transparently to the guest OS.
  2. More page-sharing opportunities could be identified which means more memory could be saved.

Since this approach would otherwise require an exhaustive search comparing the contents of each memory page with every other page, a hashing technique is used to identify pages with identical contents efficiently. In summary, content-based page sharing works as follows.

First, a hash function indexes the content of memory pages at run-time by creating hash values. If two or more pages have the same hash value, those memory pages are selected as good candidates for sharing. Because hash functions may produce the same hash even though contents differ, memory pages with the same hash value are then compared bit by bit to ensure the page contents are identical. If the pages are identical, the redundant copies are reduced to one shareable copy using the copy-on-write (CoW) technique, which creates a private copy of the memory page if any write attempt occurs on a shared page (see Fig 3). The content-based page sharing implementation in VMware ESX Server has been shown to reduce the memory footprint of multiple homogeneous virtual machines by 10% to 40%.
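The hash-then-verify flow described above can be modeled in a short Python sketch. This is an illustration, not the ESX or Xen implementation; SHA-1 stands in for whatever hash the real systems use, and the function name is my own.

```python
import hashlib

def find_shareable_pages(pages):
    # pages: {page_id: bytes}. Hash every page, then confirm
    # candidate groups byte-by-byte before treating them as one
    # shared copy-on-write page (guards against hash collisions).
    by_hash = {}
    for pid, content in pages.items():
        by_hash.setdefault(hashlib.sha1(content).digest(), []).append(pid)
    shared = []
    for pids in by_hash.values():
        if len(pids) < 2:
            continue
        canonical = pages[pids[0]]
        group = [p for p in pids if pages[p] == canonical]
        if len(group) > 1:
            shared.append(sorted(group))
    return shared
```

A real VMM would then map every page in a group to one physical frame, marked read-only so that writes trigger the CoW fault described above.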

Fig. 3. How CoW technique works

Even though content-based page sharing shares the maximum number of memory pages, it has certain drawbacks.

1. Computational overhead on the CPU
This is a result of the exhaustive search for identical memory pages using the hashing technique and of the memory scanner, which runs periodically as scheduled. As a result, CPU overhead increases, which affects overall system performance.

2. Inability to detect short-living sharing opportunities
Since content-based page sharing does not consider the lifetime of sharing opportunities, it tends to share any identical memory page even though the page may be modified the next moment. When a write attempt occurs on such a page, a write fault is triggered and the VMM has to allocate a new private copy for each write fault, which is a far more CPU-intensive operation when it occurs frequently.

As a solution to these issues, some researchers have suggested using heuristics in memory sharing mechanisms. In some research projects, like Satori [2], the researchers have modified the operating system of the VM to identify identical memory regions. This method reduces CPU overhead dramatically, but in practice there is a very low possibility that clients will run a modified OS. In particular, this is impossible for most OSes because they are not open source.

Proposed Solution 

From the VMM's point of view, a VM is just another process with special capabilities, so the VMM treats VMs just as an OS treats application processes. In computing, the memory of any process can be divided into 2 main areas.
  1. Code/Text segment which contains the executable instructions 
  2. Data segment which contains run time data as the program executes
As we know, the code segment of a process is very unlikely to change during the process's lifetime. Given that fact, if there were a mechanism to filter out the memory pages in the code segments of processes and of the OS itself, those pages would be the best sharing candidates, because there would be no CoW faults on them. So our idea was to incorporate semantic knowledge about the VMs into the VMM, so that the memory sharing mechanism can find those memory pages with ease and share them across VMs if there are identical pages among them (Fig 4). This solution has a number of benefits over the existing CBPS-based mechanisms.
  1. CPU overhead will be minimal when compared to CBPS (No exhaustive search of memory regions, hash calculations & bitwise comparisons)
  2. Occurrence of CoW faults in shared memory pages will be 0, thus improving the overall system performance compared to CBPS
Fig .4. Shared code segment
On the other hand, the drawbacks of this approach are,
  1. The need for semantic knowledge
  2. The amount of shared memory will be low compared to CBPS


We carried out numerous experiments using 4 VMs running the same OS (Debian Squeeze 64-bit edition with linux kernel 2.6.32-5) to find the similarity of the memory pages in the code segments of the linux kernels. We covered the following scenarios during these experiments.

  • All VMs in idle state
  • All VMs running a workload generator (Stress) with 100 CPU bound processes, 4 Disk read/ write processes with each operating with 200MB of disk space, 2 Memory allocators for allocating and dirtying 200MB in main memory and 4 IO bound processes
  • All VMs running a combination of linux applications and system utilities

These experiments showed that the memory regions of identical linux kernel code segments are 100% similar at the beginning, and 99.61% identical after a while. In our case the linux kernel we used had 771 memory pages in its code segment, and 768 of them were 100% identical during run-time; 3 memory pages were modified at run time. Based on the observations, we assumed that the reason for these modifications might be the kernel feature known as "SMP alternatives", which optimizes the kernel itself at boot time.


After running these experiments and verifying the validity of our proposed solution, we started implementing a proof of concept (XMe) to demonstrate it. This PoC was designed to share only the code segments of identical guest OSes running on the Xen hypervisor. For obtaining the required semantic knowledge, we used the System.map file of the linux kernel: the virtual memory addresses of the kernel code segment can be found using the kernel symbols _text and _etext. Using this semantic knowledge, XMe successfully shared 768 identical memory pages (ignoring the 3 differing memory pages found during the experiments) of the linux kernel code segment across multiple VMs.
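The _text/_etext lookup is straightforward to illustrate. The sketch below is not XMe's actual code; the System.map addresses in the example are invented, chosen so that the segment spans 771 pages, the figure measured in the experiments above.

```python
PAGE_SIZE = 4096

def kernel_code_pages(system_map_lines):
    # Collect symbol addresses from System.map lines of the form
    # "<hex address> <type> <name>", then count the 4 KB pages
    # between _text (start of code) and _etext (end of code).
    symbols = {}
    for line in system_map_lines:
        addr, _kind, name = line.split()
        symbols[name] = int(addr, 16)
    size = symbols["_etext"] - symbols["_text"]
    return (size + PAGE_SIZE - 1) // PAGE_SIZE

# Invented addresses for illustration: a 771-page code segment.
pages = kernel_code_pages([
    "ffffffff81000000 T _text",
    "ffffffff81303000 T _etext",
])  # pages == 771
```

Given these page-frame boundaries, the VMM can map the corresponding guest frames to a single shared physical copy.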


We carried out a number of experiments using XMe to evaluate the effectiveness of the proposed approach, using the same VMs as in the experiments above. Following is a summary of the obtained results.
  • 3072 total shared memory pages
  • 20 CoW faults occurred just after sharing those pages (5 per VM)
  • 9156 KB of memory was reclaimed from the code segments of 4 VMs by sharing 3052 pages among the 4 VMs (Fig. 5 & 6)
  • The number of CoW faults was independent of the workload and lifetime of the VM
  • After the initial CoW faults, no further CoW faults were reported on shared pages (Fig. 7)
  • When the number of VMs increased, the amount of shared memory increased linearly (Fig. 8)
Fig. 5. Status of the system before XMe execution

Fig. 6 .Status of the system after XMe execution
Fig .7. Amount of shared pages
Fig .8. Increase of Shared Memory Pages with number of VMs
Most interestingly, during the experiment in Fig 8 we were able to reclaim 33572 KB of memory from the code segments of 12 VMs. The occupied memory for the code segments of all 12 VMs was 3436 KB, which is approximately 12% higher than the memory required for the kernel code segment of a single VM. Thus it is very likely that all VMs shared a single code segment.
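These figures are internally consistent, as a quick arithmetic check shows (assuming the 4 KB page size used throughout; the variable names are mine):

```python
PAGE_KB = 4
CODE_PAGES = 771       # kernel code-segment pages measured earlier
VMS = 12
RECLAIMED_KB = 33572   # memory reclaimed from 12 VMs (Fig 8 experiment)

single_copy_kb = CODE_PAGES * PAGE_KB             # 3084 KB for one kernel copy
without_sharing_kb = single_copy_kb * VMS         # 37008 KB if nothing is shared
occupied_kb = without_sharing_kb - RECLAIMED_KB   # 3436 KB actually occupied
overhead = occupied_kb / single_copy_kb - 1       # ~0.11, i.e. roughly 12% above one copy
```

So the 12 VMs together occupy only slightly more than a single kernel code segment, which supports the single-shared-copy conclusion.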

It should be noted that the occurrence of CoW faults during these experiments contradicts the results we obtained when measuring changes in kernel code segments. Based on the observation that the CoW faults occur at identical places across all kernel code segments, it can be assumed that these memory pages contain spin locks or data structures that get modified by the SMP alternatives feature.


Using the proposed approach, we successfully shared 98.96% of the memory pages in the kernel code segment, irrespective of the number of VMs in the system. These evaluation experiments have shown that the amount of shared memory pages in kernel code segments is independent of the workload and memory allocation of the VMs.

As stated in [3], the mean execution time required for allocating a private memory page when a CoW fault occurs is 25.1 us. Since, under CBPS, the number of CoW faults increases with the workload, we can expect that a considerable amount of CPU time is spent handling CoW faults under heavy workloads. Thus, the proposed semantic-based page sharing approach has a significant gain over existing CBPS approaches in reducing the CPU performance overhead introduced by CoW faults.

We have also shown that the mean execution time required for the detection and sharing of 3072 memory pages with the proposed approach is 2.424 milliseconds, which means our method has very low CPU overhead compared to CBPS. These evaluation results confirm the efficiency and effectiveness of the proposed approach, in terms of detecting identical memory pages and reducing CoW faults, over the existing CBPS approaches.

This blog post is based on the research project which I did in 2011 for my Bachelor's thesis. Please feel free to ask any questions regarding this research.

References :

[1] C. A. Waldspurger, "Memory resource management in VMware ESX server," SIGOPS Oper. Syst. Rev., vol. 36, pp. 181–194, December 2002.

[2] G. Milos, D. G. Murray, S. Hand, and M. A. Fetterman, "Satori: Enlightened page sharing," p. 14, No. Section 3, USENIX Association, 2009.

[3] D. Gupta, S. Lee, M. Vrable, S. Savage, A. C. Snoeren, G. Varghese, G. M. Voelker, and A. Vahdat, "Difference engine: harnessing memory redundancy in virtual machines," Commun. ACM, vol. 53, pp. 85–93, October 2010.

Krishanthi Bhagya SamarasingheWSO2 ESB shows the highest performance in the Space....

The WSO2 ESB team has published the performance study "ESB Performance Round 7.5″, comparing the following well-known open source ESBs with WSO2 ESB v4.8.1.

 1. AdroitLogic UltraESB v2.0.0
 2. Mule ESB Community Edition v3.4.0
 3. Talend ESB SE v5.3.1

The following table and graph show the summary results of the performance test.

As shown in the graph, WSO2 ESB 4.8.1 outperforms all the other ESBs except in the security scenario.

Conclusion: WSO2 ESB is the fastest open source ESB in the space.

For more information, refer to the full study.

Dedunu DhananjayaPortable Drive encryption with Ubuntu

I had never used encryption on hard disk drives or pen drives before. But recently I got a requirement to encrypt some portable disks, and there are some limitations: after encrypting, you will not be able to use the drive on the Windows operating system, and you are limited to the ext4 file system as well.

First you have to install cryptsetup to format your portable drive with encryption. To install cryptsetup, run the following command in a terminal.
sudo apt-get install cryptsetup
After that you have to open the Disk Utility application. To open it, just search on the Unity dashboard as shown below.
 Then you will get a window like this.
Click on "Unmount Volume"
Click on "Format Volume". Then you will get a new dialog box like the one below. Select the "Encrypt underlying device" option before clicking the "Format" button.
After that, Disk Utility will ask for a passphrase to encrypt the device. Give a strong passphrase and don't forget it! If you forget it, you will lose all your important and confidential data.
Then wait until it formats the drive. This may take some time depending on your device's capacity.
After the formatting completes you will get the window below.
Most probably, after formatting, Disk Utility will automatically mount your disk without prompting for the password. The next time you connect the encrypted disk, Ubuntu will prompt for the password as shown below.
Just type the passphrase and use your portable drive.
But sometimes, after Ubuntu prompts for the passphrase and you attempt to mount the drive, you will get the error below.
Unable to mount x.x GB Encrypted
Error unlocking device: cryptsetup exited code 5: Device <UUID> already exists.
There is a workaround for this bug. First, close all the applications that are using files on your encrypted device, even the terminal windows.
Then run below command on a terminal.
sudo dmsetup remove /dev/mapper/<UUID>
Change the UUID part according to your error message. If you get an error like the one below after running the above command, please check whether any application is still using your encrypted device. If anything is running, close it and run the command again!
device-mapper: remove ioctl failed: Device or resource busy
Command failed

Heshan SuriyaarachchiInstalling R on Mac OS X

1) Install Homebrew in your system.

2) Download and install XQuartz (Homebrew does not package XQuartz and it is needed for step 3. Therefore install XQuartz.)

3) Install Fortran and R.
heshans@15mbp-08077:/tmp$ brew update

heshans@15mbp-08077:/tmp$ brew tap homebrew/science

heshans@15mbp-08077:/tmp$ brew install gfortran r

4) Verify the installation by running R.
heshans@15mbp-08077:/tmp$ R

R version 3.0.3 (2014-03-06) -- "Warm Puppy"
Copyright (C) 2014 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin13.1.0 (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

Dedunu DhananjayaSri Lanka Cartogram with d3.js

We have been using d3.js to visualize things on maps. After struggling with the topojson application, my boss found a way to convert the Sri Lanka shape file (.shp) to the topojson format. Then we wanted to use cartograms for our visualizations.

I found that there was a d3.js cartogram implementation.

With the help of this blog post, I finished the cartogram visualization.

And Nisansa's coordinates for the center of Sri Lanka were very helpful for calibrating the Sri Lanka map.

You can find my sri-lanka-cartogram repository on GitHub.

You can find the live demo here.

Thanks to the author of the d3.js library; he is a wizard in visualizations.
Thanks a lot for cartogram.js! Hope this will be useful to you.

sanjeewa malalgodaHow to move the WSO2 API Management platform from one deployment to another (API Manager 1.6.0)

This seems to be a very valid requirement coming from API Manager users. At this moment we do not have deployable API artifacts. When it comes to API management related artifacts, it's a bit different from .war or .car archives, because when we migrate the API platform from one deployment to another we need to migrate APIs, applications, tokens, usage data, etc. It's not as simple as deploying a war in another deployment. For the moment you can use the following workaround.

01. Create a setup for the QA environment.
02. Move the API Manager database to the production system after the QA process completes.
03. Move the user management related database or LDAP to the production system after the QA process completes.
04. Move the registry database to the production system after the QA process completes.
05. Move the repository/deployment/server folder to the production system after the QA process completes (runtime artifacts migration). If you have tenants in your system, you might need to move tenant-specific data as well; for that, move /repository/tenants.
06. In addition, if you are using BAM, move the usage data as well (but it is recommended to use a fresh usage data store for production, as you don't want to mix production and QA statistics).

It seems a bit complicated at the start, but it is actually a very easy and reliable way of migrating between environments (essentially a database dump and restore, plus copying runtime artifacts to the new deployment). You can easily puppetize this or use your own deployment tool.

Paul FremantleA rose by any other name would smell as sweet, but with no name, maybe not

The famous quotation from Shakespeare is that "a rose by any other name would smell as sweet". But what if the rose had no name? What if, every time you talked about it, you had to come up with a description: you know, that thing with the pretty pink petals, except sometimes they are red and sometimes white; it smells really nice, except some don't really smell and others do; the thing with multiple layers of petals, except for the wild ones that only have one layer.

Maybe not so sweet.

What about the other way round? You build a really cool system that works effectively, and then it turns out that someone has named it. Now that is nice, and yes, your thing suddenly smells sweeter.

I've had this happen a lot. When we first started WSO2 we applied a lot of cool approaches that we learnt from Apache. But they weren't about Open Source, they were about Open Source Development. And when they got names it became easier to explain. One aspect of that is Agile: we all know what Agile means and why it's good. Another aspect is Meritocracy. So now I talk about a meritocratic, agile development team and people get me. It helps them understand why WSO2 is a good thing.

When Sanjiva and I started WSO2 we wanted to get rid of EJBs: we wanted to remove the onion-layers of technology that had built up in middleware and create a simpler, smaller, more effective stack. It turns out we created lean software, and that is what we call it today. We also create orthogonal (or maybe even orthonormal) software. That term isn't so well understood, but if you are a mathematician you will get what we mean.

Why am I suddenly talking about this? Because today, Srinath posted a note letting me know that something else we have been doing for a while has a nice name.

It turns out that the architecture we promote for Big Data analysis, you know, the one where we pipe the data through an event bus, into both real-time complex event processing and also into Cassandra where we apply Hive running on Hadoop to crunch it up and batch analyse it, and then store it either in a traditional SQL database for reports to be generated, or occasionally in different Cassandra NoSQL tables, you know that architecture?

Aha! It's the Lambda Architecture. And yes, it's so much easier to explain now it's got a nice name. Read more here:

Danushka FernandoHow to edit file /etc/fstab

This is what my fstab file, which lives in the /etc directory of the Linux folder structure, looks like.

You can overcome some problems by editing this file.

# /etc/fstab: static file system information.
# Use 'vol_id --uuid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
proc /proc proc defaults 0 0
# / was on /dev/sda8 during installation
UUID=64839f40-b2f1-412f-ae1a-c5a213ba449a / ext3 relatime,errors=remount-ro 0 1
# /home was on /dev/sda7 during installation
UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2
# /boot was on /dev/sda6 during installation
UUID=faaab6f9-539b-4f1a-82fc-b5a18887d28d /boot ext3 relatime,errors=remount-ro 0 3
# swap was on /dev/sda5 during installation
UUID=ab5f806d-8f4e-42b9-b67b-5618e9715585 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

Now let's study what a single line contains.

# /home was on /dev/sda7 during installation

UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2

This line contains following parts

  1. # /home was on /dev/sda7 during installation
  2. UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf
  3. /home
  4. ext3
  5. relatime,errors=remount-ro 0 2

Now let me consider each one of these.

Actually I have separate home and boot partitions.

When I installed Linux for the second time I had to edit this file to use them as the actual home and boot partitions.

Part 1
This comment records where the partition was at installation time. In my case the partition is sda7, indicated by "/dev/sda7".

Part 2
This is the UUID, which uniquely identifies the partition.

To find it, first launch the Partition Editor software.

You can find the UUID of any of the partitions by right-clicking the partition and selecting Information in the Partition Editor software. (Running sudo blkid in a terminal also lists the UUIDs.)

Part 3
This part says where to mount the partition. In this example the sda7 partition is mounted as /home.

Part 4
This specifies the file system of the partition. In this example the home partition uses the ext3 file system.

Part 5
This is actually three fields:
options       dump  pass

options can take values such as defaults or errors=remount-ro.
errors=remount-ro is normally used only for the root partition,
but if you want you can use that option for any partition.
The next two numbers represent dump and pass: dump tells the dump(8) backup utility whether to back up the file system (almost always 0), and pass sets the order in which fsck checks the file systems at boot (0 means never check).
They commonly have the following variations:

0 0 for /proc and swap
0 1 for root
0 2 for others
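To make the six fields concrete, here is a small sketch (not from the original post; the class name is arbitrary) that splits the /home example line from above into its standard fields:

```java
// Splits one /etc/fstab line into its six whitespace-separated fields.
public class FstabFieldDemo {
    public static void main(String[] args) {
        String line = "UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2";
        String[] f = line.trim().split("\\s+");
        System.out.println("device     : " + f[0]); // UUID of the partition
        System.out.println("mount point: " + f[1]); // /home
        System.out.println("fs type    : " + f[2]); // ext3
        System.out.println("options    : " + f[3]); // relatime,errors=remount-ro
        System.out.println("dump       : " + f[4]); // 0 = skip dump(8) backups
        System.out.println("pass       : " + f[5]); // 2 = fsck after the root fs
    }
}
```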

You can see that the last two lines are a bit different, because they are for the swap partition and the CD-ROM. Don't edit them unless you really need to and you know what you are doing.

Most probably you will need to restart twice after editing this file for the changes to take effect. Don't be afraid if it doesn't work the first time.

Danushka FernandoHow to overcome the problem Linux kernel image is not booting properly

First check this post and then come back to this section.

This is the part of the fstab file that matters here.

# / was on /dev/sdax during installation
UUID=xxxx / ext3 relatime,errors=remount-ro a b
# swap was on /dev/sday during installation
UUID=yyyy none swap sw c d
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8  0 0

x should be the number of the partition of the ubuntu installed
y should be the number of swap
xxxx - UUID of Ubuntu partition
yyyy - UUID of swap
a - dump            (0 is my recommendation)
c - dump            (0 is my recommendation)     
b - pass            (1 is my recommendation)
d - pass            (0 is my recommendation)

Danushka FernandoHow to mount a partition permanently somewhere in the folder structure.

First check this post and then come back to this section.

You need to add the following line to the fstab file.

# / was on /dev/sdax during installation
UUID=xxxx /media/xxxxxxxxx xxx relatime,errors=remount-ro a b

xxxx - UUID of the partition
x - number of the partition you needed to mount
xxx - file format of the partition
xxxxxxxxx - Name of the Partition**
a - dump
b - pass

** The name of the partition should be the same as the real name of the partition (the name we can see in Computer, analogous to My Computer in Microsoft Windows).

An example line for this is

# /media/MyDisk was on /dev/sda8 during installation
UUID=64839f40-b2f1-412f-ae1a-c5a213ba449a /media/MyDisk ntfs relatime,defaults-ro 0 2

It is recommended to use a name without spaces here, because spaces can cause trouble.

Danushka FernandoFind the menu list in the latest Grub Version

I have noted that in the latest version of Grub, the file you have to edit to customize the Grub view or to fix errors has changed. It no longer lives in the menu.lst file; it is now the grub.cfg file in the same location.

Danushka FernandoCustomize your grub loader by just editing a file

You can find a file called menu.lst in the path /boot/grub. By editing this file you can customize your Grub loader easily.

First we can set the timeout of grub. Go to the section ## timeout sec and set the timeout value to the number of seconds you like.

Then you can Customize the view by changing the colors. It is under # Pretty colors.

Then comes the most important part of the Grub menu list. Find the kernel list in the file; normally it is at the bottom. A sample section for one kernel is given below.

title        Linux Mint 7 Gloria, kernel 2.6.28-16-generic
root       faaab6f9-539b-4f1a-82fc-b5a18887d28d
kernel    /vmlinuz-2.6.28-16-generic root=UUID=6a2a464d-bf70-4b4a-8c8e-8dad3ccafe2c ro quiet splash
initrd      /initrd.img-2.6.28-16-generic

You can always change the title as you wish. root is the UUID of the boot partition. If you don't have a separate boot partition, it is the same as the partition containing the Linux kernel. The generalized form is given below.

title        <Title of the kernel / os>
root      <UUID of the boot partition>
kernel    /vmlinuz-2.6.28-16-generic root=UUID= ro quiet splash
initrd      /initrd.img-2.6.28-16-generic

The other details are about the kernel versions, so don't try to edit them unless it's really needed and you know exactly what you are doing.

After finishing the editing, reboot and see what happens.

Danushka FernandoHow to fix grub loader when its not loading at the boot

When you install or restore Microsoft Windows after a Linux installation, you won't see the grub loader at boot.

This is not an error of grub or Linux, but you can't load Linux in this situation.

There are few ways of fixing this issue. One way is using super grub disk.

Here I will tell you another way of fixing this issue.

You just have to boot using any kind of Linux CD/DVD.

Now Open the terminal and type

sudo grub

and type the password to enter the grub shell.

Now you have to find out a few things.

First, find the hd number of the hard disk.

This is normally zero, since most of us use only one hard disk.

But if you are using more than one, you have to find out the hd number of the disk that holds grub.

Then find the sd number of the partition which contains grub.

If you did not create a separate /boot partition at the Linux installation, this is the same as the Linux partition.

If you are using a Debian-based distribution, you can start the application called Partition Editor to find out these things.

Here I will take the hd number as x and the sd number as y.

After typing sudo grub and entering the password, you will be switched to a new prompt (the grub prompt) like below.


Now Enter

grub> root (hdx,y)
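For completeness, the usual grub-legacy shell sequence also writes grub to the disk's MBR with setup before exiting; a sketch of the whole sequence (with x and y as above):

```
grub> root (hdx,y)
grub> setup (hdx)
grub> quit
```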

Now restart the machine and you will find that the grub loader loads.

Danushka FernandoLinux Mint updates bug found

Bug Description : Failed to fetch (malformed Release file?) Unable to find expected entry multiverse/binary-i386/Packages in Meta-index file Some index files failed to download, they have been ignored, or old ones used instead.

Reasoning and Solution : There is no multiverse section in the Mint repository... please edit your /etc/apt/sources.list and remove the "multiverse" keyword from the lines referring to the Mint repositories. 

How to Fix it:
This command will open the sources list.
~$ sudo gedit /etc/apt/sources.list
This is my file before and after editing.

Before :-

deb isadora main upstream import multiverse import
deb lucid main restricted universe multiverse
deb lucid-updates main restricted universe multiverse
deb lucid-security main restricted universe multiverse
deb lucid partner
deb lucid free non-free

# deb lucid-getdeb apps
# deb lucid-getdeb games

After :-

deb isadora main upstream import
deb lucid main restricted universe
deb lucid-updates main restricted universe
deb lucid-security main restricted universe
deb lucid partner
deb lucid free non-free

# deb lucid-getdeb apps
# deb lucid-getdeb games

Danushka FernandoDell Vostro 1520 boot problem: recovering and restoring Grub from a live Ubuntu CD.

In the Vostro 1520 Dell laptop series, when Ubuntu is installed alongside Windows (dual boot), a problem in the BIOS can corrupt the MBR; after that you will no longer see the Grub loader at startup and you won't be able to load either Windows or Ubuntu.
This happens only because of the corruption of the MBR record or partition table.
So by reinstalling grub to the partition you can resolve this problem. The steps are given below.

  1. First you have to boot from the Ubuntu Live CD
  2. Then you have to find the sd number of the partition that grub is installed in. (If you have a separate /boot partition it is the boot partition's sd number; otherwise it is the Ubuntu file system's sd number.) Here I assume that sd number is x.
  3. Open the Terminal and execute following codes
  4. sudo mkdir /mnt/root -- Create a mount point before changing root.
  5. sudo mount -t ext3 /dev/sdax /mnt/root -- Mount the partition into the folder just created.
  6. Mount and bind the necessary places using the following commands:
  7. sudo mount -t proc none /mnt/root/proc
  8. sudo mount -o bind /dev /mnt/root/dev
  9. sudo chroot /mnt/root /bin/bash -- Now change the root to the mounted system.
  10. grub-install /dev/sdax -- Install grub to the specified device.
  11. Now you are done. Restart and check; you will see the grub loading.

Note : - At steps 7 and 8, if it fails with errors saying the directories do not exist, create them with the following commands before step 7.
  1. sudo mkdir /mnt/root/proc
  2. sudo mkdir /mnt/root/dev

Dimuthu De Lanerolle


What is ActiveMQ?

Apache ActiveMQ is an open source message broker written in Java together with a full Java Message Service (JMS) client. 

To read more about Apache ActiveMQ, click this link.

How to configure ActiveMQ with WSO2 ESB?

One of the most popular releases of Apache ActiveMQ is ActiveMQ 5.5.1. In this post we will show you the steps for configuring WSO2 ESB with ActiveMQ 5.5.1.

Before starting your ESB you need to configure ActiveMQ.

1. Download ActiveMQ from the above link.
2. Navigate to ActiveMQ_HOME/lib and copy below libraries to ESB_HOME/repository/components/lib directory.
     * activemq-core-5.5.1.jar
     * geronimo-j2ee-management_1.1_spec-1.0.1.jar
     * geronimo-jms_1.1_spec-1.1.1.jar
3. Configure transport listeners and senders in the ESB.

Un-comment the following listener configuration related to ActiveMQ in the ESB_HOME/repository/conf/axis2/axis2.xml file.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
       <parameter name="myTopicConnectionFactory" locked="false">
           <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
           <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
           <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
           <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
       </parameter>
       <parameter name="myQueueConnectionFactory" locked="false">
           <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
           <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
           <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
           <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
       </parameter>
       <parameter name="default" locked="false">
           <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
           <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
           <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
           <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
       </parameter>
</transportReceiver>

4. Set up the JMS sender.
Un-comment the following configuration in the ESB_HOME/repository/conf/axis2/axis2.xml file.

<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender"/>
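With the JMS listener and sender enabled, a proxy service can consume messages arriving over JMS. The following is a minimal sketch only; the proxy name and backend endpoint are illustrative assumptions, not part of this post:

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="jms">
    <target>
        <inSequence>
            <!-- one-way invocation: do not wait for a response from the backend -->
            <property action="set" name="OUT_ONLY" value="true"/>
        </inSequence>
        <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
    </target>
</proxy>
```

By default the proxy listens on a JMS queue named after the proxy service (here StockQuoteProxy) on the configured broker.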

Can we execute an ESB test class in WSO2 PATS (Platform Automated Test Suite) with ActiveMQ?

Yes, we can. We will now show you how to do this in PATS 1.7.0.

This test scenario focuses on sending messages to a JMS queue and consuming the sent messages.

Given below are the code snippets of two classes which can be used to achieve the task ahead of us.

package org.wso2.carbon.automation.platform.scenarios;

import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.apache.axiom.om.OMNamespace;
import org.testng.Assert;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.wso2.carbon.automation.engine.frameworkutils.FrameworkPathUtil;
import org.wso2.carbon.automation.extensions.servers.jmsserver.client.JMSQueueMessageConsumer;
import org.wso2.carbon.automation.extensions.servers.jmsserver.controller.config.JMSBrokerConfigurationProvider;
import org.wso2.carbon.automation.platform.scenarios.esb.ESBBaseTest;
import org.wso2.carbon.automation.test.utils.axis2client.AxisServiceClient;

public class JMSMessageConsumerTestCase extends ESBBaseTest {

    private String activemqIP;
    private String activemqPort;

    @BeforeClass(alwaysRun = true)
    protected void init() throws Exception {
        activemqIP = automationContext.getDefaultInstance().getProperty("ip");
        activemqPort = automationContext.getDefaultInstance().getProperty("port");
        // NOTE: the relative resource path appended here was lost from the original post
        OMElement synapse = loadClasspathResource(FrameworkPathUtil.getSystemResourceLocation());
        updateESBConfiguration(setConfigurations(synapse, activemqIP, activemqPort));
    }

    @Test(groups = {"wso2.esb"}, description = "Test proxy service with jms transport")
    public void testJMSMessageStoreAndProcessor() throws Exception {
        JMSQueueMessageConsumer consumer = new JMSQueueMessageConsumer(
                JMSBrokerConfigurationProvider.getBrokerConfiguration(activemqIP, activemqPort));
        AxisServiceClient client = new AxisServiceClient();

        for (int i = 0; i < 5; i++) {
            client.sendRobust(getStockQuoteRequest("JMS"), automationContext.getContextUrls()
                    .getServiceUrl() + "/JMSMessageStoreTestCaseProxy", "getQuote");
        }

        try {
            for (int i = 0; i < 5; i++) {
                // 05 messages should be in the queue
                Assert.assertNotNull(consumer.popRawMessage(), "Could not consume the " +
                        "message from the queue");
            }
        } finally {
            // consumer cleanup was elided in the original post
        }
    }

    public static OMElement getStockQuoteRequest(String symbol) {
        OMFactory fac = OMAbstractFactory.getOMFactory();
        OMNamespace omNs = fac.createOMNamespace("http://services.samples", "ns");
        OMElement method = fac.createOMElement("getQuote", omNs);
        OMElement value1 = fac.createOMElement("request", omNs);
        OMElement value2 = fac.createOMElement("symbol", omNs);

        value2.addChild(fac.createOMText(value1, symbol));
        value1.addChild(value2);
        method.addChild(value1);

        return method;
    }
}


package org.wso2.carbon.automation.platform.scenarios.esb;

import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;
import org.apache.axiom.om.util.AXIOMUtil;
import org.wso2.carbon.automation.engine.context.AutomationContext;
import org.wso2.carbon.automation.engine.context.TestUserMode;
import org.wso2.carbon.automation.test.utils.esb.ESBTestCaseUtils;

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;
import javax.xml.xpath.XPathExpressionException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.Iterator;

public class ESBBaseTest {

    protected AutomationContext automationContext = null;

    private OMElement synapseConfiguration = null;
    private ESBTestCaseUtils esbTestCaseUtils = new ESBTestCaseUtils();

    protected void init() throws Exception {
        automationContext = new AutomationContext("ESB", TestUserMode.SUPER_TENANT_ADMIN);
    }

    protected String getBackendURL() throws XPathExpressionException {
        return automationContext.getContextUrls().getBackEndUrl();
    }

    protected OMElement loadClasspathResource(String path) throws FileNotFoundException,
            XMLStreamException {
        OMElement documentElement = null;
        FileInputStream inputStream = null;
        XMLStreamReader parser = null;
        StAXOMBuilder builder = null;
        File file = new File(path);
        if (file.exists()) {
            try {
                inputStream = new FileInputStream(file);
                parser = XMLInputFactory.newInstance().createXMLStreamReader(inputStream);
                //create the builder
                builder = new StAXOMBuilder(parser);
                //get the root element (in this case the envelope)
                documentElement = builder.getDocumentElement().cloneOMElement();
            } finally {
                if (builder != null) {
                    builder.close();
                }
                if (parser != null) {
                    try {
                        parser.close();
                    } catch (XMLStreamException e) {
                        // ignore failures while cleaning up
                    }
                }
                if (inputStream != null) {
                    try {
                        inputStream.close();
                    } catch (IOException e) {
                        // ignore failures while cleaning up
                    }
                }
            }
        } else {
            throw new FileNotFoundException("File does not exist at " + path);
        }
        return documentElement;
    }

    protected void updateESBConfiguration(OMElement synapseConfig) throws Exception {
        if (synapseConfiguration == null) {
            synapseConfiguration = synapseConfig;
        } else {
            Iterator<OMElement> itr = synapseConfig.cloneOMElement().getChildElements();
            while (itr.hasNext()) {
                // merging of the new configuration elements was elided in the original post
                itr.next();
            }
        }
        esbTestCaseUtils.updateESBConfiguration(synapseConfig, automationContext.getContextUrls().getBackEndUrl(), automationContext.login());
    }

    public static OMElement setConfigurations(OMElement synapseConfig, String ip, String port) throws XMLStreamException {
        String config = synapseConfig.toString();
        config = config.replace("tcp://localhost:61616", "tcp://" + ip + ":" + port);
        return AXIOMUtil.stringToOM(config);
    }
}


<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://ws.apache.org/ns/synapse">
    <registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
        <parameter name="cachableDuration">15000</parameter>
    </registry>
    <proxy name="JMSMessageStoreTestCaseProxy"
           transports="https http">
        <target>
            <inSequence>
                <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
                <property name="OUT_ONLY" value="true"/>
                <log level="full"/>
                <store messageStore="JMSTestMessageStore"/>
            </inSequence>
        </target>
    </proxy>
    <sequence name="fault">
        <log level="full">
            <property name="MESSAGE" value="Executing default 'fault' sequence"/>
            <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
            <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
        </log>
    </sequence>
    <sequence name="main">
        <log level="full"/>
        <filter source="get-property('To')" regex="http://localhost:9000.*">
            <!-- filter body elided in the original post -->
        </filter>
        <description>The main sequence for the message mediation</description>
    </sequence>
    <messageStore class="" name="JMSTestMessageStore">
        <!-- the message store class name was elided in the original post -->
        <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
        <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
        <parameter name="store.jms.destination">JMSTestMessageStore</parameter>
    </messageStore>
</definitions>

Now let's dig into the finer details. This test class needs you to start your ESB and the ActiveMQ instance before running the test case; remember that in PATS you need to provide and set up the environment manually. Now let's imagine your ESB server runs on the default HTTPS port 9443 and the ActiveMQ instance runs on the default port 61616, i.e. both the ESB and the ActiveMQ instances run on a remote machine other than the machine which hosts PATS.

Next step is to access the remotely hosted ESB console.

You should be able to access the ESB management console by typing its URL in your browser.

To access the ActiveMQ console through your browser, you first need to start the ActiveMQ instance. To do so, navigate to the apache-activemq-5.5.1/bin folder and type the following at the command prompt.

.../apache-activemq-5.5.1/bin$ java -jar run.jar start

That's fine. Your ActiveMQ instance is up and running now!
Now the matter is to access its console from the browser. To do so, just type the console URL in the browser's address bar.

Note the two different port values used by ActiveMQ: 61616 for the broker transport and 8161 for the web console.

If everything is fine, you should now be able to view incoming messages being queued in the JMSTestMessageStore queue, and when you consume these messages they show as dequeued.

An example status image of the ActiveMQ console is given below.

Name                 Pending Messages   Consumers   Enqueued   Dequeued   Views                      Operations
JMSTestMessageStore  2                  1           5          3          Browse, Active Consumers   Send To, Purge, Delete

Srinath PereraImplementing Bigdata Lambda Architecture using WSO2 CEP and BAM

Most real-world Bigdata use cases involve both stream (real-time) processing and batch processing. To address both concerns, Nathan Marz introduced an architecture style called the “Bigdata Lambda Architecture”.

Following figure shows the outline of the Lambda Architecture, which includes batch, speed, and serving layers. Incoming data is sent to both the batch and speed layers, where the batch layer pre-calculates a historical view of the system and the speed layer calculates the most recent view of the system. The serving layer combines the two layers to satisfy the given queries.

You can find more information about Lambda Architecture from following.
  1. Big Data Lambda Architecture
  2. The Lambda architecture: principles for architecting realtime Big Data systems
  3. Applying the Big Data Lambda Architecture 
The following picture shows how we can implement the Lambda Architecture using WSO2 products.

As the picture depicts, you can use WSO2 BAM to implement the batch layer and WSO2 CEP to implement the speed layer. We send incoming data to both BAM and CEP using a high-performance data transport called "Data Bridge" that can achieve throughput of up to 300,000 events/second. BAM runs user-defined Hive queries to calculate the batch views, and CEP runs user-defined CEP queries to calculate the runtime views. Then we can combine the two views using “event tables” in WSO2 CEP, which map the batch views stored in a database into CEP windows, to answer the queries posed by users.

For example, the next figure shows how to implement the following query using lambda architecture. You can find more information in my Strata talk.

“If velocity of the ball after a kick is different from season average by 3 times of season’s standard deviation, trigger event with player ID and speed”

Here we combine CEP and BAM to answer the query.
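As a sketch, the speed-layer side of this query could be expressed as a Siddhi-style filter in WSO2 CEP. This is illustrative only: the stream and attribute names (BallKickStream, seasonAvg, seasonStdDev, playerId, velocity) are assumptions, and it presumes the batch layer's per-season statistics have already been joined onto each incoming event via an event table:

```
from BallKickStream[velocity > seasonAvg + 3 * seasonStdDev
                    or velocity < seasonAvg - 3 * seasonStdDev]
select playerId, velocity
insert into AlertStream;
```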

Dimuthu De LanerolleIn a Nutshell .... At a Glance .....

Enabling wire logs for wso2-apimgr 

1. To enable wire logs you need to open the logging configuration file under wso2am-1.x.0/repository/conf/.

2. Uncomment the following entry.
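The entry itself was not preserved in this post; it is usually the synapse wire logger shown below, assuming the standard file (verify the property name against your own file, since it can vary between versions):

```
log4j.logger.org.apache.synapse.transport.http.wire=DEBUG
```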

3. Now restart the server.

4. Send your request.

5. Now go to wso2am-1.x.0/repository/logs/wso2carbon.log file.

6. Identify the message flow,

    From the wire to API Manager

    DEBUG {org.apache.synapse.transport.http.wire} -  >> "POST /t/ HTTP/1.1[\r][\n]"
     DEBUG {org.apache.synapse.transport.http.wire} -  >> "Host: xx.xx.x.xx:8243[\r][\n]" {org.apache.synapse.transport.http.wire}  

    From API Manager to the wire

    DEBUG {org.apache.synapse.transport.http.wire} -  << "POST /user/ HTTP/1.1[\r][\n]" {org.apache.synapse.transport.http.wire}
     DEBUG {org.apache.synapse.transport.http.wire} - << "Host: yy.yy.yy.yy:9000[\r][\n]" {org.apache.synapse.transport.http.wire} 

Note : The best way to identify the message flow direction is to consider the host IP.

WSO2 ESB --> OUT_ONLY property

                <property action="set" name="OUT_ONLY" value="true"/>
This means that you are not expecting any response back from the endpoint.

Sandapa HandakumburaHow to configure SAML SSO for Salesforce using WSO2 Identity Server.

You need to follow these steps to configure SSO for Salesforce using WSO2 Identity Server 4.6.0 as the identity service provider.

Configuring Salesforce :

1. Create an account in

2. Log in to the account created above, go to Home > Domain Management > My Domain and create a new domain (Eg: https://sandapa-dev-

SalesForce takes some time to register the domain.

3. Go to Home > Security Controls > Single Sign-On Settings and enable SAML SSO. Configure the rest of the properties as given below.

SAML Enabled                         Checked
User Provisioning Enabled        Not Checked
SAML Version                          2.0
SAML Identity Type                 Username
SAML Identity Location            Subject
Issuer                                      https://localhost:9443/samlsso
Identity Provider Certificate       CN=localhost, O=WSO2, L=Mountain View, ST=CA, C=US Expiration: 13 Feb 2035 07:02:26 GMT

Identity Provider Login URL      https://localhost:9443/samlsso
Identity Provider Logout URL    https://localhost:9443/samlsso
Custom Error URL
Entity Id                             
Service Provider Initiated Request Binding      HTTP POST

You need to upload an Identity Provider Certificate when configuring SSO settings. You can use the
following command inside <IS_HOME>/repository/resources/security/ directory to create the certificate.

keytool -export -alias wso2carbon -file wso2.crt -keystore wso2carbon.jks -storepass wso2carbon

This will create a file wso2.crt and you can upload that to salesforce as the Identity Provider Certificate.

4. Go to Home > Domain Management > My Domain. Click the edit button in the Login Page Branding
section. Out of the two options 'Login Page' and 'SSO', select 'SSO' as the Authentication Service (Login Page - not checked, SSO - checked) and save the changes.

Configuring Identity Server :

1. Login to Identity Server, go to SAML SSO page and click on Register New Service Provider.

Use the following when filling out the above form.

Assertion Consumer URL            Use the Salesforce Login URL (found under SSO settings in Salesforce).      Eg : https://sandapa-dev-

Select the check boxes as shown above and click Register.

2. Create a user in Identity Server with 'Login' and other required privileges. Note that Salesforce accepts usernames in email format, and therefore this IS user should have a username in email format (Eg :

You have to follow these few steps before creating a user with an email address as the username in WSO2 IS.

a. Open carbon.xml in IS_HOME/repository/conf and uncomment :
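The snippet was not preserved in this post; the element to uncomment is typically the following, assuming the standard WSO2 IS 4.x carbon.xml:

```xml
<EnableEmailUserName>true</EnableEmailUserName>
```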


b. Open user-mgt.xml in IS_HOME/repository/conf and add the following property under the default LDAP user store manager configuration (org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager).

<Property name="UsernameWithEmailJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>

Using the above property you can change the accepted pattern of email addresses. By default it must be more than 3 characters and less than 30, but you can configure it as you wish.

c. Restart the server

3. Log in with the initially created account, go to Home > Manage Users > Users and
create a New User with the same username and password as the user we created in IS above.

Verifying SSO : 

Now log out from your account and try to access your domain in Salesforce using the Salesforce Login URL (Eg : https://sandapa-dev- You will get redirected to the WSO2 Identity Server login page.

Log in with the credentials of the newly created user ( You will then get redirected back to the
Salesforce home page of that user.

Geeth MunasingheJAX-RS Tryit

Wouldn't it be better if you could try your REST endpoints without writing a client? It would make your life much easier, and even the clients who have subscribed to your RESTful services could try them without a problem, which would save a lot of time.
Now you have a solution: WSO2 will be releasing WSO2 Application Server 5.3.0 at the end of April 2014, which includes this time-saving feature.

Shazni NazeerIntegrating Sharepoint with WSO2 Governance Registry

WSO2 Governance Registry (GREG) is a fully open source registry-repository for storing and managing service artifacts and other resources.

Microsoft Sharepoint is a web application platform comprising a multipurpose set of web technologies. Sharepoint has historically been associated with content and document management, but the latest versions can be used for a lot more: document and file management, social networking, websites, intranet, extranet, enterprise search, business intelligence, system integration, process integration and workflow automation. The latest version of Sharepoint is Sharepoint 2013.

In this guide I shall show you how we can integrate Sharepoint resources with WSO2 Governance Registry. This can be useful for governing, in GREG, resources and artifacts stored in Sharepoint.

In this guide I will create a Sharepoint site, add a blog to the site, and create a resource in GREG with the blog post's URL.

You can find Sharepoint 2013 installation instructions here.

Let's first create a site in the Sharepoint 2013. You can create a sharepoint site collection using the Sharepoint 2013 Central Administration, which you can find in the start menu. This will prompt you for your Administrative user name and password, which you would have configured at the installation time. Sharepoint 2013 Central Administration window will open up in the browser as shown below.

Sharepoint 2013 Central Administration

You can create a site collection by clicking the 'Create site collection' link and following the onscreen instructions. I've created a site collection called 'MySiteCollection' for demonstration, and I can access it by navigating to http://win-nh67lj7lsq4/sites/mysites. You can configure your site collection's URL while following the above mentioned instructions. When you navigate to your site collection with the configured URL, you will see a window similar to the following.

Site Collection
You can create a new post by clicking 'Create a post' link and following the onscreen instructions. I've created a blog post called 'My first blog post'. After creating you can see the blog post listed in the site collection as shown in the following screenshot.

A blog post in Sharepoint

You can view the blog post by clicking the blog link. Its URL in my case is http://win-nh67lj7lsq4/sites/mysites/Lists/Posts/Post.aspx?ID=3

OK. Now we can import this blog post as a resource in the WSO2 Governance Registry. This would allow us to govern certain aspects of this resource in the WSO2 Governance Registry.

If you haven't downloaded and run WSO2 Governance Registry, look here for the details. Navigate to the Management Console in your browser using the URL https://localhost:9443/carbon if you are running WSO2 Governance Registry on the local machine with the default port settings.

Now let's add a resource in the WSO2 Governance Registry corresponding to the blog post we created in Sharepoint. Log in to the Management Console, click Browse and navigate to the path where you want to store the blog post in WSO2 Governance Registry, let's say /_system/governance.

Click 'Add Resource' and select 'Import content from URL' as the method, as shown in the following picture. Provide the Sharepoint blog post URL and give it a name. This should import the blog post content into WSO2 Governance Registry.

In case you get an error, it is most probably because Sharepoint resources are protected: you can't access Sharepoint resources without providing authentication. Even if you try to access the WSDL in the browser by providing the link, you will be prompted for credentials. So how do we cope with this scenario in WSO2 Governance Registry? WSO2 products offer a configuration option that allows this kind of authentication to external resources on the network. Open carbon.xml, located in GREG_HOME/repository/conf. There you will find a tag named <NetworkAuthenticatorConfig>. Provide the following configuration (changing the pattern according to your requirement and providing your credentials).

Provide your Sharepoint username and password in the <Username> and <Password> tags. The <Pattern> tag allows any URL matching that pattern to be authenticated by the WSO2 product. Type can be either 'server' or 'proxy', depending on your setup.
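As a sketch, the configuration takes roughly the following shape (the pattern, type and credentials here are placeholders; adjust them to your Sharepoint host and Carbon version):

```xml
<NetworkAuthenticatorConfig>
    <Credential>
        <!-- Any URL matching this regex will be authenticated -->
        <Pattern>http://win-nh67lj7lsq4.*</Pattern>
        <!-- 'server' or 'proxy', depending on what is being authenticated -->
        <Type>server</Type>
        <Username>mySharepointUser</Username>
        <Password>mySharepointPassword</Password>
    </Credential>
</NetworkAuthenticatorConfig>
```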

After making this change, restart WSO2 Governance Registry and attempt the above import again. Now it should work.

Clicking the resource link in the tree view takes you to the following screen, where you can perform all the conventional governance operations for a resource: you can add lifecycles, tags and comments, all in WSO2 Governance Registry.

If you just want to save the blog post as a URI, you may do so by adding the Sharepoint blog URL as a URI artifact. This step is described further below, along with adding a WSDL.

I'll wrap up this post by adding the WSDL file of a web service stored in Sharepoint. The WSDL is of a default web service that lists the content of a site collection. The WSDL URL of this service, named List, is http://win-nh67lj7lsq4/sites/mysites/_vti_bin/Lists.asmx?wsdl. Replace win-nh67lj7lsq4/sites/mysites/ with your configured site URL.

Adding a WSDL in WSO2 Governance Registry would import any dependent schema, create a service artifact type and endpoints (if available). Let's create a WSDL artifact in WSO2 Governance Registry.

Click 'Add WSDL' in the left pane as shown below.

Provide the WSDL URL and a name for it. This will import the WSDL and create the corresponding artifacts; in this case it creates a List service and an endpoint. You can find the service by clicking 'Services' in the left pane. The endpoint dependency can be seen by clicking 'View Dependency' in the WSDL list, as shown below.

The above described how the WSDL and its dependencies are imported. You might want to store just the WSDL URL in WSO2 Governance Registry rather than the imported content. This can be done by adding the WSDL as a URI. To do that, click the 'Add URI' button in the left pane. This should bring up a window as shown below.

Provide the WSDL URL for the URI, select WSDL as the Type, provide the name List.wsdl (include the .wsdl extension) and click Save. Now go to the List URI; you should be able to see the WSDL listed there as shown below.

Click List.wsdl. This will bring up the following window, with the dependencies and associations listed on the right side.

This was a very basic guide on how to integrate some Sharepoint resources with WSO2 Governance Registry. You can do a lot more with WSO2 Governance Registry; I recommend you download it, play with the product and get more details from the official WSO2 documentation.

Hope this guide was useful.


Chanaka Fernando: Simple Git commands for getting started with Git

Git is an increasingly popular source control system that most organizations are moving to. With the extensive use of Git, developers sometimes need to learn it quickly. In this blog post, I will cover ten of the most useful Git commands for getting started with Git.

git init - Initialize a local git repository

git clone "repo_url" - Clone an existing git repository to your local machine

git add -A dir_name - Add an entire directory with its files to your local git repository

git pull - update your existing local repository with a remote repository

git commit -m "comment" - Commit your local changes to your local git repository

git push - Push your local changes to the remote git repository which you have cloned this repository from.

Updating your own repository with changes committed to a remote(upstream) repository

git remote add upstream "repo_url" - Add the upstream

git fetch upstream - Fetch the changes from upstream to your local repository

git merge upstream/master - Merge the changes with your existing local repository

git push - Push the changes to your own remote repository

Dhananjaya Jayasinghe: WSO2Con Asia - Tutorial - "Advancing Integration Competency and Excellence with the WSO2 Integration Platform" Presentation Slides

Ajith Vitharana: Overwrite property in registry mount.

The following configuration section is a part of the mount configuration.

<mount path="/_system/config" overwrite="true|false|virtual">

The overwrite property can be one of the following three values:

1. true

If overwrite is true, the existing resources/collections at the mount location will be removed before establishing the mount.

2. false

If overwrite is false, the existing resources at the mount location will NOT be removed before establishing the mount, but an error will be logged if the same resources already exist.

3. virtual

If overwrite is virtual, the existing resources at the mount location will be kept as they are, but the mount will be added on top of them. If the existing resource is a mount or a symbolic link, overwriting will not happen.
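For context, a full mount definition in registry.xml looks roughly like the sketch below; 'instanceid' must match the id of a configured remote registry instance, and the target path shown is only an example:

```xml
<mount path="/_system/config" overwrite="virtual">
    <!-- id of the <remoteInstance> this mount points to -->
    <instanceId>instanceid</instanceId>
    <!-- path on the remote registry that backs this mount -->
    <targetPath>/_system/nodes</targetPath>
</mount>
```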

Srinath Perera: WSO2Con Talk: Accelerating Mobile Development with Mobile Enterprise Application Platforms (MEAP)

Following are the slides for my WSO2Con talk about the upcoming MEAP product. The talk describes WSO2 MEAP, a product that lets users develop and manage the complete lifecycle of mobile application development. MEAP includes support for both mobile app development and back-end service development.

You can download the slides from here.

Dinuka Malalanayake: REST API Documentation with Swagger

This post focuses on REST API documentation with Swagger. REST APIs are very popular in modern technology stacks, and most technology solutions come with REST implementations. Developers who build REST APIs face the problem of how to document them and how to expose them simply to end users. Swagger is a good solution to this problem, so let's talk about Swagger integration with a JAX-RS API. I assume you are familiar with Java JAX-RS implementations; if not, you should first get some knowledge of JAX-RS.

1. You have to add the Swagger Maven artifact to your project.

2. Secondly, you have to mention the base path in your web.xml. This URL will be used for backend service calls from swagger-ui.
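With swagger-core 1.x and Jersey, this is commonly done with a bootstrap servlet in web.xml; a sketch follows, where the base path and version values are placeholders for your own deployment:

```xml
<servlet>
    <servlet-name>JerseyJaxrsConfig</servlet-name>
    <servlet-class>com.wordnik.swagger.jersey.config.JerseyJaxrsConfig</servlet-class>
    <init-param>
        <param-name>api.version</param-name>
        <param-value>1.0.0</param-value>
    </init-param>
    <init-param>
        <!-- Base URL the swagger-ui will call back to -->
        <param-name>swagger.api.basepath</param-name>
        <param-value>http://localhost:8080/myapp/rest</param-value>
    </init-param>
    <load-on-startup>2</load-on-startup>
</servlet>
```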

3. Finally, you have to annotate your REST APIs with Swagger annotations (for example, @Api and @ApiOperation).

4. Then you have to get swagger-ui and host it on your local Tomcat.

5. Now you can simply view and invoke the REST APIs you developed using the Swagger UI.



Malintha Adikari: Getting started with CodeIgniter on OpenShift

CodeIgniter Framework

From the CodeIgniter website, "CodeIgniter is a powerful PHP framework with a very small footprint, built for PHP coders who need a simple and elegant toolkit to create full-featured web applications. If you're a developer who lives in the real world of multi-tenant hosting accounts and clients with deadlines, and if you're tired of ponderously large and thoroughly undocumented frameworks, CodeIgniter is right for you."
Here are several reasons why the CodeIgniter framework might be right for you:
  • You need a framework with a small footprint
  • You need exceptional performance
  • You need broad compatibility with standard hosting accounts that run a variety of PHP versions and configurations
  • You want a framework that requires nearly zero configuration
  • You want a framework that does not require you to use the command line
  • You don’t want to be forced to learn a templating language
  • You need clear, thorough documentation


Getting started with CodeIgniter on OpenShift

If you haven’t already created an OpenShift account, head on over to the website and get started.
Getting started with CodeIgniter on OpenShift is very quick and easy. I have created a quickstart guide on github that will walk you through the steps.

Step 1: Create a PHP application

rhc app create -a ci -t php-5.3

Step 2: Add database support

Issue the following command to embed and activate MySQL for your application. At the time of this writing, OpenShift supports MySQL, MongoDB, and PostgreSQL as available datastores.
rhc app cartridge add -a ci -c mysql-5.1

Step 3: Download and install the CodeIgniter framework

cd ci
git remote add upstream -m master git://
git pull -s recursive -X theirs upstream master
git push

Step 4: Start coding

For more information on how to begin using the CodeIgniter framework, head on over to their excellent user guide, which will walk you through the process.

Heshan Suriyaarachchi: Custom Log Formatter

Following is the source for a custom log formatter.

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.logging.Formatter;
import java.util.logging.Handler;
import java.util.logging.LogRecord;

public class CustomFormatter extends Formatter {

    private static final DateFormat df = new SimpleDateFormat("dd/MM/yyyy hh:mm:ss.SSS");

    public String format(LogRecord record) {
        StringBuilder builder = new StringBuilder(1000);
        builder.append(df.format(new Date(record.getMillis()))).append(" - ");
        builder.append("[").append(record.getSourceMethodName()).append("] - ");
        builder.append(record.getLevel()).append(" - ");
        // Format the actual log message (substituting any parameters).
        builder.append(formatMessage(record));
        builder.append(System.lineSeparator());
        return builder.toString();
    }

    public String getHead(Handler h) {
        return super.getHead(h);
    }

    public String getTail(Handler h) {
        return super.getTail(h);
    }
}

Following is how the log messages will turn up in your log file.

25/03/2014 08:46:17.770 - [] - INFO - thread05 compositekey set time : 5 ms
25/03/2014 08:46:17.781 - [] - INFO - thread03 compositekey set time : 11 ms
25/03/2014 08:46:17.783 - [] - INFO - thread05 compositekey set time : 12 ms
25/03/2014 08:46:17.785 - [] - INFO - thread03 compositekey set time : 4 ms
25/03/2014 08:46:17.787 - [] - INFO - thread05 compositekey set time : 4 ms
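To see a formatter like this in action, you can feed it a hand-made LogRecord. Below is a self-contained sketch: it repeats a minimal version of the formatter so it compiles on its own, and it sets an empty source method name explicitly to match the '[]' in the sample output above.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.logging.Formatter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

public class FormatterDemo {

    // Minimal copy of the formatter above so this example is self-contained.
    static class CustomFormatter extends Formatter {
        private final SimpleDateFormat df = new SimpleDateFormat("dd/MM/yyyy hh:mm:ss.SSS");

        @Override
        public String format(LogRecord record) {
            return df.format(new Date(record.getMillis())) + " - ["
                    + record.getSourceMethodName() + "] - "
                    + record.getLevel() + " - "
                    + formatMessage(record) + System.lineSeparator();
        }
    }

    // Build one formatted line for a hand-made record.
    static String formatSample() {
        LogRecord record = new LogRecord(Level.INFO, "thread05 compositekey set time : 5 ms");
        record.setSourceMethodName(""); // avoid caller inference; matches the empty [] above
        return new CustomFormatter().format(record);
    }

    public static void main(String[] args) {
        System.out.print(formatSample());
    }
}
```

In a real application you would register the formatter on a handler instead, e.g. handler.setFormatter(new CustomFormatter()), either in code or via a file.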

Thilini Ishaka: How to write a SaaS app on Stratos

Wikipedia defines software as a service (SaaS) as "on-demand software provided by an application service provider" and "a software delivery model in which software and associated data are centrally hosted in the cloud."

Gartner defines SaaS as; "Software that is owned, delivered and managed remotely by one or more providers. The provider delivers software based on one set of common code and data definitions that is consumed in a one-to-many model by all contracted customers at anytime on a pay-for-use basis or as a subscription based on use metrics."

Typical Application Vs SaaS


SaaS Technical Requirements
1. Elastic
       Scales up and down as needed.
       Works with the underlying IaaS.
2. Self-service
       De-centralized creation and management of tenants.
       Automated Governance across tenants.
3. Multi-tenant
       Virtual isolated instances with near zero incremental cost.
       Implies you have a proper identity model.
4. Granularly Billed and Metered
       Allocate costs to exactly who uses them.
5. Distributed/Dynamically Wired
       Supports deploying in a dynamically sized cluster.
       Finds services across applications even when they move.
6. Incrementally Deployed and Tested
       Supports continuous update, side-by-side operation, in-place testing and
       incremental production.

A cloud platform offers,

  • Automated Provisioning.
  • Versioning.
  • Lifecycle management.
  • Infrastructure as a Service.
  • Virtualization.
  • Federated Identity Management.
  • Clustering.
  • Caching.
  • Billing and Metering.

WSO2 Stratos


  • WSO2 Stratos is WSO2’s Cloud Middleware Platform(CMP).
  • A complete SOA platform.
  • In private cloud or public cloud.
  • 100% Open Source under Apache licence.
  • Can run on top of any Cloud IaaS.
  • WSO2 StratosLive is the PaaS offering by WSO2.
  • Significantly ahead of the competition.
  • Stratos is the only 100% Open Source, Open Standards option.
  • Based on OSGi - modular, componentized, standard.

Cloud Native features supported in Stratos
  • Elasticity.
  • Multi-tenancy.
  • Billing and Metering.
  • Self Provisioning.
  • Incremental Testing.

Super Tenant SaaS Applications Vs Tenant SaaS Applications

Tenant SaaS applications do not have certain permissions and cannot access or modify other tenants' data, whereas super tenant applications have full control and permissions.

Tenant SaaS Web Applications
 Configure security in web.xml
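As a generic sketch of container-managed security in web.xml (the standard servlet form, not WSO2-specific; the role name and URL pattern below are placeholders to adapt to your tenant roles):

```xml
<security-constraint>
    <web-resource-collection>
        <web-resource-name>SaaS App</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <!-- Placeholder role; map this to your tenant roles -->
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
</login-config>
```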


Super tenant SaaS applications can access tenants' user-level information. They use org.wso2.carbon.context.PrivilegedCarbonContext to access tenant information such as the registry, cache, tenant manager, queue, etc.


[1]A sample is available at
[2]Creating a SaaS App with the Multi-Tenant Carbon Framework:
[3]Webinar on Building and Deploying SaaS Applications:

Darshana Gunawardana: Provision users to Google using WSO2 Identity Server

The latest release of WSO2 Identity Server is a couple of weeks away, and it's powered with a whole lot of features like,

  • seamless identity federation with new Authentication Framework
  • new User View Portal and improvements to server management console… and most importantly,
  • simple way to provision users to different domains with new Identity Provisioning Framework

To provision users to a Google domain, the last feature, the Identity Provisioning Framework, is the module working behind the scenes. As prerequisites, the following details about the Google domain are needed later for configuration:

  1. A Google domain
  2. A service account for above domain (which provides service account email and a private key)
  3. Email of the Service account owner

If you have any trouble finding the above details, I plan to come up with a post on that; until then, drop a comment.

So let's get started by downloading Identity Server.

Step 1 : Download the latest alpha pack of Identity Server from here. Then extract it to a folder, which we'll refer to as <IS_HOME> from here onwards.

Step 2 : Open the <IS_HOME>/repository/conf/carbon.xml file, search for the EnableEmailUserName element and set it to true as follows,


Changing above property is optional, but it will allow us to simplify the demonstration.

Step 3 : Open the file under <IS_HOME>/repository/conf/security/ and edit the following elements accordingly.

# Enable provisioning

# Remove salesforce provisioning connector from the registration

# Specify Google domain wants to provision

# Specify how to obtain primary mail

# Delete Identity.Provisioning.Connector.Google.Required.Field.Claim.familyName entry
# or keep it value empty as follows

# Specify authentication details

Step 4 : Now start the server by running the startup script in <IS_HOME>/bin/ (on Windows, use wso2server.bat)

Step 5 : Create a new user using the management console. To do so,

Or simply use https://localhost:9444/carbon/user/add-step1.jsp for the direct access.

Here, enter a username in email format for your domain; for example, a username under my Google domain is valid. Then enter a strong password, re-enter it and click Finish.

If everything was set up accordingly, the user will now be created in Identity Server upon successful user creation in the Google domain.

Step 6 : Go to the Google login page and try to log in with the given username and password.

That's it. It was only a couple of configurations to switch on the functionality and obtain authentication details, and now you can create Google accounts for Identity Server users.


PS : If you want to configure more than one domain, you can do so by registering a new connector under a different name and including a separate set of configurations for that connector. For example, if we register a new connector called "GoogleConnector2", we would need to add/change the following properties with the needed values.

# Add new provisioning connector "GoogleConnector2" to connector registration

# Add new set of configurations for new domain

# Attributes mandatory for the user object

# Claim Mapping for required fields

# Default values for required fields

# Authorization configs to access Google APIs

# Claim for storing the Google UserID

Lalaji Sureshika: [WSO2 AM] APIStore User Signup as an approval process

In versions of WSO2 APIManager before 1.6.0, any user who could access the running APIStore was allowed to come and register to the app. But there can be a requirement that, instead of allowing any user to sign up by him/herself alone, a privileged user first approves the registration and only then is the user allowed to complete it. The same requirement can apply to application creation and subscription creation as well. To fulfill that, we have introduced workflow extension support for WSO2 APIManager; you can find an introductory post on this feature in my previous blog post, "workflow-extentions-with-wso2-am-160".

In this blog post, I'll explain how to achieve a simple workflow integration with the default shipped resources of WSO2 APIManager 1.6.0 and WSO2 Business Process Server 3.1.0, targeting the "user-signup" process.


  • First download the WSO2 APIManager 1.6.0[AM] binary pack from product download page.
  • Extract it and navigate to the {AM_Home}/business-processes directory. You'll find three sub-directories; browse the "user-signup" directory, and you'll notice a BPEL and a human task inside it. These were created with WSO2 Business Process Server 3.1.0, so download BPS 3.1.0 from the product download page and extract it.
  • For further reference, we'll keep the APIM offset value as 0 and the BPS offset value as 2.
              For BPS -> set the port offset to 2 in carbon.xml [{BPS_Home}/repository/conf]
              For AM -> keep the default value
  • Copy the /epr directory found in {AM_Home}/business-processes into the repository/conf folder of the Business Process Server.
  • Then copy the human task file located at {AM_Home}/business-processes/user-signup/HumanTask to the {BPS_Home}/repository/deployment/server/humantasks directory.
  • Then copy the BPEL file located at {AM_Home}/business-processes/user-signup/BPEL to the {BPS_Home}/repository/deployment/server/bpel directory.
  • Then start Business Process Server 3.1.0 [BPS]. Once you log in to the BPS management console, you'll see that the BPEL and human task are successfully deployed, as follows.
deployed user-signup bpel

deployed user-signup human task

  • Now we have configured the BPS server, and it's time to configure AM to trigger the user-signup process deployed on the BPS side.
  • Edit the WSO2 APIManager configuration file to enable web-service-based workflow execution. For this we need to edit api-manager.xml, located inside {AM_Home}/repository/conf. All workflow related configurations are located inside its workflow extensions configuration section. Replace the existing content of the WorkFlowExtension section for user-signup as follows.
   <UserSignUp executor="org.wso2.carbon.apimgt.impl.workflow.UserSignUpWSWorkflowExecutor">
           <Property name="serviceEndpoint">http://localhost:9765/services/UserSignupProcess</Property>
           <Property name="username">admin</Property>
           <Property name="password">admin</Property>
           <Property name="callbackURL">https://localhost:8243/services/WorkflowCallbackService</Property>
   </UserSignUp>

  • Then start the AM server. Browse to the APIStore [https://localhost:9443/store]. Try registering a new user from the signup link shown on the /store page. Say a user called lalaji tries to register as an APIStore subscriber.

  • Once the user submits the signup data, a message similar to the following, saying "User account awaiting Administrator approval", will pop up.

  • If the user lalaji tries to log in, it will fail, as the user-signup process hasn't completed yet and is waiting for approval from the administrator.

  • However, the related business process has now been triggered. You can view the created process instance by navigating to the BPS management console [https://localhost:9445/carbon] and clicking Business Processes -> Instances in the left menu, as shown below.

  • The BPEL we deployed in WSO2 BPS has a simple flow, as below:
trigger the process -> execute the human task [Approve/Reject] -> send the response to the APIM callback endpoint

  • Now the question is, how do we execute the human task? Do we provide a custom UI for this on the WSO2 BPS side? No; we have introduced a new web app called workflow-admin on the APIM side to achieve this.
  • Navigate to the workflow-admin [https://localhost:9443/workflow-admin] web app from a web browser and log in as a user with admin rights.
         NOTE - In the sample human task we have written, only users having the admin role are allowed to approve/disapprove task requests. So by default, only users with the admin role will be able to log in to the workflow-admin app. But if you need to plug your own BPEL and human task into APIM, allowing different user roles to accept/reject task requests, you can still use the new human task with this web app, and the allowed role can be configured from the web app itself.

And make sure to share the user stores between WSO2 AM and WSO2 BPS.

  • Once a user with the admin role logs in to the workflow-admin web app, he will see the list of pending tasks waiting for approval by admin users. The logged-in user can assign a task to himself, start the task, approve/reject the request and finally complete the task.

  • Let's say the admin user approved the above requested task from the workflow-admin UI. The triggered process will then complete by calling the APIM callback endpoint, and the user who sent the signup request will be able to log in to the APIStore successfully.

In a similar manner, you can try the default shipped BPELs for the subscription and application creation processes triggered from the APIStore UI. For more info, please refer to the readme.txt located in the {AM_Home}/business-processes directory.

NOTE - You can create your own BPELs and human tasks with different flows on WSO2 BPS and then use them with APIM. You can find more information on how to write business processes with WSO2 BPS by referring to [1, 2].

Additionally, you can plug your own custom workflow executor into APIM without using WSO2 BPS. For that, please refer to [3].

Chamila Wijayarathna: Retrieving Network Usage Information from NetFlow Version 9 Records

NetFlow is a feature that was introduced on Cisco routers that gives the ability to collect IP network traffic as it enters or exits an interface. By analyzing the data provided by NetFlow, a network administrator can determine things such as the source and destination of traffic, class of service, and the cause of congestion. NetFlow consists of three components: flow caching, flow collector, and data analyzer. In NetFlow, the router forwards details of network usage as UDP packets to a specified port of a destination.

In this blog, I am going to explain how to retrieve some important information from Netflow version 9 records. 
In earlier versions of NetFlow, like version 5 and version 6, the NetFlow record had a fixed format [1] [2]. NetFlow version 9 deviates from this fixed format and introduces template-based NetFlow records. Using templates provides several key benefits:

  • New features can be added to NetFlow more quickly, without breaking current implementations.
  • NetFlow is "future-proofed" against new or developing protocols, because the NetFlow version 9 format can be adapted to provide support for them.
  • NetFlow version 9 is the IETF standard mechanism for information export.
  • Third-party business partners who produce applications that provide collector or display services for NetFlow will not be required to recompile their applications each time a new NetFlow feature is added; instead, they may be able to use an external data file that documents the known template formats.

When using NetFlow version 9, the router sends template information with the NetFlow records. How often templates are sent can be configured at the router.
Since there is no fixed format, retrieving information from NetFlow version 9 is not as easy as in earlier versions. For collection purposes, I used the Java NetFlow Collector-Analyzer (jnca) library [3]. I downloaded its source code from [4] and imported it into Eclipse.
Jnca collects NetFlow records and writes the information to an SQL database. But in this blog I'm only writing about retrieving data, so after downloading the code, I removed the code corresponding to SQL database access.
First we have to set listening host and port. This can be done by changing '' and 'net.bind.port' at etc/ 
Packet decoding happens in the collector. Before going to decode packets, let's have a look at them using Wireshark.

Every NetFlow packet contains one or more flow sets; each flow set is in a unique template and contains one or more NetFlow records. Now let's see what details are available in the records of each template.

Template 256

Template 260 

Template 261 

Template 263

In the above images, we can see the details available in each template. Each of these templates is used for a different purpose, e.g.:
IPv4 Traffic Templates 
       Template ID 256 – IPv4 Standard 
       Template ID 257 – IPv4 Enterprise 
IPv4 with NAT Traffic Templates 
       Template ID 260 – IPv4 with NAT Standard 
       Template ID 261 – IPv4 with NAT Enterprise 
IPv6 Traffic Templates 
       Template ID 258 – IPv6 Standard 

       Template ID 259 – IPv6 Enterprise [5]

Now let's see how we can retrieve the desired information from NetFlow by decoding packets. Decoding happens in the packet constructor. We should consider the case 'flowsetId > 255' in the if-else ladder. At the top, it loads the template for the flowset.

Template tOfData = TemplateManager.getTemplateManager().getTemplate(this.routerIP, (int) flowsetId);

This works only if the collector has already received details of the templates; otherwise it returns 'null'. The fields that can appear in records are defined in a field-definition class. When the above line is executed, it maps the offset and length for each field to its field definition. (NOTE: the offset and length of a field differ from template to template.) In jnca, unfortunately, this only happens for some popular fields like 'Source IP', 'Destination IP', 'timestamp', etc. For these fields, we can take the value directly.
f = new V5_Flow(RouterIP, buf, p, tOfData);
f.getSrcAddr().toString()  //Returns Source Address
f.getDstAddr().toString() // Returns Destination Address

But for other fields whose offset and length are not mapped by the library, we can map them manually by observing them in Wireshark and hard-coding them. For example, suppose we need to find the amount of data transferred. It's available as 'Initiator Octets' and 'Responder Octets' in flows with ID 263. But jnca does not provide any functionality to take these values directly, so first we need to find the offset and length of these fields.
In Wireshark we have three main windows: the top window shows the list of captured packets, the middle window shows details of a selected packet, and the bottom window shows byte-level details. Double-clicking on a flow in the middle window highlights the bytes related to that flow in the bottom window. By doing this, we can find the beginning of the byte stream related to that flow.

Then, by clicking on 'Initiator Octets' in the middle window, we can find where the relevant byte stream is, and from that the offset and length. In the same way we can find the values for 'Responder Octets'. Now we can add a little code to calculate the amount of data transferred and save it as size.

        size = Util.to_number(buf, off + 46, 4) + Util.to_number(buf, off + 50, 4);

Here, for 'Initiator Octets', offset = 46 and length = 4; for 'Responder Octets', offset = 50 and length = 4.
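The offset/length arithmetic can be sketched in plain Java. Here, toNumber is a stand-in for jnca's Util.to_number-style helper (assumed behavior: unsigned big-endian read), and the buffer contents are made-up sample values:

```java
public class NetflowBytes {

    // Stand-in for jnca's Util.to_number: read `len` bytes at `off`
    // as an unsigned big-endian integer, the way NetFlow v9 encodes counters.
    static long toNumber(byte[] buf, int off, int len) {
        long value = 0;
        for (int i = 0; i < len; i++) {
            value = (value << 8) | (buf[off + i] & 0xFF);
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] record = new byte[64];
        // Made-up sample: Initiator Octets (offset 46, length 4) = 1500 ...
        record[48] = 0x05;
        record[49] = (byte) 0xDC;
        // ... and Responder Octets (offset 50, length 4) = 300.
        record[52] = 0x01;
        record[53] = 0x2C;
        long size = toNumber(record, 46, 4) + toNumber(record, 50, 4);
        System.out.println(size); // prints 1800
    }
}
```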
In the same manner, we can take any detail that is available in Netflow. 
I think this will help you clear up problems you face when working with NetFlow v9. If you have any more doubts, please leave a comment; I'll try to help with what I know.


Thilina Piyasundara: A few best practices on setting up a Puppet 3 master/agent environment

Puppet is a configuration management tool like Chef and CFEngine, used to manage the configuration of large, dynamically changing infrastructures such as clouds efficiently. Puppet 3 is the latest release from PuppetLabs, but some operating system distributions still do not include its packages in their repositories, so we need to do some manual steps to install Puppet 3.

In this post I will explain a few best practices to follow when installing a Puppet master-agent environment. I have configured Puppet master and agent environments several times and arrived at this sequence, which I think is a good way of doing it. But please note this is not "the" best way, and it is not recommended to use it as-is in a production environment. This post also will not cover best practices for writing Puppet manifests/modules.

Set a domain name for the environment
First of all, use a domain name for your environment. Say you are going to set up a Puppet environment for the ABC company; you can set the domain for that to the company's domain or a per-data-center sub-domain (data center 1 of ABC company). If you are doing it for testing purposes, it's advisable to use '': it is a domain name reserved for documentation and example purposes that no one can register, so it will avoid many DNS resolution issues.

Give a proper FQDN for each hosts hostname.
Set a fully qualified domain name (FQDN) for each and every host in the Puppet environment, including the Puppet master node. It will avoid a lot of SSL-related issues. It is not enough just to give a hostname, because most systems add a domain (via DHCP), which will introduce issues. Run 'hostname' and 'hostname -f' and see the difference.

Use 'puppet' as a prefix for the Puppet master's hostname, so it would be like puppet.< domain >. Give the agents hostnames under the same domain.

Use a UUID when creating the hostnames for Puppet agents, then add the service name (apache, mysql) or the node number (node002, if running multiple services on a single server). That name must match the node definitions in 'site.pp' (or 'nodes.pp').

Use the 'hostname' command and edit the '/etc/hostname' configuration file to change the hostname. You can do it like this, assuming that the host is ''

# hostname
# echo '' >/etc/hostname

Give an IP address to each FQDN.
It is a must to give an appropriate IP address to each hostname/FQDN. At a minimum, the system should be able to resolve the IP address of each relevant FQDN via the '/etc/hosts' file, which should have the following entries:

127.0.0.1   localhost < local fqdn >
< puppet master ip >   < puppet master fqdn >

For example, if you take the '' node, its '/etc/hosts' file should look like this:

127.0.0.1   localhost
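As a concrete sketch of the two required entries, using the documentation-reserved example.com domain and the TEST-NET address 192.0.2.10 as stand-ins (written to a scratch file here rather than the real '/etc/hosts'):

```shell
# $HOSTS_FILE stands in for /etc/hosts in this demo.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
127.0.0.1    localhost apache.node001.dc1.example.com
192.0.2.10   puppet.dc1.example.com
EOF
cat "$HOSTS_FILE"
```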

Check the system time and timezone information
Both the puppet master and the agents should have the same system time and time zone. Use the 'date' command to check them, and synchronize the system time with a well-known time server. The commands differ a bit from one distribution to another.

Download and install puppet repositories from PuppetLabs website
PuppetLabs provides an apt and a yum repository. Most distributions do not carry puppet 3 at the moment, so we need to add these repositories manually.

Please refer to the "Using the Puppet Labs Package Repositories" article and install the appropriate repository for your system. Then update your repository lists.

Install puppet master 
After completing all the above steps, install the puppet master using the package management system (apt/yum).

It's better to go ahead with the default settings, but you need to make a few changes to some configuration files for it to work as the master of a master-agent environment. Use an 'autosign.conf' file to automatically sign agents' SSL certificate requests, but avoid using ' * ' in it; better to restrict it to your own domain, like this:


It's better to add 'server=puppet.< domain >' to the 'main' section of 'puppet.conf'. On Debian-based distros, change the 'start' option to 'yes' to start the puppet master. After configuring everything, restart the puppet master service. Open port 8140 in the system firewall; check this especially if you are using a RedHat distribution.
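The two changes can be sketched as follows. This runs against a scratch directory (on a real master the files live under '/etc/puppet'), and the example.com domain is an assumption:

```shell
# Demo against a scratch directory; on a real master use /etc/puppet.
PUPPET_DIR=$(mktemp -d)

# autosign.conf: sign CSRs from our own domain only, never a bare '*'.
echo '*.example.com' > "$PUPPET_DIR/autosign.conf"

# puppet.conf [main]: every node, master included, talks to this server.
cat >> "$PUPPET_DIR/puppet.conf" <<'EOF'
[main]
server=puppet.example.com
EOF
```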

Track changes
Use a version control system like git or subversion to track changes to puppet manifests. Use branching and versioning/tagging features to do it effectively.

Install puppet agent
First of all, it is better to have the puppet master installed. Then check the hostname and the DNS resolution of both the agent's hostname and the puppet master. Then install the puppet agent using the package management system.

You have to make a few changes so it can connect to the puppet master server. Edit '/etc/puppet/puppet.conf' and add 'server=puppet.< domain >' to the 'main' section. On Debian-based distros, change the 'start' option to 'yes' in the '/etc/default/puppet' configuration file. Then restart the puppet agent.

Test the system
Add this into your puppet masters '/etc/puppet/manifests/site.pp' file.
node default {
    file { '/tmp/mytestfile.t':
        owner   => 'root',
        group   => 'root',
        content => "This file was created by puppet.\n",
        ensure  => present,
    }
}

Then run 'puppet agent -vt' on the agent and check the '/tmp' directory.

Automated script
I wrote a script to automate this, and you can get it from here on github. It supports Debian, RedHat and SLES distributions. If you have any issues, please report them there.

Srinath PereraTools and Techniques to make sense of Bigdata

A couple of weeks back, I did a talk titled "Big Data Analysis: Deciphering the Haystack", which is about the different tools available for bigdata analysis.

I categorised the tools based on the following taxonomy of what they do. Note that there are tools for streaming (a.k.a. realtime) analytics as well as for store-and-process analysis.

I also categorised the analysis techniques, or in other words the ways of making sense of data, into three sub-topics based on their goals:
  1. To know what happened - this is basic analytics.
  2. To explain what happened - this is detecting patterns, e.g. data mining.
  3. To forecast what will happen - this is forecasting models, e.g. regression, numerical models (e.g. weather models that use simulation) and other machine learning algorithms.

Following is the slide deck I used; please check it out for more information.


Srinath PereraSlides for the talk Internet of Things and Big Data

A couple of weeks back, I did a talk about the Internet of Things and Big Data at the Export Development Board auditorium. Following are the slides for the talk. You can find a writeup about the seminar from and the recording is available on YouTube.


Srinath PereraStrata 2014 Talk:Tracking a Soccer Game with Big Data

In January I did a talk at O'Reilly Strata SF 2014 about how we solved the DEBS grand challenge, which involved processing data collected from sensors in the ball and players' boots in a football game. I had blogged about some of the details before. Following is the slide deck I used for the talk.


Also following is the abstract.

Mobile devices, sensors and GPS are driving the demand to handle big data in both batch and real time. This presentation discusses how we used complex event processing (CEP) and MapReduce based technologies to track and process data from a soccer match as part of the annual DEBS event processing challenge. In 2013, the challenge included a data set generated by a real soccer match in which sensors were placed in the soccer ball and players’ shoes. This session will review how we used CEP to implement the DEBS challenge and achieved throughput in excess of 100,000 events/sec. It also will examine how we extended the solution to conduct batch processing using business activity monitoring (BAM) on the same framework, enabling users to obtain both instant analytics as well as more detailed batch processing based results.

Srinath PereraView, Act, and React: Shaping Business Activity with Analytics, BigData Queries, and Complex Event Processing

Following are the slides for my talk at WSO2Con San Francisco 2013. It talks about how to use WSO2 BAM and WSO2 CEP to build big data solutions that handle both realtime processing as well as batch processing.

View, Act, and React: Shaping Business Activity with Analytics, BigData Queries, and Complex Event Processing from Srinath Perera

Following is the abstract.

Sun Tzu said “if you know your enemies and know yourself, you can win a hundred battles without a single loss.” Those words have never been truer than in our time. We are faced with an avalanche of data. Many believe the ability to process and gain insights from a vast array of available data will be the primary competitive advantage for organizations in the years to come.

To make sense of data, you will have to face many challenges: how to collect, how to store, how to process, and how to react fast. Although you can build these systems from bottom up, it is a significant problem. There are many technologies, both open source and proprietary, that you can put together to build your analytics solution, which will likely save you effort and provide a better solution.

In this session, Srinath will discuss WSO2’s middleware offering in BigData and explain how you can put them together to build a solution that will make sense of your data. The session will cover technologies like thrift for collecting data, Cassandra for storing data, Hadoop for analyzing data in batch mode, and Complex event processing for analyzing data real time.

Srinath PereraWhere the mind is without fear

Following is a poem by Rabindranath Tagore, which to me is in the same class as "Invictus", "The Man in the Arena", "If" or "Desiderata (Desired Things)". (If you have not read those, check them out as well.) It is amazing how closely it maps to the ideals of a "free society" and ideas like opensource. I love the part "Where the clear stream of reason has not lost its way Into the dreary desert sand of dead habit".

Where the mind is without fear and the head is held high
Where knowledge is free
Where the world has not been broken up into fragments by narrow domestic walls
Where words come out from the depth of truth
Where tireless striving stretches its arms towards perfection
Where the clear stream of reason has not lost its way Into the dreary desert sand of dead habit
Where the mind is led forward by thee into ever-widening thought and action
Into that heaven of freedom, my Father, let my country awake.

Chathurika MahaarachchiWorking with ESB XSLT Mediator

What is ESB XSLT Mediator ? 

The ESB XSLT mediator is used to mediate messages when dynamic (not predefined/static) requests reach an ESB proxy. The XSLT mediator applies a specified XSLT transformation to a selected element of the current message payload.

Working with XSLT Mediator

This blog post explains how XSLT mediator works, with a simple example.

Let's assume we have a simple calculator service which is defined to work only with "a" and "b" payload elements for all four operations. Assume this service is hosted in WSO2 Application Server.




But what happens if we send the payload elements as "c" and "d"? The calculator service won't recognise them and will give an error response. As a solution, we can use the ESB XSLT mediator.

This is the XSLT file created to solve the above problem; save it in the ESB local entries.

<xsl:stylesheet xmlns:xsl="" xmlns:p="" version="1.0">
      <xsl:output method="xml" encoding="utf-8" indent="yes"></xsl:output>
      <xsl:template match="p:add">
         <p:addition xmlns="">
               <xsl:value-of select="p:a"></xsl:value-of>
               <xsl:value-of select="p:b"></xsl:value-of>
         </p:addition>
      </xsl:template>
</xsl:stylesheet>
Please note this XSLT representation is only for the "add" operation.

Create a proxy service in ESB by adding XSLT mediator as follows.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="">
   <target>
      <inSequence>
         <xslt key="xsltNew"/>
      </inSequence>
      <endpoint>
         <address uri="http://localhost:8280/services/Calculator/"/>
      </endpoint>
   </target>
   <publishWSDL uri="http://localhost:9764/services/Calculator?wsdl"/>
</proxy>

Now create a new project in SoapUI by adding the WSDL of the created calculator  service.

Invoke the calculator service by sending the payload elements "c" and "d". The request reaches the ESB with "c" and "d"; the XSLT mediator in the ESB proxy converts them to "a" and "b" and sends the message to the Application Server where the calculator service is hosted. Inside the Application Server the calculation is done and the response is sent back to the ESB. The XSLT mediator in the ESB proxy service then recognises that this is the response for "c" and "d" and sends the answer to the client.

Anusha RuwanpathiranaSAML2.0 bearer token from OAuth2 - WSO2 API Manager 1.6

An enterprise application exchanges a SAML2.0 bearer token, obtained through authentication, for an OAuth2 token from the API Manager.

These few steps explain, step by step, how to exchange a SAML2 token for an OAuth2 token.

1. Configuring Trusted Identity Provider
Go to the Trusted Identity Provider configuration and create a new Trusted Identity Provider.

Figure 1 : Trusted Identity Provider Configuration

1.1 Create the identity provider public certificate

Create a pem file from the keystore. Here, I have used the wso2carbon.jks file in {product_home}/repository/resources/security:

keytool -exportcert -alias wso2carbon -keypass wso2carbon -keystore wso2carbon.jks -rfc -file test-user.pem

Now you can upload the test-user.pem file as the public certificate.

2. Create OAuth Application

Create an application to manage OAuth tokens: Main --> OAuth

You can untick the Code and Implicit check boxes since those are not mandatory fields.

Figure 2: OAuth Management Configuration
When you double click on the OAuth app, you can see the properties like this

3. SAML assertion creation

You can implement the SAML assertion functionality in your own code, or else you can download the sample SAML2AssertionCreator.jar from this location.

Execute the jar as follows to create the SAML assertion string:

java -jar SAML2AssertionCreator.jar SAML2AssertionCreator admin wso2carbon.jks wso2carbon wso2carbon wso2carbon

NOTE: JDK 1.7 is required to execute the jar.

Now, copy the SAML assertion string for the next step.

4. Use the following cURL command to generate an access token

4.1. Using Curl command

curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=<to-be-replace-generated-token>&scope=PRODUCTION" --basic -u "<to-be-replace-client-id>:<to-be-replace-client-secure>" -H "Content-Type: application/x-www-form-urlencoded"

4.2. Using RESTClient

Tools -->RESTClient

You can use the following parameters to create the POST request in RESTClient.
Figure 3: RESTClient request configuration

You might get a server_error exception; possible causes:

1. expired token
2. incorrect or missing parameters

This was tested only on API Manager 1.6.

The above flow can be used with IS 4.6 as well, but in step 4 the 'scope' property is not required in the request, since scope is used only by the API Manager.

Related reading:

Paul FremantleInternet of Things - protocols and access keys

I've just read this article from Mark O'Neill on the 10 concerns for the Internet of Things. Mark brings up some very interesting aspects and concerns. I'd like to comment on two of those: protocols and access keys.

His primary concern is protocol proliferation. I agree this is an issue. Mark explicitly mentions CoAP, MQTT, AMQP and XMPP. Interestingly, he doesn't mention HTTP, which I have found to be heavily used by devices, especially the new generation of Raspberry Pi based systems. Many Arduinos also use HTTP.

I will admit to a strong bias. I think that MQTT is the best of these protocols for IoT devices, with CoAP a distant second.

Let's get XMPP out of the way. I love XMPP. I think it's a fantastic protocol. Do I want to create XML packets on my Arduino? Er... nope. Even on 32-bit controllers, there is still the network traffic to consider: suppose I'm using a GPRS connection and I have thousands of devices deployed: minimizing network traffic is important for cost and efficiency, and XMPP was not designed for that.

AMQP is not an appropriate protocol for IoT devices and was not designed for that. It is designed for "the efficient exchange of information within and between enterprises". It was certainly not designed for lightweight, non-persistent, non-transactional systems. To that end, my own system (WSO2) will be providing efficient bridging for AMQP and MQTT to enable lightweight systems to get their data into wider enterprise contexts. I also demonstrated HTTP to MQTT bridging with the WSO2 ESB at the MQTT Interop held last week at EclipseCon.

How about CoAP vs MQTT? Firstly, CoAP is more appropriate to compare to MQTT-SN. It is UDP only, and designed to emulate a RESTful model over UDP. My biggest concern with CoAP is this: most people don't actually understand REST - they understand HTTP. If I had a dollar for every time I've come across supposedly RESTful interfaces that are really just HTTP interfaces, I'd be a rich man!

Interestingly, despite MQTT having been around for 10 years, the Google Trend shows that it has only recently hit the public notice:
However, as you can see, it has quickly overtaken CoAP. In terms of traffic, it is a clear winner: every Facebook mobile app uses MQTT to communicate with the Facebook servers.

The other area I'd like to comment on is access keys. I agree this is a big issue, and that is the reason I've been working on using OAuth2 access keys with MQTT and IoT devices. I recently gave talks about this at FOSDEM, QCon London, and EclipseCon.  The EclipseCon talk also covered a set of wider security concerns and the slides are available here. OAuth2 and OpenID Connect are important standards that have got incredible traction in a short period of time. They have evolved out of 10+ years of trying to solve the distributed, federated identity and access control problems of the Internet. 

In my presentation I strongly argued that passwords are bad for users, but worse for devices. Tokens are the correct model, and the OAuth2 token is the best available token to use at this point. There was considerable interest in the MQTT interop session on standardizing the use of OAuth2 tokens with the protocol. 

My personal prediction is that we will see MQTT and HTTP become the most-used IoT protocols, and I strongly urge (and hope) that OAuth2 tokens will become the de-facto model across both of these.

Chathurika MahaarachchiHow to get a JSON response for XML request using SoapUI

This is a small tip on how to get a JSON response from an XML request while you are working with SoapUI.

1. Download and start the latest version of Application Server (at the moment it's 5.2.1). In order to get a JSON response you need to change the following configurations.

2. Add the following parameter to 'axis2_client.xml' in the '{AppServer_HOME}/repository/conf/axis2/' directory, because it defaults to false:

<parameter name="httpContentNegotiation">true</parameter>

3. Now restart the server.

4. Deploy the Jaxrs_Basic service to simulate this.

5. Open SoapUI and create a new project for the Jaxrs_Basic service.

6. Now the new REST project is created, and you need to set the media type to "application/xml".

7. Add a header name-value pair: Name: "Accept", Value: "application/json".

You can see the response converted to JSON.
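The same content negotiation can be exercised with curl instead of SoapUI. This is a sketch only: the URL assumes the Jaxrs_Basic sample's customer resource on the default HTTP port, so adjust it to your deployment; the function is defined but not invoked here.

```shell
# Hypothetical invocation: with httpContentNegotiation enabled, the same
# resource returns JSON when the Accept header asks for it.
get_customer_json() {
  curl -s -H "Accept: application/json" \
       "http://localhost:9763/jaxrs_basic/services/customers/customerservice/customers/123"
}
```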

8. Similarly, you can convert a JSON request to XML.

Madhuka UdanthaUser Account Lock/Unlock in WSO2 IS

The Identity Server can be configured to lock a user when a configurable number of login attempts is exceeded, or via the lock/unlock user account services.

Now we can try this Out.

1. Update the below parameters in wso2is-4.6.0\repository\conf\security\

2. Identity.Listener.Enable=true









I changed these values to make it easy to create user passwords for the demo.

Make the below change in 'carbon.xml' to try the services in SOAP UI.


3. Then start the server.

4. Add the following claims and correctly map the 'attributes' with existing user store/LDAP

by navigating into 'Home > Configure > Claim Management > Claim View'


I used description, pager and streetAddress as the mapping attributes.



Now Time for Demo

6. Create tenant 'Home -> Configure -> Multitenancy -> Add New Tenant'



7. For the tenant, we need to add a user and a role for the demo, so log in with the tenant domain admin.

8. Add a role with the ‘login’ permission, called ‘loginRole’.


9. Create a user for the tenant with the above role.




10. Now log in to IS as ''


11. Open SOAP UI and open 'unlockUserAccount' service in https://localhost:9443/services/UserIdentityManagementAdminService?wsdl.

12. Call service as below

<soapenv:Envelope xmlns:soapenv="" xmlns:ser="">
   <soapenv:Header/>
   <soapenv:Body>
   </soapenv:Body>
</soapenv:Envelope>


13. Now log in as the tenant admin and view the ‘madhuka’ profile; you can see it has been locked.


14. Try to log in as ‘’


Yep, the account has been locked!


15. Now I will unlock the user madhuka via ‘unlockUserAccount’.


Now we try to log in as madhuka again. You are in…

Here is my demo console log.


Now it is your turn to play with user lock and unlock in WSO2 IS.

Samisa AbeysingheApache Stratos - The best PaaS to use

Apache Stratos (Incubating) is the Platform as a Service (PaaS) project from the Apache community. WSO2 donated its cloud PaaS project to Apache, and it has evolved over time with nurturing from the Apache community during incubation.

What is Unique about Stratos?

As a PaaS framework, there are some key elements of Apache Stratos (incubating) that the cloud computing industry is taking an interest in.

The Tenancy Model

Stratos uses an in-container multi-tenancy (MT) model. This means, for example, within an application server MT is available for applications. In other words, a container can host multiple tenants.
This tenancy model maximizes resource utilization across all tenants. A single instance of a container can cater for multiple tenants.
This model also yields a very good multi-tenancy density. There is no need to allocate memory (or other resources) per tenant.

Auto Scaling

The auto scaling model supported by Apache Stratos (Incubating) provides better control over how the platform scales. It provides multi-factor auto scaling, where factors such as the load average of instances, the memory consumption of instances and in-flight request counts can be used to auto scale the cloud.
Stratos also has support for scaling non-HTTP traffic. The scaling algorithms are adaptable to any transport, thanks to the loosely coupled design.

Easy to use Cloud Bursting

Multi-cloud bursting can be used to burst into other public, private or hybrid clouds to scale up capacity at peak times. The advantage of Stratos in cloud bursting is that the architecture supports seamless topology management across multiple clouds when bursting happens. The topology management coordinates between the auto scaler and the cloud controller, making DevOps life easy, with nothing to do manually. There is also a comprehensive set of tools to monitor the system status, including unified logging and business activity monitoring (BAM) tools.

Inherent Identity Management

The platform keeps track of who does what on whose behalf, making it easy to manage, monitor and detect all identity-related aspects. The platform comes with an identity management solution embedded in it.

Why Should a Developer Care?

Easy to Get Up and Running

The Stratos PaaS is easy to get up and running quickly. A developer will be able to run and test the PaaS framework on a single machine to try it out, and can leverage the existing set of pre-built cartridges such as PHP, MySQL, Tomcat, .NET, Node.js, WordPress and a set of WSO2 middleware products to build SaaS applications.

Able to Add New Cartridges in Quick Time

If the programming language, framework, application server or database you are looking for is not available, the Stratos platform is able to support it in quick time via the cartridge plugin model. The implementation model for new cartridges is well defined and simple to use.

Easy to Develop Multi-Tenant SaaS Applications

It is easy to develop and test multi-tenant enabled SaaS applications on top of Stratos. The programming model is natural, and the developer does not have to change anything that he/she would do when developing an on-premise application. Just develop the application and deploy it onto Stratos, and you will have a multi-tenant enabled SaaS application.

Why should an Enterprise Care?

For enterprises, the Stratos PaaS framework yields the lowest Total Cost of Ownership (TCO). The key here is the in-container multi-tenancy model, where resource utilization is maximized across tenants. The setup, maintenance and DevOps costs are also minimal with Stratos, thanks to the unique architecture that makes DevOps effortless. An enterprise can run and maintain the PaaS to suit and cater to multiple audiences using the polyglot cartridge model, which can support multiple databases, programming languages, frameworks, operating systems and even legacy systems.
An enterprise can start small with its initial cloud projects on the Stratos PaaS. Then, as cloud usage and needs increase, the platform can be expanded based on the ROI that the PaaS brings in. This ensures the optimum and transparent utilization of the resources allocated to cloud projects.

Who else Should Care?

Independent software vendors (ISVs) looking to build vertical SaaS offerings can leverage the Stratos PaaS as the platform of choice for delivering multi-tenanted SaaS applications with low cost and effort. Given the loosely coupled component architecture and the extension and plug-in points, it is very easy to use the Stratos PaaS in an OEM model. Given the wide variety of cartridges and infrastructure as a service (IaaS) layers that Stratos supports, it is ready to be re-deployed in multiple setups with heterogeneous systems. So ISVs can leverage a build-once, deploy-anywhere model with Stratos.

Amani SoysaReal time log event analysis with WSO2 BAM(CEP features)

One of the major problems when it comes to managing/monitoring distributed systems is not being able to detect when your system suddenly starts giving issues. Let's say you have 100 servers and you need to detect when a fatal error appears in the system so you can act right away. This is nearly impossible if you do not have proper monitoring tools.
WSO2 BAM, with the WSO2 CEP features, has the perfect mechanism to monitor logs in real time and alert the relevant parties when an exception or suspicious error occurs.

In this sample I will be using WSO2 Appserver and WSO2 BAM in order to demonstrate the monitoring and alerting capabilities of WSO2 BAM 2.4.0.

  • WSO2 Appserver will be used to send log events to BAM.
  • WSO2 BAM will do all the monitoring and alerting when logs are captured.
Send Logs to BAM

To send logs, all you need to do is go to WSO2_AS_HOME/repository/conf/, add LOGEVENT to the root logger, and start the Appserver. (Please make sure your BAM server is up and running before starting the Appserver.)
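Assuming the logging config is the standard Carbon 'log4j.properties' under that directory (an assumption about the filename), the change is a one-line edit to the root logger; keep the appenders that are already listed and just append LOGEVENT:

```properties
# WSO2_AS_HOME/repository/conf/log4j.properties (sketch)
log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE, LOGEVENT
```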

To test if the logs are successfully sent to BAM, you can log in to cassandra explorer and see if there is a new column family created under EVENT_KS keyspace

e.g. log.AS.0.2013.1.12

Configuring WSO2 BAM for REAL TIME analytics

In this demo, I will send an email if an ERROR log occurs in WSO2 Appserver. Since we will be using the mail transport in BAM to send email alerts to recipients, we have to enable the mail transport in BAM. To do that, go to repository->conf->axis2->axis2-client.xml and add the email configurations:

<transportSender name="mailto" class="org.apache.axis2.transport.mail.MailTransportSender">
       <parameter name="mail.smtp.from"></parameter>
       <parameter name="mail.smtp.user">wso2esb.mail</parameter>
       <parameter name="mail.smtp.password">wso2mail</parameter>
       <parameter name=""></parameter>
       <parameter name="mail.smtp.port">587</parameter>
       <parameter name="mail.smtp.starttls.enable">true</parameter>
       <parameter name="mail.smtp.auth">true</parameter>
</transportSender>

Here you can give your own email configurations. 

Restart/Start the BAM server.

Assuming the logs are getting published to BAM, let's see how we can capture these log events for real time analytics.

Step 1 - Creating Event Adapters

In order to do real time analytics we need to create an execution plan. For that we need two event adapters:

  1. Input adapter - to capture log events coming to BAM (in  this case it will be a wso2event).
  2. Output adapter - to send emails
To create the adapters you need to log in to the BAM management console. Under configurations, there will be "Event Processor Configs"; add the input and output event adapters as shown below.

Input Event adapter

Output Event Adapter

Step 2 - Creating Stream Definitions

Now that we have created the event adapters, we need to create a stream definition to capture the LogEvent in order to do complex event processing. In the appserver log event, these are the attributes we have:

  • Meta Data
    • clientType {String}
  • Payload Data
    • tenantID  {String}
    • serverName {String}
    • appName  {String}
    • logTime  {Long}
    • priority {String}
    • message {String}
    • logger {String}
    • ip {String}
    • instance {String}
    • stacktrace {String}
To create the stream definition, go to main tab and under create stream definition you can create the log event stream as shown below.

Step 3 - Creating the Execution Plan

In the execution plan we will specify the input stream and write a CEP query (an SQL-like query) for the event stream.

Go to Create Execution Plan and give a suitable name for the execution plan. Select the needed stream, give an alias, and click on import after selecting the stream. In our CEP query we will analyze events, and if an event has an error, we will send it to an output stream.


from LogEvents[priority == "ERROR"]
select message,stacktrace,serverName
insert into ExceptionStream

After creating the query, we need to add the exported stream (the stream to which we are sending error logs). According to our CEP query, the exported stream name should be ExceptionStream; give that value as the exported stream name and select "Create Stream Definition" to create the ExceptionStream. This stream will be auto-generated by looking at the CEP query, as shown below.

Once we create the ExceptionStream, select it as the exported stream and create a new formatter. This formatter is used to specify the email body and email-related information such as the subject, to address, etc. Give the output mapping type as text so we can give the content of the email message body inline.

Email Body

Error Occurred in {{serverName}} – {{message}}

In this body we take the message, stacktrace and server name from the output stream (ExceptionStream) and add a readable message for the email body.

Add the event formatter and save the execution plan. Now we have successfully created the event trigger to monitor error logs for WSO2 Appserver. You can test this by invoking a service with an error.

If you want more in-depth information on real-time log event analytics, you can follow the following screencast for more details.

sanjeewa malalgodaHow to configure WSO2 API Manager to be accessed by multiple devices (from a single user and token) at the same time


This is very useful when we set up production deployments used by many users. According to the current architecture, if a user logs out from one device and revokes the key, then all other calls made with that token will get authentication failures. In that case the application should be smart enough to detect the authentication failure and request a new token. When a user logs into the application, that user will provide a username and password, so we can use that information plus the consumer/secret keys to retrieve a new token once an authentication failure is detected. In our honest opinion, this should be handled from the client application side. If we allowed users to have multiple tokens at the same time, that would cause security issues, and users would end up with thousands of tokens they cannot maintain. It could also be a problem for usage metering and statistics.

So the recommended solution for this issue is having one active user token at a given time, and making the client application aware of the error responses sent by the API Manager gateway. You should also consider the refresh token approach for this application: when you request a user token you get a refresh token along with the token response, which you can use to refresh the access token.

How this should work

Let's assume the same user logs in from a desktop and a tablet. The client should provide a username and password when logging into both the desktop and tablet apps. At that time we can generate a token request with the username, password and consumer key/secret pair, and keep this request in memory until the user closes or logs out of the application (we do not persist this data anywhere, so there is no security issue).

Then, when the user logs out from the desktop, or the application on the desktop decides to refresh the OAuth token first, the tablet is left with a revoked or inactive OAuth token. But here we should not prompt for the username and password, as the client has already provided them and we have the token request in memory. Once the tablet app detects the auth failure, it will immediately send a token generation request and get a new token. The user will not be aware of what happens underneath.
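The refresh flow the client app would perform on an authentication failure can be sketched as follows. The token endpoint URL and the PRODUCTION scope are assumptions for a default API Manager gateway, and the placeholders must be replaced with values from the original token response; the function is defined here, not invoked.

```shell
# Hypothetical helper: exchange a refresh token for a new access token.
refresh_access_token() {
  local refresh_token=$1 client_id=$2 client_secret=$3
  curl -k --basic -u "${client_id}:${client_secret}" \
       -H "Content-Type: application/x-www-form-urlencoded" \
       -d "grant_type=refresh_token&refresh_token=${refresh_token}&scope=PRODUCTION" \
       "https://localhost:8243/token"
}
```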

Chathurika MahaarachchiHow to capture SOAP / XML messages with WireShark

When you are debugging web service calls, sometimes you may want to capture the requests and responses. Using WireShark you can see exactly what requests are coming in and what responses are going out from the service. This blog explains how to capture SOAP/XML messages using WireShark.

First you need to start a web service on your machine. Here the service used is Jaxrs_Basic; deploy it on your machine.

1. Open SOAP UI , get the WADL of Jaxrs_Basic service and create a new SOAP UI project.
2. Open WireShark on your machine by running “sudo wireshark”.
3. Create a “Capture Filter” as follows.
  • Select Capture from the menu bar → Interfaces → select the interface where your web service is running.
  • Click Options in the “Capture Interface” menu → it will take you to the “Wireshark capture options” window.
  • Click on the selected interface; it takes you to the edit interface settings window.
  • Set your capture filter as displayed in the screenshot.

4. Enable network name resolution (this makes identifying traffic much easier).

Now we are ready to capture the required packets.

5. Invoke the Jaxrs_Basic service using SOAP UI.

6. Here you can see the packets being received in WireShark.

7. Now you need to filter the SOAP/XML messages. To do this, type the filter “xml” in the filter box and click Apply. This will show the SOAP messages for the invoked web service.

8. Select a message → right click on it → select “Follow TCP Stream” from the menu.

9. You can view the SOAP/XML message clearly.

Chris HaddadTracking Soccer Game Play with Big Data Streaming, Internet of Things, and Complex Event Processing

Teams gain a competitive edge by analyzing Big Data streams. By using complex event processing and MapReduce-based technologies, teams can improve performance. By establishing a feedback loop, your team can visualize business activity, understand impact, and take positive actions.


Big data Life Cycle

In one powerful Big Data streaming example, soccer match activity data captured by embedded sensors was streamed and analyzed to understand how player actions impact soccer play.

Soccer players, race car drivers, energy buyers, stock traders, and digital marketers gain a competitive edge by acting on strategic and tactical intelligence recommendations.   By connecting intelligent controllers to a multitude of Internet of Things (IoT) devices (i.e. soccer balls, shoes, cellphones, and turbines) and connected business information feeds (i.e. clearing houses, weather services, and ad networks), teams can aggregate Big Data, create Big Data streams, and trigger events that influence workflow and enhance performance.


In the DEBS challenge, teams bridged soccer game play to the Internet of Things and analytics.  Sensors placed in shoes, goal mitts, and soccer balls streamed spatial, vector, and temporal information to Big Data stream receivers.  The solution processed data stream events in real time to understand play actions and visualize game play (through heat maps).  Teams can use the analytic visualization to recommend performance improvements.




sanjeewa malalgodaHow to fix issue in WSO2 API Manager due to missing API resource

When you visit the subscription page in the API Manager store, you might see the following error.

org.wso2.carbon.registry.core.exceptions.ResourceNotFoundException: Resource does not exist at path /_system/governance/apimgt/applicationdata/provider/admin/api-name/1.0.0/api 
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.get(

The reason for this issue is that the API resource is not present at the given path (/_system/governance/apimgt/applicationdata/provider/admin/api-name/1.0.1/api). Here, subscriptions are runtime data stored in the API Manager database, while APIs are metadata stored in the registry. This error log says the application has a subscription to an API which is not in the registry.

This can happen due to multiple reasons.

01. Deleting the registry API resource without deleting the API from the publisher UI (deleting from the publisher UI removes both the registry data and the entries in the API Manager tables).

02. Changing the governance space after creating subscriptions.

To fix this issue you can do one of the following.

If the governance space was changed, you can mount the old space again. Then the missing API will appear at the given path.

You can create the missing api-name/1.0.1 again in the system. For this, delete the API from the API Manager tables and create the API from the publisher UI.

You can delete the subscriptions associated with the missing API (api-name/1.0.1). For this you need to delete all associated entries from the API Manager tables carefully (or you can drop the API Manager database and create a new one, if you are not on a production system).

Actually, we don't need to delete the entire database. Run the following DB queries against your API Manager database. It is always recommended to run these queries on a test environment first and then apply them to production servers.

First we need to get the API_ID associated with the problematic API. To get it, run the following query:

SELECT API_ID FROM AM_API where API_NAME = 'api-name' and API_VERSION = '1.0.1'

Then we need to delete all subscriptions associated with that API. Let's say we got 3 as the result of the above query. Run the following query to delete all associated subscriptions.
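As a sketch (the table and column names below are from API Manager's schema as I recall it and may differ between versions, so verify against your own database before running):

```sql
-- Remove every subscription row for the API whose API_ID we looked up above
-- (3 in this example). Back up the database before running this.
DELETE FROM AM_SUBSCRIPTION WHERE API_ID = 3;
```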


Shazni NazeerDownloading and running WSO2 API Manager

WSO2 API Manager is a fully open source product which allows you to create, publish and manage many aspects of APIs, such as life cycle management, versioning, monetization, governance and security. A business has high potential for growth using WSO2 API Manager, as it allows managing APIs in a SOA environment in a decentralized manner.

Let's start getting an insight into API Manager by downloading and running it on our local machine.

Download the latest WSO2 API Manager from

Extract the WSO2 API Manager into a directory. I'll refer to this directory as APIM_HOME. Navigate into the bin directory and run the WSO2 API Manager as follows.
$ ./wso2server.sh      in Linux
$ ./wso2server.bat in Windows
You will see some content in the console, and finally the Management console URL, the API Publisher URL and the API Store URL.

Open your favorite browser and go to the Management console URL. 9443 is the default port, unless you have changed the offset.


Note : If you are running more than one WSO2 product at a time, make sure you increment the <Offset> in PRODUCT_HOME/repository/conf/carbon.xml to avoid conflicting ports.
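For reference, the offset lives under the <Ports> element of carbon.xml; an offset of 1 shifts every default port up by one (so 9443 becomes 9444):

```xml
<Ports>
    <!-- All default ports are incremented by this value -->
    <Offset>1</Offset>
</Ports>
```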

You may be prompted with a security exception in the browser; add the security exception. After logging in with the default user name (admin) and password (admin), you will be redirected to the API Management console shown below. This is the administrative console used to manage aspects of the API.

API Manager Management Console

WSO2 API Manager also provides two more views, namely the API Publisher and the API Store. The API Publisher is where an API developer creates an API; he/she can also manage the life cycle of the API there. The API Store is where a typical API user sees the APIs available from the publisher; he/she can subscribe to APIs and start using them in their own applications. The following two images show the API Publisher and API Store windows of API Manager 1.6.0.

WSO2 API Manager Publisher
WSO2 API Manager Store

This is just a startup guide for WSO2 API Manager. I recommend you play with the product and refer to the online documentation for further details.


John MathonCIO: Enterprise Refactoring and Technology Renewal: How do you become a profit center to the Enterprise?


CIOs are being asked more and more to consider becoming a profit center, or at least to reuse the assets they have to reduce costs.   They are also being asked to be more agile and to deliver products faster.   Some CIOs are being asked to be more entrepreneurial, to let people try new ideas at low cost.    The reasons for this are the disruptive changes happening in technology and business around Big Data, Cloud, Mobile, Social and APIs.  More and more companies are making substantial revenues from APIs or mobile apps, for instance.  If not moving to being a profit center, CIOs are being asked to drive new business, or to keep existing business by being more agile and delivering a connected business.   I call this technology renewal “Enterprise Refactoring.”

The central way to do this is to turn the existing services in your organization into reusable entities that can be leveraged by new products, outside organizations, partners or internal projects.  Sometimes people are told the way to do this is to implement API Management or PaaS.  API Management gives you several things your organization needs to segment, account for and enable reuse of enterprise services, so that you can increase usage and charge for services you currently provide for free.   IaaS gives you tools to leverage infrastructure as a service and to segment, account for and enable reuse of enterprise hardware infrastructure.   PaaS gives you the same abilities at the application level.   These things all fit together conceptually, but making them fit together in reality is not something the industry has accomplished; it is where WSO2 is going and has gone to some extent.    Let's start at the bottom and understand what goes into the tools for making APIs a profit center or encouraging their reuse.

I will disclose up front that API Management by itself is not the complete answer to getting reusable APIs.   It is a basic building block and incorporates a number of needed pieces to give you the flexibility and accountability you need to start leveraging your existing services better.   A later blog will address the larger problem of reuse in general.  API Management has 3 basic types of users, each with user stories that provide value to that role.

1) The Publisher:

a) Allow you to publish APIs and to iterate your APIs rapidly to meet demand of customers

b) Account for usage of your services and be able to bill and account and manage the growth of demand for the services

c) Tailoring the usage tiers so that you can offer different QOS or performance to different users at different price points or different roles

d) Tools to learn about usage so that you can move your products forward intelligently based on demands you see in the market

e) Security to guarantee that customers only see the information they are entitled to see

f) Tools to enable you to customize the experience that the community of subscribers gets


2) The Subscriber

a) A customizable “store” they can search to find resources to solve problems and the information they need to use the service.

b) A social environment where they can learn about where the services are going, interact with them and influence their future, inform others of the pitfalls, best use cases, examples and tricks, and just generally learn what OTHERS think of the services

c) Tools to help you use the services easily

d) Support for when problems happen with the services

e) A place to manage their subscriptions,  to understand their usage and cost


3) The Gateway

a) Deliver the services, secure them, scale them as needed to meet demand, collect information on usage patterns and information needed to account and/or bill users of the services



API Management gives you the infrastructure you need to offer API services.   This can be inside the corporation to groups within the company, or to outside groups.  These services can be used by mobile apps, web apps or other services.   API Management allows you to share these assets scalably and to bill for them, but it doesn't guarantee their success.

The same things can be expected of IaaS, which provides the basics you need to monetize infrastructure, and PaaS, which provides the basics you need to monetize application and development environments.  The last piece is mobile, a platform to deliver and manage mobile applications.

WSO2 has all these pieces integrated and provides a platform you can implement in pieces or in combination to renew your enterprise architecture and to add a profit center capability to your infrastructure, applications, data, web services, APIs and mobile applications.

Those tools provide you the basics.   The next blog will describe what the “basics” don't give you and what you need to do in your company to implement successful reuse and renewal of technology.

sanjeewa malalgodaHow to revoke access tokens programmatically In API Manager 1.4.0

In this article we will see how to revoke tokens manually using a curl command or any other client. There is a REST endpoint to revoke access tokens; you need to provide the required information with the revoke request. See the following sample curl commands; you should be able to implement your own client based on these requests. You need to pass the token to be revoked, the consumer key and the authorized user as parameters.

Login to Publisher:
curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin'

Revoke Tokens:
curl -X POST -b cookies http://localhost:9763/publisher/site/blocks/tokens/ajax/revokeToken.jag -d "action=revokeAccessToken&accessToken=hLmK_5TvX6f2NiSXkZ3h_l2NpnIa&consumerKey=<consumer-key>&authUser=<username>"

Replace <consumer-key> and <username> with your consumer key and the authorized user name.


John MathonFollowup: Cloud Security and Adoption

businesses say security is concern for cloud adoption

I produced a blog entry which generated some interest on the cloud security topic.   I wrote it after reading several articles reminding me that the most common excuse I've heard for enterprises avoiding the cloud was the concern for “Cloud Security.”

The blog entry in question is:  Security in the Cloud Age

After I wrote the article I got a number of responses.  A CSO friend of mine said it was balderdash.  He sent me comments to edit and soften my article to make it more “precise,” and I really appreciate his comments.  I also spoke on the topic to a group of architects at a major financial institution.    After doing more research on this topic I have become convinced of the following:

1) The central argument, that the cloud is simply too valuable to use the security excuse, is valid. In my opinion the value to enterprises of using the cloud in all its manifestations is too important for most businesses to ignore or remain aloof from.   The followers on this will indeed be the losers.

2) There is even more to be concerned about regarding security than I realized. I believe that, frequently and for possibly many reasons, people conspire to minimize concerns about security in general.   The fact is there are far more hacking and general security attacks on companies than I had realized.

My slideshare presentation:

outlines a very small number of the break-ins and attacks over the last year alone.    The vast majority of these attacks happened at private companies, and the most serious by far, in my opinion, happened at private companies NOT in the cloud.

The ratio that seemed to hold is that the number of attacks at private vs cloud companies was 3:2.  Since the number of private companies vastly exceeds the number of cloud companies, you can see that a very high percentage of all attacks are occurring at private companies.  Private companies in 38 of 50 states are required by law to disclose break-ins that affect individual consumers.  So it is not clear how many private companies are required to report successful security attacks, or how many are required but don't.   The ratio of attacks on private vs cloud companies could be much worse than 3:2; we just don't know.  Also, the losses were tangible at private companies, with actual monies lost in some cases and very significant reputational loss, as in the case of Target and others.  Hundreds of millions of user email addresses, passwords, credit card numbers and other significant personal identifying information were lost by private companies.    There were 4 million health records lost when a hospital simply had a number of computers lifted off the campus.  Hospitals don't have the security procedures of, for instance, a bank for computer equipment.

This didn't surprise me, but I guess I naively believed that even if lots of things are possible, very few of those worst case scenarios ever emerge.   Several hundred million email addresses means a large percentage of people have been compromised.  4 million health records is a lot.  For me these numbers were shocking.

We live in an era where privacy is being attacked from all angles.

There are an enormous number of hacking attempts trying to gather personal information about us from companies.   On top of that is the general ineptness of most companies in the security area.   The average private company patches its operating systems with known patches 25 to 60 days after the patch becomes available and the vulnerability is disclosed.   That is atrocious, because we should all be aware that when a vulnerability is announced the hackers go to work.  They know it takes time for many to patch the vulnerability, so most severe attacks occur within 30 days of a vulnerability disclosure.  Therefore, by waiting 25-60 days to patch the vulnerability, the vast majority of companies are simply leaving themselves open to attack.   I didn't realize it was this bad.  As I have pointed out, the incompetence of the average company simply means that, whether they want to admit it or not, most companies will probably have HIGHER security by moving to the cloud.

One thing that was brought up in reaction to my comments is that the focus of attention for hackers has not moved to the public cloud yet because the data and the information aren't there yet.   If private companies moved all this data to the cloud, the argument goes, the number of attacks on cloud companies would proliferate.  This makes sense, but cloud companies are also generally improving their security much more rapidly than private companies.  I don't have statistics to prove this, but what we know cloud companies have done in the last year in response to the attacks against them suggests that they are cognizant of the danger.    My slideshare presentation provides information on this as well.

On another front, it was discovered that 20% of all attacks in the last year were initiated by specific “governments.”   China was mentioned a lot in attacks over the last year; specific attacks last year were clearly the result of Chinese government activity.  (My slideshare presentation above documents the sources for these attributions.)   We know the US government has been spying on us.  For me this is not a surprise.  I had generally heard about the things Snowden talked about at least 2 years before Snowden released them.  This is public information I have been able to deduce from presentations by companies, without making difficult leaps of imagination.   It is clear that numerous organizations in the US are probably being paid to spy on people within or outside the US, and they may have no option but to do this.

In addition, the march of technology makes the spying easier.  New technology from Google enables much better recognition of words, faces and anything digital, and therefore tying digital information, phone records, whatever, to specific individuals.    If you are someone the government is interested in finding out things about from an electronic trail, it is getting easier and easier for them to do that. On top of that there is a massive effort by almost every company to gather vast amounts of information about you and everything you do, for the purpose of helping you either with better advertising or to somehow get you interested in whatever they want you interested in.    This results in disturbing trends: if I go looking for a certain software technology or a good of some sort, I find myself bombarded by advertisements (retargeting) wherever I go on the internet that seem to know I was looking at something 3 days ago.  That verges on creepy. I don't know where to draw the line between “innocent spying,” “creepy spying,” and “illegal spying,” but that is not what this blog is about.

I believe the opportunities to change our lives for the better with the cloud are enormous, and that the opportunities and benefits of participating in the cloud far outweigh the risks.  In many cases, companies that can't adjust and become leaders in adopting the cloud may have to go away.    So I believe that by not adopting the cloud you put your business at risk.

Some of the reasons for this are outlined in my slideshare presentation above.

Ajith VitharanaMount (jdbc) WSO2 ESB registry to WSO2 Governance Registry Server with H2.

When you start a vanilla distribution of a WSO2 server (WSO2 ESB, WSO2 Governance Registry, WSO2 Application Server, etc.), the default registry database is H2 in embedded mode. This mode allows only one connection to the database. Therefore, if you need to mount the ESB to an external Governance Registry server, the Governance Registry must be started with H2 in server mode.

Governance Registry Configurations:

1. Add the following H2DatabaseConfiguration to carbon.xml to start the Governance Registry with H2 in server mode.
<H2DatabaseConfiguration>
    <property name="tcp" />
    <property name="tcpPort">9092</property>
    <property name="tcpAllowOthers" />
</H2DatabaseConfiguration>
2. Change the <url> of the default WSO2_CARBON_DB in master-datasources.xml file.
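For example, assuming the usual default database path (adjust the path to whatever your master-datasources.xml currently uses), the server-mode URL points at the TCP port configured above:

```xml
<url>jdbc:h2:tcp://localhost:9092/repository/database/WSO2CARBON_DB;DB_CLOSE_ON_EXIT=FALSE</url>
```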
3. Start the Governance Registry server.

ESB Configurations

1. Add the following data source configuration for mount database in master-datasources.xml.
<description>The datasource used for mounting the registry database</description>
<definition type="RDBMS">
<validationQuery>SELECT 1</validationQuery>
 2. Add the following mount configuration in registry.xml
    <dbConfig name="wso2registry_mount">

<remoteInstance url="https://localhost:9443/registry">

<mount path="/_system/config" overwrite="true">
<mount path="/_system/governance" overwrite="true">
3. If you are running the ESB on the same machine, change the default port <Offset> in carbon.xml to prevent port conflicts with the Governance Registry server.

4. Start the ESB server.

5. Log in to the ESB server and browse the registry to verify that the mount has been established.

Note: We don't recommend using H2 as the mount database for a production setup. However, this method is useful for users who don't have rights to create databases (MySQL, Oracle, etc.) in their corporate database system but still want to demo/test registry space sharing/mounting.

Chanaka FernandoWSO2 ESB creating a response for a "GET" request

When you are creating REST APIs with WSO2 ESB, you may need to send a response message back to the user when something goes wrong. In this kind of scenario, you can create a payload inside the ESB and send it back. You can use the configuration below to do this.

<api xmlns="http://ws.apache.org/ns/synapse" name="LoopBackProxy" context="/loopback">
   <resource methods="POST GET">
      <inSequence>
         <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
         <log level="full"/>
         <payloadFactory media-type="xml">
            <format><m:messageBeforeLoopBack xmlns:m="http://services.samples"><m:symbolBeforeLoopBack>$1</m:symbolBeforeLoopBack></m:messageBeforeLoopBack></format>
            <args><arg xmlns:m0="http://services.samples/xsd" evaluator="xml" expression="//m0:symbol/text()"/></args>
         </payloadFactory>
         <loopback/>
      </inSequence>
      <outSequence>
         <payloadFactory media-type="xml">
            <format><m:messageAfterLoopBack xmlns:m="http://services.samples"><m:symbolAfterLoopBack>$1</m:symbolAfterLoopBack></m:messageAfterLoopBack></format>
            <args><arg xmlns:m0="http://services.samples" evaluator="xml" expression="//m0:symbolBeforeLoopBack/text()"/></args>
         </payloadFactory>
         <property name="ContentType" value="text/xml" scope="axis2"/>
         <send/>
      </outSequence>
   </resource>
</api>

In the above configuration, the most important part is the removal of the NO_ENTITY_BODY property. Since the request never goes to a backend, and a GET request sets NO_ENTITY_BODY to true by default (a GET request has no message body), the property must be removed so that the response payload built inside the ESB is actually written out.

Hasitha AravindaRunning HumanTask Cleanup Job - WSO2 BPS

The HumanTask engine allows you to configure periodic cleanup of tasks from the WSO2 BPS persistence storage based on task status. To enable the task cleanup job, uncomment the TaskCleanupConfig element in HumanTask.xml.
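A sketch of the uncommented configuration (element names here are from memory and may differ slightly between BPS versions, so check the commented-out block in your own HumanTask.xml), with a Quartz cron expression that fires every four hours:

```xml
<TaskCleanupConfig>
    <!-- Quartz cron expression: at minute 0 of every 4th hour -->
    <cronExpression>0 0 0/4 * * ?</cronExpression>
    <!-- Task statuses eligible for removal -->
    <statuses>COMPLETED,OBSOLETE,EXITED</statuses>
</TaskCleanupConfig>
```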

The HumanTask engine uses the Quartz scheduler for cron jobs. Refer to [1] for the cron expression format. In the above example, the cron scheduler triggers the task cleanup job every four hours to remove COMPLETED, OBSOLETE and EXITED tasks from the database.

[1] -

Chathurika Erandi De SilvaBig data in action

Lately I have been hearing the words "Big data" frequently. Everybody is talking about big data. The frequency slowed me down, and I wanted to know why everybody is so interested in data that is big.

Over the years, technology has grown rapidly, enabling various means of interaction among systems and humans. Today you can have a chat with your best friend miles away and share what you think of the new bakery in your town on Facebook. Mary, angry with her mobile telecommunication provider, can write on her Facebook wall "ABC Tel sucks". Jonathan might be calling the bank's call center to request some changes to his credit card. Maggie, frustrated by her bill, will call the cable TV help center and say "I subscribed to this, why can't I see it?". Simply put, everywhere you turn, increased interaction with systems is visible.

The world today is a competitive business arena. Organizations compete to get ahead of one another. With the advance of technology, organizations must find innovative competitive advantages to survive in the battle pit. Information technology has come to the rescue of many organizations; it has provided the much needed armor, and that armor must be redesigned as information technology rapidly changes its shape.

"Mr. Cooper started a business a few decades ago. In the beginning he kept books to keep track of his transactions. With the increase in customers he wanted a new means of keeping track, so he turned to IT and decided to use a database to store his data. Happy with his database, he continued his business successfully for some time, because he was the first among his competitors to use one. Suddenly he got to know that Mr. Denny had implemented a web site for customers to view information. Mr. Cooper took action, and he didn't want just a passive web site; he wanted something customers could access and interact with. So he connected his database to the web site. His business blossomed, because most of the customers were happy interacting through the computer.

Mr. Cooper's business is now a huge enterprise with billions of customers, operating globally. Mr. Cooper even uses Business Intelligence now. Has Mr. Cooper identified the big potential that is hidden? Not yet."

His organization generates tons of data from call centers, online chat support, emails, customer feedback on social networking sites, etc. Business intelligence itself is hidden among these data. But can Mr. Cooper use traditional databases or data mining to dig the knowledge out of the above sources? The answer would be "No", because they are high in volume and variety, and the data keeps increasing.

This is where big data comes into action. Big data allows the above kinds of unstructured data to be used to support business intelligence.

Organizations such as Google, UPS, LinkedIn, etc. use big data to reshape their business.

Big data in action can bring insight and success to its users, because it makes effective use of the data that is generated every second.

Further links to explore

Dimuthu De Lanerolle

ApiMgr 2.0.0: API creation and publishing test scenario in Test Automation Framework 4.3.0


import org.json.JSONObject;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.wso2.carbon.automation.engine.context.AutomationContext;
import org.wso2.carbon.automation.engine.context.TestUserMode;
import org.wso2.carbon.automation.extensions.servers.tomcatserver.TomcatServerManager;
import org.wso2.carbon.automation.extensions.servers.tomcatserver.TomcatServerType;
import org.wso2.carbon.automation.test.utils.http.client.HttpRequestUtil;

import javax.xml.xpath.XPathExpressionException;
import java.util.HashMap;
import java.util.Map;

import static org.testng.AssertJUnit.assertTrue;
import static org.wso2.carbon.automation.engine.configurations.AutomationConfiguration.getConfigurationValue;

public class RestPeopleTestCase {

    private APIPublisherRestClient apiPublisher;
    private APIStoreRestClient apiStore;
    private AutomationContext automationContext;
    private String userName;
    private String password;
    private int tomcatServerPort = 8080;
    private String url;

    @BeforeClass(alwaysRun = true)
    protected void init() throws Exception {
        getContext(); // resolve the automation context before reading URLs from it
        url = automationContext.getContextUrls().getServiceUrl().split("/services")[0];
        userName = automationContext.getConfigurationNode("//userManagement/" +
        password = automationContext.getConfigurationNode
                ("//userManagement/" + "superTenant/tenant/admin/user").getChildNodes()
        TomcatServerManager tomcatServerManager = new TomcatServerManager(
                CustomerConfig.class.getName(),, tomcatServerPort);
        apiStore = new APIStoreRestClient(url);
        apiPublisher = new APIPublisherRestClient(url);
    }

    @Test(groups = {""}, description = "Add RestBackendTest API and sending request to " +
            "rest backend service")
    public void addAPITestCaseAndSendRequest() throws Exception {

        // adding api
        String APIContext = "restBackendTestAPI";
        String tags = "rest, tomcat, jaxrs";
        String restBackendUrl = automationContext.getContextUrls().getWebAppURL() + ":" + tomcatServerPort + "/rest/api/customerservice";
        String description = "This RestBackendTest API was created by API manager integration test";
        String APIVersion = "1.0.0";
        String providerName = "admin";

        apiPublisher.login(userName, password);
        String APIName = "RestBackendTestAPI";
        APIRequest apiRequest = new APIRequest(APIName, APIContext, new URL(restBackendUrl));

        // publishing
        APILifeCycleStateRequest updateRequest = new APILifeCycleStateRequest(APIName,
                providerName, APILifeCycleState.PUBLISHED);

        // subscribing
        apiStore.login(userName, password);
        SubscriptionRequest subscriptionRequest = new SubscriptionRequest(APIName, providerName);
        GenerateAppKeyRequest generateAppKeyRequest = new GenerateAppKeyRequest("DefaultApplication");
        String responseString = apiStore.generateApplicationKey(generateAppKeyRequest).getData();
        JSONObject response = new JSONObject(responseString);
        String accessToken = response.getJSONObject("data").getJSONObject("key").get("accessToken").toString();
        Map<String, String> requestHeaders = new HashMap<String, String>();
        requestHeaders.put("Authorization", "Bearer " + accessToken);


        // invoke backend-service through api mgr
                , requestHeaders).toString().contains("John"));
    }

    private void getContext() throws XPathExpressionException {
        automationContext = new AutomationContext("AM", TestUserMode.SUPER_TENANT_USER);
    }

    private void setupDistributedSetup() throws XPathExpressionException {
    }
}
Madhuka UdanthaA cluster management framework, Apache Helix

What is Helix?

Helix is used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. It automates the reassignment of resources in the face of node failure and recovery, cluster expansion, and reconfiguration, by modeling a distributed system as a state machine with constraints on states and transitions.


  • Node: A single machine
  • Cluster: Set of nodes
  • Resource: A logical entity (e.g. database, index, task)
  • Partition: Subset of a resource (each subtask is referred to as a partition)
  • Replica: Copy of a partition (e.g. Master, Slave); replicas increase the availability of the system
  • State: Describes the role of a replica (each node in the cluster has its own Current State)
  • State Machine and Transitions: An action that allows a replica to move from one state to another, thus changing its role (e.g. Slave --> Master)
  • Spectators: The external clients. Helix provides an External View, which is an aggregated view of the current state across all nodes
  • Current State: Represents a resource's actual state at a participating node
    - INSTANCE_NAME: Unique name representing the process
    - SESSION_ID: ID that is automatically assigned every time a process joins the cluster
  • Rebalancer: The core component of Helix is the Controller, which runs the rebalance algorithm on every cluster event
  • Dynamic Ideal State: What makes Helix powerful is that the Ideal State can be changed dynamically; whenever a cluster event occurs, Helix can operate in one of three rebalancing modes to adjust the Ideal State
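The state vocabulary above can be illustrated with a toy Java sketch. Note that this is not the Helix API, just an illustration of a MasterSlave-style state model with legal transitions; all class and method names are mine.

```java
import java.util.Map;
import java.util.Set;

public class ReplicaStateMachine {
    enum State { OFFLINE, SLAVE, MASTER }

    // Legal transitions, mirroring a MasterSlave-style state model:
    // OFFLINE -> SLAVE -> MASTER, and back down the same path.
    private static final Map<State, Set<State>> TRANSITIONS = Map.of(
        State.OFFLINE, Set.of(State.SLAVE),
        State.SLAVE,   Set.of(State.MASTER, State.OFFLINE),
        State.MASTER,  Set.of(State.SLAVE));

    private State current = State.OFFLINE;

    public State current() { return current; }

    // A transition changes the replica's role, subject to the constraints.
    public void transition(State target) {
        if (!TRANSITIONS.get(current).contains(target)) {
            throw new IllegalStateException(current + " -> " + target + " is not allowed");
        }
        current = target;
    }

    public static void main(String[] args) {
        ReplicaStateMachine replica = new ReplicaStateMachine();
        replica.transition(State.SLAVE);   // OFFLINE -> SLAVE
        replica.transition(State.MASTER);  // SLAVE -> MASTER (promotion)
        System.out.println(replica.current()); // MASTER
    }
}
```

In Helix the Controller computes which transitions to fire so that the Current State of every replica converges to the Ideal State.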

Cluster events can be one of the following:

  • Nodes start and/or stop
  • Nodes experience soft and/or hard failures
  • New nodes are added/removed


Hasitha AravindaGenerating a random unique number in a SOAP UI request

In the request use,

${=System.currentTimeMillis() + ((int)(Math.random()*10000))}

example :  

Note: Here I am generating this number by adding the current milliseconds as a prefix, so this will generate an almost-unique number.
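The same logic can be reproduced outside SoapUI; the plain Java sketch below (the class and method names are mine, for illustration) mirrors the Groovy expression above.

```java
public class UniqueId {
    // Mirrors the SoapUI Groovy expression:
    //   ${=System.currentTimeMillis() + ((int)(Math.random()*10000))}
    // Note that '+' is numeric addition here, not string concatenation,
    // so the random part shifts the timestamp forward by 0-9999 ms.
    // Two calls in the same millisecond can still collide, which is why
    // the result is only "almost" unique.
    public static long next() {
        return System.currentTimeMillis() + (int) (Math.random() * 10000);
    }

    public static void main(String[] args) {
        System.out.println(UniqueId.next());
    }
}
```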

Madhuka UdanthaWSO2Con Asia, 2014; Java scripted every where

“Learn from world-changing innovators on how to be a Connected Business”
The fifth WSO2Con will be held from 24th to 26th March at Waters Edge, Colombo, Sri Lanka. Guest keynote speakers and people from WSO2 will talk about how WSO2 products can help an organization become lean and agile.
WSO2Con 2014 will have many interesting talks and tutorials on WSO2 products. There will be a tutorial on JavaScript everywhere. It will mainly cover:
  • Writing server-side scripts
  • Using 3rd party JS libraries and Web services
  • Exposing your API from Jaggery with front-end controller concepts
  • Best practices in Jaggery and Caramel

What is Jaggery?
It is a pure JavaScript framework.
"A modern web application invariably includes a significant client-side JavaScript component. Being fluent in
Javascript is a given. Why then are we using a completely separate language for server-side
programming? It's simpler to author when you don't have to learn and then constantly switch your brain
between two different languages. Jaggery uses JavaScript as the server-side programming language, the
obvious choice for simplification."[1]
Can Jaggery code be found in WSO2 products?
Yes. Jaggery code is used to handle business logic.
Which Products?
  • API Manager
  • Application Server
  • Business Activity Monitor (BAM)
  • Complex Event Processor
  • Enterprise Store
  • Governance Registry
  • User Engagement Server
  • Enterprise Mobility Manager
  • BPS and ESB will be joining soon
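To give a flavour of what server-side JavaScript in Jaggery looks like, here is a minimal sketch; the file name and payload are invented for illustration, while `print` and `response` are Jaggery's standard host objects.

```javascript
<%
// hello.jag -- a hypothetical Jaggery page.
// The JavaScript between the <% %> tags runs on the server,
// and print() writes its argument into the HTTP response.
var greeting = { message : "Hello from the server side" };
response.contentType = 'application/json';
print(greeting);
%>
```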


Madhuka UdanthaApache ZooKeeper Intro and Sample

ZooKeeper is an open source distributed configuration service, synchronization service, and naming registry for large distributed systems. ZooKeeper was a sub-project of Hadoop. ZooKeeper's architecture supports high availability through redundant services. It supports naming services, configuration management, synchronization, leader election, message queues, and notification systems. ZooKeeper is a high-performance coordination service for distributed applications.

“ZooKeeper: Because Coordinating Distributed Systems is a Zoo”

Let's start working with ZooKeeper.

1. Download stable version from here

2. Unzip it ‘C:\zookeeper\zookeeper-3.4.6\’

3. Set up the zoo configuration in <zookeeper-home>\conf\zoo.cfg

  • tickTime: the basic time unit (in milliseconds) used for heartbeats; the minimum session timeout is twice the tickTime
  • dataDir: the location to store the in-memory database snapshots
  • clientPort: the port to listen on for client connections
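For example, a minimal zoo.cfg for the standalone setup above might contain the following (the dataDir path is illustrative):

```
# basic time unit in milliseconds
tickTime=2000
# where snapshots are stored (illustrative path)
dataDir=C:/zookeeper/data
# port for client connections
clientPort=2181
```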




4. You can define log level and log properties in ‘’

5. You can start the ZooKeeper server with zkServer.cmd / zkServer.sh


6. A sample client can be started with zkCli.cmd / zkCli.sh




Samisa AbeysingheApache Stratos Architecture - Key Differentiators in PaaS Space

The unique and state-of-the-art Apache Stratos (Incubating) architecture makes the platform as a service (PaaS) framework stand out among the rest. There are a few factors that one should pay attention to in order to understand the key advantages of the Stratos PaaS. Those include:

  • Cartridge model
  • Unified communication infrastructure
  • Centralised logging and monitoring
  • IaaS plugin capabilities
  • Load balancer plugin capabilities
  • Health check tooling plugin capabilities

Cartridge Model

The cartridge model makes Stratos unique among the PaaS alternatives because it allows runtime extensibility of the PaaS framework. It is the enabler of the polyglot aspect of the PaaS container framework. The polyglot nature enables multiple types of cartridges, such as data cartridges, language cartridges, framework cartridges and operating system cartridges, to be plugged in and managed by the PaaS framework in a unified manner.
Another key advantage of the Stratos cartridge model is that it enables bringing even legacy containers and applications into the cloud with ease. So, with Stratos, you can cloud-enable both existing IT assets as well as the future IT assets that you plan to use or are yet to acquire.

Unified Communication Infrastructure

Apache Stratos, within its architecture, has a unified communication model across components within the PaaS framework. This model enables a loosely coupled, component-based communication architecture within the framework. This is achieved using a message broker.
The message broker uses a publisher/subscriber model for unified communication over a standard messaging protocol. This makes it easy to integrate with third parties and also guarantees interoperability.

Centralised Logging and Monitoring

Apache Stratos is equipped with a centralised monitoring and metering mechanism with a unified logging framework. It also has provision for plugging in a third-party business activity monitor (BAM). The ability to plug in a BAM enables tracking, monitoring, and tuning of the PaaS deployment based on key performance indicators (KPIs).
Metering is a key requirement in a PaaS framework, to help bill the tenants and enable the pay-as-you-go model.

IaaS Plugin Capabilities

The use of the jclouds API is the key advantage of the Apache Stratos PaaS architecture in terms of supporting different infrastructure as a service (IaaS) layers. Because of this standardized, API-based approach, you can plug in any IaaS easily.
As of now, the integrated and tested IaaS list includes: all OpenStack versions (including SUSE Cloud), vCloud and EC2.
To support the IaaS of your choice, it is all about integrating via the APIs.

Load Balancer Plugin Capabilities

Apache Stratos comes with a built-in load balancer and an associated set of load balancing algorithms. It also has the ability to plug in any third-party load balancer using the message broker model. The message broker communication model can be used to integrate any load balancer into the topology and auto-scaling model of the PaaS framework.

Health Check Tooling Plugin Capabilities

Health checking is a key requirement for any IT deployment, and the key stakeholders are the DevOps teams. Apache Stratos has provision to plug in any third-party health checking/monitoring framework. This way, IT experts can use the familiar set of tools they are comfortable with to stay informed of the health of the PaaS deployment.

Lali DevamanthriWSO2 Application Server Multitenancy

The goal of multitenancy is to maximize resource sharing across multiple users (while hiding the fact that these users are on the same server) and to ensure optimal performance. You can register tenants in the AS Management Console, allowing tenants to maintain separate domains for their institutions.

Adding a tenant

To add a new tenant, take the following steps:

On the Configure tab of the AS Management Console, click Add New Tenant.

Enter the information about this tenant as follows:

  • Domain –  The domain name for the organization, which should be a unique name (e.g.,
  • Usage plan for the tenant – The usage plan defines limitations (such as number of users) for the tenant.
  • First Name – First name of the tenant admin.
  • Last Name – Last name of the tenant admin.
  • Admin Username – The username the tenant admin will use to log in. The username must always end with the domain name (e.g.,
  • Email – The email address of the admin.

Viewing tenants

To view existing tenants, on the Configure tab in the AS Management Console, click View Tenants.

Now you can log in with tenant credentials.


Madhuka UdanthaApplication Specific Permissions in WSO2 IS

This is a new feature coming in WSO2 IS 4.7.0, where we can define application-specific permissions. First create a service provider as below.

1. Start IS and login to WSO2 IS and navigate to 'home -> Manage -> Service Providers -> add'


2. Once it is added, you can find a new role; it is created for this service provider


3. Now we will edit the service provider that we created to add permissions. Go to 'Role/Permission Configuration'

4. Add a new permission for the application and click ‘Update’


5. Now, to check that those permissions were added, we will go to the ‘permission tree’ at Home > Configure > Users and Roles > Roles > Permissions


Here I am browsing the registry for application permissions


Now we will try to authorize a user for this resource through the ‘RemoteAuthorizationManagerService’ web service [1]

6. Send request using SOAPUI

<soap:Envelope xmlns:soap="" xmlns:ser="">



Then, to verify the task, we will use ‘isUserAuthorized’

<soap:Envelope xmlns:soap="" xmlns:ser="">
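For reference, a complete isUserAuthorized request might look like the sketch below. The ‘ser’ namespace, element names and parameter values here are assumptions made for illustration; check them against the service WSDL at [1] before use.

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:ser="http://service.ws.um.carbon.wso2.org">
   <soap:Header/>
   <soap:Body>
      <ser:isUserAuthorized>
         <!-- user name, resource path and action are illustrative values -->
         <ser:userName>madhuka</ser:userName>
         <ser:resourceId>/permission/applications/myapp/perm1</ser:resourceId>
         <ser:action>ui.execute</ser:action>
      </ser:isUserAuthorized>
   </soap:Body>
</soap:Envelope>
```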



The response will come back as true, as the user is authorized.


[1] https://localhost:9443/services/RemoteAuthorizationManagerService?wsdl

Ushani BalasooriyaHow to read multiple values in a single cell of .csv in Jmeter

In this blog post it is explained how to read a value from a .csv file via JMeter.

You have to follow the exact steps given in the blog post.

  • Apart from that, assume you have your csv file in the following format in the first cell itself, and you need to read multiple values:


  • In JMeter, you should right click on Thread Group and select Add --> Config Element --> CSV Data Set Config.

  • Then provide the following information.

File Encoding =
Variable Names = username,password,email,firstname,lastname,tenantDomainName
Delimiter = , // this is to delimit entries in list, not variables
  • My sample call to create tenant will be written as follows :

 <soapenv:Envelope xmlns:soapenv="" xmlns:ser="" xmlns:xsd="">  
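Putting it together, a single line in the .csv matching the variable names configured above could look like this (all values are invented for illustration):

```
testuser1,Pass@word1,testuser1@abc.com,Test,User,abc.com
```

The CSV Data Set Config then splits this line on the "," delimiter and binds each value to the corresponding variable (username, password, email, firstname, lastname, tenantDomainName) for use in the request.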

Kasun Dananjaya DelgollaWSO2 ESB - The FASTEST ESB on Earth!

The latest round of performance testing results has been published by WSO2: WSO2 ESB Performance Round 7.5 with the release of WSO2 ESB 4.8.1.



The above results clearly depict that WSO2 ESB is the fastest giant in the ESB space, when compared with almost all the popular open source ESBs.

Ranga SiriwardenaPub/Sub with WSO2 MB and WSO2 ESB using Durable and Hierarchical Topics

WSO2 MB is a standards-compliant message broker which supports the JMS and AMQP standards, allowing interoperability between many languages. It supports two of the main standard patterns of communication.

  1. Point-to-Point messaging through queues where one application sends messages directly to another application.
  2. Publish/Subscribe pattern through topics where one application publishes messages to a topic and other applications who are subscribed to this topic will receive these messages.
This post will explain how to use WSO2 ESB as a publisher and subscriber for WSO2 MB, which will act as the middle hub for message exchange. It will also explain the hierarchical topic and durable topic capabilities of WSO2 MB.

As an example, let's take a news publisher service which publishes various types of news, and subscribers who are interested in various types of news.

  1. Download and unzip WSO2 ESB 4.8.1 and WSO2 MB 2.1.0
  2. Locate the wso2mb-2.1.0/bin folder and start wso2mb-2.1.0 using the ./wso2server.sh (or wso2server.bat) command.
  3. Configure WSO2 ESB
    • Open carbon.xml inside the "wso2esb-4.8.1/repository/conf" folder and update the port offset.
      •    1: <Offset>1</Offset> 

    • Open axis2.xml inside "wso2esb-4.8.1/repository/conf/axis2" and uncomment the JMSListener for WSO2 MB.
      •    1: <!--Uncomment this and configure as appropriate for JMS transport support with WSO2 MB 2.x.x -->


           3: <transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">

           4:       <parameter name="myTopicConnectionFactory" locked="false">

           5:          <parameter name="java.naming.factory.initial" locked="false">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</parameter>

           6:           <parameter name="java.naming.provider.url" locked="false">repository/conf/</parameter>

           7:           <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>

           8:           <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>

           9:       </parameter>



          12:       <parameter name="myQueueConnectionFactory" locked="false">

          13:           <parameter name="java.naming.factory.initial" locked="false">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</parameter>

          14:           <parameter name="java.naming.provider.url" locked="false">repository/conf/</parameter>

          15:           <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>

          16:          <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>

          17:       </parameter>



          20:       <parameter name="default" locked="false">

          21:           <parameter name="java.naming.factory.initial" locked="false">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</parameter>

          22:           <parameter name="java.naming.provider.url" locked="false">repository/conf/</parameter>

          23:           <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>

          24:           <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>

          25:       </parameter>

          26:   </transportReceiver>

    • Uncomment the JMS sender inside "wso2esb-4.8.1/repository/conf/axis2/axis2.xml"
      •    1: <!-- uncomment this and configure to use connection pools for sending messages-->

           2:   <transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender"/>

    • Open "wso2esb-4.8.1/repository/conf/" file and add following configuration. 
      •    1: # register some connection factories
           2: # connectionfactory.[jndiname] = [ConnectionURL]

           3: connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'

           4: connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'
           7: # register some queues in JNDI using the form
           8: # queue.[jndiName] = [physicalName]



          11: # register some topics in JNDI using the form

          12: # topic.[jndiName] = [physicalName]
    • Copy following two jar file from "wso2mb-2.1.0/client-lib" to "wso2esb-4.8.1/repository/components/lib" folder

      • geronimo-jms_1.1_spec-1.1.0.wso2v1.jar
      • andes-client-0.13.wso2v8.jar
    • Locate the wso2esb-4.8.1/bin folder and start wso2esb-4.8.1 using the ./wso2server.sh (or wso2server.bat) command.

  4. Create publisher and subscribers from ESB 

    • Sign in to the ESB admin console and create a proxy service called "SportsNewsPublisherProxy" with the following configuration (this is the publisher, which will publish to a topic called "").
           1: <?xml version="1.0" encoding="UTF-8"?>

           2: <proxy xmlns=""

           3:        name="SportsNewsPublisherProxy"

           4:        transports="https,http"

           5:        statistics="disable"

           6:        trace="disable"

           7:        startOnLoad="true">

           8:    <target>

           9:       <inSequence>

          10:          <property name="OUT_ONLY" value="true"/>

          11:          <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>

          12:          <log level="custom">

          13:             <property name="Message"

          14:                       value="************** Start SportsNewsPublisherProxy **************"/>

          15:          </log>

          16:          <log level="full"/>

          17:          <send>

          18:             <endpoint>

          19:                <address uri="jms:/;java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&amp;java.naming.provider.url=repository/conf/;transport.jms.DestinationType=topic"/>

          20:             </endpoint>

          21:          </send>

          22:       </inSequence>

          23:    </target>

          24:    <description/>

          25: </proxy>
    • Create another proxy service as the first subscriber with following configuration. (This subscriber will subscribe only for  Sports news with the topic definition of "") 
           1: <?xml version="1.0" encoding="UTF-8"?>

           2: <proxy xmlns=""

           3:        name="SportsNewsSubscriberProxy"

           4:        transports="jms"

           5:        statistics="disable"

           6:        trace="disable"

           7:        startOnLoad="true">

           8:    <target>

           9:       <inSequence>

          10:          <property name="OUT_ONLY" value="true"/>

          11:          <log level="custom">

          12:             <property name="SportsNewsSubscriberProxy"

          13:                       value="### I am Sports News subscriber (SportsNewsSubscriberProxy) ###"/>

          14:          </log>

          15:          <log level="full"/>

          16:       </inSequence>

          17:    </target>

          18:    <parameter name="transport.jms.ContentType">

          19:       <rules>

          20:          <jmsProperty>contentType</jmsProperty>

          21:          <default>application/xml</default>

          22:       </rules>

          23:    </parameter>

          24:    <parameter name="transport.jms.ConnectionFactory">myTopicConnectionFactory</parameter>

          25:    <parameter name="transport.jms.DestinationType">topic</parameter>

          26:    <parameter name="transport.jms.SubscriptionDurable">true</parameter>

          27:    <parameter name="transport.jms.Destination"></parameter>

          28:    <parameter name="transport.jms.DurableSubscriberClientID">SportsNewsSubscriber</parameter>

          29:    <description/>

          30: </proxy>
    • Create the second subscriber proxy with the following configuration. (This proxy will subscribe to all types of news via the "*" topic definition, so it is capable of receiving events from any immediate child of the "" topic hierarchy.)
           1: <?xml version="1.0" encoding="UTF-8"?>

           2: <proxy xmlns=""

           3:        name="NewsSubscriberProxy"

           4:        transports="jms"

           5:        statistics="disable"

           6:        trace="disable"

           7:        startOnLoad="true">

           8:    <target>

           9:       <inSequence>

          10:          <property name="OUT_ONLY" value="true"/>

          11:          <log level="custom">

          12:             <property name="NewsSubscriberProxy"

          13:                       value="### I am News subscriber (NewsSubscriberProxy) ###"/>

          14:          </log>

          15:          <log level="full"/>

          16:       </inSequence>

          17:    </target>

          18:    <parameter name="transport.jms.ContentType">

          19:       <rules>

          20:          <jmsProperty>contentType</jmsProperty>

          21:          <default>application/xml</default>

          22:       </rules>

          23:    </parameter>

          24:    <parameter name="transport.jms.ConnectionFactory">myTopicConnectionFactory</parameter>

          25:    <parameter name="transport.jms.DestinationType">topic</parameter>

          26:    <parameter name="transport.jms.SubscriptionDurable">true</parameter>

          27:    <parameter name="transport.jms.Destination">*</parameter>

          28:    <parameter name="transport.jms.DurableSubscriberClientID">NewsSubscriber</parameter>

          29:    <description/>

          30: </proxy>
    • Create the third subscriber proxy with the following configuration. (This proxy will subscribe to all types of events via the "events.#" topic definition, so it is capable of receiving events from the "events" topic and all child topics under the "events" topic hierarchy.)
           1: <?xml version="1.0" encoding="UTF-8"?>

           2: <proxy xmlns=""

           3:        name="EventSubscriberProxy"

           4:        transports="jms"

           5:        statistics="disable"

           6:        trace="disable"

           7:        startOnLoad="true">

           8:    <target>

           9:       <inSequence>

          10:          <property name="OUT_ONLY" value="true"/>

          11:          <log level="custom">

          12:             <property name="EventSubscriberProxy"

          13:                       value="### I am Event subscriber (EventSubscriberProxy) ###"/>

          14:          </log>

          15:          <log level="full"/>

          16:       </inSequence>

          17:    </target>

          18:    <parameter name="transport.jms.ContentType">

          19:       <rules>

          20:          <jmsProperty>contentType</jmsProperty>

          21:          <default>application/xml</default>

          22:       </rules>

          23:    </parameter>

          24:    <parameter name="transport.jms.ConnectionFactory">myTopicConnectionFactory</parameter>

          25:    <parameter name="transport.jms.DestinationType">topic</parameter>

          26:    <parameter name="transport.jms.SubscriptionDurable">true</parameter>

          27:    <parameter name="transport.jms.Destination">events.#</parameter>

          28:    <parameter name="transport.jms.DurableSubscriberClientID">EventSubscriber</parameter>

          29:    <description/>

          30: </proxy>
  5. Tryout pub/sub model 

    • Sign-in to ESB management console and go to services list (Home > Manage > Services > List)
    • Then select "Try this service" option on "SportsNewsPublisherProxy"
    • Send the following request in the "tryit" window (instead of tryit you can use a tool like SoapUI to simulate the same behavior).
           1: <event>

           2:    <news>

           3:       <sports>

           4:          <title>2014 FIFA World Cup</title>

           5:          <description>2014 FIFA World Cup will be held in Brazil from 12th June to 13th July.</description>

           6:       </sports>

           7:    </news>

           8: </event>
    • Once you send the request to the publisher you will see following log messages in ESB carbon log file.
           1: [2014-03-16 01:19:49,592] INFO - LogMediator Message = ************** Start SportsNewsPublisherProxy **************

           2: [2014-03-16 01:19:49,837]  INFO - LogMediator SportsNewsSubscriberProxy = ### I am Sports News subscriber (SportsNewsSubscriberProxy) ###

           3: [2014-03-16 01:19:49,847]  INFO - LogMediator NewsSubscriberProxy = ### I am News subscriber (NewsSubscriberProxy) ###

           4: [2014-03-16 01:19:49,839]  INFO - LogMediator EventSubscriberProxy = ### I am Event subscriber (EventSubscriberProxy) ###
    • From the above log messages you can see that all three subscribers receive the message, because all of them subscribed to the topic hierarchy ""
      • First Subscriber subscribed to "" topic 
      • Second Subscriber subscribed to "*"
      • Third Subscriber subscribed to "events.#" topic
    • Once you are done with the above steps, you can test the durable topic capabilities of WSO2 MB by deactivating one of the subscribers. (Note: in this post all the subscribers are marked as durable subscribers by using the "transport.jms.DurableSubscriberClientID" parameter in the proxy definitions.)

      • Sign-in to ESB management console and go to services list (Home > Manage > Services > List)
      • Then click on "SportsNewsSubscriberProxy" and click on "Deactivate" option.
      • Then send a request to "SportsNewsPublisherProxy" as explained earlier. 
      • Now you will not see the following log in the carbon console. 
             1: INFO - LogMediator SportsNewsSubscriberProxy = ### I am Sports News subscriber (SportsNewsSubscriberProxy) ###
      • Now click on the "Activate" option to activate the "SportsNewsSubscriberProxy" again.
      • Once it becomes active, you will see that it consumes the message which we sent in the earlier step. 

Dulitha WijewanthaSetup an Apache Proxy for Carbon Servers on Mac OS X

Setting up an Apache server as a reverse proxy is a very basic requirement for deployments. What I am going to explain in this blog post is setting up a reverse proxy for a WSO2 Carbon web server that is running on port 9763 (for HTTP). In fact, you can skip the Carbon-specific configuration if you want to set it up for another server. So let's get started.

First you’ll have to install the Apache server on your machine. This alone is one of the most complex steps, as it depends on your platform (OS X, Windows, Linux). I am going to show steps only for OS X, but the proxy configuration is the same for all operating systems.

Where is Apache?

You don’t have to worry about installing Apache because OS X already ships with the Apache server. It’s located under the /etc/apache2 directory (optionally you can also install XAMPP).

To start the apache server –

sudo apachectl start

Setup the Proxy Configs

Now, the role of the reverse proxy is to forward traffic coming in from the network to another host.

Reverse Proxy

We can set up a proxy in Apache so that requests to localhost:80 are forwarded to localhost:9763. Port 80 is the default HTTP port. For this blog post I will be focusing on setting up the proxy over HTTP. I am still looking into an HTTPS proxy; I’ll link that post here once I am done with it.

There is a config file located at /etc/apache2/httpd.conf. You have to include the section below at the bottom of that file.

<IfModule mod_proxy.c>
    # Reverse Proxy
    # ProxyRequests must be Off for a reverse proxy; turning it On
    # would open a forward proxy to anyone who can reach this server.
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>
    ProxyVia On
    ProxyPass / http://localhost:9763/
    ProxyPassReverse / http://localhost:9763/
    ProxyPreserveHost On
</IfModule>

Now restart apache –

sudo apachectl restart

Carbon configurations

To configure Carbon for the reverse proxy you have to change the file $SERVER_HOME/repository/conf/tomcat/catalina-server.xml. In this configuration file, the first Connector should be changed as shown below; the proxyPort attribute is added to the Connector.

<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
                server="WSO2 Carbon Server"
                proxyPort="80"
                noCompressionUserAgents="gozilla, traviata"

The apache server will proxy traffic coming through port 80 to port 9763.

sanjeewa malalgodaHow to customize WSO2 API Manager swagger UI to invoke APIs without auth token


In this post we will see how we can skip providing auth keys when we invoke APIs using the swagger user interface (in WSO2 API Manager 1.6.0). One possible approach is marking the resource auth type as none: if you mark resource-level authentication as none when you create the API, then you can invoke it through the swagger UI without having keys.

Let's look at the other option in detail. For this we need to make a small customization to the swagger user interface, along with some additional steps. 
01. Add all APIs that need to be invoked through swagger to a single application and generate a token with a long lifetime. 
02. Then hard-code that token into the swagger user interface jaggery js file as instructed below. 
     Open and edit the file wso2am-1.6.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/templates/api/swagger/template.jag 
     Add the generated access token as follows, next to the supported methods section: 
     headers: { 'Authorization': 'Bearer 7c7a62dd139b819776ea06f845cd48f'}, 
     Then the content will look like this: 
     window.swaggerUi = new SwaggerUi({ 
                apiKeyName: "authorization", 
                supportHeaderParams: true, 
                supportedSubmitMethods: ['get', 'post', 'put', 'delete', 'options'], 
                headers: { 'Authorization': 'Bearer 7c7a62dd139b819776ea06f845cd48f'}, 
                onComplete: function(swaggerApi, swaggerUi){

03. Then you will be able to invoke APIs without tokens. But the swagger UI has a mandatory field for the auth token; we can put some random value there and invoke APIs (the hard-coded token will actually be used). 
     But if you wish to remove that field, follow the instructions below. 
         Go to the publisher UI and select the API you are interested in. Then go to the docs section and edit the Swagger API Definition as follows. 
         You will see, for each resource section, a parameter named Authorization ("name": "Authorization"). 
         By default that field is marked as a required field. You have to remove it. Then the content would look like this: 
                           "name": "Authorization", 
                            "description": "Access Token", 
                            "paramType": "header", 
                            "allowMultiple": false, 
                            "dataType": "String" 

Screenshot from 2014-03-14 16_36_43

Now you will be able to invoke APIs without providing keys using swagger UI.

Screenshot from 2014-03-14 16_41_45

Shazni NazeerClustering WSO2 Governance Registry

Note: Before reading this guide, I recommend reading my previous blog post on "Setting up WSO2 Governance Registry Partitions", because the last few steps in this guide are similar to those of the previous post. You can still follow along without reading it, though.

In this guide we shall set up WSO2 Governance Registry as a cluster. Cluster??? What's important about clustering any sort of product in the first place? Oh!!! That's a critical question. Anyhow, this could be a possible question in a semester paper of a subject in computer science. What could be that subject? Distributed Systems... I'm not sure. Well, whatever it is, I'll provide some simple answers on why clustering an application can be useful, though these answers may not be suitable for a semester paper. :)
  • Clustering can be used to load balance between working nodes. This reduces the overhead each node needs to handle by dividing work among the nodes
  • Clustering enables the application to withstand sudden failures in one or more nodes; as long as at least one node is up and running, it decreases the overall downtime of the service and provides high availability  
  • Clustering undoubtedly enables scalability and can increase the performance of the overall application
OK.. enough of theory!!! Let's get down to clustering WSO2 Governance Registry as described below.

Here we are going to demonstrate the clustered setup with three WSO2 Governance Registry (GREG) nodes, a WSO2 Elastic Load Balancer (ELB), and a database for storage, as shown below.

We are going to set up the cluster on the local machine for testing, though the steps would be the same if each component (the ELB, the GREG nodes and the DB) resided on a different server, except for the appropriate IP address settings.

First, we shall configure the WSO2 ELB, which is going to sit in front of the three WSO2 GREG nodes. Load balancers, in general, help distribute the requests among the nodes. Download the WSO2 ELB from the WSO2 website and unzip it to a directory. I'll refer to the load balancer home directory as ELB_HOME.

Now open up ELB_HOME/repository/conf/loadbalancer.conf and replace the governance domain section as shown below (the hosts value under mgt is the hostname clients will use to reach the cluster; adjust it to your environment).
    governance {
        domains {
            wso2.governance.domain {
                tenant_range *;
                group_mgt_port 4321;
                mgt {
                    hosts governance.cluster.wso2.com;
                }
            }
        }
    }
Next we need to edit the ELB_HOME/repository/conf/axis2/axis2.xml and uncomment the localMemberHost to expose the ELB to the members of the cluster (the GREG nodes that we are going to configure) as shown below.
<parameter name="localMemberHost"></parameter>
Also change the localMemberPort to 5000 as shown below.
<parameter name="localMemberPort">5000</parameter>
Next you need to update your system's hosts file with an entry mapping the ELB's IP address to the cluster hostname.
In Linux, the file is /etc/hosts and in Windows the file is %systemroot%\system32\drivers\etc\hosts (in Windows you may have to enable showing hidden files to view it)
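For instance, assuming the ELB runs on the local machine and the hostname governance.cluster.wso2.com is the one used in loadbalancer.conf (both are assumptions for this local test setup), the hosts entry would look like:

```
127.0.0.1    governance.cluster.wso2.com
```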

That's all on the WSO2 ELB side. We can now start the WSO2 ELB in a terminal by running the following command in ELB_HOME/bin/
$ ./
Let's now download the latest WSO2 GREG from the WSO2 website and unzip it into a directory. Then copy the GREG home directory to two more places for the other two nodes. You can place all three nodes in the same directory and rename them by appending '_node1', '_node2' and '_node3'. Before we delve into configuring the WSO2 GREG nodes, we need to create the databases to hold the shared node data.

Let's now create the databases for the governance registry. You can use the in-built H2 database, but it's recommended to use a database like MySQL in production systems, so we will use MySQL in this setup. If you don't have MySQL installed on Linux, see my previous post 'Installing MySQL in Linux'. Windows users can download and install it using the MySQL installer package, which is very straightforward. To create the MySQL databases for the governance registry and populate them with the schema, enter the following to go into the MySQL prompt.
$ mysql -u root -p

mysql> create database wso2governance;
Query OK, 1 row affected (0.04 sec)
mysql> create database wso2apim;
Query OK, 1 row affected (0.00 sec)
Next, exit from the MySQL prompt, and populate the databases we created with the schema by entering the following commands. Don't forget to replace GREG_HOME with the path to any WSO2 GREG directory.
$ mysql -u root -p wso2governance < GREG_HOME/dbscripts/mysql.sql
$ mysql -u root -p wso2apim < GREG_HOME/dbscripts/apimgt/mysql.sql
OK.. now the database is ready too. We can now configure our WSO2 GREG nodes. Repeat the following steps on all the GREG nodes; the differences you have to make between nodes are noted where necessary.

As a first step in clustering the nodes, we have to enable clustering in each GREG node, of course. This can be done by setting the enable attribute to 'true' in the following line of axis2.xml, which can be found in the GREG_HOME/repository/conf/axis2 directory. This needs to be done in all three nodes.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
* Change the GREG_HOME/repository/conf/axis2/axis2.xml as shown below
<parameter name="membershipScheme">wka</parameter>
<parameter name="domain">wso2.governance.domain</parameter>
<parameter name="localMemberHost"></parameter>
<parameter name="localMemberPort">4250</parameter>
This joins the node to the clustering domain "wso2.governance.domain"; nodes with the same domain belong to the same cluster group. "wka" stands for the Well-Known Address based membership scheme. The "localMemberPort" needs to be different on each node, say 4251 and 4252 on the other two nodes; this is the TCP port on which other nodes contact this node.

* In the <members> section, there needs to be a <member> tag with the details of the ELB, or multiple <member> tags if we have a cluster of ELBs. The port to use is 4321, which is the group management port we configured in loadbalancer.conf in the ELB. Since we have only one ELB, there will be only one <member> tag.
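A sketch of what that <members> section in axis2.xml could look like is below; the hostName value is an assumption, so use the host the ELB is actually reachable on:

```xml
<members>
    <member>
        <hostName>elb.wso2.com</hostName>
        <port>4321</port>
    </member>
</members>
```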
* Next we need to set the HTTP and HTTPS proxy ports in GREG_HOME/repository/conf/tomcat/catalina-server.xml. Add a new attribute named proxyPort with the value 8280 for HTTP and 8243 for HTTPS; that is, add proxyPort="8280" and proxyPort="8243" to the two <Connector> tags, respectively.
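For illustration, the HTTP <Connector> tag would gain the attribute like this (only a sketch; the stock element carries many more attributes, which are left out here):

```xml
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="8280">
</Connector>
```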

* Next, since we are going to set up our nodes and the ELB on the local machine, we need to avoid port conflicts. For this, set the <Offset> to 1 in the first node, 2 in the second node and 3 in the third node, in GREG_HOME/repository/conf/carbon.xml. You can put any Offset you wish, but it has to be different for nodes running on the same machine. If the nodes are on different servers, this need not be done.
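For the first node, the relevant element inside the <Ports> section of carbon.xml would simply be:

```xml
<Offset>1</Offset>
```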

* In carbon.xml we also need to update the "HostName" and "MgtHostName" elements to the hostname through which the nodes will be accessed (the one mapped in the hosts file)
* The WSO2 Carbon platform supports a deployment model in its architecture in which components act as two types of nodes in a clustered setup: 'management' nodes and 'worker' nodes. 'Management' nodes are used for changing configurations and deploying artifacts, while 'worker' nodes process requests. But in WSO2 GREG there is no concept of request processing, so all nodes are considered 'management' nodes. In a clustered setup, we have to configure this explicitly in axis2.xml by changing the value attribute of the "subDomain" <property> tag to "mgt". After the change, the property tags would look like the below.
<!-- Properties specific to this member -->
<parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
</parameter>
* Next open up the GREG_HOME/repository/conf/user-mgt.xml file and update the "dataSource" property as follows, so that the user manager tables are created in "wso2governance":
<Property name="dataSource">jdbc/WSO2CarbonDB_mount</Property>
* Next we need to configure the shared 'config' and 'governance' registries, which is why I earlier pointed you to my previous blog post "Setting up WSO2 Governance Registry Partitions". OK.. we shall configure this in each node's GREG_HOME/repository/conf/registry.xml and GREG_HOME/repository/conf/datasources/master-datasources.xml. First, in registry.xml we need to add the following additional configs.

Add a new <dbConfig>
<dbConfig name="wso2registry_mount">
Add a new <remoteInstance>
<remoteInstance url="https://localhost:9443/registry">
Add the <mount> path for 'config' and 'governance'
<mount path="/_system/config" overwrite="true">
<mount path="/_system/governance" overwrite="true">
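Putting the three additions together, a sketch of the registry.xml fragment is below. The child elements follow the usual WSO2 registry mounting pattern, and the instance id, cacheId and credentials are assumptions to adapt to your own setup:

```xml
<dbConfig name="wso2registry_mount">
    <dataSource>jdbc/WSO2CarbonDB_mount</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>wso2registry_mount</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>root@jdbc:mysql://localhost:3306/wso2governance</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>

<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```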
Next, in master-datasources.xml add a new <datasource> as below. Do not change the existing <datasource> whose <name> is WSO2_CARBON_DB.
<description>The datasource used for registry and user manager</description>
<definition type="RDBMS">
<validationQuery>SELECT 1</validationQuery>
As the final step modify the <datasource> for WSO2AM_DB as below.
<description>The datasource used for API Manager database</description>
<definition type="RDBMS">
<validationQuery>SELECT 1</validationQuery>
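For reference, a sketch of the two datasource entries is below, assuming MySQL running locally with the root user; the JNDI names, credentials and URLs are assumptions to match your own database setup:

```xml
<datasource>
    <name>WSO2CarbonDB_mount</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_mount</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2governance</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>

<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2apim</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```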
Now we may run each WSO2 GREG node by running the following in GREG_HOME/bin
$ ./
When all three nodes are up and running, the ELB console should print something similar to the below, indicating that the nodes have successfully joined the cluster.
[2013-11-12 16:52:29,260]  INFO - HazelcastGroupManagementAgent Member joined [271050b5-69d0-468d-9e9e-29557aa550ef]:
[2013-11-12 16:52:32,314] INFO - MemberUtils Added member: Host:, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:52:32,315] INFO - HazelcastGroupManagementAgent Application member Host:, Remote Host:null, Port: 4252, HTTP:9766, HTTPS:9446, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:53:01,042] INFO - TimeoutHandler This engine will expire all callbacks after : 86400 seconds, irrespective of the timeout action, after the specified or optional timeout
[2013-11-12 16:55:34,655] INFO - HazelcastGroupManagementAgent Member joined [912fe7db-7334-467c-a6a7-e6929e652edb]:
[2013-11-12 16:55:38,707] INFO - MemberUtils Added member: Host:, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:55:38,708] INFO - HazelcastGroupManagementAgent Application member Host:, Remote Host:null, Port: 4251, HTTP:9765, HTTPS:9445, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
[2013-11-12 16:57:05,136] INFO - HazelcastGroupManagementAgent Member joined [cc177d58-1ec1-473e-899c-9f154303e606]:
[2013-11-12 16:57:09,232] INFO - MemberUtils Added member: Host:, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true
[2013-11-12 16:57:09,232] INFO - HazelcastGroupManagementAgent Application member Host:, Remote Host:null, Port: 4250, HTTP:9764, HTTPS:9444, Domain: wso2.governance.domain, Sub-domain:mgt, Active:true joined application cluster
Great!!! Now we may access the Management Console through the ELB, and the WSO2 GREG is clustered. Every request that comes to the ELB will be distributed among the GREG nodes, providing scalability and high availability.