WSO2 Venus

Chamila Wijayarathna: Data Model of Sinmin Corpus

In my last two blogs, I wrote about the project I carried out in my final year at university. In this blog I will write about the data storage model we used in the Sinmin corpus.
After doing a performance analysis of various data storage candidates for the corpus, we decided to use Cassandra as the corpus's main data storage system.
Cassandra is an open source column store database system. It uses query-based data modeling, where the data model is designed around the information you expect to retrieve.
The following table shows the information needs of the corpus and the column families defined to fulfill those needs, with the corresponding indexing.

Information Need
Corresponding Column Family with Indexing
Get the frequency of a given word in a given time period and a given category
corpus.word_time_category_frequency (id bigint, word varchar, year int, category varchar, frequency int, PRIMARY KEY(word, year, category))
Get the frequency of a given word in a given time period
corpus.word_time_category_frequency (id bigint, word varchar, year int, frequency int, PRIMARY KEY(word, year))
Get the frequency of a given word in a given category
corpus.word_time_category_frequency (id bigint, word varchar, category varchar, frequency int, PRIMARY KEY(word, category))
Get the frequency of a given word
corpus.word_time_category_frequency (id bigint, word varchar, frequency int, PRIMARY KEY(word))
Get the frequency of a given bigram in a given time period and a given category
corpus.bigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, year int, category varchar, frequency int, PRIMARY KEY(word1, word2, year, category))
Get the frequency of a given bigram in a given time period
corpus.bigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, year int, frequency int, PRIMARY KEY(word1, word2, year))
Get the frequency of a given bigram in a given category
corpus.bigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, category varchar, frequency int, PRIMARY KEY(word1, word2, category))
Get the frequency of a given bigram
corpus.bigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, frequency int, PRIMARY KEY(word1, word2))
Get the frequency of a given trigram in a given time period and a given category
corpus.trigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, category varchar, frequency int, PRIMARY KEY(word1, word2, word3, year, category))
Get the frequency of a given trigram in a given time period
corpus.trigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, frequency int, PRIMARY KEY(word1, word2, word3, year))
Get the frequency of a given trigram in a given category
corpus.trigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, category varchar, frequency int, PRIMARY KEY(word1, word2, word3, category))
Get the frequency of a given trigram
corpus.trigram_time_category_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, frequency int, PRIMARY KEY(word1, word2, word3))
Get the most frequently used words in a given time period and a given category
corpus.word_time_category_ordered_frequency (id bigint, word varchar, year int, category varchar, frequency int, PRIMARY KEY((year, category), frequency, word))
Get the most frequently used words in a given time period
corpus.word_time_category_ordered_frequency (id bigint, word varchar, year int, frequency int, PRIMARY KEY(year, frequency, word))
Get the most frequently used words in a given category,
Get the most frequently used words
corpus.word_time_category_ordered_frequency (id bigint, word varchar, category varchar, frequency int, PRIMARY KEY(category, frequency, word))
Get the most frequently used bigrams in a given time period and a given category
corpus.bigram_time_ordered_frequency (id bigint, word1 varchar, word2 varchar, year int, category varchar, frequency int, PRIMARY KEY((year, category), frequency, word1, word2))
Get the most frequently used bigrams in a given time period
corpus.bigram_time_ordered_frequency (id bigint, word1 varchar, word2 varchar, year int, frequency int, PRIMARY KEY(year, frequency, word1, word2))
Get the most frequently used bigrams in a given category,
Get the most frequently used bigrams
corpus.bigram_time_ordered_frequency (id bigint, word1 varchar, word2 varchar, category varchar, frequency int, PRIMARY KEY(category, frequency, word1, word2))
Get the most frequently used trigrams in a given time period and a given category
corpus.trigram_time_category_ordered_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, category varchar, frequency int, PRIMARY KEY((year, category), frequency, word1, word2, word3))
Get the most frequently used trigrams in a given time period
corpus.trigram_time_category_ordered_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, frequency int, PRIMARY KEY(year, frequency, word1, word2, word3))
Get the most frequently used trigrams in a given category
corpus.trigram_time_category_ordered_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, category varchar, frequency int, PRIMARY KEY(category, frequency, word1, word2, word3))
Get the latest key words in context for a given word in a given time period and a given category
corpus.word_year_category_usage (id bigint, word varchar, year int, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word, year, category), date, id))
Get the latest key words in context for a given word in a given time period
corpus.word_year_category_usage (id bigint, word varchar, year int, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word, year), date, id))
Get the latest key words in context for a given word in a given category
corpus.word_year_category_usage (id bigint, word varchar, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word, category), date, id))
Get the latest key words in context for a given word
corpus.word_year_category_usage (id bigint, word varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY(word, date, id))
Get the latest key words in context for a given bigram in a given time period and a given category
corpus.bigram_year_category_usage (id bigint, word1 varchar, word2 varchar, year int, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, year, category), date, id))
Get the latest key words in context for a given bigram in a given time period
corpus.bigram_year_category_usage (id bigint, word1 varchar, word2 varchar, year int, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, year), date, id))
Get the latest key words in context for a given bigram in a given category
corpus.bigram_year_category_usage (id bigint, word1 varchar, word2 varchar, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, category), date, id))
Get the latest key words in context for a given bigram
corpus.bigram_year_category_usage (id bigint, word1 varchar, word2 varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2), date, id))
Get the latest key words in context for a given trigram in a given time period and a given category
corpus.trigram_year_category_usage (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, word3, year, category), date, id))
Get the latest key words in context for a given trigram in a given time period
corpus.trigram_year_category_usage (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, word3, year), date, id))
Get the latest key words in context for a given trigram in a given category
corpus.trigram_year_category_usage (id bigint, word1 varchar, word2 varchar, word3 varchar, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, word3, category), date, id))
Get the latest key words in context for a given trigram
corpus.trigram_year_category_usage (id bigint, word1 varchar, word2 varchar, word3 varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1, word2, word3), date, id))
Get the most frequent words at a given position of a sentence
corpus.word_pos_frequency (id bigint, content varchar, position int, frequency int, PRIMARY KEY(position, frequency, content))
corpus.word_pos_id (id bigint, content varchar, position int, frequency int, PRIMARY KEY(position, content))
Get the most frequent words at a given position of a sentence in a given time period
corpus.word_pos_frequency (id bigint, content varchar, position int, year int, frequency int, PRIMARY KEY((position, year), frequency, content))
corpus.word_pos_id (id bigint, content varchar, position int, year int, frequency int, PRIMARY KEY(position, year, content))
Get the most frequent words at a given position of a sentence in a given category
corpus.word_pos_frequency (id bigint, content varchar, position int, category varchar, frequency int, PRIMARY KEY((position, category), frequency, content))
corpus.word_pos_id (id bigint, content varchar, position int, category varchar, frequency int, PRIMARY KEY(position, category, content))
Get the most frequent words at a given position of a sentence in a given time period and a given category
corpus.word_pos_frequency (id bigint, content varchar, position int, year int, category varchar, frequency int, PRIMARY KEY((position, year, category), frequency, content))
corpus.word_pos_id (id bigint, content varchar, position int, year int, category varchar, frequency int, PRIMARY KEY(position, year, category, content))
Get the number of words in the corpus in a given category and year
corpus.word_sizes (year varchar, category varchar, size int, PRIMARY KEY(year, category))
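As an illustration of this query-first model, the first information need above maps onto a single primary-key lookup; the word, year and category values below are placeholders, not real corpus data:

```sql
SELECT frequency
FROM corpus.word_time_category_frequency
WHERE word = 'someword' AND year = 2010 AND category = 'news';
```

Because (word, year, category) is the primary key, Cassandra can answer this without any secondary index or scan.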

Afkham Azeez: Vega - the Sri Lankan Supercar

For the past 3 months, I have been on sabbatical & away from WSO2. During this period, I got the privilege of working with Team Vega, which is building a fully electric supercar. It was a great opportunity for me since anything to do with vehicles is my passion. The car is being 100% hand built, with around 95% of the components being manufactured locally in Sri Lanka.

Vega Logo

The work on the car is progressing nicely. The images below show the plug which will be used to develop a mold. The mold in turn, will be used to develop the body panels. The final product will have Carbon Fiber body panels.

The vehicle chassis is a space frame. This provides the required strength & rigidity, as well as ease of development using simple fabrication methods. The following images show the space frame chassis of Vega.

The following image shows the 450 HP motors coupled with a reduction gear box that powers Vega. This setup powers the rear wheels. The wheels are not mechanically coupled to each other in any way; differential action is controlled via software. You can see the gear box in the center, and the two motors on either side of it.

450 HP motor & motor controller

450 HP motor

One of the highlights of this vehicle is its mechanical simplicity. The vehicle uses very few mechanical parts compared to traditional vehicles, and all the heavy lifting is done by the electronics & software. There will be around 25 micro-controllers that communicate via CAN bus. Most of the actuation & monitoring will be via messaging between these micro-controllers.

The power required is supplied by a series of battery packs. The battery packs are built using 3.3V Lithium Iron Phosphate (LiFePO4) cells. This cell has high chemical stability under varying conditions. There is a battery management system which monitors the batteries & handles charging of the batteries.

A single LiFePO4 cell

Battery module with cooling lines

A single battery module mounted on the Vega chassis

When it comes to electric vehicle charging, there are two leading standards: J1772 & CHAdeMO. The team is also building chargers which will be deployed in various locations. The image below shows a Level 2 charger. There are 3.3kW & 6.5kW options available. One hour of charging using this charger will give a range of 25 km on average.

The image below shows the super charger that is being built. There are 12.5kW & 25kW options available at the moment. This charger can charge the battery up to 80% of its capacity within a period of 20 minutes.  

With electric vehicles gradually gaining popularity in Sri Lanka & the rest of the world, it has become a necessity to deploy chargers in public locations. This leads to a new problem of managing & monitoring chargers, as well as billing for charging. OCPP (Open Charge Point Protocol) is a specification which has been adopted by a number of countries & organizations to carry out these tasks. The Vega chargers will also support this standard.

CAD diagrams of the Vega supercar (in the background)
Last day with Team Vega
It was a wonderful experience working with Team Vega, even though it was for a very short time, and I am looking forward to the day when I get to test drive the supercar.

Update: Video introducing Vega 

Nandika Jayawardana: Email testing with GreenMail

When an application provides support for sending and receiving emails, we need to be able to write test cases to automate the verification of this functionality. GreenMail is a great library for implementing such tests. GreenMail provides support for various mail protocols: SMTP, SMTPS, POP3, POP3S, IMAP and IMAPS.

You can start a GreenMail server with all of these protocols enabled, listening on localhost, with two lines of code.

        GreenMail greenMail = new GreenMail();
        greenMail.start();

When the server is started as above, the default ports of the above supported protocols are offset by 3000.

SMTP    : 3025    SMTPS  : 3465   POP3     : 3110   POP3S   : 3995  IMAP    : 3143  IMAPS  : 3993

However, if you start the server as above, more often than not there will be a port conflict with some other program, resulting in exceptions. Therefore you can configure the GreenMail server with a ServerSetup object, giving the port, binding address and protocol.

 ServerSetup setup = new ServerSetup(3025, "localhost", "smtp");
 GreenMail greenMail = new GreenMail(setup);

You can register a user with the server by using the setUser method.

greenMail.setUser("", "user1", "user1");

To receive mails from the GreenMail server object, the waitForIncomingEmail method is available; it blocks until the given number of messages has arrived or the timeout (in milliseconds) expires.

greenMail.waitForIncomingEmail(5000, 1);
Message[] messages = greenMail.getReceivedMessages();

In order to find more details, take a look at the source code at

The following is a sample of how to send a mail using JavaMail and receive it using GreenMail, with user authentication enabled.

import com.icegreen.greenmail.util.GreenMail;
import com.icegreen.greenmail.util.ServerSetup;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import;
import java.util.Properties;

public class GreenMailTest {

    public static void main(String[] args) throws InterruptedException, IOException, MessagingException {
        ServerSetup setup = new ServerSetup(3025, "localhost", "smtp");
        GreenMail greenMail = new GreenMail(setup);
        greenMail.start();
        greenMail.setUser("", "user1", "user1");

        final String username = "user1";
        final String password = "user1";

        Properties props = new Properties();
        props.put("mail.smtp.auth", "true");
        props.put("mail.transport.protocol", "smtp");
        props.put("", "localhost");
        props.put("mail.smtp.port", "3025");

        Session session = Session.getInstance(props,
                                              new javax.mail.Authenticator() {
                                                  protected PasswordAuthentication getPasswordAuthentication() {
                                                      return new PasswordAuthentication(username, password);
                                                  }
                                              });

        Message message = new MimeMessage(session);
        message.setSubject("Mail Subject");
        message.setText("Mail content");
        message.setFrom(new InternetAddress(""));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(""));

        Transport.send(message);

        greenMail.waitForIncomingEmail(5000, 1);
        Message[] messages = greenMail.getReceivedMessages();
        System.out.println("Message length =>" + messages.length);
        System.out.println("Subject => " + messages[0].getSubject());
        System.out.println("Content => " + messages[0].getContent().toString());

        greenMail.stop();
    }
}

Kalpa Welivitigoda: Workaround for absolute path issue in SFTP in VFS transport

In WSO2 ESB, the VFS transport can be used to access an SFTP file system. The issue is that we cannot use absolute paths with SFTP; this affects WSO2 ESB 4.8.1 and prior versions. The reason is that SFTP uses SSH to log in, which by default logs into the user's home directory, so the path specified is treated as relative to the user's home directory.

For example, consider the VFS URL below.
The requirement is to refer to /myPath/file.xml, but it will refer to /home/kalpa/myPath/file.xml (/home/kalpa is the user's home directory).

To overcome this issue we can create a mount for the desired directory inside the user's home directory on the SFTP file system. Considering the example above, we can create the mount as follows:
mount --bind /myPath /home/kalpa/myPath
With this, the VFS URL above will actually refer to /myPath/file.xml.
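A bind mount created this way does not survive a reboot. If persistence is needed, an equivalent /etc/fstab entry (using the same illustrative paths as above) would look roughly like this:

```
/myPath    /home/kalpa/myPath    none    bind    0 0
```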

Thilini Ishaka: Importing Swagger API Definitions to Create APIs in WSO2 API-M 1.8.0

With API-M 1.8.0 you now have the facility to add/edit the Swagger definition (version 1.2) of an API at API design time.

A sample swagger API definition URI:

Using the Import option:

The API Designer UI after importing the Swagger API definition:

The upcoming API-M version (1.9.0) has support for Swagger 2.0 API definitions.

Madhuka Udantha: Building Zeppelin on Windows 8

Prerequisites

  • java 1.7
  • maven 3.2.x or 3.3.x
  • nodejs
  • npm
  • cygwin

Here are my versions on Windows 8 (64-bit)


1. Clone git repo

     git clone

2. Let’s build Incubator-zeppelin from the source

    mvn clean package

Since you are running in a Windows shell, spaces in directory paths and the newline difference between Unix and DOS line endings will break some tests, so you can skip them for now with '-DskipTests'. Use '-U' to get an updated snapshot of the repo while building.

incubator-zeppelin builds successfully.



A few issues you can face on Windows


[ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:0.0.23:bower (bower install) on project zeppelin-web: Failed to run task: 'bower --allow-root install' failed. (error code 1) -> [Help 1]

You can find 'bower' in the incubator-zeppelin\zeppelin-web folder. Go to the zeppelin-web directory, run 'bower install', and wait until it completes.

Sometimes you will get an issue in node-gyp; check your Node.js version and whether the Node.js location is pointed to correctly.

  • $ node --version
  • $ which node

Then you can get the newest version of node-gyp.

Sometimes, depending on Cygwin user permissions, you have to install 'bower' yourself if it is missing:

  • npm install -g bower


Error 02

[ERROR] bower json3#~3.3.1  ECMDERR Failed to execute "git ls-remote --tags --heads git://", exit code of #128 fatal: unable to connect to[0:]: errno=Connection timed out

Instead of running this command:

  git ls-remote --tags --heads git://

you should run this command:
     git ls-remote --tags --heads
    git ls-remote --tags --heads

or you can run 'git ls-remote --tags --heads git://' but you need to make git always use https in this way:

    git config --global url."https://".insteadOf git://

A lot of the time this issue occurs due to a corporate network/proxy, so we can add proxy settings to git's config and all is well:
    git config --global http.proxy
    git config --global https.proxy


Error 03

You will have to fix the newline issue on Windows, where a new line is marked as '\r\n'.
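One way to tame this (my own suggestion, not from the original post) is to tell git not to convert line endings to CRLF on checkout, so the checked-out sources keep Unix LF endings and the affected tests stop breaking:

```shell
# Keep LF endings in the working tree; convert any CRLF back to LF on commit.
git config --global core.autocrlf input
```

Alternatively, `core.autocrlf false` leaves files untouched in both directions.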

Madhuka Udantha: Building Apache Zeppelin

Apache Zeppelin (incubating) is a collaborative data analytics and visualization tool for Apache Spark and Apache Flink. It is a web-based tool for data scientists to collaborate over large-scale data exploration. Zeppelin is independent of the execution framework. Zeppelin has full integrated support for Apache Spark, so I will try a sample with Spark itself. The Zeppelin interpreter concept allows any language or data-processing backend to be plugged into Zeppelin (Scala with Apache Spark, SparkSQL, Markdown, Shell).
Let’s build from the source.
1. Get repo to machine.
git clone
2. Build code
mvn clean package
You can use '-DskipTests' to skip the tests during the build.
For cluster mode
mvn install -DskipTests -Dspark.version=1.1.0 -Dhadoop.version=2.2.0
Change spark.version and hadoop.version to your cluster's one.
3. Add jars, files
The spark.jars and spark.files properties in ZEPPELIN_JAVA_OPTS add jars and files to the SparkContext
ZEPPELIN_JAVA_OPTS="-Dspark.jars=/mylib1.jar,/mylib2.jar -Dspark.files=/myfile1.dat,/myfile2.dat"
4. Start Zeppelin
bin/ start
In the console you will see 'Zeppelin start' printed, so go to http://localhost:8080/
Go to NoteBook -> Tutorial
There you can see the charts and graphs with queries, and you can pick chart attributes with drag and drop.


Madhuka Udantha: Zeppelin Note for Loading Data and Analyzing

My previous post was an introduction to the Zeppelin notebook. Here we will take a more detailed view of how it can be used by a researcher. Using the shell interpreter we can download / retrieve data sets or files from a remote server or the internet. Then we use Scala in Spark to make a class from that data, and then use SQL to play with the data. You can analyze data very quickly, as Zeppelin's display supports dynamic forms.

1. Loading data into Zeppelin from a local CSV file

val bankText = sc.textFile("/home/max/zeppelin/zp1/bank.csv")

case class Bank(age: Integer, job: String, marital: String, education: String, balance: Integer)

val bank = => s.split(";"))
    .filter(s => s(0) != "\"age\"")   // skip the CSV header row
    .map(s => Bank(s(0).toInt,
            s(1).replaceAll("\"", ""),
            s(2).replaceAll("\"", ""),
            s(3).replaceAll("\"", ""),
            s(5).replaceAll("\"", "").toInt
        )
    )

bank.registerAsTable("bank")


Here we are creating an RDD of Bank objects and registering it as a table called 'bank'.

Note: Case classes in Scala 2.10 can support only up to 22 fields.

2. Next run SQL for newly created table

%sql select count(*) as count from bank


This gives the total record count we have in the table.

%sql
select age, count(1) value
from bank
where age < 30
group by age
order by age


It supports tooltips on the chart, showing the age and the user count for that age.


Sivajothy Vanjikumaran: WSO2 ESB send same request to different REST services

In this scenario I need to send a POST request to two different REST services, using the REST API configuration of WSO2 ESB. First I need to post a request to the first service; on successful posting, the same original request needs to be posted to another service. I need to obtain the response from the first service and send it to the client, but I do not need the response from the second service.

I have illustrated this scenario in the simple flow diagram below.

Relevant Synapse configuration.

Madhuka Udantha: Zeppelin NoteBook

Here is my previous post on building Zeppelin from source. This post will take you on a tour of the notebook feature of Zeppelin. A notebook contains notes, and a note has paragraphs.

1. Start your Zeppelin by entering

/incubator-zeppelin $ ./bin/ start

2. Go to localhost:8080 and click on 'NoteBook' in the top menu. Then click on 'Create new note'. Now you will have a note, so go to the note you just created.


3. Set your note title to "My Note Book" by clicking on it. Leave the interpreter as the default for now.

4. Add a title for the note by clicking on the title of the note.


Multiple languages by Zeppelin interpreter

Zeppelin is an analytical tool, and the notebook is multi-purpose: data ingestion, discovery and visualization. It supports multiple languages via the Zeppelin interpreter, and data-processing backends are also pluggable into Zeppelin. The default interpreters are Scala with Apache Spark, Python with SparkContext, SparkSQL, Hive, Markdown and Shell.


##Hi Zeppelin

You can run it by pressing Shift+Enter or clicking the play button at the top of the note or paragraph.


Dynamic form for markdown


Hello ${name=bigdata}


Scala with Apache Spark

Now we will try Scala in our notebook. Let's get the version.



val text = "Hey, scala"


In my next post I will go deeper into Scala.

Table and Charts

You can use the escape characters '\n' for a new line and '\t' for a tab, and build the data set below (hard-coded for the sample)



The table magic comes in here

By using %table

println("%table student\tsubject\tmarks\nMadhuka\tScience\t95\nJhon\tScience\t85\nJhon\tMaths\t100\n")


Adding a form with z.input("key", "value")

println("%table student\tsubject\tmarks\nMadhuka\tScience\t"+z.input("Marks", 95)+"\nJhon\tScience\t85\nJhon\tMaths\t100\n")


It also supports remote sharing.


In the next post we will go more into the dynamic form idea and real data analysis.

Madhuka Udantha: Maven 3.3.x for Mint

1. Open the terminal and download the ''

2. Unpack the binary distribution

3. Move the apache maven directory to /usr/local
sudo cp -R apache-maven-3.3.1 /usr/local/

4. Adding PATH and MAVEN_HOME
gedit .bashrc OR vi .bashrc

Then add the two lines below
export PATH="/usr/local/apache-maven-3.3.1/bin:/opt/java/jdk1.8.0_40/bin:$PATH"
export MAVEN_HOME="/usr/local/apache-maven-3.3.1"

source .bashrc

An optional way for step four:
Create a soft link (symbolic link) for maven
sudo ln -s /usr/local/apache-maven-3.3.1/bin/mvn /usr/bin/mvn

5. Test Maven version
mvn --version


Sajith Ravindra: A possible reason for "Error while accessing backend services for API key validation" in WSO2 API Manager


When you try to validate a token in WSO2 API Manager, it may return the error:
<ams:fault xmlns:ams="">
<ams:message>Unclassified Authentication Failure</ams:message>
<ams:description>Error while accessing backend services for API key validation</ams:description>

The most likely cause of this problem is an error with the Key Manager. This error means that the tokens could not be validated because the back-end, in other words the Key Manager, could not be accessed.

I had a distributed API Manager 1.6 deployment, and when I tried to generate a token for a user this error was returned. Since the error indicates a problem with the Key Manager, I had a look at the Key Manager's wso2carbon.log. In the log file I noticed the following entry, even though there was nothing wrong in the Key Manager itself:

{org.wso2.carbon.identity.thrift.authentication.ThriftAuthenticatorServiceImpl} - Authentication failed for user: admin Hence, returning null for session id. {org.wso2.carbon.identity.thrift.authentication.ThriftAuthenticatorServiceImpl} 

 And In the API Gateway's log file following error was logged,

TID: [0] [AM] [2015-04-06 21:08:15,918] ERROR {} -  API authentication failure {} Error while accessing backend services for API key validation
        at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(
        at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(
        at org.apache.axis2.engine.AxisEngine.receive(
        at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(
        at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(
        at org.apache.axis2.transport.base.threads.NativeWorkerPool$
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$
Caused by: java.lang.NullPointerException
        at org.apache.commons.pool.impl.StackObjectPool.borrowObject(


When I investigated the problem further, I realized that I had NOT put the correct super user name and password in /repository/conf/api-manager.xml in the gateway (the user name and password used to log into the management console). When I used the correct user name and password the problem was solved.

This error occurs because the Gateway could not connect to the Key Manager's validation service due to invalid credentials.

In api-manager.xml, the following 3 sections contain <Username> and <Password>; make sure they are correct:
1) <AuthManager> 


This is not the only possible reason for the above-mentioned error. Some other common causes are (but are not limited to):
- a mis-configured master-datasources.xml file
- connectivity issues between the database and the Key Manager
- the Key Manager not being reachable
I suggest you have a look at the Key Manager log file when you investigate this error; it is very likely you will find a clue there.

Yumani Ranaweera: What happens to HTTP transport when service level security is enabled in Carbon 4.2.0

I was under the impression this was a bug, and I am sure many of us will assume so.

In Carbon 4.2.0 products, for example WSO2 AS 5.2.0, when you apply security the HTTP endpoint is disabled and disappears from the service dashboard as well.

Service Dashboard


In earlier Carbon versions this did not happen; both endpoints used to appear even if you had enabled security.

Knowing this, I tried accessing the HTTP endpoint, and when that failed I tried:
- restarting the server,
- dis-engaging security,
but neither helped.

The reason being: this is not a bug, but is by design. The HTTP transport is disabled when you enable security, and to activate it again you need to enable HTTP from the service-level transport settings.

Transport management view - HTTP disabled

Change above as this;

Transport management view - HTTP enabled

Kasun Dananjaya Delgolla: Debugging the Android EMM Agent - WSO2 EMM 1.1.0

We have built a custom debugger for the Android Agent to check the input and output operations in real time on the agent app itself. To use this feature, first build the Android Agent by following [1]. Before you generate the APK file, make sure you change the constant "DEBUG_MODE_ENABLED" in [2] to true.

Once it's built and installed on a device, you can enroll the device to the EMM server and reach the register success screen of the agent app (the screen below).

Once you reach this screen, click on the options icon in the top right corner of the app and click the Debug Log option; you will see the live logs. Click Refresh to view the latest command logs. These logs are also saved in a file named "wso2log.txt" in the device's external storage.

If you need more information on configuring and debugging EMM, visit [3] and [4].

[1] -
[2] -
[3] -
[4]  -

Sivajothy Vanjikumaran: Simple HTTP-based file sharing server within a second!

Recently I came across a situation where I quickly needed to share a file with my colleague, and I did not have any FTP servers or portable devices at hand.

The instant solution: use Python, if you already have Python installed on your machine.

Go to the directory containing the files you want to share
Run "python -m SimpleHTTPServer 8000"

Now you can access your directory via a browser :)
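For reference, the module was renamed in Python 3, so the same one-liner there looks like this (the port number is arbitrary):

```shell
# Python 2 (as in the post above):
#   python -m SimpleHTTPServer 8000
# Python 3 equivalent (SimpleHTTPServer became http.server):
python3 -m http.server 8000
```

Either way, the server serves the current working directory over HTTP until you stop it with Ctrl+C.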

John Mathon: “The Cloud” is facing resistance or failing in many organizations


Barriers to Cloud / PaaS / DevOps Adoption and New Technology Renewal

As described in my simple guide to PaaS, SaaS, IaaS and BaaS here, “The Cloud” is divided into different services, which all share the common characteristics of incrementality, remote manageability, low initial cost and more. It is my most popular article.

Many organizations are taking significant steps toward adoption of “The Cloud”, but many are struggling. I want to emphasize that the Cloud, as a technology and as a business, is a huge success, with over $100 billion in aggregate sales and gaining market share and importance daily at a huge pace, but some organizations are still faltering.

In talking to some CIOs recently, I discovered some interesting problems that many organizations run into when adopting the Cloud technologies around IaaS and PaaS that are critical to future success.


Some of the major stumbling blocks

If you are a startup organization, then adopting the Cloud is a no-brainer and relatively easy. You probably have employees who have used the Cloud and may not know how to do things any other way. If your organization is steeped in Cloud technologies, you have probably already adopted it.

What if you are an older organization with a significant set of existing applications, some of them legacy from different generations of technology, and you want to move to the Cloud and leverage Platform 3 technologies, but your organization is not "Cloud savvy"?

A lot of these organizations are faltering in their Cloud strategy. The recent failure of Nebula is an interesting example of where the problems of Cloud adoption can lie. For more mature organizations there is a proven success model for how to go forward, but first let's look at some of the reasons I've seen for failure:

Organizations may have one or all of these problems.

1) Technology Skills of People

The fact is that many people in organizations are not trained in Cloud technology, and adopting the newest open source technologies underlying PaaS/DevOps before your organization can handle them puts it at risk. You need a leader who can bring your organization to this new way of doing things and face all the hurdles involved. He or she will need to put education and training in place, find suitable initial projects to prove success, and then come up with a plan to transition the whole organization.

Some people think this is all about tools and not about people. They are underestimating the difficulty of change. Going to a DevOps/PaaS technology is more a mindset-and-philosophy people issue than a technology issue, although specialized skills are needed to get up and running effectively.

2) Lack of Understanding of the Cloud's Disruptive Potential

If you don't already have services in a private on-premises Cloud, deploy applications in the Cloud, or plan significant applications or services in the Cloud, you may not see enough value generated by implementing DevOps/PaaS technology. It is frequently the case that organizations don't see the value of, and the need for, the change that disruption will bring.

3) Resistance by parts of the organization

Many parts of major organizations will resist the Cloud because they see it as a threat to their jobs, their existing way of doing things, or their existing applications, or because they simply fear the change.

4) Immaturity of the technology

Cloud technology is rapidly changing and is also immature in some ways for some applications. Some users have been burned by bugs that don't get fixed, or by capabilities that were promised but not delivered. Be careful about using technology that is too new. Find out how many customers are really using a particular technology; if it is fewer than 5 or 10, consider an alternative, even if only as a short-term solution.

5) Complicated for the business to understand

Cloud is new and sometimes difficult to comprehend. It changes and complicates some aspects of security, operations, and costs. You need to gain experience before you jump in too fast. Given that the Cloud is here to stay and has to be part of your long-term success, you need to make the investment to at least learn.

6) Fear of Cloud security

Numerous companies still fear Cloud security. I have a blog on this topic and believe that Cloud security is no more of an issue than your internal security, and probably less. That doesn't mean you should walk in blind: different vendors in the Cloud have different standards and the threats can differ, but overall I don't believe this should be a concern for any company.

7) Can’t decide on an internal path that makes sense

There may be an effort to bite off more than you can chew. If you are confused or worried about the path, take smaller steps. It's easy to break your Cloud adoption into small steps.

8) Find the startup costs of the Cloud too high with too few takers willing to leverage the technology initially within the organization

Many organizations that lack experience in the Cloud are simply not aware of how it will change their organization and what advantages it can bring to virtually every part of it. Education is the only answer. Find out more about the Cloud and learn from others what they did and the benefits they gained.


Initial Steps to Successful Large Organization Adoption of the Cloud and New Technologies

These are steps any large organization can do to start using the Cloud today with lower risk and big payoff down the road.

A) Adopt the Cloud internally by using an open source Cloud technology such as OpenStack, Eucalyptus, or others. Promote projects to use this internal Cloud technology instead of purchasing hardware. Use Docker to package your applications and deploy them in the Cloud.

Almost all organizations of significant size can benefit from an internal Cloud offering that gives internal projects a free or cheap way to get started and be tested. If a project is a big success, you can always expand internal resources, "burst" to the public Cloud for some parts, or move the whole application to a public Cloud, usually fairly trivially. The internal Cloud gives an organization experience with Cloud technologies without some of the hiccups of dealing with external vendors, and without the risks and unfamiliarity that can deter those who are not quite ready for the external Cloud.

Docker is a key emerging standard technology that allows you to systematize the packaging of your applications. Learn to do this quickly; it is easy and pays off all the way through the following steps.

This is where Nebula failed: it had trouble finding organizations that could justify building an internal Cloud infrastructure. Organizations with less Cloud-savvy employees may find it useless. Docker is a big advantage in enabling these organizations: you can take almost any existing application, even older-school applications, package it in Docker, and then deploy it easily on internal Cloud infrastructure.

What's the advantage? It may not be cheaper to do this initially; what Nebula found was that organizations couldn't always find a benefit. It takes a somewhat longer view to realize the benefit of cloudifying what you already have. Encapsulating applications in Docker and deploying them on an internal Cloud will give you experience, and over time you will find that the flexibility, easier management, and eventually shared resources benefit the organization. It may not happen in 6 months, but I doubt many large organizations will find in 2 years that their internal Cloud infrastructure isn't a benefit, whether by pulling lots of disparate applications and servers into standardized hardware and administration or by facilitating an eventual move of some of them to the external Cloud. Replacing services with new services, and easier upgrades and deployment over time, will come from a Docker strategy.

B) Do at least one project using the external Cloud right away (most organizations have renegade groups doing this already, even if it hasn't been officially approved).

Gain some experience with a vendor and start getting people trained by actually using the Cloud. It is almost certain that a large company has this going on, whether IT is aware of it or not. Try to find these projects, bring them into the light, and learn from their experiences. Start to expose the shadow IT projects all over your organization and help them improve, learning across the organization to reduce your risks and build best practices.

Most organizations have limited space in their IT infrastructure for new projects; it can take months to free up space even to add a server. Using an external Cloud is a good way to get these projects going faster. You will need to learn the security risks and mitigations eventually, so you might as well start sooner; because of shadow IT, you are probably already taking those risks.

C) Move some parts of your organization over to DevOps/PaaS technology to gain experience with Cloud Automated Deployment technologies.

A big part of the gain in the use of the Cloud comes if you leverage DevOps/PaaS which automates the delivery of services and provides for more agile operation.

A lot of organizations start to build their own "PaaS" for a number of reasons. Engineering finds that a full PaaS is over their heads in scope and learning curve. DevOps tools such as Chef and Puppet are stepping stones to PaaS that help you gain experience with automation. So operations adopts these tools and starts to build a PaaS.

A typical problem here is that management and operations are scared of full-scale automation because it takes responsibility away from a lot of people. The fact is that DevOps keeps a lot of operations people around: because the automation is hand-built by operations people and run by DevOps, it provides faster, more consistent delivery, but does not necessarily reduce the costs of your own internal operations. PaaS will eventually force most of your operations group to find new work. There is some work in understanding, bringing up, and managing the PaaS, but it won't require nearly the same workforce size and maybe not the same people.

It is easy to argue that PaaS is too big a step, but if you don't cut off the constant development of DevOps technologies you may find yourself essentially implementing an entire PaaS yourself, with problems in stability and compatibility with other projects. I believe it is better to jump as fast as you possibly can into a commercial, open source, or other PaaS technology that meets your needs, or you will end up repeating the mistakes of many companies that have already gone down this path.

The benefits of PaaS/DevOps are significant and critical. The ability to deploy to the Cloud quickly, to scale automatically, to deploy upgrades, and to do this daily if needed will transform your ability to respond to customers, to create value quickly, and to improve every aspect of the technology side of your organization.

A full PaaS implementation will give you the ability to deploy new functionality daily if you need to.   It will give you the ability to scale up or down as demand changes.   It will give you much more agility to deploy new services and applications.  These are significant benefits but the road there isn’t as easy yet as it should be.

One caveat I would offer is to look mainly at open source PaaS, though depending on your usage scenario you may need some features more than others. I have a blog on selecting a PaaS and another on 9 use cases for PaaS. Please read both to learn the key features and places to start.

D) Do an API Management project: start taking your existing services and providing user-friendly RESTful APIs, and mandate that new projects must put their APIs and services in the API store you build.

A key part of virtualizing your organization and leveraging the benefits of new technologies, including the Cloud, is reuse of APIs. I consider this a critical first step for many organizations. Most bigger organizations already have numerous services, and with a good open source API management product it is trivial to start building a library of internal services to reuse and manage better.

The transparency that comes with the API Management adoption is critical to encouraging the kind of innovation and fast new product development most companies are looking to achieve with the Cloud.   It also will start to give you a path to organizing the external APIs you use and managing all services your organization uses internally or externally as you grow your Cloud usage.

Most large organizations I talk to get the benefits of this step, and in my opinion it is the most profound change going on in most organizations because it is a crucial step in leveraging new technology, whether in the Cloud or not. I have a blog on 21 reasons to use API Management, another on the API revolution, and one on the API/Enterprise Store and its importance.

E) Do an integration project with a Cloud service such as Salesforce, Workday, or another major SaaS vendor you are using today.

This is a trivial first step to Cloud usage if you are already using a SaaS vendor. In many cases you will find shadow IT usage of SaaS in the company. If you provide amnesty or some other way to make these projects visible, you can find the ones you want to endorse and then build integration to bring that "outlaw" usage into compliance with the organization.

Almost every large organization using a SaaS product in the Cloud will want to integrate it with internal applications. Put the integrations you build into open source inside the company (see Inner Source). Place the APIs you use or build with this integration into your API Management infrastructure to give transparency and make it easier for others to integrate.

Bigger Steps to Cloud Adoption

After you have gained some expertise and experience with Cloud technologies, and you have some people up to speed, some experience with vendors, and a sense of how your organization can benefit from the Cloud, consider more widespread adoption:

F)  Move all new technology development to Cloud based technologies

G) Establish standard vendors for Cloud and sign specialized SLAs and security requirements with them

H) Consider the scope of a transition to a virtual organization: what money could be saved and what it would cost, based on the real-world experience you've gained in previous projects.

I) Put in place the performance, application, and API management platforms and the PaaS automation to make large-scale usage of the Cloud automated, scalable, and manageable. I have a blog on the new way to manage virtual enterprises.

J) Dismantle your hardware infrastructure, move or transition to new applications, and provide lifelines to software and hardware that won't be going.

Other resources on this topic you may find interesting:

A simple guide to Cloud Computing, IaaS, PaaS, SaaS, BaaS, DBaaS, iPaaS, IDaaS, APIMaaS

The Seven Phases of PaaS (YouTube)

Five open source PaaS options you should know

A Guide to Open Source Cloud Computing Software

Security in the cloud age

Why middleware on cloud?

 Docker Fundamentals

CIOs, This Is How To Talk To Your CEO About The Cloud


Chamila WijayarathnaImplementing Web Crawlers for Sinmin Sinhala Corpus

In my last blog I wrote a short description of my final year project. In this blog and the next few, I plan to write about some technical details of the components I contributed to. One of the parts I contributed to was the crawlers of the corpus, so in this blog I present the design and implementation details of the crawlers.


Crawlers are responsible for finding web pages that contain Sinhala content, then fetching, parsing, and storing them in a manageable format. The design of a particular crawler depends on the language resource to be crawled. The following is a list of the online Sinhala language sources identified so far.

When collecting data from these sites, the first thing to do is create a list of web pages to visit and collect data from. There are three main ways to do this.

  1. Going to a root URL (perhaps the home page) of a given website, listing all links available on that page, then repeatedly visiting each listed page and doing the same until no more new pages can be found.
  2. Identifying a pattern in the page URLs, generating the URLs available for each day, and visiting them.
  3. Identifying a page where all articles are listed and getting the list of URLs from that page.

All three of the above methods have advantages as well as disadvantages.
With the first method, the same programme can be used for all newspapers and magazines, which minimizes the implementation workload. But with this method it is difficult to keep track of which time periods have already been crawled and which have not. Also, pages that have no incoming link from another page will never be listed. It is likewise difficult to do controlled crawling, that is, crawling from one particular date to another.
When considering sources like the ‘Silumina’ newspaper, its article URLs look like “”. All the URLs share a common format: + date + _art.asp?fn= + unique article id
unique article id = article type + last 2 digits of year + month + date + article number
For newspapers and magazines with that kind of regular format, the second method can be used. But some newspapers have URLs that include the article name, e.g. අලුත්-අවුරුද්දට-අලුත්-කිරි-බතක්-හදමුද.html. The second method cannot be applied to such sources.
Most of the websites have a page for each day listing the articles published on that day, e.g. . Such a URL can be generated from the resource (newspaper, magazine, etc.) name and the corresponding date. By reading the HTML content of that page, the crawler gets the list of URLs of the articles published that day. We used this third method for URL generation, since it can be applied to all the resources we identified and since it lets us track which time periods have already been crawled and which have not.
URLGenerators Architecture

All URL generators implement three methods: generateURL, which builds the URL of the page containing the day's article list from a baseURL and a date; fetchPage, which connects to the internet and fetches the page containing the URL list from a given URL; and getArticleList, which extracts the list of article URLs from a given HTML document. The way the URL list is extracted varies from one resource to another, because the styling of each listing page also varies from source to source.
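The URL-generator contract described above can be sketched in Python (the project's actual implementation is in Java). The method names mirror the ones described, but the date-based URL format is a made-up example, and the network-facing fetchPage step is omitted:

```python
# A minimal sketch of the URLGenerator contract; names follow the
# Java methods described above, the URL format is hypothetical.
from abc import ABC, abstractmethod
from datetime import date

class URLGenerator(ABC):
    def __init__(self, base_url: str):
        self.base_url = base_url

    @abstractmethod
    def generate_url(self, day: date) -> str:
        """Build the URL of the page listing one day's articles."""

    @abstractmethod
    def get_article_list(self, html: str) -> list:
        """Extract article URLs from a listing page (source-specific)."""

class DailyListGenerator(URLGenerator):
    def generate_url(self, day: date) -> str:
        # e.g. http://example.org/2015/04/14/index.html (assumed format)
        return f"{self.base_url}/{day:%Y/%m/%d}/index.html"

    def get_article_list(self, html: str) -> list:
        # The real crawlers parse the page with jsoup; this trivial
        # placeholder just picks out lines that look like URLs.
        return [line.strip() for line in html.splitlines()
                if line.strip().startswith("http")]
```

Each concrete source gets its own subclass because, as noted above, every listing page is styled differently.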



Generators for Mahawansaya, subtitles, and Wikipedia follow the same architecture with some small modifications. Mahawansaya sources, Wikipedia articles, and the subtitles we used for the corpus cannot be listed under a date, so the date-based listing method described above did not work for them. But they have pages that list all the links to the articles/files, so when crawling those resources the crawler goes through the whole list of items on such a page. When it crawls a new resource, it saves the resource's details in a MySQL database, so before crawling it can check the database to see whether the item has already been crawled.
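The already-crawled check can be sketched like this in Python, using SQLite in place of the project's MySQL database (the table and column names here are assumptions, not the project's actual schema):

```python
# Sketch of the "have we crawled this already?" check described above.
# SQLite stands in for MySQL; table/column names are assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crawled (url TEXT PRIMARY KEY)")

def already_crawled(url: str) -> bool:
    # Look the URL up before fetching it again.
    row = conn.execute("SELECT 1 FROM crawled WHERE url = ?", (url,)).fetchone()
    return row is not None

def mark_crawled(url: str) -> None:
    # Record the resource once its details have been saved.
    conn.execute("INSERT OR IGNORE INTO crawled (url) VALUES (?)", (url,))
```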
After creating the list of articles to crawl, the next task is collecting the content and other metadata from each of these pages. This is done by the Parser class.
All parsers other than subtitleParser and gazetteParser read the required content and metadata from HTML pages. Since subtitles are available as zip files and gazettes as PDF files, those two parsers have additional functionality to download and unzip files as well.

Parser Architecture (figure)

After extracting all the URLs from the listing page as described above, the crawler goes through each URL and passes the page to the XMLFileWriter using the addDocument method. All data is kept by the XMLFileWriter until it is written to file. When a crawler finishes extracting the articles of a particular date, it notifies the XMLFileWriter that it can write the content to the XML file, sending the finished date with the notification. On this notification, the XMLFileWriter writes the content to a file named with the article date, in a folder named with the ID of that particular crawler. At the same time, the finished date is written to the database.
This figure shows the process of web crawling as a sequence diagram.

The following figure shows the overall architecture of a crawler.
Crawler Class Diagram (figure)


The web crawlers are implemented in Java, with Maven as the build tool. HTML parsing is done with the jsoup 1.7.3 library, date handling with the Joda-Time 2.3 library, and XML handling with Apache Axiom 1.2.14. Writing the raw XML data to files in a human-readable format is done with the StAX Utilities library, version 20070216.
The procedure for crawling a particular online Sinhala source is as follows. From the user interface of the crawler controller, we specify that a source should be crawled from one date to another. The crawler controller finds the path of that crawler's jar in the database using the crawler's ID, then runs the jar with the specified time period as parameters. The crawler class then asks the URL generator class for a web page. The URL generator tries to take a URL from its list; since this is the first crawl, the list is empty, so it tries to find URLs to add. It goes to a page that lists a set of articles, as described in the design section: the URL generator knows the URL format of this listing page and generates the URL for the first page of the specified time period. Now that it has the URL of a listing page, it extracts all the article URLs on that page and adds them to the list of URLs it keeps. The URL generator then returns a web page to the crawler class, which adds the page to its XML file writer.
Data extraction from the web page is initiated by the XML file writer. The XML file writer has an HTML parser written specifically to extract the required details from a web page of this particular online source. It asks the parser to extract the article content and the required metadata from the web page. All the data is added to an article element (OMElement objects are used in Axiom to handle XML elements), and the article is added to the document element.
The crawler continuously asks for web pages, and the URL generator returns pages from the specified time period. When the URL generator finishes sending back the web pages for a particular date, it notifies the XML file writer, which then writes the articles of that date to a single file whose name is the date.

Madhuka UdanthaData Binding in Angular

Data binding is the process that establishes a connection between the application UI (user interface) and the model/business logic. In the JavaScript world we use 'Backbone.js', 'KnockoutJS', 'BindingJS' and 'AngularJS'. This post goes over data binding in Angular.

Traditional Data Binding System

Most web frameworks focus on one-way data binding, and classical template systems work in only one direction: they merge template and model components together into a view. After the merge occurs, changes to the model or related sections of the view are NOT automatically reflected in the view. Worse, any changes that the user makes to the view are not reflected in the model. This means the developer has to write code that constantly syncs the view with the model and the model with the view.

Data Binding in Angular

AngularJS addresses this with two-way data binding.
In Angular, the template (the uncompiled HTML along with any additional markup or directives) is first compiled in the browser. The compilation step produces a live view. The model is the single source of truth for the application state. With two-way data binding, user interface changes are immediately reflected in the underlying data model, and vice versa.


Because the view is just a projection of the model, the controller is completely separated from the view and unaware of it. AngularJS makes this approach to data a primary part of its architecture, which allows you to remove a lot of logic from the front-end display code, particularly when making effective use of AngularJS's declarative approach to UI presentation.

Example application

It is a sample calculator application that shows fuel economy in kilometres per litre after the user enters the fuel amount and drive distance. Here is a live preview.


Here is the code:

<!DOCTYPE html>
<html lang="en-US">
<head>
<title>Fuel Calculator</title>
<script src=""></script>
</head>
<body>
<div ng-app ng-init="fuel=30;distance=400">
  <b>Fuel Consumption Calculator</b>
  <div>
    Fuel Amount (litres): <input type="number" min="0" ng-model="fuel">
  </div>
  <div>
    Drive Distance (km): <input type="number" min="0" ng-model="distance">
  </div>
  <div>
    <b>Fuel Economy (km/L):</b> {{distance / fuel | number:2 }} km/L
  </div>
</div>
</body>
</html>


This looks like normal HTML, with some new markup. In Angular, a file like this is called a template. When Angular starts your application, it parses and processes this new markup from the template using the compiler. The loaded, transformed and rendered DOM is then called the view.


The first kind of new markup is the directives; my last post talks about them briefly. Directives apply special behavior to attributes or elements in the HTML. In the example above we use the ng-app attribute, which is linked to a directive that automatically initializes our application. Angular also defines a directive for the input element that adds extra behavior to it. The ng-model directive stores/updates the value of the input field into/from a variable.

The second kind of new markup is the double curly braces {{ expression | filter }}: Angular replaces them with the evaluated value of the markup.

The important thing in the example is that Angular provides live bindings: whenever the input values change, the values of the expressions are automatically recalculated and the DOM is updated with them. The concept behind this is two-way data binding.



Ajith VitharanaAudit log publisher for WSO2 Identity Server + WSO2 Business Activity Monitor (BAM)

WSO2 Identity Server creates audit logs for user account activities (adding users/roles, deleting users/roles, assigning users to roles, etc.). We can publish those logs to BAM and analyze them.

This is a custom log appender written to publish audit logs to BAM.

How to run.
1. Download the source from here.

2. Open the build.xml file and change the value of product.home property and execute the ant command to build the jar.

3. Copy the org.wso2.carbon.auditlog.publisher-1.0.0.jar file to <IS_HOME>/repository/components/lib

4. Open the file and add the following configuration. (Change the BAM URL, username, and password according to your BAM configuration.)
log4j.appender.AUDIT_LOGFILE1.layout.ConversionPattern=[%d] %P%5p - %x %m %n
log4j.appender.AUDIT_LOGFILE1.layout.TenantPattern=%U%@%D [%T] [%S]
5. Add the AUDIT_LOGFILE1 appender name.
6. Start the BAM server first.

7. Start the IS server.

8. Create users/roles.

9. Log in to BAM and browse Cassandra using the "Explore Cluster" feature. You should now see "audit_log_IS" under "EVENT_KS".

10. Browse the rows to view the attributes of the published events.

11. Run the following hive query to summarize the audit logs.

CREATE EXTERNAL TABLE IF NOT EXISTS ACCOUNT_ACTIVITY
(key STRING, initiator STRING, action STRING, target STRING, result STRING, uuid STRING, logTime BIGINT) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' WITH SERDEPROPERTIES (
"" = "audit_log_IS" ,
"cassandra.columns.mapping" =
":key,payload_initiator, payload_action, payload_target, payload_result, payload_uuid, payload_logTime" );

'hive.jdbc.update.on.duplicate' = 'false' ,
'hive.jdbc.primary.key.fields' = 'uuid' ,
'hive.jdbc.table.create.query' = 'CREATE TABLE ACCOUNT_ACTIVITY_SUMMARY1_TBL (initiator VARCHAR(100), action VARCHAR(100), target VARCHAR(100),result VARCHAR(100),uuid VARCHAR(100), logTime VARCHAR(100))' );

insert overwrite table ACCOUNT_ACTIVITY_SUMMARY1 select initiator,action,target,result,uuid, from_unixtime(cast(logTime/1000 as BIGINT), 'yyyy-MM-dd HH:mm:ss') as logTime from ACCOUNT_ACTIVITY;

'hive.jdbc.update.on.duplicate' = 'false' ,
'hive.jdbc.primary.key.fields' = 'uuid' ,
'hive.jdbc.table.create.query' = 'CREATE TABLE ACCOUNT_ACTIVITY_SUMMARY2_TBL (initiator VARCHAR(100), action VARCHAR(100),result VARCHAR(100),totalcount INT)' );

insert overwrite table ACCOUNT_ACTIVITY_SUMMARY2 select initiator,action,result, count(DISTINCT key) from ACCOUNT_ACTIVITY group by initiator,action,result;

The summarized tables (ACCOUNT_ACTIVITY_SUMMARY1_TBL and ACCOUNT_ACTIVITY_SUMMARY2_TBL) will be created in the BAM_STATS_DB, which is configured in bam-datasources.xml under <BAM_HOME>/repository/conf/datasources.

12. Generate a gadget based on the summarized data.

JDBC URL*: jdbc:h2:repository/database/samples/BAM_STATS_DB;AUTO_SERVER=TRUE
Driver Class Name : org.h2.Driver
User Name* : wso2carbon
Password* : wso2carbon



13. Generate the gadget and copy its URL.

14. Go to the dashboard and add a new gadget.

John MathonA Simple Guide to Agile Software Development


Agile is critical to today’s business

@john_mathon: The purpose of agile is to learn fast. Yes, but… for a GOAL. Don't do agile in name only; don't do agile for agile's sake. @john_mathon: Think about the #ROL (RETURN ON LEARNING), not the ROI, for new agile initiatives.

Sometimes a tweet cannot get the full meaning across easily. I am elaborating on a couple of comments I made at the TCS Innovation Forum, where speakers talked about Agile. Agile impacts the way the whole organization delivers products.

In today's cloudified world it is almost the only way software is delivered. We listen to our customers and improve software rapidly, offer new experiments in interaction paradigms, and constantly test new products and services. Agile is the only way this can be done. As a result, the entire business is organized around the notion of smaller, incremental deliveries.

Making the Agile machine run effectively and without hiccups is a tough job for the entire organization, including the CIO, CTO, CMO, CSO, and others, who all need to be engaged.

The Agile process

Let's return to a basic premise of Agile: Agile is about delivering software fast to get feedback from customers, so the product can be improved quickly to meet their needs. There are many parts to making this work well.

1) Providing user stories (examples of things customers would like to be able to do with the software or service) that are detailed enough but not too detailed


2) Understanding your team and the stories well enough that you can estimate the duration to complete a story with the team you will have


3) Structuring the right stories into an iteration that can be accomplished and delivers the most value to customers


4) Making sure the stakeholders are cognizant of the reasons, limitations, history, and content of the iteration so that they are in sync with the expected result


5) Performing the iteration with consistency and quality so that at the end of the iteration you accomplish the goal of the iteration


6) After the iteration, having the stakeholders, and the customers if a customer delivery was made, agree on what the results were, including any negatives that were uncovered (badness)


7) Holding a retrospective with the team and all the appropriate people to determine what badness happened and what corrective measures could be taken to prevent similar badness from recurring


8) Implementing the lessons learned from the last iteration




If Agile is done well

If executed well, as I have seen and done it, Agile is a beautiful and inspiring thing to watch. Customers see a steady stream of well-thought-out features that meet their demands quickly. The business owners understand what will be delivered on what time scale and what to expect from their engineering and delivery teams. The teams execute with high quality and learn from their mistakes rapidly, executing with higher and higher precision and gaining the trust of the organization, investors, and customers.


If Agile is done poorly

Customers complain they get features that don’t make any sense to them. Developers run late and start scrimping on important quality aspects of delivery. The organization becomes disenchanted with engineering and delivery, people are fired, and there is great upheaval. Other parts of the organization disrespect engineering and don’t trust or understand what they are doing or why.

Why does Agile get done poorly?

There could be failures in any part of the process I outlined above but there are places I’ve seen in my years of software engineering management that are more likely to be the cause. The critical pieces that many organizations miss implementing well in Agile are:

1) Story definition 

The very first part is defining stories.  Frequently I have seen stories that aren’t stories at all.  They are 5 or 6 words that mean something to the group that wrote them, but others may be clueless or have a very different idea.

The basic simplest requirement of a story is that it be phrased something like:   “As an online order taker I can change the billing method in 20 seconds while the customer is on the phone.” 

The idea is that people understand whom the feature will affect, exactly what functionality is desired, and the metrics around the feature that make it acceptable. Instead, stories can be so vague that the result is highly unsatisfactory to everyone.

This is really not that hard, but it is frequently, very frequently messed up. The most common situation is that people are too vague in the story definition, but it is also frequently the case that the story definition is too specific, forcing developers or architects along a path that actually makes no sense or could be done better.

If the story is too specific it can lead to developers feeling like they have no creative choices; it is also very likely that whoever specified things in such detail had a mental model that is only one way to do something, and in many cases a very bad way to do it. So the story should leave open anything that isn’t critical to the function being achieved.

2) Estimation process

Frequently the estimation process is over-simplified. That isn’t always bad, but early on it is a big mistake. In the beginning, or at major points of transition of team members or change in functionality, the estimation process breaks and people make bad estimates. Estimation in the agile way is done so that people who are familiar with what took X amount of work in a previous iteration can say a new task will take some multiple of X. This is left purposely abstract, not looking at X as a number of hours but simply looking at the difficulty and comparing it to previous tasks accomplished. The velocity is then the number of these X-type tasks that can be done with the team.

The team can change.  People leave or go on vacation, they have other responsibilities.  The team may not have familiarity with the task or tools being used for doing this new task.  Over time a team should converge on a velocity but frequently the convergence  breaks when things change and the team needs to relearn its velocity.   When that happens you have to be very systematic at understanding why your velocity changed.  If your velocity doesn’t stabilize it means the team isn’t estimating well and that leads to distrust by the organization in the overall estimates putting the entire project at risk.

The estimation process is complicated and a person familiar with the technical aspects of this should be used to train and help the team get on track with good estimation practices or the team will not converge on a good velocity estimate and the iterations will be wildly off the mark all the time.
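The velocity arithmetic above can be sketched in a few lines (a toy illustration; the numbers and the three-iteration averaging window are my own, not from any particular Agile tool):

```java
import java.util.Arrays;

public class Velocity {
    // Forecast next-iteration velocity as the average of the last `window` iterations.
    static double forecast(int[] completedPerIteration, int window) {
        int n = completedPerIteration.length;
        int from = Math.max(0, n - window);
        return Arrays.stream(completedPerIteration, from, n).average().orElse(0);
    }

    public static void main(String[] args) {
        int[] history = {18, 22, 20, 24};   // relative points completed in past iterations
        System.out.println(forecast(history, 3));   // averages the last three: (22+20+24)/3
    }
}
```

A short window like this is one reason a team "re-learns" its velocity after a change: the oldest history stops being predictive, so only recent iterations should drive the estimate.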

3) Engaging the rest of the organization in the agile process

It is critical that the stakeholders are aware of why the stories were selected, why that number and scope were chosen, and that they agree. There are two reasons I’ve seen this fail most often. The first is when Engineering is not good at explaining these things or doesn’t really involve the stakeholders. When that happens, they are really operating in a waterfall methodology.

A key principle of Agile is that to keep the stakeholders supportive, they must be engaged and aware of the tradeoffs and why they exist. Otherwise, managers tend to assume that engineering is screwed up, lazy, or worse. It can also be that managers feel they are too busy to be engaged and refuse to provide time to understand. Then they get upset when things blow up and don’t happen at the pace they expect. This can be management’s fault.

They may simply be guilty of poor management or they could be purposely trying to avoid complicity.   Managers are smart.   If they listen and understand and then things blow up they can be held complicit in the failure.  If they were involved then they agreed to everything.   Political management will avoid being drawn into the process so they can later lay the finger of blame without it backfiring on them.

4) Retrospectives and transparency

Another key aspect of Agile is the retrospective and the process of transparency.   For the process to converge on a smooth machine that operates as people expect there has to be transparency and honesty.    This is hard in many organizations.   There is retribution for admitting any flaw.   

If management doesn’t understand the process and its failures, why the failures occurred and what corrective actions were taken they will become disenchanted.  

If engineering feels that management is overreactive and uses failures to punish individuals or the team then the morale of the team erodes. This is a really tricky thing and it is hard to be truly honest about what went wrong, to admit failure and it can be very hard to identify the reason accurately.  

Frequently, if the reason is too personal it can lead to firings, and if it isn’t personal enough it doesn’t solve the problem. To me this is a big management cultural issue.

I prefer smaller organizations where we know we hire the best people.  We therefore know that if an error is made anybody could have made it.    It is simply something someone or some group of people didn’t understand and they will do better. Nobody assumes that it was an egregious error that indicates incompetence. It may be the case that people are incompetent and it may be the case some people don’t belong in a particular job or doing a particular thing.  That is a management question as well.

The process of talking honestly about what went wrong, and trying to fix it, is critical to becoming a great company, to creating consistency and improvement. So this is the learning part of agile, and it is arguably the single most important aspect. Yet it is also in some ways the hardest.

Agile involves the entire organization

I think that you can see from the above short description that Agile is a process that requires participation of the entire organization.   It goes all the way to culture.   If the culture doesn’t support transparency you aren’t going to do the retrospective process very well and you aren’t going to learn.  

You may have some exceptional people who overcome the problems of others. That is frequently the case; however, if there are only one or two people who perform and everyone else is a dolt, I suspect you really have a major transparency problem in your organization.

There can be problems in management who must be engaged in the process for it to work.  There can be problems in engineering whose leadership tries to avoid communicating, communicates too much or can’t give the true picture of what’s going on.

Why does agile work better than waterfall?

Some will argue that agile and waterfall are similar ways of getting to the same endpoint.   Some will argue that the overhead of the process of agile slows down the development or leads to things being done wrongly from an architectural perspective.   This is wrong. 

Agile takes many small steps.  There is overhead in dealing with the stories, the retrospectives, the demos and meetings that seem required for each iteration therefore many people think that the agile process can slow things down.   This is a major misunderstanding of the problem of software development. 

Theoretically, if you want to achieve goal X using waterfall, you carefully analyze goal X and understand everything involved in implementing it. You understand the team, the work needed and the time. You specify it all out carefully, review it with experts, and you are good to go. The problem is that you frequently arrive at goal X late or not at all, because of the large commitment and the changes to goal X along the way; by the time you achieve goal X, the company may have decided it doesn’t even want goal X.

There are many reasons these projects failed and usually it was extremely damaging to individuals, groups and the whole organization when they failed.    Agile leads to arriving at point Y.  You did not know Y in advance.   It is where you all agreed during the process to end up but you didn’t know Y when you started.   It is unlikely point Y looks exactly like X. Agile says that because we are evaluating and changing what we are doing at each step we get to a different point than X, presumably a better point than X because of the constant adjustments.

We achieve Y with agile, not X. The difference is that at each point along the way to Y we chose a path that let us stop with something useful, so the whole effort is never a waste. We also learned a lot about our customer, about the requirements, about our team’s capabilities, and about other ways to achieve X that may be better. Agile says that this stepwise process leads to a better, faster outcome (point Y) than trying to reach X in one giant push.


Agile is a critical part of the modern enterprise, so everyone seems to assume it is a given. However, that doesn’t mean your organization really knows how to do agile well, or that it has the right management or culture to make agile work well.

I can say that when agile is working it is brilliant and better than any waterfall process I ever did. When agile first started there was a lot of faddism and people would brag how short their iterations were.  There were a lot of complaints from different parts of the organization.  

All kinds of things were tried and differences emerged, but over time we have come to understand this a lot better. I don’t think agile is well implemented in most corporations. I don’t think many people are schooled well enough in agile, in engineering let alone in all the other parts of management, yet it is critical to the smooth and efficient functioning of the modern corporation.

Other stories you may find interesting:

How to become an innovative company?

The Virtuous Circle is key to understanding how the world is changing – Mobile, Social, Cloud, Open Source, APIs, DevOps

CIO: Enterprise Refactoring and Technology Renewal: How do you become a profit center to the Enterprise?

Security in the cloud age

Inner Source, Open source for the enterprise

Open Source is just getting started, you have to adopt or die

Madhuka Udantha - AngularJS and Angular Directives

AngularJS is an open-source web application framework maintained by Google and a community of individual developers. It addresses many of the challenges encountered in developing single-page applications. Angular [1] is built around the belief that declarative programming should be used for building user interfaces and connecting software components, while imperative programming is better suited to defining an application's business logic. AngularJS extends HTML with new attributes, and the extended elements get dynamic content through two-way data binding, which allows for the automatic synchronization of models and views. AngularJS is an MVC-based framework.


  • ng-app : This directive defines and links an AngularJS application to HTML.
  • ng-model : This directive binds the values of AngularJS application data to HTML input controls.
  • ng-bind : Directive binds application data to the HTML view.

Here is a sample page that you can try to see these directives in action.

<!DOCTYPE html>
<html lang="en-US">
<head>
<script src=""></script>
</head>
<body>

<div ng-app="">
<p>Name : <input type="text" ng-model="name"></p>
<h1>Hello {{name}}</h1>
</div>

</body>
</html>

Here are some of the most commonly used directives.

  • ng-model-options: Allows tuning how model updates are done.

  • ng-class: Allows class attributes to be dynamically loaded.

  • ng-controller: Specifies a JavaScript controller class that evaluates HTML expressions.

  • ng-click: Allows you to specify custom behavior when an element is clicked.

  • ng-repeat: Instantiates an element once per item in a collection.

  • ng-list: Text input that converts between a delimited string and an array of strings (the default delimiter is a comma).

  • ng-show & ng-hide: Conditionally show or hide an element by setting the CSS display style.

  • ng-switch: Conditionally instantiates one template from a set of choices, depending on the value of a selection expression.

  • ng-change: Evaluates the given expression when the user changes the input.

  • ng-view: The base directive responsible for handling routes.

  • ng-form: Useful for nesting forms (validating a sub-group of controls).

  • ng-if: Basic if-statement directive.

  • ng-include: Fetches, compiles and includes an external HTML fragment.

  • ng-animate: A module that provides support for JavaScript, CSS3 transition and CSS3 keyframe animations.

  • ng-style: Allows you to set CSS styles on an HTML element conditionally.



sanjeewa malalgoda - How to use a CXF interceptor to pre-process requests to JAX-RS services (apply security for JAX-RS services)

When we use JAX-RS services, sometimes we need to add request pre-processors to the services. In this post I will discuss how we can use a CXF interceptor in JAX-RS services.
You may find more information from this url[]
package demo.jaxrs.server;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class CustomOutInterceptor extends AbstractPhaseInterceptor<Message> {

    public CustomOutInterceptor() {
        // We use the PRE_INVOKE phase, as we need to process the message before it hits the actual service
        super(Phase.PRE_INVOKE);
    }

    public void handleMessage(Message outMessage) {
        System.out.println("Token: " + ((TreeMap) outMessage.get(Message.PROTOCOL_HEADERS)).get("Authorization"));
        // Do your processing with the Authorization transport header.
    }
}
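As a sketch of what that processing might look like (this helper is my illustration, not part of the original sample), you could extract the Bearer token from the header value and reject the request when it is missing:

```java
public class AuthHeaderSketch {
    // Returns the bearer token, or null when the header is missing or not a Bearer scheme.
    static String extractBearerToken(String authorizationHeader) {
        String prefix = "Bearer ";
        if (authorizationHeader == null || !authorizationHeader.startsWith(prefix)) {
            return null;
        }
        String token = authorizationHeader.substring(prefix.length()).trim();
        return token.isEmpty() ? null : token;
    }
}
```

Inside handleMessage you would call this on the header value and, for example, throw a Fault when it returns null so the request never reaches the service.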

Then we need to register Interceptor by adding entry to webapp/WEB-INF/cxf-servlet.xml file. Then it will execute before request dispatch to actual service.

<beans xmlns="">
    <jaxrs:server id="APIService" address="/">
        <jaxrs:serviceBeans>
            <ref bean="serviceBean"/>
        </jaxrs:serviceBeans>
        <jaxrs:inInterceptors>
            <ref bean="testInterceptor"/>
        </jaxrs:inInterceptors>
    </jaxrs:server>

    <bean id="testInterceptor" class="demo.jaxrs.server.CustomOutInterceptor" />
    <bean id="serviceBean" class="demo.jaxrs.server.APIService"/>
</beans>
Then compile the web app and deploy it in the application server. Once you send a request with an Authorization header, you will notice that it is printed in the server logs.

See the following sample curl request:
curl -k -v -H "Authorization: Bearer d5701a8ed6f677f215fa4d65c05e361"

And server logs for request
Token: [Bearer d5701a8ed6f677f215fa4d65c05e361]
API Service -- invoking getAPI, API id is: qqqq-1.0.0-admin

Srinath Perera - Embedding WSO2 Siddhi from Java

Siddhi is the CEP engine that powers WSO2 CEP. WSO2 CEP is the server, which accepts messages over the network via a long list of protocols such as Thrift, HTTP/JSON, JMS, Kafka, and WebSocket.

Siddhi, in contrast, is a Java library. That means you can use it from a Java class or a Java main method. I personally do this to debug CEP queries before putting them into WSO2 CEP. The following describes how to do it; you can also embed it to create your own apps.

First, add the following jars to the class path. ( You can find them in the WSO2 CEP pack, and from ). The jar versions might change with new packs, but whatever is in the same CEP pack will work.
  1. siddhi-api-2.1.0-wso2v1.jar (from CEP_PACK/repository/components/plugins/) 
  2. antlr-runtime-3.4.jar (from CEP_PACK/repository/components/plugins/) 
  3. log4j-1.2.14.jar ( download from
  4. siddhi-query-2.1.0-wso2v1.jar (from CEP_PACK/repository/components/plugins/) 
  5. siddhi-core-2.1.0-wso2v1.jar (from CEP_PACK/repository/components/plugins/) 
Now you can use Siddhi using the following code. You define a Siddhi engine, add queries, register callbacks to receive results, and send events.

SiddhiManager siddhiManager = new SiddhiManager();

//define stream
siddhiManager.defineStream("define stream StockQuoteStream (symbol string, value double, time long, count long);");

//add CEP queries
siddhiManager.addQuery("from StockQuoteStream[value>20] insert into HighValueQuotes;");

//add callbacks to see results
siddhiManager.addCallback("HighValueQuotes", new StreamCallback() {
    public void receive(Event[] events) {
        System.out.println(java.util.Arrays.toString(events));
    }
});

//send events in to Siddhi
InputHandler inputHandler = siddhiManager.getInputHandler("StockQuoteStream");
inputHandler.send(new Object[]{"IBM", 34.0, System.currentTimeMillis(), 10});

The events you send in must agree with the streams you have defined. For example, StockQuoteStream events must contain a string, a double, a long, and a long, as per the stream definition.
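To make the filter semantics concrete, here is a plain-Java equivalent of the [value>20] predicate applied to event tuples shaped like the stream definition (a sketch only; the real matching happens inside the Siddhi engine):

```java
import java.util.ArrayList;
import java.util.List;

public class FilterSketch {
    // Each event mirrors StockQuoteStream: {symbol string, value double, time long, count long}
    static List<Object[]> highValueQuotes(List<Object[]> events) {
        List<Object[]> matched = new ArrayList<>();
        for (Object[] e : events) {
            if ((Double) e[1] > 20) {   // the [value>20] filter on the second attribute
                matched.add(e);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<Object[]> in = new ArrayList<>();
        in.add(new Object[]{"IBM", 34.0, 0L, 10L});
        in.add(new Object[]{"ACME", 12.5, 0L, 3L});
        System.out.println(highValueQuotes(in).size());   // prints 1
    }
}
```

Feeding it the IBM and ACME quotes keeps only the IBM one, which is exactly what would arrive at the HighValueQuotes callback.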

See my earlier blog for example of more queries.

Please see  [1] and [2] for more information about the Siddhi query language. If you create a complicated query, you can check intermediate results by adding callbacks to intermediate streams.

Enjoy! Reach us via the wso2 tag at stackoverflow if you have any questions, or send a mail to


Lali Devamanthri - Inspircd security update

Several problems were discovered in inspircd, an IRC daemon:

InspIRCd is a modular Internet Relay Chat (IRC) server written in C++ for Linux, BSD, Windows and Mac OS X systems which was created from scratch to be stable, modern and lightweight.

As InspIRCd is one of the few IRC servers written from scratch, it avoids a number of design flaws and performance issues that plague other more established projects, such as UnrealIRCd, while providing the same level of feature parity.

– an incomplete patch for CVE-2012-1836 failed to adequately resolve
the problem where maliciously crafted DNS requests could lead to
remote code execution through a heap-based buffer overflow.

– the incorrect processing of specific DNS packets could trigger an
infinite loop, thus resulting in a denial of service.

For the stable distribution (wheezy), this problem has been fixed in
version 2.0.5-1+deb7u1.

For the upcoming stable distribution (jessie) and unstable
distribution (sid), this problem has been fixed in version 2.0.16-1.

Further information about Debian Security Advisories, how to apply
these updates to your system and frequently asked questions can be
found at:

Danushka Fernando - Tips to write an Enterprise Application on the WSO2 Platform

Enterprise applications, or business applications, are complex, scalable and distributed. They may be deployed on corporate networks, intranets or the Internet. Usually they are data centric and user-friendly, and they must meet certain security, administration and maintenance requirements.
Typically enterprise applications are large: they are multi-user, run on clustered environments, contain a large number of components, manipulate large amounts of data, and may use parallel processing and distributed resources. They try to meet business requirements while also providing robust maintenance, monitoring and administration.

Here are some features and attributes that may be included in an enterprise application.

  • Complex business logic.
  • Read / Write data to / from databases.
  • Distributed Computing.
  • Message Oriented Middleware.
  • Directory and naming services
  • Security
  • User Interfaces (Web and / or Desktop)
  • Integration of Other systems
  • Administration and Maintenance
  • High availability
  • High integrity
  • High mean time between failure
  • Do not lose or corrupt data in failures.

The advantage of using the WSO2 platform to develop and deploy an enterprise application is that most of the above are supported by the WSO2 platform itself. So in this blog entry I am going to provide some tips for developing and deploying an enterprise application on the WSO2 platform.

Read / Write data to / from databases.

In the WSO2 platform, the convention is to access databases through datasources. The developer can use WSO2 SS (Storage Server) [1] to create the databases [2]. So the developer of the application can create the needed database and, if required, add data to it through the console provided by WSO2 SS, which is explained in [2]. For security reasons we can restrict developers to using the MySQL instances only through WSO2 SS by restricting access outside the network.

After creating a database, the next step is to create a datasource. For this purpose the developer can create a datasource in WSO2 AS (Application Server) [3]; [4] explains how to add and manage data sources. As explained in [5], the developer can expose the created data source as a JNDI resource and use the data source(s) in the application code as described there.
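A minimal sketch of such a lookup, assuming the datasource was exposed under the hypothetical JNDI name jdbc/TestDS (use whatever name you configured in the AS console):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    // Web applications resolve JNDI resources under the java:comp/env namespace by convention.
    static String jndiPath(String name) {
        return "java:comp/env/" + name;
    }

    // Looks up the datasource from application code (requires a running container).
    public static DataSource lookup(String name) throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(jndiPath(name));
    }
}
```

From the returned DataSource you call getConnection() as usual; the container, not the application, owns the pool configuration.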

Store Configuration, Endpoints in WSO2 Registry.

The developer can also store configuration and endpoints in the registry provided by each WSO2 product. The registry has three parts.

  • Governance - Shared across the whole platform
  • Config - Shared across the current cluster
  • Local - Only available to current instance

Normally the developer needs to store data in the governance registry if that data needs to be accessed by other WSO2 products as well. Otherwise he/she should store it in the config registry.
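As a small sketch of how the three spaces map onto registry paths (the /_system roots are the standard Carbon locations; the endpoints sub-path is my own example, not a WSO2 convention):

```java
public class RegistrySpaces {
    // Root collections for the three registry spaces in a WSO2 Carbon instance.
    static String basePath(String scope) {
        switch (scope) {
            case "governance": return "/_system/governance";
            case "config":     return "/_system/config";
            case "local":      return "/_system/local";
            default: throw new IllegalArgumentException("unknown scope: " + scope);
        }
    }

    // Hypothetical helper: place a named endpoint under the chosen space.
    static String endpointPath(String scope, String name) {
        return basePath(scope) + "/endpoints/" + name;
    }
}
```

Picking the base path is effectively the sharing decision: anything under /_system/governance is visible platform-wide, /_system/config only within the cluster.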

Use Distributed Computing and Message Oriented Middleware provided by WSO2

WSO2 ESB can be used to add distributed computing to the application. [6] and [7] explain how the developer can use WSO2 ESB functionalities to add distributed computing to his/her application.
WSO2 ESB also supports JMS (Java Message Service) [8], a widely used API in Java-based message oriented middleware. It facilitates loosely coupled, reliable, and asynchronous communication between different components of a distributed application.

Directory And Naming Services Provided by WSO2 Platform

All WSO2 products can be used with LDAP, AD or any other directory or naming service, and the WSO2 Carbon APIs let the developer perform operations against these services. This is handled using the User Store Managers implemented in WSO2 products [9]. Anyone who uses WSO2 products can extend these User Store Managers to map to their directory structure. [10] provides a sample of how to use these Carbon APIs inside an application to access directory services.

Exposing APIs and Services

The web app developer can expose some APIs / web services from his / her application and publish them via WSO2 API Manager [21] so everyone can access them. In this way the application can be integrated into other systems, and the application can use existing APIs without implementing them again.

There is another commonly used feature in the WSO2 platform: the data sources created using WSO2 AS / WSO2 DSS can be exposed as data services, and these data services can be exposed as APIs from WSO2 API Manager [22].

The advantage of using WSO2 API Manager in this case is mainly security. WSO2 API Manager provides OAuth 2.0 based security.


Security

We can secure the application by providing authentication and authorization, and secure the deployment by applying Java security and secure vaults. Deployed services can be secured using Apache Rampart [11] [12].
To provide authentication and authorization in the application, the developer can use the functionalities provided by WSO2 IS (Identity Server) [13]. Commonly, SAML SSO is used to provide authentication. [14] explains how SSO works, how to configure SAML SSO, and so on.

For authorization purposes developer can use the Carbon APIs provided in WSO2 products which is described in [15].

The Java Security Manager can be used with WSO2 products so that the deployment is secured by the policy file. As explained in [16], Carbon Secure Vaults can be used to store passwords in a secure way.

Develop an Application to deploy on Application Server

[20] provides a user guide to developing and deploying a Java application on WSO2 AS. This documentation discusses class loading, session replication, writing Java, JAX-RS, JAX-WS, Jaggery and Spring applications, service development, deployment and management, usage of Java EE, and so on.

Administration, Maintenance and Monitoring

WSO2 BAM (Business Activity Monitor) [17] can be used to collect logs and create dashboards that let people monitor the status of the system. [18] explains how data can be aggregated, processed and presented with WSO2 BAM.


WSO2 products, which are based on Apache Axis2, can be clustered. [19] provides clustering tips and explains how to cluster WSO2 products. Through clustering, high availability can be achieved in the system.



Danushka Fernando - Dell Vostro 1520 boot problem: recovering and restoring Grub from a live Ubuntu CD

In the Vostro 1520 Dell laptop series, when Ubuntu is installed alongside Windows (dual boot), a problem in the BIOS can corrupt the MBR; after that you will be unable to see the Grub loader at startup and you won't be able to load either Windows or Ubuntu.
This happens only because of the corruption of the MBR record or partition table.
So by reinstalling grub to the partition you can resolve this problem. The steps are given below.

  1. First you have to boot from the Ubuntu Live CD
  2. Then you have to find the sd number of the partition that grub is installed in. (If you have a separate /boot partition it should be the boot partition's sd number; otherwise it should be the Ubuntu file system's sd number.) Here I assume that sd number is x.
  3. Open the terminal and execute the following commands
  4. sudo mkdir /mnt/root -- Set up a folder before changing root.
  5. sudo mount -t ext3 /dev/sdax /mnt/root -- Mount the partition into the folder just created.
  6. Mount and bind the necessary places using the following commands
  7. sudo mount -t proc none /mnt/root/proc
  8. sudo mount -o bind /dev /mnt/root/dev
  9. sudo chroot /mnt/root /bin/bash -- Now change the root to the mounted folder.
  10. grub-install /dev/sdax -- Install grub to the specified device.
  11. Now you are done. Restart and check. You will see that grub is loading.

Note: when you are at steps 7 and 8, if they fail with errors saying the directories do not exist, create the directories using the following commands before step 7.
  1. sudo mkdir /mnt/root/proc
  2. sudo mkdir /mnt/root/dev

Danushka Fernando - Linux Mint updates bug found

Bug Description : Failed to fetch (malformed Release file?) Unable to find expected entry multiverse/binary-i386/Packages in Meta-index file Some index files failed to download, they have been ignored, or old ones used instead.

Reasoning and Solution : There is no multiverse section in the Mint repository... please edit your /etc/apt/sources.list and remove the "multiverse" keyword from the lines referring to the Mint repositories. 

How to Fix it:
This command will open the sources list.
~$ sudo gedit /etc/apt/sources.list
This is my file before editing and after editing

Before :-

deb isadora main upstream import multiverse import
deb lucid main restricted universe multiverse
deb lucid-updates main restricted universe multiverse
deb lucid-security main restricted universe multiverse
deb lucid partner
deb lucid free non-free

# deb lucid-getdeb apps
# deb lucid-getdeb games


After :-

deb isadora main upstream import
deb lucid main restricted universe
deb lucid-updates main restricted universe
deb lucid-security main restricted universe
deb lucid partner
deb lucid free non-free

# deb lucid-getdeb apps
# deb lucid-getdeb games

Danushka Fernando - How to overcome the problem of a Linux kernel image not booting properly

First check this post and come to this section

This is the part of the fstab file that needed in this matter.

# / was on /dev/sdax during installation
UUID=xxxx / ext3 relatime,errors=remount-ro a b
# swap was on /dev/sday during installation
UUID=yyyy none swap sw c d
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8  0 0

x should be the partition number of the Ubuntu installation
y should be the partition number of the swap
xxxx - UUID of Ubuntu partition
yyyy - UUID of swap
a - dump            (0 is my recommendation)
c - dump            (0 is my recommendation)     
b - pass            (1 is my recommendation)
d - pass            (0 is my recommendation)

Danushka Fernando - How to fix the Grub loader when it's not loading at boot

When you install or restore Microsoft Windows after a Linux installation, you won't see the grub loader at boot.

This is not an error of grub or Linux, but you cannot load Linux in this situation.

There are few ways of fixing this issue. One way is using super grub disk.

Here I will tell you an another way of fixing this issue.

You just have to boot using any type of Linux CD/DVD

Now Open the terminal and type

sudo grub

and type the password to enter the grub shell.

Now you have to find out few things.

First find the hd number of the hard disk.

This is normally zero since we are using one hard disk only.

But if we are using more of them we have to find out the hd number of the hard disk.

Then find the sd number of the partition which contains grub.

If you did not create a separate /boot partition at the Linux installation, this is the same as the Linux partition.

If you are using a Debian-based distribution, you can start the application called Partition Editor and find out these things.

Here I will take hd number is x and sd number is y.

After typing sudo grub and entering the password, you will be switched to a new prompt (the grub prompt) like the one below.


Now Enter

grub> root (hdx,y)

grub> setup (hdx)

grub> quit

Note that grub counts disks and partitions from zero, so the first partition of the first disk is (hd0,0).

Now Restart the machine and you will find the grub loader is loading.

Danushka Fernando - Find the menu list in the latest Grub version

I have noted that in the latest version of Grub, the file you have to edit to customize the Grub view or to fix errors has changed. It no longer lives in the menu.lst file; it is now the grub.cfg file in the same location.

Danushka Fernando - How to edit the file /etc/fstab

This is what my fstab file looks like; it is placed inside the etc folder in the Linux folder structure.

You can overcome some problems by editing this file.

# /etc/fstab: static file system information.
# Use 'vol_id --uuid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
proc /proc proc defaults 0 0
# / was on /dev/sda8 during installation
UUID=64839f40-b2f1-412f-ae1a-c5a213ba449a / ext3 relatime,errors=remount-ro 0 1
# /home was on /dev/sda7 during installation
UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2
# /boot was on /dev/sda6 during installation
UUID=faaab6f9-539b-4f1a-82fc-b5a18887d28d /boot ext3 relatime,errors=remount-ro 0 3
# swap was on /dev/sda5 during installation
UUID=ab5f806d-8f4e-42b9-b67b-5618e9715585 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

Now we can study a line and what it contains.

# /home was on /dev/sda7 during installation

UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2

This line contains following parts

  1. # /home was on /dev/sda7 during installation
  2. UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf
  3. /home
  4. ext3
  5. relatime,errors=remount-ro 0 2

Now let me consider each one of these.

Actually I have separate home and boot partitions.

When I installed Linux for the second time I had to edit this file to use them as the actual home and boot partitions.

Part 1
This says where the partition was at installation time. In my case the partition was /dev/sda7.

Part 2
First of all launch the Partition Editor software.

You can find the UUID of any partition by right-clicking it and selecting Information in the Partition Editor.

Part 3
This part says where to mount the partition. In this example, the sda7 partition is mounted as /home.

Part 4
This says which file system the partition contains. In this example the home partition uses the ext3 file format.

Part 5
I don't know the exact meaning of this part, but it contains
options       dump  pass

options can have values such as errors=remount-ro or defaults.
errors=remount-ro is normally used only for the root partition,
but if you want you can use that option for any partition.
The next two numbers represent dump and pass.
They can have the following variations:

0 0 for /proc and swap
0 1 for root
0 2 for others

You can see that the last two lines are a bit different because they are for the swap partition and the cdrom. Don't edit them unless you really need to and you know what to do there.

Most probably you will need to restart twice after editing this file to make it work. Don't be afraid if it doesn't work the first time.
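If you script against this file, the six whitespace-separated fields of each fstab line can be pulled apart easily. Here is a small Python sketch of that (the field names follow fstab(5); the parser is just an illustration, not a complete fstab reader):

```python
# Sketch: split an /etc/fstab line into its six fields (see fstab(5)).
FIELDS = ["device", "mount_point", "fs_type", "options", "dump", "pass"]

def parse_fstab_line(line):
    line = line.strip()
    if not line or line.startswith("#"):
        return None  # skip blank lines and comments
    return dict(zip(FIELDS, line.split()))

entry = parse_fstab_line(
    "UUID=2ab422e9-b37d-4e7e-966f-6ca7d1d081cf /home ext3 relatime,errors=remount-ro 0 2"
)
print(entry["mount_point"], entry["fs_type"], entry["pass"])  # /home ext3 2
```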

Danushka FernandoHow to mount a partition permanently somewhere in the folder structure

First check this post and come to this section

You need to add the following line to the fstab file.

# / was on /dev/sdax during installation
UUID=xxxx /media/xxxxxxxxx xxx relatime,errors=remount-ro a b

xxxx - UUID of the partition
x - number of the partition you needed to mount
xxx - file format of the partition
xxxxxxxxx - Name of the Partition**
a - dump
b - pass

** The name of the partition should be the same as the real name of the partition (the name you can see in Computer, equivalent to My Computer in Microsoft Windows).

An example line for this is

# /media/MyDisk was on /dev/sda8 during installation
UUID=64839f40-b2f1-412f-ae1a-c5a213ba449a /media/MyDisk ntfs relatime,defaults 0 2

It is recommended to use a name without spaces here, because spaces can cause trouble.

Danushka FernandoCustomize your grub loader by just editing a file

You can find a file called menu.lst in the path /boot/grub. By editing this file you can customize your Grub loader easily.

First we can set the timeout of the grub menu. Go to the section ## timeout sec and set the timeout value to a number of seconds you like.

Then you can customize the view by changing the colors, under # Pretty colors.

Then comes the most important part of the Grub menu list. Find the kernel list in the file; normally it is at the bottom. A sample section for one kernel is given below.

title        Linux Mint 7 Gloria, kernel 2.6.28-16-generic
root       faaab6f9-539b-4f1a-82fc-b5a18887d28d
kernel    /vmlinuz-2.6.28-16-generic root=UUID=6a2a464d-bf70-4b4a-8c8e-8dad3ccafe2c ro quiet splash
initrd      /initrd.img-2.6.28-16-generic

You can always change the title as you wish. root is the UUID of the boot partition. If you don't have a separate boot partition then it is the same as the partition containing the Linux kernel. The generalized form is given below.

title        <Title of the kernel / os>
root      <UUID of the boot partition>
kernel    /vmlinuz-2.6.28-16-generic root=UUID=<UUID of the Linux partition> ro quiet splash
initrd      /initrd.img-2.6.28-16-generic

Other details are about the kernel versions, so don't try to edit them unless it's really needed and you know exactly what you are doing.

After finishing the editing, reboot and see what happens.

sanjeewa malalgodaHow to generate custom error message with custom http status code for throttled out messages in WSO2 API Manager.

In this post we will discuss how we can override the HTTP status code of the throttled-out message. The APIThrottleHandler.handleThrottleOut method executes the _throttle_out_handler.xml sequence if it exists. If you need to send a custom message with a custom HTTP status code, you can execute an additional sequence which generates a new error message. There you can override the message body, HTTP status code, etc.

Create convert.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="" name="convert">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="">
                <am:code>$1</am:code>
                <am:type>Status report</am:type>
                <am:message>Runtime Error</am:message>
                <am:description>$2</am:description>
            </am:fault>
        </format>
        <args>
            <arg evaluator="xml" expression="$ctx:ERROR_CODE"/>
            <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
        </args>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="555" scope="axis2"/>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="ContentType" scope="axis2" action="remove"/>
    <property name="Authorization" scope="transport" action="remove"/>
    <property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
    <property name="Host" scope="transport" action="remove"/>
    <property name="Accept" scope="transport" action="remove"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <property name="messageType" value="application/json" scope="axis2"/>
</sequence>

Then copy it to the wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences directory, or use the source view to add it to the Synapse configuration.
If it is deployed properly you will see the following message in the system logs. Please check the logs and see whether there are any issues in the deployment process.

[2015-04-13 09:17:38,885]  INFO - SequenceDeployer Sequence named 'convert' has been deployed from file : /home/sanjeewa/work/support/boeing/wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences/convert.xml

Now that the sequence is deployed properly, we can use it in the _throttle_out_handler_ sequence. Add it as follows.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="" name="_throttle_out_handler_">
    <sequence key="_build_"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <sequence key="convert"/>
</sequence>

Once the _throttle_out_handler_ sequence is deployed properly you will see the following message in the carbon logs. Check the carbon console and see whether there are any errors in the deployment.

[2015-04-13 09:22:40,106]  INFO - SequenceDeployer Sequence: _throttle_out_handler_ has been updated from the file: /home/sanjeewa/work/support/boeing/wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences/_throttle_out_handler_.xml

Then try to invoke the API until requests get throttled out. You will see the following response.

curl -v -H "Authorization: Bearer 7f855a7d70aed820a78367c362385c86"

* About to connect() to port 8280 (#0)
*   Trying
* Adding handle: conn: 0x17a2db0
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x17a2db0) send_pipe: 1, recv_pipe: 0
* Connected to ( port 8280 (#0)
> GET /testam/sanjeewa/1.0.0 HTTP/1.1
> User-Agent: curl/7.32.0
> Host:
> Accept: */*
> Authorization: Bearer 7f855a7d70aed820a78367c362385c86
< HTTP/1.1 555
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< Date: Mon, 13 Apr 2015 05:30:12 GMT
* Server WSO2-PassThrough-HTTP is not blacklisted
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
* Connection #0 to host left intact
{"fault":{"code":"900800","type":"Status report","message":"Runtime Error","description":"Message throttled out"}}
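On the client side you can detect this custom throttling response by checking for the 555 status code and parsing the JSON fault body shown above. A minimal Python sketch (the body string below is the exact response from above; a real client would of course read the status code and body from the HTTP response):

```python
import json

# The JSON fault body returned by the custom 'convert' sequence above.
body = '{"fault":{"code":"900800","type":"Status report","message":"Runtime Error","description":"Message throttled out"}}'

status_code = 555  # set by the HTTP_SC property in convert.xml

if status_code == 555:
    fault = json.loads(body)["fault"]
    print("Throttled: %s (code %s)" % (fault["description"], fault["code"]))
```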

Chamila WijayarathnaSinMin - A Corpus for Sinhala Language

In this blog, I thought of giving a short description of the Research and Development project I did in my final year at the Department of Computer Science and Engineering, University of Moratuwa. I did this project in a 4-member group with Dimuthu Upeksha, Maduranga Siriwardena and Lahiru Lasandun. The project was supervised by Dr. Chinthana Wimalasuriya, Mr. Nisansa De Silva and Prof. Gihan Dias.
A language corpus can be identified as a collection of authentic texts that are stored electronically. It contains different language patterns in different genres, time periods and social variants. Quality of a language corpus depends on the area that has been covered by it.
A corpus helps people to study a language and identify its patterns. There are many other usages as well:

  1. Implementing translators, spell checkers and grammar checkers.
  2. Identifying lexical and grammatical features of a language.
  3. Identifying varieties of language by context of usage and time.
  4. Retrieving statistical details of a language.
  5. Providing backend support for tools like OCR, POS Tagger, etc.

Following is the overall architecture of the corpus we developed.

The source code of each component we developed is available on GitHub at
Other than developing the corpus, during this project we did a performance analysis of the most effective data storage solutions for implementing a language corpus. The results of this analysis were published at the 11th BDAS Conference ( From the results we got, we used Cassandra as our primary data storage solution and Oracle as backup storage.
More details about the corpus can be found in the following resources.
If you are interested in identifying interesting patterns in the Sinhala language, you can access the user interface of our corpus from the following link.
Other than the user interface, we have exposed a REST API for people who are interested in using the information created in the corpus in their applications. Details about accessing the API are available at the Confluence site I mentioned above.

Charitha KankanamgeMotivation

An inspiring quote I heard from an inspiring person. 

Lali DevamanthriIf you’re not the buyer then you’re the product ~ VPN services

Virtual private networks, or VPNs, have been around for a while but they still aren’t much of a mainstream service. The only ones who know about VPNs are those who are sufficiently tech-literate, and of the ones who do know, only a fraction actually use them. That’s a shame because VPNs are a fantastic bit of technology that deserve more attention.

How is the VPN useful?

  • In a business environment, VPNs can be used for telecommuting by allowing employees to “remote in” from home and remain as part of the network.
  • Users can redirect their Internet traffic through VPNs, which provides an added layer of privacy and anonymity. It’s similar to the concept of proxy networks, but VPNs offer many more features than simple web proxies do.
  • When set up properly, VPNs can be used to access region-blocked web content. The website will see that your connection is coming from the VPN’s location instead of your own, but the VPN will forward the data to you.
  • VPNs can be used by gamers to simulate a local area network over the public Internet.

Most VPN providers charge less than $10 a month, which we think is a great deal for the level of security they provide, but for this review we have capped the price at $7/mo, with the lowest costing just under $3/month! Of course, to get these prices you will have to sign up for a longer period of time, but luckily most have a free trial that allows you to test their services.


CactusVPN Logo

Positives: great client, Moldovan based, blazing fast

Negatives: only 4 countries, no BitCoins

CactusVPN is a fairly new Moldovan company that, unsurprisingly, ranks high in many of our lists. They have a great set of software, low prices and some of the fastest speeds we have seen from a VPN, all without logs.

Using a free VPN is a big risk. If you aren't paying for it, how is the VPN paying its costs? I highly recommend you use a cheap VPN instead.


CyberGhost Logo

Positives: no logs, based in Romania, good client, accepts Bitcoin, allows P2P(on paid plans only) , shared IPs

Negatives: multiple simultaneous connections only allowed on most expensive plan, speed cap

CyberGhost is a large Romanian company that has made fantastic developments lately. With regard to security they are absolutely top notch, going as far as deleting your payment details after the payment has been processed; they are working on some security technologies themselves and are willing to support promising security start-ups too! Not only is their security great, but other areas of their service, such as the client and support, are also fantastic. A CyberGhost 5 Premium VPN monthly ongoing subscription is $6.99.

Shelan PereraHow to make your presentation effective - 10/20/30 rule

Have you ever presented in front of an audience? Then you may have used slides to make your talk better.

But did it really improve or destroy your talk?

Let's look at a few things you would not like to see if you were in the audience, because the audience is the best place from which to judge a presentation.

Here are a few common errors that I have observed in presentations, which you should avoid.

1) Too much text in the presentation

How often have we seen the entire Wikipedia, or at least close to it, in slides? Slides are not meant to be read. Believe me, if you let the audience read slides you lose their attention. You lose some of the audience, and once they are out of your story it is not easy to grab them back. A maximum of 10 words would be ideal. If you can find a single picture which gives the idea at once, fantastic: use that.

2) Too many slides in the presentation

Do not try to overload the presentation. If you have lots of slides you are at risk of going over your time limit. Be careful not to give too many ideas in your presentation. Usually I follow a rule of 3, which means pitch 3 key ideas to the audience and make sure they remember those 3 even after the presentation.

3) Slide formatting.

If you have text in the slides, the text size and formatting matter a lot. Text should look great on the presentation screen, not only on your laptop/desktop screen. Use fonts as big as possible and make sure everyone can see without any effort. Use a proper standard font which looks neat, well spaced and easy on the eyes.

The following video describes a good framework to optimize your slides: the 10/20/30 rule.

1) 10 -  slides
2) 20 - minutes
3) 30 - minimum font size.

Just watch this short video and you will understand it better.

Madhuka UdanthaReact.JS and Virtual DOM

React is an open source UI library developed at Facebook to facilitate the creation of interactive and reusable UI components. Not only does it perform on the client side, it can also be rendered server side, and the two can work together interoperably. React has pluggable back-ends so it can be used to target the DOM, HTML, canvas, SVG and other formats. It also uses a concept called the virtual DOM that selectively renders subtrees of nodes based upon state changes. It encourages the creation of reusable UI components which present data that changes over time. An important difference from frameworks like AngularJS, which uses a two-way data binding model, is that React features a one-way data binding model.

Here are four things about React.js that make it strong.

  • Virtual DOM
  • Server rendering
  • Descriptive warnings
  • Custom events system


Virtual DOM

One of the most important concepts of this project is the virtual DOM. The virtual DOM is used for efficient re-rendering of the DOM by using a diff algorithm that only re-renders the components that changed. This in turn makes the library ultra fast. React keeps two copies of the virtual DOM:

  • The original
  • The updated version

These two virtual DOM trees are passed into a React function that diffs them, and a stream of DOM operations is returned.
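As a toy illustration of that idea (this is not React's actual diff algorithm, just a sketch of the concept), we can diff two small trees and collect a stream of operations:

```python
# Toy virtual-DOM diff sketch: nodes are (tag, text, children) tuples,
# and the diff emits patch operations only for the parts that changed.
def diff(old, new, path="root"):
    ops = []
    if old is None:
        ops.append(("CREATE", path, new))
    elif new is None:
        ops.append(("REMOVE", path))
    elif old[0] != new[0]:
        ops.append(("REPLACE", path, new))  # different tag: rebuild this subtree
    else:
        if old[1] != new[1]:
            ops.append(("SET_TEXT", path, new[1]))
        old_kids, new_kids = old[2], new[2]
        for i in range(max(len(old_kids), len(new_kids))):
            o = old_kids[i] if i < len(old_kids) else None
            n = new_kids[i] if i < len(new_kids) else None
            ops.extend(diff(o, n, "%s/%d" % (path, i)))
    return ops

old = ("div", "", [("p", "hello", [])])
new = ("div", "", [("p", "hi", []), ("span", "!", [])])
print(diff(old, new))
# [('SET_TEXT', 'root/0', 'hi'), ('CREATE', 'root/1', ('span', '!', []))]
```

Only the changed text node and the newly added node produce operations; the unchanged div is never touched.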


Let's dive a little deeper

The Component is the primary building block in React. You can compose the UI of your application by assembling a tree of Components. Each Component provides an implementation for the render() method, where it creates the intermediate-DOM. Calling React.renderComponent() on the root Component results in recursively going down the Component-tree and building up the intermediate-DOM. The intermediate-DOM is then converted into the real HTML DOM. React provides a convenient XML-based extension to JavaScript, called JSX, to build the component tree as a set of XML nodes. JSX also simplifies the association of event-handlers and properties as xml attributes. Since JSX is an extension language, there is a tool to generate the final JavaScript. We can either use the in-browser JSXTransformer or use the command line tool, called react-tools (installed via NPM)[1].

Server Rendering

Another great feature is that it can also render on the server using Node.js, so you can use the same knowledge you have gained on both the client and the server. This has major performance and SEO benefits; it is a bit like Jaggery.js. Many developers use React.js to render a first, static version of the page on the server, which is faster than doing so on the client and is also SEO-friendly. Then they enable fast user interactions and UI updates by using React.js on the client side.



Madhuka UdanthaDensity-based clustering algorithm (DBSCAN) and Implementation

Density-based spatial clustering of applications with noise (DBSCAN)[1] is a density-based clustering algorithm. Given a set of points in some space, it groups together points that are closely packed (points with many nearby neighbors), marking as outliers points that lie alone in low-density regions. In 2014, the algorithm was awarded the test of time award at the leading data mining conference, KDD.

Density Definition

  • ε (eps) Neighborhood – Objects within a radius of ε from an object
  • “High density” - the ε-neighborhood of an object contains at least MinPts objects



Core, Border & Outlier

A point is a core point if it has more than a specified number of points (MinPts) within Eps. These are points that are at the interior of a cluster.

A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point.



Reachability is not a symmetric relation since, by definition, no point may be reachable from a non-core point, regardless of distance. Two points p and q are density-connected if there is a point o such that both p and q are density-reachable from o. Density-connectedness is symmetric.

Let's check with an example.

– A point p is directly density-reachable from p2
– p2 is directly density-reachable from p1
– p1 is directly density-reachable from q
– p <- p2 <- p1 <- q form a chain

p is (indirectly) density-reachable from q
q is not density-reachable from p
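These definitions are easy to verify with a naive O(n²) sketch (just an illustration of the definitions above, not an efficient implementation): a point is core if its eps-neighborhood, including itself, holds at least MinPts points; a non-core point that falls in some core point's neighborhood is a border point; everything else is noise.

```python
def classify(points, eps, min_pts):
    """Label each 2-D point 'core', 'border' or 'noise' per the DBSCAN definitions."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    hoods = [neighbors(i) for i in range(len(points))]          # eps-neighborhoods
    core = {i for i, h in enumerate(hoods) if len(h) >= min_pts}  # core points
    labels = []
    for i in range(len(points)):
        if i in core:
            labels.append("core")
        elif any(i in hoods[c] for c in core):
            labels.append("border")  # non-core but inside a core point's neighborhood
        else:
            labels.append("noise")
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (2.2, 0), (10, 10)]
print(classify(pts, eps=1.5, min_pts=4))
# ['core', 'core', 'core', 'core', 'border', 'noise']
```

The four tightly packed points are core, the point hanging off the cluster edge is a border point, and the distant point is noise.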



DBSCAN(D, eps, MinPts)
   C = 0
   for each unvisited point P in dataset D
      mark P as visited
      NeighborPts = regionQuery(P, eps)
      if sizeof(NeighborPts) < MinPts
         mark P as NOISE
      else
         C = next cluster
         expandCluster(P, NeighborPts, C, eps, MinPts)
expandCluster(P, NeighborPts, C, eps, MinPts)
   add P to cluster C
   for each point P' in NeighborPts
      if P' is not visited
         mark P' as visited
         NeighborPts' = regionQuery(P', eps)
         if sizeof(NeighborPts') >= MinPts
            NeighborPts = NeighborPts joined with NeighborPts'
      if P' is not yet member of any cluster
         add P' to cluster C
regionQuery(P, eps)
   return all points within P's eps-neighborhood (including P)


Let's implement the algorithm.

I will be using Python's sklearn.cluster.DBSCAN, which performs DBSCAN clustering from a vector array or distance matrix. I will be using my sample data that was generated from Gaussian blobs; it is explained, with code, in my previous post[2].

from sklearn.cluster import DBSCAN

db = DBSCAN(eps=0.3, min_samples=10).fit(X)


  • eps : The maximum distance between two samples for them to be considered as in the same neighborhood (float, optional).

  • min_samples : The number of samples in a neighborhood for a point to be considered as a core point. This includes the point itself. (int, optional)

  • metric : The metric to use when calculating distance between instances in a feature array. (string, or callable)

  • algorithm : The algorithm to be used by the NearestNeighbors module to compute pointwise distances and find nearest neighbors.
    ({‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, optional)

  • leaf_size :Leaf size passed to BallTree or cKDTree. This can affect the speed of the construction and query, as well as the memory required. (int, optional,default = 30)


  • core_sample_indices_ : Indexes of core samples. (array, shape = [n_core_samples])

  • components_ : Copy of each core sample found by training. (array, shape = [n_core_samples, n_features])

  • labels_ : Cluster labels for each point in the dataset given to fit(). Noisy samples are given the label -1. (array, shape = [n_samples])

Memory complexity

  • The memory complexity of this implementation is O(n·d), where d is the average number of neighbors, while the original DBSCAN had memory complexity O(n).


  • fit(X[, y, sample_weight]) :    Perform DBSCAN clustering from features or distance matrix.

  • fit_predict(X[, y, sample_weight]): Performs clustering on X and returns cluster labels.

  • get_params([deep]) : Get parameters for this estimator.

  • set_params(**params) : Set the parameters of this estimator.

# Compute DBSCAN
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics

db = DBSCAN(eps=0.3, min_samples=10).fit(X)

# zeros_like: return an array of zeros with the same shape and type as a given array; dtype overrides the data type of the result.
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)

# core_sample_indices_: attribute holding the indexes of core samples (array, shape = [n_core_samples])
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_

# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)

# print results
print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f" % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f" % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))

The results for DBSCAN are compared using the following metrics.

  • Homogeneity
    A clustering result satisfies homogeneity if all of its clusters contain only data points which are members of a single class.

  • Completeness
    A clustering result satisfies completeness if all the data points that are members of a given class are elements of the same cluster.

  • V-measure[3]
    The V-measure is the harmonic mean between homogeneity and completeness.
    v = 2 * (homogeneity * completeness) / (homogeneity + completeness)

  • The Rand Index
    The Rand Index computes a similarity measure between two clusterings by considering all pairs of samples and counting pairs that are assigned to the same or different clusters in the predicted and true clusterings. The Adjusted Rand Index is the Rand Index adjusted for chance.

  • Adjusted Mutual Information (AMI)[4]
    It is an adjustment of the Mutual Information (MI) score to account for chance. It accounts for the fact that the MI is generally higher for two clusterings with a larger number of clusters, regardless of whether there is actually more information shared.

  • Silhouette Coefficient[5]
    The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample. The Silhouette Coefficient for a sample is (b - a) / max(a, b). To clarify, b is the distance between a sample and the nearest cluster that the sample is not a part of. Note that the Silhouette Coefficient is only defined if the number of labels satisfies 2 <= n_labels <= n_samples - 1.
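sklearn computes these scores for us, but homogeneity, completeness and V-measure are simple enough to sketch in plain Python (a toy illustration of the formulas above, using homogeneity = 1 − H(C|K)/H(C) and completeness symmetrically; note how flipping the label names leaves all scores perfect, since only the grouping matters):

```python
from collections import Counter
from math import log

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def cond_entropy(a, b):
    # H(a | b): entropy of a within each cluster of b, weighted by cluster size
    n = len(a)
    pair = Counter(zip(a, b))
    bcount = Counter(b)
    return -sum((nab / n) * log(nab / bcount[bv]) for (av, bv), nab in pair.items())

def v_measure(labels_true, labels_pred):
    h = 1.0 - cond_entropy(labels_true, labels_pred) / entropy(labels_true)
    c = 1.0 - cond_entropy(labels_pred, labels_true) / entropy(labels_pred)
    v = 0.0 if h + c == 0 else 2 * h * c / (h + c)  # harmonic mean
    return h, c, v

# Flipping the label names leaves the partition unchanged, so all scores are perfect.
print(v_measure([0, 0, 1, 1], [1, 1, 0, 0]))  # (1.0, 1.0, 1.0)
```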

Sample outcome from the data



Charting for DBSCAN

Here is a simple chart to show the DBSCAN results: noise points are shown in black, and all other cluster points have their own colors.

import matplotlib.pyplot as plt

# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = 'k'

    class_member_mask = (labels == k)

    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=14)

    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=6)

plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()


Here is the full code.

import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets.samples_generator import make_blobs

# generating sample data
centers = [[5, 5], [0, 0], [1, 5], [5, -1]]
X, labels_true = make_blobs(n_samples=500, n_features=2, centers=centers, cluster_std=0.9,
                            center_box=(1, 10.0), shuffle=True, random_state=0)

# Compute DBSCAN
db = DBSCAN(eps=0.5, min_samples=10).fit(X)

# zeros_like: return an array of zeros with the same shape and type as a given array; dtype overrides the data type of the result.
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)

# core_sample_indices_: attribute holding the indexes of core samples (array, shape = [n_core_samples])
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_

# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)

# print results
print('Estimated number of clusters: %d' % n_clusters_)
print("Homogeneity: %0.3f" % metrics.homogeneity_score(labels_true, labels))
print("Completeness: %0.3f" % metrics.completeness_score(labels_true, labels))
print("V-measure: %0.3f" % metrics.v_measure_score(labels_true, labels))
print("Adjusted Rand Index: %0.3f" % metrics.adjusted_rand_score(labels_true, labels))
print("Adjusted Mutual Information: %0.3f" % metrics.adjusted_mutual_info_score(labels_true, labels))
print("Silhouette Coefficient: %0.3f" % metrics.silhouette_score(X, labels))

# Drawing chart
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = plt.cm.Spectral(np.linspace(0, 1, len(unique_labels)))
for k, col in zip(unique_labels, colors):
    if k == -1:
        # Black used for noise.
        col = 'k'

    class_member_mask = (labels == k)

    xy = X[class_member_mask & core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=14)

    xy = X[class_member_mask & ~core_samples_mask]
    plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=6)

plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()

Gist :

code :



[1] Ester, Martin; Kriegel, Hans-Peter; Sander, Jörg; Xu, Xiaowei (1996). Simoudis, Evangelos; Han, Jiawei; Fayyad, Usama M., eds. A density-based algorithm for discovering clusters in large spatial databases with noise. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96). AAAI Press. pp. 226–231. ISBN 1-57735-004-9. CiteSeerX:


[3] Rosenberg, Andrew, and Julia Hirschberg. "V-Measure: A Conditional Entropy-Based External Cluster Evaluation Measure." EMNLP-CoNLL. Vol. 7. 2007.

[4]Vinh, Nguyen Xuan, Julien Epps, and James Bailey. "Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance." The Journal of Machine Learning Research 11 (2010): 2837-2854.

[5] Rousseeuw, Peter J. "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis." Journal of computational and applied mathematics 20 (1987): 53-65.

Sumedha KodithuwakkuDisable Chunking in APIs of WSO2 API Cloud

Sometimes it may be required to disable chunking in API requests made via WSO2 API Cloud. Some back-end servers do not accept chunked content, and therefore you may need to disable chunking.

In a standalone WSO2 APIM, this can be done by editing the velocity template as described in this blog post. However, in WSO2 API Cloud it is not possible to edit the velocity template.

Instead, this can be done with a custom mediation extension which disables chunking, as described below.

The corresponding sequence would be like this:

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="" name="disable-chunking">
    <property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
</sequence>

You can find a sample sequence (disable-chunking.xml) which you can upload to the Governance registry of the API Cloud using the management console UI of Gateway (

You can get the username from the top right corner of the Publisher; then enter your password and log in. Once you are logged in, select Resources on the left hand side of the Management Console, click Browse, and navigate to the /_system/governance/apimgt/customsequences registry location. Since this sequence needs to be invoked in the In direction (the request path), navigate to the in collection. Click Add Resource, upload the XML file of the sequence configuration and add it. (Note: once you add the sequence it might take up to 15 minutes until it is deployed into the publisher.)

In the Publisher, select the required API and go to the edit wizard by clicking Edit, then navigate to the Manage section. Click on the Sequences check box and select the sequence which disables chunking from the In Flow. After that, Save and Publish the API.

Now invoke the API. If you investigate your requests, you will notice that chunking is disabled.

Chunking Enabled

POST customerservice/customers HTTP/1.1
Content-Type: text/xml; charset=ISO-8859-1
Accept: text/xml
Transfer-Encoding: chunked
Connection: Keep-Alive
User-Agent: Synapse-PT-HttpComponents-NIO

Chunking Disabled
POST customerservice/customers HTTP/1.1
Content-Type: text/xml; charset=ISO-8859-1
Accept: text/xml
Content-Length: 42
Connection: Keep-Alive
User-Agent: Synapse-PT-HttpComponents-NIO
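The difference between the two requests above is only in how the body length is communicated: with chunking, each chunk is prefixed by its size in hex and the body ends with a zero-length chunk, while with chunking disabled the total size is declared up front in a Content-Length header. A small Python sketch of the chunked wire format (illustration only, per the HTTP/1.1 spec, RFC 7230):

```python
def chunked_encode(body, chunk_size=16):
    """Encode a byte string in HTTP/1.1 chunked transfer encoding (RFC 7230)."""
    out = b""
    for i in range(0, len(body), chunk_size):
        chunk = body[i:i + chunk_size]
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"  # hex size line, then data
    return out + b"0\r\n\r\n"  # zero-length chunk terminates the body

body = b"<customer><name>John</name></customer>"
print(chunked_encode(body, chunk_size=16))
# With chunking disabled, the same body is instead sent as-is,
# with "Content-Length: %d" % len(body) in the headers.
```

This is why a back-end that only understands Content-Length cannot parse a chunked body: the size prefixes and terminator are interleaved with the payload itself.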

Madhuka Udanthascikit-learn to generate isotropic Gaussian blobs

Scikit-learn is an open source machine learning library for the Python programming language. It features various classification, regression and clustering algorithms: support vector machines, logistic regression, naive Bayes, random forests, gradient boosting, k-means, DBSCAN, decision trees, Gaussian processes for ML, manifold learning, Gaussian mixture models, model selection, nearest neighbors, semi-supervised classification, feature selection, etc.

I have been working with these for my Big Data research over the last few weeks and I thought I would share it. I am planning to go through some of them in my blog. Before that we need some sample data / data frames, so here I will go through generating isotropic Gaussian blobs[1] for clustering. I will be using 'sklearn.datasets.make_blobs'.

X : array of shape [n_samples, n_features] (the generated samples)

y : array of shape [n_samples] (the integer labels for cluster membership of each sample)

from sklearn.datasets.samples_generator import make_blobs

X, y = make_blobs()

print X.shape
print y

The output will be



  • n_samples : The total number of points  (int, default=100)
    Points equally divided among clusters

  • n_features : The number of features (int, default=2)

  • centers : The number of centers to generate, or the fixed center locations (int or array, default=3)

  • cluster_std: The standard deviation of the clusters (float or sequence, default=1.0)

  • center_box: The bounding box for each cluster center  (pair of floats (min, max), default=-10.0, 10.0)

  • shuffle : Shuffle the samples (boolean, default=True)

  • random_state : If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. (int, RandomState instance or None (default=None))
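
To see what "isotropic Gaussian blobs" actually means, here is a minimal NumPy sketch of the same idea (a toy re-implementation for illustration, not sklearn's actual code): each cluster is drawn from a normal distribution with the same standard deviation in every feature dimension, and samples are split evenly among the clusters.

```python
import numpy as np

def toy_blobs(n_samples=100, centers=None, cluster_std=1.0, seed=0):
    """Toy version of make_blobs: isotropic Gaussians around each center."""
    rng = np.random.RandomState(seed)
    if centers is None:
        centers = [[0, 0], [5, 5], [0, 5]]
    centers = np.asarray(centers, dtype=float)
    n_clusters, n_features = centers.shape
    # split the samples as evenly as possible among the clusters
    counts = [n_samples // n_clusters] * n_clusters
    for i in range(n_samples % n_clusters):
        counts[i] += 1
    X_parts, y = [], []
    for label, (center, count) in enumerate(zip(centers, counts)):
        # "isotropic" = the same standard deviation in every feature dimension
        X_parts.append(rng.normal(loc=center, scale=cluster_std,
                                  size=(count, n_features)))
        y.extend([label] * count)
    return np.vstack(X_parts), np.array(y)

X, y = toy_blobs(n_samples=9, cluster_std=0.5)
print(X.shape)  # (9, 2)
print(y)        # [0 0 0 1 1 1 2 2 2]
```

Note this sketch skips shuffling; with shuffle=True, make_blobs additionally permutes the rows so the labels are interleaved.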

from sklearn.datasets.samples_generator import make_blobs

X, y = make_blobs(n_samples=6, n_features=2, centers=5, cluster_std=1.0, center_box=(1, 10.0), shuffle=True, random_state=0)

Play Time!!!

Here you will get an array of shape (6L, 2L)

To get a better idea of the generated data we can use Matplotlib [3]. You can add the lines below to the Python code and see the point distribution over X and Y.

import matplotlib.pyplot as plt

# Plot the training points
plt.scatter(X[:, 0], X[:, 1])
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.show()

Here is how our data looks


Let's increase our data sample to 1000 points and distribute them over 4 centers with a standard deviation of 0.5.

from sklearn.datasets.samples_generator import make_blobs

centers = [[2, 2], [8, 9], [9, 5], [3, 9]]
X, y = make_blobs(n_samples=1000, n_features=2, centers=centers, cluster_std=0.5, center_box=(1, 10.0), shuffle=True, random_state=0)
print(X.shape)
print(y)

import matplotlib.pyplot as plt

# Plot the training points
plt.scatter(X[:, 0], X[:, 1])
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.show()


I need to distribute my data points more widely around the cluster centers, so I increased the standard deviation to 0.9.


To make the clusters easier to distinguish, we can add a separate color for each cluster.

colors = ['r', 'g', 'b', 'c', 'k', 'y', 'm']
c = []
for i in y:
    c.append(colors[i])

Finally, your code will look like this:

# adding sample data
from sklearn.datasets.samples_generator import make_blobs

centers = [[2, 2], [8, 9], [9, 5], [3, 9], [4, 4], [0, 0], [2, 5]]
X, y = make_blobs(n_samples=5000, n_features=2, centers=centers, cluster_std=0.9, center_box=(1, 10.0), shuffle=True, random_state=0)

# print our generated sample data
print(X[:, 0])
print(y)

# Drawing a chart for our generated dataset
import matplotlib.pyplot as plt

# set colors for the clusters
colors = ['r', 'g', 'b', 'c', 'k', 'y', 'm']
c = []
for i in y:
    c.append(colors[i])

# Plot the training points
plt.scatter(X[:, 0], X[:, 1], c=c)
plt.xlabel('X axis')
plt.ylabel('Y axis')
plt.show()


Try a few!






Manisha Eleperuma: Securing Plaintext Passwords of WSO2 Server Config Files

Watch the video

Jayanga Dissanayake: Error occurred while applying patches {org.wso2.carbon.server.extensions.PatchInstaller}

I have seen people complaining that WSO2 servers log the following error message at server start-up.

[2015-04-06 15:48:57,572] ERROR {org.wso2.carbon.server.extensions.PatchInstaller} -  Error occurred while applying patches Destination '/home/jayanga/WSO2/wso2am-1.7.0/repository/components/plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.1.200.v20120522-1813' exists but is a directory
at org.wso2.carbon.server.util.FileUtils.copyFile(
at org.wso2.carbon.server.util.PatchUtils.copyNewPatches(
at org.wso2.carbon.server.extensions.PatchInstaller.perform(
at org.wso2.carbon.server.Main.invokeExtensions(
at org.wso2.carbon.server.Main.main(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at org.wso2.carbon.bootstrap.Bootstrap.loadClass(
at org.wso2.carbon.bootstrap.Bootstrap.main(

The main reason for this error message is erroneous entries in the patch metadata files.

This might happen if you forcefully stop the server as soon as you start it.

When the server starts up, it copies the patch files. If the server is forcefully stopped at that point using [Ctrl+C], the patching process is stopped immediately and the patch metadata can get corrupted.

You can get rid of this issue by removing the corrupted patch-related metadata manually and restarting the server, so that the server will apply all the patches from the beginning.

  1. Remove [CARBON_HOME]/repository/components/patches/.metadata
  2. Restart the server (do not interrupt it while it is starting up)

Jayanga Dissanayake: Everyday Git (Git commands you need in your everyday work)

Git [1] is one of the most popular version control systems. In this post I am going to show you how to work with GitHub [2]. On GitHub there are thousands of public repositories; if you are interested in a project you can start working on it and contributing to it. The following are the steps and commands you will have to use while you work with GitHub.

1. Forking a repository
This is done via the GitHub [2] web site.

2. Clone a new repository
git clone

3. Get updates from the remote repository (origin/master)
git pull origin master

4. Push the updates to the the remote repository (origin/master)
git push origin master

5. Add updated files to staging
git add

6. Commit the staged changes to the local repository
git commit -m "Modifications to" --signoff

7. Set the upstream repository
git remote add upstream

8. Fetch from upstream repository
git fetch upstream

9. Fetch from all the remote repositories
git fetch --all

10. Merge new changes from upstream repository for the master branch
git checkout master
git merge upstream/master

11. Merge new changes from upstream repository for the "otherbranch" branch
git checkout otherbranch
git merge upstream/otherbranch

12. View the history of commits
git log

13. If you need to discard some commits in the local repository
First find the commit ID you want to revert to, then use the following command
git reset --hard #commitId

14. To tag a particular commit
git checkout #commitid
git tag -a v1.1.1 -m 'Tagging version v1.1.1'
git push origin --tags


Jayanga Dissanayake: WSO2 Carbon : Get notified just after the server start and just before server shutdown

WSO2 Carbon [1] is a 100% open source, integrated and componentized middleware platform which enables you to develop your business and enterprise solutions rapidly. WSO2 Carbon is based on the OSGi framework [2], and inherits modularity and dynamism from OSGi.

In this post I am going to show you how to get notified when the server is starting up and when it is about to shut down.

In OSGi, the bundle start-up order is not deterministic, so you can't rely on a particular bundle start-up sequence.

There are real world scenarios where you have dependencies among bundles, and hence need to perform some actions before other dependent bundles get deactivated during server shutdown.

E.g. let's say you have to send messages to an external system. Your message sending module uses your authentication module to authenticate each request before sending it to the external system, and the message sending module tries to send all the buffered messages before the server shuts down.

Bundle unloading in OSGi does not happen in a guaranteed sequence either. So what would happen if your authentication bundle were deactivated before your message sending bundle? In that case the message sending module can't send its remaining messages.

To help with these types of scenarios, the WSO2 Carbon framework provides you with special OSGi services which can be used to detect server start-up and server shutdown.

1. How to get notified of server startup

Implement the interface org.wso2.carbon.core.ServerStartupObserver [3], and register it as a service via the bundle context.

When the server is starting, you will receive notifications via completingServerStartup() and completedServerStartup().

2. How to get notified of server shutdown

Implement the interface org.wso2.carbon.core.ServerShutdownHandler [4], and register it as a service via the bundle context.

When the server is about to shut down, you will receive the notification via invoke().


protected void activate(ComponentContext componentContext) {
    try {
        componentContext.getBundleContext().registerService(
                ServerStartupObserver.class.getName(), new CustomServerStartupObserver(), null);
    } catch (Throwable e) {
        log.error("Failed to activate the bundle ", e);
    }
}

Madhuka Udantha: CouchDB 2.0 (Developer Preview) with HTTP APIs


The Apache CouchDB project had announced a Developer Preview release of its CouchDB 2.0. The Developer Preview 2.0 brings all-new clustering technology to the Open Source NoSQL database, enabling a range of big data capabilities that include being able to store, replicate, sync, and process large amounts of data distributed across individual servers, data centers, and geographical regions in any deployment configuration, including private, hybrid, and multi-cloud.

In earlier versions of CouchDB, databases could be replicated across as many individual servers as needed, but each server was limited by vertical scaling. With the clustering introduced in the Developer Preview, databases can now be distributed across many servers, adding horizontal scaling capability to CouchDB. This technology works by borrowing many principles from Amazon's Dynamo paper, and improves the overall performance, durability, and high-availability of large-scale CouchDB deployments.

Other additions in the Developer Preview include a faster database compactor, a faster replicator, easier setup, re-organised code for easier contributions, a global changes feed, and improved test coverage. This version of CouchDB also includes a new Web dashboard admin interface called Fauxton, with improved user experience design, rich query editors, and a management interface for replication. (A screenshot of Fauxton is available at

The Apache CouchDB community encourages feedback as it works towards the 2.0 General Availability release, which is expected early 2015.

To Build

You can follow the steps in [3]. If you are a Windows user, you can refer to my previous post. Then you can start CouchDB 2.0 (Developer Preview).


The API can be divided into Server, Databases, Documents and Replication. The command below starts two nodes in a cluster.

dev/run -n 2

cluster couchDB

Servers / nodes

This gives a two-node cluster on ports 15984 and 25984. They represent the endpoints of the cluster, and you can connect to either one of them to access the full cluster. The backdoor port for 15984 would be 15986, and likewise for 25984. Here we are checking the ports of the two nodes.


cluster couchDB2


Let's create a database. CouchDB is a database management system (DBMS), so it can hold multiple databases. Let's create a database called 'books'.

curl -X PUT

cluster couchDB node1


CouchDB replies with an 'ok' in JSON, so the database was created. You cannot create a database that already exists, therefore we can only have one database called 'books' on one node.

You can view all the databases in your node as shown below. I have some shards that I was testing; I will write about shards in CouchDB later.


cluster couchDB node1

You can delete the database with the request below. You can see the full request we have sent from cURL and the full response with headers.
curl -vX DELETE

cluster couchDB node1


Documents are CouchDB’s central data structure. Each document has its ID, which is unique per database. We can choose any string to be the ID.

curl -X PUT -d '{"title":"Unbroken: A World War II Story of Survival, Resilience, and Redemption","author":"Laura Hillenbrand"}'

/books/6e4567ed6c29495a567c05947f18d234bb specifies the location of a document inside our books database.
curl -X GET


We can ask CouchDB to give us a UUID with
curl -X GET


If we want to update or delete a document, CouchDB expects us to include the _rev field of the revision you wish to change. When CouchDB accepts the change it will generate a new revision number. This mechanism ensures that, in case somebody else made a change without you knowing before you got to request the document update, CouchDB will not accept your update because you are likely to overwrite data you didn’t know existed. Or simplified: whoever saves a change to a document first, wins.
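
This optimistic-concurrency rule can be illustrated with a toy in-memory store (a sketch of the idea only; it assumes nothing about CouchDB's real revision format, which uses strings like '1-80d0...' rather than plain integers):

```python
class TinyDocStore:
    """Toy illustration of CouchDB-style optimistic concurrency:
    an update must carry the document's current revision, otherwise
    it is rejected as a conflict (whoever saves first, wins)."""

    def __init__(self):
        self.docs = {}  # doc_id -> (rev, body)

    def put(self, doc_id, body, rev=None):
        current = self.docs.get(doc_id)
        if current is not None and rev != current[0]:
            # the caller's revision is stale: someone saved in between
            raise ValueError("conflict: document was updated by someone else")
        new_rev = 1 if current is None else current[0] + 1
        self.docs[doc_id] = (new_rev, body)
        return new_rev

store = TinyDocStore()
rev1 = store.put("book1", {"title": "Unbroken"})          # create -> rev 1
rev2 = store.put("book1", {"title": "Unbroken",
                           "author": "Laura Hillenbrand"},
                 rev=rev1)                                # fresh rev -> rev 2
try:
    store.put("book1", {"title": "stale write"}, rev=rev1)  # stale rev rejected
except ValueError as err:
    print(err)  # conflict: document was updated by someone else
```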

curl -X PUT -d '{"_rev":"1-80d0a1648cce795265cb4f2ba0557a12","title":"Unbroken: A World War II Story of Survival, Resilience, and Redemption","author":"Laura Hillenbrand","published-date":"July 29, 2014"}'



CouchDB replication is a mechanism to synchronize databases. Much like rsync synchronizes two directories locally or over a network, replication synchronizes two databases locally or remotely.
In a simple POST request, you tell CouchDB the source and the target of a replication and CouchDB will figure out which documents and new document revisions are on source that are not yet on target, and will proceed to move the missing documents and revisions over.

curl -X PUT

curl -vX POST -d '{"source":"books","target":"books-replica"}' -H "Content-Type: application/json"

We can specify a remote target, such as our node 2, as below.

curl -vX POST -d '{"source":"books","target":""}' -H "Content-Type: application/json"

We can look at database details on each node with the commands below, including memory details.

curl -X GET
curl -X GET


Those are the very basic CouchDB APIs.


Shiva Balachandran: How to update the Fork : GitHub : Fork

Syncing a fork

The Setup

Before you can sync, you need to add a remote that points to the upstream repository. You may have done this when you originally forked.

Tip: Syncing your fork only updates your local copy of the repository; it does not update your repository on GitHub.

$ git remote -v
# List the current remotes
origin (fetch)
origin (push)

$ git remote add upstream
# Set a new remote

$ git remote -v
# Verify new remote
origin (fetch)
origin (push)
upstream (fetch)
upstream (push)


There are two steps required to sync your repository with the upstream: first you must fetch from the remote, then you must merge the desired branch into your local branch.


Fetching from the remote repository will bring in its branches and their respective commits. These are stored in your local repository under special branches.

$ git fetch upstream
# Grab the upstream remote's branches
remote: Counting objects: 75, done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 62 (delta 27), reused 44 (delta 9)
Unpacking objects: 100% (62/62), done.
 * [new branch]      master     -> upstream/master

We now have the upstream’s master branch stored in a local branch, upstream/master

$ git branch -va
# List all local and remote-tracking branches
* master                  a422352 My local commit
  remotes/origin/HEAD     -> origin/master
  remotes/origin/master   a422352 My local commit
  remotes/upstream/master 5fdff0f Some upstream commit


Now that we have fetched the upstream repository, we want to merge its changes into our local branch. This will bring that branch into sync with the upstream, without losing our local changes.

$ git checkout master
# Check out our local master branch
Switched to branch 'master'

$ git merge upstream/master
# Merge upstream's master into our own
Updating a422352..5fdff0f
 README                    |    9 -------                 |    7 ++++++
 2 files changed, 7 insertions(+), 9 deletions(-)
 delete mode 100644 README
 create mode 100644

If your local branch didn’t have any unique commits, git will instead perform a “fast-forward”:

$ git merge upstream/master
Updating 34e91da..16c56ad
Fast-forward                 |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

Chamila Wijayarathna: Adding Password Protection to Cassandra Database (Keyspace)

Today I thought of writing about a very simple thing that comes up when working with a Cassandra database: adding password protection to it. Even though this is very basic, it took me a while to find out how to do this and how to access a password protected database. That is why I thought of writing this blog.
When we install Cassandra in a machine, by default any user can access it without any problem.

public void connect(String node) {
    cluster = Cluster.builder()
            .addContactPoint(node)
            .build();
    Metadata metadata = cluster.getMetadata();
    System.out.printf("Connected to cluster: %s\n", metadata.getClusterName());
    for (Host host : metadata.getAllHosts()) {
        System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
                host.getDatacenter(), host.getAddress(), host.getRack());
    }
    session = cluster.connect();
}
So anyone who knows the IP address and port of the Cassandra server can access it and also can change it.
Because of this we need password protection for the database.
For adding password protection, we need to change the "authenticator" setting in the cassandra.yaml file. If Cassandra is installed on an Ubuntu server, this file is available at '/etc/cassandra'.
After opening it, we have to change the line

authenticator: AllowAllAuthenticator


to

authenticator: PasswordAuthenticator

Then we have to restart the Cassandra server with 'sudo service cassandra restart'.
Now if we try to connect to the database without proper authentication, we will get the following message.

If we try to access it via Java with the code I mentioned above, we will receive the following error.

Exception in thread "main" com.datastax.driver.core.exceptions.AuthenticationException: Authentication error on host / Host / requires authentication, but no authenticator found in Cluster configuration
at com.datastax.driver.core.AuthProvider$1.newAuthenticator(
at com.datastax.driver.core.Connection.initializeTransport(
at com.datastax.driver.core.Connection.<init>(
at com.datastax.driver.core.Connection$
at com.datastax.driver.core.ControlConnection.tryConnect(
at com.datastax.driver.core.ControlConnection.reconnectInternal(
at com.datastax.driver.core.ControlConnection.connect(
at com.datastax.driver.core.Cluster$Manager.init(
at com.datastax.driver.core.Cluster.getMetadata(
at SimpleClient.connect(
at SimpleClient.main(

When we do this for the first time, we can use the default credentials for accessing the database.
Username - cassandra
Password - cassandra

We can access the database via java using following code.

public void connect(String node) {
    cluster = Cluster.builder()
            .addContactPoint(node)
            .withCredentials("yourusername", "yourpassword")
            .build();
    Metadata metadata = cluster.getMetadata();
    System.out.printf("Connected to cluster: %s\n", metadata.getClusterName());
    for (Host host : metadata.getAllHosts()) {
        System.out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
                host.getDatacenter(), host.getAddress(), host.getRack());
    }
    session = cluster.connect();
}
Now we are almost there. Still, using the default username and password leaves us with the same issue we considered at the beginning, so we need to change the password so that only authorized people can access the database.
We can do it using cqlsh as follows.
ALTER USER cassandra WITH PASSWORD 'newPassword';
Also if necessary, we can create new users as well. More details on creating new users are available at .
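
Creating a new user is done with CQL as well; a sketch, assuming the Cassandra 2.x CREATE USER syntax (the user name and password below are placeholders):

```sql
-- log in to cqlsh as a superuser first, e.g.: cqlsh -u cassandra -p newPassword
CREATE USER appuser WITH PASSWORD 'appPassword' NOSUPERUSER;

-- verify the result
LIST USERS;
```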

John Mathon: The virtuous circle Connected Health will drive Healthcare improvements faster


Change is happening in our health care system in the US. There are a number of factors, both endogenous to the health care industry and external to it, converging to create a real chance for significant change. Signs of that change are numerous, including a biotech boom, a genetics boom, and an information systems boom. More money is being spent on healthcare systems and technology for the first time in a long time. WSO2 is seeing a significant amount of its revenue coming from health care companies.

Some of the endogenous change includes:

dna to protein

Better understanding and ability to manipulate genes.

The ability to manipulate genes has progressed at a fantastic rate reminiscent of the rate of computer hardware change, literally billions of times faster than before and cheaper.    The first genome sequencing took years but it is now possible for $100 to sequence your genetic code in a few days, determine your ancestry and your health proclivities.

Clinical trials are being done with various flavors of genetic manipulation.  One trial is testing pancreatic cells that can produce insulin in precise reaction to a body’s needs like a person’s own body would.   This would be a cure for Type 1 Diabetes, a disease that afflicts more than a million people in the US alone.  Type 2 diabetes may not be far behind and afflicts 29 million people in the US alone.   For years we’ve been promised cures for cancer and the problem is we really had no idea fundamentally what cancer was precisely let alone a way to stop it.

tubulin and microtubules

The ability to see and manipulate what is happening at the atom level

is uncovering fundamental new understanding of how drugs work, don’t work and what kind of drug is needed to get something done.   Learning how to target specific cells by seeing precisely atomic level markers.   Our ability to see and manipulate at the nano level has also crossed a barrier of utility that now makes many things possible that weren’t before.

An example of this is a new drug by Genentech that will literally stop the common cold.   Genentech identified a precise marker on all cold viruses and constructed an antigen that precisely attacks it without your body’s immune system.   This is incredibly important as many people die with a defective immune system unable to fight the common cold.

This is just some of the change happening inside the biology business that portend over the next few decades truly astounding impacts.

Other factors related to data, computers and the social aspect of medicine sometimes called Connected Health ( what I call Platform 3)  is enabling a new kind of medical care and medical data that everyone is hoping will finally transform our medical system into a more personal, more efficient and intelligent system.

Connected Health

Connected health is about connecting the patient, the patient data, the providers, the researchers, the payers into a cohesive system that enables radical change in quality of service and knowledge of disease itself.

Connected Devices

migraine reducer - cefaly-400x400 ultrasound_iot

The ability to have connected devices (Internet of Things) such as medical monitoring devices on an outpatient basis can impact clinical studies costs as well as length of time in hospitals.   It can help patients get response to problems faster and it can help create a much closer connection of provider and patient.

These devices are hoped to enable less time in hospitals reducing health care costs and also more responsive and consistent care in or out of the hospital.  More outpatient services.   It also can mean faster reaction to medical conditions, warnings of potential conditions.   Google is working as are some other companies on devices that can detect the symptoms of heart attack or stroke before it happens.   Some people think of IOT for health is all about fitness trackers.  That is just the surface of what is happening.

Bigdata in Healthcare


Medical information consists of Medical records (EHR), Medical History (past actions and events, prescriptions), Genomic information, Clinical Data, IOT data and mobile collected data.    These last categories are newer but form a lot of the backbone of what people hope will tie together all the data to make sense of things at a level never before possible

One has to understand that our ability today to find out what any one thing does to a disease or to your body is confounded by the vast differences that comprises each person.   The different things that different people eat, the different combination of other conditions they have, genetic predispositions, the different behaviors and routines of people are so different that it becomes almost impossible to control for all these variables and conclude anything without enormous expense.  Even then the data is sparse and things go wrong.   A company does a study on a drug and after over a hundred million dollars in studies discovers years later that it causes impacts that render the drug more of a problem than a benefit.    More information, more bigdata, more careful and precise monitoring of patient data through the use of connected technologies enables a way to drastically reduce the cost of studies and simultaneously massively increase the amount of information available to discern what is really going on.

The ability to collect vast amounts of data about patients in electronic records and combine that with data from millions of people and their experiences in daily routine can drastically impact our understanding of how disease is onset, how to manage disease, what works and doesn’t work, and a potentially powerful way to gain new ideas about the intersection of drugs, supplements and behavior.   This can change the cost of trials by orders of magnitude by leveraging big data to find correlations that are impossible to discern with limited clinical trials we do today.

Similarly the ability to analyze vast amounts of genetic data and studies combined with health data can produce vast new insight and creativity in what possible drugs, supplements or behaviors can impact our health.


Apple is helping organize some of this effort in an initiative called HealthKit.   However, the energy and ideas driving all this come from a vast array of optimistic individuals who believe that with the connected technologies we can dramatically change, reduce the cost and improve the results of medical care.

HIPAA and progress

The last couple decades saw a tremendous effort to ensure the privacy of medical information for patients. Ironically the next several decades may see the reversal of some of that. The ability to leverage connected health care comes to some extent from the ability to merge the experiences and results of millions of patients. If patients believe the health data will be reasonably anonymized and will be used to benefit them, then I believe millions will agree to share their medical information and their behavior data for the good of humankind. However, if that information is abused or the results aren’t forthcoming we may see a backlash. I am optimistic that having vast amounts of data will help us uncover information that will be of incredible value to the health of everyone. I am very hopeful that people will not abuse this information. (Crossing my fingers)



What is Connected Health?

Connected Health


Connected healthcare connects all aspects of a patient and his providers as well as the information the patient needs to make decisions and understand his condition.    It is estimated that a typical hospital (not sure what that is) will have to maintain 1,000 Terabytes of patient information in a few years.

1) Connect the patient with his care giver to be able to get answers and options

2) Connect the patient with his insurance sources to get information on his options

3) Bringing together electronically all information on a patient from genomic data to data from IOT devices and mobile devices as well as all the patients Electronic health records and history.

4) The ability of the patient to control and expose their information for the betterment of themselves and others

5) The ability of the patient to socialize with others who have a similar situation

6) The ability of the patient to leave the hospital when possible using IOT monitoring devices or to receive outpatient care using IOT devices and to automate data collection and make it portable wherever the patient is

7) An open environment for new technology to leverage medical information in an anonymized way


The Future of Healthcare

What we are going to see is a combination of a much more precise and learned industry able to figure out how to attack specific problems, with tools to engineer solutions like never before and with data to understand impact and bring to market cheaper and faster new therapies and lessons, drugs, treatments, behaviors that will change the face of medical care dramatically over the next 10 to 20 years.



The medical industry in the US is over $3 trillion.  The scale of it is almost impossible to imagine.  Across the world health care averages 10% of GDP, in the US twice that.   In the US there are millions of providers.   Getting change across such a vast and huge empire of companies and competing interests has been an incredible challenge over the years.

It is estimated to cost $38 billion just to change to the ICD10 coding system.   A coding change costs more than the GDP of some countries.   It’s ridiculous.  The sheer number of participants and scale makes change on a large scale prohibitively difficult, time consuming and expensive.   The politics surrounding health care are intimidating as everyone is affected and it is deeply politicized at times.   All these factors conspire to make one pessimistic about the possibility of change.


Lack of information inhibits change.  Even if someone discovers something that is better the ability to propagate that throughout such a vast system is inhibited by the disconnectedness of the system.  Vast sections of the industry can remain blithely ignorant and unresponsive because nobody knows there is a better way.   Connectedness promises a flood of information that counteracts the inertia of this system.   Everyday hundreds of millions of people will have access to information that they can confront the elements that resist what benefits them.   Consumers will see choices.  They will demand and change will come faster than we have seen before.   Information will drive change.

 Articles you may find interesting on this topic:

Infographic: The State of the Connected Patient in 2015

Emerging market for mobile health products – IOT ultrasound

Epilepsy Center Improves Sharing of Patient Data with WSO2 Carbon Enterprise Middleware

BDigital Delivers E-Health and Smart Home Platform Using the WSO2 Carbon Platform

Connected Medical Devices, Apps: Are They Leading the IoT Revolution — or Vice Versa?

Connected Medical Devices in the Internet of Things

List of DNA testing companies

What Do IoT And M2M Mean For Personal Medical Device Providers? Part I

Telecommunication Companies Back mHealth Solutions

Biotech First Sector Ever To Outperform For 5 Years In A Row $IBB

Healthcare Data: What is the Patient’s Role?

Making the case for ‘medical grade’ mHealth

How medical startups became the biggest thing in 2014

10 Medical Breakthroughs That Sound Like Science Fiction

Targeted Cancer Therapies

Jayanga Dissanayake: Custom Authenticator for WSO2 Identity Server (WSO2IS) with Custom Claims

WSO2 IS is one of the best identity servers; it enables you to offload your identity and user entitlement management burden totally from your application. It comes with many features, supports many industry standards, and most importantly it allows you to extend it according to your security requirements.

In this post I am going to show you how to write your own Authenticator, which uses some custom claim to validate users and how to invoke your custom authenticator with your web app.

Create your Custom Authenticator Bundle

WSO2 IS is based on OSGi, so if you want to add a new authenticator you have to create an OSGi bundle. The following is the source of the OSGi bundle you have to prepare.

This bundle will consist of three files,
1. CustomAuthenticatorServiceComponent
2. CustomAuthenticator
3. CustomAuthenticatorConstants

CustomAuthenticatorServiceComponent is an OSGi service component; it basically registers the CustomAuthenticator (service). CustomAuthenticator is an implementation of org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator, which actually provides our custom authentication.

1. CustomAuthenticatorServiceComponent

package org.wso2.carbon.identity.application.authenticator.customauth.internal;

import java.util.Hashtable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator;
import org.wso2.carbon.identity.application.authenticator.customauth.CustomAuthenticator;
import org.wso2.carbon.user.core.service.RealmService;

/**
 * @scr.component name="identity.application.authenticator.customauth.component" immediate="true"
 * @scr.reference name="realm.service"
 * interface="org.wso2.carbon.user.core.service.RealmService" cardinality="1..1"
 * policy="dynamic" bind="setRealmService" unbind="unsetRealmService"
 */
public class CustomAuthenticatorServiceComponent {

    private static Log log = LogFactory.getLog(CustomAuthenticatorServiceComponent.class);

    private static RealmService realmService;

    protected void activate(ComponentContext ctxt) {

        CustomAuthenticator customAuth = new CustomAuthenticator();
        Hashtable<String, String> props = new Hashtable<String, String>();

        // Register the custom authenticator as an ApplicationAuthenticator service
        ctxt.getBundleContext().registerService(ApplicationAuthenticator.class.getName(), customAuth, props);

        if (log.isDebugEnabled()) {
            log.debug("CustomAuthenticator bundle is activated");
        }
    }

    protected void deactivate(ComponentContext ctxt) {
        if (log.isDebugEnabled()) {
            log.debug("CustomAuthenticator bundle is deactivated");
        }
    }

    protected void setRealmService(RealmService realmService) {
        log.debug("Setting the Realm Service");
        CustomAuthenticatorServiceComponent.realmService = realmService;
    }

    protected void unsetRealmService(RealmService realmService) {
        log.debug("UnSetting the Realm Service");
        CustomAuthenticatorServiceComponent.realmService = null;
    }

    public static RealmService getRealmService() {
        return realmService;
    }
}

2. CustomAuthenticator

This is where your actual authentication logic is implemented.

package org.wso2.carbon.identity.application.authenticator.customauth;

import java.io.IOException;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.identity.application.authentication.framework.AbstractApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.AuthenticatorFlowStatus;
import org.wso2.carbon.identity.application.authentication.framework.LocalApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.config.ConfigurationFacade;
import org.wso2.carbon.identity.application.authentication.framework.context.AuthenticationContext;
import org.wso2.carbon.identity.application.authentication.framework.exception.AuthenticationFailedException;
import org.wso2.carbon.identity.application.authentication.framework.exception.InvalidCredentialsException;
import org.wso2.carbon.identity.application.authentication.framework.exception.LogoutFailedException;
import org.wso2.carbon.identity.application.authentication.framework.util.FrameworkUtils;
import org.wso2.carbon.identity.application.authenticator.customauth.internal.CustomAuthenticatorServiceComponent;
import org.wso2.carbon.identity.base.IdentityException;
import org.wso2.carbon.identity.core.util.IdentityUtil;
import org.wso2.carbon.user.api.UserRealm;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.utils.multitenancy.MultitenantUtils;

/**
 * Username & password based authenticator
 */
public class CustomAuthenticator extends AbstractApplicationAuthenticator
        implements LocalApplicationAuthenticator {

    private static final long serialVersionUID = 192277307414921623L;

    private static Log log = LogFactory.getLog(CustomAuthenticator.class);

    public boolean canHandle(HttpServletRequest request) {
        String userName = request.getParameter("username");
        String password = request.getParameter("password");

        if (userName != null && password != null) {
            return true;
        }
        return false;
    }

    public AuthenticatorFlowStatus process(HttpServletRequest request,
            HttpServletResponse response, AuthenticationContext context)
            throws AuthenticationFailedException, LogoutFailedException {

        if (context.isLogoutRequest()) {
            return AuthenticatorFlowStatus.SUCCESS_COMPLETED;
        } else {
            return super.process(request, response, context);
        }
    }

    protected void initiateAuthenticationRequest(HttpServletRequest request,
            HttpServletResponse response, AuthenticationContext context)
            throws AuthenticationFailedException {

        String loginPage = ConfigurationFacade.getInstance().getAuthenticationEndpointURL();
        String queryParams = FrameworkUtils.getQueryStringWithFrameworkContextId(
                context.getQueryParams(), context.getCallerSessionKey(), context.getContextIdentifier());

        try {
            String retryParam = "";

            if (context.isRetrying()) {
                retryParam = "&authFailure=true&";
            }

            response.sendRedirect(response.encodeRedirectURL(loginPage + ("?" + queryParams))
                    + "&authenticators=" + getName() + ":" + "LOCAL" + retryParam);
        } catch (IOException e) {
            throw new AuthenticationFailedException(e.getMessage(), e);
        }
    }

    protected void processAuthenticationResponse(HttpServletRequest request,
            HttpServletResponse response, AuthenticationContext context)
            throws AuthenticationFailedException {

        String username = request.getParameter("username");
        String password = request.getParameter("password");

        boolean isAuthenticated = false;

        // Check the authentication
        try {
            int tenantId = IdentityUtil.getTenantIdOFUser(username);
            UserRealm userRealm = CustomAuthenticatorServiceComponent.getRealmService()
                    .getTenantUserRealm(tenantId);

            if (userRealm != null) {
                UserStoreManager userStoreManager = (UserStoreManager) userRealm.getUserStoreManager();
                isAuthenticated = userStoreManager.authenticate(
                        MultitenantUtils.getTenantAwareUsername(username), password);

                // Read the claim URI to check from the authenticator configuration
                Map<String, String> parameterMap = getAuthenticatorConfig().getParameterMap();
                String blockSPLoginClaim = null;
                if (parameterMap != null) {
                    blockSPLoginClaim = parameterMap.get("BlockSPLoginClaim");
                }
                if (blockSPLoginClaim == null) {
                    blockSPLoginClaim = "";
                }
                if (log.isDebugEnabled()) {
                    log.debug("BlockSPLoginClaim has been set as : " + blockSPLoginClaim);
                }

                String blockSPLogin = userStoreManager.getUserClaimValue(
                        MultitenantUtils.getTenantAwareUsername(username), blockSPLoginClaim, null);

                boolean isBlockSpLogin = Boolean.parseBoolean(blockSPLogin);
                if (isAuthenticated && isBlockSpLogin) {
                    if (log.isDebugEnabled()) {
                        log.debug("user authentication failed due to user is blocked for the SP");
                    }
                    throw new AuthenticationFailedException("SPs are blocked");
                }
            } else {
                throw new AuthenticationFailedException("Cannot find the user realm for the given tenant: " + tenantId);
            }
        } catch (IdentityException e) {
            log.error("CustomAuthentication failed while trying to get the tenant ID of the user", e);
            throw new AuthenticationFailedException(e.getMessage(), e);
        } catch (org.wso2.carbon.user.api.UserStoreException e) {
            log.error("CustomAuthentication failed while trying to authenticate", e);
            throw new AuthenticationFailedException(e.getMessage(), e);
        }

        if (!isAuthenticated) {
            if (log.isDebugEnabled()) {
                log.debug("user authentication failed due to invalid credentials.");
            }
            throw new InvalidCredentialsException();
        }

        String rememberMe = request.getParameter("chkRemember");

        if (rememberMe != null && "on".equals(rememberMe)) {
            context.setRememberMe(true);
        }
    }

    protected boolean retryAuthenticationEnabled() {
        return true;
    }

    public String getContextIdentifier(HttpServletRequest request) {
        return request.getParameter("sessionDataKey");
    }

    public String getFriendlyName() {
        return CustomAuthenticatorConstants.AUTHENTICATOR_FRIENDLY_NAME;
    }

    public String getName() {
        return CustomAuthenticatorConstants.AUTHENTICATOR_NAME;
    }
}
3. CustomAuthenticatorConstants

This is a helper class just to hold the constants you are using in your authenticator.

package org.wso2.carbon.identity.application.authenticator.customauth;

/**
 * Constants used by the CustomAuthenticator
 */
public abstract class CustomAuthenticatorConstants {

    public static final String AUTHENTICATOR_NAME = "CustomAuthenticator";
    public static final String AUTHENTICATOR_FRIENDLY_NAME = "custom";
    public static final String AUTHENTICATOR_STATUS = "CustomAuthenticatorStatus";
}

Once you are done with these files, your authenticator is ready. Now you can build your OSGi bundle and place it inside <CARBON_HOME>/repository/components/dropins.

*sample pom.xml file [3]

Create new Claim

Now you have to create a new claim in WSO2IS. To do this, log in to the management console of WSO2IS and follow the steps described in [1]. In this example, I am going to create a new claim, "Block SP Login".

So, go to the Configuration section of the management console, click on "Claim Management", then select the dialect you want to add the claim to.

Click on "Add New Claim Mapping" and fill in the details related to your claim.

Display Name   Block SP Login
Description   Block SP Login
Claim Uri
Mapped Attribute (s)  localityName
Regular Expression
Display Order 0
Supported by Default true
Required false
Read-only false

Now your new claim is ready in WSO2IS. As you set "Supported by Default" to true, this claim will be available in the user profile. So you will see this field appear when you try to create a user, but the field is not mandatory as you didn't mark it as "Required".

Change application-authentication.xml

There is another configuration change you have to do, as the authenticator reads the claim name from the configuration file. Add the information about your new claim to repository/conf/security/application-authentication.xml:

<AuthenticatorConfig name="CustomAuthenticator" enabled="true">
    <Parameter name="BlockSPLoginClaim"></Parameter>
</AuthenticatorConfig>

If you check the processAuthenticationResponse method above, you will see that in addition to authenticating the user against the user store, it checks for the new claim.

So, this finishes the basic steps to set up your custom authentication. Now you have to set up a new Service Provider in WSO2IS and assign your custom authenticator to it, so that whenever your SP tries to authenticate a user with WSO2IS, it will use your custom authenticator.

Create Service Provider and set the Authenticator

Follow the basic steps given in [2] to create a new Service Provider.

Then, go to "Inbound Authentication Configuration" -> "SAML2 Web SSO Configuration", and make the following changes:

Issuer* = <name of your SP>
Assertion Consumer URL = <http://localhost:8080/your-app/samlsso-home.jsp>
Enable Response Signing = true
Enable Assertion Signing = true
Enable Single Logout = true
Enable Attribute Profile = true

Then go to the "Local & Outbound Authentication Configuration" section,
select "Local Authentication" as the authentication type, and select your authenticator, here "custom".

Now you have completed all the steps needed to set up your custom authenticator with your custom claims.

You can now start WSO2IS and start using your service. Meanwhile, change the value of "Block SP Login" for a particular user and see the effect.


sanjeewa malalgoda: How to use a custom password validation policy in WSO2 API Manager and update the sign-up UI to handle the new policy

In this article we will discuss how to change the password policy using Identity Server specific configurations. We will also discuss updating the API Store sign-up page according to the custom password policy, so that signing up in the API Store is easy for new users.

Please follow the instructions below to change the password policy by updating the configuration file available in the repository/conf/security directory.

Do the necessary changes in the file:
# Enable the identity listener


# Define password policy enforce extensions

The password should contain a digit (0-9), a lower case letter (a-z), an upper case letter (A-Z) and one of the !@#$%&* characters, as specified in Password.policy.extensions.3.pattern. But the sign-up process has an issue with this pattern.
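As a rough illustration, a policy like the one described above can be expressed as a single regular expression. The pattern and class name below are my own reconstruction, not the actual value of Password.policy.extensions.3.pattern in your configuration, so treat this only as a way to experiment with candidate passwords:

```java
import java.util.regex.Pattern;

public class PasswordPolicyCheck {

    // Hypothetical reconstruction of the policy described above:
    // at least one digit, one lower case letter, one upper case letter,
    // one of !@#$%&*, and a minimum length of 6 characters.
    static final Pattern POLICY = Pattern.compile(
            "^(?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%&*]).{6,}$");

    static boolean isValid(String password) {
        return POLICY.matcher(password).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("Passw0rd!")); // true: meets every rule
        System.out.println(isValid("password"));  // false: no digit, upper case or symbol
    }
}
```

If you change the server-side pattern, remember to keep the store's sign-up validation in sync with it, which is exactly what the sub theme steps below do.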

Then we can customize the sign-up process of the store by adding a sub theme.

1. Create a folder called custom-signup in the below path of the store. You can give it any preferable name.

2. Copy the following folders to the above location:
wso2am-1.7.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/js folder to wso2am-1.7.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes/custom-signup
wso2am-1.7.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/templates folder to wso2am-1.7.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes/custom-signup

3. Open the file below; it contains the password validation function of the sign-up page.
You can change the validation logic written in $.validator.passwordRating = function(password, username) according to the above pattern, or whatever pattern you want.

Please find the attached jquery.validate.password.js modified according to the above pattern.

(function($) {
    function rating(rate, message) {
        return {
            rate: rate,
            messageKey: message
        };
    }
    function uncapitalize(str) {
        return str.substring(0, 1).toLowerCase() + str.substring(1);
    }
    $.validator.passwordRating = function(password, username) {
        var minLength = 6;
        var passwordStrength = 0;
        if ((password.length > 0) && (password.length <= 5)) passwordStrength = 0;
        if (password.length >= minLength) passwordStrength++;
        if ((password.match(/[a-z]/)) && (password.match(/[A-Z]/))) passwordStrength++;
        if (password.match(/\d+/)) passwordStrength++;
        if (password.match(/.[!,@,#,$,%,^,&,*]/)) passwordStrength++;
        if (password.length > 12) passwordStrength++;
        if (username && password.toLowerCase() == username.toLowerCase()) {
            passwordStrength = 0;
        }
        switch (passwordStrength) {
            case 1:
                return rating(1, "very-weak");
            case 2:
                return rating(2, "weak");
            case 3:
                return rating(3, "weak");
            case 4:
                return rating(4, "medium");
            case 5:
                return rating(5, "strong");
            case 6:
                return rating(5, "vstrong");
            default:
                return rating(1, "very-weak");
        }
    };
    $.validator.passwordRating.messages = {
        "similar-to-username": "Too similar to username",
        "very-weak": "Very weak",
        "weak": "Weak",
        "medium": "Medium",
        "strong": "Strong",
        "vstrong": "Very Strong"
    };
    $.validator.addMethod("password", function(value, element, usernameField) {
        // use untrimmed value
        var password = element.value,
            // get username for comparison, if specified
            username = $(typeof usernameField != "boolean" ? usernameField : []);
        var rating = $.validator.passwordRating(password, username.val());
        // update the meter for this field
        var meter = $(".password-meter", element.form);
        meter.find(".password-meter-bar").removeClass().addClass("password-meter-bar").addClass("password-meter-" + rating.messageKey);
        meter.find(".password-meter-message").removeClass().addClass("password-meter-message")
            .addClass("password-meter-message-" + rating.messageKey)
            .text($.validator.passwordRating.messages[rating.messageKey]);
        // display a progress bar instead of an error message
        return rating.rate > 3;
    }, "Minimum system requirements not met");
    // manually add class rule, to make username param optional
    $.validator.classRuleSettings.password = { password: true };
})(jQuery);

Also, make sure the return rating.rate > 2; statement in $.validator.addMethod("password", function(value, element, usernameField)) gives you the expected password strength. We can modify it to
return rating.rate > 3;
to make sure that the entered password adheres to the above pattern.

4. If you want to change the above Password.policy.extensions.3.pattern to a different pattern and show password tips to the user when entering a password, you need to edit the following files.

The section below corresponds to displaying the password tips:
<div class="help-block" id="password-help" style="display:none">
 <li>This is new changed TIP :-) </li>
</div>
Please change the text as you want in the below file according to the above keys.

5. Finally, do the below config change to enable the above changes in the store.
Open wso2am-1.7.0/repository/deployment/server/jaggeryapps/store/site/conf/site.json
Add the sub theme as below:
    "theme" : {
        "base" : "fancy",
        "subtheme" : "custom-signup"
    }
This will override the current user sign-up validation with the above changes, as well as any changes you made to the password tips text.

Shelan Perera: What you should know if you travel in Paris

I had a dream to visit Paris, the city of love, with a breeze of romance. It is a lovely city with lots of things to explore. But if you are a traveller, knowing a few things I have experienced will make your journey much nicer :).

1) If you arrive at Beauvais airport and need to find a way to go to the Paris city center, you can get bus tickets from here. Ryanair, which is most of the time cheaper, will land only at this airport. Most people suggest other airports which are nearer to Paris, but if you can spend one extra hour you can land at Beauvais and come to the city center by bus, because it (flight + bus) may be cheaper than some flight options.
2) The Paris city metro is a little bit complicated, but it covers almost all areas. It may be cheaper to get the Paris 10-ticket option, which gives you 10 tickets at once at a discounted price.

3) Get a metro map and correctly identify zones and lines, because some connecting lines can be a bit confusing. Also make sure you do not go through an exit when changing lines or metros, because once you do, your ticket becomes void and you have to use another one. Some routes combine rail and metro and it may be cumbersome to find them, so to make things easier and my journeys hassle free I opted to use only the metro, even though the ticket is valid for both.

4) You should visit Paris by night, which is more beautiful than the day time. I am posting a few pics below and you can decide :)

5) The Louvre is an amazing place to visit, but it takes time to explore since it is massive. Plan to spend at least 5 to 6 hours of the day there, and make sure you check the timetable as they close the museum on time.

John Mathon: How is the Internet of Things like a Trading Floor or Stock Exchange?

Event driven architecture and publish/subscribe are important for the Internet of Things.


The Internet of things is projected to be a multi-trillion dollar market with billions of devices expected to be sold in a few years.  It is happening.

What is driving IOT is a combination of much lower cost hardware and lower power communications. This is enabling virtually everything to become connected cheaply.


There is a lot of emphasis on watches, but a vast array of devices that make our lives easier and smarter are flooding the market: fuel efficient thermostats, security systems, drones and robots of all types for business and eventually for the home.

Tesla has proved that the connected car is a very useful thing.  Virtually every car manufacturer has decided to move in this direction rapidly.



The industrial market for connected control and monitoring has existed and will expand in automated factories, logistics automation and building automation. However, efficiencies are being found in new areas; for instance, connected tools for the construction site enable construction companies to better manage the construction process.

IOT Reference Architecture

We are also seeing increased intelligence from what I call the network effect: the excess value created by the combination of devices all being on a network.


There can be no doubt that even in the most pessimistic scenario it is a multi-trillion dollar market.


The remarkable thing is that all the protocols for the Internet of Things share one common characteristic: they are all designed around publish/subscribe. TIBCO, the company I founded, used publish/subscribe to build trading floors. What do trading floors and home weather stations, light switches and robotic lawn mowers have in common?

Trading Floor Info Bus

The commonality is that information on a trading floor comes from numerous sources and is distributed to whoever needs it. In the same way, information from numerous IOT devices could be used by many subscribers. If I publish a message like "shut off all living room switches", it is not conceptually different from saying "I just bought 100 shares of IBM". The trade may trigger a number of processes in a trading firm; the command triggers a number of actions on different devices.
Publish/subscribe led to massive efficiency gains, winning over virtually the entire financial world in 3 years. Today, trading floors are still built on publish/subscribe event driven architecture.

The benefit of publish subscribe event driven computing is simplicity and efficiency.



Devices or endpoints can be dynamic, added or lost with little impact on the system. New devices can be discovered, and rules applied to add them to the network and establish their functioning. All the IOT standards support some form of discovery mechanism so that new devices can be added as close to seamlessly as possible. Over the air, a message can be delivered once to many listeners simultaneously without any extra effort by the publisher. When the cost of publishing is high, for instance on a battery powered device, less work means longer life and cheaper cost.
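As a toy illustration of this decoupling (not any particular IOT protocol or broker; the class and topic names are invented), a minimal in-memory topic bus might look like this:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A minimal in-memory publish/subscribe bus. Real IOT protocols (MQTT,
// Z-Wave, Zigbee, ...) add QoS, discovery and security on top of this idea.
class TopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // A subscriber registers interest in a topic; publishers never know who listens.
    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    // One publish fans out to every current subscriber; adding or losing a
    // device only changes the subscriber list, not the publisher.
    void publish(String topic, String message) {
        for (Consumer<String> handler : subscribers.getOrDefault(topic, Collections.emptyList())) {
            handler.accept(message);
        }
    }
}

public class PubSubSketch {
    public static void main(String[] args) {
        TopicBus bus = new TopicBus();
        bus.subscribe("livingroom/switches", msg -> System.out.println("switch 1: " + msg));
        bus.subscribe("livingroom/switches", msg -> System.out.println("switch 2: " + msg));
        bus.publish("livingroom/switches", "OFF"); // one message, two devices react
    }
}
```

Adding or removing a subscriber never touches the publisher, which is exactly the property that makes this architecture resilient to dynamic devices.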





Security and Privacy


There is a problem with all this efficiency and flexibility: security and privacy. Most of the protocols do support a way to encrypt messages, but there are serious security and privacy issues with today's protocols. Z-Wave probably has the best story of the IOT protocols, although vulnerabilities have been discovered recently. Zigbee also supports security mechanisms. There are a dozen IOT protocols, and the diversity means a lot of devices will not be secure; it is likely that different protocols will have different vulnerabilities. Authentication of devices is not generally performed, so various attacks based on impersonation are possible.
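To make the impersonation risk concrete, here is a minimal sketch (not tied to any specific IOT protocol; the key and topic strings are made up) of how a device and hub sharing a pre-shared key could authenticate messages with an HMAC, so a sender without the key cannot forge commands:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class DeviceMessageAuth {

    // Hypothetical pre-shared key provisioned to both the device and the hub
    private static final byte[] KEY = "device-42-secret".getBytes(StandardCharsets.UTF_8);

    // The sender computes an HMAC-SHA256 tag over the message
    static byte[] sign(String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    // The receiver recomputes the tag and compares in constant time
    static boolean verify(String message, byte[] tag) throws Exception {
        return MessageDigest.isEqual(sign(message), tag);
    }

    public static void main(String[] args) throws Exception {
        byte[] tag = sign("livingroom/switches OFF");
        System.out.println(verify("livingroom/switches OFF", tag)); // true
        System.out.println(verify("livingroom/switches ON", tag));  // false: forged command
    }
}
```

A real deployment would also need key provisioning, rotation and replay protection, which is why device authentication remains the hard part of IOT security.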


Most of the devices and protocols don’t automate software updating.  Sometimes quite complicated action is needed to update software on devices.   This can lead to vulnerabilities persisting for a long time.


Eventually these issues will be worked out and devices will automatically download authenticated updates.  The packets will be encrypted to prevent eavesdropping and it will be harder to hack IOT device security but this could take years.   Enterprise versions of devices will undoubtedly flourish which support better security as this will be a requirement for enterprise adoption.


Publish subscribe generates a lot of excitement, I think, because of the agility it gives people to leverage information easily, providing for faster innovation and more network effect. Point to point technologies lead to brittle architectures in which it is burdensome to add or change functionality. I have always found the publish subscribe paradigm an exciting way of designing systems, and I am excited that IOT has become a new venue where we can explore the creativity that is engendered by the publish subscribe paradigm I started.


WSO2 has staked out a significant amount of mindshare and software to support IOT technologies. We have numerous projects with IOT companies. We help IOT companies with our lean, open source, componentized event driven messaging and mediation technology that can go into devices; adapters and connectors for communication between devices and services on hubs, in the cloud or elsewhere; bigdata components for streaming, storing and analyzing data from devices; process automation and device management for IOT; and application management software for IOT applications and devices. We can help large and small firms deploying or building IOT devices bring products to market sooner and make their devices or applications smarter, easier and cheaper to manage.


IOT Reference Architecture



This EDA Series:

Event Driven Architecture – The Basics

Event Driven Architecture – Internet of Things – What do trading floors and home thermostats have in common?

Event Driven Architecture – Architectural Examples and Use Cases

Event Driven Architecture – Enterprise Concerns:  High Availability / Disaster Recovery / Fault Tolerance / Load Balancing / Transactional Semantics / Performance Design

Other Articles you may find interesting:

MicroServices – Martin Fowler, Netscape, Componentized Composable Platforms and SOA (Service Oriented Architecture)

Publish / Subscribe Event Driven Architecture in the age of Cloud, Mobile, Internet of Things(IoT), Social

ESB Performance Comparison

Enterprise Application Integration

Event Driven Architecture (EDA) Pattern

Understanding ESB Performance & Benchmarking

Understanding Enterprise Application Integration – The Benefits of ESB for EAI



Ashansa Perera: Want to map your keyboard with the keyboard for your VM?

Recently I had to work in VMs for a few of my projects, and the biggest burden is the change in keyboard. I am using a Mac and wanted to use Windows on a VM for a project. It is really irritating when you have to work in both environments interchangeably. You will definitely press cmd + c to copy in your Windows VM and wonder why cmd + v won't work :O

I found a simple solution for this with the help of AutoHotkey.
So this is how I map the Mac keyboard to a Windows VM (I am using VirtualBox).

Map the host key of VirtualBox to the right command key.

Start VirtualBox
Select VirtualBox > Preferences from the top menu

Input > Virtual Machine
Search for 'host key combination' and set the right command key for the key combination

Set up the VM to apply the changes

Download AutoHotkey to the VM
Create a file in the VM with the extension .ahk (e.g. script.ahk)
Following is sample content for the file with some of the most useful key mappings:

#SingleInstance force
#r::Send ^r ;reload
#z::Send ^z ; undo
#y::Send ^y ; redo
#f::Send ^f ; find inside apps
#c::Send ^c ; copy
#x::Send ^x ; cut
#v::Send ^v ; paste
#t::Send ^t ; new tab, Firefox
#s::Send ^s ; save inside apps
LWin & Tab::AltTab ; the motherlode, alt-tab!
#b::Send ^b ; bold / bookmarks in Firefox
#i::Send ^i ; italic
#u::Send ^u ; underline
#a::Send ^a ; select all

Add the content and save the file.
Right click on the file, select 'Run as administrator', and you are ready to go... :)

Add the script to the Startup Programs

Add your file to Startup folder and you will have no issue with VM restarts...
Start > Program Files > Startup

Hope you will find your life with VMs much easier with this.

John Mathon: Event Driven Architecture – Architectural Examples

Use cases of EDA Event Driven Architecture

Here is a general architectural toolset for building EDA:

New Platform Slide


One of the first use cases for publish / subscribe event driven computing was on a trading floor.   Let’s look at the typical architecture of a trading floor.


Use Case Trading Floors


trading floor example

A trading floor has information sources from a variety of providers.   These providers aggregate content from many sources and feed that information as a stream of subject oriented feeds.   For instance, if I am a trader who focuses on the oil sector I will subscribe to any information that I believe is relevant to what I think will affect the prices of oil securities.  Each trader has a different view of what affects oil securities or the type of trading they do so that even though you may have 2000 traders on your trading floor no two of them are interested in the same set of information nor do they want it presented in the same way.

Trading Floor Architecture Example


Building a trading floor using EDA architecture involves building an extremely performant infrastructure consisting of a number of services that must be able to sustain data rates well in excess of 1000 transactions/second. Ultra high reliability and transactional semantics are needed throughout. Every process is provided in a cluster or set of clusters, and usually an active/active method of fault tolerance is employed. A message broker is used for trades and things related to auditable entities. Topics are used to distribute market data. Systems are monitored using an activity monitor and metrics are produced. Data also needs to be reliably sent to risk analysis, which computes in real-time the credit limits and other limits the firm has on trading operations. Complex event processing is used to detect anomalous events, security events or even opportunities.

Trading Floor EDA Arch


In a high-frequency-trading (HFT) application, specialized message brokers are used that minimize latency to communicate with the stock exchanges directly. A bank of computers takes in market information directly from sources, and high powered computers calculate opportunities to trade. Such trading happens in an automated way because the timing has to be at the millisecond level to take advantage of opportunities. Specialized hardware is also used.


Other applications are for macro analysis, which usually involves complex ingestion of data from sources that aren't readily available normally. A lot of effort is put into data cleansing and into columnar time-series databases which understand the state of things as it was known and as it was modified by improvements in data. These are called as-of data, and involve persisting all variations and modifications of the data so the time-series can be recreated as it was known at a certain time. Apple uses such notions in its Time Machine technology. Calculations involve running historical data through algorithms to determine if the calculations will produce a profit or are reliable.


Use Case Health Care


Insurance companies, state health care systems and HMOs need to manage the health of customers and support medical decisions. There are 4 parts to such systems, which are sometimes called MMIS systems. The 4 components of an MMIS system are:


  1. Provider – Enrollment, Management, Credentials, Services enrollment
  2. Consumer – Enrollment, Services Application, Health Care Management
  3. Transactions, Billing and Service approvals
  4. Patient Health Data, Bigdata, Health Analysis and Analytics

Each of these systems is integrated, and each requires its own EDA architecture. Standards in the health industry include HL-7 for message format and coding. Important standards to be supported in any system include HL-7, EHR standards, ICD coding standards and numerous other changing specifications. Systems need to support strong privacy, authentication and security to protect individuals.


 Here is a view of the Connected Health vision:

Connected Health


Here is another view of the architecture:

Health Manangement Example Summary Context Diagram 

A typical Enrollment system for consumers would include at least the following components:

Health Enrollment EDA Arch


When a patient requests to enroll in a medical insurance company or system, they typically make an application in one form or another. To facilitate the numerous ways this application can be made, a mediation ESB is best practice. Mobile applications, for instance, can talk directly to the ESB.

Once an application has been received, it needs to be reliably stored and a business process initiated to process the application. Typically, the patient's past data will have to be obtained from existing medical systems, as well as the history of transactions, payments, providers etc., so that a profile can be made to determine if the application should be approved.

Over time, new information coming into the system may undermine an applicant's eligibility to participate in a certain plan. So the system has to continue to ingest data from various data sources, including information on the applicant's living address, medical conditions and behaviors. A CEP engine can detect events that may trigger a business process to review an applicant's status.


Use Case: Online Shopping 

Online shopping can vary considerably in complexity depending on the scale, the ways in which goods can be sold or acquired, and the fulfillment process.    An example online seller is presented.

Online Sales EDA Arch


In this architecture, consumers can communicate through a mobile app or go to a web site to buy things.   When they use a mobile app, it can talk directly to the ESB.  When they come in through the web site, a process is typically initiated in an app server.

All information goes through the ESB, so requests to search, look for more information, place orders, or query the status of orders are all processed through the ESB and lead either to the initiation of business processes or to directly querying the database and returning a result.

A business process will coordinate fulfillment, determine whether there is inventory and where that inventory is, and kick off a back-order process if required, which may in turn kick off processes to inform the customer of delivery dates.  Shipping may be notified in a warehouse to initiate a delivery.

In this architecture we assume the suppliers have an API for interacting with the selling merchant, so they can inform the merchant of their deliveries and so orders can be placed.  Real-time inventory must be managed in the RDB, and product information constantly ingested and updated.

Activity monitoring is used to collect data on all activities throughout the system, including the customer's, so that metrics and big data can be analyzed.   A CEP processor is included so real-time offers can be made to customers when analytics determines it would be beneficial.   The RDB is used with the Message Broker to log transactions and other mission-critical data.
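As a toy illustration of the CEP idea (plain Python, not WSO2 CEP's query language; the event fields, window, and threshold here are made up for the example), consider a rule that watches the view-event stream and emits an offer when a customer views the same product three times within a time window:

```python
from collections import defaultdict, deque

def detect_offers(events, window=60.0, threshold=3):
    """events: iterable of (timestamp, customer, product) 'view' events.
    Emits (customer, product) once the same pair is seen `threshold`
    times within `window` seconds -- a toy complex-event-processing rule."""
    seen = defaultdict(deque)   # (customer, product) -> recent timestamps
    offers = []
    for ts, customer, product in events:
        q = seen[(customer, product)]
        q.append(ts)
        while q and ts - q[0] > window:    # slide the window forward
            q.popleft()
        if len(q) >= threshold:
            offers.append((customer, product))
            q.clear()                      # fire once per burst
    return offers

events = [(0, "alice", "shoes"), (10, "alice", "shoes"),
          (200, "alice", "shoes"), (210, "alice", "shoes"),
          (220, "alice", "shoes")]
print(detect_offers(events))  # [('alice', 'shoes')]
```

The first two views fall out of the window before the later burst, so only the three views between t=200 and t=220 trigger the offer.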


Use Case: Online Taxi Service


Let us consider an online taxi service.  What architecture would such a firm need?

Ufer Taxi Arch

An online taxi service has several applications, which all talk directly to an ESB hub in the cloud or to an API management service.  A message broker is added for queueing and for creating a publish/subscribe framework in the backend infrastructure.  This allows a new pickup to alert several support services and tracking.   I also include an API Store for external developers who want to integrate the Ufer service into their apps, making it easier to arrange a pickup or drop-off from any event, bar or business that wants it.


This EDA Series:

Event Driven Architecture – The Basics

Event Driven Architecture – Internet of Things – What do trading floors and home thermostats have in common?

Event Driven Architecture – Architectural Examples and Use Cases

Event Driven Architecture – Enterprise Concerns:  High Availability / Disaster Recovery / Fault Tolerance / Load Balancing / Transactional Semantics / Performance Design

Other Articles you may find interesting:

MicroServices – Martin Fowler, Netscape, Componentized Composable Platforms and SOA (Service Oriented Architecture)

Publish / Subscribe Event Driven Architecture in the age of Cloud, Mobile, Internet of Things(IoT), Social

ESB Performance Comparison

Enterprise Application Integration

Event Driven Architecture (EDA) Pattern

Understanding ESB Performance & Benchmarking

Understanding Enterprise Application Integration – The Benefits of ESB for EAI


Shazni NazeerInstalling and testing Django

Django is a web development framework for Python. It reduces your development time and makes web development a real joy. In this article I'll show you how to install Django and test it, given that you already have Python installed on Linux. Installing Django takes a single, simple command.
$ sudo pip install django
This assumes you have the python-pip module installed. If not, install it using one of the following commands.
sudo apt-get install python-pip        // In Debian Linux/Ubuntu

sudo yum install python-pip // In Red Hat and Fedora
After installing, you can check whether everything is set up properly using the following commands.
$ python
Python 2.7.8 (default, Nov 10 2014, 08:19:18)
[GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> print django.VERSION
(1, 7, 7, 'final', 0)
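The VERSION tuple printed above follows Django's (major, minor, micro, releaselevel, serial) convention. As a small aside, here is a sketch of rendering such a tuple as the usual dotted string (plain Python, no Django needed; the suffix rules are an illustrative assumption, not Django's own get_version implementation):

```python
def version_string(version):
    """Render a Django-style VERSION tuple, e.g. (1, 7, 7, 'final', 0),
    as a dotted version string."""
    major, minor, micro, releaselevel, serial = version
    base = "%d.%d.%d" % (major, minor, micro)
    if releaselevel == "final":
        return base
    # non-final releases carry a suffix such as 1.8.0a1 or 1.8.0rc2
    suffix = {"alpha": "a", "beta": "b", "rc": "rc"}.get(releaselevel, releaselevel)
    return "%s%s%d" % (base, suffix, serial)

print(version_string((1, 7, 7, 'final', 0)))  # 1.7.7
```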
Let's start off with setting up a default project to see the Django in work.

Create a directory of your choice. Say $HOME/DjangoProjects.

Navigate to that directory and issue the following command.
$ django-admin.py startproject mysite     // Here 'mysite' is your project name
This will create a directory called mysite, inside which you'll find a file named manage.py and another directory named 'mysite', which in turn contains four files; namely __init__.py, settings.py, urls.py and wsgi.py.

Let's not get bogged down in the details of what these files are all about. You can find detailed information on the Django web site.

Let's start Django's test server for this default site. (This server is merely for testing and is not recommended for production.) Type the following to start the server.
$ python manage.py runserver 9876

This should start the server on port 9876. To see the welcome page, open a browser and navigate to http://localhost:9876.

Hope this was a good starting point for you to learn Django. Enjoy working with Django.

sanjeewa malalgodaHow to fine tune API Manager 1.8.0 to get maximum TPS and minimum response time

In this post I will discuss API Manager 1.8.0 performance tuning. I tested this in the deployment described below. Please note that these results can vary depending on your hardware, server load and network. This is not even a fully optimized environment, and you may well go beyond these numbers with a better hardware, network and configuration combination for your use case.

Server specifications

System Information
Manufacturer: Fedora Project
Product Name: OpenStack Nova
Version: 2014.2.1-1.el7.centos

4 X CPU cores
Processor Information
Socket Designation: CPU 1
Type: Central Processor
Family: Other
Manufacturer: Bochs
Max Speed: 2000 MHz
Current Speed: 2000 MHz
Status: Populated, Enabled

Memory Device
Total Width: 64 bits
Data Width: 64 bits
Size: 8192 MB
Form Factor: DIMM
Type: RAM

Deployment Details

Deployment 01.
2 gateways (each run on dedicated machine)
2 key managers(each run on dedicated machine)
MySQL database server
1 dedicated machine to run Jmeter

Deployment 02.
1 gateways
1 key managers
MySQL database server
1 dedicated machine to run Jmeter 


Configuration changes.

Gateway changes.
Enable WS key validation for key management.
Edit /home/sanjeewa/work/wso2am-1.8.0/repository/conf/api-manager.xml with the following configurations.
<KeyValidatorClientType>WSClient</KeyValidatorClientType> [Default value is ThriftClient]

<EnableThriftServer>false</EnableThriftServer> [Default value is true]
Other than this, all configurations keep their default values.
However, please note that each gateway should be configured to communicate with the key manager.
Key Manager changes.
Edit /home/sanjeewa/work/wso2am-1.8.0/repository/conf/api-manager.xml with the following configuration.
<EnableThriftServer>false</EnableThriftServer> [Default value is true]
There is no need to run the Thrift server there, as we use the WS client for key validation calls.
Both gateway and key manager nodes are configured with MySQL. For this I configured the user manager, API manager and registry databases on MySQL servers.

Tuning parameters applied.

Gateway nodes.

01. Change synapse configurations. Add the following entries to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/ file.

02. Disable HTTP access logs
Since we are testing gateway functionality here, we need not worry much about HTTP access logs. We may need to enable them to track access, but for this deployment we assume the key managers are running in a DMZ and there is no need to track HTTP access. For gateways this is usually not required anyway, as we do not expose servlet ports to the outside (normally we only open 8243 and 8280).
Add the following entry to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/ file.

Remove the following entry from /wso2am-1.8.0/repository/conf/tomcat/catalina-server.xml to disable HTTP access logs.

          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs"
               prefix="http_access_" suffix=".log"
               pattern="combined" />

03. Tune parameters in the axis2_client.xml file. We use the axis2 client to communicate from the gateway to the key manager for key validation. Edit wso2am-1.8.0/repository/conf/axis2/axis2_client.xml and update the following entries.

    <parameter name="defaultMaxConnPerHost">1000</parameter>
    <parameter name="maxTotalConnections">30000</parameter>

Key manager nodes.

01. Disable HTTP access logs
Since we are testing gateway functionality here, we need not worry much about HTTP access logs. We may need to enable them to track access, but for this deployment we assume the key managers are running in a DMZ and there is no need to track HTTP access.

Add the following entry to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/ file.

02. Change DBCP connection parameters / datasource configurations.
There can be arguments about these parameters, especially about disabling the validation query. But when we have high concurrency and well-performing database servers, we may disable it, since created connections are heavily used. On the other hand, a connection may pass validation but then fail when we actually use it. So, as I understand it, there is no issue with disabling it in a high-concurrency scenario.

Also, I added the following additional parameters to optimize the database connection pool.

If you don't want to disable the validation query, you may use the following configuration (here I increased the validation interval to avoid frequent query validation).

<validationQuery>SELECT 1</validationQuery>
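The trade-off behind the validation interval can be sketched as follows (a toy model, not DBCP's implementation; the interval and borrow cadence are illustrative): a connection is re-validated only when its last check is older than the interval, so validation queries are throttled even under heavy borrowing.

```python
class PooledConnection:
    """Toy model of DBCP-style validation throttling: run the validation
    query only when the last check is older than validation_interval."""
    def __init__(self, validation_interval=30.0):
        self.validation_interval = validation_interval
        self.last_validated = float("-inf")   # never validated yet
        self.validation_queries_run = 0

    def borrow(self, now):
        if now - self.last_validated >= self.validation_interval:
            self.validation_queries_run += 1  # would run "SELECT 1" here
            self.last_validated = now
        return self

conn = PooledConnection(validation_interval=30.0)
for t in range(0, 100, 5):    # borrowed every 5 seconds for 100 seconds
    conn.borrow(float(t))
print(conn.validation_queries_run)  # 4  (validated at t=0, 30, 60, 90)
```

With no interval, the same workload would run twenty validation queries instead of four.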

03. Tune Tomcat parameters on the key manager node.
This is important because we call the key validation web service from the gateway.
Change the following properties in the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/tomcat/catalina-server.xml file.

Here is a brief description of the changed parameters. I also added a description for each field, copied from the Tomcat connector documentation, for your reference.

I updated acceptorThreadCount to 4 (the default was 2) because my machine has 4 cores.
After this change I noticed a considerable reduction in the CPU usage of each core.

Increased maxThreads to 750 (default value was 250)
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.

Increased minSpareThreads to 250 (default value was 50)
The minimum number of threads always kept running. If not specified, the default of 10 is used.

Increased maxKeepAliveRequests to 400 (default value was 200)
The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.

Increased acceptCount to 400 (default value was 200)
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.
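Taken together, maxThreads and acceptCount bound how much concurrent load one connector will take on: roughly maxThreads requests in flight plus acceptCount connections queued, after which new connections are refused. A quick back-of-the-envelope check (simple arithmetic based on the parameter descriptions above; real Tomcat behavior has more nuance, e.g. keep-alive and the OS backlog):

```python
def connector_capacity(max_threads, accept_count):
    """Approximate concurrency bound for a Tomcat-style connector:
    requests being processed plus connections queued for a free thread.
    Beyond this, incoming connection requests are refused."""
    return max_threads + accept_count

default = connector_capacity(max_threads=250, accept_count=200)
tuned = connector_capacity(max_threads=750, accept_count=400)
print(default, tuned)  # 450 1150
```

So the tuned key manager can absorb roughly 2.5 times the concurrent load before refusing connections.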

Disabled compression. However, this might not be effective, as we do not use a compressed data format.

<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
               server="WSO2 Carbon Server"
               noCompressionUserAgents="gozilla, traviata"               



Test 01 - Clustered gateway/key manager test(2 nodes)

For this test we used 10,000 tokens and 150 concurrency per single gateway server. The test was carried out for 20 minutes to avoid caching effects on the performance figures.

John MathonEvent Driven Architecture – The Basics

Networking in the platform 1 days was a complicated affair and there were few computers that were networked.

How do you share information


I imagined a world where when something changed in one place it would automatically be delivered to all interested parties everywhere virtually instantly.



I called it the information bus, and initially when we went to VCs they pretty much laughed at us.  They had no idea there was a market for integrating applications, and nobody cared whether information was current.  They wouldn't even hear our presentation, which is pretty remarkable because VCs will generally listen to almost anything.



Managers would typically get information at the end of the month.  It staggers me to think that was considered perfectly okay.  I wonder what their reaction to today's social "checking in" would be.

Fortunately we found an industry that did care whether information was current.  Trading floors and stock exchanges needed information as soon as it was available, and our event-driven publish/subscribe technology quickly became the standard.

trading floor example

Over a short time many businesses discovered that having information just in time was actually very useful: for factory floors, for logistical operations, and even for companies which didn't seem to need real-time event-driven infrastructure.

Trading Floor Info Bus


As we implemented these event-based architectures we discovered common elements that all the systems needed.  Among the first common components were Business Activity Monitoring and Mediation.

EDA spurred Middleware


After that, people discovered a need for Business process automation and Complex event processing.   Thus event driven architecture evolved naturally to solve problems in logically orthogonal problem spaces.  Long-running processes used Business Process Servers, shorter processes were best handled in Message Brokers, stateless transactions in Enterprise Service Buses, stateful transactions in Data Services integrated with RDB’s.  Slow changing data was best handled in Registries, fast changing data was best handled in rules engines.   Real-time eventing was best handled in Complex Event Processing Engines and batch eventing in Business Activity Monitors.

EDA components classified

Each of these tools is a best practice for its type of problem, and all of them are generally useful in any organization to reduce complexity and increase robustness.   Using a tool not suited to a problem inevitably leads to complexity and failures or brittleness.  So, most organizations should break problems into pieces and use the appropriate tools.



These event-driven technologies made event-driven computing scalable and solved thorny problems.  When Ray Dalio, CEO of Bridgewater Associates, one of the smartest people in the world and a man obsessed with the quality of business processes, first saw a BPEL BPS based system for giving systematic behavior to business processes and monitoring it, he was giddy.  Hardcoded point-to-point systems had dependencies and inconsistent quality between networked components, so that changing anything broke other things you never imagined.

spaghetti architecture


Mediation, registries and publish/subscribe made the integration points between applications explicit, manageable and of consistent quality.  Event Driven Architecture facilitated agility in the long term, and in my experience in the short term too, with a little education.

SOA (Service Oriented Architecture)

Event driven computing evolved into SOA, and numerous practices and technologies evolved that made SOA the architectural standard.  Standard components in a standard SOA architecture include a message broker, which implements a publish/subscribe mechanism of topics and queues.  Topics and queues use the subjects of the publish/subscribe technology I invented to distribute information to interested parties.
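A minimal in-process sketch of the topic-based publish/subscribe idea (a toy, not any broker's actual API; the topic names are invented):

```python
class TopicBus:
    """Toy publish/subscribe hub: subscribers register interest in a
    topic (subject) and every publish on that topic fans out to them."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("trades.NYSE", received.append)
bus.subscribe("trades.NYSE", lambda m: None)    # a second interested party
bus.publish("trades.NYSE", {"symbol": "WSO2", "price": 42})
bus.publish("trades.LSE", {"symbol": "X", "price": 1})  # nobody listening
print(received)  # [{'symbol': 'WSO2', 'price': 42}]
```

A queue differs only in delivering each message to a single consumer rather than fanning it out to every subscriber.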

Message Broker Spoke

Over time event driven computing has become synonymous with this SOA architecture, but if your goal is to provide information to interested parties as fast as possible, other approaches have evolved.

HTTP / HTML came up with an alternative path to EDA

In the early days of the internet the HTTP protocol was invented to deliver information.  Originally there was no need for this information to be current.  In fact, Google and Yahoo could take 6 months to index your website.   Even today Google doesn't promise to update its search of your site on any schedule.    I imagined an internet based on publish/subscribe and worked with Cisco to implement a protocol in routers called PGM, but it never took off, for many reasons.

The HTTP protocol was enhanced and tricks were employed to create the appearance of real-time updating.  Further refinements were made in HTML5 to make bidirectional, instant, event-oriented communication easy and less of a kludge.

The publish/subscribe event-oriented architecture did not proliferate over the internet; instead the web services architecture evolved.   However, most recently publish/subscribe has emerged as the dominant approach for internet of things devices and protocols.



Enterprises continue to implement SOA event-driven architectures as the gold standard for many applications, but a web services architecture has evolved alongside it and the two work fine together.  SOA architecture works well for transactional and highly real-time dependent applications.   Web services architecture works well for more user-oriented applications where the end consumer is a human operating at human speed.

Event driven architecture is the standard today.  Almost every application or service built today assumes that you can do whatever you want immediately, get instant feedback on the status of your request and interact in real time with the request and anybody involved in the process.   Whether it is ordering products or a taxi, manufacturing, financial processes, chat or messaging everything is event driven today.

WSO2 offers a full suite of open source components for both event driven SOA architectures and web services architectures to implement highly scalable reliable enterprise grade solutions.      It is typical to use both architectures in today’s enterprises.  WSO2 is one of the only vendors that can deliver all components of both architectures.  WSO2 is also open source and built to be enterprise grade throughout.



This EDA Series:

Event Driven Architecture – The Basics

Event Driven Architecture – Internet of Things – What do trading floors and home thermostats have in common?

Event Driven Architecture – Architectural Examples and Use Cases

Event Driven Architecture – Enterprise Concerns:  High Availability / Disaster Recovery / Fault Tolerance / Load Balancing / Transactional Semantics / Performance Design

Other Articles you may find interesting:

MicroServices – Martin Fowler, Netscape, Componentized Composable Platforms and SOA (Service Oriented Architecture)

Publish / Subscribe Event Driven Architecture in the age of Cloud, Mobile, Internet of Things(IoT), Social

ESB Performance Comparison

Enterprise Application Integration

Event Driven Architecture (EDA) Pattern

Understanding ESB Performance & Benchmarking

Understanding Enterprise Application Integration – The Benefits of ESB for EAI



Shiva BalachandranWSO2 APIM : Publisher API : Add-ons

The following topics discuss the APIs exposed by the API Publisher and API Store web applications, which you can use to create and manage APIs. You can consume these APIs directly through their UIs, or via an external REST client like cURL or the WSO2 REST client.

There are a few other properties that can be set while adding APIs which are missing from the WSO2 documentation.

Specifying the In Sequence and Out Sequence 

You can specify the In Sequence and Out Sequence when adding an API. Please check the example below.

Example : curl -X POST -b cookies http://localhost:9763/publisher/site/blocks/item-add/ajax/add.jag -d "action=addAPI&name=APINAME&context=/APICONTEXT1&version=1.0.0&visibility=public

&thumbUrl=&description=DESCRIPTION OF THE API&tags=phone,mobile,multimedia&endpointType=nonsecured&wsdl=&wadl=&tier=Silver

&tiersCollection=Gold,Bronze&http_checked=http&https_checked=https&inSequence=log-in-message&outSequence=log-out-message&resourceCount=0&resourceMethod-0=GET&resourceMethodAuthType-0=Application&resourceMethodThrottlingTier-0=Unlimited&uriTemplate-0=/*" -d 'endpoint_config={"production_endpoints":

Setting Endpoint Timeout

All the advanced options of the endpoint are set in the 'config' element of the endpoint_config parameter.

Example : endpoint_config={"production_endpoints":{"url":"http://appserver/resource","config":{"format":"leave-as-is","optimize":"leave-as-is","actionSelect":"fault","actionDuration":60000}},"endpoint_type":"http"}
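When scripting this, it is safer against quoting mistakes (like the smart quotes that often creep into copied examples) to build endpoint_config with a JSON library. A sketch using the same fields as the example above:

```python
import json

# Build the endpoint_config payload for the add.jag call programmatically,
# mirroring the fields shown in the example above.
endpoint_config = {
    "production_endpoints": {
        "url": "http://appserver/resource",
        "config": {
            "format": "leave-as-is",
            "optimize": "leave-as-is",
            "actionSelect": "fault",
            "actionDuration": 60000,
        },
    },
    "endpoint_type": "http",
}

payload = "endpoint_config=" + json.dumps(endpoint_config)
print(payload.startswith("endpoint_config={"))  # True
```

The resulting string can be passed directly as the -d argument to curl.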

Creating and Updating “Business Information” section attributes (4 items) and “Make this default version”

Example : curl -X POST -b cookies http://localhost:9763/publisher/site/blocks/item-add/ajax/add.jag -d "action=addAPI&name=APINAME&context=/APICONTEXT1&version=1.0.0&visibility=public

&thumbUrl=&description=DESCRIPTION OF THE API&tags=phone,mobile,multimedia&endpointType=nonsecured&wsdl=&wadl=&tier=Silver

&tiersCollection=Gold,Bronze&http_checked=http&https_checked=https&inSequence=log-in-message&outSequence=log-out-message&resourceCount=0&resourceMethod-0=GET&resourceMethodAuthType-0=Application&resourceMethodThrottlingTier-0=Unlimited&uriTemplate-0=/*" -d 'endpoint_config={"production_endpoints":

Hope the examples provide the help you are looking for. I will update the blog further on these properties.

Sivajothy VanjikumaranInvoking WSO2 ESB proxy which uses Entitlement Mediator to evaluate a XACML rule in the WSO2 Identity Server in WSO2 Stratos 1.6

In this blog, I assume that you have an understanding of the XACML policy language and the use of WSO2 Stratos 1.6.

This is the full Synapse configuration that has been used in this sample.

Shelan PereraWhat does confidence look like?

"John Carpenter (born c. 1968)[1] became the first millionaire on the United States version of the game show Who Wants to Be a Millionaire on November 19, 1999" [1].

The most important part of the show is the last question, where he attempted to claim the final prize. Watch the following video of him facing his final question for 1 million dollars; his composure as a contestant is too good.


Dedunu DhananjayaYosemite Full Screen problem :(

People hate Yosemite, but I don't know why. I like Yosemite more than Mavericks. However, Yosemite has a problem with the maximize (zoom) button: when you click it, the window goes to full screen mode.

To avoid this, press zoom while holding the Alt key.

If you want to maximize Chrome, click zoom while holding Alt + Shift.

Enjoy Yosemite!

Umesha GunasingheOAuth2 Playground app with WSO2 Identity Server 5.0.0

This is basically a how to reference post ...:)

1) Download the playground app from here and build it using Maven

2) Get the .war app and deploy on tomcat server
3) Download the  WSO2 Identity Server.

Now we need to configure the Playground app in the IS.

4) Go to Add New Service Provider

5) Give a name, e.g. playground
6) Register the application
7) Now you will see a long list of options for the service provider; if you expand the Inbound Authentication tab, you can see the OAuth configuration

8) Click on Configure, add the relevant configuration and save

callback url :- http://localhost:8080/playground2/oauth2client

select the needed OAuth grant types, OAuth version 2.0

9) This will generate a key and a secret for the application, which can be used to invoke the authorization / token endpoints on the server (displayed after generation)

10) Once done, save the application configs

11) Start the Tomcat server and go to http://localhost:8080/playground2

12) Click on Import Photos, then select the relevant grant type and fill in the details as you go through the steps; the information needed is on the IS service provider side (secret, key, URLs etc.)

13) Using the relevant grant type, you can walk through the corresponding OAuth handshake; after getting the access token, you can import the photos :)
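For the authorization-code grant, the token-endpoint call the client makes behind the scenes looks roughly like this (a standard-library sketch of the OAuth2 request shape, not the playground's actual code; the key, secret, and code values are placeholders):

```python
import base64
from urllib.parse import urlencode

def token_request(client_key, client_secret, auth_code, redirect_uri):
    """Assemble an OAuth2 authorization-code token request: client
    credentials go in an HTTP Basic header, the grant parameters in a
    form-encoded body."""
    credentials = base64.b64encode(
        ("%s:%s" % (client_key, client_secret)).encode()).decode()
    headers = {
        "Authorization": "Basic " + credentials,
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": redirect_uri,
    })
    return headers, body

headers, body = token_request(
    "myKey", "mySecret", "someCode",
    "http://localhost:8080/playground2/oauth2client")
print("grant_type=authorization_code" in body)  # True
```

The headers and body would then be POSTed to the Identity Server's token endpoint.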

References :-


Listen to this awesome webinar for OAuth :-


The following is a very useful resource link :-


Dedunu DhananjayaTitan DB vs Neo4J

This comparison is outdated; I think Neo4J has improved a lot over time. But I'm posting it because a person who wants to compare these two technologies can get an idea of the aspects they need to focus on. If you see something outdated, please feel free to point it out in the comments, and I'll update the blog post accordingly.

License
    Neo4J: GPL/AGPL/Commercial
    Titan: Apache 2 License
Commercial Support
    Neo4J: Available. Advanced: Email 5x10, USD 6,000/yr; Enterprise: Phone 7x24, USD 24,000/yr
    Titan: Available (prices and availability of support not published officially)
Graph Type
    Neo4J: Property Graph
    Titan: Property Graph
Storage Backend
    Neo4J: Native storage engine
    Titan: Cassandra, HBase, Berkeley DB. Depending on the requirement we should select the storage backend (e.g. Cassandra for availability and partitionability, HBase for consistency and partitionability)
ACID Support
    Neo4J: Yes; has transactions in the Java API
    Titan: ACID is supported on the BerkeleyDB storage backend; on Cassandra, eventually consistent
Scalability
    Neo4J: Can't scale out like Titan
    Titan: Very good scalability; can scale like Cassandra if the storage backend is Cassandra
High Availability
    Neo4J: Replication is the only way to have high availability; failover is not smooth
    Titan: Titan is like an API, so the availability of the storage backend is the availability of the graph database; if we are using Cassandra with Titan there is no single point of failure, extremely available
Query Language
    Neo4J: Cypher and Gremlin. Cypher is easy to learn but only suitable for simple queries
    Titan: Gremlin, which has good algorithms to retrieve data in an optimal way (+ more generic)
Graph Sharding
    Neo4J: Not available, under development
    Titan: Not available, under development
Support for Languages
    Neo4J: Java/.NET/Python/PHP/NodeJS/Scala/GO
    Titan: Java
Written In
    Neo4J: Java
    Titan: Java
Protocol
    Neo4J: HTTP/REST
    Titan: Can expose REST using Rexster
Use Cases
    Neo4J: More than 10 available
    Titan: 0 use cases exposed officially
Number of Vertices and Edges Supported
    Neo4J: 2^35 (~34 billion) nodes (vertices), 2^35 (~34 billion) relationships (edges), 2^36 (~68 billion) properties, 2^15 (~32 000) relationship types
    Titan: 2^30 vertices, 2^60 (quintillion) edges
Limitations
    Neo4J: -
    Titan: Key index must be created prior to the key being used; unable to drop key indices; for bulk graph operations we have to use Faunus, otherwise storage backends get OutOfMemoryException; types cannot be changed once created
Web Admin
    Neo4J: Available
    Titan: Not available
MapReduce
    Neo4J: -
    Titan: Yes, with Faunus
Lucene Indexing Support
    Neo4J: Yes
    Titan: Yes
Backups
    Neo4J: Yes
    Titan: Yes (+ Titan parallel backup)
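The approximate powers of two quoted in the comparison table can be sanity-checked directly:

```python
# Sanity-check the approximations quoted in the comparison table.
print(2 ** 35)              # 34359738368  (~34 billion)
print(2 ** 36)              # 68719476736  (~68 billion)
print(2 ** 15)              # 32768        (~32 000)
print(2 ** 60 >= 10 ** 18)  # True: 2^60 is on the order of a quintillion
```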

This link is also very useful -

Shelan PereraHow to build Open JDK 9 on Mac OSX Yosemite

I have been struggling lately to find good resources on compiling and changing OpenJDK. There is a problem on Mac OS Yosemite, as it uses Clang as the compiler. But OpenJDK 9 builds without a problem. I am adding useful resources here in case someone finds them useful.

How to build it

1) hg clone http://hg.openjdk.java.net/jdk9/jdk9 jdk9

2) cd ./jdk9

3) bash ./get_source.sh (This will take some time to download the sources, so be patient. It may take a few hours.)

4) When you try to issue the command ./configure, you will get the following issue.

configure: error: Could not find freetype! configure exiting with result code 1

5) You need to install XQuartz to eliminate the above issue.

6) Type make all to build the system. In the Makefile you can see different options for building the system.

7) The built image will usually be available at build/macosx-x86_64-normal-server-release/jdk/

If you are trying this on Mavericks, I found the following resource, which seems useful, but I did not verify whether it actually works 100%.

Lahiru SandaruwanStratos 4.1.0 Developer Guide: Autoscaler Member Lifecycle

Autoscaling periodic task

The Autoscaler can decide to add a member to a cluster for several reasons.
  • Fulfil the minimum member count of the cluster
The minimum check rule is responsible for this task. The logic resides in the mincheck.drl file.

  • Scaling up due to high values of statistics such as memory consumption, requests in flight, etc.
The scaling.drl file contains the logic related to this.
  • Scaling up a dependency which affects the current cluster

This is triggered when we specify the scaling dependency relationships in the application.

           "scalingDependants": [

Pending status

This is the first status a member takes in the Autoscaler. When the member is created in the IaaS, we keep the member in this status until we receive the MemberActivatedEvent. We also have a timeout for pending members, which can be configured using "autoscale.xml". If the member does not become active within that period of time, the Autoscaler moves the member to the Obsolete list.
We have an "ObsoletedMemberWatcher" class which runs periodically and checks whether members have expired.

Active Status

This is the state a member enters after the MemberActivatedEvent is received by the Autoscaler from the Cloud Controller.

Termination Pending

If the Autoscaler decides to remove a member from a cluster due to one of the following reasons, it moves the member to this status.

  • Scaling down due to low values of statistics such as memory consumption, requests in flight, etc.
The scaling.drl file contains the logic related to this.
  • Scaling down a dependency which affects the current cluster

This is triggered when we specify the scaling dependency relationships in the application.

           "scalingDependants": [

The Autoscaler will also send the instance cleanup event for termination pending members; "obsoletecheck.drl" has the logic to send that event.
When the MemberReadyToShutdownEvent is received for that particular member, the Autoscaler moves the member to the Obsolete status. Also, if the MemberReadyToShutdownEvent is not received within the "terminationPendingMemberExpiryTime" (configurable using "autoscale.xml") for a particular member, the Autoscaler moves the member to the Obsolete status.

Obsolete Status

These members are terminated using the Cloud Controller API directly, by the periodic "ObsoletedMemberWatcher" task. If a member fails to terminate within a timeout defined in "autoscaler.xml", the Autoscaler removes it from the obsolete list and gives up on terminating it.


The Autoscaler also removes a member from the Obsolete list when the MemberTerminatedEvent is received. After that, the Autoscaler keeps no record of that member.
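The lifecycle described above can be sketched as a small state machine (a toy model of the Autoscaler's bookkeeping; the status names follow this post, but the timeout value and method names are made up for illustration):

```python
PENDING, ACTIVE, TERMINATION_PENDING, OBSOLETE = (
    "Pending", "Active", "TerminationPending", "Obsolete")

class Member:
    """Toy model of the Autoscaler member lifecycle."""
    def __init__(self, member_id, created_at, pending_timeout=300.0):
        self.member_id = member_id
        self.status = PENDING
        self.created_at = created_at
        self.pending_timeout = pending_timeout

    def on_member_activated(self):            # MemberActivatedEvent
        if self.status == PENDING:
            self.status = ACTIVE

    def scale_down(self):                     # Autoscaler decides to remove
        if self.status == ACTIVE:
            self.status = TERMINATION_PENDING

    def on_ready_to_shutdown(self):           # MemberReadyToShutdownEvent
        if self.status == TERMINATION_PENDING:
            self.status = OBSOLETE

    def watcher_tick(self, now):              # ObsoletedMemberWatcher task
        if self.status == PENDING and now - self.created_at > self.pending_timeout:
            self.status = OBSOLETE            # never became active in time

m = Member("member-1", created_at=0.0)
m.on_member_activated(); m.scale_down(); m.on_ready_to_shutdown()
print(m.status)  # Obsolete

stuck = Member("member-2", created_at=0.0)
stuck.watcher_tick(now=400.0)   # pending timeout expired
print(stuck.status)  # Obsolete
```

Both the happy path and the pending-timeout path end in Obsolete, after which the member would be terminated and forgotten.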

Sriskandarajah SuhothayanMake it fast for everyone

I spoke at SLASSCOM Tech Talk 2015 on performance. I have attached the slides below. The talk mainly covers how we need to think about performance across product design, choosing technologies and testing. I have taken examples from middleware products.

The topic "Make it fast for everyone" simply means that when you implement a solution that can be used by many other solutions, making your solution faster and better performing makes all the dependent components do their tasks faster.

Udara LiyanageHow to configure ciphers in WSO2 Servers

The SSL ciphers supported by a WSO2 server are the ciphers supported by the internal Tomcat server. However, you may sometimes want to customize the ciphers your server should support. For instance, Tomcat supports export-grade ciphers, which can make your server vulnerable to the recent FREAK attack. Let's see how you can define the ciphers.

  • How to view the supported ciphers

1) Download TestSSLServer.jar

2) Start the WSO2 server

List the supported ciphers
3) java -jar TestSSLServer.jar localhost 9443

Supported cipher suites (ORDER IS NOT SIGNIFICANT):
(TLSv1.1: idem)
Server certificate(s):
6bf8e136eb36d4a56ea05c7ae4b9a45b63bf975d: CN=localhost, O=WSO2, L=Mountain View, ST=CA, C=US

  • Configure the preferred ciphers

1) Open [CARBON_HOME]/repository/conf/tomcat/catalina-server.xml and find the Connector configuration corresponding to SSL/TLS. Most probably this is the connector with port 9443.

2) Add an attribute called ciphers listing the allowed ciphers, comma separated.

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"

Here I have added just one cipher for simplicity.
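Put together, the relevant part of the connector would look roughly like the following sketch; the cipher shown is only an example, and the other attributes are typical SSL connector settings that may differ in your catalina-server.xml.

```xml
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           scheme="https"
           secure="true"
           SSLEnabled="true"
           sslProtocol="TLS"
           ciphers="TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA"/>
```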

3) List the supported ciphers now
java -jar TestSSLServer.jar localhost 9443

Supported versions: TLSv1.0 TLSv1.1 TLSv1.2
Deflate compression: no
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
(TLSv1.1: idem)
(TLSv1.2: idem)
Server certificate(s):
6bf8e136eb36d4a56ea05c7ae4b9a45b63bf975d: CN=localhost, O=WSO2, L=Mountain View, ST=CA, C=US


References :

Nandika JayawardanaHow to configure an ESB proxy service as a consumer to listen to two message broker queues

In this blog post, we will look at how we can configure multiple transport receivers and senders with WSO2 ESB and configure a proxy service to have multiple transport receivers.

In order to test our scenario, we need to start two message broker instances.

Let's configure ActiveMQ to run as two instances.

1. Download activemq and extract it.
2. Run the following commands to create two instances of it.

$ ./activemq create instanceA
$ ./activemq create instanceB

Running these two commands creates two directories inside the ActiveMQ bin directory, with the configuration files and start-up scripts duplicated within them. Now we can modify the configuration files to use different ports so that there won't be port conflicts when we start the two MQ instances.

Open InstanceB/conf/activemq.xml file and modify the ports under transportConnectors.

               <transportConnector name="openwire" uri="tcp://;wireformat.maxFrameSize=104857600"/>
            <transportConnector name="amqp" uri="amqp://;wireformat.maxFrameSize=104857600"/>

Now open jetty.xml in the same directory and change the UI port from 8161 to a different port.

Now we are ready to start the two activemq instances. 

cd instanceA/bin
./instanceA console

cd instanceB/bin
./instanceB console

Now we have two ActiveMQ instances running in console mode.

Log into the ActiveMQ instanceA UI and create a queue named MyJMSQueue.
Similarly, log into ActiveMQ instanceB and create a queue with the same name.

Use http://localhost:8161/admin with username admin and password admin (the defaults).

Now we are done with the configuration of the ActiveMQ brokers.

Now copy the following jar files to repository/components/lib directory of ESB.


Configuring axis2.xml

Now go to repository/conf/axis2/axis2.xml, uncomment the JMS transport section for ActiveMQ, and duplicate it as a transport named jms1. Make sure to update the provider URL port with the value you specified in activemq.xml. My configuration looks like the following.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
    <parameter name="myTopicConnectionFactory" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
    </parameter>
    <parameter name="myQueueConnectionFactory" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    </parameter>
    <parameter name="default" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    </parameter>
</transportReceiver>

<transportReceiver name="jms1" class="org.apache.axis2.transport.jms.JMSListener">
    <parameter name="myTopicConnectionFactory1" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61636</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
    </parameter>
    <parameter name="myQueueConnectionFactory1" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61636</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    </parameter>
    <parameter name="default" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61636</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    </parameter>
</transportReceiver>

Now start the ESB and deploy the following proxy service.

<!-- the proxy service name here is illustrative -->
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="JMSConsumerProxy"
       transports="jms jms1"
       startOnLoad="true">
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">MyJMSQueue</parameter>
</proxy>

Now if you publish a message to the queue MyJMSQueue of either ActiveMQ instance, you will notice that the message is consumed by our proxy service and logged.

How does it work ?

In our scenario, since we have different configurations for the transports jms and jms1, we cannot specify the connection factory details in the proxy service itself. Hence we resort to the default configurations specified in axis2.xml.

However, we can specify the JMS destination name in our proxy service. This makes sense, as this kind of approach would only be required for an MQ high-availability scenario, and hence we can afford to have the same queue name on both message broker instances.

Sajith RavindraEmpty string returned when fetching properties stored in WSO2 ESB?

Once I encountered a scenario where I had a set of properties stored in the registry and wanted to fetch them in my ESB mediation sequence.

I had the properties stored in /_system/config/myConfig. So I used the get-property function as follows in my mediation sequence to read the values of the stored properties,
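A registry lookup of this kind typically uses the two-argument form of get-property with the 'registry' scope; the property name below is illustrative (conf:/ maps to /_system/config).

```xml
<log level="custom">
   <property name="regValue"
             expression="get-property('registry', 'conf:/myConfig/myProperty')"/>
</log>
```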



The problem was that the get-property function kept returning an empty value.

Then I debugged to find the issue, and it turned out that the registry stored in the synapse context was null.


The reason the registry was not loaded into the context was that I had missed the following in my synapse definition,

<registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
    <parameter name="cachableDuration">15000</parameter>
</registry>


When I browsed to the source view of WSO2 ESB, I noticed that the above entry was not present in the configuration, which prevents the registry from being loaded into the synapse context. After adding the above to the synapse definition, everything started working as expected.

Ajith VitharanaLogging for Jax-RS webapps.

1. Add logs to a custom log file using the Java logging API.

i) Define a logging.properties file (a sample file is included in the sample webapp).

handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
#Log file name in home directory(GNU)

#Size of the log file

#Number of log file

#Append logs to original file without creating new one

ii) Add this file (logging.properties) to the WEB-INF/classes location inside the war file. (This is configured in the pom.xml file of the sample webapp.)
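For reference, a complete logging.properties along those lines might look like the following; the file name, size limit and count values are illustrative, while the property names are the standard java.util.logging.FileHandler settings.

```properties
handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
# Log file name in the home directory
java.util.logging.FileHandler.pattern=%h/jaxrs_basic%u.log
# Size of the log file in bytes
java.util.logging.FileHandler.limit=50000
# Number of log files
java.util.logging.FileHandler.count=1
# Append logs to the original file without creating a new one
java.util.logging.FileHandler.append=true
```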

2. Add logs to the wso2carbon.log file (if deployed on a WSO2 server).

i) Copy the file, which can be found in <server_home>/repository/conf, into the war file (WEB-INF/classes).

3. Initialize the loggers.
// This is to write logs to the external file
private static Logger customLog = Logger.getLogger(CustomerService.class.getName());
// This is to write logs to the wso2carbon.log file
private static final Log wso2Log = LogFactory.getLog(CustomerService.class);
4. Java imports required.
import java.util.logging.Logger;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

    public Customer getCustomer(@PathParam("id") String id) {
        System.out.println("----invoking getCustomer, Customer id is: " + id);
        customLog.info("get customer information for " + id);
        wso2Log.info("get customer information for " + id);

        long idNumber = Long.parseLong(id);
        Customer c = customers.get(idNumber);
        return c;
    }

5. Build the war file using Maven and deploy it (e.g., to WSO2 Application Server).

6. Invoke the sample REST service using curl
curl -X GET "http://localhost:9763/jaxrs_basic/services/customers/customerservice/customers/123"
7. You can find the logs in following locations.

i) wso2carbon.log (wso2_server/repository/logs/wso2carbon.log)
ii) jaxrs_basic0.log (created in the home directory)
iii) The startup console

Nandika JayawardanaHow to configure IBM MQ 8 With WSO2 ESB

In this blog post, we will look at how to configure IBM MQ version 8 with WSO2 ESB and implement a proxy service to consume messages from a queue in IBM MQ.

Following are the steps we need to follow in order to configure ESB and implement our proxy service. 

1. Create the relevant JMS Administrative objects in IBM MQ.
2. Generate the JNDI binding file from IBM MQ
3. Configure WSO2 ESB JMS transport with the generated binding file and connection factory information.
4. Implement the proxy service and deploy it.
5. Publish a message to MQ and observe how it is consumed by ESB.

Create Queue Manager and Queue and Server Connection Channel in MQ


Step 1.

Start the WebSphere MQ Explorer. If you are not running on an administrator account, right-click the icon and select the Run as Administrator option.

Step 2.

Click on the Queue Managers and Select New => Queue Manager to create a new queue manager.

We will name the queue manager ESBQManager. Select the create server connection channel option as you go through the wizard with the Next button. You will get the option to specify the port this queue manager will use. Since we do not have any other queue managers at the moment, we can use the default port 1414.

Now we have created a queue manager object. Next we need to create a local queue, which we will use to publish messages and consume them from the ESB. Let's name this queue LocalQueue1.

Expand newly created ESBQManager and click on Queues and select New => Local Queue.

We will use default options for our local queue.

Next we need to create a server connection channel which will be used to connect to the queue manager.

Select Channels => New => Server-connection Channel and give the channel the name mychannel. Select the default options when creating the channel.

Now we have created our queue manager, queue and server connection channel.

Generating the binding file

Next we need to generate the binding file, which will be used by the IBM MQ client libraries for JNDI look-up. For that, we first need to create a directory where this binding file will be stored. I have created a directory named G:\jndidirectory for this purpose.

Now go to MQ Explorer, click on JMS Administered Objects and select Add Initial Context.

In the connection details wizard, select File System option and browse to our newly created directory and click next and click finish.

Now, under the JMS Administered objects, we should be able to see our file initial context.

Expand it and click on Connection Factories to create a new connection factory.

We will name our connection factory MyQueueConnectionFactory. For the connection factory type, select Queue Connection Factory.

Click Next and then Finish. Now click on the newly created connection factory and select Properties. Click on the connections option, then browse and select our queue manager. You can also configure the port and the host name for the connection factory. Since we used default values, we do not need to make any changes here.

For the other options, go with the defaults. Next, we need to create a JMS destination. We will use the same name, LocalQueue1, as our destination and map it to our queue LocalQueue1. Click on Destinations and select New => Destination, and provide the name LocalQueue1. When you get the option to select the queue manager and queue, browse and select ESBQManager and LocalQueue1.

Now we are done with creating the Initial Context. If you now browse to the directory we specified, you should be able to see the newly generated binding file.

In order to connect to the queue, we need to configure channel authentication. For ease of use, let's disable channel authentication for our scenario. To do that, run the runmqsc command from the command line and execute the following two commands. Note that you have to start the command prompt as an admin user.

runmqsc ESBQManager



Now we are done with configuring the IBM MQ.  

Configuring WSO2 ESB JMS Transport. 

Open axis2.xml, found in the wso2esb-4.8.1\repository\conf\axis2 directory, and add the following entries near the commented-out JMS transport receiver section.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
  <parameter name="default" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
  <parameter name="myQueueConnectionFactory1" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
</transportReceiver>
Similarly add jms transport sender section as follows.

<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender">
  <parameter name="default" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
  <parameter name="myQueueConnectionFactory1" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
</transportSender>

Since we are using the IBM MQ queue manager default configuration, it expects username/password client authentication. Here, the username and password are the login credentials of your logged-in operating system account.

Copy MQ client libraries to respective directories.

Copy jta.jar and jms.jar to repository/components/lib directory.
Copy and fscontext_1.0.0.jar to repository/components/dropins directory. Download the jar files from here.

Deploy JMSListener Proxy Service.

Now start the ESB and deploy the following simple proxy service. This proxy service acts as a listener on our queue LocalQueue1; whenever we put a message on this queue, the proxy service will pull that message out of the queue and log it.

<!-- the proxy service name here is illustrative -->
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="MQListenerProxy"
       transports="jms"
       startOnLoad="true">
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">LocalQueue1</parameter>
</proxy>

Testing our proxy service

Go to MQ Explorer and add a message to local queue. 

Now you will be able to see the message logged in ESB console as well as in the log file.

Enjoy JMS with IBM MQ

sanjeewa malalgodaHow to use API Manager Application workflow to automate token generation process

Workflow extensions allow you to attach a custom workflow to various operations in the API Manager such as user signup, application creation, registration, subscription etc. By default, the API Manager workflows have Simple Workflow Executor engaged in them.

The Simple Workflow Executor carries out an operation without any intervention by a workflow admin. For example, when the user creates an application, the Simple Workflow Executor allows the application to be created without the need for an admin to approve the creation process.

Sometimes we may need to do additional operations as part of a workflow.
In this example we will discuss how we can generate access tokens once application creation has finished. By default you need to generate keys after you create an application in the API Store. With this sample, that process is automated and access tokens are generated for your application automatically.

You can find more information about workflows in this document.

Let's first see how we can intercept the workflow-complete process and do something.

The ApplicationCreationSimpleWorkflowExecutor.complete() method executes after we resume the workflow from BPS.
There we can write our own workflow executor implementation and do whatever we need.
We will have the user name, application id, tenant domain and other parameters required to trigger subscription/key generation.
If needed, we can directly call the DAO or APIConsumerImpl to generate a token (call getApplicationAccessKey).
In this case we generate tokens from the workflow executor.

Below is the code for ApplicationCreationExecutor. This class is the same as ApplicationCreationSimpleWorkflowExecutor, but additionally generates the keys in ApplicationCreationExecutor.complete().
In this way the token is generated as soon as the application is created.

If needed, you can call OAuthAdminService's getOAuthApplicationDataByAppName on the BPS side using a SOAP call to get these details. If you want to send a mail with the generated tokens, you can do that as well.


import java.util.List;
import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.apimgt.api.APIManagementException;
import org.wso2.carbon.apimgt.impl.APIConstants;
import org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO;
import org.wso2.carbon.apimgt.impl.dto.ApplicationWorkflowDTO;
import org.wso2.carbon.apimgt.impl.dto.WorkflowDTO;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowException;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowExecutor;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowConstants;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowStatus;

import org.wso2.carbon.apimgt.impl.dto.ApplicationRegistrationWorkflowDTO;
import org.wso2.carbon.apimgt.impl.APIManagerFactory;
import org.wso2.carbon.apimgt.api.APIConsumer;

public class ApplicationCreationExecutor extends WorkflowExecutor {

    private static final Log log = LogFactory.getLog(ApplicationCreationExecutor.class);

    private String userName;
    private String appName;

    public String getWorkflowType() {
        return WorkflowConstants.WF_TYPE_AM_APPLICATION_CREATION;
    }

    /**
     * Execute the workflow executor
     *
     * @param workFlowDTO - {@link ApplicationWorkflowDTO}
     * @throws WorkflowException
     */
    public void execute(WorkflowDTO workFlowDTO) throws WorkflowException {
        if (log.isDebugEnabled()) {
            log.info("Executing Application creation Workflow..");
        }
    }

    /**
     * Complete the external process status.
     * Based on the workflow status we will update the status column of the
     * Application table.
     *
     * @param workFlowDTO - WorkflowDTO
     */
    public void complete(WorkflowDTO workFlowDTO) throws WorkflowException {
        if (log.isDebugEnabled()) {
            log.info("Complete Application creation Workflow..");
        }

        String status = null;
        if ("CREATED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_CREATED;
        } else if ("REJECTED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_REJECTED;
        } else if ("APPROVED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_APPROVED;
        }

        ApiMgtDAO dao = new ApiMgtDAO();

        try {
            // Update the status column of the Application table
            dao.updateApplicationStatus(Integer.parseInt(workFlowDTO.getWorkflowReference()), status);
        } catch (APIManagementException e) {
            String msg = "Error occurred when updating the status of the Application creation process";
            log.error(msg, e);
            throw new WorkflowException(msg, e);
        }

        ApplicationWorkflowDTO appDTO = (ApplicationWorkflowDTO) workFlowDTO;
        userName = appDTO.getUserName();
        appName = appDTO.getApplication().getName();

        System.out.println("UserName : " + userName + "   --- appName = " + appName);

        Map<String, String> mapConsumerKeySecret = null;

        try {
            APIConsumer apiConsumer = APIManagerFactory.getInstance().getAPIConsumer(userName);
            String[] appliedDomains = {""};
            // Key generation
            mapConsumerKeySecret = apiConsumer.requestApprovalForApplicationRegistration(
                    userName, appName, "PRODUCTION", "", appliedDomains, "3600");
        } catch (APIManagementException e) {
            throw new WorkflowException("An error occurred while generating token.", e);
        }

        for (Map.Entry<String, String> entry : mapConsumerKeySecret.entrySet()) {
            String key = entry.getKey();
            String value = entry.getValue();
            System.out.println("Key : " + key + "   ---  value = " + value);
        }
    }

    public List getWorkflowDetails(String workflowStatus) throws WorkflowException {
        return null;
    }
}


Then add this class as the application creation workflow.

Lakmali BaminiwattaCSV to XML transformation with WSO2 ESB Smooks Mediator

This post provides a sample CSV to XML transformation with WSO2 ESB. WSO2 ESB supports executing Smooks features through 'Smooks Mediator'. 

The latest ESB can be downloaded from here.

We are going to transform the below CSV to an XML message.


This is the format of the XML output message.
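As an illustration of the mapping (all names and values here are made up), a CSV input with the fields firstname, lastname, gender, age and country, such as:

```
kalpa,perera,male,30,srilanka
nimali,silva,female,25,srilanka
```

would come out of the transformation as:

```xml
<people>
   <person>
      <firstname>kalpa</firstname>
      <lastname>perera</lastname>
      <gender>male</gender>
      <age>30</age>
      <country>srilanka</country>
   </person>
   <person>
      <firstname>nimali</firstname>
      <lastname>silva</lastname>
      <gender>female</gender>
      <age>25</age>
      <country>srilanka</country>
   </person>
</people>
```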



First let's write the Smooks configuration to transform the above CSV to the given XML message (smooks-csv.xml).


<resource-config selector="org.xml.sax.driver">
    <param name="fields">firstname,lastname,gender,age,country</param>
    <param name="rootElementName">people</param>
    <param name="recordElementName">person</param>
</resource-config>
Now let's write a simple proxy service to take the CSV file as the input message and process it through the Smooks mediator. For that you first need to enable the VFS transport sender and receiver.

Below is the proxy service synapse configuration. Make sure to change the following parameters according to your file system. You can find more information about the parameters here.

  •   transport.vfs.FileURI
  •   transport.vfs.MoveAfterProcess
  •   transport.vfs.ActionAfterFailure

<!-- the proxy service name here is illustrative -->
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SmooksCSVProxy"
       transports="vfs"
       startOnLoad="true">
   <target>
      <inSequence>
         <smooks config-key="smooks-csv">
            <input type="text"/>
            <output type="xml"/>
         </smooks>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.PollInterval">5</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.FileURI">file:///home/lakmali/dev/test/smooks/in</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">file:///home/lakmali/dev/test/smooks/original</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">file:///home/lakmali/dev/test/smooks/original</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*.csv</parameter>
   <parameter name="transport.vfs.ContentType">text/plain</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
</proxy>

You have to create an ESB local entry with the key 'smooks-csv' and give the path to the smooks-csv.xml we created above. So in the Smooks mediator above, we are loading the Smooks config through the local entry key name (smooks-csv).

To perform the transformation, all you need to do is drop the input message file into the transport.vfs.FileURI location. In the log you can see the transformed message in XML. Now you have the CSV message as XML in your synapse sequence, so you can perform any further mediation on this message, such as sending it to an endpoint/database/file, etc.

Ajith VitharanaConnect WSO2 server to PostgreSQL

I'm going to install PostgreSQL on an Ubuntu machine and connect WSO2 API Manager 1.8.0 to it.

1. Install PostgreSQL.
sudo apt-get install -y postgresql postgresql-contrib
2. Open the postgresql.conf  file (/etc/postgresql/9.3/main/postgresql.conf ) and change the listen_addresses.
listen_addresses = 'localhost'
3. Log in to PostgreSQL.
sudo -u postgres psql
4. Create a user (e.g. vajira/vajira123).
CREATE USER <user_name> WITH PASSWORD '<password>';
5. Create a database (eg: wso2carbon_db).
CREATE DATABASE <database_name>;
6. Grant permissions to the user for that database.
GRANT ALL PRIVILEGES ON DATABASE <database_name> to <user_name>;
7. Open the pg_hba.conf file (/etc/postgresql/9.3/main/pg_hba.conf) and change the peer authentication to md5. (With peer authentication, only the operating system user can log in to the database.)

    local      all                all                        peer
8. Restart PostgreSQL.
sudo service postgresql restart
9. Run the script to create the registry and user manager databases.
psql -U vajira -d wso2carbon_db -a -f wso2am-1.8.0/dbscripts/postgresql.sql
10. Log in to the database.
psql -U <user_name> -d <database_name> -h
11. Use the following command to view the table list of wso2carbon_db.
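In psql, the standard meta-command for listing the tables in the current database is \dt:

```
wso2carbon_db=> \dt
```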
12. Download the JDBC driver and copy it to <server_home>/repository/components/lib.

13. Open the master-datasources.xml file (server_home/repository/conf/datasources/master-datasources.xml) and change the data source configuration as below.
            <description>The datasource used for registry and user manager</description>
            <definition type="RDBMS">
                    <validationQuery>SELECT 1</validationQuery>
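A complete datasource entry along those lines might look like the following sketch; the URL, credentials and pool settings are illustrative and should match your environment (the user and database names follow the examples above).

```xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/wso2carbon_db</url>
            <username>vajira</username>
            <password>vajira123</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```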

14. Start the server.
15. Using the same steps, you can create other databases in PostgreSQL.

16. You can use the pgAdmin PostgreSQL tool to connect to the database.

Dimuthu De Lanerolle

Useful Git commands

Q: How can I merge a specific pull request into my local git repo?

A: You can easily merge a desired pull request using the following command. If you are doing this merge for the first time, you need to clone a fresh checkout of the master branch to your local machine and run this command from the console.
git pull +refs/pull/78/head

Q: How do we get the current repo location of my local git repo?

A: The below command will give the git repo location your local repo is pointing to.

git remote -v

Q: Can we change the current repo URL to a different remote repo URL?

A: Yes. You can point to another repo url as below.

git remote set-url origin

Q: What is the git command to clone directly from a non-master branch? (E.g., with two branches, master and release-1.9.0, how do we clone from the release-1.9.0 branch directly, without switching to release-1.9.0 after cloning from master?)

A: Use the following git command.

git clone -b release-1.9.0


Q: I need to go ahead and build no matter whether I get build failures. Can I do that with a Maven build?

A: Yes. Try building like this.

mvn clean install -fn 

Q: Can I directly clone a tag of a particular git branch?

A: Yes. Let's imagine your tag is 4.3.0. The following command will let you directly clone the tag instead of the branch.

Syntax : git clone --branch <tag_name> <repo_url>

git clone --branch carbon-platform-integration-utils-4.3.0

Sajith KariyawasamConvert request parameter to an HTTP header in WSO2 ESB

Say you have a requirement to pass a request parameter (named key) that comes to an ESB proxy service, and it needs to be passed to the backend service as an HTTP header (named APIKEY).

Request to ESB proxy service would be as follows.
      curl -X GET "http://localhost:8280/services/MyPTProxy?key=67876gGXjjsskIISL"

In that case you can make use of the XPath expression $url:[param_name].

In your in-sequence you can add a header mediator as follows to set the HTTP header from the key request parameter.
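A minimal sketch of such a header mediator, reading the key query parameter and setting it as a transport-scope header:

```xml
<header name="APIKEY" scope="transport" expression="$url:key"/>
```

The transport scope sets the value as an HTTP transport header, so it is sent to the backend on the outgoing request.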

John MathonIOT: Should you do your automation for your disparate IOT devices locally or in the cloud? A review of hubs

IOT publish subscribe


Here is my blog series on this IoT project to automate some key home functions using a variety of IOT devices from many manufacturers.

Here is my “Buy/no buy/might buy list and IOT landscape article here”

Here is my “What are the integration points for IoT article here”

Here is my “Why would you want to integrate IoT devices article here”

Combining Services from disparate devices

There is a question of where to integrate different IOT devices. Conceptually, integrating different devices would be best done locally, since there would be less latency and less dependency on outside systems or on connectivity to the cloud. However, almost everyone wants to be able to control their devices from the cloud, so cloud connectivity is needed anyway. Also, if any of your devices collect data, you may want that data in the cloud because of issues around backup at home. Even if you build the automation locally, you will still want access from the cloud, so you will have to build, or have access to, your automation in the cloud.

Whether you decide to do your integration locally or in the cloud, the decision is also affected by what is available. Devices have varying compatibility with local hubs. Some hubs can support some devices. Nothing supports all devices, not even the Homey, which seems to support virtually every protocol out there. The reason is that device manufacturers still feel they have the option to invent their own protocols and hardware, and there may be devices from the past that use something predating the latest craze that still need to be integrated.

Some IOT devices offer SDKs for developing applications on computers or for SmartPhone apps.    Some offer APIs to talk to the device directly.   Some offer APIs to talk to a web service in the cloud.  Some offer "standard protocols" such as Z-Wave, ZigBee, CoAP, etc., which provide a way to talk to numerous devices in a standard way locally.    Some only offer a web service.   Some offer all of these and some offer only a few.   So, how do you build automation across numerous devices with different protocols and approaches, and where?

One way is to buy only devices which fit a certain profile, so that you are sure all your devices can be communicated with in a way you will support.   This is almost certainly required to some extent, as supporting a large variety of protocols and device integrations can be extremely costly and time consuming beyond the utility of a particular manufacturer's device features.

For integration locally we have the hubs on the market.  For integration in the cloud we have a service like IFTTT.  Let’s discuss each.


Local Integration Compatibility of Protocols

The IOT market is beset with hubs galore at this time.      Here are 13 hubs that are either in the market today or soon to be.  7 of them are shipping today and 6 are soon to be shipped.

Manufacturer Protocol Programmability
Shipping Wifi 802.11 Bluetooth Low Energy Near Field ZigBee Z-Wave Lutron 433Mhz nrf24l01+ Insteon Infrared Cell Apple HomeKit Other
SmartThings Y    Y    Y    Y Groovy
StaplesConnect Y    Y    Y    Y    Y NO
Wink Y    Y    Y    Y    Y    Y Robots
Insteon Hub – Microsoft Y    Y    Y    Y Insteon, dual band, wireline YES
Honeywell Lynx Y    Y    Y    Y 345Mhz, Power Backup NO
Mi Casa Verde Vera Y    Y    Y    Y    Y YES
HomeSeer Y    Y    Y    Y    Y LOCAL ONLY YES
New Smartthings N    Y    Y    Y    Y    Y Power Backup Groovy
Ninja Sphere N    Y    Y    Y    Y    Y Gestures and Trilateralization YES
Apple Hub N    Y    Y    Y ??
Homey – Kickstarter N    Y    Y Y    Y    Y    Y    Y    Y JavaScript
Oort N    Y    Y NO
Almond+ N    Y    Y    Y NO


If you want to integrate your devices locally then you face the issue of whether they are all available through a single hub device.     Wifi devices tend to be powered by a plug, since 802.11 requires more power, although it is ubiquitous.  BLE is gaining popularity but few devices are BLE in my experience.  Z-Wave is the most popular protocol and is supported by almost all hubs, with ZigBee close behind.   A requirement for sophisticated integration is that the hub support programming.   The Wink hub looks promising but its Robots language may not be sophisticated enough for some applications.   This leaves the SmartThings, MiCasaVerde, HomeSeer, Ninja Sphere and Homey as contenders.  I discarded the Insteon as it refuses to support recent protocols.   It may be a viable entry if you have a lot of legacy X10 and other devices from the past.

The first three of these (SmartThings, Vera and HomeSeer) are the only currently shipping products.

In its infinite wisdom, Lutron, which has a history of making lots of lights and other devices, decided to use a 433MHz frequency protocol which is not well supported by all the hubs.  NRF24L01+ is a hobbyist protocol.   The infrared protocol can be mitigated by purchasing irBeam kickstarter devices.   These devices support BLE and allow you to automate transmission of IR commands to control all your IR-based home entertainment devices.   Therefore, by having a BLE-enabled hub and some automation you can probably support IR devices without having support in the hub.   Some devices have a backup cell phone connection capability in case your internet connection fails.  The new SmartThings has power backup, as does the Honeywell Lynx.  However, the Lynx isn't programmable.

It is possible that, like the infrared option, there may be "point solutions" that could offer additional protocols through a proxy.  A company is proposing to build such point-solution products with no user interface so you can buy them individually.  However, this is a kickstarter project not quite past the conceptualization stage.    The Davis weather station uses a proprietary protocol and hardware for communicating between its weather sensors, so this required me to purchase the Weather Envoy, which consolidates all their devices and allows me to deliver that data locally and to the internet through an application on my Mac Mini.   Similarly, Honeywell has a history of devices using the 433MHz frequency that work very well for security and won't interact with anything but a Honeywell device.

The Ninja Sphere deserves mention because it has two very cool technologies built into it besides the standard ones.  The Ninja has a capability to detect the in-house location of the devices it is talking to.   So, by attaching a "Tag" device to anything in the house, or from the movement of devices it is listening to, it can triangulate an approximate position.   It uses this to detect if things are moving in the house.  Another feature of the Ninja is the ability to detect hand gestures around the device itself.  So, they can support the idea that tapping the device will turn things on, or that moving your arm a certain way above the device will cause the temperature of the house to go up, or something else.   The Ninja also theoretically has the ability to support Spheramids, which they say can be used to attach any future protocol that someone wants.   It's also kinda cool looking.

I bought a MYO armband, which requires you to wear something on your arm, but the MYO works through BLE, and if your hub uses BLE and has an integration with the MYO then you could support gestures anywhere, not just near the Ninja.


Local Integration Automation

Once you have selected a hub to do all your automation, you need to be able to program it for the devices you are supporting.   In my case there are several devices on my list that aren't supported by any hub, so a hub can't do it all for me.  In addition you may require, like me, the use of external web services to support some of the functionality and intelligence.

For instance, I need to go to the PG&E web site to find rates and times of rate changes.   I need to go to a weather service to find predictions for future weather, which I want to use in my automation.   I also want to look at my cell phone's movement to detect my movements, and I want to automate some things around the car (a Tesla); all of these will require external web services to work.  So, my hub must allow a programming environment powerful enough to let me include these external services as well as the data coming from the devices connected to the hub.   Even if I do the integration on a local device I will need internet connectivity to implement all the intelligence I want.

The SmartThings, Homey, Vera and Ninja claim to support a full language that can allow the complexity I would want.


Cloud Integration Compatibility of Protocols

All of my devices connect to the internet either directly or through a device locally.   So, my electric meter is monitored by PG&E which stores data on the Opower website.  I also collect real time data via a HAN compatible device which funnels the real time information to the internet as well as locally.   The Davis weather station data is available locally but also is delivered to WeatherUnderground to be accessible from the Internet.  All my Z-Wave devices talk to the Lynx 7000 locally but also deliver their status to the cloud.   So, whether I want it or not everything is in the cloud anyway.   To be truthful some of the devices could be set up so they didn’t go to the cloud but most of the time people want to be able to access devices from the cloud so it makes sense to have the data and control in the cloud too.

The spreadsheet for supported services in the cloud is much sparser than the HUB market above:

Service Integration of
F=Future Planned Prod? WEMO Google Mail Drop Box SMS TESLA GPS Following HAN Energy UP Jawbone Weather Hub Support MYO Carrier irBeacon ATT DVR MyQ Liftmaster
Google Home N
myHomeSeer N
ATT Digital Life
Vera Y Y Y Vera Y
SmartThings Y Y SmartThings
Ninja N Ninja
Homey N Homey


Google Home or Android@home is a concept at this point.  I am not aware of a delivery date.   myHomeSeer is a service that promises to allow you to control your HomeSeer devices from the cloud. ATT Digital Life is still a concept from what I can see.  I am surprised there are not more IFTTT like clones out there.
Basically, only IFTTT at this time seems to support doing automation in the cloud.   There are a number of IFTTT like services including:  Zapier, Yahoo!Pipes, CloudWork, CloudHQ.     None of them seem to provide any IOT support at this time although they could all in theory help in building the automation I am considering.

You could of course create your own virtual server in the cloud, run your own application and implement whatever automation you wanted.  You can use any of a number of PaaS services in the cloud that allow you to build and run applications in the cloud.   This requires a complete knowledge of development and building your own application from the ground up.

You could also build your own iOS or Android application and run it on your phone.   That's even more complicated.

IFTTT allows you to specify a channel as a trigger.   The channel can be any of 70 different services they support today.  So, I can say that when I send an SMS message to IFTTT it will turn on a WEMO switch which turns on something in my house.

IFTTT is far from perfect as well.   It supports very few devices directly, and I will need to leverage some clever tricks to get IFTTT to do some of the automation I want.   For instance, IFTTT allows you to specify that when an email with a specific subject comes in, it should do something.   So, I can build some automation either locally or in some PaaS service that performs part of the automation and gathers information, and then sends a message to IFTTT, through either an SMS message or Google email or other channels that already exist in IFTTT, to trigger the functionality I want.

IFTTT is working on a development platform that will allow people to build their own channels and actions.   They also have a large number of Future items that many people seem to be working on.  I suspect in a year the IFTTT story will be very much more complete and compelling looking at the comments and momentum it seems to have garnered.


Could this be simpler?

Of course I am tackling a difficult problem because I am trying to stretch what IOT can do today and how smart it can be.   There are choices I could have made to make life much simpler.   If I had stuck to virtually everything being a Z-Wave device it would have eliminated a number of complexities.  If I relaxed some of the requirements, like knowing my location or trying to integrate my Tesla, BodyMedia, Myo, Carrier devices it would make things easier.

Virtuous Circle

Things are rapidly evolving


The state of things is rapidly evolving.   Some of the hubs I am talking about are due out soon.  I expect there will be a lot more automation available in all the platforms and more compatibility.   There will still be numerous legacy things that can’t be changed and there will be vendors who refuse to be helpful.   A number of new entrants will undoubtedly confuse things.  It’s early in the IOT space.   Many vendors are trying to stake out ownership of the whole space.   Too many protocols and different models of how things can be integrated exist than can be supported.   I suspect in a year things will be better but also I don’t think the wars between the participants are nearly won.

Where to put the automation?

I am going to decide on a hub and put some integration and intelligence into the hubs and some into the cloud.   I will describe more of how I propose to split the work and the data in the next blog.  I will also address how to get around some of the thornier problems if possible.


Other articles you may find interesting:

Why would you want to Integrate IOT (Internet of Things) devices? Examples from a use case for the home.

Integrating IoT Devices. The IOT Landscape.

Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

Siri, SMS, IFTTT, and Todoist.


A Reference Architecture for the Internet of Things



Alternatives to IFTTT

Home Automation Startups

Denis WeerasiriMeasuring the Livability

Every year, I come back to Sri Lanka for 5-6 weeks to meet my family during the new year season. Every time I am back in Sri Lanka, I think about what has changed compared to the previous year, which aspects have improved, and whether such improvements affect the livability of Sri Lanka.

When it comes to livability, I am concerned about and believe in three things which are absolutely important. They are,
  • Do people respect other people?
  • Do people respect the law of the country?
  • Do people respect money?
But, how do I measure them? I just observe three simple things when I commute via public transportation.
  • Do motorists stop when pedestrians wait beside crosswalks?
  • Do motorists and pedestrians obey traffic lights and signs?
  • Do bus conductors give the right amount of change back to commuters?
If I get “Yes” for at least one question, I would say "Yes, the livability is improving".

Chathurika Erandi De SilvaValidating a WSDL

If you want to validate a WSDL, you can use one of the following two methods

1. Validate it using WSO2 ESB
2. Validate it using Eclipse Web Tools

Validate it using WSO2 ESB

In WSO2 ESB there's an inbuilt WSDL validator. This can be used to validate Axis2-based WSDLs. You can either specify the file or the source URL in order to provide the WSDL.

More on WSO2 ESB Validator

Validate it using Eclipse

Follow the steps below to get the WSDL validated through Eclipse

1. Open Eclipse
2. Go to help -> Install New Software
3. Search for "Eclipse Web Tools"
4. Install the latest version of "Eclipse Web Tools"
5. Create a new WSDL (File -> New -> WSDL)
6. Go to the source view of the WSDL and copy paste the WSDL content you want to validate
7. Save the file
8. Right click -> Validate

At this step the WSDL will be validated.

Yumani RanaweeraJWT test cases and queries related to WSO2 Carbon 4.2.0

JWT stands for JSON Web Token. It is used for authentication as well as resource access.

Decoding and verifying signed JWT token

In the default WSO2 server, the signature of the JWT token is generated using the primary keystore of the server; the private key of wso2carbon.jks is used to sign it. Therefore, when you need to verify the signature, you need to use the public certificate of the wso2carbon.jks file.

Sample Java code to validate the signature is available here -
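As a rough illustration of the token structure (a sketch only; the JwtDecode class and the sample token below are hypothetical, and real verification would additionally check the signature part against the wso2carbon.jks public certificate), a JWT is three Base64URL-encoded segments joined by dots:

```java
import java.util.Base64;

public class JwtDecode {
    // Split a JWT into its dot-separated parts and decode the
    // Base64URL-encoded header and payload. The third part (the
    // signature) is left as-is; verifying it requires the issuer's
    // public certificate (e.g. from wso2carbon.jks).
    public static String[] decode(String jwt) {
        String[] parts = jwt.split("\\.");
        Base64.Decoder decoder = Base64.getUrlDecoder();
        return new String[] {
                new String(decoder.decode(parts[0])), // header JSON
                new String(decoder.decode(parts[1]))  // payload (claims) JSON
        };
    }

    public static void main(String[] args) {
        // Build a hypothetical unsigned token for demonstration.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String jwt = enc.encodeToString("{\"alg\":\"RS256\"}".getBytes())
                + "." + enc.encodeToString("{\"sub\":\"admin\"}".getBytes())
                + ".dummy-signature";
        System.out.println(decode(jwt)[1]); // prints {"sub":"admin"}
    }
}
```

Once the payload is decoded this way, the claims added by the WSO2 JWT generator are visible as plain JSON.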

Is token generation and authentication happening in a different webapp in wso2 IS ?

In WSO2 Identity Server 5.0, token generation is done via a RESTful web app hosted inside WSO2 IS. It is accessible via https://$host:9443/oauth2/token, and you can call it with any REST client. This blog post shows how to retrieve an access token via curl -

The token validation is performed via a SOAP service; it's not a web application.

How to generate custom tokens?

How custom JWT tokens can be generated is explained here -

From where does the JWT read user attributes?

User attributes for the JWT token are retrieved from the user store that is connected with APIM. If we take the APIM scenario as an example, the user must be in the user store that APIM (Key Manager) is connected to. If claims are stored somewhere else (rather than in the user store, e.g. in some custom database table), then you need to write an extension point to retrieve claims. As mentioned in here [], you can implement a new "ClaimsRetrieverImplClass" for it.

Use cases:

There was a use case where the claims contained in SAML response needed to be used to enrich HTTP header at API invocation time.

Claims that are contained in a SAML2 Assertion are not stored by default. Therefore, they cannot be retrieved when the JWT token is created. But we can create a new user in the user store and store these attributes based on the details in the SAML2 assertion. If claims are stored in the user store by creating a new user, those attributes will be added into the JWT token by default and no customization is required here.

Optionally we can store them in a custom database table. To achieve this, you may need to customize the current SAML2 Bearer grant handler implementation, as it does not store any attributes of the SAML2 Assertion. Details on customizing an existing grant type can be found here [].

Ashansa PereraNeed to run many instances of your project in visual studio?

Solution is simple.

Right click on your solution
Set Startup Projects
Select ‘Multiple startup projects’
And select the ‘Start’ option for your projects in the list.

And now when you hit the start button, all the projects you selected will be started.

Well, what if you need several instances of the same project to be started?
Right click on the project
Debug > Start New Instance

(This option will be useful for trying out the Chat Application we developed previously. Because with more chat clients, it will be more interesting. )

Shelan PereraWhat should you do when your mobile phone is lost?

Have you ever lost your mobile phone? I have lost one twice, and yes, it is not a great position to be in. But these two incidents had two different implications. When I lost my first phone, it was a Nokia (an N71 to be exact), which was quite a smart phone back around 2007. I wanted to find the phone but could not; the story was over and life went on. But...  the second time, I lost a Nexus 4, which is an Android.

So what is the big deal?

"Oh my gmail account"

When I sensed for the first time that I had lost my mobile phone, the first thing that came to my mind was "oh gosh, all my accounts are there". But fortunately I had thought about this topic before I lost the phone, so the next steps were obvious to me. I think you might be interested, and you may have better suggestions as well. :)

Three important things

1) Your Android phone has your email account. If you use Gmail as the primary account, then you might use Gmail for most of your other online accounts as well (Facebook, Twitter, eBay and so on...).

So if anyone can get hold of it, you are doomed, because anyone can use "forgot my username" or password-reset features to take control of your other accounts.

2) Your Android phone has all your data :). There is little that Google does not back up; I think only your life cannot be backed up and restored later.

3) If your phone is not PIN protected or protected with any other mechanism, then you are at the highest level of vulnerability.

So what I should do?

1) Change your email address password immediately. This is very crucial and the most important step.

You can check your login history here. If there is a recent login just after you lost the phone, someone else may have accessed your account.

2) You should visit Google's device manager page. Using this you can erase your mobile phone data (or in other words, do a remote wipe of the data).

This will only happen if the device is online.

If you need to locate the device, you need to do it before the wipe, using the same link above.

3) Finally, you may register a police complaint.

Finding the phone is important. But for someone who needs to protect data, other accounts and also privacy, the steps mentioned above become vital.

If you have other suggestions please do share in comments :)

Madhuka UdanthaCouchDB-fauxton introduction

Here we will be using CouchDB (developer preview 2.0). You can build CouchDB following the developer-preview guidelines here.

After the build succeeds, you can start CouchDB from 'dev/run'


The above command starts a three-node cluster on ports 15984, 25984 and 35984. The front ports are 15986, 25986 and 35986.


Use the front port to check the nodes via http://localhost:15986/nodes


Then run HAProxy.

HAProxy provides load balancing and proxying for TCP and HTTP-based applications by spreading requests across multiple servers/nodes:

haproxy -f rel/haproxy.cfg


Now look at Fauxton

If you're on Windows, you can follow this post to build it.

cd src/fauxton

run the two lines below

npm install

grunt dev


Go to http://localhost:8000


Adding Document


Now it's time to enjoy the Fauxton UI for CouchDB, and to explore its features in a more interactive manner.

Sajith KariyawasamJava based CLI client to install WSO2 carbon features

WSO2 products are based on the WSO2 Carbon platform + a set of features (the Carbon platform is itself a collection of features), so WSO2 products are released with a set of pre-bundled features. For example, WSO2 Identity Server comes with the System and User Identity Management feature, Entitlement Management features, etc. API Manager comes with API management features, and features related to publishing and managing APIs. There can be a requirement where the key management features found in API Manager need to be handled in the Identity Server itself, without going for a dedicated API Manager (Key Manager) node. In such a scenario, you can install the key management features into the Identity Server.

Feature installation / management can be done via Management console UI [1] or using pom files [2]

In this blog post I'm presenting another way of installing features, that is via a Java based client.

This method can be used in situations where you have an automated setup to provision nodes with required features. You can find the code for that client on GitHub.


sanjeewa malalgodaHow to write custom handler for API Manager

In this post we have explained how we can add a handler to API Manager.

Here in this post I will add the sample code required for a handler. You can import this into your favorite IDE and start implementing your logic.

Please find sample code here[]

The dummy class would look like this. You can implement your logic there:

package org.wso2.carbon.test.gateway;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.rest.AbstractHandler;

public class TestHandler extends AbstractHandler {

    private static final String EXT_SEQUENCE_PREFIX = "WSO2AM--Ext--";
    private static final String DIRECTION_OUT = "Out";
    private static final Log log = LogFactory.getLog(TestHandler.class);

    public boolean mediate(MessageContext messageContext, String direction) {
        log.info("===============================================================================");
        return true;
    }

    public boolean handleRequest(MessageContext messageContext) {
        log.info("===============================================================================");
        return true;
    }

    public boolean handleResponse(MessageContext messageContext) {
        return mediate(messageContext, DIRECTION_OUT);
    }
}
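Once the handler is built into a JAR and copied to the Gateway (typically under repository/components/lib), it can be engaged by adding it to the handlers section of the API's Synapse configuration. A rough sketch, assuming the class name above:

```xml
<handlers>
    <!-- existing handlers (e.g. authentication, throttling) come first -->
    <handler class="org.wso2.carbon.test.gateway.TestHandler"/>
</handlers>
```

With this in place, handleRequest fires on every request to the API and handleResponse on every response.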

John MathonMicroServices – Martin Fowler, Netflix, Componentized Composable Platforms and SOA (Service Oriented Architecture)


As enterprise architects, we are faced with constructing solutions for our companies that must be implemented quickly, need to scale to almost arbitrary capacity when demand materializes, and must stand the test of time, because it is almost impossible to rebuild enterprise applications once they are successful and in production.

There are many examples of the history of really successful Enterprise Applications and how they evolved.  At TIBCO we had to build several generations of our publish/subscribe technology.  Each time we had a major architectural change which reflected our need to scale and interact with more things than the previous generation could possibly handle.   Each time we did this was a company “turning point” because a failed attempt could mean the death of the company.  So, it is with great pride that during those years we made these transitions successfully. It is a very hard thing to do to build software which is designed and architected so it is flexible enough to adapt to changing technology around it.


Having built many enterprise applications we start with good ideas of how to break things into pieces but after 5 years or more it becomes readily apparent that we broke some things into pieces the wrong way.  The book “Zen and the art of Motorcycle Maintenance”  discusses how you can look at a motorcycle engine from many vantage points giving you a different way to break down the structure of the engine which gives you different understanding of its workings.  This philosophical truth applies to motorcycle engines, string theory and software development.  There are always many ways to break a problem down.  Depending on your purpose one may work better but it is hard in advance to know which way will work best for problems in the future you have not anticipated.

Today's world of software development is 10x more productive than the software development of just a decade ago.   The use of more powerful languages, DevOps/PaaS, APIs and container technology such as Docker has revolutionized the world of software development.  I call it Platform 3.0.

A key aspect of Platform 3.0 is building reusable services and components.   You can think of Services as instances of components.   Micro-Services is the idea that we build services to be smaller entities in a lightweight container that are reusable and fast.

Micro-Services and SOA

There is some confusion about the term micro-services.   It can be a couple of different things depending on the person using it and their intention.    One definition that must be taken seriously is proposed by Martin Fowler: 

“The term “Microservice Architecture” has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.”

Martin goes on to describe micro-services as a way of building software applications as a combination of light weight services (API’s) that are independent.   However, IMO this is simply the default way all software is being written today.  It is the default architecture for Platform 3.0.

However, micro-services has emerged recently as a counterpoint to SOA-type architecture, the idea being that SOA architecture is heavyweight and has intermediate processes that slow things down and increase complexity.   Micro-service architecture in some cases is being proposed as a shortcut: you can build software by simply hardcoding services into your applications, and this is faster and in the long term better than a SOA-type approach of a mediation component or message broker to facilitate agility and best practices.

I have no beef with, nor concern about, services being broken into smaller and smaller pieces that are more and more independent of each other.   This is good architecture in general.   There is also no beef with the idea that micro-services are lightweight and can be replicated and instanced quickly as demand builds.   Micro-services can be encapsulated in containers which, woven together by a "container orchestration layer", allow one to specify how the micro-services are handled for fault tolerance and in scaling up and scaling down.    The orchestration does not have to become an intermediary process or communications layer that slows anything down.

Many of the principles of SOA have to do with agility to know what applications and services use what other applications and services.  This facilitates being able to change those dependencies quickly and reliably in an enterprise situation.   These concerns are not lessened but magnified because of a micro-services architecture.

Another definition of Micro-Services

Over the last 5 years APIs have grown dramatically in popularity and importance.   These are the “services” that are being reused in Platform 3.0 around the world in applications for mobile devices and in enterprises.     These API services are becoming ubiquitous and driving enormous innovation and growth.   See my article on 10^16 API calls.

What has evolved is that these APIs are becoming heavier and heavier weight as naturally happens with success.   As vendors get more customers with more diverse needs the APIs become bulkier and more powerful.   This is the standard evolution of software.

A counter to this is micro-services.   The idea is that APIs should be leaner and that getting the functionality you want should come from a variety of smaller APIs that each serve a particular function.   In order to facilitate efficient operation of applications that utilize micro-services, as opposed to the larger "API services", concessions have to be made on the overhead associated with invoking a micro-service.

Netflix is one of the major proponents of this idea of micro-services.   Netflix has been learning, as it builds its enormous presence in the cloud, that breaking its offerings into more and more independent services, each of which is ultra-efficient and easily instanced, is the architecture that works best.   They are even challenging one of the most powerful concepts of the last 10 years in APIs: that APIs should be RESTful.   They suggest that "simpler" protocols with less overhead work better in many situations, and are actually trying to convince people REST is not the end of the story for APIs.   Paradoxically, REST is promoted because of its simplicity compared to SOAP, for instance.

The advantage of micro-services, according to the Netflix definition, is that services can be more rapidly scaled and de-scaled because they are so lightweight.   Also, the overhead associated with conformance to heavy protocols like HTTP is not worth it for micro-services.   They need to be able to communicate in a more succinct, purpose-driven way.   Another advantage of the Netflix approach is being able to short-circuit unavailable services quicker.  Please see the article on Chaos Monkey and Circuit Breakers.


In a similar way to the first definition of micro-services, breaking these services down into hyper-small entities doesn't mean that they are not still containerized, with the interface to them documented and managed, albeit in a less obtrusive way than some API management approaches would entail.   If you look at the commandments from the Netflix development paradigm, it includes key aspects of good SOA practices and API Management concepts.

API Management and MicroServices

API management is, in my mind, simply an evolution of the SOA registration service with many additional important capabilities, most importantly SOCIAL capabilities that enable widespread reuse and transparency.   If the goal of micro-services were to avoid transparency or social capabilities, I would have a serious argument with the micro-services approach.   As I mention above, Netflix's approach doesn't suggest API Management is defunct for a micro-services architecture.

Part of the problem with API management and micro-services or with the mediation / message broker middleware technologies is that these can impose a software layer that is burdensome on the micro-services architecture.   In the case of the mediation / message broker paradigm this layer if placed between two communicating parties can seriously impede performance (either latency or overall throughput) especially if it isn’t architected well.   In fact this layer could make the entire benefit of micro-services architecture become a negative potentially.

Let us say one approach to implementing micro-services would be to put them behind a traditional API management service as offered by a number of vendors.   There would indeed be overhead introduced, because the API layers impose a stiff penalty of authentication, authorization, and then load balancing before a message can be delivered.   What is needed is a lightweight version of API management for trusted services behind a firewall that are low risk and high performance.  The only vendor I know of that offers such a capability is WSO2.

WSO2 implements the interfaces to all its products in API management by default.  The philosophy is called API Everywhere, and it enables you to see and manage all services in your architecture, even micro-services, with an API management layer.   During the debug phase you can choose to impose a heavier API management layer that logs more of what is happening, helping you understand potential issues with a component or micro-service and monitor performance and other characteristics more closely.   When you are satisfied something is production worthy, you can move to a minimal in-process gateway that minimizes the impact on performance but still gives you some API management best practices.

Other SOA components and Micro-services

Other components in the SOA architecture are mediation services and message brokers.   These components provide agility by allowing you to easily swap components in and out, version components or services, and create integrations with new services without having to change applications, among other benefits.   They are still important in any enterprise application world, and particularly important in the fast-changing world of Platform 3.0 we live in today, where services change rapidly and applications are built rapidly from existing services.

One of the primary goals of the SOA architectural components was to facilitate the eradication of the productivity-killing point-to-point architecture that made “the pile” become impossibly hard to whittle down.

More important than that is the lesson we have learned in the last 10 years: social is a critical component of reuse.  In order to facilitate reuse we must have transparency, the ability to track the performance and usage of each component, and feedback, in order to create the trust to reuse.  If the result of micro-services were to destroy transparency or the ability to track usage and performance, then I would be against micro-services.

I believe the answer is, in some cases, to use API management traditionally and, in others, to use container orchestration services such as Kubernetes, Docker Swarm, Docker Compose, Helios and other composition services.   I will be writing about these container composition services, as they are an important part of the new architecture of Platform 3.0.


Micro-services is the new terminology being espoused as the replacement architecture for everything from SOA to REST. However, the concepts of micro-services are well founded in reuse and componentization, and as long as good architectural practices from the lessons of the last two decades of programming are followed and implemented inside a framework of Platform 3.0, then micro-services is a good thing and a good way to build applications.

Micro-services promise an easier-to-scale, more efficient and more stable infrastructure in principle.   Once written, micro-services are less likely to see many modifications over time because their functionality is constrained.   This means fewer bugs, more reusability and more agility.

Micro-services, however, should still be implemented in an API management framework, and ESBs, message brokers and other SOA architectural components still make sense in this micro-services world, especially when augmented with a container composition tool.    In fact, these components should be used to make micro-services transparent and reusable.

Other Articles of interest you may want to read:

Using the cloud to enable microservices

Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?

Microservices : Building Services with the Guts on the Outside

Event Driven Architecture in the age of Cloud, Mobile, Internet of Things(IoT), Social

Is 10,000,000,000,000,000 (= 10^16 ) API calls / month a big deal?

What is Chaos Monkey?

Microservices – Martin Fowler

Merging Microservice Architecture with SOA Practices

7 Reasons to get API Management, 7 Features to look for in API Management 

Things that cost millions of dollars are costing thousands. Things that took years are taking months.

RDBs (Relational Databases) are too hard. NoSQL is the future for most data.

John Mathon: Why would you want to Integrate IOT (Internet of Things) devices? Examples from a use case for the home.


Using the Network Effect to build automation at home between IOT devices

The network effect is the value gained from the existence of a growing number of devices in a network; I talk about the network effect in this article. I am proposing to leverage multiple IOT devices in a coordinated way to add value and intelligence to my home in particular.  In the same way, a business could think about integrating hardware to add value and intelligence to how it operates.

What kinds of automation could I do that would be useful at home?   There are dozens of things that might be marginally useful.    My main goals with this project are to:

1) Reduce energy use and energy cost

2) Provide improved security

3) Enhance the comfort of the home

4) Cool Stuff

I discuss a set of IOT devices I have reviewed: some I have bought, some I have decided you shouldn’t buy, and some I am not sure about.   I also discuss the market for IOT in general here.

I have also looked at ways these devices can be interfaced and integrated with in the article here.

In this article I discuss what kinds of functionality could be achieved by integrating a diverse set of web services and IoT devices that would prove useful to an average person.  You should read the previous two articles to get the ideas behind the automation I propose, but it is not necessary.

Energy Efficiency

I want my house to be as energy efficient as possible.  This is a more complex problem than you might think.  My energy plan charges different rates for energy usage at different times of the day; daytime rates are four times the nighttime rates.   The primary drivers of my energy use are heating and cooling the house, pool cleaning and heating, various appliances and fans, and finally charging the Tesla and the lights.    I have already reduced my costs by moving house heating, pool cleaning and Tesla charging to the night time; the combination of these changes has cut my electric bill in half.    I gained further substantial benefit from moving to a variable-speed pump on the pool.   The power consumed by an electric motor goes up as the cube of the speed of the motor, so being able to run the motor at half speed during certain functions cuts the cost by 7/8!   One problem is that heating the pool with solar panels is possible only during the daytime, so part of the challenge is deciding whether to spend the energy to pump to the solar panels during peak or off-peak energy periods.
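The cube-law arithmetic above is easy to verify; here is a quick sketch:

```python
# Pump affinity law: electric motor power scales with the cube of motor speed.
def power_ratio(speed_fraction):
    """Fraction of full-speed power drawn at a given fraction of full speed."""
    return speed_fraction ** 3

half_speed = power_ratio(0.5)   # 1/8 of full power
saving = 1 - half_speed         # 7/8 of the cost saved per hour of running
print(f"Half speed uses {half_speed:.3f} of full power; saving = {saving:.3f}")
```

One caveat: at half speed the pump also moves less water per hour, so total runtime may rise; the 7/8 figure is the saving per hour of operation.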

I love my pool hot, and I want to utilize the sun to the maximum to get it hot, yet I don’t want to waste energy if heating doesn’t make sense.  The automation system should know my schedule and where I am.   There is no point heating the pool if its temperature is 60 degrees and we get one hot 80-degree day, so a dumb system that simply looks at sun irradiance to decide when to heat is going to waste a lot of energy pumping when there is no point.

In the same way, if the temperature today will hit 85 and the house is cool in the morning, the last thing I want to do is heat the house, knowing that by the afternoon I may want it cooler.   So the temperature forecast and the current temperatures inside and outside the house are important.   If it is cool outside but the sun is out, the windows will produce a lot of warmth if the shades are open; however, if it is hot outside and the sun is blaring, the house needs the shades down to reduce heating through the windows.  Substantial gains in the efficiency of the house can be had by controlling a few shades.

There is no point heating the house above a certain point if I am away from it.   On the other hand, if I am on my way back, it would be useful if the house knew that and heated itself before my arrival.  I wouldn’t want that if the energy cost were too high, but in general it is useful to know my location to figure out how much to heat or cool the house.

A NEST thermostat claims to be smart but understanding all these things is beyond what I believe a NEST can do or knows about.   The NEST is supposed to learn my patterns but I don’t have any patterns.

Here are some of the rules I have come up with to help achieve these aims with the heating system:

If off-peak, heat to 72
If lowest energy cost, raise heat to 75 to reduce heating at other times
When I am >100 miles from home, minimize heating
If I’m coming home and it’s not peak energy cost time, raise temp to 72 and send a message
If I’m coming home and it is peak energy time, send a message
When the high temp today will be >80, do not engage the heater during the day at all
If it is peak energy time, minimize heat
If I am >15 miles from home and it is a workday, minimize heat
If high temp today > 80 and inside temp > 65, lower blinds
If inside temp < 75 and high temp today < 76, raise blinds
If after sunset or before sunrise, lower blinds to conserve heat
If current outside temp is > 80 and inside temp is < 65
If guests are in the house, keep temp at 72, except at night reduce to 65
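A few of these rules can be sketched as plain Python to show how the automation logic might hang together; the field names and thresholds below are illustrative only, not a real home-automation API:

```python
# Minimal sketch of a subset of the heating rules; all names are hypothetical.
def target_heat(state):
    """Return a thermostat setpoint (deg F), or None meaning 'minimize heat'."""
    if state["miles_from_home"] > 100:
        return None                 # away on a trip: minimize heating
    if state["hitemp_today"] > 80:
        return None                 # warm day: do not engage the heater
    if state["peak_energy"]:
        return None                 # peak rates: minimize heat
    if state["lowest_energy_cost"]:
        return 75                   # pre-heat while energy is cheapest
    return 72                       # default off-peak setpoint

state = {"miles_from_home": 2, "hitemp_today": 62,
         "peak_energy": False, "lowest_energy_cost": False}
print(target_heat(state))  # 72
```

Rule order matters here: the "away" and "warm day" checks override the cheap-energy pre-heat, mirroring the priorities implied by the list above.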


These rules depend on knowing the times of day that energy rates change, the current temperature outside and inside the house, and predicted temperatures for today and beyond.   They also depend on how far I am from home and what direction I am going.   Where there is a question about what to do, the system should resolve the condition by having an SMS chat with me.   I also want to be able to override the state of the system for special conditions, like guests being in the house.    From these rules and the types of information needed to perform the automation I wanted, I picked the services and devices I needed to interface to.   For instance, I need the following devices and services:


Followmee seems like a good open source service to grab my current location.   The app records my cell phone’s location every minute, and I can use this to help automate some functions.   For instance, if my location over a period of 10 minutes moves 5 miles closer to home, I will assume I am coming home.   If I am >100 miles from home, I am on vacation or a business trip.    If I am a mile from home and 5 minutes ago I was at home, then the garage door and deadbolts should be secure.
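The "coming home" heuristic could be sketched as follows (illustrative only; Followmee just supplies the per-minute locations, and the distance-to-home computation is assumed to happen elsewhere):

```python
# Heuristic: location is sampled once a minute; if the distance to home
# drops by at least 5 miles over the last 10 samples, assume I'm heading home.
def coming_home(distances_mi):
    """distances_mi: distance-to-home samples in miles, oldest first."""
    if len(distances_mi) < 10:
        return False                 # not enough history yet
    window = distances_mi[-10:]
    return window[0] - window[-1] >= 5

samples = [22, 21, 20, 19, 17.5, 16.5, 15.5, 14.5, 13.5, 13]
print(coming_home(samples))  # True: 9 miles closer over the last 10 minutes
```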


IF THIS THEN THAT (IFTTT) is a great service that allows you to build automation across all kinds of services.   It will be the hub of my automation framework, allowing me to create the rules above to control the house.

PG&E

PG&E has information on rates at different times of the day, and also, through Opower and my smart ZigBee Rainforest Eagle, can give me information on my electricity usage.

Weatherunderground, Davis Weather Station, Weathersnoop and Weatherbug

I have a weather station from Davis that records all kinds of useful information I will need to automate numerous functions.   The weather station data is sent up to the cloud to Weatherunderground using Weathersnoop.   Forecasts can be obtained easily from Weatherbug.   I can also get internal temperature information and my pool temp from the Davis equipment.

Carrier Wifi Thermostat

The Carrier is an ultra-efficient heating and cooling system that also has a digital service for monitoring and controlling the house.   I will utilize its capabilities to control my home thermostat.

Pool Automation

I have another set of rules for controlling the pool pumping system.   I will expound on those, and the rules for other automations, in the next article.


Security and Safety

I want to use intelligence to make sure my house is secure even if I forget something.     This includes making sure the deadbolts are locked and the doors and windows are closed when I am away from the house.   I want to be notified when conditions at the house change: when winds exceed a certain range, temperatures exceed certain limits, electricity or water usage goes beyond certain points, there is an electrical interruption, or the doors, windows or garage are tampered with.   I also might want video of the house; although I haven’t purchased cameras yet, I expect I will at some point and want them integrated.


There are numerous things I have figured out would be easy to automate that would make life better, reduce mistakes, or save me a trip home or from asking a friend to do something.   The irrigation system for the plants might be a good target, but I haven’t decided to tackle that yet; there are interesting tools for it.   I have read that one of the things that can harm my Tesla’s battery is leaving it in a discharged state for a long time, so I would like to be notified if the battery is below 70 miles of range for more than 4 hours.

I have a Gazebo with an automated motorized shade on the front.  I would like to control the shade based on the time of sunset and wind conditions, rain or factors such as whether I am home or not.

These are things that could be classified as “cool.”   Since I am, and always was, a computer geek, such things are kind of “pride” issues, and it is important to have a few cool things to talk about.

Additional Services Needed for all this other Automation

Exponential Value from Connectedness

Lynx 7000 Home Security system and connected IOT devices that allow me to monitor security of the home, status of doors, garage and Z-Wave devices

Honeywell has led the way in home security cost reductions, and with the 7000 and some other models provides a low-cost service for IOT home automation.    The Honeywell Lynx 7000 can control Z-Wave devices such as the shades, both for the gazebo and in-home; the deadbolt automation is through a Yale Z-Wave-enabled device.   All these devices are controlled by the TotalConnect2 service.  Unfortunately this service does not have an API yet, but it can be hacked through access to the web site.   I hope Honeywell adds an API soon; it would make this part much easier.


The Tesla car has an API that can be used to gather information and even operate certain functions of the car.


Wemo devices from Belkin provide the ability to control electrical switches.   There are certain things I find useful to automate through these switches, and WEMO provides a nice API and service for doing so.


The BodyMedia armband is the premier workout and life band, in my opinion.   By using skin capacitance, motion, heat flux and heartbeat, the BodyMedia gives a better view of energy expended in workouts and of sleep patterns.  BodyMedia was acquired by Jawbone, which has created a uniform API for all its devices and is promoting numerous applications to control and monitor them.


The Myo armband can detect gestures of my fingers, wrist and elbow to create events to control things.   I hope to link the armband to my system so that I can raise and lower shades, turn electronics on and off, raise temperatures and even do some things to the Tesla based on arm gestures.


You may find some of the things I am trying to do stupid or not very relevant to you.   Please excuse my geekiness, but I believe some of these things could be useful to a lot of us who simply want, for instance, to minimize energy usage, or who have specialized needs for safety, security or automation.   There are so many IOT devices and services being offered now that combining devices to accomplish some level of automation is genuinely achievable.

However, as I write this article there are great difficulties in combining devices.   Many of the services and interfaces to these devices differ dramatically.   Some have integrated with IFTTT to make automation easy and some have not.   Some have easy APIs to use and some have to be hacked to gain access.   Some of the APIs are described in Python, Java or other languages, and some aren’t even REST.   It will take a good programmer or hacker to put all this automation together.   Ideally, a service like IFTTT would become a central hub, making it easy to create automations from many different IOT devices.   For now I will have to build my own automation in the cloud and leverage IFTTT.

In the next blog I will take the next step in this automation project by describing the APIs I am using for each service, the specific rules, and how to prioritize the rules and automations.   What I hope I have described is an example of how, using the network effect of multiple IoT devices, you can build intelligence into a system that makes your life better.    In a similar way, any company can conceive of how to leverage the devices mentioned here, or others, to provide better support to customers, higher productivity for workers, or reduced costs in its infrastructure.   The proof of that is left to the reader at this point.

Other Articles you may find interesting:

Integrating IoT Devices. The IOT Landscape.

Tesla Update: How is the first IoT Smart-car Connected Car faring?

Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

WSO2 – Platform 3.0 – What does the name mean? What does WSO2 offer that would be useful to your business?

A Reference Architecture for the Internet of Things


Breakout MegaTrends that will explode in 2015.

Tooling Up for the Marriage of the Internet of Things, Big Data, and Cloud Computing

The Internet of Things, Communication, APIs, and Integration

M2M integration platforms enable complex IoT systems

Srinath Perera: Introducing WSO2 Analytics Platform: Note for Architects

WSO2 has had several analytics products, WSO2 BAM and WSO2 CEP, for some time (or Big Data products, if you prefer the term).  We are adding WSO2 Machine Learner, a product to create, evaluate, and deploy predictive models, to that mix very soon. This post describes how all of those fit into a single story.

The following picture summarises what you can do with the platform.

Let’s look at each stage depicted in the picture in detail.

Stage 1: Collecting Data

There are two things for you to do.

Define Streams - Just as you create tables before you put data into a database, you first define streams before sending events. Streams are descriptions of what your data looks like (the schema). You will use the same streams to write queries at the second stage. You do this via CEP or BAM's admin console (https://host:9443/carbon) or via the Sensor API described in the next step.

Publish Events - Now you can publish events. We provide a single Sensor API to publish events to both the batch and realtime pipelines. The Sensor API is available as Java clients (Thrift, JMS, Kafka), JavaScript clients (WebSocket and REST) and hundreds of connectors via WSO2 ESB. See How to Publish Your own Events (Data) to WSO2 Analytics Platform (BAM, CEP) for details on how to write your own data publisher.
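As a rough sketch of the publishing step: an event must carry the name and version of a previously defined stream plus a payload matching its schema. The payload layout below is hypothetical (consult the WSO2 documentation for the real Sensor API wire format and receiver endpoint), but it illustrates the idea:

```python
import json

# Illustrative only: the field names here are hypothetical and the real
# WSO2 Sensor API wire format may differ.
def build_event(stream, version, payload):
    """Assemble an event dict tied to a previously defined stream (schema)."""
    return {"streamName": stream, "streamVersion": version, "payload": payload}

event = build_event("TemperatureStream", "1.0.0",
                    {"roomNo": 12, "temp": 24.5, "ts": 1426611197})
print(json.dumps(event))
# This JSON body would then be sent to the platform's event receiver
# (endpoint URL omitted here; see the WSO2 docs).
```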

Stage 2: Analyse Data

Now it is time to analyse the data. There are two ways to do this: analytics and predictive analytics.

Write Queries

For both batch and realtime processing you can write SQL-like queries. For batch queries we support Hive SQL, and for realtime queries we support the Siddhi Event Query Language.

Example 1: Realtime Query (e.g. Calculate Average Temperature over 1 minute sliding window from the Temperature Stream) 

from TemperatureStream#window.time(1 min)
select roomNo, avg(temp) as avgTemp
insert into HotRoomsStream ;
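For intuition, here is a plain-Python sketch of what a one-minute sliding-window average computes (an illustration only; this is not how the CEP engine executes Siddhi queries):

```python
from collections import deque

# Keep one minute of (timestamp, roomNo, temp) events and report each
# room's average temperature over that window as events arrive.
def window_averages(events, window_sec=60):
    """events: (timestamp_sec, roomNo, temp) tuples in arrival order."""
    window = deque()
    averages = {}
    for ts, room, temp in events:
        window.append((ts, room, temp))
        while window and ts - window[0][0] > window_sec:
            window.popleft()                      # expire events older than the window
        temps = [t for _, r, t in window if r == room]
        averages[room] = sum(temps) / len(temps)
    return averages

events = [(0, 1, 20.0), (30, 1, 22.0), (90, 1, 26.0)]
print(window_averages(events))  # the t=0 event has expired by t=90
```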

Example 2: Batch Query (e.g. Calculate Average Temperature for each hour from the Temperature Stream)

insert overwrite table TemperatureHistory
select getHour(ts) as hour, avg(t) as avgT, buildingId
from TemperatureStream group by buildingId, getHour(ts);

Build Machine Learning (Predictive Analytics) Models

Predictive analytics lets us learn “logic” from examples when that logic is complex. For example, we can build “a model” to find fraudulent transactions. To that end, we can use machine learning algorithms to train the model with historical data about fraudulent and non-fraudulent transactions.
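To make "learning logic from examples" concrete, here is a toy, stdlib-only nearest-neighbour classifier; the features and data are invented for illustration, and a real deployment would use the platform's model-building options listed below:

```python
# Toy illustration of learning from labelled examples: classify a new
# transaction by its closest historical example (1-nearest-neighbour).
# Features (amount, hour-of-day) and the data are made up.
def nearest_label(examples, point):
    """examples: ((amount, hour), label) pairs; classify by closest example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda ex: dist(ex[0], point))[1]

history = [((20.0, 14), "ok"), ((35.0, 11), "ok"),
           ((900.0, 3), "fraud"), ((1200.0, 4), "fraud")]
print(nearest_label(history, (950.0, 2)))   # fraud
print(nearest_label(history, (25.0, 13)))   # ok
```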

The WSO2 Analytics platform supports predictive analytics in multiple forms:
  1. Use the WSO2 Machine Learner (2015 Q2) wizard to build machine learning models, and use them within your business logic. For example, WSO2 CEP, BAM and ESB will support running those models.
  2. R is a widely used language for statistical computing; you can build models using R, export them as PMML (an XML description of machine learning models), and use them within WSO2 CEP. You can also directly call R scripts from CEP queries.
  3. WSO2 CEP also includes several streaming regression and anomaly detection operators.

Stage 3: Communicate the Results

OK, now we have some results, and we communicate them to the users or systems that care about them. That communication can take three forms.
  1. Alerts detect special conditions and cover the last mile to notify users (e.g. email, SMS, push notifications to a mobile app, a pager, or triggering a physical alarm). This can be done easily with CEP.
  2. Visualising data via dashboards provides the “overall idea” at a glance (e.g. a car dashboard). Dashboards support customising and creating the user's own views; when there is a special condition, they draw the user's attention to it and let him drill down to find details. The upcoming WSO2 BAM and CEP 2015 Q2 releases will have a wizard to start from your data and build custom visualisations, with support for drill-downs as well.
  3. APIs expose data to users outside the organisational boundary, and are often consumed by mobile phones. WSO2 API Manager is one of the leading API solutions, and you can use it to expose your data as APIs. In later releases, we are planning to add support for exposing data as APIs via a wizard.

Why choose WSO2 Analytics Platform?

Reason 1: One platform for realtime, batch, and combined processing - with a single API for publishing events, and with support for combined usecases like the following:
  1. Run similar queries in the batch pipeline and the realtime pipeline (a.k.a. the Lambda Architecture)
  2. Train a machine learning model (e.g. a fraud detection model) in the batch pipeline, and use it in the realtime pipeline (usecases: fraud detection, segmentation, predicting the next value, predicting churn)
  3. Detect conditions in the realtime pipeline, but switch to detailed analysis using the data stored in the batch pipeline (e.g. fraud, giving deals to customers on an e-commerce site)
Reason 2: Performance - WSO2 CEP can process 100K+ events per second and is one of the fastest realtime processing engines around. WSO2 CEP was a finalist in the DEBS Grand Challenge 2014, where it processed 0.8 million events per second with 4 nodes.

Reason 3: A scalable realtime pipeline, with support for running SQL-like CEP queries on top of Storm - Users can write queries in the SQL-like Siddhi Event Query Language, which provides higher-level operators for building complex realtime queries. See SQL-like Query Language for Real-time Streaming Analytics for more details.
For batch processing we use Apache Spark (from the 2015 Q2 release onward), and for realtime processing users can run their queries in one of two modes.
  1. Run the queries on two CEP nodes, one node acting as the HA backup for the other. Since WSO2 CEP can process in excess of a hundred thousand events per second, this choice is sufficient for many usecases.
  2. Partition the queries and streams, build an Apache Storm topology running CEP nodes as Storm spouts, and run it on top of Apache Storm. Please see the slide deck Scalable Realtime Analytics with declarative SQL like Complex Event Processing Scripts. This enables users to run complex queries, as supported by Complex Event Processing, while still scaling the computation to large data streams.
Reason 4: Support for predictive analytics - building machine learning models, comparing them and selecting the best one, and using them within real-life distributed deployments.

Almost forgot: all of this is open source under the Apache License. Most design decisions are discussed publicly at

If you find this interesting, please try it out. Please reach out to me or through if you want to know more information.

Heshan Suriyaarachchi: Fixing mysql replication errors in a master/slave setup

If you have a MySQL master/slave replication setup and have run into replication errors, you can follow the instructions below to fix the replication break and sync up the data.
1) Stop mysql on the slave.
service mysql stop
2) Login to the master.

3) Lock the master by running the following command
mysql> flush tables with read lock;
NOTE: No data can be written at this time to the master, so be quick in the following steps if your site is in production. It is important that you do not close mysql or this running ssh terminal. Login to a separate ssh terminal for the next part.

4) Now, in a separate ssh screen, type the following command from the master mysql server.
rsync -varPe ssh /var/lib/mysql root@IP_ADDRESS:/var/lib/ --delete-after
5) After this is done, now you must get the binlog and position from the master before you unlock the tables. Run the following command on master
mysql> show master status\G
Then you’ll get some information on master status. You need the file and the position because that’s what you’re going to use on the slave in a moment. See step 10 on how this information is used, but please do not skip to step 10.

6) Unlock the master. It does not need to be locked now that you have the copy of the data and the log position.
mysql> unlock tables;
7) Login to the slave now via ssh. First remove the two files : /var/lib/mysql/ and /var/lib/mysql/

8) Start mysql on the slave.
service mysqld start
9) Immediately login to mysql and stop slave
mysql> stop slave;
10) Now, run the following CHANGE MASTER TO statement, filling in the file and position from the show master status output above (step 5).
mysql> CHANGE MASTER TO MASTER_USER='replicate', MASTER_PASSWORD='replicate',
    -> MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1234512351;
11) Now start the slave.
mysql> start slave;
12) Check slave status.
mysql> show slave status\G

Heshan Suriyaarachchi: Fixing Error: Could not find a suitable provider in Puppet

I'm quite new to Puppet, and I had a Puppet script that configures a MySQL database working fine on a Puppet learning VM on VirtualBox. This issue happened when I installed and set up Puppet on a server of my own. I kept seeing the following error, and it was driving me crazy for some time.

[hesxxxxxxx@xxxxxxpocx ~]$ sudo puppet apply --verbose --noop /etc/puppet/manifests/site.pp 
Info: Loading facts
Info: Loading facts
Info: Loading facts
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Notice: Compiled catalog for xxxxxxxxxxxxxxxxxxx in environment production in 0.99 seconds
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
(at /usr/lib/ruby/site_ruby/1.8/puppet/type/package.rb:430:in `default')
Info: Applying configuration version '1426611197'
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Mysql::Server::Install/Exec[mysql_install_db]/returns: current_value notrun, should be 0 (noop)
Notice: Class[Mysql::Server::Install]: Would have triggered 'refresh' from 2 events
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: current_value {md5}8ace886bbe7e274448bc8bea16d3ead6, should be {md5}d0d209eb5ed544658b3f1a72274bc3ed (noop)
Notice: /Stage[main]/Mysql::Server::Config/File[/etc/my.cnf.d]/ensure: current_value absent, should be directory (noop)
Notice: Class[Mysql::Server::Config]: Would have triggered 'refresh' from 2 events
Notice: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: current_value stopped, should be running (noop)
Info: /Stage[main]/Mysql::Server::Service/Service[mysqld]: Unscheduling refresh on Service[mysqld]
Notice: /Stage[main]/Mysql::Server::Service/File[/var/log/mysqld.log]/ensure: current_value absent, should be present (noop)
Notice: Class[Mysql::Server::Service]: Would have triggered 'refresh' from 2 events
Error: Could not prefetch mysql_grant provider 'mysql': Command mysql is missing
Notice: /Stage[main]/Main/Node[default]/Mysql_grant[m_user@localhost/lvm.*]: Dependency Mysql_user[m_user@localhost] has failures: true
Warning: /Stage[main]/Main/Node[default]/Mysql_grant[m_user@localhost/lvm.*]: Skipping because of failed dependencies
Notice: Stage[main]: Would have triggered 'refresh' from 3 events
Error: Could not find a suitable provider for mysql_user
Error: Could not find a suitable provider for mysql_database

The issue was that I was running Puppet in --noop mode. When Puppet tried to configure the MySQL setup it gave errors, because with --noop the mysql package was never actually installed, so there was no mysql command to configure with. Removing --noop did the trick.

Although this is trivial, I thought of blogging it because someone might find it useful when facing the same issue I did.

John Mathon: How could Tesla eliminate “Range” Anxiety?


The Tesla 6.2 upgrade will consist of improvements in managing the existing range of the car; it will not improve the range itself in any way.

The car will do this by:

1) Being aware of charging stations (it is unclear if this means only Supercharging stations)

2) Understanding traffic, weather, elevation changes and other factors so that the car will be able to estimate with much greater accuracy how much power is needed to get to your destination.

3) Warning you if your destination is unreachable without additional power, or if you are driving out of reach of a Supercharging station.

These are useful convenience software features, but in my opinion they don’t really address range anxiety, because most of us cannot travel 50 or 100 miles out of the way on the spur of the moment to find the closest Supercharging station.   We also can’t sit at a conventional charging station for 3 hours to get 50 miles of extra range.   While I am a huge fan of Elon and Tesla, this does not address “range anxiety” for me: I was already aware of the limitations of the car’s battery and the options I had, so a software feature that helps people who can’t think ahead doesn’t really help me.

If range anxiety is the fear that I will literally be abandoned on the side of the road without power because I was too stupid to look at the battery level, and I need a computer to tell me that I can’t go 80 miles with only 60 miles left in the battery, then he has solved it.   If eliminating range anxiety means that, when I have insufficient charge at work to get home, my solution is to drive 30 miles out of the way from Mountain View to the Supercharging station in Fremont, spend an hour there, and then face a commute that is an additional hour, for a total of 2 1/2 hours when my initial commute was 30 minutes, then I will never use it.     I also don’t want to be told that, to get from my house to Las Vegas, I should take a route that is efficient in terms of Supercharging stations but doesn’t let me go through Big Sur or visit my friend in Newport Beach without 100-mile detours.   These are useful features to supplement normal operation, and they may warn you off a stupid mistake, but if the “remedy” is to spend 2 hours going out of my way, then it doesn’t solve range anxiety for me.

I don't want to give a bad impression of the Tesla.   The fact is I don't really have range anxiety in my car.   I always keep the battery within my normal daily driving parameters.   I am aware of charging stations near me, and if I have a problem I know where to get a boost, or I can find a charging station fairly close to where I am going using Chargepoint or other available services.   I have never had to worry about running out of charge, but that's because I think ahead.  It's not rocket science.

What would have worked for me?

What would have made my life simpler and truly ended range anxiety is having enough supercharging stations (thousands at least) that I could find one within no more than 5 miles of my planned itinerary.   If Elon would simply say we are going to build many more supercharging stations, or we have a deal with Costco or some other company with a large number of outlets that could offer supercharging on the side, then I would be much happier.   Costco and companies with big infrastructure such as Walmart already have very large power infrastructure.  Adding 5 or so supercharging stations would require practically no change to that infrastructure and would encourage customers to spend the half hour or so it takes to charge shopping at their outlet.   Who couldn't use a half hour or hour at Costco or Walmart to fill up on the essentials every now and then while getting a full charge of your car for free?

This is what Elon does with his supercharging network already by placing them at locations with stores such as Starbucks, etc….   However, a company like Costco for instance would have a ready made customer base of well off customers who would spend an hour at their store every week possibly loading up on stuff while they charge their car.   Elon gets no benefit from positioning his charging stations at these locations but Costco would see a benefit and if it was smart would consider adding electric charging alongside their inexpensive gas stations it offers at some locations.

I think you get my point.

Another viable thing I think would be possible soon

I was hoping that Elon had figured out how to use the existing circuitry in the car to charge Li-ion batteries the way some others have, achieving many more charge cycles (up to 10x) from the same batteries.   Such an improvement would make battery life less of an issue, making battery replacement a much more realistic and cost-effective option.   Battery replacement as Elon has promoted it would cut the time to a fully charged battery from an hour to 90 seconds, faster than refilling a gas vehicle.   If batteries had longer lifetimes, then the cost of a refill would simply be the cost of the energy difference between the battery you drop off and the battery you pick up.   If a battery's lifetime is limited to a certain number of cycles, then the battery replacement facility has to factor in the life of the battery and the number of cycles, and charge you for having to replace batteries much sooner, which raises the cost.   This of course is just reality.

I am an optimist.   I expect that the lifetime of the batteries will be much longer and that replacement costs will fall dramatically, bringing the cost of driving a mile close to the 100 mpg that the car sticker proclaims.    The original Tesla batteries in the Roadsters are holding up better than initially estimated.   They were guaranteed for 50,000 miles but have routinely been getting closer to 100,000 before dropping to the 80% level.   Consider that after 100,000 miles many ICE engines are losing their oats too and don't perform quite the way they did at purchase.

I did a TCO analysis before buying the Tesla and it came out very favorably for the Tesla.  Part of this is the factors above, and part is the fact that I live in California, which gives extra credits, and have PG&E, which gives me a big benefit by letting me change how I am billed for electricity.  I also assume that Elon and Tesla will prove to have built a reliable car.

TCO Analysis (Is Range Anxiety an issue in terms of overall cost of a Tesla?)

To say a Tesla gets 100mpg is correct if you don’t consider the cost of the batteries in the mpg.    If the cost of the batteries is $8,000 and the batteries last 125,000 miles then the fuel economy drops significantly.   A regular car would cost possibly $18,000-$25,000 to drive those miles with maintenance and all costs included while a Tesla might cost $15,500 including the cost of replacing batteries.   If the battery life is 250,000 miles then the cost drops to $11,000 or twice the fuel economy and if the battery is cheaper the costs fall further.

This is not an entirely fair comparison because if we really are fair an ICE car in 8 years or 125,000 or more miles may need an engine change, transmission change or other major repair.  These cars have far more items to maintain and break than a Tesla. If those costs are factored in to an ICE car then the Tesla looks much cheaper.

Another thing which is harder to factor into any TCO is safety.  The Tesla is the safest car ever built.  Hands down.  How do you factor that into TCO?

My TCO analysis of a Tesla vs. equivalent BMW M series cars was that the Tesla cost 1/3 less over 8 years.  This included factors such as residual value (I estimated the BMW would have 150% of the residual value of the Tesla, chose the BMW 6-year maintenance plan, and generously charged the BMW only $2,000/year for maintenance after year 6).  If you've owned a BMW you know I am being very light on the BMW, and it still doesn't perform as well and costs a lot more.  Forget the environment.  Forget the OTA (over-the-air) upgrades you can expect from a Tesla over 8 years compared to the lack/cost of anything you can do to your BMW.   My analysis did not make any assumptions about battery cost reductions or battery life extension.  I simply did the math on the 8 years and gave the Tesla a ding at the end just because of risk.  Yet it was much less costly than an equivalent luxury car.

On Sunday March 15, 2015 Elon Musk tweeted:

[Screenshot: Elon Musk's tweet]


There are numerous ways Tesla could extend the performance of the existing Teslas to eliminate "range" anxiety.   It depends on which "range anxiety" problem Elon attempts to address.

This would definitely end Range anxiety

[Screenshots: Tesla app, before and after the extended-range update]

Elon Musk has said that on Thursday morning at 9 AM PST he will announce the elimination of range anxiety via an OTA (over-the-air) update to all Tesla owners.

Substantial speculation about what he could be thinking of has many people guessing.   Here are the possibilities, from the point of view of a technologist.

A) A deal is announced with a major chain retailer with 10s of thousands of locations to offer supercharging locally for all Tesla owners.

B) “Insane conservation” mode allows Tesla to extend the range of Teslas by 100 miles effectively making it nearly impossible to “run out” of charge before you need to fill up.

C) The ability to charge Teslas to full charge or even 110% of full on a regular basis increasing the range of the cars by 50-100 miles (depending on model) and making it less likely you will run out of charge

D) The ability to recharge the batteries indefinitely and retain 95% of their original capacity giving the batteries a 50 year lifetime.

E) The announcement of dramatic cost reductions in the battery program making replacement batteries much cheaper.

Here are the issues that cause  “Range” anxiety

1) It takes a long time to charge the Tesla.

Typical charging takes a long time (8 hours at home from close to empty), which means managing charge differently than with an ICE (internal combustion engine) vehicle.   Elon recently said that San Francisco to LA battery replacements were working well, but that the long-term focus was still on making battery charging faster and getting more range from batteries.

New technologies are finding ways to rapidly charge batteries.  There are commercial solutions now that charge batteries 2 to 10 times faster.   Some of these require special batteries; some claim to manage the charge process better.  Unfortunately, charging cannot violate the laws of physics: to fill an 85 kWh battery you still need to deliver 85 kWh of energy.  Even if you could put that energy into the battery in 60 seconds, you would need to deliver on the order of 21,000 amps at 240 volts (about 5 megawatts) during that minute.  Since most households in the US have 200-amp service, it is not feasible to deliver this much current safely in normal environments.  Therefore it seems unlikely that Elon has made any changes to home charging time.    Improvements could be substantial but would also require homeowners to upgrade their electrical service, so improvements in this area can't be what Elon is talking about.  He said as much as well:

[Screenshot: Elon Musk's tweet]

2) There are only 100 or so supercharging stations in the US.

The fact that the high-speed charging and battery replacement technologies are limited to 100 or so locations in the United States means that rapid recharging is not as simple as finding a gas station close by.

It would be simple for Elon to strike a deal with Chevron, Shell, McDonalds or any company with points of presence ubiquitous in the US to offer supercharging capability.  Increasing the number of locations with supercharging to 10,000 or 50,000 would essentially end range anxiety by making 20 minute charging like putting fuel in an ICE, ubiquitous and easy.


3)  The "normal" driving distances of a 60 or 85 kWh Tesla are about 170 and 250 miles respectively.  While longer than other electric vehicles, road trips in excess of these limits require thinking ahead and possibly diverting your route to get to your destination.  The expected energy use of the car is 300 Wh per mile of driving, and therefore there are 2 ways of extending range:

3a) Can the efficiency of the car be substantially improved beyond 300 Wh/mile?

I don't believe the car's energy consumption can be substantially improved without a substantial rework and use of other components.  The vast majority of the car's energy is used to drive the motor, but it is possible to imagine an "insane" energy conservation mode that cuts the car's energy use 30-40% by putting hard limits on battery drain (reducing the maximum energy draw from the battery), turning off all electronics and power-consuming devices, and limiting acceleration.

If this achieved a 30-40% reduction in power consumption, you could imagine that when the car hits a quarter charge (70 miles) it automatically warns you that it is going into "insane conservation" mode and gives you 100-150 miles, effectively increasing the car's range by 50 or more miles when you are low on energy.  This may quell people's range anxiety by making them feel they can get to their destination, albeit in a hobbled state.

While this might work, it would be at the expense of the anxiety that the $100,000 car you bought acts more like a $10,000 car for a substantial amount of the time you use it.   Of course most people might not use this mode very much, and it would make the "anxiety" of being stuck with no charge considerably less.

3b) Can the capacity of the battery be increased (without replacing the battery)?

There are numerous technologies that enable charging Li-ion cells to 100% or greater (110%) while still getting the full life from the batteries.    If this could be achieved, you might gain 15-25% in typical range for Tesla owners who now charge their car to 85% or less.   It would be impossible to store much more energy than the Li-ion cell was designed for without physically damaging the chemicals in the battery.  New battery chemistries such as magnesium and lithium-air, as well as others, may be on the way, but without replacing the battery itself it would be impossible to substantially improve the capacity of existing batteries, so I doubt this is what is up Elon's sleeve in this announcement.

4) The battery is guaranteed for 8 years or 125,000 miles or more, depending on model. The cost of a replacement battery, on the order of $5,000 or more, means a substantial financial cliff exists that could produce range anxiety.

4a) Increased lifetime of the battery from 8 years to 40 or more years

The most serious degradation of Li-ion cells happens because the lithium in the cells expands and contracts as it is discharged or charged.   The physical damage from repeated full discharges or full charges comes from physical stress on the materials in the battery.   The material actually develops "scar tissue" that limits future charging and discharging capability.  Numerous papers and commercial technologies have implemented algorithms that dramatically improve the number of charges one can obtain from a Li-ion cell, by a factor of 10, by carefully controlling the charging and discharging of the cells.   As the cells are charged and discharged, mechanical stress can be minimized by controlling this process with a feedback mechanism that still allows mostly full use of the battery.   This technology is seeing more widespread adoption and could easily be what Elon is talking about.   If the battery life could be made to be 40 years or more, then the cost efficiency of the Tesla dramatically changes and the anxiety about battery management decreases.   If the batteries can be used indefinitely, it may make the battery replacement option at supercharging stations much more reasonable.

4b) Dramatically reduced cost of the battery

Elon's main goal in building a massive battery factory in Nevada is to drastically cut the cost of the battery from the reported $8,000 for an 85 kWh battery to the $3,000-4,000 range or lower.   If Elon is quite confident in his ability to achieve this, due to testing of new battery technologies, he could promise substantial cost reductions to existing Tesla owners for battery replacement, also dramatically improving the economics of the Tesla.

In summary, from my analysis these are the options for Thursday morning's 9 AM announcement from Tesla regarding the "end of range anxiety."


A) A deal is announced with a major chain retailer with 10s of thousands of locations to offer supercharging locally for all Tesla owners.

B) “Insane conservation” mode allows Tesla to extend the range of Teslas by 100 miles effectively making it nearly impossible to “run out” of charge before you need to fill up.

C) The ability to charge Teslas to full charge or greater (110%) on a regular basis increasing the typical range of the cars by 50-100 miles (depending on model) and making it less likely you will run out of charge

D) The ability to recharge the batteries indefinitely and retain 95% of their original capacity giving the batteries a 50 year lifetime.

E) The announcement of dramatic cost reductions in the battery program making replacement batteries much cheaper.

I believe that what Elon means by "eliminating range anxiety" is reducing the immediate concern that you will be stranded if you run out of juice.   This means options A, B and C are the only ones to consider.  I would consider D and E likely improvements anyway that further improve the ROI of a Tesla, but I don't believe they are what Elon will announce on Thursday.

So, that leaves us with A, B and C.   A is simply a matter of Elon working out a deal with a major retailer.    B and C could be done with software, and a combination could give a combined increase in range of 50% or more.   Option B requires operating with limited functionality, which is undesirable and seems unlikely to be so exciting an option as to merit a big announcement.  Option C is possible but would not give the huge increase in range implied by his tweet.

I therefore think option A could be a major part of the Thursday call, or a combination of A, B and C: substantial range increases plus a larger number of supercharging stations would effectively quell "range" anxiety.

 If I had to bet I would say it’s a combination of A, B and C.

Ushani BalasooriyaHow to monitor the created tcp connections for a particular proxy service in WSO2 ESB

As an example, if we want to monitor the number of TCP connections created during a proxy invocation, the following steps can be performed.

1. Assume you need to monitor the number of TCP connections created for the following proxy service. (This is a sketch of such a proxy; the proxy name and sequence layout are assumptions, but the clone targets match what the rest of this post observes.)

 <?xml version="1.0" encoding="UTF-8"?>
 <proxy xmlns="http://ws.apache.org/ns/synapse" name="Proxy1">
   <target>
     <inSequence>
       <property name="NO_KEEPALIVE" value="true" scope="axis2"/>
       <clone>
         <target><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></target>
         <target><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></target>
         <target><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></target>
       </clone>
     </inSequence>
     <outSequence>
       <aggregate>
         <onComplete xmlns:m0="http://services.samples" expression="//m0:getQuoteResponse"><send/></onComplete>
       </aggregate>
     </outSequence>
   </target>
 </proxy>

You will see there are three clone targets. Therefore the proxy should create only three TCP connections.

2. We have used SimpleStockQuoteService service as the backend.
3. Since we need time to observe the connections being created, we should delay the response coming from the backend. Therefore we need to change the code slightly in the SimpleStockQuoteService.

We include a Thread.sleep() of 10 seconds so that we have time to monitor the number of connections.
Therefore go to <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService/src/samples/services/

and add Thread.sleep(10000); as below to hold the response for some time.

 public GetQuoteResponse getQuote(GetQuote request) throws Exception {
     if ("ERR".equals(request.getSymbol())) {
         throw new Exception("Invalid stock symbol : ERR");
     }
     System.out.println(new Date() + " " + this.getClass().getName() +
             " :: Generating quote for : " + request.getSymbol());
     // hold the response for 10 seconds so the open connections can be observed
     Thread.sleep(10000);
     return new GetQuoteResponse(request.getSymbol());
 }

4. Build the SimpleStockQuoteService once again by running "ant" in <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService

5. Now start the axis2server
6. Now open a terminal and run the following netstat command to get the process id.

sudo netstat --tcp --listening --programs

You will see the relevant SimpleStockQuoteService, which is up on port 9000, like below.

tcp6 0 0 [::]:9000 [::]:* LISTEN 20664/java

So your process ID will be 20664.

7. Then open a terminal and run the command below to watch the TCP connections for that particular process id.

watch -n1 -d "netstat -n -tap | grep 20664"

8. Now open SoapUI and send the following request to Proxy1:
 <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
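The body of the request follows the usual getQuote shape for this sample service; a sketch of the full message (the symbol value is arbitrary):

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <ser:getQuote>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
      </ser:getQuote>
   </soapenv:Body>
</soapenv:Envelope>
```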

9. Watch the TCP connections in the monitoring terminal as soon as you send the request. You should see only three connections, since that is how many clone targets the proxy defines.

tcp6 0 0 ESTABLISHED 20664/java
tcp6 0 0 ESTABLISHED 20664/java
tcp6 0 0 ESTABLISHED 20664/java

Srinath PereraHow to Publish Your own Events (Data) to WSO2 Analytics Platform (BAM, CEP)

We collect data via a Sensor API (a.k.a. agents), send them to servers: WSO2 CEP and WSO2 BAM, process them, and do something with the results. You can find more information about the big picture from the slide deck

This post describes how you can collect data.

We provide a single Sensor API to publish events to both the batch and realtime pipelines. The Sensor API is available as Java clients (Thrift, JMS, Kafka), JavaScript clients (WebSocket and REST), and 100s of connectors via WSO2 ESB.

Let's see how we can use the Java Thrift client to publish events.

First of all, you need CEP or BAM running. Download, unzip, and run WSO2 CEP or WSO2 BAM (via the startup script in the bin/ directory).

Now, let's write a client. To set up the classpath, add the jars given in Appendix A, or add the POM dependencies given in Appendix B to your Maven POM file.

The Java client would look like the following.

Just like you create tables before you put data into a database, you first define streams before sending events to the WSO2 Analytics Platform. A stream is a description of what your data looks like (a.k.a. a schema). Then you can publish events. In the code, the event data is an array of objects, and it must match the types and order given in the event stream definition.
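As a sketch (the stream name, port 7611, and admin credentials here are assumptions, and the DataPublisher class comes from the databridge agent jars listed in Appendix A — check the pizza-shop sample for the exact API in your version):

```java
import org.wso2.carbon.databridge.agent.thrift.DataPublisher;

public class PizzaOrderPublisher {
    public static void main(String[] args) throws Exception {
        // connect to the Thrift port of a locally running CEP/BAM
        DataPublisher dataPublisher = new DataPublisher("tcp://localhost:7611", "admin", "admin");

        // define the stream (the schema) before publishing any events
        String streamId = dataPublisher.defineStream("{" +
                " \"name\":\"pizza.orders\", \"version\":\"1.0.0\"," +
                " \"payloadData\":[" +
                "   {\"name\":\"customer\",\"type\":\"STRING\"}," +
                "   {\"name\":\"quantity\",\"type\":\"INT\"}]" +
                "}");

        // event data: the meta, correlation and payload arrays must match the definition
        dataPublisher.publish(streamId, null, null, new Object[]{"John", 2});
        dataPublisher.stop();
    }
}
```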

You can find an example client in /samples/producers/pizza-shop of the WSO2 CEP distribution.

Appendix A: Dependency Jars

You can find the jars in the ${cep.home}/repository/components/plugins/ directory of the CEP or BAM pack.

  1. org.wso2.carbon.logging_4.2.0.jar
  2. commons-pool_1.5.6.*.jar
  3. httpclient_4.2.5.*.jar
  4. httpcore_4.3.0.*.jar
  5. commons-httpclient_3.1.0.*.jar
  6. commons-codec_1.4.0.*.jar
  7. slf4j.log4j*.jar
  8. slf4j.api_*.jar
  9. axis2_1.6.1.*.jar
  10. axiom_1.2.11.*.jar
  11. wsdl4j_1.6.2.*.jar
  12. XmlSchema_1.4.7.*.jar
  13. neethi_*.jar
  14. org.wso2.securevault_*.jar
  15. org.wso2.carbon.databridge.agent.thrift_*.jar
  16. org.wso2.carbon.databridge.commons.thrift_*.jar
  17. org.wso2.carbon.databridge.commons_*.jar
  18. libthrift_*.jar

Appendix B: Maven POM Dependencies

Add the following WSO2 nexus repo and dependencies to pom.xml in the corresponding sections.
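A sketch of the additions (the artifact versions here are assumptions — match them to the jars in your pack):

```xml
<repositories>
    <repository>
        <id>wso2-nexus</id>
        <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.wso2.carbon</groupId>
        <artifactId>org.wso2.carbon.databridge.agent.thrift</artifactId>
        <version>4.2.0</version>
    </dependency>
    <dependency>
        <groupId>org.wso2.carbon</groupId>
        <artifactId>org.wso2.carbon.databridge.commons</artifactId>
        <version>4.2.0</version>
    </dependency>
</dependencies>
```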



Kavith Thiranga LokuhewageHow to use DTO Factory in Eclipse Che

What is a DTO?

Data transfer objects are used in Che for communication between client and server. At the code level, a DTO is just an interface annotated with @DTO (com.codenvy.dto.shared.DTO). The interface should contain getters and setters (following bean naming conventions) for each field we need in the object.

For example, the following is a DTO with a single String field.

@DTO
public interface HelloUser {
    String getHelloMessage();
    void setHelloMessage(String message);
}

By convention, we put these DTOs in a shared package, as they are used by both the client and server sides.

DTO Factory 

DTO Factory is a factory, available on both the client and server sides, that can be used to serialize/deserialize DTOs. It internally uses generated DTO implementations (described in the next section) to get the job done. Yet it has a properly encapsulated API, and developers can simply use a DtoFactory instance directly.

For client side   : com.codenvy.ide.dto.DtoFactory
For server side  : com.codenvy.dto.server.DtoFactory

HelloUser helloUser = DtoFactory.getInstance().createDto(HelloUser.class);

The above code snippet shows how to initialize a DTO using DtoFactory. As mentioned above, the appropriate DtoFactory class should be used on the client or server side.

Deserializing in client side

//important imports

//invoke helloService
Unmarshallable<HelloUser> unmarshaller = unmarshallerFactory.newUnmarshaller(HelloUser.class);

helloService.sayHello(sayHello, new AsyncRequestCallback<HelloUser>(unmarshaller) {
    @Override
    protected void onSuccess(HelloUser result) {
        // use the deserialized DTO, e.g. result.getHelloMessage()
    }

    @Override
    protected void onFailure(Throwable exception) {
        // handle the failure
    }
});

When invoking a service that returns a DTO, the client side should register a callback created with the relevant unmarshaller factory. The onSuccess method is then called with the deserialized DTO.

De-serializing in server side

public ... sayHello(SayHello sayHello) {
    ... sayHello.getHelloMessage() ...
}

Everest (the JAX-RS implementation in Che) automatically deserializes DTOs when they are used as parameters in REST services. It identifies a serialized DTO by the marked type - @Consumes(MediaType.APPLICATION_JSON) - and uses the generated DTO implementations to deserialize it.

DTO maven plugin

As mentioned earlier, for DtoFactory to function properly it needs generated code containing the concrete serialization/deserialization logic. The GWT compiler should be able to access the generated code for the client side, and the generated code for the server side should go into the jar file.

Che uses a special maven plugin called "codenvy-dto-maven-plugin" to generate this code. A sample configuration of this plugin contains separate executions for the client and server sides. We have to input the correct package structures, the file paths to which the generated files should be copied, and any other dependencies that the DTOs in the current project need. The key parameters are:

package - the package in which the DTO interfaces reside
outputDirectory - the directory to which the generated files should be copied
genClassName - the class name for the generated class
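A sketch of such a configuration (the groupId, execution ids, goal name, and package names here are assumptions — check the plugin usage in the Che sources):

```xml
<plugin>
    <groupId>com.codenvy.platform-api</groupId>
    <artifactId>codenvy-dto-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>generate-client-dto</id>
            <goals><goal>generate</goal></goals>
            <configuration>
                <package>com.example.myextension.shared</package>
                <outputDirectory>${project.build.directory}/generated-sources/dto/</outputDirectory>
                <genClassName>com.example.myextension.client.DtoClientImpls</genClassName>
            </configuration>
        </execution>
        <execution>
            <id>generate-server-dto</id>
            <goals><goal>generate</goal></goals>
            <configuration>
                <package>com.example.myextension.shared</package>
                <outputDirectory>${project.build.directory}/generated-sources/dto/</outputDirectory>
                <genClassName>com.example.myextension.server.DtoServerImpls</genClassName>
            </configuration>
        </execution>
    </executions>
</plugin>
```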

You should also configure your maven build to use these generated classes as a resource when compiling and packaging, by adding the generated-sources directory to the resources in the build section.
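For example (assuming the generated files were copied to the directory below):

```xml
<resources>
    <resource>
        <directory>${project.build.directory}/generated-sources/dto/</directory>
    </resource>
</resources>
```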


Paul FremantleGNU Terry Pratchett on WSO2 ESB / Apache Synapse

If any of you are following the GNU Terry Pratchett discussion on Reddit, the BBC or the Telegraph, then you might be wondering how to do this in the WSO2 ESB or Apache Synapse. It's very, very simple. Here you go. Enjoy.
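In Synapse terms it comes down to one property mediator that sets the transport header on every response; a minimal sketch for an out sequence:

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="out">
    <property name="X-Clacks-Overhead" value="GNU Terry Pratchett" scope="transport"/>
    <send/>
</sequence>
```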

Madhuka UdanthaCouchDB with Fauxton in windows 8

This post is mainly about installing and running 'Fauxton' in a Windows environment. Fauxton is the new web UI for CouchDB. For this post I will be using Windows 8 (64-bit).

Prerequisite for Fauxton

1. nodejs (Download from here)

2. npm (now npm comes with node)

3. CouchDB (installed from binaries or sources; I will post about installing CouchDB from source on Windows later)

4. git (git or tortoisegit tool)


To test the prerequisites, open a Windows command prompt and type the following to check each version. Here are mine.

node  --version

npm --version

git --version


5. Start CouchDB and open up Futon to test that it works fine: go to http://localhost:5984/_utils/
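If you have curl available, the root endpoint is another quick way to confirm the server is up; it answers with a small JSON document containing "couchdb":"Welcome" and the version:

```shell
curl http://localhost:5984/
```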


Here you should see the Futon overview page.

Now we have everything we need for 'Fauxton'.

Build ‘Fauxton’

1. Get a git clone of the couchdb-fauxton sources.

2. Go to the directory (cd couchdb-fauxton)
3. enter ‘npm install’ on cmd

4. Install the grunt-cli by typing

npm install -g grunt-cli



There is a small change to be done to run it on Windows:

'./node_modules/react-tools/bin/jsx -x jsx app/addons/ app/addons/',
update to

'node ./node_modules/react-tools/bin/jsx -x jsx app/addons/ app/addons/',

You can try 'uname -a' to find the OS information and, depending on it, switch the code.



Dev Server

As I am looking to do some development here, I will start the dev server; it is the easiest way to use Fauxton. Type

grunt dev


go to http://localhost:8000/#/_all_dbs



Here you have a nice way to interact with CouchDB, and you can find the REST API in use all over it. Work on the Fauxton implementation is ongoing and making good progress in preparation for CouchDB 2.0; the new Fauxton sidebar redesign has just been merged [1,2].

For example, http://localhost:8000/_users etc. This is the same API the UI calls.


Enjoy CouchDB with the new UI, Fauxton.




sanjeewa malalgodaHow to cleanup old and unused tokens in WSO2 API Manager

When we use WSO2 API Manager over a few months we may accumulate a lot of expired, revoked and inactive tokens in the IDN_OAUTH2_ACCESS_TOKEN table. As of now we do not clear these entries, for logging and audit purposes. But as the table grows we may need to clear it, since having a large number of entries will slow down token generation and validation. So in this post we will discuss clearing unused tokens in API Manager.

Most importantly, do not try this directly on the actual deployment, to prevent data loss. First take a dump of the running server's database and perform these instructions on it. Then start a server pointing to the updated database and test thoroughly to verify that there are no issues. Once you are confident with the process you may schedule it for a server maintenance window. Since deleting table entries may take a considerable amount of time, it is advisable to test on the dumped data before the actual cleanup task.

Stored procedure to cleanup tokens

  • Back up the existing IDN_OAUTH2_ACCESS_TOKEN table.
  • Turn off SQL_SAFE_UPDATES.
  • Delete the non-active tokens other than a single record for each state for each combination of CONSUMER_KEY, AUTHZ_USER and TOKEN_SCOPE.
  • Restore the original SQL_SAFE_UPDATES value.

The following is a sketch of such a procedure for MySQL; verify the column names against your IDN_OAUTH2_ACCESS_TOKEN schema before running it.

DROP PROCEDURE IF EXISTS `cleanup_tokens`;

DELIMITER $$
CREATE PROCEDURE `cleanup_tokens` ()
BEGIN

-- Turn off SQL_SAFE_UPDATES, remembering its original value
SET @OLD_SQL_SAFE_UPDATES = @@SQL_SAFE_UPDATES;
SET SQL_SAFE_UPDATES = 0;

-- Keep the most recent INACTIVE, REVOKED and EXPIRED key for each
-- CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination and delete the rest
DELETE t FROM IDN_OAUTH2_ACCESS_TOKEN t
  JOIN (SELECT CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE, TOKEN_STATE,
               MAX(TIME_CREATED) AS MAX_TIME
          FROM IDN_OAUTH2_ACCESS_TOKEN
         WHERE TOKEN_STATE IN ('INACTIVE', 'REVOKED', 'EXPIRED')
         GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE, TOKEN_STATE) m
    ON t.CONSUMER_KEY = m.CONSUMER_KEY
   AND t.AUTHZ_USER = m.AUTHZ_USER
   AND t.TOKEN_SCOPE = m.TOKEN_SCOPE
   AND t.TOKEN_STATE = m.TOKEN_STATE
 WHERE t.TIME_CREATED < m.MAX_TIME;

-- Restore the original SQL_SAFE_UPDATES value
SET SQL_SAFE_UPDATES = @OLD_SQL_SAFE_UPDATES;

END $$
DELIMITER ;


Schedule an event to run the cleanup task once a week:

DROP EVENT IF EXISTS `cleanup_tokens_event`;
CREATE EVENT `cleanup_tokens_event`
      ON SCHEDULE EVERY 1 WEEK STARTS '2015-01-01 00:00:00'
      DO
      CALL `WSO2AM_DB`.`cleanup_tokens`();

-- 'Turn on the event_scheduler'
SET GLOBAL event_scheduler = ON;

Muhammed ShariqTroubleshooting WSO2 server database operations with log4jdbc-log4j2

If you are using WSO2 Carbon based servers and are facing database-related issues, there are a few steps you can take to rectify them. Since Carbon 4.2.0 based products use the Tomcat JDBC Connection Pool, the first thing to try is tuning the datasource parameters in the master-datasources.xml (or *-datasources.xml) file located in the ${CARBON_HOME}/repository/conf/datasources/ directory. Some of the parameters you might want to double check are:

  1. Set the "validationQuery" parameter 
  2. Set "testOnBorrow" to "true"
  3. Set a "validationInterval" and try tuning it to fit your environment
For a detailed explanation of these properties, and of additional parameters that can be used to tune the JDBC pool, please see the Tomcat JDBC pool documentation.

Even though these parameters might help fix some of the JDBC issues you'd encounter, there might be instances where you'd want additional information to understand what's going on between the WSO2 server and the underlying database. 

We can use log4jdbc-log4j2, an improvement on log4jdbc, to do an in-depth analysis of the JDBC operations between the WSO2 server and the database. In this post I'll explain how to configure log4jdbc-log4j2 with WSO2 servers.

To set up a WSO2 server with log4jdbc-log4j2, follow the steps below (I am assuming the server has already been configured to point to the external database and set up with the necessary JDBC driver etc.):

  1. Download the log4jdbc-log4j2 jar and copy it to the ${CARBON_HOME}/repository/components/lib directory.
  2. Change the prefix of the JDBC url, the <url> parameter in the datasource configuration, from "jdbc:" to "jdbc:log4jdbc:", so the url would look like:

     jdbc:log4jdbc:mysql://localhost:3306/governance

  3. Change the "driverClassName" to the log4jdbc driver as follows:

     net.sf.log4jdbc.sql.jdbcapi.DriverSpy

  4. To direct the log4jdbc-log4j2 output to a separate log file, add the following entries to the log4j.properties file located in the repository/conf/ directory:

     log4j.logger.jdbc.connection=DEBUG, MySQL
     log4j.logger.jdbc.audit=DEBUG, MySQL

  5. Finally, start the server with the log4jdbc system property that selects the spy log delegator.

Note: You can set the system property in the startup script located in the bin/ directory for ease of use.
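The "MySQL" appender referenced by those logger entries also has to be defined in log4j.properties; a minimal sketch, assuming a plain rolling file appender writing to the profile log file:

```properties
log4j.appender.MySQL=org.apache.log4j.DailyRollingFileAppender
log4j.appender.MySQL.File=${carbon.home}/repository/logs/mysql-profile.log
log4j.appender.MySQL.layout=org.apache.log4j.PatternLayout
log4j.appender.MySQL.layout.ConversionPattern=%d %-5p %m%n
```

With slf4j/log4j in place, the system property in question is typically -Dlog4jdbc.spylogdelegator.name=net.sf.log4jdbc.log.slf4j.Slf4jSpyLogDelegator; verify this against the log4jdbc-log4j2 documentation.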

Now that you have the log4jdbc-log4j2 library and the required configuration in place, you can start using the server. The JDBC debug logs will be printed to the mysql-profile.log file in the logs/ directory. There are six different loggers you can use to troubleshoot different types of problems; check section 4.2.2 of the log4jdbc-log4j2 documentation for more information on the different logging options.

Good luck !!!

Sriskandarajah SuhothayanBecoming a Master in Apache Maven 3

Writing programs in Java is cool; it is a powerful language with the right amount of flexibility, which makes a developer's life easy. But when it comes to compiling, building and managing releases of a project it is not that easy; Java has the same issues encountered by other programming languages.

To solve this problem, build tools like Apache ANT and Apache Maven have emerged. ANT is a very flexible tool which allows users to do almost anything when it comes to builds, maintenance and releases. Having said that, since it is so flexible it is quite hard to configure and manage; every project using ANT uses it in its own way, and hence projects using ANT lose their consistency. Apache Maven, on the other hand, is not as flexible as ANT by default, but it follows an amazing concept, "convention over configuration", which gives you the right mix of convention and configuration to easily create, build, deploy and even manage releases at an enterprise level.

For example, Maven always works with defaults, and you can easily create and build a Maven project with just the following snippet in the pom.xml file of your project.
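The snippet in question is essentially the minimal POM; a sketch (the groupId, artifactId and version values are placeholders):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>sample-app</artifactId>
    <version>1.0.0</version>
</project>
```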


And this little configuration is tied to many conventions:

  • The Java source code is available at {base-dir}/src/main/java
  • Test cases are available at {base-dir}/src/test/java
  • A JAR file type of artifact is produced
  • Compiled class files are copied into {base-dir}/target/classes
  • The final artifact is copied into {base-dir}/target

But there are cases where we need to go a step further and break the rules, with good reason! If we need to change the above defaults, it is just a matter of adding the Maven build plugin and the artifact type to the project tag as below.
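For instance, to produce a WAR instead of the default JAR, a sketch would be (the packaging value and plugin choice are illustrative):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>sample-webapp</artifactId>
    <version>1.0.0</version>
    <!-- Overrides the default JAR artifact type -->
    <packaging>war</packaging>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>2.6</version>
            </plugin>
        </plugins>
    </build>
</project>
```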

I came across this great book, "Mastering Apache Maven 3" by Prabath Siriwardena, which gives you all the bits and pieces, from getting started to eventually becoming a master of Maven. From it you will get to know the fundamentals, and when to break the conventions and why. It helps you develop and manage large, complex projects with confidence, by providing the enterprise-level knowledge needed to manage the whole Maven infrastructure.

This book covers Maven configuration from the basics: how to construct and build a Maven project, manage build lifecycles, introduce useful functionality through Maven plugins, and write your own custom plugins when needed. It also provides steps for building distributable archives with Maven assemblies that adhere to a user-defined layout and structure, demonstrates the use of Maven archetypes to easily construct Maven projects, and shows how to create new archetypes so your developers and customers can quickly start on your project type without any configuration or replicated work. Further, it helps you host and manage your Maven artifacts in repositories using Maven repository management and, most importantly, explains the best practices that keep your projects in line with enterprise standards.

Srinath PereraWhy We Need a SQL-like Query Language for Realtime Streaming Analytics

I was at O'Reilly Strata last week, and interest in realtime analytics was certainly at its peak.

Realtime analytics has two flavours.
  1. Realtime Streaming Analytics (static queries, given once, that do not change; they process data as it comes in, without storing it). CEP, Apache Storm, Apache Samza, etc. are examples of this.
  2. Realtime Interactive/Ad-hoc Analytics (users issue ad-hoc dynamic queries and the system responds). Druid, SAP HANA, VoltDB, MemSQL and Apache Drill are examples of this.
In this post, I am focusing on Realtime Streaming Analytics. (Ad-hoc analytics uses a SQL-like query language anyway.)

Still, when thinking about realtime analytics, people think only of counting use cases. However, that is just the tip of the iceberg. Due to the time dimension inherent in realtime use cases, there is a lot more you can do. Let us look at a few common patterns.
  1. Simple counting (e.g. failure count)
  2. Counting with windows (e.g. failure count every hour)
  3. Preprocessing: filtering, transformations (e.g. data cleanup)
  4. Alerts, thresholds (e.g. alarm on high temperature)
  5. Data correlation, detecting missing events, detecting erroneous data (e.g. detecting failed sensors)
  6. Joining event streams (e.g. detecting a hit on a soccer ball)
  7. Merging with data in a database, collecting, updating data conditionally
  8. Detecting event sequence patterns (e.g. a small transaction followed by a large transaction)
  9. Tracking - following a related entity's state in space, time, etc. (e.g. the location of airline baggage or a vehicle, tracking wildlife)
  10. Detecting trends - rise, turn, fall, outliers, complex trends like a triple bottom, etc. (e.g. algorithmic trading, SLAs, load balancing)
  11. Learning a model (e.g. predictive maintenance)
  12. Predicting the next value and corrective actions (e.g. an automated car)
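As an illustration, pattern 2 (counting with windows) takes only a few lines in a SQL-like CEP language. A sketch in WSO2 Siddhi-style syntax (the stream definition and names here are made up for illustration):

```sql
define stream FailureStream (component string);

from FailureStream#window.time(1 hour)
select component, count() as failureCount
group by component
insert into HourlyFailureCounts;
```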

Why do we need a SQL-like query language for Realtime Streaming Analytics?

Each of the above has come up in real use cases, and we have implemented them using SQL-like CEP query languages. Knowing the internals of implementing CEP core concepts like sliding windows and temporal query patterns, I do not think every streaming use case developer should rewrite them. The algorithms are not trivial, and they are very hard to get right!

Instead, we need higher levels of abstraction. We should implement those once and for all, and reuse them. The best lesson here comes from Hive and Hadoop, which do exactly that for batch analytics. I have explained Big Data with Hive many times; most people get it right away. Hive has become the major programming API for most Big Data use cases.

Following is a list of reasons for a SQL-like query language.
  1. Realtime analytics is hard. Not every developer wants to hand-implement sliding windows, temporal event patterns, etc.
  2. It is easy to follow and learn for people who know SQL, which is pretty much everybody.
  3. SQL-like languages are expressive, short, sweet and fast!!
  4. SQL-like languages define core operations that cover 90% of problems.
  5. They let experts dig in when they like!
  6. Realtime analytics runtimes can better optimize execution with a SQL-like model. Most optimizations have already been studied, and there is a lot you can just borrow from database optimizations.
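Point 1 above is easy to underestimate. Even a toy, in-memory sketch of a single primitive, a time-based sliding window count (here in Python, purely illustrative), already needs careful expiry logic; a real engine must additionally handle out-of-order events, persistence and concurrency:

```python
from collections import deque

class SlidingWindowCounter:
    """Counts events seen within the last `window_seconds` seconds."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # event timestamps, oldest first

    def add(self, timestamp):
        self.events.append(timestamp)
        self._expire(timestamp)

    def count(self, now):
        self._expire(now)
        return len(self.events)

    def _expire(self, now):
        # Drop events that have fallen out of the window
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()

c = SlidingWindowCounter(3600)  # one-hour window, as in "failure count every hour"
for t in (0, 100, 1800, 4000):
    c.add(t)
print(c.count(4000))  # events at t=1800 and t=4000 remain -> 2
```

And this sketch still assumes events arrive in timestamp order, which real streams do not guarantee.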
Finally, what are such languages? There are a lot defined in the world of Complex Event Processing (e.g. WSO2 Siddhi, Esper, TIBCO StreamBase, IBM InfoSphere Streams, etc.). SQLstream has a fully ANSI SQL compliant version. Last week I did a talk at Strata discussing this problem in detail and how CEP could fit the bill. You can find the slide deck below.

Saliya EkanayakeSomasiri... 'n' Silent

Somasiri of the Mal Siri Appuhamilage family, who had made being 'posh' his way of life, changed his name to 'Mel Syrup Som' for no reason other than to make it easy to pronounce for the English media crews who kept coming to report on the wild elephant troubles that constantly plagued his village. He likewise took care to pronounce the kekiri growing in his chena as 'kekari', and to give a suitably 'distinguished' pronunciation to wambatu, murunga, bandakka and nearly everything else besides. Faced with this change, the villagers took to saying that Somasiri talked through his navel. Whatever the unlettered villagers said, he kept declaring that he had a 'daring' that would not be shaken by any of it.

Just as a leopard's spots do not change when it changes jungles, the sound may have changed, but the fact that all his words were still Sinhala felt like a deficiency to Somasiri. That is why the occasional English word rising among Sinhala ones began to appear to him like gems glittering among whetstones. Whatever the importance of learning English properly, this much, he felt, he simply had to achieve. It was around this time that word of the Internet, where anything can be found in an instant, reached Some's ears. At the first opportunity, Some walked into an Internet 'café' in the town, a good hundred hoots from his village, and explained his requirement to the 'tech' lad there.

The lad, shrewdly working out that what this gentleman was after was English filth, found him a web page of the very best grade. Not stopping there, he went on to explain to Some the Sinhala meaning of each phrase. Posh as he was, Some could not bring himself to say outright the raw things the white man says, so after much picking and choosing his heart settled on 'damn it', something fit for everyday use.

"Uncle, this one is for showing you're a bit worked up. The 'n' in it is silent; you have to say both words together in one go."

The 'tech' lad explained, in the town's own dialect, how Some's new 'weapon' was to be used. Receiving those words with a bowed head, Some returned to the village like a man who had conquered a kingdom.

The next day at the weekly fair, meaning to show the villagers his new discovery, he gently knocked his foot against a small stone lying nearby and, presenting it as a great calamity that had befallen him, bellowed 'damn it' for all to hear. As his voice faded, Some observed out of the corner of his eye that the whole crowd at the fair was staring at him as one. This, surely, was the most important milestone of his 'posh' life. From suckling babe to greybeard, were they not all gazing at him, the illustrious 'Som'? Was there anyone in this village, or in seven villages around, fit to be compared with him? Were not the loudmouths who once called him 'Buri Some' now gaping at him in wonder? Was this not the grand debut of that great man of the age, Mal Siri Appuhamilage Somasiri, alias 'Mel Syrup Som'?

"Sir, who are you looking for?"

At the voice of Ariyasena, the head teacher of the village school, Some returned to the waking world.

"Ah, no, principal sir, my foot hit this wretched stone; that's why I lost my temper a little."

"Is that so? We were watching too, wondering whether you were going about calling 'Damith', 'Damith', looking for some lost dog."

"There you go, that's the extent of what the headmaster knows. 'Damith' is a superb English word, no less. But then, how would you gentlemen ever get to know the things of the new world? I learned all this from the Internet, I did. Ah, here's the note the shop boy in town wrote out for me. Just look how beautifully it's written... 'D .. A .. M .. N (silent) .. I .. T'. 'D .. A .. M' says 'dam'; 'I .. T' says 'ith'. Read together it says 'Damith'. You try saying it with some weight, sir, and see the grandeur in it."

-- Saliya Ekanayake --

Nuwan BandaraMulti-tenant healthcare information systems integration

Scenario: A single healthcare information system needs to be exposed to different healthcare providers (hospitals). The system needs to pass HL7 messages that come in via HTTP (API calls) to an HL7 receiver (over TCP), reliably. TODO: Enable the HL7 transport senders in axis2.xml & axis2_blocking_client.xml in WSO2 ESB. The following config shows the ESB configuration for a tenant.
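For the TODO above, enabling the HL7 transport sender in axis2.xml (and axis2_blocking_client.xml) would look roughly like the following; the sender class name is my assumption based on the WSO2 ESB HL7 transport feature, so verify it against your distribution:

```xml
<!-- Assumed HL7 transport sender entry (class name to be verified) -->
<transportSender name="hl7"
                 class="org.wso2.carbon.business.messaging.hl7.transport.HL7TransportSender"/>
```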

sanjeewa malalgodaConfigure WSO2 API Manager 1.8.0 with reverse proxy (with proxy context path)

Remove current installation of Nginx
sudo apt-get purge nginx nginx-common nginx-full

Install Nginx
sudo apt-get install nginx

Edit configurations
sudo vi /etc/nginx/sites-enabled/default

Create SSL certificates and copy them to the ssl folder.
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

 Sample configuration:

server {

       listen 443;
       ssl on;
       ssl_certificate /etc/nginx/ssl/nginx.crt;
       ssl_certificate_key /etc/nginx/ssl/nginx.key;

       location /apimanager/carbon {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/carbon/;
           proxy_redirect  https://localhost:9443/carbon/  https://localhost/apimanager/carbon/;
           proxy_cookie_path / /apimanager/carbon/;
       }

       location ~ ^/apimanager/store/(.*)registry/(.*)$ {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       }

       location ~ ^/apimanager/publisher/(.*)registry/(.*)$ {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       }

       location /apimanager/publisher {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/publisher;
           proxy_redirect  https://localhost:9443/publisher  https://localhost/apimanager/publisher;
           proxy_cookie_path /publisher /apimanager/publisher;
       }

       location /apimanager/store {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/store;
           proxy_redirect https://localhost:9443/store https://localhost/apimanager/store;
           proxy_cookie_path /store /apimanager/store;
       }
}
To start and stop Nginx, use the following commands:

sudo /etc/init.d/nginx start
sudo /etc/init.d/nginx stop

API Manager configurations

Add the following API Manager configurations:

In the API Store, edit the wso2am-1.8.0/repository/deployment/server/jaggeryapps/store/site/conf/site.json file and add the following:

   "reverseProxy" : {
       "enabled" : true,
       "host" : "localhost",
       "context" : "/apimanager/store"
   }

In the API Publisher, edit the wso2am-1.8.0/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json file and add the following:

   "reverseProxy" : {
       "enabled" : true,
       "host" : "localhost",
       "context" : "/apimanager/publisher"
   }

Edit wso2am-1.8.0/repository/conf/carbon.xml and update the following properties.
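The properties in question are the proxy context paths, which ship commented out in carbon.xml. A sketch matching the /apimanager context used in the Nginx configuration above (treat the exact values as assumptions to verify against your setup):

```xml
<ProxyContextPath>apimanager</ProxyContextPath>
<MgtProxyContextPath></MgtProxyContextPath>
```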


Then start API Manager.
Server URLs would be something like this:

https://localhost/apimanager/carbon/
https://localhost/apimanager/publisher
https://localhost/apimanager/store

Ajith VitharanaAdd registry permission for roles using admin service - WSO2 products.

You can use the ResourceAdminService to perform that task.

i) Open the carbon.xml and enable the WSDL view for admin services (set <HideAdminServiceWSDLs> to false).
ii) Use the following WSDL endpoint to generate the SoapUI project:

https://localhost:9443/services/ResourceAdminService?wsdl
Select the "addRolePermission" request.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.resource.registry.carbon.wso2.org">
pathToAuthorize    - the registry path on which to set permissions.
roleToAuthorize     - the role name that needs to be authorized.
actionToAuthorize - this can be one of the following values:

 2 - READ
 3 - WRITE

permissionType - this can be one of the following values:

 1 - Allowed
 2 - Denied
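Putting the pieces together, a complete addRolePermission request would look roughly like this (the ser namespace URI and the sample path/role values are assumptions; take the exact namespace from the SoapUI project generated from the WSDL):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:ser="http://services.resource.registry.carbon.wso2.org">
   <soap:Body>
      <ser:addRolePermission>
         <!-- Sample values; replace with your own path and role -->
         <ser:pathToAuthorize>/_system/governance/myresource</ser:pathToAuthorize>
         <ser:roleToAuthorize>testrole</ser:roleToAuthorize>
         <ser:actionToAuthorize>2</ser:actionToAuthorize>
         <ser:permissionType>1</ser:permissionType>
      </ser:addRolePermission>
   </soap:Body>
</soap:Envelope>
```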

Ajith VitharanaAggregate two REST responses - WSO2 ESB

There are two mock APIs (ListUsersAPI and UserRolesAPI).

 1. ListUsersAPI
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse" name="ListUsersAPI" context="/services/users">
   <resource methods="GET" url-mapping="/*">
      <inSequence>
         <payloadFactory media-type="json">
            <format>{"persons":{"person":[{"Id":"1","givenName":"ajith","lastName":"vitharana","age":"25","contactInfos":[{"InfoId":"1","department":"1","contactType":"email","value":""},{"InfoId":"2","department":"1","contactType":"mobile","value":"111111111"},{"InfoId":"3","department":"1","contactType":"home","value":"Magic Dr,USA"}]},{"Id":"2","givenName":"shammi","lastName":"jagasingha","age":"30","contactInfos":[{"InfoId":"1","department":"1","contactType":"email","value":""},{"InfoId":"2","department":"1","contactType":"mobile","value":"2222222222"},{"InfoId":"3","department":"1","contactType":"home","value":"Magic Dr,USA"}]}]}}</format>
            <args />
         </payloadFactory>
         <property name="NO_ENTITY_BODY" scope="axis2" action="remove" />
         <property name="messageType" value="application/json" scope="axis2" type="STRING" />
         <respond />
      </inSequence>
   </resource>
</api>
Sample output.
curl -X GET  http://localhost:8280/services/users

                  "value":"Magic Dr,USA"
                  "value":"Magic Dr,USA"
2. UserRolesAPI
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse" name="UserRolesAPI" context="/services/roles">
   <resource methods="GET" uri-template="/{personid}">
      <inSequence>
         <filter source="get-property('uri.var.personid')" regex="1">
            <payloadFactory media-type="json">
               <format>{"Id":1,"roles":[{"roleId":1,"personKey":1,"role":"Developer"},{"roleId":2,"personKey":1,"role":"Engineer"}]}</format>
               <args/>
            </payloadFactory>
            <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
            <property name="messageType" value="application/json" scope="axis2" type="STRING"/>
            <respond/>
         </filter>
         <filter source="get-property('uri.var.personid')" regex="2">
            <payloadFactory media-type="json">
               <format>{"personId": 2,"roles": [{ "personRoleId": 1, "personKey": 2, "role": "Manager" },{ "personRoleId": 2, "personKey": 2, "role": "QA" }]}</format>
               <args/>
            </payloadFactory>
            <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
            <property name="messageType" value="application/json" scope="axis2" type="STRING"/>
            <respond/>
         </filter>
      </inSequence>
   </resource>
</api>
Sample output.
curl -X GET  http://localhost:8280/services/roles/1 


3. UserDetailsAPI is the aggregated API
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse" name="UserDetailsAPI" context="/userdetails">
   <resource methods="GET">
               <http method="get" uri-template="http://localhost:8280/services/users"/>
         <iterate xmlns:soapenv=""
                  <property xmlns:ns="http://org.apache.synapse/xsd"
                  <property xmlns:ns="http://org.apache.synapse/xsd"
                        <http method="get"
                  <payloadFactory media-type="xml">
                        <combined xmlns="">                        $1$2                        </combined>
                        <arg xmlns:ns="http://org.apache.synapse/xsd"
                        <arg xmlns:ns="http://org.apache.synapse/xsd"
         <property name="ECNCLOSING_ELEMENT" scope="default">
            <wrapper xmlns=""/>
         <aggregate id="it1">
               <messageCount min="2" max="-1"/>
            <onComplete xmlns:s12=""
               <property name="messageType"
Sample output
curl -X GET  http://localhost:8280/userdetails

                     "value":"Magic Dr,USA"
                     "value":"Magic Dr,USA"

Chanaka FernandoEnabling audit logs for WSO2 carbon based servers

Audit logs provide very useful information about the users who have tried to access the server. By default, most of the WSO2 Carbon based products (ESB, APIM, DSS) do not have this logging enabled. In production environments it is always better to enable audit logs, for various reasons.

All you need to do is add the following section to the log4j.properties file which resides in the <CARBON_HOME>/repository/conf directory.

# Configure audit log for auditing purposes
log4j.logger.AUDIT_LOG=INFO, AUDIT_LOGFILE
log4j.appender.AUDIT_LOGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.AUDIT_LOGFILE.File=${carbon.home}/repository/logs/audit.log
log4j.appender.AUDIT_LOGFILE.Append=true
log4j.appender.AUDIT_LOGFILE.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.AUDIT_LOGFILE.layout.ConversionPattern=[%d] %P%5p - %x %m %n
log4j.appender.AUDIT_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
log4j.appender.AUDIT_LOGFILE.threshold=INFO

Once you enable this, you can see that the audit log file is created under the <CARBON_HOME>/repository/logs directory. It will contain information similar to the lines below.

[2015-03-12 10:44:01,565]  INFO -  'admin@carbon.super [-1234]' logged in at [2015-03-12 10:44:01,565-0500]
[2015-03-12 10:44:45,825]  INFO -  User admin successfully authenticated to perform JMX operations.
[2015-03-12 10:44:45,826]  INFO -  User : admin successfully authorized to perform JMX operations.
[2015-03-12 10:44:45,851]  WARN -  Unauthorized access attempt to JMX operation.
java.lang.SecurityException: Login failed for user : jmx_user. Invalid username or password.
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at sun.rmi.server.UnicastServerRef.dispatch(
at sun.rmi.transport.Transport$
at sun.rmi.transport.Transport$
at Method)
at sun.rmi.transport.Transport.serviceCall(
at sun.rmi.transport.tcp.TCPTransport.handleMessages(
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(
at sun.rmi.transport.tcp.TCPTransport$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$

Shiva BalachandranBPMN REST API Documentation


 The Business Process Management Initiative (BPMI) has developed a standard Business Process Modeling Notation (BPMN). The BPMN 2.0 specification was released to the public in January 2011. We have integrated BPMN into the BPS server using the Activiti engine, a light-weight workflow and Business Process Management (BPM) platform targeted at business people, developers and system admins. It is open source and distributed under the Apache license. Activiti runs in any Java application, on a server, on a cluster or in the cloud. It is extremely lightweight and based on simple concepts.

  • Download  and run the Business Process Server 3.5.0.

E.g. – List all deployments; Request URL – repository/deployments. So the complete service URL to retrieve all deployments would be http://localhost:9763/bpmnrest/repository/deployments

  • Deployments

Represents a deployment that is already present in the process repository. A deployment is a container for resources such as process definitions, images, forms, etc. When a deployment is 'deployed' through the RuntimeService, the Activiti engine will recognize certain resource types and act upon them (e.g. process definitions will be parsed to an executable Java artifact). A Deployment itself is a read-only object and its content cannot be changed after deployment.

  • List of Deployments  – Request Type – GET,   Request URL – repository/deployments

    The above request will get all the deployments from the server.

    Success Response Body :-

{
  "data": [
    {
      "id": "27501",
      "name": "sampleJavaServiceTask.bpmn20.xml",
      "deploymentTime": "2015-03-08T18:46:14.898+05:30",
      "category": null,
      "url": "http:\/\/localhost:9763\/bpmnrest\/repository\/deployments\/27501",
      "tenantId": ""
    }
  ],
  "total": 1,
  "start": 0,
  "sort": "id",
  "order": "asc",
  "size": 1
}

  • Get a Deployment –  Request Type – GET,   Request URL – repository/deployments/{deploymentId}

    We use this service to retrieve a specific Deployment from the BPS Server.

     Success Response Body :-

{
  "id": "27501",
  "name": "sampleJavaServiceTask.bpmn20.xml",
  "deploymentTime": "2015-03-08T18:46:14.898+05:30",
  "category": null,
  "url": "http:\/\/localhost:9763\/bpmnrest\/repository\/deployments\/27501",
  "tenantId": ""
}

  • Create New Deployment – Request Type – POST,   Request URL – repository/deployments

The request body should contain data of type multipart/form-data. There should be exactly one file in the request, any additional files will be ignored. The deployment name is the name of the file-field passed in. If multiple resources need to be deployed in a single deployment, compress the resources in a zip and make sure the file-name ends with .bar or .zip.
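For example, such a deployment could be created with a multipart POST like the following (the form field name, file name and admin credentials are placeholders):

```shell
curl -u admin:admin -X POST \
  -F "deployment=@sampleJavaServiceTask.bpmn20.xml" \
  http://localhost:9763/bpmnrest/repository/deployments
```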

Success Response Body :-

  • Delete a Deployment – Request Type – DELETE, Request URL – repository/deployments/{deploymentId}

This request is used to delete a deployment from the server. Please note that you can delete a deployment only after all its process instances have completed or been removed. There is no response body when you delete a deployment; an HTTP status of "204" indicates the deployment was found and deleted, while a status of "404" indicates the deployment was not found.

  • Process Definitions

An object structure representing an executable process composed of activities and transitions. Business processes are often created with graphical editors that store the process definition in a certain file format. These files can be added to a Deployment artifact, such as a Business Archive (.bar) file. At deploy time, the engine will parse the process definition files to an executable instance of this class, which can be used to start a ProcessInstance.

  • List of Process Definitions – Request Type – GET, Request URL – repository/process-definitions

 This request will display all the process definitions in the server.

Success Response Body :-

{
  "data": [
    {
      "id": "sampleJavaServiceTask:1:27503",
      "url": "http:\/\/localhost:9763\/bpmnrest\/repository\/process-definitions\/sampleJavaServiceTask%3A1%3A27503",
      "key": "sampleJavaServiceTask",
      "version": 1,
      "name": null,
      "description": null,
      "deploymentId": "27501",
      "deploymentUrl": "http:\/\/localhost:9763\/bpmnrest\/repository\/deployments\/27501",
      "resource": "http:\/\/localhost:9763\/bpmnrest\/repository\/deployments\/27501\/resources\/sampleJavaServiceTask.bpmn20.xml",
      "diagramResource": null,
      "category": "http:\/\/",
      "graphicalNotationDefined": false,
      "suspended": false,
      "startFormDefined": false
    }
  ],
  "total": 1,
  "start": 0,
  "sort": "name",
  "order": "asc",