WSO2 Venus

Ajith Vitharana: How to delete an API with active subscriptions from the Publisher - WSO2 API Manager

WSO2 API Manager doesn't allow you to delete APIs that have active subscriptions from the Publisher. However, if you have a strong requirement to delete such an API, you can follow the steps below to remove it.

Let's say we have an API with active subscriptions.

Name : MobileAPI
Context : /mobile
Version : 1.0.0

1. First, change the lifecycle state to BLOCKED. The API then becomes invisible in the Store and can no longer be invoked.

2. Browse the AM database and find the API_ID.

3. Delete all the subscriptions related to that API.

4. Now you can delete the API from the Publisher.
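For steps 2 and 3, the queries look roughly like the following. This is only a sketch against the default APIM schema; the AM_API and AM_SUBSCRIPTION table names are from memory and should be verified against your AM database before running anything.

```sql
-- Step 2: find the API_ID of the API
SELECT API_ID FROM AM_API
 WHERE API_NAME = 'MobileAPI' AND API_VERSION = '1.0.0';

-- Step 3: delete the subscriptions for that API_ID
DELETE FROM AM_SUBSCRIPTION WHERE API_ID = <API_ID from above>;
```

Always take a database backup before deleting rows manually.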

sanjeewa malalgoda: How to pass Basic authentication headers to the backend server via API Manager

First, let me explain how authorization headers work in API Manager. When a user sends an Authorization header along with an API request, we use it for API authentication and then drop it from the outgoing message.
If you want to pass the client's authorization header to the backend server without dropping it at the gateway, you can disable that behaviour with the following property.
Update the property in /repository/conf/api-manager.xml and restart the server.
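If I remember the configuration correctly, the property is the one below; the element name should be verified against your api-manager.xml. Setting it to false stops the gateway from removing the client's Authorization header:

```xml
<!-- Whether to remove OAuth/Authorization headers from the outgoing message -->
<RemoveOAuthHeadersFromOutMessage>false</RemoveOAuthHeadersFromOutMessage>
```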


The gateway will then no longer drop the Authorization header sent by the user, so whatever the client sends goes through to the backend as well.

Send an API request with a Basic Auth header.

Incoming message to the API gateway. As you can see, we do not use API Manager authentication here; for this we can set the resource auth type to None when we create the API. Then send the Basic Auth header that needs to be passed to the backend server.
[2015-02-27 18:08:05,010] DEBUG - wire >> "GET /test-sanjeewa1/1.0.0 HTTP/1.1[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "User-Agent: curl/7.32.0[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "Host:[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "Authorization: Basic 2690b6dd2af649782bf9221fa6188[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "[\r][\n]"

Outgoing message from the gateway. You can see the Basic Auth header sent by the client is present in the outgoing message.
[2015-02-27 18:08:05,024] DEBUG - wire << "GET http://localhost/apim1/ HTTP/1.1[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Authorization: Basic 2690b6dd2af649782bf9221fa6188[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Host: localhost:80[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"
[2015-02-27 18:08:05,026] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2015-02-27 18:08:05,026] DEBUG - wire << "[\r][\n]"

The other possible option is setting the Basic Auth header at the API gateway. For this we have two options.

01. Define the Basic Auth credentials in the API when you create it (see attached image). In the API implement phase you can provide the required Basic Auth details. The API Manager gateway will then send the provided authorization details as a Basic Auth header to the backend. Here we can let the client send a Bearer token Authorization header with the API request; the gateway will drop it (after validating the Bearer token) and pass the Basic Auth header to the backend.

Incoming message to the API gateway. Here the user sends a Bearer token to the gateway, which validates it and drops it from the outgoing message.
[2015-02-27 17:36:15,580] DEBUG - wire >> "GET /test-sanjeewa/1.0.0 HTTP/1.1[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "User-Agent: curl/7.32.0[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "Host:[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "Authorization: Bearer 2690b6dd2af649782bf9221fa6188-[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "[\r][\n]"

Outgoing message from the gateway. You can see the Basic Auth header has been added to the outgoing message.
[2015-02-27 17:36:20,523] DEBUG - wire << "GET http://localhost/apim1/ HTTP/1.1[\r][\n]"
[2015-02-27 17:36:20,539] DEBUG - wire << "Authorization: Basic YWRtaW46YWRtaW4=[\r][\n]"
[2015-02-27 17:36:20,539] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-02-27 17:36:20,540] DEBUG - wire << "Host: localhost:80[\r][\n]"
[2015-02-27 17:36:20,540] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"
[2015-02-27 17:36:20,540] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"

02. This is the same as the previous sample, but if needed you can set the API resource authorization type to None. Then the client doesn't need to send anything in the request, and API Manager will still add the Basic Auth header to the outgoing message.
You can understand the message flow and headers by looking at the following wire logs.

Incoming message to API gateway
[2015-02-27 17:37:10,951] DEBUG - wire >> "GET /test-sanjeewa/1.0.0 HTTP/1.1[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "User-Agent: curl/7.32.0[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "Host:[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "[\r][\n]"

Outgoing message from the gateway. You can see the Basic Auth header is present in the outgoing message.
[2015-02-27 17:37:13,766] DEBUG - wire << "GET http://localhost/apim1/ HTTP/1.1[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Authorization: Basic YWRtaW46YWRtaW4=[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Host: localhost:80[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"

Isuru Perera: Java Flight Recorder Continuous Recordings

When we are trying to find performance issues, it is sometimes necessary to do continuous recordings with Java Flight Recorder.

Usually we debug issues in an environment similar to a production setup. That means we don't have a desktop environment and we cannot use Java Mission Control for flight recording.

That also means we need to record and get dumps using the command line on the servers. We can of course use remote connection methods, but it's easier to get recordings directly on the server.

With continuous recordings, we need to figure out how to get dumps. There are a few options.
  1. Get a dump when the Java application exits. For this, we need to use the dumponexit and dumponexitpath options.
  2. Get a dump manually via the JFR.dump diagnostic command with "jcmd".
Note: The "jcmd" command is in $JAVA_HOME/bin. If you use the Oracle Java installation script for Ubuntu, you can use "jcmd" directly without adding $JAVA_HOME/bin to $PATH.

Enabling Java Flight Recorder and starting a continuous recording

To demonstrate, I will use WSO2 AS 5.2.1. First of all we need to enable Java Flight Recorder in WSO2 AS. Then we will configure it to start a default recording.

$ cd wso2as-5.2.1/bin
$ vi

In the vi editor, press SHIFT+G to go to the end of the file. Add the following lines between "-Dfile.encoding=UTF8 \" and "org.wso2.carbon.bootstrap.Bootstrap $*".

-XX:+UnlockCommercialFeatures \
-XX:+FlightRecorder \
-XX:FlightRecorderOptions=defaultrecording=true,settings=profile,disk=true,repository=./tmp,dumponexit=true,dumponexitpath=./ \

As I mentioned in my previous blog post on Java Mission Control, we use the default recording option to start a "Continuous Recording". Please look at the java command reference to see the meaning of each Flight Recorder option.

Please note that I'm using the "profile" setting and using disk=true to write a continuous recording to the disk. I'm also using ./tmp directory as the repository, which is the temporary disk storage for JFR recordings.

It's also important to note that the default value of "maxage" is set to 15 minutes.

To be honest, I couldn't figure out exactly how maxage works. For example, if I set it to 1m, I see events for around 20 minutes. If I use 10m, I see events for around 40 minutes to 1 hour. Then I found an answer in the Java Mission Control forum. See the thread "Help with maxage / limiting default recording disk usage".

What really happens is that the maxage threshold is checked only when a new recording chunk is created. We haven't specified "maxchunksize" above, so the default value of 12MB is used. It might take a considerable time to fill that much data and trigger removal of chunks.

If you need infinite recordings, you can set maxage=0 to override the default value.
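For example, adapting the options line used above, an infinite recording would look like this:

```
-XX:FlightRecorderOptions=defaultrecording=true,settings=profile,disk=true,repository=./tmp,maxage=0,dumponexit=true,dumponexitpath=./ \
```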

Getting Periodic Java Flight Recorder Dumps

Let's use "jcmd" to get a Java Flight Recorder dump. For this, I wrote a simple script (jfrdump).

#!/bin/bash
now=`date +%Y_%m_%d_%H_%M_%S`
if ps -p $1 > /dev/null
then
    echo "$now: The process $1 is running. Getting a JFR dump"
    # Dump
    jcmd $1 JFR.dump recording=0 filename="recording-$now.jfr"
else
    echo "$now: The process $1 is not running. Cannot take a JFR dump"
    exit 1
fi

You can see that I have used "JFR.dump" diagnostic command and the script expects the Java process ID as an argument.

I have used the recording id 0, because the default recording started at server startup has recording id 0.

You can check JFR recordings via the JFR.check diagnostic command.

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ jcmd `cat` JFR.check
Recording: recording=0 name="HotSpot default" maxage=15m (running)

I have also used the date in the recording file name, which will help us keep multiple files with the date and time of each dump. Note that the recordings will be saved in the CARBON_HOME directory, which is the working directory of the Java process.

Let's test jfrdump script!

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ jfrdump `cat`
2015_02_27_15_02_27: The process 21674 is running. Getting a JFR dump
Dumped recording 0, 2.3 MB written to:


Since we have a working script to get a dump, we can use it as a task for Cron.

Edit the crontab.

$ crontab -e

Add following line.

*/15 * * * * (/home/isuru/programs/sh/jfrdump `cat /home/isuru/test/wso2as-5.2.1/`) >> /tmp/jfrdump.log 2>&1

Now you should get a JFR dump every 15 minutes. I used 15 minutes since maxage is 15 minutes, but you can adjust these values depending on your requirements.

See also: Linux Crontab: 15 Awesome Cron Job Examples

Troubleshooting Tips

  • After you edit, always run the server once in the foreground (./ to see whether there are issues in the script syntax. If the server runs successfully, you can start it in the background.
  • If you want to get a dump at shutdown, do not kill the server forcefully. Always allow the server to shut down gracefully. Use "./ stop"

sanjeewa malalgoda: How to modify the API Manager Publisher to remove the footer - API Manager 1.8.0

1. Go to publisher jaggery app (/repository/deployment/server/jaggeryapps/publisher)

2. Go to subthemes folder in publisher (site/themes/default/subthemes)

3. Create a folder with the name of your subtheme. For example "nofooter"

4. Create a folder called 'css' inside 'nofooter' folder

5. Copy the "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/css/localstyles.css" to the new subtheme's css location " /repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/nofooter/css/"

6. Copy the "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/images" folder to the new subtheme location " /repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/nofooter/"

7. Add the following CSS to the localstyles.css file in the "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/nofooter/css/" folder.
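The stock theme's markup may differ, but a minimal rule along these lines hides the footer. The .footer selector is an assumption here; match it to the footer element you see in the rendered page source.

```css
/* Hide the publisher footer (selector assumed; verify in the page source) */
.footer {
    display: none;
}
```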


8. Edit the "/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json" file as below in order to make the new subtheme the default theme.
         "theme" : {
               "base" : "default",
               "subtheme" : "nofooter"
         }

Lali Devamanthri: Latest header files for FreeBSD 9 kernel

Mateusz Kocielski and Marek Kroemeke discovered that an integer overflow
in IGMP processing may result in denial of service through malformed
IGMP packets.

For the stable distribution (wheezy), this problem has been fixed in
version 9.0-10+deb70.9.

We recommend that you upgrade your kfreebsd-9 packages.

Further information about Debian Security Advisories, how to apply
these updates to your system and frequently asked questions can be
found at:

Srinath Perera: WSO2 Demo Videos from O'Reilly Strata 2015 Booth

We just came back from O'Reilly Strata. It was great to see most of the Big Data world gathered in one place.

WSO2 had a booth, and the following are the demos we showed there.

Demo 1: Realtime Analytics for a Football Game played with Sensors

This shows realtime analytics done using a dataset created by playing a football game with sensors in the ball and the boots of the players. You can find more information in the earlier post.

Demo 2: GIS Queries using Public Transport for London Data Feeds 

TFL (Transport for London) provides several public data feeds about London public transport. We used those feeds within WSO2 CEP's Geo Dashboard to implement "Speed Alerts", "Proximity Alerts", and Geo Fencing.

Please see this slide deck for more information. 

Srinath Perera: Introduction to Large Scale Data Analysis with WSO2 Analytics Platform

Slide deck for the talk I did at Indiana University, Bloomington. It walks through the WSO2 Big Data offering, providing example queries.

Isuru Perera: Monitor WSO2 Carbon logs with Logstash

The ELK stack is a popular stack for searching and analyzing data. Many people use it for analyzing logs. WSO2 also has a full-fledged Big Data Analytics Platform, which can analyze logs and do many more things.

In this blog post, I'm explaining how to monitor logs with Elasticsearch, Logstash and Kibana. I will mainly explain the Logstash configurations. I will not show how to set up Elasticsearch and Kibana; those are very easy to set up and there is not much configuration. You can figure it out very easily! :)

If you want to test an Elasticsearch server, you can just extract the Elasticsearch distribution and start a server. If you are using Kibana 3, you need to use a web server to host the Kibana application. With Kibana 4, you can use the standalone server provided in the distribution.

Configuring Logstash

Logstash is a great tool for managing events and logs. See Getting Started with Logstash if you haven't used logstash.

First of all, we need to configure logstash to get the wso2carbon.log file as an input. Then we need to use a filter to parse the log messages and extract data to analyze.

The wso2carbon.log file is written using log4j and the configurations are in $CARBON_HOME/repository/conf/

For WSO2 log message parsing, we will be using the grok filter to extract the details configured via the log4j pattern layout.

For example, following is the pattern layout configured for wso2carbon.log in WSO2 AS 5.2.1 (wso2as-5.2.1/repository/conf/

log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m {%c}%n

In this pattern, the class name ("{%c}") is logged twice. So, let's remove the extra class name. (I have created a JIRA to remove the extra class name from log4j configuration. See CARBON-15065)

Following should be the final configuration for wso2carbon.log.

log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m %n

Now when we start WSO2 AS 5.2.1, we can see all log messages have the pattern specified in log4j configuration.

For example:

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1/bin$ ./ start
isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1/bin$ tail -4f ../repository/logs/wso2carbon.log
TID: [0] [AS] [2015-02-25 18:02:00,345] INFO {org.wso2.carbon.core.init.JMXServerManager} - JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi
TID: [0] [AS] [2015-02-25 18:02:00,346] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Server : Application Server-5.2.1
TID: [0] [AS] [2015-02-25 18:02:00,347] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 29 sec
TID: [0] [AS] [2015-02-25 18:02:00,701] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL :

Let's write a grok pattern to parse a single log line. Please look at the grok filter docs for the basic syntax of grok patterns. Once you are familiar with the syntax, it's very easy to write patterns.

There is also an online "Grok Debugger" application to test grok patterns.

Following is the Grok pattern written for parsing above log lines.
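The pattern itself is reproduced below; it is the same one used in the complete Logstash config later in this post:

```
TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[%{WORD:server_type}\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}{%{JAVACLASS:java_class}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}
```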


You can test this Grok pattern with Grok Debugger. Use one of the lines in above log file for the input.

Grok Debugger

We are now parsing a single log line in logstash. Next, we need to look at how we can group exceptions or multiline log messages into one event.

For that we will use the multiline filter. As mentioned in the docs, we need a pattern to identify whether a particular line is part of the previous log event. As configured in log4j, all log messages must start with "TID"; if a line doesn't, we can assume it belongs to the previous log message.

Finally we need to configure logstash to send output to some destination. We can use "stdout" output for testing. In a production setup, you can use elasticsearch servers.

Logstash Config File

Following is the complete logstash config file. Save it as "logstash.conf"

input {
  file {
    add_field => {
      "instance_name" => "wso2-worker"
    }
    type => "wso2"
    path => [ '/home/isuru/test/wso2as-5.2.1/repository/logs/wso2carbon.log' ]
  }
}

filter {
  if [type] == "wso2" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[%{WORD:server_type}\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}{%{JAVACLASS:java_class}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}" ]
    }
    multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
  }
}

output {
  # elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Please note that I have used "add_field" in the file input to show that you can add extra details to the log event.

Running Logstash

Now it's time to run logstash!

$ tar -xvf logstash-1.4.2.tar.gz 
$ cd logstash-1.4.2/bin

We will first test whether the configuration file is okay.

$ ./logstash --configtest -f ~/conf/logstash.conf 
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see {:level=>:warn}
Configuration OK

Let's start logstash

$ ./logstash -f ~/conf/logstash.conf

Now start the WSO2 AS 5.2.1 server. You will now see log events from logstash.

For example:

{
       "message" => "TID: [0] [AS] [2015-02-26 00:31:41,389] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 17 sec",
      "@version" => "1",
    "@timestamp" => "2015-02-25T19:01:49.151Z",
          "type" => "wso2",
 "instance_name" => "wso2-worker",
          "host" => "isurup-ThinkPad-T530",
          "path" => "/home/isuru/test/wso2as-5.2.1/repository/logs/wso2carbon.log",
     "tenant_id" => "0",
   "server_type" => "AS",
     "timestamp" => "2015-02-26 00:31:41,389",
         "level" => "INFO",
    "java_class" => "org.wso2.carbon.core.internal.StartupFinalizerServiceComponent",
   "log_message" => "WSO2 Carbon started in 17 sec"
}

Troubleshooting Tips

  • You need to run the server to generate new log lines, as the file input reads only new lines. However, if you want to test a log file from the beginning, you can use the following input configuration.

input {
  file {
    add_field => {
      "instance_name" => "wso2-worker"
    }
    type => "wso2"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    path => [ '/home/isuru/test/wso2as-5.2.1/repository/logs/wso2carbon.log' ]
  }
}
  • When you are using the multiline filter, the last line of the log file may not be processed. This is an issue in Logstash 1.4.2; I didn't notice the same issue in logstash-1.5.0.beta1.

I hope these instructions are clear. I will try to write a blog post on using Elasticsearch & Kibana later. :)

Shelan Perera: When should you give up?

When should I give up something?

You should never give up until you find something you really want. Yes... until you find it.. :). We usually draw the give-up line where society believes it should be, not where we really believe it should be. We often create boundaries around what society believes is achievable, until someone who truly believes in him or herself steps in and expands the limits.

We never thought someone could fly until the Wright brothers flew. We never imagined speaking with someone thousands of miles away until Alexander Graham Bell invented the first practical telephone. We may remain prisoners of society unless we are brave enough to reach outside.

So when should we give up on something? We should give up on the day we win the game. It seems so simple and obvious, yet senseless.

Watch the following video if you need to breathe life into what I mentioned.

"It's Not OVER Until You Win! Your Dream is Possible - Les Brown"

Sajith Ravindra: Why WSO2 ESB returns a 202 response when an API is called


There's a REST API hosted on WSO2 ESB, and when you invoke it, the ESB only returns a 202 Accepted response similar to the following; no processing is done on the request, and there are no errors printed in wso2carbon.log either.

HTTP/1.1 202 Accepted
Date: Wed, 25 Feb 2015 13:43:14 GMT
Server: WSO2-PassThrough-HTTP
Transfer-Encoding: chunked
Connection: keep-alive


The reason for this is that there's no API or resource that matches the request URL. Let me elaborate more on this. Let's say the request URL is as follows,

If this request returns a response similar to the above, the following are the possible causes:
  1. There's no API with the context "/myapicontext".
  2. If there is an API with the context "/myapicontext", it has no resource with a uri-template or a url-mapping that matches the /myresource/myvar/ portion of the request URL.
Therefore, to fix this issue we should make sure the target API and resource exist in the ESB.

In order for the ESB to send a more meaningful response in case 2 ONLY, add the following sequence to the ESB.

<sequence xmlns="" name="_resource_mismatch_handler_">
   <payloadFactory media-type="xml">
      <format>
         <tp:fault xmlns:tp="">
            <tp:type>Status report</tp:type>
            <tp:message>Not Found</tp:message>
            <tp:description>The requested resource (/$1) is not available.</tp:description>
         </tp:fault>
      </format>
      <args>
         <arg xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd" expression="$axis2:REST_URL_POSTFIX" evaluator="xml"></arg>
      </args>
   </payloadFactory>
   <property name="NO_ENTITY_BODY" action="remove" scope="axis2"></property>
   <property name="HTTP_SC" value="404" scope="axis2"></property>
</sequence>

So the ESB will return a response as follows if there's no matching resource in the API,

<tp:fault xmlns:tp="">
    <tp:type>Status report</tp:type>
    <tp:message>Not Found</tp:message>
    <tp:description>The requested resource (//myresource/myvar/) is not available.</tp:description>
</tp:fault>

Srinath Perera: Why We Need a SQL-like Query Language for Realtime Streaming Analytics

I was at O'Reilly Strata last week, and interest in realtime analytics was certainly at its peak.

Realtime analytics has two flavours.
  1. Realtime Streaming Analytics (static queries, given once, that do not change; they process data as it comes in, without storing it. CEP, Apache Storm, Apache Samza etc. are examples of this.)
  2. Realtime Interactive/Ad-hoc Analytics (users issue ad-hoc dynamic queries and the system responds). Druid, SAP HANA, VoltDB, MemSQL, Apache Drill are examples of this.
In this post, I am focusing on Realtime Streaming Analytics. (Ad-hoc analytics uses a SQL-like query language anyway.)

Still, when thinking about realtime analytics, people think only of counting use cases. However, that is the tip of the iceberg. Due to the time dimension inherent in realtime use cases, there is a lot more you can do. Let us look at a few common patterns.
  1. Simple counting (e.g. failure count)
  2. Counting with Windows ( e.g. failure count every hour)
  3. Preprocessing: filtering, transformations (e.g. data cleanup)
  4. Alerts , thresholds (e.g. Alarm on high temperature)
  5. Data Correlation, Detect missing events, detecting erroneous data (e.g. detecting failed sensors)
  6. Joining event streams (e.g. detect a hit on soccer ball)
  7. Merge with data in a database, collect, update data conditionally
  8. Detecting Event Sequence Patterns (e.g. small transaction followed by large transaction)
  9. Tracking - follow some related entity’s state in space, time etc. (e.g. location of airline baggage, vehicle, tracking wild life)
  10. Detect trends – Rise, turn, fall, Outliers, Complex trends like triple bottom etc., (e.g. algorithmic trading, SLA, load balancing)
  11. Learning a Model (e.g. Predictive maintenance)
  12. Predicting next value and corrective actions (e.g. automated car)
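As a taste of what such a query looks like, pattern 2 (counting with windows) can be written in a few lines in a SQL-like CEP language. Here is a sketch in WSO2 Siddhi-style syntax; the stream and attribute names are invented for illustration:

```
from FailureStream#window.time(1 hour)
select component, count(component) as failureCount
group by component
insert into HourlyFailureStream;
```

The sliding window, grouping, and output stream are all declared; the runtime takes care of the windowing algorithm.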

Why do we need a SQL-like query language for Realtime Streaming Analytics?

Each of the above has come up in real use cases, and we have implemented them using SQL-like CEP query languages. Knowing the internals of implementing core CEP concepts like sliding windows and temporal query patterns, I do not think every streaming use case developer should rewrite them. The algorithms are not trivial, and they are very hard to get right!

Instead, we need higher levels of abstraction. We should implement these once and for all, and reuse them. The best lesson we can learn here comes from Hive and Hadoop, which do exactly that for batch analytics. I have explained Big Data with Hive many times, and most people get it right away. Hive has become the major programming API for most Big Data use cases.

Following is a list of reasons for a SQL-like query language.
  1. Realtime analytics is hard. Not every developer wants to hand-implement sliding windows, temporal event patterns, etc.
  2. Easy to follow and learn for people who know SQL, which is pretty much everybody.
  3. SQL-like languages are expressive, short, sweet and fast!!
  4. SQL-like languages define core operations that cover 90% of problems.
  5. Experts can still dig in when they like!
  6. Realtime analytics runtimes can better optimize execution with a SQL-like model. Most optimisations have already been studied, and there is a lot you can borrow from database optimisations.
Finally, what are such languages? There are many defined in the world of Complex Event Processing (e.g. WSO2 Siddhi, Esper, TIBCO StreamBase, IBM InfoSphere Streams etc.). SQLstream has a fully ANSI SQL compliant version. Last week I did a talk at Strata discussing this problem in detail and how CEP could fit the bill. You can find the slide deck below.

Madhuka Udantha: Pandas for Data Manipulation and Analysis

Pandas is a software library written for the Python programming language for data manipulation and analysis. In many organizations, it is common to research, prototype, and test new ideas using a more domain-specific computing language like MATLAB or R, and later port those ideas to be part of a larger production system written in, say, Java, C#, or C++. What people are increasingly finding is that Python is a suitable language not only for doing research and prototyping but also for building production systems.

It contains data structures and operations for manipulating numerical tables and time series. I noticed pandas while I was researching big data. It saved me hours in research, so I thought of writing some blog posts on pandas. It contains

  • Data structures
  • Date range generation Index objects (simple axis indexing and multi-level / hierarchical axis indexing)
  • Data Wrangling (Clean, Transform, Merge, Reshape)
  • Grouping (aggregating and transforming data sets)
  • Interacting with the data/files (tabular data and flat files (CSV, delimited, Excel))
  • Statistical functions (Rolling statistics/ moments)
  • Static and moving window linear and panel regression
  • Plotting and Visualization


Let's do some coding. First we import as follows:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

1. Date Generation

1.1 Creating a Series by passing a list of values

series  = pd.Series([1,2,4,np.nan,5,7])
print series





1.2 Random sample values

The numpy.random module supplements the built-in Python random module with functions for efficiently generating whole arrays of sample values from many kinds of probability distributions.

samples = np.random.normal(size=(4, 4))
print samples



1.3 Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

dates = pd.date_range('20150201',periods=5)

df = pd.DataFrame(np.random.randn(5,3),index=dates,columns=list(['stock A','stock B','stock C']))
print "Colombo Stock Exchange Growth - 2015 Feb"
print (45*"=")
print df



1.4 Statistic summary

We can view a quick statistic summary of the data with describe:

print df.describe()


1.5 Sorting

Now we want to sort by the values in one or more columns. Therefore we pass one or more column names to the 'by' option.
e.g. We can sort the data by the increment of 'stock A' as below:

df.sort_index(by='stock A')



To sort by multiple columns, pass a list of names:
df.sort_index(by=['stock A','stock B'])

df.sort(columns=None, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last') [2]

1.6 Ranking
DataFrame can compute ranks over the rows or the columns


df.rank(axis=0, numeric_only=None, method='average', na_option='keep', ascending=True, pct=False) [3]

[NOTE] Tie-breaking methods with rank

  • 'average' - Default: assign the average rank to each entry in the equal group.
  • 'min' - Use the minimum rank for the whole group.
  • 'max' - Use the maximum rank for the whole group.
  • 'first' - Assign ranks in the order the values appear in the data.
  • 'dense' - Like 'min', but rank always increases by 1 between groups.
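The tie-breaking methods are easiest to see on a small series (sample data invented for illustration):

```python
import pandas as pd

s = pd.Series([7, -5, 7, 4, 2])

# 'average' (the default): the two 7s occupy ranks 4 and 5, so each gets 4.5
print(s.rank())

# 'first': ties are broken by order of appearance, so the first 7 gets rank 4
print(s.rank(method='first'))
```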

1.7 Descriptive Statistics Methods
pandas objects are equipped with a set of common mathematical and statistical methods. Most of these fall into the category of reductions or summary statistics: methods that extract a single value (like the sum or mean).



We need the total increment of stock A, stock B and stock C for each day:
print df.sum(axis=1)

We need the day (date) with the highest increment for each stock:
print df.idxmax()
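Since the DataFrame above is filled with random numbers, here is the same idea with fixed values (invented for illustration) so the two calls are easy to verify:

```python
import pandas as pd

dates = pd.date_range('20150201', periods=3)
df = pd.DataFrame([[1.0, 2.0, 3.0],
                   [4.0, 0.5, 1.5],
                   [2.0, 6.0, 0.5]],
                  index=dates, columns=['stock A', 'stock B', 'stock C'])

# Total increment per day: sum across the columns (axis=1)
print(df.sum(axis=1))    # 6.0, 6.0, 8.5

# Date of the highest increment for each stock: index label of the column-wise maximum
print(df.idxmax())
```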



Descriptive and summary statistics:

  • count - Number of non-NA values
  • describe - Compute set of summary statistics for Series or each DataFrame column
  • min, max - Compute minimum and maximum values
  • argmin, argmax - Compute index locations (integers) at which minimum or maximum value obtained, respectively
  • idxmin, idxmax - Compute index values at which minimum or maximum value obtained, respectively
  • quantile - Compute sample quantile ranging from 0 to 1
  • sum - Sum of values
  • mean - Mean of values
  • median - Arithmetic median (50% quantile) of values
  • mad - Mean absolute deviation from mean value
  • var - Sample variance of values
  • std - Sample standard deviation of values
  • skew - Sample skewness (3rd moment) of values
  • kurt - Sample kurtosis (4th moment) of values
  • cumsum - Cumulative sum of values
  • cummin, cummax - Cumulative minimum or maximum of values, respectively
  • cumprod - Cumulative product of values
  • diff - Compute 1st arithmetic difference (useful for time series)
  • pct_change - Compute percent changes

There are so many more features; I will go through them in my next posts.

Ajith Vitharana: Registry resource indexing in WSO2 servers

Registry resources are stored in the underlying database as blob content. If we want to search for a resource by the value of its content or attribute(s), we would have to read the entire content and scan through it to find the matching resource(s). When the resource count is high, that affects the performance of search operations. To get rid of that issue, we have introduced Apache Solr based content indexing/searching.

A set of WSO2 products (WSO2 API Manager, WSO2 Governance Registry, WSO2 User Engagement Server, etc.) use this feature to list and search the registry artifacts.

An embedded Solr server is configured and starts along with the server. The Solr configuration can be found in [Server_Home]/repository/conf/solr/conf.

Indexing is configured in the registry.xml file (e.g. in WSO2 API Manager):

    <indexingConfiguration>
        <startingDelayInSeconds>60</startingDelayInSeconds>
        <indexingFrequencyInSeconds>5</indexingFrequencyInSeconds>
        <!--number of resources submit for given indexing thread -->
        <batchSize>50</batchSize>
        <!--number of worker threads for indexing -->
        <indexerPoolSize>50</indexerPoolSize>
        <!-- location storing the time the indexing took place-->
        <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime</lastAccessTimeLocation>
        <!-- the indexers that implement the indexer interface for a relevant media type/(s) -->
        <indexers>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.MSExcelIndexer" mediaTypeRegEx="application/"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.MSPowerpointIndexer" mediaTypeRegEx="application/"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.MSWordIndexer" mediaTypeRegEx="application/msword"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.PDFIndexer" mediaTypeRegEx="application/pdf"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.XMLIndexer" mediaTypeRegEx="application/xml"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/wsdl\+xml" profiles="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/x-xsd\+xml" profiles="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/policy\+xml" profiles="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/vnd.(.)+\+xml" profiles="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.XMLIndexer" mediaTypeRegEx="application/(.)+\+xml"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.PlainTextIndexer" mediaTypeRegEx="text/(.)+"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.PlainTextIndexer" mediaTypeRegEx="application/x-javascript"/>
        </indexers>
        <exclusions>
            <exclusion pathRegEx="/_system/config/repository/dashboards/gadgets/swfobject1-5/.*[.]html"/>
            <exclusion pathRegEx="/_system/local/repository/components/org[.]wso2[.]carbon[.]registry/mount/.*"/>
        </exclusions>
    </indexingConfiguration>

When the server starts (at the time the registry indexing service is registered in OSGi), the indexing task is scheduled with a 60-second delay.
The indexing task then runs every 5 seconds to index newly added, updated and deleted resources.
batchSize is the maximum resource count one indexing task can handle.
indexerPoolSize is the indexing task thread pool size.
The time the last indexing task executed is stored in the lastAccessTimeLocation. (The next indexing task then fetches only the resources which were updated/added after the last indexing time. This prevents re-indexing a resource which is already indexed but has not been updated.)

<indexer class="org.wso2.carbon.registry.indexing.indexer.MSExcelIndexer" mediaTypeRegEx="application/"/>
There is a set of indexers to index different types of resources/artifacts based on media type.
 <exclusion pathRegEx="/_system/config/repository/dashboards/gadgets/swfobject1-5/.*[.]html"/>
You can exclude resources stored in the given path(s) from indexing.

How resources are selected for indexing:

1. The registry indexing task reads the activity logs from the REG_LOG table and filters out the logs which were added/updated after the timestamp stored in the lastAccessTimeLocation.

2. Then it finds the relevant indexer (configured in registry.xml) matching the media type. If a matching media type is found, it creates the indexable resource file and sends it to the Solr server to be indexed.

3. The Governance API (GenericArtifactManager) and Registry API (ContentBasedSearchService) provide APIs to search the indexed resources through the indexing client.


WSO2 API Manager stores API metadata as a configurable governance artifact. That metadata is indexed using the RXTIndexer. The Governance API provides an API to search the indexed API artifacts. The following client code searches for APIs whose state is "PUBLISHED" and visibility is "public".


import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.wso2.carbon.base.ServerConfiguration;
import org.wso2.carbon.governance.api.generic.GenericArtifactManager;
import org.wso2.carbon.governance.api.generic.dataobjects.GenericArtifact;
import org.wso2.carbon.governance.api.util.GovernanceUtils;
import org.wso2.carbon.governance.client.WSRegistrySearchClient;
import org.wso2.carbon.registry.core.Registry;
import org.wso2.carbon.registry.core.pagination.PaginationContext;
import org.wso2.carbon.registry.core.session.UserRegistry;
import org.wso2.carbon.registry.ws.client.registry.WSRegistryServiceClient;

import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SampleWSRegistrySearchClient {

    private static ConfigurationContext configContext = null;

    private static final String CARBON_HOME = "/home/ajith/wso2/packs/wso2am-1.8.0/";
    private static final String axis2Repo = CARBON_HOME + File.separator + "repository" +
            File.separator + "deployment" + File.separator + "client";
    private static final String axis2Conf = ServerConfiguration.getInstance().getFirstProperty("Axis2Config.clientAxis2XmlLocation");
    private static final String username = "admin";
    private static final String password = "admin";
    private static final String serverURL = "https://localhost:9443/services/";
    private static Registry registry;

    private static WSRegistryServiceClient initialize() throws Exception {

        System.setProperty("javax.net.ssl.trustStore", CARBON_HOME + File.separator + "repository" +
                File.separator + "resources" + File.separator + "security" + File.separator +
                "wso2carbon.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
        System.setProperty("javax.net.ssl.trustStoreType", "JKS");
        System.setProperty("carbon.repo.write.mode", "true");
        configContext = ConfigurationContextFactory.createConfigurationContextFromFileSystem(
                axis2Repo, axis2Conf);
        return new WSRegistryServiceClient(serverURL, username, password, configContext);
    }

    public static void main(String[] args) throws Exception {
        try {
            registry = initialize();
            Registry gov = GovernanceUtils.getGovernanceUserRegistry(registry, "admin");
            // The governance artifacts should be loaded first.
            GovernanceUtils.loadGovernanceArtifacts((UserRegistry) gov);
            // Initialize the pagination context.
            PaginationContext.init(0, 20, "", "", 10);
            // This should be executed to initialize the AttributeSearchService.
            WSRegistrySearchClient wsRegistrySearchClient =
                    new WSRegistrySearchClient(serverURL, username, password, configContext);
            // Initialize the GenericArtifactManager.
            GenericArtifactManager artifactManager = new GenericArtifactManager(gov, "api");
            // Create the search attribute map.
            Map<String, List<String>> listMap = new HashMap<String, List<String>>();
            listMap.put("overview_status", new ArrayList<String>() {{
                add("PUBLISHED");
            }});
            listMap.put("overview_visibility", new ArrayList<String>() {{
                add("public");
            }});
            // Find the results.
            GenericArtifact[] genericArtifacts = artifactManager.findGenericArtifacts(listMap);

            for (GenericArtifact artifact : genericArtifacts) {
                System.out.println(artifact.getPath());
            }
        } finally {
            PaginationContext.destroy();
        }
    }
}


Sohani Weerasinghe

Installing a new keystore into WSO2 Products

Basically, WSO2 Carbon based products are shipped with a default keystore (wso2carbon.jks), which can be found in the <CARBON_HOME>/repository/resources/security directory. This keystore has a private/public key pair which is mainly used to encrypt sensitive information.

When the products are deployed in a production environment, it is better to replace this default keystore with one containing self-signed or CA-signed certificates.

1). Create a new keystore with a private/public key pair using keytool, which is shipped with the JDK installation.

Go to <CARBON_HOME>/repository/resources/security directory and type the following command

keytool -genkey -alias testcert -keyalg RSA -keysize 1024 -keypass testpassword -keystore testkeystore.jks -storepass testpassword

Then you have to provide the necessary information in order to construct the DN of the certificate.
After you enter the information, the created keystore can be found at the above location.

Note: You can view the contents of the generated keystore by:

keytool -list -v -keystore testkeystore.jks -storepass testpassword

2). In order to get the public certificate signed, you can use one of two options: keep it self-signed, or get it signed by a certificate authority (CA).

3). Export your public certificate from the keystore and import it into the trust store.

In WSO2 Carbon products, this trust store is set as client-truststore.jks which resides in the same above directory as the keystore.

Now we have to import the new public certificate into this trust store for Front End and Back End communication.

  • Export the new public certificate:

keytool -export -alias testcert -keystore testkeystore.jks -storepass testpassword -file testcert.pem

This will export the public certificate into a file called testcert.pem in the same directory.

  • Import it into client-truststore.jks with following command:

keytool -import -alias testnewcert -file testcert.pem -keystore client-truststore.jks -storepass wso2carbon

(Password of client-truststore.jks keystore is: wso2carbon)

4). Change the below configuration files:

Go to  <CARBON_HOME>/repository/conf  and point the new keystore as below:

  •  carbon.xml 
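For example, the KeyStore section of carbon.xml can be pointed at the keystore created in step 1. This is a sketch using the alias and passwords chosen above; the exact surrounding elements may differ slightly between Carbon versions:

```xml
<KeyStore>
    <!-- Keystore file location -->
    <Location>${carbon.home}/repository/resources/security/testkeystore.jks</Location>
    <!-- Keystore type (JKS/PKCS12 etc.) -->
    <Type>JKS</Type>
    <!-- Keystore password -->
    <Password>testpassword</Password>
    <!-- Private Key alias -->
    <KeyAlias>testcert</KeyAlias>
    <!-- Private Key password -->
    <KeyPassword>testpassword</KeyPassword>
</KeyStore>
```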


  •  axis2.xml (only for WSO2 ESB: the ESB uses a separate HTTPS transport sender and receiver for the services exposed over HTTPS, and the keystore used for this purpose is specified in the following configuration)

<transportReceiver name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener">
    <parameter name="port" locked="false">8243</parameter>
    <parameter name="non-blocking" locked="false">true</parameter>
    <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter>
    <parameter name="keystore" locked="false">
        <KeyStore>
            <Location>repository/resources/security/testkeystore.jks</Location>
            <Type>JKS</Type>
            <Password>testpassword</Password>
            <KeyPassword>testpassword</KeyPassword>
        </KeyStore>
    </parameter>
    <parameter name="truststore" locked="false">
        <TrustStore>
            <Location>repository/resources/security/client-truststore.jks</Location>
            <Type>JKS</Type>
            <Password>wso2carbon</Password>
        </TrustStore>
    </parameter>
</transportReceiver>

<transportSender name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLSender">
    <parameter name="non-blocking" locked="false">true</parameter>
    <!-- keystore and truststore parameters: same structure as in the receiver above -->
</transportSender>

Chandana NapagodaMonitor WSO2 carbon Instance using New Relic

While I was doing a performance analysis of WSO2 Governance Registry, I was looking for a way to monitor information about Apache Solr and its performance numbers. While reading the "Apache Solr 3 Enterprise Search Server" book, I found a real-time monitoring tool (site) called New Relic.

So I was able to integrate New Relic with the WSO2 Governance Registry server and monitor a lot of information about the server instance. I found that the Java agent self-installer was not working for my scenario, so I had to set the Java agent information in JAVA_OPTS. After a few minutes (around 2 minutes), I was able to view my server-related information in the New Relic console.

Here is the JAVA_OPTS which I have used:

export JAVA_OPTS="$JAVA_OPTS -javaagent:/home/chandana/Documents/g-reg/newrelic/newrelic.jar"

newrelic Java agent self-installer :

Chandana NapagodaRemove duplicate XML elements using XSLT

Today I faced an issue where I was receiving an XML message with duplicate elements. I wanted to remove those duplicate elements based on a condition, so I came up with an XSLT which does that.

My XML input:

<OurGuestsCollection  xmlns="">


<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output omit-xml-declaration="yes" indent="yes"/>

    <!-- identity template: copy everything as-is -->
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
    </xsl:template>

    <!-- copy an OurGuests element only if no later OurGuests has the same firstname -->
    <xsl:template match="m0:OurGuests" xmlns:m0="" >
        <xsl:if test="not(following::m0:OurGuests[m0:firstname=current()/m0:firstname])">
            <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
        </xsl:if>
    </xsl:template>
</xsl:stylesheet>

XML Output :

<OurGuestsCollection xmlns="">

Sohani Weerasinghe

Testing secured proxies using a security client 

Please follow the below steps to test a secured proxy

1. Create a Java project containing the security client class and a properties file

2. Add the following configuration parameters to the properties file

clientRepo = path to the client repository. A sample repo can be found in the ESB_HOME/samples/axis2Server/repository location.

clientKey = path to the client's keystore. Here I am using the same keystore (wso2carbon.jks); you can find it in ESB_HOME/repository/resources/security.

securityPolicyLocation = path to the client-side security policy files. You can find the 15 policy files here.

trustStore = the trust store used for SSL communication over HTTPS. You can use the same keystore for this (wso2carbon.jks).

securityScenarioNo = the number of the security scenario used to secure the service (e.g. if it is non-repudiation, it is 2).

SoapAction = you can find it in the WSDL.

endpointHttp = HTTP endpoint of the proxy service.

endpointHttpS = HTTPS endpoint of the proxy service.

body = the body part of your SOAP message.

Sample configuration

clientKey =/home/sohani/Downloads/Desktop/ServerUP/new/wso2esb-4.8.1/repository/resources/security/wso2carbon.jks
SoapAction =urn:mediate
endpointHttp =http://localhost:8280/services/SampleProxy
endpointHttpS =https://localhost:8243/services/SampleProxy

3. Copy Following Java code

import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;
import org.apache.axiom.om.util.AXIOMUtil;
import org.apache.neethi.Policy;
import org.apache.neethi.PolicyEngine;
import org.apache.rampart.policy.model.RampartConfig;
import org.apache.rampart.policy.model.CryptoConfig;
import org.apache.rampart.RampartMessageData;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.client.Options;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.ws.security.WSPasswordCallback;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class SecurityClient implements CallbackHandler {

    public static void main(String[] args) {

        SecurityClient securityCl = new SecurityClient();
        OMElement result = null;
        try {
            result = securityCl.runSecurityClient();
            System.out.println(result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public OMElement runSecurityClient() throws Exception {

        Properties properties = new Properties();
        // path to the properties file created in step 2
        File file = new File("/home/sohani/workspace_new/TestClient/src/ ");
        FileInputStream freader = new FileInputStream(file);
        properties.load(freader);
        String clientRepo = properties.getProperty("clientRepo");
        String endpointHttpS = properties.getProperty("endpointHttpS");
        String endpointHttp = properties.getProperty("endpointHttp");
        int securityScenario = Integer.parseInt(properties.getProperty("securityScenarioNo"));
        String clientKey = properties.getProperty("clientKey");
        String SoapAction = properties.getProperty("SoapAction");
        String body = properties.getProperty("body");
        String trustStore = properties.getProperty("trustStore");
        String securityPolicy = properties.getProperty("securityPolicyLocation");

        OMElement result = null;

        System.setProperty("javax.net.ssl.trustStore", trustStore);
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");

        ConfigurationContext ctx = ConfigurationContextFactory.createConfigurationContextFromFileSystem(clientRepo, null);
        ServiceClient sc = new ServiceClient(ctx, null);

        Options opts = new Options();
        if (securityScenario == 1) {
            // the UsernameToken scenario must go over HTTPS
            opts.setTo(new EndpointReference(endpointHttpS));
        } else {
            opts.setTo(new EndpointReference(endpointHttp));
        }
        opts.setAction(SoapAction);

        try {
            String securityPolicyPath = securityPolicy + File.separator + "scenario" + securityScenario + "-policy.xml";
            opts.setProperty(RampartMessageData.KEY_RAMPART_POLICY, loadPolicy(securityPolicyPath, clientKey));
        } catch (Exception e) {
            e.printStackTrace();
        }

        sc.setOptions(opts);
        sc.engageModule("rampart");
        result = sc.sendReceive(AXIOMUtil.stringToOM(body));
        return result;
    }
    public Policy loadPolicy(String xmlPath, String clientKey) throws Exception {

        StAXOMBuilder builder = new StAXOMBuilder(xmlPath);
        Policy policy = PolicyEngine.getPolicy(builder.getDocumentElement());

        RampartConfig rc = new RampartConfig();
        rc.setUser("admin");
        rc.setUserCertAlias("wso2carbon");
        rc.setEncryptionUser("wso2carbon");
        rc.setPwCbClass(SecurityClient.class.getName());

        CryptoConfig sigCryptoConfig = new CryptoConfig();
        sigCryptoConfig.setProvider("org.apache.ws.security.components.crypto.Merlin");

        Properties prop1 = new Properties();
        prop1.put("org.apache.ws.security.crypto.merlin.keystore.type", "JKS");
        prop1.put("org.apache.ws.security.crypto.merlin.file", clientKey);
        prop1.put("org.apache.ws.security.crypto.merlin.keystore.password", "wso2carbon");
        sigCryptoConfig.setProp(prop1);

        CryptoConfig encrCryptoConfig = new CryptoConfig();
        encrCryptoConfig.setProvider("org.apache.ws.security.components.crypto.Merlin");

        Properties prop2 = new Properties();
        prop2.put("org.apache.ws.security.crypto.merlin.keystore.type", "JKS");
        prop2.put("org.apache.ws.security.crypto.merlin.file", clientKey);
        prop2.put("org.apache.ws.security.crypto.merlin.keystore.password", "wso2carbon");
        encrCryptoConfig.setProp(prop2);

        rc.setSigCryptoConfig(sigCryptoConfig);
        rc.setEncrCryptoConfig(encrCryptoConfig);

        policy.addAssertion(rc);
        return policy;
    }

    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {

        WSPasswordCallback pwcb = (WSPasswordCallback) callbacks[0];
        String id = pwcb.getIdentifer();
        int usage = pwcb.getUsage();

        if (usage == WSPasswordCallback.USERNAME_TOKEN) {
            // UsernameToken scenarios: supply the user's password
            if ("admin".equals(id)) {
                pwcb.setPassword("admin");
            }
        } else if (usage == WSPasswordCallback.SIGNATURE || usage == WSPasswordCallback.DECRYPT) {
            // signature/encryption scenarios: supply the private key password
            if ("wso2carbon".equals(id)) {
                pwcb.setPassword("wso2carbon");
            }
        }
    }
}
4. Add relevant libraries to your class path

It is easy: go to ESB_HOME/bin and run the ant command. You will see the created jar files in the ESB_HOME/repository/lib directory. Do not forget to add saxon9he.jar, which is in the ESB_HOME/lib/endorsed directory.

5. Then run your secured client

Chanaka JayasenaDisplaying a resource in the WSO2 registry with jaggery

var carbon = require('carbon');
var server = new carbon.server.Server();
var options = {
    username: 'admin',
    tenantId: -1234
};
var reg = new carbon.registry.Registry(server, options);
var path = '/_system/es/cartoon001gossiplankanews.png';
var resource = reg.get(path);
response.contentType = resource.mediaType;
// write the resource content to the response so it is displayed
print(resource.content);

sanjeewa malalgodaHow monitor WSO2 server CPU usage and generate thread dump on high CPU usage using simple shell script

When we deploy WSO2 servers in production, we may need to monitor them for high CPU and memory usage. So in this article I will describe how we can use a simple shell script to monitor server CPU usage and generate a thread dump using the jstack command.

First you need to create a new script file.

Then paste in the following script content.

#!/bin/bash
# Usage: 1: ['command name' or PID number(,s)] 2: MAX_CPU_PERCENT
[[ $# -ne 2 ]] && exit 1
PID_NAMES=$1
MAX_CPU=$2
# get all PIDS as nn,nn,nn
if [[ ! "$PID_NAMES" =~ ^[0-9,]+$ ]] ; then
    PIDS=$(pgrep -d ',' -x $PID_NAMES)
else
    PIDS=$PID_NAMES
fi
#  echo "$PIDS $MAX_CPU"
# round the CPU limit to the nearest integer
MAX_CPU="$(echo "($MAX_CPU+0.5)/1" | bc)"
LOOP=1
while [[ $LOOP -eq 1 ]] ; do
    sleep 0.3s
    # Depending on your 'top' version and OS you might have
    #   to change head and tail line-numbers
    LINE="$(top -b -d 0 -n 1 -p $PIDS | head -n 8 \
        | tail -n 1 | sed -r 's/[ ]+/,/g' | \
        sed -r 's/^\,|\,$//')"
    # If multiple processes in $PIDS, $LINE will only match
    #   the most active process
    CURR_PID=$(echo "$LINE" | cut -d ',' -f 1)
    # calculate cpu limits
    CURR_CPU_FLOAT=$(echo "$LINE" | cut -d ',' -f 9)
    CURR_CPU=$(echo "($CURR_CPU_FLOAT+0.5)/1" | bc)
    echo "PID $CURR_PID: $CURR_CPU""%"
    if [[ $CURR_CPU -ge $MAX_CPU ]] ; then
        now=$(date)
        echo "PID $CURR_PID ($PID_NAMES) went over $MAX_CPU""% on $now"
        jstack $CURR_PID > "./${now}+jlog.txt"
        echo "[[ $CURR_CPU""% -ge $MAX_CPU""% ]]"
        LOOP=0
    fi
done
echo "Stopped"

Then we need to get the process ID of the running WSO2 server by running the following command.

sanjeewa@sanjeewa-ThinkPad-T530:~/work$ jps
30755 Bootstrap
8543 Jps
4892 Main

Now we know the carbon server is running with process ID 30755. Then we can start our script by providing the initial parameters (process ID and CPU limit). It will keep printing the CPU usage in the terminal, and once usage reaches the limit it will take a thread dump using the jstack command: it creates a new file named with the current date/time and writes the jstack output to it.

We can start the script like this:
 sh <processId> <CPU Limit>

sanjeewa@sanjeewa-ThinkPad-T530:~/work$ sh 30755 10
PID 30755: 0%
PID 30755: 0%
PID 30755: 0%
PID 30755: 0%
PID 30755 (30755) went over 10% on Thursday, 19 February 2015 14:44:55 +0530
[[ 13% -ge 10% ]]

As you can see, when CPU usage goes above 10% it creates the log file and appends the thread dump.

Sivajothy VanjikumaranWSO2 ESB SOAP headers lost

During connector development, we experienced that some of the SOAP header information was being dropped by the ESB.

Please find the retrieved SOAP headers from direct API call and ESB call below.

Response from Direct API call
Response from ESB call
         <wsu:Timestamp wsu:Id="Timestamp-abd7433b-821f-4a23-861e-83ade6857961">
         <wsu:Timestamp wsu:Id="Timestamp-ec0a6c73-4633-437a-a555-6482f6a72f5d">

We can observe in the wire log that the API returns the complete set of header information to the ESB, yet the ESB returns only a selected set of it, as shown above.

The reason for this issue is that the WS-Addressing headers are removed while sending out. This can be solved by introducing the Synapse property PRESERVE_WS_ADDRESSING ( <property name="PRESERVE_WS_ADDRESSING" value="true" scope="default"/> ).

Further details can be found in [1].

Chathurika Erandi De SilvaForking with WSO2 App Factory - Part 2

This is the continuation of the previous blog post, where we discussed forking the main repository, changing the master of the fork, and merging the changes from the fork master to the main repository's master.

In this post we will discuss using WSO2 App Factory to fork the main repository, change a branch of the fork, and merge the changes from the fork branch to the main repository branch.

From Fork branch to Main branch

1. Go to your application in WSO2 App Factory and fork the main repository. When forked, a separate repository is created for you

2. Clone the forked repository in to your local machine.

E.g. git clone

3. Change to the relevant branch where you need to work. In this case it will be 1.0.0

E.g. git checkout 1.0.0

4. Do the needed code changes and save them

5. Add the changes to the GIT repo

E.g. git add * (this will add all the changes)

6. Commit the changes to the GIT repo

E.g. git commit -am "Commit"

7. Push the commits to the remote fork branch

E.g. git push

Now we have changed the remote fork repository with the needed code changes. We can now merge the changes into the relevant branch of the main repository. To do this, follow the steps below.

1. Clone the main repository to your local machine

E.g. git clone

2. Go inside the cloned folder

3. Change to the relevant branch. In this instance it will be 1.0.0. E.g. git checkout 1.0.0

4. Now we have to add our forked repository as a remote repository in the main repository.

E.g. git remote add -f b

5. After this command if you issue a git branch -r you will see the remote repository added as shown below

  origin/HEAD -> origin/master

6. Now we have to get the difference between the remote branch and our main  repository's branch.

E.g. git diff origin/1.0.0 remotes/b/1.0.0 > jagapp8.diff

7. Now we can apply the diff to our main master repository

E.g. git apply jagapp8.diff

8. Next add the changes by git add *

9. Commit the changes by git commit -am "commit"

10. Push the changes to the remote branch. git push

Chathurika Erandi De SilvaForking with WSO2 App Factory - Part 1

In WSO2 App Factory, we can fork a git repository and later merge it to the main repository. There are two ways in which this merging can be done.

1. From Fork master to Main master
2. From Fork branch to Main branch


Get to know a bit about what a git fork is before reading this post :-)

From Fork master to Main master

1. Go to your application in WSO2 App Factory and fork the main repository. When forked, a separate repository is created for you

2. Clone the forked repository in to your local machine.

E.g. git clone

3. Since we are working in the fork master we are not changing to a branch

4. Do the needed code changes and save them

5. Add the changes to the GIT repo

E.g. git add * (this will add all the changes)

6. Commit the changes to the GIT repo

E.g. git commit -am "Commit"

7. Push the commits to the remote fork branch

E.g. git push

Now we have changed the remote fork repository with the needed code changes. We can now merge the changes into the main master repository. To do this, follow the steps below.

1. Clone the main repository to your local machine

E.g. git clone

2. Go inside the cloned folder

3. Now we have to add our forked repository as a remote repository in the main repository.

E.g. git remote add -f b

4. After this command if you issue a git branch -r you will see the remote repository added as shown below

  origin/HEAD -> origin/master

5. Now we have to get the difference between the remote master and our main master repository.

E.g. git diff origin/master remotes/b/master > jagapp8.diff

6. Now we can apply the diff to our main master repository

E.g. git apply jagapp8.diff

7. Next add the changes by git add *

8. Commit the changes by git commit -am "commit"

9. Push the changes to the remote branch. git push

In my next post I will discuss the second method, which is merging the changes of a fork branch into a main branch.

Shelan PereraHow to use python BOTO framework to connect to Amazon EC2

The Python Boto framework is an excellent tool to automate things with Amazon. You have almost everything you need to automate Amazon EC2 in this comprehensive library.

A quick guide on how to use it in your project

1) Configure your EC2 credentials to be used by your application using one of the following.

  • /etc/boto.cfg - for site-wide settings that all users on this machine will use
  • ~/.boto - for user-specific settings
  • ~/.aws/credentials - for credentials shared between SDKs

2) Refer to the API and choose what you need to do.

3) Sample code for a simple autoscaler written using the Boto framework. You may reuse the code in your projects quickly. This autoscaler spawns new instances based on the spike pattern of the load.
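The original sample code is not shown here, so below is a minimal, hedged sketch of such an autoscaler using the boto2 EC2 API. The region, AMI id and thresholds are placeholders; the scale-out decision is kept as a pure function so the "sustained spike" logic is separate from the AWS calls:

```python
SCALE_OUT_CPU = 70.0  # percent; placeholder threshold


def should_scale_out(recent_cpu_averages, threshold=SCALE_OUT_CPU):
    """Scale out only when every recent sample exceeds the threshold,
    i.e. the spike is sustained rather than momentary."""
    return bool(recent_cpu_averages) and all(c > threshold for c in recent_cpu_averages)


def scale_out(region='us-east-1', ami='ami-12345678'):
    # boto is imported here so the decision logic above works without it installed
    import boto.ec2
    conn = boto.ec2.connect_to_region(region)  # uses the credentials from step 1
    reservation = conn.run_instances(ami, instance_type='t1.micro')
    return reservation.instances[0].id


if __name__ == '__main__':
    # In a real autoscaler these samples would come from CloudWatch's
    # CPUUtilization metric for the running instances.
    samples = [82.0, 77.5, 91.0]
    if should_scale_out(samples):
        print('sustained spike detected - spawning a new instance')
```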

Dimuthu De Lanerolle

Java Tips .....

To get directory names inside a particular directory ....

private String[] getDirectoryNames(String path) {

    File fileName = new File(path);
    String[] directoryNamesArr = fileName.list(new FilenameFilter() {
        public boolean accept(File current, String name) {
            return new File(current, name).isDirectory();
        }
    });
    // log the result (any logger will do here)
    log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
    return directoryNamesArr;
}

To retrieve links on a web page ......

private List<String> getLinks(String url) throws ParserException {
    Parser htmlParser = new Parser(url);
    List<String> links = new LinkedList<String>();

    NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
    for (int x = 0; x < tagNodeList.size(); x++) {
        LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
        String linkName = loopLinks.getLink();
        links.add(linkName);
    }
    return links;
}

To search for all files in a directory recursively from the file/s extension/s ......

private List<String> getFilesWithSpecificExtensions(String dirPath) {

    // extension list - do not specify the "."
    List<File> files = (List<File>) FileUtils.listFiles(new File(dirPath),
            new String[]{"txt"}, true);

    List<String> filePaths = new ArrayList<String>();
    Iterator<File> itFileList = files.iterator();
    while (itFileList.hasNext()) {
        File file = itFileList.next();
        filePaths.add(file.getAbsolutePath());
    }
    return filePaths;
}

Reading files in a zip

    public static void main(String[] args) throws IOException {
        final ZipFile file = new ZipFile("Your zip file path goes here");
        try {
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements()) {
                final ZipEntry entry = entries.nextElement();
                System.out.println("Entry " + entry.getName());
                readInputStream(file.getInputStream(entry));
            }
        } finally {
            file.close();
        }
    }

    private static int readInputStream(final InputStream is) throws IOException {
        final byte[] buf = new byte[8192];
        int read = 0;
        int cntRead;
        while ((cntRead = is.read(buf, 0, buf.length)) >= 0) {
            read += cntRead;
        }
        return read;
    }

Converting Object A to Long[]

        // 'oo' is an Object known to hold a long[]
        long[] primitiveArray = (long[]) oo;
        Long[] boxedArray = new Long[primitiveArray.length];
        int i = 0;

        for (long temp : primitiveArray) {
            boxedArray[i++] = temp;
        }

Getting cookie details on HTTP clients

import org.apache.http.impl.client.DefaultHttpClient;
HttpClient httpClient = new DefaultHttpClient();
((DefaultHttpClient) httpClient).getCookieStore().getCookies(); 

 HttpPost post = new HttpPost(URL);
        post.setHeader("User-Agent", USER_AGENT);
        post.addHeader("Referer",URL );
        List<NameValuePair> urlParameters = new ArrayList<NameValuePair>();
        urlParameters.add(new BasicNameValuePair("username", "admin"));
        urlParameters.add(new BasicNameValuePair("password", "admin"));
        urlParameters.add(new BasicNameValuePair("sessionDataKey", sessionKey));
        post.setEntity(new UrlEncodedFormEntity(urlParameters));
        return httpClient.execute(post);

Ubuntu Commands

1. Getting the process listening to a given port (eg: port 9000) 

sudo netstat -tapen | grep ":9000 "

Ushani BalasooriyaA sample on a WSO2 ESB proxy with a DBLookup mediator and a dedicated faultSequence to execute during an error

Databases :

The customerDetails database should be created with the required tables.

DB : customerDetails
Table : customers

Proxy :

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="TestProxyUsh" transports="https http">
   <target>
      <inSequence onError="testError">
         <dblookup>
            <connection><pool><!-- driver, url, user and password for the customerDetails database --></pool></connection>
            <statement>
               <sql>SELECT * FROM customers WHERE name =?</sql>
               <parameter value="WSO2" type="VARCHAR"/>
               <result name="company_id" column="id"/>
            </statement>
         </dblookup>
         <log level="custom">
            <property name="text" value="An unexpected error occurred for the service"/>
         </log>
      </inSequence>
   </target>
</proxy>

onError targeting sequence :

<sequence xmlns="http://ws.apache.org/ns/synapse" name="testError">
   <property name="error" value="Error Occurred"/>
</sequence>

Invoke :

curl -v http://host:8280/services/TestProxyUsh

Ushani BalasooriyaOne possible reason and solution for getting Access denied for user 'username'@'host' in mysql when accessing from outside

Sometimes we might get the below exception when creating a DB connection.

org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Access denied for user 'username'@''

Reasons :

The user might not be allowed to connect to the database from that particular remote host/IP.

Therefore you need to grant privileges with the below command in mysql:

GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY 'password';


Sometimes you might try the same GRANT command without specifying IDENTIFIED BY 'password'.

Even though it says Query OK, the issue can still be there, since you have not specified the password.

Also, you must run FLUSH PRIVILEGES; for the change to take effect.

Hope this might help sometimes.

Ushani BalasooriyaA possible reason and solution for getting 400 Bad Request even when the synapse configuration in WSO2 ESB is correct

Sometimes you might spend lot of time in analyzing the reason for getting 400 bad request when you invoke an up and running backend service through WSO2 ESB.

When you configure and save the synapse configuration of your proxy, you will not see any issue. But there can be a hidden reason behind this error.

One good example is explained below :

Look at the below synapse configuration of WSO2 ESB proxy service.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="statuscodeProxy" transports="https http">
   <target>
      <inSequence>
         <log level="full"/>
         <send>
            <endpoint>
               <address uri=""/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <log level="full"/>
         <send/>
      </outSequence>
   </target>
</proxy>

By the look of it, this is alright and you are good to configure it in WSO2 ESB. But the endpoint defined in the address URI is not exactly the endpoint the backend service expects.

Sometimes there can be missing parameters, etc. But if you know for sure that your backend service does not expect any parameters and it still throws a 400 Bad Request, can you imagine the reason for this?

After spending hours, I accidentally found a possible reason :

It was because the backend expects a / at the end of the endpoint URL, while the proxy was calling it without one.

If you configure a tcpmon in between the ESB and the backend, you will see the difference as below :

GET  HTTP/1.1   - 400 Bad Request
GET / HTTP/1.1  - 200 OK
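As a small illustration of why this matters (the URLs below are hypothetical), the path with and without the trailing slash are distinct resources, so a backend that routes strictly can accept one and reject the other:

```java
import java.net.URI;

public class TrailingSlashCheck {
    // Extracts the request path a strict backend router would match against.
    static String pathOf(String url) {
        return URI.create(url).getPath();
    }

    public static void main(String[] args) {
        // The same service with and without a trailing slash are distinct paths:
        System.out.println(pathOf("http://backend:8080/service/")); // /service/
        System.out.println(pathOf("http://backend:8080/service"));  // /service
    }
}
```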

So if you too come across this situation, please try out this option. Sometimes this might save your time! :)

Ushani BalasooriyaOne Possible Reason and solution for dropping the client's custom headers when sending through nginx

Problem :

Sometimes you may have experienced that custom header values defined and sent by the client get dropped when the request passes through nginx, in a setup fronted by nginx.

Solution :

1. First you should check whether the custom headers  contain underscores (_)

2. If so you need to check whether you have configured underscores_in_headers in your nginx.conf under each server.

3. By default, underscores_in_headers is off.

4. Therefore you need to configure it to on.

Reason :

The nginx documentation describes this directive as follows:

It enables or disables the use of underscores in client request header fields. When the use of underscores is disabled, request header fields whose names contain underscores are marked as invalid and become subject to the ignore_invalid_headers directive.

Ref: /http/ngx_http_core_module.html

Example :

Sample configuration would be as below :

server {
      listen 8280;
      underscores_in_headers on;

      location / {
             proxy_set_header X-Forwarded-Host $host;
             proxy_set_header X-Forwarded-Server $host;
      }
}

Ushani BalasooriyaReason and solution for not loading WSDL in a clustered setup fronted by NGINX

For more information about nginx http proxy module please refer :

Issue :
Sometimes a WSDL in a clustered setup fronted by NGINX might not be loaded.

Reason :

By default, nginx uses HTTP version 1.0 for proxying. Version 1.1 is recommended for use with keepalive connections. Because of this, sometimes the WSDL will not be loaded completely.

You can confirm it by running a curl command against the WSDL URL exposed through nginx. Then you will get only half of the WSDL.

Solution :

To avoid this, you will have to edit the nginx configuration file to set its HTTP version.

       proxy_http_version 1.1;

Example :

As an example,

server {
      listen 8280;

      location / {
             proxy_set_header X-Forwarded-Host $host;
             proxy_set_header X-Forwarded-Server $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

             proxy_http_version 1.1;

             proxy_pass http://esbworkers;
             proxy_redirect http://esbworkers;
      }
}


This configuration is applicable for nginx versions 1.1.4 and above.

Krishanthi Bhagya SamarasingheSteps to install Rails on Windows

Instigation : Currently I am having a training on ROR (Ruby on Rails) for a project which is more concerned with productivity than performance. Since Rails is based on Ruby, which is a scripting language, its productivity is very high. However, its high memory consumption directly affects its performance.


  • If you have successfully installed Ruby, you can skip steps 1 and 2.
  • If you have successfully configured Ruby development kit, you can skip steps 3 and 4.

Step 1: Download the required ruby version from the following location.

Step 2: Run the executable file and select 3 options while running.

Selecting those options will configure Ruby on your environment.

Step 3: Download and run the development kit which is relevant to the ruby installation, from the following link.

This will ask you to extract files and it is better to give the same root location where ruby is installed.

Step 4: Open a command prompt and go to the development kit extracted location. Run the following commands.
  1. ruby dk.rb init
  2. ruby dk.rb install
Step 5: Open a command prompt and run the following command to install Rails. You can run it from any location; it will find the Ruby installation and install Rails into it, along with the gems required to run Rails on Ruby.

gem install rails

Step 6: On the command prompt go to a location where you like to create your Rails project and type:
rails new <ProjectName>

This will create the project with its folder structure.

Step 7: cd into <ProjectName> and run "bundle install". There you might get an SSL certificate issue. Open the "Gemfile" in the project folder, edit the source value's protocol to "http", and run the command again.

Step 8: Cd  <ProjectName>/bin, and run "rails server".

Step 9: Go to http://localhost:3000 from your browser. If you see the following page, you have successfully installed Rails and your server is running.

Jayanga DissanayakeCustom Authenticator for WSO2 Identity Server (WSO2IS) with Custom Claims

WSO2IS is one of the best Identity Servers, which enables you to offload your identity and user entitlement management burden totally from your application. It comes with many features, supports many industry standards, and most importantly it allows you to extend it according to your security requirements.

In this post I am going to show you how to write your own Authenticator, which uses some custom claim to validate users and how to invoke your custom authenticator with your web app.

Create your Custom Authenticator Bundle

WSO2IS is based on OSGi, so if you want to add a new authenticator you have to create an OSGi bundle. Following is the source of the OSGi bundle you have to prepare.

This bundle will consist of three files,
1. CustomAuthenticatorServiceComponent
2. CustomAuthenticator
3. CustomAuthenticatorConstants

CustomAuthenticatorServiceComponent is an OSGi service component; it basically registers the CustomAuthenticator as a service. CustomAuthenticator is an implementation of org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator, which actually provides our custom authentication logic.

1. CustomAuthenticatorServiceComponent

package org.wso2.carbon.identity.application.authenticator.customauth.internal;

import java.util.Hashtable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator;
import org.wso2.carbon.identity.application.authenticator.customauth.CustomAuthenticator;
import org.wso2.carbon.user.core.service.RealmService;

/**
 * @scr.component name="identity.application.authenticator.customauth.component" immediate="true"
 * @scr.reference name="realm.service"
 * interface="org.wso2.carbon.user.core.service.RealmService" cardinality="1..1"
 * policy="dynamic" bind="setRealmService" unbind="unsetRealmService"
 */
public class CustomAuthenticatorServiceComponent {

private static Log log = LogFactory.getLog(CustomAuthenticatorServiceComponent.class);

private static RealmService realmService;

protected void activate(ComponentContext ctxt) {
    CustomAuthenticator customAuth = new CustomAuthenticator();
    Hashtable<String, String> props = new Hashtable<String, String>();

    ctxt.getBundleContext().registerService(ApplicationAuthenticator.class.getName(), customAuth, props);

    if (log.isDebugEnabled()) {
        log.debug("CustomAuthenticator bundle is activated");
    }
}

protected void deactivate(ComponentContext ctxt) {
    if (log.isDebugEnabled()) {
        log.debug("CustomAuthenticator bundle is deactivated");
    }
}

protected void setRealmService(RealmService realmService) {
    log.debug("Setting the Realm Service");
    CustomAuthenticatorServiceComponent.realmService = realmService;
}

protected void unsetRealmService(RealmService realmService) {
    log.debug("UnSetting the Realm Service");
    CustomAuthenticatorServiceComponent.realmService = null;
}

public static RealmService getRealmService() {
    return realmService;
}
}

2. CustomAuthenticator

This is where your actual authentication logic is implemented

package org.wso2.carbon.identity.application.authenticator.customauth;

import java.io.IOException;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.identity.application.authentication.framework.AbstractApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.AuthenticatorFlowStatus;
import org.wso2.carbon.identity.application.authentication.framework.LocalApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.config.ConfigurationFacade;
import org.wso2.carbon.identity.application.authentication.framework.context.AuthenticationContext;
import org.wso2.carbon.identity.application.authentication.framework.exception.AuthenticationFailedException;
import org.wso2.carbon.identity.application.authentication.framework.exception.InvalidCredentialsException;
import org.wso2.carbon.identity.application.authentication.framework.exception.LogoutFailedException;
import org.wso2.carbon.identity.application.authentication.framework.util.FrameworkUtils;
import org.wso2.carbon.identity.application.authenticator.customauth.internal.CustomAuthenticatorServiceComponent;
import org.wso2.carbon.identity.base.IdentityException;
import org.wso2.carbon.identity.core.util.IdentityUtil;
import org.wso2.carbon.user.api.UserRealm;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.utils.multitenancy.MultitenantUtils;

/**
 * Username/Password based Authenticator
 */
public class CustomAuthenticator extends AbstractApplicationAuthenticator
        implements LocalApplicationAuthenticator {

private static final long serialVersionUID = 192277307414921623L;

private static Log log = LogFactory.getLog(CustomAuthenticator.class);

@Override
public boolean canHandle(HttpServletRequest request) {
    String userName = request.getParameter("username");
    String password = request.getParameter("password");

    if (userName != null && password != null) {
        return true;
    }

    return false;
}

@Override
public AuthenticatorFlowStatus process(HttpServletRequest request,
        HttpServletResponse response, AuthenticationContext context)
        throws AuthenticationFailedException, LogoutFailedException {

    if (context.isLogoutRequest()) {
        return AuthenticatorFlowStatus.SUCCESS_COMPLETED;
    } else {
        return super.process(request, response, context);
    }
}

@Override
protected void initiateAuthenticationRequest(HttpServletRequest request,
        HttpServletResponse response, AuthenticationContext context)
        throws AuthenticationFailedException {

    String loginPage = ConfigurationFacade.getInstance().getAuthenticationEndpointURL();
    String queryParams = FrameworkUtils.getQueryStringWithFrameworkContextId(
            context.getQueryParams(), context.getCallerSessionKey(), context.getContextIdentifier());

    try {
        String retryParam = "";

        if (context.isRetrying()) {
            retryParam = "&authFailure=true&";
        }

        response.sendRedirect(response.encodeRedirectURL(loginPage + ("?" + queryParams))
                + "&authenticators=" + getName() + ":" + "LOCAL" + retryParam);
    } catch (IOException e) {
        throw new AuthenticationFailedException(e.getMessage(), e);
    }
}

@Override
protected void processAuthenticationResponse(HttpServletRequest request,
        HttpServletResponse response, AuthenticationContext context)
        throws AuthenticationFailedException {

    String username = request.getParameter("username");
    String password = request.getParameter("password");

    boolean isAuthenticated = false;

    // Check the authentication
    try {
        int tenantId = IdentityUtil.getTenantIdOFUser(username);
        UserRealm userRealm = CustomAuthenticatorServiceComponent.getRealmService()
                .getTenantUserRealm(tenantId);

        if (userRealm != null) {
            UserStoreManager userStoreManager = (UserStoreManager) userRealm.getUserStoreManager();
            isAuthenticated = userStoreManager.authenticate(
                    MultitenantUtils.getTenantAwareUsername(username), password);

            Map<String, String> parameterMap = getAuthenticatorConfig().getParameterMap();
            String blockSPLoginClaim = null;
            if (parameterMap != null) {
                blockSPLoginClaim = parameterMap.get("BlockSPLoginClaim");
            }
            if (blockSPLoginClaim == null) {
                blockSPLoginClaim = "";
            }
            if (log.isDebugEnabled()) {
                log.debug("BlockSPLoginClaim has been set as : " + blockSPLoginClaim);
            }

            String blockSPLogin = userStoreManager.getUserClaimValue(
                    MultitenantUtils.getTenantAwareUsername(username), blockSPLoginClaim, null);

            boolean isBlockSpLogin = Boolean.parseBoolean(blockSPLogin);
            if (isAuthenticated && isBlockSpLogin) {
                if (log.isDebugEnabled()) {
                    log.debug("user authentication failed due to user is blocked for the SP");
                }
                throw new AuthenticationFailedException("SPs are blocked");
            }
        } else {
            throw new AuthenticationFailedException(
                    "Cannot find the user realm for the given tenant: " + tenantId);
        }
    } catch (IdentityException e) {
        log.error("CustomAuthentication failed while trying to get the tenant ID of the user", e);
        throw new AuthenticationFailedException(e.getMessage(), e);
    } catch (org.wso2.carbon.user.api.UserStoreException e) {
        log.error("CustomAuthentication failed while trying to authenticate", e);
        throw new AuthenticationFailedException(e.getMessage(), e);
    }

    if (!isAuthenticated) {
        if (log.isDebugEnabled()) {
            log.debug("user authentication failed due to invalid credentials.");
        }
        throw new InvalidCredentialsException();
    }

    // Set the authenticated user into the context
    context.setSubject(username);

    String rememberMe = request.getParameter("chkRemember");

    if (rememberMe != null && "on".equals(rememberMe)) {
        context.setRememberMe(true);
    }
}

@Override
protected boolean retryAuthenticationEnabled() {
    return true;
}

@Override
public String getContextIdentifier(HttpServletRequest request) {
    return request.getParameter("sessionDataKey");
}

@Override
public String getFriendlyName() {
    return CustomAuthenticatorConstants.AUTHENTICATOR_FRIENDLY_NAME;
}

@Override
public String getName() {
    return CustomAuthenticatorConstants.AUTHENTICATOR_NAME;
}
}

3. CustomAuthenticatorConstants

This is a helper class just to hold the constants used in your authenticator.

package org.wso2.carbon.identity.application.authenticator.customauth;

/**
 * Constants used by the CustomAuthenticator
 */
public abstract class CustomAuthenticatorConstants {

    public static final String AUTHENTICATOR_NAME = "CustomAuthenticator";
    public static final String AUTHENTICATOR_FRIENDLY_NAME = "custom";
    public static final String AUTHENTICATOR_STATUS = "CustomAuthenticatorStatus";
}

Once you are done with these files, your authenticator is ready. Now you can build your OSGi bundle and place the bundle inside <CARBON_HOME>/repository/components/dropins.

*sample pom.xml file [3]

Create new Claim

Now you have to create a new claim in WSO2IS. To do this, log into the management console of WSO2IS and do the steps described in [1]. In this example, I am going to create new claim "Block SP Login".

So, go to the Configuration section of the management console, click on "Claim Management", and then select the relevant claim dialect.

Click on "Add New Claim Mapping", and fill the details related to your claim.

Display Name   Block SP Login
Description   Block SP Login
Claim Uri
Mapped Attribute (s)  localityName
Regular Expression
Display Order 0
Supported by Default true
Required false
Read-only false

Now, your new claim is ready in WSO2IS. As you selected "Supported by Default" as true, this claim will be available in your user profile. So you will see this field appear when you try to create a user, but the field is not mandatory as you didn't specify it as "Required".

Change application-authentication.xml

There is another configuration change you have to do, as the authenticator reads the claim name from the configuration file. Add the information about your new claim into repository/conf/security/application-authentication.xml:

<AuthenticatorConfig name="CustomAuthenticator" enabled="true">
    <Parameter name="BlockSPLoginClaim"></Parameter>
</AuthenticatorConfig>

If you check the processAuthenticationResponse code above, you will see that, in addition to authenticating the user against the user store, it checks for the new claim.
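The claim-based gating described above can be sketched as a standalone snippet (the class and method names here are hypothetical; in the real authenticator the claim value is read through the UserStoreManager inside processAuthenticationResponse):

```java
public class BlockSpLoginCheck {
    // A user who authenticates successfully is still rejected
    // when the "block SP login" claim value parses to true.
    static boolean isAllowed(boolean isAuthenticated, String blockSpLoginClaimValue) {
        boolean isBlockSpLogin = Boolean.parseBoolean(blockSpLoginClaimValue);
        return isAuthenticated && !isBlockSpLogin;
    }

    public static void main(String[] args) {
        System.out.println(isAllowed(true, "false")); // true  -> credentials OK, not blocked
        System.out.println(isAllowed(true, "true"));  // false -> blocked for the SP
        System.out.println(isAllowed(true, null));    // true  -> parseBoolean(null) is false
    }
}
```

Note that Boolean.parseBoolean treats a missing (null) claim value as false, so users without the claim set are allowed through.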

So, this finishes the basic steps to set up your custom authentication. Now you have to set up a new Service Provider in WSO2IS and attach your custom authenticator to it, so that whenever your SP tries to authenticate a user from WSO2IS, it will use your custom authenticator.

Create Service Provider and set the Authenticator

Follow the basic steps given in [2] to create a new Service Provider.

Then, go to "Inbound Authentication Configuration" -> "SAML2 Web SSO Configuration", and make the following changes,

Issuer* = <name of your SP>
Assertion Consumer URL = <http://localhost:8080/your-app/samlsso-home.jsp>
Enable Response Signing = true
Enable Assertion Signing = true
Enable Single Logout = true
Enable Attribute Profile = true

Then go to the "Local & Outbound Authentication Configuration" section, select "Local Authentication" as the authentication type, and select your authenticator, here "custom".

Now you have completed all the steps needed to set up your custom authenticator with your custom claims.

You can now start the WSO2IS, and start using your service. Meanwhile, change the value of the "Block SP Login" of a particular user and see the effect.


Chanaka FernandoSimple tutorial to work with WSO2 GIT source code for developers

Some important Git commands for branching with remotes and upstream projects

First create a fork from an existing repository on wso2 repo

Then create a cloned repository in your local machine from that forked repo. Replace the URL with your own repo URL.

git clone

Now add the upstream repository for your local cloned repository. This should be the upstream project which you have forked from.

git remote add upstream

You can remove the upstream if needed.
git remote rm upstream

Now you can see all the available branches in your git metadata repository. Here you need to keep in mind that only metadata is available for all the branches except the master (which is the local branch created by default).

git branch -a

* master
 remotes/origin/HEAD -> origin/master

If you cannot see the upstream branches list from the above command, just fetch them with the following command.

git fetch upstream

Now you should see the branched list as above.

This will list down all the available branches. Then decide on which branch you are going to work on. Let’s say you are going to work on release-2.1.3-wso2v2 branch. Now you should create a tracking branch in your local repository for that branch.

git checkout --track origin/release-2.1.3-wso2v2

Here we are creating a tracking local branch to track the origin (your forked repository, not the upstream). Now if you look at the branch list, you can see the newly created branch.

git branch -a

* release-2.1.3-wso2v2
 remotes/origin/HEAD -> origin/master

Now you need to see the link between your remote branches and local branches. You can do that with the following command.

git remote show origin

This will give you the information about the connection between your local and remote branches.

* remote origin
 Fetch URL:
 Push  URL:
 HEAD branch: master
 Remote branches:
    master               tracked
    release-2.1.3-wso2v1 tracked
    release-2.1.3-wso2v2 tracked
 Local branches configured for 'git pull':
    master               merges with remote master
    release-2.1.3-wso2v2 merges with remote release-2.1.3-wso2v2
 Local refs configured for 'git push':
    master               pushes to master               (up to date)
    release-2.1.3-wso2v2 pushes to release-2.1.3-wso2v2 (up to date)

Chandana NapagodaWriting a Create API Executor for API.RXT

One of the use cases of the WSO2 Governance Registry is storing metadata information of different artifacts. In an organization, there can be metadata of different artifacts such as REST APIs, SOAP services, etc. In such a scenario you can use the API and Service RXTs which are available in the WSO2 Governance Registry to store metadata information.

With the use of API metadata which is stored in the WSO2 Governance Registry, you can publish APIs into WSO2 API Manager without accessing the web interface of the API Manager. This API creation is handled through a lifecycle executor of the WSO2 Governance Registry. Once the relevant lifecycle state is reached, the executor will invoke the Publisher REST API of the WSO2 API Manager and create the API. The "Integrating with WSO2 API Manager" documentation explains how to create an API using SOAP service metadata information.

If you want to create an API using the REST API metadata information available in the WSO2 Governance Registry, you have to write your own executor to achieve this. So I have written an example implementation of the lifecycle executor which can be used to create APIs in the API Manager.

John MathonTesla Update: How is the first IoT Smart-car Connected Car faring?

Tesla S’s at a Supercharging station

Here is my previous article on the Tesla IoT Case Study

How’s it going – a year later?

I have owned my Tesla for a year, and it is a year since I published my first review of it as an IoT device and posted a list of 13 beefs with the Tesla on the Tesla forums.  I have 14,000 miles on the car, which is a lot more than I planned.   I thought I would use my ICE car a lot more.  It has sat in the driveway unused, collecting dust month after month, as the Tesla is so much more fun to drive and cheaper.   My fiance has insisted on driving it whenever I don’t.

Tesla owners are pretty feisty and when I published my 13 small beefs in the Tesla forums, instead of getting a number of high-fives and people who felt similarly I got a lot of criticism.  I was only offering some helpful suggestions.  I was happy with the car just trying to help Tesla improve.  They came down on me for the slightest criticism as if I was a foreign agent coming in with evil propaganda from the enemy.    The ferociousness of their criticism must be similar to the loyalty that Apple owners had in the early days of Apple.

Tesla whether looking at my list or not has fixed 12 of the 13 beefs through software upgrades over the last year! Now that is something you don’t see everyday.   The 13th beef – I didn’t like the placement of the turn signal lever. Everybody told me this is where Mercedes places their turn signal lever so I was wrong.  I still find it awkward to use but other than that I can say over the last 12 months I have had nothing short of an amazing car experience.

Besides fixing 12 of my 13 beefs they have provided service above the call I didn’t even imagine.   They installed titanium bars for free to prevent my car when driving low on the highway from being damaged, they replaced a windshield for free ($1800) when it was damaged by a rock.

So far the worries about hacking the Telsa, possibly corruption of apps have not emerged.   That’s obviously good.  Part of that is undoubtedly the low # of Teslas and that Tesla has not enabled a lot of apps to run in the car and limited the functions of the API and Mobile App.  I hope this year will allow some more capabilities and they will not experience any security issues.

Elon Musk is a disruptive individual

His strategy for business seems to be how can he throw the most monkey wrenches into the ways we have thought things “had to be”.    I mentioned some of the disruptive things he did with the Tesla.  (I won’t belabor SpaceX or SolarCity disruptive ideas or building a thousands of mph tunnel between LA and SF, etc…)

Tesla’s biggest problem is that with a battery production capacity limited to 30,000 or so cars a year and an annual consumption in the US alone of 16 MILLION ICE cars he’s not making the slightest dent in the car market or in challenging the ICE makers.   His new battery factory will allow him to hit 500,000 cars a year which still would be smaller than virtually any other car manufacturer in the world.   He also expects to be able to reduce battery cost for the 60KWH battery to below $3,000.

When you realize the scale of the ICE commitment the world has you realize this is not something that is going to happen overnight.   Even in the most optimistic scenario for Tesla it is a 1% car manufacturer for the next decade and cannot create disruption unless additional car manufacturers and additional battery capacity comes on board more quickly.

Let’s review those disruptive aspects of the Tesla and how successful they’ve been:

1) At 1/3 cost to operate due to efficiency of central electrical production and electric motor the car produces less ecological impact than other cars


The move to electric cars and replicating Elon Musk’s Tesla has made waves but the ICE car manufacturers haven’t conceded defeat yet and they don’t seem to be quaking in their boots.   Other electric cars are not selling well at all.  Tesla seems to be the only electric car succeeding.

I think the problem is similar to the Steve Jobs effect.  Steve Jobs understood the problem was that you had to solve the holistic ecosystem problem.   When Apple came out with the iPod they also did the user interface, made it fast, simple and backed it up with service.   The result was 70% market share in the consumer electronics industry which is unheard of.   When I tried to buy an iPod competitor I was astonished how stupid they were.  The user interface consisted of 12 indecipherable buttons, a 2 line barely readable screen.  It took me 24 hours to upload a few songs to it.    Needless to say I returned it and just got the next model iPod.    I couldn’t fathom how when the competitors were shown the exact thing they needed to make they still produced crappy alternatives.

The cars cost of operation has met all my expectations and more.

Service costs:  $0

Energy Costs:  -$450/month

How could I be paying 450 dollars a month less?   I save $250/month on my electric bill because I could switch to a time-based rate plan in California, allowing me to move energy use to evening hours and cutting my costs dramatically.  I save at least $200/month in gas costs I no longer have.   I charge my car only half the time at home, which is about $20/month.  The other half is charged at garages and other places where I can hook up a charge for free.  I don’t know how long many places will let me charge for free, but I expect corporations will provide free chargers more and more for employees.

It is clearly the most ecological full sized vehicle ever produced in any quantity and it is cheap to operate.  I understand that if the car doesn’t last, has serious service problems that this prognosis will change but so far it is virtually free to drive (other than the capital expenditure).

2) The size of the battery, the charge at home or in garages all over, superstations across the country for free and battery replacement options disrupts the gas service station business as well as other electric vehicles


Tesla is not only potentially disruptive to ICE’s but it is disrupting today the electric competition.  Sales are in decline for almost every other electric car.

Elon Musk solved the distance traveling problem, the lifetime problems, the time to charge problems.  He gave us a solution that is elegant, powerful, competitive in every aspect.  When these other companies put up 80 mile battery cars with slow charging options, poor performance, is it any wonder nobody wants to buy one of those?   My fiance was thinking of an electric car after seeing the Tesla.  We went through all the alternatives and concluded they were all crap.  We wanted to buy another Electric car but none of them met the simple requirements.   I’m sorry there is no comparison.   I don’t think they have a clue.   The European carmakers seem to be making the most effort to compete, but even they are coming up far short. Mercedes has licensed the Tesla battery.  It will be interesting to see how that evolves.

3) The low maintenance and maintenance at home and the ability to update the car remotely with new features and fix the car, the ability of Tesla to capture real time data to improve service is transformative and disrupts the entire service business for cars.

There has been zero maintenance costs or issues with the car.  (Unless you count a piece of the molding having to be glued back that became loose – also done gratis by Tesla as they looked over my car to adjust tire pressure and check it out electronically.)

As I mentioned above Cost of service:  $0.  However, because there are only 30,000 cars on the road the ability to disrupt the service business is ZERO at the scale he can achieve in the next decade.   However, aspects of what he has done is having an impact on car makers and consumers who have adopted 100% the connected car concept.   Over the next few years I believe Tesla will have succeeded 100% in proving that connected car is a better car, a cheaper and safer car.

On items 5 and 6 below, Tesla has succeeded within 2 years in disrupting the industry beyond expectations. I think it is partly succeeding in its disruption of service on this point because most car manufacturers seem to be acknowledging that the ability to gather information from the car during operation, rather than simply at service calls, can provide valuable input that allows them to improve their cars faster.   We will see if other manufacturers use the connected options below to actually improve their service as well as Tesla has done.

4) The user interface design, big screen with apps and ability to control the car with an app is precedent setting including the smartness of the car, anticipating your schedule, finding alternate routes, raising and lowering suspension based on experience demonstrate a smart car.  Future upgrades include self-driving capabilities all of which is precedent setting


No other car manufacturer has implemented the user interface, nor have I seen other car manufacturers commit to such a digital version of a car, large screen full control user interface concept.  No disruption yet.

5) The IOT capabilities including the ability to manage, find, operate the car remotely is precedent setting


I believe TESLA has achieved disruption with the connected IOT car

I love the IoT aspects of the car.   I love checking in to see how charging is going, I love being able to remotely turn on the air or heating before I get to the car.   I love being able to find my car anytime.  I love tracking when it is being driven by other people and knowing how fast they are driving or when it is about to meet me.   I love knowing I could operate the car without the key.

Virtually every manufacturer has agreed and is committed to full time internet in cars.  Some have committed their entire fleets to the idea in the next year.   The biggest problem will be how much of the car is available for IoT operation or viewing, how much is actually useful?  Is this simply a matter of making it easier to browse the web in your car or about the car itself? If they are just making internet connectivity in your car more available I think they will find this is not necessarily a big winner.   Connectivity costs money and if you have it on your phone already it is not clear everybody will sign up for this alone.

I believe that part of the success of the connected car is the obvious benefits of these features and the upgrades I talk about in the next point.


6) The ability to upgrade the car over the air or fix it is precedent setting

These are some of the improvements they have downloaded to my car in the last 12 months:

1) Better support for starting and stopping on hills

2) Smart suspension that seems to magically figure out when my suspension should be lower or higher based on experience

3) Smart Calendar, Phone and Car integration which makes it easier to get to appointments and interact with my calendar, destination, conference call support, notes from the big screen

4) Smart commute pre-planning and pre-conditioning (figuring out the best route to work or back even when I didn’t ask it to saving me from stupidly taking the route with the accident on it.), pre-heating/cooling my car automatically before my commute

5) Better backup guide lines and backup camera integration, better support for parallel parking and object detection around the car

6) Improved bluetooth functionality

7) Expanded Media Options, Better Media Buffering for Internet Media, Improved USB playing

8) Improved route planning, traffic integration, telling me how much “gas” I’ll have at my destination and how much if I return to my start point, better ways to see how much “fuel” I was using during a trip compared to estimated

9) Automatically remembering charge stations I’ve used and finding Tesla charge stations easily

10) Traffic aware cruise control

11) The key fob can now also open the charge port remotely – super cool

12) Improvements in controlling the screen layout

13) Improvements in the Tesla app to allow operating the car without the key and controlling more car functions remotely

Is it any wonder that Tesla Owner satisfaction is at 98-99% for 2 years in a row?

I can’t imagine living with a car that didn’t constantly improve itself.  I believe other car manufacturers will implement this feature albeit with a lot less utility since the cars aren’t fully digital.

7) Self-driving car


This wasn’t on my original list.   At the time a year ago I didn’t know Tesla was so committed to self-driving features.

Everybody talks about Google self-driving cars.  Everybody talks about how Europeans are ahead of us in self-driving regulation and self-driving features.   The fact is Tesla is implementing these things in part this year and is delivering in every car today the ability to be self-driving.

Google says they will have self-driving out for production in 2017.  People say that in the US this means US car manufacturers will start delivering self-driving features in 2020.

Tesla is delivering many of those features this year and next.

Tesla Model X, due mid-2015: a 4-wheel-drive, 7-person crossover


(okay, it’s an S with 4-wheel drive, a little taller roof and cool doors)

My laundry list for Tesla for 2015

1) Tesla has promised app integration with Android and support for Chrome browser.


I hope this year it comes and with it the following apps supported:  Pandora, Waze, Chargepoint, StarChart, Yelp, Weather (along my route), Fandango, theScore, Camera Recording or photo taking from the car, RecorderPro, youtube, audible, audio read google news, facebook audio check in with camera, audio SMS both receipt and send, audio banking, clock with timer alarm etc…, skype, email, chrome, contacts.

Some cool things would be 2-way easy integration with Waze to allow easy reporting of police and accidents.

Being able to see the names of the stars and planets around me at night while driving would be cool on the big screen.

Weather would be a cool integration allowing me to see how weather along my route will change as I drive.

Facebook integration with camera would be ultra cool.  I promise to use it very infrequently.

Using the camera for more than driving functions would be pretty cool.  How many times have you taken a lousy picture from your car through the window?

Many of the other functions above would come with Android integration promised.

2) More self-driving features

3) More ability to control the car from the app including opening or closing windows

4) Ability to stream video from the cameras or sensors to the app

5) I want the video cameras and sensors to detect cops in the vicinity

6) Calendar “driving” event the car prepares (warm or cool, sets destination up)

7) Better efficiency (Hey I can hope!)

8) Integration with iWatch or other personal devices


The Future of the Connected Car

These are things I think are reasonable to expect

1) High speed communications

There is talk lately of the connected car having a huge impact, including possibly being the main motivation for the next generation of cell phone communications speed improvements, what is called 5G.  The reason for this is to be able to support self-driving capabilities and also to make communication between cars seamless and instant.

Cars need to sense their environment in order to self-drive but many believe that alone is not enough.  The idea is that if cars can communicate with each other instantly (very low delays) they can coordinate themselves better.  A car stopping ahead can notify cars behind it to slow down potentially much farther than can be sensed with sensors (or our eyes.)   If there is traffic the flow can be regulated to produce optimum throughput on freeways.

Many people don’t know that freeway stops are caused by a wave effect.   When too many cars are on a freeway, a simple slowdown by one car causes a wave backwards that can eventually cause a stop.   This is why you sometimes stop on the freeway for no apparent reason.    LA implemented stop lights at freeway entrances to slow ingestion, which reduces the waves.   However, feedback from cars could work much better.
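To make the wave concrete, here is a toy car-following simulation (the model and all its numbers are my own illustrative assumptions, not a traffic-engineering model): each car matches the speed the car ahead had one step earlier, overbraking slightly, so a brief slowdown by the lead car amplifies as it travels backward through the line.

```java
import java.util.Arrays;

// Toy model of a traffic wave: each follower reacts to the car ahead with a
// one-step delay and brakes slightly harder than needed. A brief slowdown by
// the lead car grows into a larger slowdown further back in the line.
public class TrafficWave {
    public static double[] simulate(int cars, int steps) {
        double[] speed = new double[cars];
        Arrays.fill(speed, 60.0);                  // everyone cruising at 60 mph
        double[] minSpeed = speed.clone();         // slowest speed each car hits
        for (int t = 0; t < steps; t++) {
            double[] next = speed.clone();
            next[0] = (t < 3) ? 20.0 : 60.0;       // lead car brakes briefly
            for (int i = 1; i < cars; i++) {
                // follower matches the car ahead, overbraking by 2 mph
                next[i] = (speed[i - 1] >= 60.0) ? 60.0 : speed[i - 1] - 2.0;
            }
            speed = next;
            for (int i = 0; i < cars; i++) {
                minSpeed[i] = Math.min(minSpeed[i], speed[i]);
            }
        }
        return minSpeed;
    }

    public static void main(String[] args) {
        double[] m = simulate(10, 30);
        for (int i = 0; i < m.length; i++) {
            System.out.printf("car %d slowed to %.0f mph%n", i, m[i]);
        }
    }
}
```

In this sketch the lead car only dips to 20 mph, but the tenth car in line ends up nearly stopped, which is exactly the amplifying wave described above.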

I don’t know if this will be the motivation for next generation cell communications.  I think it will happen anyway because of other IoT needs and general needs but this could be helpful too.

Cars talking to each other could be cool and something people want, or it could be obtrusive and annoying, a privacy problem or even dangerous.

2) All Cars talking to the factory

It is obvious that cars sending real-time information back to manufacturers can help them provide better service, design cars better and prevent breakdowns.   It will allow us to build safer cars, easier to maintain cars, more efficient cars.   However, a lot of people might not like having their driving habits reported.

3) Cars being upgraded

I am sold on cars upgrading themselves.

4) Cars being smart

A lot of the features Tesla has added this year prove how a car can be integrated better into our lives and our technology.  The ability of the car to anticipate things or to remember things that happened at certain locations is very powerful and smart. Tesla is absolutely being disruptive in showing the way here.

a) the car plans a route knowing the altitude changes and the effect on consumption

b) the car knows your calendar and scheduled events making it easy to anticipate your destination and tell you about driving problems without having to ask

c) the car makes it easy to access and integrate phone and car so that you need to take your hands off the wheel less

d) the car knows when it needs to be at high suspension or can be at low suspension

e) it knows when to heat or cool itself as I’m about to get in the car to go someplace in my calendar

These are things Tesla has already done and point to the idea that “smart” is really useful.   I believe that other manufacturers will see the utility and start adding these things as well.

If a car has sensors and somebody knocks it or it senses a danger, it could notify you and give you visual information to decide whether to call the police.  In the event of an accident the sensors could definitively identify the driver who made the mistake, or all the drivers' mistakes.

5) Cars driving themselves

This is the biggie obviously that is talked about a lot.   I am a bit of a skeptic in that I don’t see completely self-driving cars being able to really work safely.  I see more and more driving assist features.   When cars are completely self-driving it will be a huge change.   Car design will probably radically change with it because what’s the point of even having a driver seat if the car drives itself?  I see this as still a decade away or more.   However, improvements to make cars drive on highways seems possible much shorter term.

Other articles:

At $18B, the Connected Car is an Ideal Market for the Internet of Things

Progressive Car Dongle creates IoT security and safety risk 

Tesla:  IOT Case Study

John MathonDeep Learning Part 3 – Why CNN and DBN are far away from human intelligence


How do current CNN compare to the scale of the human brain?

We know that the human brain consists of 21 billion neurons and somewhere between 1 trillion and 1000 trillion dendrite connections.  The closest species is the elephant, which has half as many neurons as a human brain.    We can therefore postulate that higher level learning requires at least 12 billion neurons and probably >10 trillion dendrites.   The human cortex is composed of 6 layers, which might correspond to the layers in a CNN, though that is not likely.   The human cortex is what is called “recurrent”, which CNN neurons can also be configured to be.   In recurrent neural nets the output of higher level neurons is fed back into lower levels to produce potentially an infinite number of levels of abstraction.

An information theoretic approach to this problem would suggest that until we are able to build CNN with billions of neurons we are not going to get “human challenging” intelligence.   The largest CNN built today has 650,000 neurons, about 1/20,000th of the number of neurons in the postulated quantity needed to achieve minimal human intelligence.   However, if the equivalent of dendrites is the connections between the neurons in our neural nets, then we are 1/20 millionth of the way to having a sufficient CNN.   The problem with making larger CNN is that the computational and memory cost grows exponentially with size, so that even if we could design a CNN with 20,000 or 20,000,000 times as many neurons, we would need all the compute power ever built in the world to operate one functioning brain slightly smarter than an elephant's.
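The neuron ratio above can be checked with simple arithmetic (the inputs are the estimates quoted in this post, not measured values):

```java
// Back-of-the-envelope check of the scale gap, using the post's own figures.
public class BrainScale {
    public static long neuronGap() {
        double postulatedNeurons = 12e9;    // postulated minimum for higher level learning
        double largestCnnNeurons = 650_000; // largest CNN cited above
        return Math.round(postulatedNeurons / largestCnnNeurons);
    }

    public static void main(String[] args) {
        // roughly 1/18,000th of the way there, i.e. the "1/20,000th" in the text
        System.out.println("CNN is 1/" + neuronGap() + " of the postulated neuron count");
    }
}
```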

Even 10 years hence, assuming massive improvements in compute power, it simply wouldn't be economical to use nearly all the compute power in the world to build one brain slightly dumber than a human's, and that assumes the CNN or DBN technique proves scalable to much larger levels.

Some will object to this simplistic approach of saying CNN are far away from human intelligence.

CNN lacks key features of intelligence

CNN, DBN or any XNN technology have NOT demonstrated the following characteristics of intelligence and awareness:

1) Plasticity – the capability to adapt to new data and revise old abstractions.  In these CNN systems the neurons are essentially frozen after the learning phase, because if they aren't, additional data doesn't reinforce the neurons but leads to them becoming less and less discriminating.   Clearly this is not a characteristic of the human brain.

2) Many levels of abstraction – we have so far demonstrated only a few levels of abstraction with XNN.  Going from 3 or 4 layers to 100 layers or more has not been demonstrated, and many difficulties can be expected.   It has been shown so far that when we expand the layers too much it tends to cause too much backpropagation and the network becomes less facile.

3) Planning – no CNN does anything close to “planning”, whereas we know that planning is a separate function in the brain and is not the same as recognition.

4) Memory in general – CNN neurons are in a sense memory, in that they recognize based on past data, but it is an invisible memory: the underlying original data fed into the XNN is not remembered.   In other words the XNN doesn't remember that it saw a face or text before; it cannot go back and relive that memory or use an old memory to create a new level of abstraction.   We have no idea how to incorporate such memory into a CNN.

5) Experiential Tagged Memory – CNN do not experience time in the sense of relating events to time or to other experiences or memories.   The human brain tags all memories with lots of synthetic information: smells we had at the same time as we experienced the event, feelings we had, other things that happened at the same time.   Our brains have a way of extracting memories by searching through all our memories by any of the numerous tags we have associated with them.  We have no such mechanism in CNN.   For instance, if we had 3 CNN's, a text recognition CNN, an object recognition CNN and a voice recognition CNN, we have no idea how combining those CNN could produce anything better.

6) Self-perception – CNN do not have feedback on their own processes or their own state, such that they could perceive the state of their neurons or the state of themselves changing from one moment to the next.  A human has an entire nervous system dedicated to perceiving itself; proprioception is an incredibly powerful capability of living things whereby they can tell where their arm is, how they feel physically, if they are injured, etc.   These innate systems are fed into all our learning systems, which enables us to do things such as throw a ball, look at ourselves and perceive ourselves.  No CNN has anything like this.

7) Common sense – CNN do not have a sense of what is correct and what is wrong.  For example, if we create a “self-driving” CNN and the CNN discovers the rules of driving, it will not be able to make a judgement about a situation it hasn't been explicitly taught.   So, if presented with a new situation at a stop light it might say the best thing to do is charge through the stop light.  There is no way to know a priori how a CNN will operate in all situations, because until it has been trained on all possible input scenarios we don't know if it will make a big mistake in some situations.  It may be able to see a face in numerous pictures, but for a face we humans easily see, the CNN may say it is a tractor.

8) Emotions – As much as emotions don’t seem to be part of our cognitive mind they are a critical part of how we learn, how we decide what to learn.  Emotions depend on having needs and we have not figured out how to represent emotions in a CNN.

9) Self-preservation – like emotion, a basic factor that drives all creatures learning is self-preservation.  CNN don’t learn based on self-preservation but simply do the math to learn patterns of input.

10) Qualia – CNN have no way of determining if something smells good, looks nice or if things are going well today.   We have no way of knowing if a CNN could ever have a concept of beauty.

11) Consciousness - consciousness is a complicated topic but CNN are clearly not conscious.

12) Creativity – There is an element of creativity in CNN in the sense that they can see patterns and make new abstractions.   Abstractions are some evidence of creativity, but in the case of CNN it is a mechanical creativity in the sense that we know in advance the range of creativity possible.  It has not been shown that higher level abstractions and the CNN mechanism would allow a CNN to make a leap of insight to discover or think of something it wasn't presented with earlier.

Some of these things may be possible with CNN; we just don't know.   Some we may discover are easy to add to CNN, but in any case the sheer number of unknowns and the complexity of all these other mechanisms clearly involved in higher level abstraction, thinking and consciousness simply aren't demonstrated in CNN.  Therefore I state unequivocally that we are NOT on the precipice of a higher level intelligence any time soon, as Elon Musk, Bill Gates and Stephen Hawking suggest.   CNN are cool technology but they are nowhere near able to represent higher level thinking yet.


CNN, DBN, XNN technology is definitely important and is helping us build smarter devices, a smarter cloud and smarter applications.  We should leverage our advances as much as possible, but I would say it is clear we are nowhere near the danger feared by some about this technology.  These XNN technologies today lack a huge amount of the basic machinery of intelligent thought: planning, logic, understanding.   We simply have no idea whether this technology will scale or get much better in 5 years.   Like other AI techniques before it, this technology may, despite the recent advances, simply reach a wall in a couple of years, leaving us stuck with no better way to move the ball forward to building intelligent machines.

To get links to more information on these technologies check this:


Articles in this series on Artificial Intelligence and Deep Learning


Is DeepLearning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained

Deep Learning Part 3  – Why CNN and DBN are far away from human intelligence


Isuru PereraEnabling Java Security Manager for WSO2 products

Why is the Java Security Manager needed?

In Java, the Security Manager is available for applications to have various security policies. The Security Manager helps to prevent untrusted code from doing malicious actions on the system. 

You need to enable Security Manager, if you plan to host any untrusted user applications in WSO2 products, especially in products like WSO2 Application Server.

The security policies should explicitly allow actions performed by the code base. If any of the actions are not allowed by the security policy, a SecurityException will be thrown.

For more information on this, you can refer to the Java SE 7 Security Documentation.
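To see how policy grants are matched, here is a small standalone example (my own illustration, using permission classes of the kind that appear in the policy file later in this post): a granted permission must imply the permission being checked, otherwise the check throws a SecurityException.

```java
import java.io.FilePermission;
import java.util.PropertyPermission;

// A policy grant allows an action only if a granted permission "implies"
// the permission being checked; otherwise a SecurityException is thrown.
public class PermissionMatching {
    public static void main(String[] args) {
        FilePermission metaInf = new FilePermission("/META-INF/-", "read");
        // "-" matches all files under the directory, recursively
        System.out.println(metaInf.implies(new FilePermission("/META-INF/faces-config.xml", "read")));
        System.out.println(metaInf.implies(new FilePermission("/etc/passwd", "read")));

        PropertyPermission allProps = new PropertyPermission("*", "read");
        System.out.println(allProps.implies(new PropertyPermission("java.home", "read")));
    }
}
```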

Security Policy Guidelines for WSO2 Products

When enabling the Security Manager for WSO2 products, it is recommended to give all permissions to all jars inside the WSO2 product. For that, we plan to sign all jars using a common key and grant all permissions to the signed code by using a "signedBy" grant as follows.

grant signedBy "<signer>" {
    permission java.security.AllPermission;
};

We also recommend allowing all property reads; WSO2 has a customized Carbon Security Manager to deny certain system properties.

One of the main reasons is that in a Java security policy we need to explicitly mention which properties are allowed, and if there are various user applications, we cannot have a pre-defined list of system properties. Therefore Carbon Security Manager's approach is to define a list of denied properties using the system property "denied.system.properties". This approach basically changes the Java Security Manager's rule of "deny all, allow specified" to "allow all, deny specified".
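The "allow all, deny specified" rule can be sketched as a custom security manager along these lines (a simplified illustration only; the real CarbonSecurityManager implementation differs):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Simplified sketch: allow every property read except those on a deny list,
// inverting the default "deny all, allow specified" behaviour.
public class DeniedPropertiesSecurityManager extends SecurityManager {
    private final Set<String> deniedProperties;

    public DeniedPropertiesSecurityManager(String commaSeparatedDenyList) {
        this.deniedProperties = new HashSet<>(Arrays.asList(commaSeparatedDenyList.split(",")));
    }

    @Override
    public void checkPropertyAccess(String key) {
        if (deniedProperties.contains(key)) {
            throw new SecurityException("Access denied to system property: " + key);
        }
        // Deliberately not calling super: everything not explicitly denied is allowed.
    }
}
```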

There is another system property named "restricted.packages" to control package access. However, this "restricted.packages" system property is not working in the latest Carbon, and I have created the CARBON-14967 JIRA to fix that properly in a future Carbon release.

Signing all JARs inside the WSO2 product

To sign the jars, we need a key. We can use the keytool command to generate a key.

$ keytool -genkey -alias signFiles -keyalg RSA -keystore signkeystore.jks -validity 3650 -dname "CN=Isuru,OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK"
Enter keystore password:
Re-enter new password:
Enter key password for <signFiles>
(RETURN if same as keystore password):

The above keytool command creates a new keystore file. If you omit the -dname argument, you will be prompted for the key details.

Now extract the WSO2 product. I will be taking WSO2 Application Server as an example.

$ unzip -q ~/wso2-packs/wso2as-5.2.1.zip

Let's create two scripts to sign the jars: one that finds all jars (here named signJars.sh; the original script names were lost) and one that signs a given jar using the keystore we created earlier (here named signJar.sh). signJars.sh:

#!/bin/bash
if [[ ! -d $1 ]]; then
    echo "Please specify a target directory"
    exit 1
fi

for jarfile in `find $1 -type f -iname \*.jar`
do
    ./signJar.sh $jarfile
done

signJar.sh script:


#!/bin/bash
set -e

jarfile=$1
keystore_file="signkeystore.jks"
keystore_keyalias='signFiles'
keystore_storepass='<keystore password>'
keystore_keypass='<key password>'
signjar="$JAVA_HOME/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -keystore $keystore_file -storepass $keystore_storepass -keypass $keystore_keypass"
verifyjar="$JAVA_HOME/bin/jarsigner -keystore $keystore_file -verify"

echo "Signing $jarfile"
$signjar $jarfile $keystore_keyalias

echo "Verifying $jarfile"
$verifyjar $jarfile

# Check whether the verification was successful.
if [ $? -eq 1 ]; then
    echo "Verification failed for $jarfile"
    exit 1
fi

Now we can see the following files.

$ ls -l
-rwxrwxr-x  1 isuru isuru  602 Dec  9 13:05 signJar.sh
-rwxrwxr-x  1 isuru isuru  174 Dec  9 12:56 signJars.sh
-rw-rw-r--  1 isuru isuru 2235 Dec  9 12:58 signkeystore.jks
drwxr-xr-x 11 isuru isuru 4096 Dec  6  2013 wso2as-5.2.1

When we run signJars.sh, all JARs found inside WSO2 Application Server will be signed using the "signFiles" key.

$ ./signJars.sh wso2as-5.2.1/ > log

Configuring WSO2 Product to use Java Security Manager

To configure the Java Security Manager, we need to pass a few arguments to the main Java process.

The Java Security Manager can be enabled by using the "java.security.manager" system property. We will specify the WSO2 Carbon Security Manager using this argument.

We also need to specify the security policy file using the "java.security.policy" system property.

As I mentioned earlier, we will also set the "restricted.packages" & "denied.system.properties" system properties.

Following is the recommended set of values to be used in wso2server.sh. (Edit the startup script and add the following lines just before the line " org.wso2.carbon.bootstrap.Bootstrap $*".)

-Djava.security.manager=org.wso2.carbon.bootstrap.CarbonSecurityManager \
-Djava.security.policy=$CARBON_HOME/repository/conf/sec.policy \
-Drestricted.packages=sun.,com.sun.xml.internal.ws.,com.sun.xml.internal.bind.,com.sun.imageio.,org.wso2.carbon. \
-Ddenied.system.properties=javax.net.ssl.trustStore,javax.net.ssl.trustStorePassword,denied.system.properties \

Exporting signFiles public key certificate and importing it to wso2carbon.jks

We need to import the signFiles public key certificate to wso2carbon.jks, as the security policy file refers to the signFiles signer certificate in wso2carbon.jks (as specified by the policy file's first line).

$ keytool -export -keystore signkeystore.jks -alias signFiles -file sign-cert.cer
$ keytool -import -alias signFiles -file sign-cert.cer -keystore wso2as-5.2.1/repository/resources/security/wso2carbon.jks

Note: wso2carbon.jks' keystore password is "wso2carbon".

The Security Policy File

As specified in the "java.security.policy" system property, we will keep the security policy file at $CARBON_HOME/repository/conf/sec.policy

The following policy file should be enough for starting up WSO2 Application Server and deploying sample JSF and CXF webapps.

keystore "file:${user.dir}/repository/resources/security/wso2carbon.jks", "JKS";

// ========= Carbon Server Permissions ===================================
grant {
// Allow socket connections for any host
permission java.net.SocketPermission "*:1-65535", "connect,resolve";

// Allow reading all properties. Use denied.system.properties in the startup script to restrict properties
permission java.util.PropertyPermission "*", "read";

permission java.lang.RuntimePermission "getClassLoader";

// CarbonContext APIs require this permission
permission java.lang.management.ManagementPermission "control";

// Required by any component reading XMLs. For example: org.wso2.carbon.databridge.agent.thrift:4.2.1.
permission java.lang.RuntimePermission "";

// Required by org.wso2.carbon.ndatasource.core:4.2.0. This is only necessary after adding above permission.
permission java.lang.RuntimePermission "";

};

// ========= Platform signed code permissions ===========================
grant signedBy "signFiles" {
    permission java.security.AllPermission;
};

// ========= Granting permissions to webapps ============================
grant codeBase "file:${carbon.home}/repository/deployment/server/webapps/-" {

// Required by webapps. For example JSF apps.
permission java.lang.reflect.ReflectPermission "suppressAccessChecks";

// Required by webapps. For example JSF apps require this to initialize com.sun.faces.config.ConfigureListener
permission java.lang.RuntimePermission "setContextClassLoader";

// Required by webapps to make HttpsURLConnection etc.
permission java.lang.RuntimePermission "modifyThreadGroup";

// Required by webapps. For example JSF apps need to invoke annotated methods like @PreDestroy
permission java.lang.RuntimePermission "accessDeclaredMembers";

// Required by webapps. For example JSF apps
permission java.lang.RuntimePermission "";

// Required by webapps. For example JSF EL
permission java.lang.RuntimePermission "getClassLoader";

// Required by CXF app. Needed when invoking services
permission javax.xml.bind.JAXBPermission "setDatatypeConverter";

// File reads required by JSF (Sun Mojarra & MyFaces require these)
// MyFaces has a fix
permission "/META-INF", "read";
permission "/META-INF/-", "read";

// OSGi permissions are required to resolve bundles. Required by JSF
permission org.osgi.framework.AdminPermission "*", "resolve,resource";
};

The security policies may vary depending on your requirements. I recommend testing your application thoroughly in a development environment.

NOTE: There are risks in allowing some Runtime Permissions. Please look at the java docs for RuntimePermission. See Concerns below.

Troubleshooting Java Security

Java provides the "java.security.debug" system property to set various debugging options and monitor security access.

I recommend adding the following line to wso2server.sh whenever you need to troubleshoot an issue with Java Security.

-Djava.security.debug="access,failure"

After adding that line, all the debug information will be printed to standard output. To check the logs, we can start the server using nohup.

$ nohup ./wso2server.sh &

Then we can grep the nohup.out and look for access denied messages.

$ tailf nohup.out | grep denied

Concerns with Java Security Policy

There are a few concerns with the current permission model in WSO2 products.

  • Use of ManagementPermission instead of Carbon-specific permissions. The real ManagementPermission is used for a different purpose. I created the CARBON-14966 JIRA to fix that.
  • Ideally the permission 'java.lang.management.ManagementPermission "control"' should not be specified in the policy file, as it is only required for privileged actions in Carbon. However, due to indirect usage of such privileged actions within Carbon code, we need to specify that permission. This also needs to be fixed.
  • In the above policy file, the JSF webapps etc. require some risky runtime permissions. I recommend using a Custom Runtime Environment (CRE) in WSO2 Application Server for JSF webapps etc. and signing the jars inside the CRE. You can also grant permissions based on the jar names (use grant codeBase). However, signing jars and using a CRE is a better approach with WSO2 AS.
If you also encounter any issues when using Java Security Manager, please discuss those issues in our developer mailing list.

Ajith VitharanaJAX-WS client to authenticate to a WSO2 server.

We are going to use  AuthenticationAdmin service to generate the JAX-WS client.

1. Open the carbon.xml file and set the following parameter to false.

<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>
2. Start the server and point your browser to,
https://[host or ip]:<port>/services/AuthenticationAdmin?wsdl
3. Save the AuthenticationAdmin.wsdl file to a new folder (let's say code-gen).
4. Go to the code-gen directory and execute the following command to generate the client code.
wsimport -p org.wso2.auth.jaxws.client AuthenticationAdmin.wsdl
5. Now you will end up with the following errors.
parsing WSDL...

[ERROR] operation "logout" has an invalid style
  line 1090 of file:/home/ajith/Desktop/auth-client-jaxws/AuthenticationAdmin.wsdl

[ERROR] operation "logout" has an invalid style
  line 1127 of file:/home/ajith/Desktop/auth-client-jaxws/AuthenticationAdmin.wsdl

[ERROR] operation "logout" has an invalid style
  line 1208 of file:/home/ajith/Desktop/auth-client-jaxws/AuthenticationAdmin.wsdl

[ERROR] missing required property "style" of element "operation"
    Failed to parse the WSDL.
6. Due to some issues, the java2wsdl tool doesn't generate some important elements which are expected by the wsimport tool (e.g. the output element in a wsdl operation for a void return type). Adding the following elements at the missing places (the line numbers in the above errors) will solve the issue.

<wsdl:output message="ns:logoutResponse" wsaw:Action="urn:logoutResponse"></wsdl:output>
    <soap:body use="literal"></soap:body>
<xs:element name="logoutResponse">
    <xs:complexType>
        <xs:sequence>
            <xs:element minOccurs="0" name="return" type="xs:boolean"></xs:element>
        </xs:sequence>
    </xs:complexType>
</xs:element>
<wsdl:message name="logoutResponse">
    <wsdl:part name="parameters" element="ns:logoutResponse"></wsdl:part>
</wsdl:message>
7. You can find the updated wsdl file here.
8. Execute the above command again to generate the client code.
9. Execute the following command to generate the jar file.
jar cvf org.wso2.auth.jaxws.client.jar *

10. Add the above jar file to class path and build the client project.
 (Before executing the client, change the trust store path.)

package org.sample.jaxws.client;

import org.wso2.auth.jaxws.client.AuthenticationAdmin;
import org.wso2.auth.jaxws.client.AuthenticationAdminPortType;

import com.sun.xml.internal.ws.transport.Headers;

import javax.xml.ws.BindingProvider;
import javax.xml.ws.handler.MessageContext;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Created by ajith on 2/8/15.
 */
public class LoginClient {

    public static void main(String[] args) {
        System.setProperty("javax.net.ssl.trustStore", "<product_home>/repository/resources/security/client-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
        AuthenticationAdmin authenticationAdmin = new AuthenticationAdmin();
        AuthenticationAdminPortType portType = authenticationAdmin.getAuthenticationAdminHttpsSoap11Endpoint();

        try {
            portType.login("admin", "admin", "localhost");
            // Read the session cookie from the login response.
            Headers headers = (Headers) ((BindingProvider) portType).getResponseContext().get(MessageContext.HTTP_RESPONSE_HEADERS);
            List<String> cookie = headers.get("Set-Cookie");
            // Attach the cookie to the request headers for subsequent calls.
            Map<String, List<String>> requestHeaders = (Map) ((BindingProvider) portType).getRequestContext().get(MessageContext.HTTP_REQUEST_HEADERS);

            if (requestHeaders == null) {
                requestHeaders = new HashMap<String, List<String>>();
            }
            requestHeaders.put("Cookie", Collections.singletonList(cookie.get(0)));
            ((BindingProvider) portType).getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, requestHeaders);

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
John MathonIs DeepLearning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained


2010 Convolutional Neural Nets

Convolutional Neural Nets operate on raw data from the senses using some basic numerical techniques which aren't that hard to understand.  It is possible these same techniques are employed in our brain.  Therefore the concern is that if these CNN have a similar ability to abstract things as a human brain does, it is simply a matter of scale to take a CNN system to the point where it can abstract up reality far enough to appear to be a general purpose learning machine, i.e. a human intelligence.

Part of this makes sense.  The way these CNN work is that every 2 layers of the CNN “brain” create abstractions and then select.  The more layers the more abstractions.    This is clearly how the brain works.  The brain is an abstraction building machine.  It takes raw sensory data and keeps building higher and higher level abstractions that we use to understand the world around us.  We also know the cortex of the brain is composed of layers (6 in the human brain) so layers seems like a workable hypothesis as a way to build a brain.

In a facial recognition algorithm using a CNN, the first 2 layers of the “artificial brain” match raw stimuli and try to find first level abstractions.   At this first layer for a facial recognition algorithm, the neurons will be able to recognize local phenomena like an eye, a nose, or an ear of different shapes and perspectives.  The more neurons, the more abstractions can be formed and the more possible classifications the CNN can generate.  However, a large number of these abstractions will be meaningless, poorly performing abstractions we should discard: they don't work time after time to help us understand but are just random coincidences in the data we sent to the brain.   The next layer of the CNN does a reduction (filter), essentially allowing the use of only the best performing abstractions from the previous level.   So, the first 2 layers give us the ability to abstract (recognize) some basic features of a face.  The next 2 layers operate to recognize combinations of these features that make up a face and then select the ones that produce the most reliable “face” recognition.   In the facial recognition example those next 2 layers may recognize that combinations of eyes, lips, ears and other shapes are consistent with a face, a tractor, or a spoken word.   So, by using a 4 layer CNN we can distinguish a face from something else.  The next 2 layers may abstract up facial characteristics that are common among groups of people, such as female or male, ethnicity, etc.
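The abstract-then-select pairing described above can be sketched numerically. This toy 1-D example (my own illustration, with a hand-fixed filter; real CNNs learn their filters from data) convolves a signal with a small edge detector, the "abstraction" step, and then max-pools, the "selection" step that keeps only the strongest local responses:

```java
import java.util.Arrays;

public class ConvPoolSketch {
    // Layer 1 "abstraction": convolve the input with a small feature detector.
    static double[] convolve(double[] input, double[] filter) {
        double[] out = new double[input.length - filter.length + 1];
        for (int i = 0; i < out.length; i++)
            for (int j = 0; j < filter.length; j++)
                out[i] += input[i + j] * filter[j];
        return out;
    }

    // Layer 2 "selection": max-pooling keeps only the best local response.
    static double[] maxPool(double[] input, int size) {
        double[] out = new double[input.length / size];
        for (int i = 0; i < out.length; i++) {
            double best = Double.NEGATIVE_INFINITY;
            for (int j = 0; j < size; j++) best = Math.max(best, input[i * size + j]);
            out[i] = best;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] signal = {0, 0, 1, 1, 0, 0, 1, 1, 0, 0};
        double[] edgeDetector = {-1, 1};            // responds to rising edges
        double[] features = convolve(signal, edgeDetector);
        double[] selected = maxPool(features, 3);   // keep best response per region
        System.out.println(Arrays.toString(selected));
    }
}
```

Stacking more such conv/pool pairs is what gives a CNN its higher and higher levels of abstraction.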

The more layers, the more abstractions we can learn.  This seems to be what the human brain does.  So, if you could make enough layers, would you have a human brain?

Each neuron in that first layer postulates a “match” by looking at local raw data in proximity to the neuron.  The neuron tries to find a formula that best recognizes a feature of the underlying stimuli and that consistently returns a better match on the next set of data presented.  This neuron is presented with spatially invariant data across the entire field of data from the lower layer.  In a Deep Belief Network (a related type of neural network) a Markov random mechanism is used to assemble possible candidate abstractions.  As stimuli are presented to the system these postulates evolve, and the second layer “selects” the best-performing matching algorithms, reducing the number of possible patterns down to a more workable number of likely abstractions that match better than others.  It does this simply by choosing the best-performing neuron among the local lower-level neurons it sees.  The algorithm each neuron uses estimates the error of the match from a known correct answer, and the neuron adjusts its matching criteria using a mathematical algorithm to rapidly approach the correct answer.  Such a mechanism requires what is called a “labeled” dataset, in which the actual answer is known so that the error function can be calculated.

Neural net algorithms work best when trained with labeled datasets, of course, because the system is given the correct answer and can rapidly approach it.  This way of learning is not that much different from what we do with Machine Learning (another branch of AI).
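The error-driven adjustment described above can be sketched with a single linear “neuron” trained on a labeled dataset.  This is a toy example (the data and the least-mean-squares update rule are chosen for illustration, not taken from any particular CNN): the neuron nudges its weights toward the known correct answers.

```python
# Minimal sketch of error-driven learning with a labeled dataset:
# a single linear "neuron" adjusts its weights to reduce the error
# between its output and the known correct answer (the label).
def train(samples, labels, lr=0.1, epochs=100):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = w * x + b
            error = y - pred          # known answer minus current guess
            w += lr * error * x       # nudge weights toward the answer
            b += lr * error
    return w, b

# Labeled data for the rule y = 2x + 1; the labels are the "teacher".
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]
w, b = train(xs, ys)                  # converges near w = 2, b = 1
```

With labels, each presentation yields an exact error signal, which is why supervised training converges so much faster than learning without a teacher.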

Another important advance in neural network design happened in 1997: the long short-term memory (LSTM) neuron, which can remember data.   This neuron can remember things indefinitely or forget them when needed.   Adding this neuron type to recurrent neural networks has allowed us to build much better cursive text recognition and phoneme recognition in voice systems.
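The remember/forget behavior of an LSTM cell can be illustrated with its gating arithmetic alone.  This is a deliberately simplified sketch: in a real LSTM the gate signals are learned functions of the input and the previous hidden state, whereas here they are set by hand to show a memory being written, held, and erased.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy LSTM-style memory cell (not a trainable network): the cell
# state is kept or erased by a "forget gate" and written through
# an "input gate".
def lstm_cell_step(cell, candidate, forget_signal, input_signal):
    f = sigmoid(forget_signal)   # ~1 keep the memory, ~0 erase it
    i = sigmoid(input_signal)    # ~1 write the new value, ~0 ignore it
    return f * cell + i * candidate

# Remember the value 5.0, hold it for several steps, then forget it.
cell = 0.0
cell = lstm_cell_step(cell, 5.0, forget_signal=10, input_signal=10)      # write
for _ in range(3):
    cell = lstm_cell_step(cell, 0.0, forget_signal=10, input_signal=-10) # hold
held = cell                                                              # still ~5.0
cell = lstm_cell_step(cell, 0.0, forget_signal=-10, input_signal=-10)    # forget -> ~0
```

Because the forget gate can sit near 1 indefinitely, the cell can carry information across arbitrarily many time steps — the property that made LSTM-based recurrent networks so much better at sequence tasks like handwriting and phoneme recognition.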

Deep Belief Networks

A related type of deep network called the Deep Belief Network operates by using probabilistic techniques to reproduce a set of inputs without labels.  A DBN is trained first with unlabeled data to develop abstractions and then tuned with a second phase of training using labeled data.   Improvements of this type of neural net have produced the best results of any deep network, quickly and without the massive labeled datasets for training that are otherwise a problem to obtain.   A further enhancement of this technique involves, during the second phase of learning, actively managing the learning of each set of layers in the network to produce the best results from each layer.   This would be equivalent to learning your arithmetic before learning your algebra.

In humans we do have an analog to the DBN and “labeled” dataset approach to learning, in the sense that we go to school.  In school they tell us the correct answers so our brains can learn which lower-level abstractions, which neurons, to select as producing the better answers.  As we go through school we fine-tune our higher-level abstractions.  All day the human brain is confronted with learning tasks in which the human learns the correct answer and could be training its neurons to select the best-performing abstraction from the candidates we build in our brains.  Labeled datasets by themselves are not a discriminating difference between human brains and CNNs.


CNNs and DBNs and variations on these neural network patterns are producing significantly better results for recognition problems.

To get links to more information on these technologies check this:


Articles in this series on Artificial Intelligence and Deep Learning


Is DeepLearning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained

Deep Learning Part 3  – Why CNN and DBN are far way from human intelligence


Ajith VitharanaSSL termination - API Gateway.

SSL Termination-API Gateway

The WSO2 API Manager product consists of the following four components.

1. API Publisher
2. API Store
3. API Gateway
4. API Key Manager

API Publisher:

The API Publisher provides a rich three-step wizard (Design / Implement / Publish) to create an enterprise API.

API Store:

The published APIs are visible in the API Store, which provides all the enterprise API store capabilities such as subscribing, token generation, rating, API docs, client tools, etc.

API Gateway

The published APIs are deployed in the API Gateway. All inbound and outbound API requests are accepted by the API Gateway. You can publish APIs using HTTP and/or HTTPS.

API Key Manager

Once a request reaches the API Gateway, it is redirected to the API Key Manager to validate the authentication token.

If the token is valid, the API Gateway forwards the request to the actual back-end API or service over a non-TLS (HTTP) or TLS (HTTPS) connection.

If the token is invalid, the API Gateway terminates the request and sends an authentication failure response back to the client that invoked the API.
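The token-validation flow just described can be sketched as a small Python simulation.  The function and variable names here are invented for illustration and are not WSO2 APIs; the point is only the decision logic: valid token → forward, invalid token → authentication failure.

```python
# Hedged sketch of the Gateway / Key Manager interaction described above.
VALID_TOKENS = {"a1b2c3"}  # stand-in for the Key Manager's token store

def key_manager_validate(token):
    """Key Manager: decide whether the presented token is valid."""
    return token in VALID_TOKENS

def gateway_handle(token):
    """Gateway: forward valid requests to the back end, reject the rest."""
    if key_manager_validate(token):
        return (200, "forwarded to back-end service")
    return (401, "authentication failure")
```

In the real product the Key Manager is a separate (possibly clustered) service the Gateway calls over the network, but the accept/forward versus terminate/401 decision is the same.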


# - All the APIs exposed by the API Gateway share the public certificate of the API Gateway. The client (which invokes the API) will use that certificate to establish the TLS connection with the API Gateway.

# - All four components of API Manager can be clustered separately to achieve high availability and load balancing in each layer.

# - The API metadata is stored in the registry database (REG_DB) and the API Manager database (AM_DB); therefore, those two databases should be shared across the Publisher, Store, and Key Manager components.

# - Reverse proxy servers can be set up in front of the API Gateway and API Store to add more security and load balancing.


John MathonIs DeepLearning going to result in human intelligence or better anytime soon – the setup?

Does a significant advance like the recent advances in AI presage a massive new potentially dangerous robotic world?


Elon Musk, Stephen Hawking, Bill Gates, and others have stated that recent advances in AI, specifically around CNNs (Convolutional Neural Nets), also called DeepLearning, have the potential to finally represent real AI.

This is exciting, and worrisome if true.   I have been interested in this problem from the beginning of my career.   When I started doing research at MIT into AI I had kind of a “depressing” feeling about the science.   It seemed to me that the process the brain used to think couldn’t be that hard, and that it wouldn’t take computer scientists very long, trying lots of possible approaches, to learn the basic operation and process of how people “abstract and learn” and eventually give computers the ability to compete with us humans.   Well, this has NOT been easy.   Decades later we had made essentially zero progress in figuring out how the brain did this “learning” thing, let alone what consciousness was and how to achieve it.    I have a blog about that sorry state of affairs from the computer science perspective and the pathetic results of biologists trying to solve the problem from the other side, i.e. to figure out how the brain worked.    Recent discoveries indicate the brain may be far more complicated than we thought even a few years ago.  This is not surprising.  My experience in almost every scientific discipline is that the more we look into something in nature, inevitably we discover it is a lot more complicated than it first seemed.

Things are simplified by a discovery but rapidly get more complicated

Let’s take genetics, for instance, and the amazingly simple discovery of DNA.  DNA was composed of 4 principal chemicals distributed across a number of chromosomes in a helical fashion.   This simple model seemed like it would lead to some simple experiments to understand these DNA strands and how they function.   Suffice it to say, the simple idea has become enormously complex, to the point that we now know DNA changes arise during the lifespan of humans that can affect their functioning, and some can be passed on to subsequent generations of children, contrary to a fundamental belief of evolution postulated by Darwin.   We learned that DNA was composed of 99% junk, only to discover it wasn’t junk but a whole different language than the language of genes.   Each of these things breaks down into more and more complex patterns.

This same pattern, where a simplifying discovery leads to some advances, we think it’s all so simple, and then we find it’s way more complicated as we look deeper, is found in physics over and over, in chemistry, and in almost every subject I am aware of.   So it is not surprising that the more we look at brains and intelligence, the more we learn it is more complicated than we initially thought.   We have a conceit as humans to believe the world must break down into simple concepts that we can figure out and eventually understand, but every time we make an advance in understanding a part, a whole new set of questions is raised that puts our understanding farther away than ever.   Every simplification is followed by an incredible array of complications that lead us to more questions than we knew existed when we started.

Interestingly, this philosophical problem, that as we look at something through a simplification or organizing mechanism the problem simply becomes more complex, was discussed in the book “Zen and the Art of Motorcycle Maintenance,” an incredibly insightful philosophy book published 40 years ago and still relevant.

I am therefore 100% skeptical that the recent discovery of a new way to operate neural nets is somehow the gateway to achieving the 50-year-old goal of human-level intelligence.   Nevertheless, the advance has moved us substantially forward in a field that has been more or less dead for decades.   We can suddenly claim much improved recognition rates for voices, faces, and text, and lots of new discoveries.  It’s an exciting time.   Yet the goal of human-level intelligence is far, far from where we are.

Will neural nets produce applications that will take our jobs away or make decisions to end human life?   Maybe, but it’s nothing different from what has been going on for centuries.

Neural nets were a technology first imagined in the 1950s and 1960s.  First implementations were tried in the 70s and 80s and produced less than impressive results.  For 30 years virtually no progress was made, until 2010, just a few years ago.   Some simple advances have since made neural nets the best-performing pattern recognition algorithms we have for speech, text, facial, and general visual recognition problems, among others.  This has brought neural nets, and in particular a form called Convolutional Neural Nets (CNNs), sometimes also called DeepLearning, to be hailed as a new discovery that will lead to human intelligence soon.

Some of these new recognition machines are capable of defeating humans at specific skills.  This has always been the case with computers in general.   At specific problems we can design, using our understanding of the problem, a computer algorithm which, when executed by a high-speed computer, performs better at that specific task than any human.  One of the first that was impressive and useful was Mathematica.  Released in the late 80s, Mathematica was able to solve complex algebraic and differential equation problems that even the best mathematicians couldn’t.  IBM’s Watson is a recent example of something similar, but as I’ve pointed out, such things are not examples of general-purpose learning machines.   Mathematica will never help you plan a vacation to Mexico, nor will Watson be able to solve mathematical equations.   Neither will ever have awareness of the world or be able to go beyond its limited domain.  They may be a threat to humans in the sense that they might cost jobs if we develop enough of these special-purpose systems.  They could eventually do a lot of the rote work that many humans do.   If such systems are put in charge of dangerous systems such as our nuclear arsenal and they make a decision to kill humans inadvertently, it is not because of evil AI; it is because some human was stupid enough to put a computer with no common sense in charge.    Such problems should be blamed on the humans who wrote those algorithms or who put such systems in charge of dangerous elements.

So, the idea that computers and AI could be a physical threat to humans or a job risk is real, but it is within the domain of our current abilities to control.



Chintana WilamunaScalable deployments - Choosing middleware components

Different user groups have different performance expectations from a system they’re involved with. Given a platform, be it middleware/application development or integration, there are three sets of stakeholders we can consider: developers, operations/devops, and customers.


Developers always want the platform/tools they’re working with to be robust. Some developer expectations include,

  1. A robust platform on which applications can be developed and deployed with minimum effort and time.
  2. Use of different 3rd party libraries to ease development as much as possible. When library usage increases, available resources for shared tenants go down. No amount of autoscaling can solve uncontrolled library usage.
  3. Fast service calls regardless of business logic complexity in services, sort of ignoring network boundaries, external DBs, and system calls. Another aspect is that when stricter security measures are utilized, more time is spent processing security-related headers, reducing response times.
  4. Fast application loading times as applications grow in size and complexity. Unless the application is designed to keep a low footprint and process large data chunks efficiently, the underlying platform cannot solve this. When the size and complexity of applications grow, autoscaling times of instances will also increase.

Operations or devops folks have a different set of performance expectations from a platform.

  1. They want instant instance spawning and start up times.
  2. Fast log collection. This can be collecting platform level logs to identify performance problems as well as collecting application level logs for  developers.
  3. No sane devops folks want to manually deploy a complex system. The entire deployment should be automated.
  4. Seamless autoscaling of services from the deployment itself, without having to wait until one server gets bogged down and then doing manual patching.
  5. Available resources should be equally distributed among active nodes for efficient utilization of shared resources.

Customers are probably the most difficult lot to please. Quoting from a Guardian article based on Mozilla’s metrics blog:

32% of consumers will start abandoning slow sites between one and five seconds

So applications exposed to customers should be fast and have short response times.

Users expect application developers to write fast responsive applications. App developers look at devops/operations to provide a scalable platform for their apps. Devops in turn rely on architects to provide a robust deployment architecture that scales based on application requirements.

Expectations on architects are,

  1. Identify and use correct reference architecture/s
  2. Use the correct deployment architecture based on application requirements as well as non-functional requirements
  3. Deployment architecture should be aligned with business expectations of the system
  4. Should take SLAs into account
  5. Identify correct SOA/middleware components for the business use cases involved

Getting the right architecture

Identifying the right SOA components is an important step. In a typical middleware stack there are multiple components to choose from, and each component usually has very different performance characteristics. There are many ways to implement the same scenario.

Let’s look at a simple example scenario: withdrawing money from a bank. This can be implemented in several different ways.

Example - Method 1

Here the scenario is implemented as a web service or web application that talks to a DB, hosted in an application container.

Example - Method 2

This method exposes DB operations through the Data Services Server, uses an ESB for mediation, and then exposes an API for withdrawing money.

Example - Method 3

In this method, a Business Process Server orchestrates the process, with a Business Activity Monitor for monitoring business transactions and a Governance Registry as a repository.

A given scenario can be implemented in many ways, and the complexity and mix of components will vary. Based on the scenario at hand, the right components should be selected.

Chanaka FernandoGIT Cheat Sheet for beginners

Clone an existing repository
$ git clone ssh://
Create a new local repository
$ git init

Changed files in your working directory
$ git status
Changes to tracked files
$ git diff
Add all current changes to the next commit
$ git add .
Add some changes in <file> to the next commit
$ git add -p <file>
Commit all local changes in tracked files
$ git commit -a
Commit previously staged changes
$ git commit
Change the last commit
Don‘t amend published commits!
$ git commit --amend

Show all commits, starting with newest
$ git log
Show changes over time for a specific file
$ git log -p <file>
Who changed what and when in <file>
$ git blame <file>

List all existing branches
$ git branch
Switch HEAD branch
$ git checkout <branch>
Create a new branch based on your current HEAD
$ git branch <new-branch>
Create a new tracking branch based on a remote branch
$ git checkout --track <remote/branch>
Delete a local branch
$ git branch -d <branch>
Mark the current commit with a tag
$ git tag <tag-name>

List all currently configured remotes
$ git remote -v
Show information about a remote
$ git remote show <remote>
Add new remote repository, named <remote>
$ git remote add <remote> <url> 
Download all changes from <remote>,
but don‘t integrate into HEAD
$ git fetch <remote>
Download changes and directly merge/ integrate into HEAD
$ git pull <remote> <branch>
Publish local changes on a remote
$ git push <remote> <branch>
Delete a branch on the remote
$ git branch -dr <remote/branch>
Publish your tags
$ git push --tags

Merge <branch> into your current HEAD
$ git merge <branch>
Rebase your current HEAD onto <branch>
Don‘t rebase published commits!
$ git rebase <branch>
Abort a rebase
$ git rebase --abort
Continue a rebase after resolving conflicts
$ git rebase --continue
Use your configured merge tool to solve conflicts
$ git mergetool
Use your editor to manually solve conflicts and (after resolving) mark file as resolved
$ git add <resolved-file> 
$ git rm <resolved-file>

Discard all local changes in your working directory
$ git reset --hard HEAD
Discard local changes in a specific file
$ git checkout HEAD <file>
Revert a commit (by producing a new commit with contrary changes)
$ git revert <commit>
Reset your HEAD pointer to a previous commit
...and discard all changes since then
$ git reset --hard <commit>
...and preserve all changes as unstaged changes
$ git reset <commit>
...and preserve uncommitted local changes
$ git reset --keep <commit>

Chanaka FernandoLearning GIT from level zero

Git is one of the most heavily used version control systems in the software industry. In this blog post, I cover the fundamental concepts of the Git VCS for a beginner-level user.

create a new repository

create a new directory, open it and perform a
git init
to create a new git repository.

checkout a repository

create a working copy of a local repository by running the command
git clone /path/to/repository

when using a remote server, your command will be
git clone username@host:/path/to/repository


your local repository consists of three "trees" maintained by git. the first one is your Working Directory which holds the actual files. the second one is the Index which acts as a staging area and finally the HEAD which points to the last commit you've made.

add & commit

You can propose changes (add it to the Index) using
git add <filename>
git add *

This is the first step in the basic git workflow. To actually commit these changes use
git commit -m "Commit message"
Now the file is committed to the HEAD, but not in your remote repository yet.

pushing changes

Your changes are now in the HEAD of your local working copy. To send those changes to your remote repository, execute
git push origin master
Change master to whatever branch you want to push your changes to.

If you have not cloned an existing repository and want to connect your repository to a remote server, you need to add it with
git remote add origin <server>
Now you are able to push your changes to the selected remote server


Branches are used to develop features isolated from each other. The master branch is the "default" branch when you create a repository. Use other branches for development and merge them back to the master branch upon completion.

create a new branch named "feature_x" and switch to it using
git checkout -b feature_x

switch back to master
git checkout master

and delete the branch again
git branch -d feature_x

a branch is not available to others unless you push the branch to your remote repository
git push origin <branch>

update & merge

to update your local repository to the newest commit, execute
git pull
in your working directory to fetch and merge remote changes.

to merge another branch into your active branch (e.g. master), use
git merge <branch>

in both cases git tries to auto-merge changes. Unfortunately, this is not always possible and results in conflicts. You are responsible to merge those conflicts manually by editing the files shown by git. After changing, you need to mark them as merged with
git add <filename>

before merging changes, you can also preview them by using
git diff <source_branch> <target_branch>


it's recommended to create tags for software releases. this is a known concept, which also exists in SVN. You can create a new tag named 1.0.0 by executing
git tag 1.0.0 1b2e1d63ff

the 1b2e1d63ff stands for the first 10 characters of the commit id you want to reference with your tag. You can get the commit id by looking at the log.


in its simplest form, you can study repository history using
git log
You can add a lot of parameters to make the log look like what you want. To see only the commits of a certain author:
git log --author=bob

To see a very compressed log where each commit is one line:
git log --pretty=oneline

Or maybe you want to see an ASCII art tree of all the branches, decorated with the names of tags and branches:
git log --graph --oneline --decorate --all

See only which files have changed:
git log --name-status

These are just a few of the possible parameters you can use. For more, see git log --help

replace local changes

In case you did something wrong (which for sure never happens ;) you can replace local changes using the command
git checkout -- <filename>

this replaces the changes in your working tree with the last content in HEAD. Changes already added to the index, as well as new files, will be kept.

If you instead want to drop all your local changes and commits, fetch the latest history from the server and point your local master branch at it like this
git fetch origin
git reset --hard origin/master

useful hints

built-in git GUI
gitk

use colorful git output
git config color.ui true

show log on just one line per commit
git config format.pretty oneline

use interactive adding
git add -i

This blog post was prepared with reference to this content.

John MathonWhy Open Source has changed from the cheapest software to the best software




Over the last 5 years we have seen a transformation in the way Open Source is viewed in Enterprises that is significant.

1.  Open source underwent a massive increase over the last 5 years with the help of organizations such as Google, Twitter, Facebook, Netflix and many others.

The number of new projects continues to grow exponentially, and many senior people at mainstream companies are writing open source on the side.   Developers and technologists consider it important to stay abreast of and contribute to open source.

2. The virtuous circle, or the rapid evolution of new technologies, has forced open source into a critical position in software development

The rapid pace of evolution in open source has overtaken the closed-source world.  The closed source world is now FOLLOWING open source.  Back in the 2000 time frame open source was creating projects that were replicas of closed source software.  Now closed-source companies are copying open source components and ideas.

3.  The Enterprise Sales model continues to come under increasing pressure because of Open Source

The benefits of component technology whether APIs or open source components is so great that licensing component by component is unworkable.  You may use one open source project today and switch tomorrow.  You may use dozens of open source components.   The software licensing model is too cumbersome and limiting.

4. The closed-source Enterprise License Model of software is NOT aligned with the customer

Closed source companies are interested in you re-upping your license to the next paid version.  They will put features into new versions to force you to commit and they will make you wait for those features you could get in open source today.

They want to write components because they make money when you use their closed-source software, NOT by making it easy for you to use open source projects.  They will want to lock you into their versions of these things, and if they don’t have them yet they will make you wait.  The whole model of closed source is opposed to the rapid-adoption-and-reuse paradigm of the new era.

Lastly, it is hard to understand why anyone would pay a license fee for a commodity product, and many enterprise closed-source products at this point are commodities that are available as multiple different, and in many cases better, open source projects with no license fee.

Organizations are coming to realize that Open Source is NOT

1) low-quality but cheap software

2) a small part of the problem

3) a risky way to get software


Instead they are realizing that Open Source is:

1) The only way to get certain kinds of key software

2) The best software for some major pieces they need

3) Something they could benefit from by participating in

4) More aligned with their business than closed-source software

5) Critical to their technology evolution and transformation

6) Critical for them to have the agility they need


Major companies are adopting an “open source first” policy or a “must look at open source alternatives” policy. Many major companies have in the last year made open source the reference architecture in their companies.   Some companies say they must use an open source solution if one is available, or that they must consider open source in any software purchasing decision.

I have seen this more and more with bigger customers, talking to CIOs and even lower-level people who say they have to talk to us (WSO2) because they need to consider open source in their decision-making process.   That is a huge change from 5 years ago, when many corporations thought open source was “risky.”

Where will this lead?

The bigger question is whether “Open Source” is the way we should be doing software.  Will we ultimately suffer for doing and using open source?   I don’t think so.

I have thought about this quite a bit over the years.   For most enterprise software there is no need for software companies to use the closed-source model.  There is very little to be gained by a software company choosing to keep its software secret in my opinion and that advantage is diminishing every day.   Software like hardware is moving to a pay per use model.

The open source software issue is simply a variant of the entire problem of intellectual property in the digital age. Since all IP can be represented digitally, whether news and reporting, music, film, education, or software, it all has the same problem: when IP can be copied infinitely for free, how do you maintain income to support the creation of IP?  The market is evolving to find answers for each domain in different ways.    I believe that no matter how easy it is to copy and use IP, the users of content will be motivated to pay for it in some way to cause the creation of the content (IP) they want.  It is simply a matter for the market to figure out, but as long as people need, like, and value music, film, software, etc., we will find a way to compensate the people who make them.  The fact that this may not be the way traditional existing companies do business is irrelevant.

Many people bemoaned the music industry’s collapse, but the fact that traditional music companies were vertically integrated to provide 100% of the value chain for a musician, and had control of the musician and the listener, was an artifact of history.  By disintermediating the components of the vertical industry into segments we still get our music.   In fact the amount of music has exploded, and I would say our ability to listen to and enjoy music has expanded, while musicians are not starving any more than they were.

You can take that example and see how all the IP domains in the digital world need to undergo a transition that will be painful for some.  Universities are one of my favorite topics in this space.  What will happen to universities?  That’s another blog.

Other Articles on Opensource:

Open Source is just getting started, you have to adopt or die

Inner Source, Open source for the enterprise

Software Development today, Cool APIs, Open Source, Scala, Ruby, Chef, Puppet, IntelliJ, Storm, Kafka

The technology “disruption” occurring in today’s business world is driven by open source and APIs and a new paradigm of enterprise collaboration

The Virtuous Circle is key to understanding how the world is changing – Mobile, Social, Cloud, Open Source, APIs, DevOps

Enterprise Application Platform 3.0 is a Social, Mobile, API Centric, Bigdata, Open Source, Cloud Native Multi-tenant Internet of Things IOT Platform


Dimuthu De Lanerolle

Useful Git commands

Q: How can I merge a distinct pull request to my local git repo ?

   You can easily merge a desired pull request using the following command. If you are doing this merge for the first time, you need to clone a fresh checkout of the master branch to your local machine and run this command from the console.
git pull +refs/pull/78/head

Q: How do we get the current repo location of my local git repo?

A: The command below will give the git repo location your local repo is pointing to.

git remote -v

Q: Can I change my current repo URL to a different remote repo URL?

A: Yes. You can point to another repo url as below.

git remote set-url origin

Q: What is the git command to clone directly from a non-master branch? (e.g. given two branches, master & release-1.9.0, how do I clone from the release-1.9.0 branch directly, without switching to release-1.9.0 after cloning from master?)

A: Use the following git command.

git clone -b release-1.9.0


Q: I need to go ahead and build no matter whether I get build failures. Can I do that with a Maven build?

A: Yes. Try building like this (the -fn flag means “fail never”).

mvn clean install -fn 

John MathonBreakout MegaTrends that will explode in 2015, Continuation Perceptive APIs, Cognitive Technologies, Deep Learning, Convolutional Neural Networks

brain sprouting

This is a continuation of the series on the Disruptive Megatrends for 2015 Series

12. Perceptive APIs / Cognitive Technologies / DeepAI – smart technologies become more and more important to differentiate

I strongly believe Deep Learning technologies and various AI sub-disciplines will see rapid adoption and dramatic growth in 2015.   This is driven by companies' need to extract greater intelligence from BigData and the need to put intelligence into social applications and applications in general.

Many people are not aware of the significant changes that have happened in Artificial Intelligence in the last few years.   Several areas of AI have made great strides; combined with some hardware advances, we are seeing a third wave of AI, and this time may be the magic time that sticks and leads to mass adoption.   AI was my original study at MIT and I have a lot of thoughts on conventional AI approaches, which have generally failed and which I was skeptical of from the beginning.  However, in the last couple of years we have seen the emergence of true AI, or what is being called Deep Learning or Deep AI.

Deep Learning involves the use of convolutional neural networks (CNNs), a synthetic form of “brain” based on virtual neurons configured in layers.   Each layer of a convolutional neural network either amplifies or selects from the previous layer.  How many layers to use, which layers follow which, how they are connected, and how to configure them is a matter of experience.  The number of layers determines how deep the learning is, and if you feed the output back into the input of the neural net you have potentially unlimited depth of learning.    It’s like designing your own brain.  The more you need the neural net to learn, the deeper the layers and the more neurons you must use.  The amount of processing for all the neurons grows exponentially, hence the interest in GPUs.  There are patterns that work in different scenarios.  You initially feed a lot of data into the CNN and it learns.  After the training period you feed in new data and the system reports or acts on the data in whatever way you have trained it to operate.
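To make the layer idea concrete, here is a minimal sketch in plain Python (my own illustration, not code from DeepMind or any framework): a one-dimensional convolution followed by a ReLU activation, which is the basic building block that gets stacked into the "layers" described above.

```python
def conv1d(signal, kernel):
    # Slide the kernel across the signal; each output is a dot product.
    k = len(kernel)
    return [sum(s * w for s, w in zip(signal[i:i + k], kernel))
            for i in range(len(signal) - k + 1)]

def relu(values):
    # The nonlinearity is what lets stacked layers select, not just amplify.
    return [max(0.0, v) for v in values]

# A kernel that responds to rising values in the input signal.
signal = [1.0, 3.0, 2.0, 5.0, 4.0]
layer1 = relu(conv1d(signal, [-1.0, 1.0]))
print(layer1)  # → [2.0, 0.0, 3.0, 0.0]: positive where the signal rises
```

A real network stacks many such layers, makes the kernels learnable, and adjusts them from training data.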

Neural networks were first designed in the 1980s but didn’t seem to work that well, and made only modest advances over the next 30 years.  The principal recent advance was the introduction of LSTM (long short-term memory), which led to several impressive achievements by neural networks that suddenly made them better than any other approach to recognition we have seen.   DeepMind (a British company acquired by Google) has, among other things, refined the LSTM in each neuron, which seems to have greatly improved the ability of neural networks to recognize and abstract information.  DeepMind demonstrated results interesting enough to get Google interested; the company was bought, and its technology has been employed in some of Google's visual and voice recognition projects.

One of the claims of the DeepMind folks was that they could feed the visual feed of Atari games into their system, and it could learn to play Pong and other games from the video feed well enough to play them, and in some cases defeat good human players.   Now that does sound impressive.   Other examples of Deep Learning applied to pattern recognition have shown significant improvements over previous approaches, leading to much higher recognition rates for text than ever before.

Elon Musk, who has seen more of DeepMind than any of us, is worried.  He claims that the technology at Google has the potential to become dangerous AI: AI which may be able to gain some form of innate intelligence or perception that might be a threat to us.    Since then other luminaries such as Bill Gates and even Stephen Hawking have expressed reservations about Deep Learning AI.

Whether or not you share the concerns stated by these people, the basic convolutional neural network technology is available in numerous open source projects and as APIs.   A lot of work and experience is needed to configure the layers and the parameters of each layer.   The amount of processing and memory required can be prodigious, and some dedicated GPUs are being developed to run CNNs.   Several projects are underway to determine the limits of CNN learning capabilities.  It is exciting.

The technology is in use at numerous companies and in numerous applications already.

Given the way advances like this make their way into the industry, rapid adoption of Deep Learning technology through open source and APIs is likely to reach numerous applications and underlying technologies in 2015.

I want to make a distinction between the various disciplines of AI which have been around for a while and are being applied to applications, and Deep Learning.   I believe the machine learning examples below will become Deep Learning before long this year.

D-Wave and Quantum Computer Technology is advancing rapidly


I don’t believe everyone will be buying a D-Wave anytime soon.   The newest version, coming out in March, will have 1,152 qubits and represents a dramatic advance in quantum computer technology.   This is now at the point where people should become aware of it and consider what impact it will have.

I discuss quantum computers more deeply in an article on Artificial Intelligence here.

Google is currently using a D-Wave for some recognition tasks.  The prognosis is positive, but they aren’t buying hundreds of these yet.   This is a technology that is rapidly evolving.  As of last year the D-Wave was powerful enough to compete with the best processors built today.  That’s quite an achievement: a company in the business of developing a completely new technology has built a processor which is as fast as, or maybe five times faster than, the current state-of-the-art processors available today, although at $10 million it’s a bit pricey compared to what is charged for state-of-the-art silicon processors.   The real point is that if they have accomplished this, and the qubit count scales at Moore’s law every year, which is extremely likely, then the D-Wave will quickly (<10 years) be faster than all computers on Earth today, at least for solving some set of problems.

The set of problems the D-Wave is good at are problems that involve optimization: logistics, recognition problems, and many other problems that don’t look like optimization problems but can be translated into them.   The D-Wave leverages the fuzzy quantum fog to calculate in roughly the square root of the time a normal processor would need for such problems.  As the number of qubits rises, the size of problem that can be solved grows exponentially and eventually supersedes all existing computing ability to solve these problems.
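To put the square-root claim in perspective, here is a rough back-of-the-envelope sketch (a generic quadratic-speedup illustration, not a model of the D-Wave's actual annealing process):

```python
import math

def classical_steps(n):
    # Exhaustive search over n candidate solutions: about n steps.
    return n

def quantum_steps(n):
    # A square-root (quadratic) speedup needs only about sqrt(n) steps.
    return math.isqrt(n)

for n in (10**6, 10**12):
    print("n = %d: classical ~%d steps, quantum ~%d steps"
          % (n, classical_steps(n), quantum_steps(n)))
```

As the problem size n grows, the gap widens quadratically, which is why each increase in qubit count matters so much.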

The big wins for the D-Wave and quantum computers are in the areas of security, better recognition for voice, visual and other big data, and smarter query responses.


CNN / Deep Learning is being applied here:

Facebook is using it in their face recognition software

Google apparently can transcribe street addresses from photos

Visual search engines, for instance in Google+

Check-reading machines used by banks and the treasury

IBM’s Watson


CNN and AI with BigData is a big theme.

Conventional AI is being used with BigData, but I expect over the next year we will see more uses of CNNs.    Here are some articles talking about companies doing this:

Machine Learning and BigData






CNN / Deep Learning resources


Atari Games Play

Deep Learning Resources


Madhuka UdanthaPython For Beginners

Python is an interpreted, dynamically typed language with a very straightforward syntax. Python comes in two basic versions, 2.x and 3.x, and Python 2 and 3 are quite different. This post is based on Python 2.x.

In Python 2, print is a keyword. In Python 3, print is a function and must be invoked with parentheses. There are no curly braces, no begin and end keywords, and no need for semicolons at the ends of lines; the only thing that organizes code into blocks, functions, or classes is indentation.

To mark a line as a comment, use a pound sign, ‘#’.  Use a triple-quoted string (three single or three double quotes) for multiple lines.
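A minimal illustration of these points (written so that it runs under both Python 2 and 3, since print with a single parenthesized value is valid in both):

```python
# A single-line comment starts with a pound sign.

"""
A triple-quoted string can span
multiple lines; standing alone like this, it is
commonly used as a block comment or docstring.
"""

message = "hello"   # a comment can also follow code on the same line
print(message)      # prints: hello
```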

Python Data Types

  • int
  • long
  • float
  • complex
  • bool

Python object types (builtin)

  • list  : Mutable sequence, in square brackets
  • tuple : Immutable sequence, in parentheses
  • dict  : Dictionary with key/value pairs, using curly braces
  • set  : Unordered collection of unique elements
  • str  : Sequence of characters, immutable
  • unicode  : Sequence of Unicode encoded characters
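The literal syntax for these built-in types, as a quick sketch (the long and unicode literals, 10L and u'...', exist only in Python 2, so they appear here only in a comment):

```python
nums = [1, 2, 3]                 # list: mutable sequence in square brackets
point = (4.0, 5.0)               # tuple: immutable sequence in parentheses
ages = {'ann': 30, 'bob': 25}    # dict: key/value pairs in curly braces
unique = {1, 2, 2, 3}            # set: unordered, duplicates collapse
name = "python"                  # str: immutable sequence of characters
# Python 2 only: big = 10L (long), text = u'unicode literal'

nums[0] = 99          # lists can be changed in place
print(nums, len(unique))  # [99, 2, 3] 3
```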


Sequence indexes

  • x[0]  : First element of a sequence
  • x[-1]  : Last element of a sequence
  • x[1:]  : From the second element through the last element
  • x[:-1]  : From the first element up to (but NOT including) the last element
  • x[:]  : All elements - returns a copy of the list
  • x[1:3]  : The second and third elements (indexes 1 and 2)
  • x[0::2]  : Start at the first element, then every second element
seq = ['A','B','C','D','E']
print seq
print seq[0]
print seq[-1]
print seq[1:]
print seq[:-1]
print seq[:]
print seq[1:3]
print seq[0::2]

Output of the above code:

['A', 'B', 'C', 'D', 'E']
A
E
['B', 'C', 'D', 'E']
['A', 'B', 'C', 'D']
['A', 'B', 'C', 'D', 'E']
['B', 'C']
['A', 'C', 'E']


Function and Parameters

Functions are defined with the “def” keyword, followed by the function name and parentheses.

#defining a function
def my_function():
    """ to do """
    print "my function is called"

#calling a defined function
my_function()

Parameters can be passed in many ways

  • Default parameters:
    def foo(x=3, y=2):
        print x

  • By position:
    foo(1, 2)

  • By name:
    foo(x=1, y=2)

  • As a list:
    def foo(*args):
        print args
    foo(1, 2, 3)

  • As a dictionary:

def foo(a, b=2, c=3):
    print a, b, c

d = {'a':5, 'b':6, 'c':7}
foo(**d)

#only parameter 'a' is required, so one argument is enough
foo(1)
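Putting the calling styles together in one sketch (written with return values instead of print so it behaves identically under Python 2 and 3):

```python
def describe(a, b=2, c=3):
    # Returns its arguments so each call is easy to inspect.
    return (a, b, c)

print(describe(1, 5))        # by position: (1, 5, 3)
print(describe(c=9, a=1))    # by name, in any order: (1, 2, 9)

options = {'a': 5, 'b': 6, 'c': 7}
print(describe(**options))   # dictionary unpacked into parameters: (5, 6, 7)
```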



John MathonBreakout MegaTrends that will explode in 2015, part 2 CIO Priorities

This year has been a breakout year for a number of key technology trends, part 2 CIO Priorities

9.  CIO priorities will shift from cost abatement to “digitization”

2013 and 2014 were, for most large organizations, an experimental period with the cloud and many new technologies.   One of the biggest benefits many CXOs saw from this new technology was cost reduction.  They have been pushing CIOs and IT departments to cut costs, cut costs, cut costs, and to do more with less.   They heard about how many companies were drastically reducing costs by moving to the cloud.   While not every organization can see immediate gain from such a move, some companies could gain dramatic efficiencies with very little effort.  One company I know dropped its costs by a factor of 50 for its services by moving to the cloud.

With the price reductions we saw in cloud services mid-year, the very favorable cost/value equation for using the cloud has become apparent to everyone.  Cost abatement will still be a factor in decisions to move to new technology, but the pace of change has accelerated and the expectations of the market have changed and are changing.  The fact is that today you can get virtual hardware with what you need to make your application run fast and efficiently in the cloud at unbeatable prices.    If you need SSD disk drives, Google and Amazon will sell you servers with SSDs at low prices.  If you need high-speed networking or high-speed processors, or alternatively if you don’t need those things but simply want to run your service at the least cost possible for the hours you use it in a day, nothing can beat the cloud today.

The flexibility to order it when you need it, in incremental levels, with the features you need for that application, makes this an unstoppable force.   It’s hard to think of a reason why a corporation would buy hardware these days.   The move to virtualization will take giant strides this year.

The need to adopt “digitization”, or “Platform 3.0”, the new emerging paradigm, will dominate over cost concerns.   The productivity gains from adopting the open source, API-centric way of operating and PaaS/DevOps deployment are overwhelming, and I can’t imagine anyone building successful applications using other approaches in today's competitive environment.   Organizations in 2015 will see more necessity to mainstream their digitization plans regardless of cost.   I believe IT spend in 2015 will stabilize and rise significantly, around 5%.

Gartner predicts 2.1% growth overall, with 5-7% growth in software spending.  This is astounding, as the rapid improvements in productivity and efficiency would reduce costs substantially if adoption remained constant.   However, there is tremendous demand for mobile, APIs, cloud services and new innovative approaches to IT.   The growth in IoT will help fuel a lot of corporate initiatives.

What I think is happening is that enterprises are focusing more on the top-line benefits of technology than on the bottom-line cost reduction advantages.   They are realizing that this new technology is not an option anymore, not something to research.   This is life or death now.   An article that supports this thesis is here.

Other Articles in this series on 2015 Megatrends:

Here is the list of all 2015 Technology Disruptive Trends

1. APIs - Component as a Service (CaaS) fuels major changes in applications and enterprise architecture

2. SaaS – the largest cloud segment continues to grow and change

3. PaaS – is poised to take off in 2015 with larger enterprises

4. IaaS – massive changes continue to rock and drive adoption

5. Docker (Application containerization) Componentization has become really important

6. Personal Cloud – increasing security solidifies this trend

7. Internet of Things (IoT) will face challenges

8. Open Source Adoption becomes broader and leading to death of the traditional enterprise sales model

9.  CIO priorities will shift from cost abatement to “digitization” – new technology renewal

10.  Cloud Security and Privacy –  the scary 2014 will improve.  Unfortunately privacy is going in the opposite direction

11. Platform 3.0 – disruptive change in the way we develop software will become mainstream

12. Perceptive APIs, Cognitive Technologies, Deep Learning, Convolutional Neural Networks

13.  Big Data – 2015 may see a 10 fold increase of data in the cloud

14. Enterprise Store – Enterprises will move to virtual enterprises and recognize a transformation of IT focused on services and less on hardware

15. App consolidation – the trend in 2015 will be to fewer apps that will drive companies to buddy up on apps

Here is more description of these changes.   Some of these I will talk more about in future blogs.

Related Articles you may find interesting:

Cloud Adoption and Risk Report

Paul FremantleWSO2 \ {Paul}

I have an announcement to make: as of this month, I am stepping down as CTO of WSO2, in order to concentrate on my research into IoT secure middleware.

I have been at WSO2 since the start: our first glimmer of an idea came in August 2004 and the first solid ideas for the company came about in December 2004. We formally incorporated in 2005, so this has been more than 10 years.

The company has grown to more than 400 employees in three different continents and has evolved from a startup into a multi-million dollar business. I'm very proud of what has been achieved and I have to thank all of the team I've worked with. It is an amazing company.

In late 2013, I started to research into IoT middleware as part of a PhD programme I'm undertaking at the University of Portsmouth. You can read some of my published research here. I plan to double down on this research in order to make significantly more progress.

Let me divert a little. You often meet people who wish to ensure they are irreplaceable in their jobs, to ensure their job security. I've always taken the opposite view: to make sure I am replaceable, so that I can move onto the next chapter in my career. I've never thought I was irreplaceable as CTO of WSO2 - I've simply tried to make sure I was adding value to the company and our customers. In 2013, I took a sabbatical to start my PhD and I quickly realised that the team were more than ready to fill any gaps I left.

WSO2 is in a great position: the company has grown significantly year over year, creating massive compound growth. The technology is proven at great customers like Fidelity, eBay, Trimble, Mercedes-Benz, Expedia and hundreds of others.

I am an observer on the board and I plan to continue in that role going forwards.

Once again I'd like to thank all the people who I've worked with at WSO2 who have made it such a productive and life-changing experience.

Shelan PereraPerfect Memory: A Developed Skill or a Supernatural Gift?

In the previous post I discussed the importance of developing one of the most important gifts a human has inside his brain: memory.

In this world we have externalized most of the important stuff to the external digital world. In a way this has made things evolve faster. We are gathering much more knowledge than before. Wait... is it knowledge, or knowledge pointers?

The most suitable term is knowledge pointers, with which we can retrieve knowledge easily. Imagine a world without the internet or any other form of knowledge reference. How far could we survive? We are losing our capability of retaining information in our brains day by day, while making external storage cost-effective at a similar pace.

I am striving to regain that capability, if I have lost it, and to see how far I can succeed. The following video will be an eye opener if you are one of those crazy people who would like to travel back through history and master one of the key aspects of a perfect human being.

This is a TED talk which you will be fascinated to watch.

"There are people who can quickly memorize lists of thousands of numbers, the order of all the cards in a deck (or ten!), and much more. Science writer Joshua Foer describes the technique — called the memory palace — and shows off its most remarkable feature: anyone can learn how to use it, including him"

Shelan PereraDo we need to master memory in Google's Era ?

If you need any information, you just type it into Google. Simple, isn't it? So do we have to bother memorizing anything in this world? Is there a payback for what you memorize? Can we rely on the internet, learn only how to search, and still be successful?

I am sure most people have forgotten the importance of memorizing things in this information age. But people who master memorizing things excel more often than people who do not. The simple reason is that you can apply only what you know.

As a Software Engineer, and as a Masters student in computer science, we often think that for any problem we can just Google and find answers. But how fast? You may say that within seconds you have thousands of results before you.

But... you need to type the query, go to the first answer, and if you are lucky you have a hit; if not, repeat until you find the correct answer. Imagine instead that you have things in memory: retrieval takes only a fraction of a second. It is like comparing a cache lookup to a hard disk access.

People often think: there is so much information, how can we memorize all of it? Yes, it is true that in this digital world content is produced incredibly fast. But you have to be smart enough to filter out what is important to you and memorize those things. We often underestimate our brain's capability to retain information, and lazily forget the techniques for mastering memory.

If you have not read it, the following book is a great source of encouragement, as well as a learning tool, by memory champion Kevin Horsley.

The following video is one of his world-record-breaking attempts [a successful world record attempt at pi matrix memorization].

Shelan PereraLearning Math and Science - Genius Mind

Do you think learning math and science is something alien from another universe? Do you struggle to solve complex mathematics or science problems? If you keep adding questions there will be a lot to add, because even I have some of those problems in my mind.

I excelled in college and was able to get into the top engineering university in Sri Lanka. At present I am reading for my Masters degree in distributed computing in Europe, which involves science and mathematics heavily.

My post Want to Learn Anything Faster was a spark that ignited my habit of learning. I was often comfortable with theoretical subjects which needed rote memorization or comprehension. I wanted to understand the reasons, so that anyone who suffers the same can benefit.  Further, there is a common misunderstanding that math is hard and complex, formed even before attempting to appreciate its beauty.

As human beings we are computing machines. We do a lot of math in our heads to survive in this world: measuring distance by contrasting objects, crossing a road safely without being hit by a vehicle; the list is endless. So why can't we do it in the classroom?

I often find that the way we approach math and science is wrong. Here are several observations I have made about why we think math is complex.

1) If we relate a story to a problem and try to solve it, it is easier than denoting it with x, y, or any other mathematical notation. When we have a story we have a solid mapping of the problem in our minds, so understanding the problem and working towards a solution feels more realistic. Good mathematicians and scientists make problems vivid in their minds. They live in them as real worlds. Symbols are just notation to express what they understood in common language.

2) In the classroom, and in tests, we try to rote-memorize concepts. Math and science are far easier when you understand the fundamentals. You need to feel, or live with, the concepts to solve problems. Learning an equation will not give you the ability to solve problems unless the problem is a simple substitution.
If you observe an equation clearly, it is a complete story: a story of how the incidents on the left side come to a common agreement with the right-hand side.

3) Connect what you learn with what you know. Our brain is structured as a web. If you do not want to lose newly learnt concepts, you have to link and bind them with what you already know; isolated memory islands disappear soon. Try to relate new material to concepts you have really understood, and build on top of them. If you find something hard at first, try it repeatedly in different ways, but always take a break: our brain needs time to digest and assimilate.

I am still researching and trying to apply these concepts in practice to see how they work in the real world, but the results have been interesting so far. I find myself learning more complex math and science problems than before, since I changed my approach.

I am currently reading this book, which is an excellent resource for anyone who wants to develop a "mind for numbers". Happy learning.

Heshan SuriyaarachchiResolving EACCES error when using Angular with Yeoman

In one of my earlier posts, I discussed installing NodeJS, NPM and Yeoman. Although that setup was good enough to start my initial work, it gave me the following error when I tried to install the Angular generator. This post describes how to resolve the error.


| | .------------------------------------------.
|--(o)--| | Update available: 1.4.5 (current: 1.3.3) |
`---------´ | Run npm install -g yo to update. |
( _´U`_ ) '------------------------------------------'
| ~ |
´ ` |° ´ Y `

? 'Allo Heshan! What would you like to do? Install a generator
? Search NPM for generators: angular
? Here's what I found. Install one? angular
npm ERR! Darwin 14.0.0
npm ERR! argv "node" "/usr/local/bin/npm" "install" "-g" "generator-angular"
npm ERR! node v0.10.33
npm ERR! npm v2.1.11
npm ERR! path /Users/heshans/.node/lib/node_modules/generator-angular/
npm ERR! code EACCES
npm ERR! errno 3

npm ERR! Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/'
npm ERR! { [Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/']
npm ERR! errno: 3,
npm ERR! code: 'EACCES',
npm ERR! path: '/Users/heshans/.node/lib/node_modules/generator-angular/' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.
npm ERR! error rolling back Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/'
npm ERR! error rolling back { [Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/']
npm ERR! error rolling back errno: 3,
npm ERR! error rolling back code: 'EACCES',
npm ERR! error rolling back path: '/Users/heshans/.node/lib/node_modules/generator-angular/' }

npm ERR! Please include the following file with any support request:
npm ERR! /Users/heshans/Dev/projects/myYoApp/npm-debug.log

It was due to a permission error in my setup. I tried giving my user permission to access those files, but that still didn't resolve the issue. So I removed my existing NodeJS and NPM installations, and then used the following script by isaacs, with slight modifications. It worked like a charm, and I was then able to successfully install the AngularJS generator for Yeoman.

PS: Also make sure that you have updated the PATH variable in your ~/.bash_profile file:
export PATH=$HOME/local/bin:~/.node/bin:$PATH

Waruna Lakshitha JayaweeraUse Account Lock feature to block user token generation


In this blog I will describe how to block a user from getting a token. I am using API Manager 1.6 with the IS 4.6.0 features installed.


Step 1

Step 2

Step 3

Create a new user named testuser and grant the subscriber permission. Then go to Users, select the required user (testuser),
and go to User Profiles > Lock Account (set Account Locked to FALSE) > Update.

Step 4

After this restart the servers.

Test the scenario

Step 1

Log in as the test user and subscribe to any API.
Then try to generate a token like this:

curl -k -d "grant_type=password&username=testuser&password=test123&scope=PRODUCTION" -H "Authorization: Basic b3ZKMEtvVGd4YlJ5c2dBSDVQdGZnOUpJSmtJYTpBVjVZVFJlQkNUaGREUWp2NU0wbUw2VHFkdjhh" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:9443/oauth2/token

You will get tokens.

Step 2

Log in as admin.
Then go to Users, select the required user (testuser), and go to User Profiles > Lock Account (set Account Locked to TRUE) > Update.

Step 3

As in Step 1, try to generate a token.
You will get the following message:
{"error":"invalid_grant","error_description":"Provided Authorization Grant is invalid."}
Now you're done.


Dmitry SotnikovDownload Links for PowerGUI and QAD cmdlets

With Dell’s acquisition of Quest and all the IT reorganization that followed, it is actually not that easy to find these two popular free PowerShell tools any longer. So here are the links that work today (January 30, 2015):


PowerGUI

The download is freely available from Dell’s PowerGUI community.

The community itself has also moved to a new home.

Dell Software is still maintaining the product – as I am writing this the latest version is 3.8 released in April 2014.

Quest / QAD cmdlets

This one is a little trickier to find:

If this link changes for some reason, all of Dell’s freeware and trial links can be found in this catalog:

Happy PowerShelling!

Sivajothy VanjikumaranWSO2's 6th Office opened in Jaffna

WSO2 has opened an office in Jaffna with 10 employees, including 9 software engineers.

Sivajothy VanjikumaranGIT 101 @ WSO2


Git is yet another source code management system, like SVN, Harvest, Mercurial, and so on!

Why GIT?

Why Git instead of SVN at WSO2?
I do not know why! It might be an off-site meeting decision taken in Trinco after landing from an adventurous flight trip ;)

  • Awesome support for the automation story
  • Easy to manage
  • No need to worry about backups and other infrastructure issues
  • User friendly
  • Your code reputation is publicly visible

GIT in WSO2.

WSO2 has two different repositories.
  • Main repository.
    • The main purpose of this repository is to maintain an unbreakable code base, actively built for the continuous delivery story, incorporated with integration automation.
  • Development repository.
    • The development repository is the place where teams play around with their active development.
    • wso2-dev is a fork of the wso2 repo!


Now this statement is invalid, as WSO2 changed its process in Dec 2014.


  1. Developers should not fork the wso2 repo.
    1. Technically he/she can, but the pull request will not be accepted.
    2. If something happens and the build breaks, he/she should take full responsibility, fix the issue, and answer the mail thread following the build break :D
  2. Developers should fork the respective wso2-dev repo.
    1. He/She can work on development in the forked repo, and when he/she feels the build won't break, send a pull request to wso2-dev.
    2. The pull request should be reviewed by the respective repo owners and merged.
    3. On merge, the integration TG builder machine is triggered; if the build passes, no problem. If it fails, he/she will get a nice e-mail from Jenkins ;) so do not spam or filter it :D. The respective person should quickly take action to solve it.
  3. When the wso2-dev repository is in a stable condition, the team lead/release manager/responsible person has to send a pull request from wso2-dev to wso2.
    1. WSO2 has a pre-builder machine to verify whether the pull request is valid.
      1. If the build passes and the person who sent the pull request is whitelisted, the pull request gets merged into the main repository.
      2. If the build fails, the pull request is terminated and a mail is sent to the person who sent it. The respective team then has to work out and fix the issue.
      3. If the build passes but the sender is not whitelisted, the pre-builder marks the pull request as needing review by an admin. But ideally the admin will close that ticket and ask the person to send the pull request to wso2-dev ;)
      4. If everything merges peacefully into the main repo, the main builder machine, aka the continuous delivery machine, builds it. If that fails, the TEAM needs to get into action and fix it.
  4. You do not need to build anything upstream; ideally everything you need should be fetched from Nexus.
  5. Always sync with the forked repository.

GIT Basics

  1. Fork the respective code base to your git account
  2. git clone <your-fork-url>
  3. git add <changed-files>
  4. git commit -m "blah blah blah"
  5. git commit -m "Find my code if you can" -a   (-a also stages modified tracked files)
  6. git push

Git Beyond the Basics

  • Always sync with upstream before pushing code to your own repository

WSO2 GIT with ESB team

ESB team owns

Nobody other than the ESB team has merge rights :P for these code repositories. So whenever somebody tries to screw up our repo, please take a careful look before merging!
The first principle is that no one is supposed to build anything other than the currently working project.

Good to read

[Architecture] Validate & Merge solution for platform projects

Maven Rules in WSO2

Please find the POM restructuring guidelines below, in addition to the things we discussed during today's meeting.

  1. The top-level POM file is the 'parent POM' for your project; there is no real requirement to have a separate Maven module to host the parent POM file.
  2. Eliminate the POM files in the 'component', 'service-stub' and 'features' directories, as there is no gain from them; instead, call the real Maven modules directly from the parent POM file (REF - [1]).
  3. You must have a <dependencyManagement> section in the parent POM, and it should define all your project dependencies along with versions.
  4. You CAN'T have <dependencyManagement> sections in any POM file other than the parent POM.
  5. In each submodule, make sure Maven dependencies are declared WITHOUT versions.
  6. When you introduce a new Maven dependency, define its version under the <dependencyManagement> section of the parent POM file.
  7. Make sure you have defined the following repositories and plugin repositories in the parent POM file. These will be used to pull SNAPSHOT versions of other Carbon projects that are used as dependencies of your project.
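A minimal sketch of rules 3-6 above (the dependency shown is illustrative):

```xml
<!-- Parent POM: versions are pinned once, under dependencyManagement -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- Submodule POM: the same dependency, declared WITHOUT a version -->
<dependencies>
  <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
  </dependency>
</dependencies>
```

Each submodule then inherits the version from the parent, so upgrading a dependency means changing one line in one file.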

Prabath SiriwardenaMastering Apache Maven 3

Maven is the number one build tool used by developers and it has been there for more than a decade. Maven stands out among other build tools due to its extremely extensible architecture, which is built on top of the concept, convention over configuration. That in fact has made Maven the de-facto tool for managing and building Java projects. It’s being widely used by many open source Java projects under Apache Software Foundation, Sourceforge, Google Code, and many more. Mastering Apache Maven 3 provides a step-by-step guide showing the readers how to use Apache Maven in an optimal way to address enterprise build requirements.

 By following the book, readers will gain a thorough understanding of the following key areas.
  • Apply Maven best practices in designing a build system to improve developer productivity.
  • Customize the build process to suit it exactly to your enterprise needs by developing custom Maven plugins, lifecycles and archetypes. 
  • Troubleshoot build issues with greater confidence. Implement and deploy a Maven repository manager to manage the build process more smoothly. 
  • Design the build in a way, avoiding any maintenance nightmares, with proper dependency management. 
  • Optimize Maven configuration settings. 
  • Build your own distribution archive using Maven assemblies. Build custom Maven lifecycles and lifecycle extensions.
Chapter 1, Apache Maven Quick Start, focuses on giving a quick start on Apache Maven. If you are an advanced Maven user, you can simply jump into the next chapter. Even for an advanced user it is highly recommended that you at least brush through this chapter, as it will be helpful to make sure we are on the same page as we proceed.

Chapter 2, Demystifying Project Object Model (POM), focuses on core concepts and best practices related to POM, in building a large-scale multi-module Maven project.

Chapter 3, Maven Configurations, discusses how to customize Maven configuration at three different levels – the global level, the user level, and the project level for the optimal use.

Chapter 4, Build Lifecycles, discusses Maven build lifecycle in detail. A Maven build lifecycle consists of a set of well-defined phases. Each phase groups a set of goals defined by Maven plugins – and the lifecycle defines the order of execution.

Chapter 5, Maven Plugins, explains the usage of key Maven plugins and how to build custom plugins. All the useful functionalities in the build process are developed as Maven plugins. One could also easily call Maven, a plugin execution framework.

Chapter 6, Maven Assemblies, explains how to build custom assemblies with Maven assembly plugin. The Maven assembly plugin produces a custom archive, which adheres to a user-defined layout. This custom archive is also known as the Maven assembly. In other words, it’s a distribution unit, which is built according to a custom layout.

Chapter 7, Maven Archetypes, explains how to use existing archetypes and to build custom Maven archetypes. Maven archetypes provide a way of reducing repetitive work in building Maven projects. There are thousands of archetypes out there available publicly to assist you building different types of projects.

Chapter 8, Maven Repository Management, discusses the pros and cons in using a Maven repository manager. This chapter further explains how to use Nexus as a repository manager and configure it as a hosted, proxied and group repository.

Chapter 9, Best Practices, looks at and highlights some of the best practices to be followed in a large-scale development project with Maven. It is always recommended to follow best practices since it will drastically improve developer productivity and reduce any maintenance nightmare.

Dedunu DhananjayaDo you want Unlimited history in Mac OS X Terminal?

Back in 2013, I wrote a post about making terminal history unlimited. Recently I moved from Linux to Mac OS, and I wanted unlimited history there too. By default, Mac OS X keeps only 500 entries in history; new entries replace old ones.

Open a terminal window and type the command below.

open ~/.bash_profile

Alternatively, open the file in an editor:

vim ~/.bash_profile

Most probably you will get an empty file. Add the line below to that file. If the file is not empty, add it at the end.

export HISTSIZE=
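As a fuller sketch, both variables that bound history size can be cleared (whether an empty HISTSIZE is treated as unlimited depends on the bash version, so treat this as an assumption to verify on your setup):

```shell
# Append to ~/.bash_profile: empty values remove the history limits
export HISTSIZE=        # max entries kept in memory per session
export HISTFILESIZE=    # max entries kept in ~/.bash_history
```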

Next, save the file. Close the terminal and open a new terminal window. From now on, your whole history will be stored in the ~/.bash_history file.

sanjeewa malalgodaHow to add sub theme to API Manager jaggery applications - API Manager store/publisher customization

In API Manager we can add sub themes and change the look and feel of the jaggery applications. In this post I will provide high-level instructions for that: customize the existing theme and add the new theme as a "sub theme" to the store and publisher.

(1) Navigate to "/repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes" directory.
(2) Create a directory with the name of your subtheme. For example "test".
(3) Copy the "/repository/deployment/server/jaggeryapps/store/site/themes/fancy/css/styles-layout.css" to the new subtheme location "repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes/test/css/styles-layout.css".
(4) At the end of the copied file, add the CSS changes in [a].
(5) Edit the "/repository/deployment/server/jaggeryapps/store/site/conf/site.json" file as below in order to make the new sub theme the default theme.
        "theme" : {
               "base" : "fancy",
               "subtheme" : "test"
        }

[a] Custom CSS to add to the new sub theme

.link-to-user {
  max-width: 100px;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}
@media only screen and (max-width:1200px){
    .navbar .nav.pull-right.login-sign-up{
        width: 100%;
        border-left: solid 1px #292e38;
        border-bottom: solid 1px #292e38;
    }
    .menu-content .navbar .nav.pull-right.login-sign-up > li{
        /* ... */
    }
    .menu-content .navbar .nav.pull-right.login-sign-up > li.dropdown > a, .menu-content .navbar .nav.pull-right.login-sign-up > li > a{
        color: #000 !important;
    }
    #wrap > div.header + div.clearfix + .container-fluid{
        /* ... */
    }
}

Adam FirestoneTreasures from the East: How Funded Meritocracy Can Change Government and Cybersecurity

For centuries, the trade winds, or Trades, have been the means by which the bounty of the East has enriched the West.  These riches were often tangible items such as precious metals, textiles, works of art and gemstones.  The most enduring itinerant wealth, however, has been ideas that fundamentally altered Western concepts of technology, law, government and education.  The westward migration of knowledge continues today, and may hold the key to economic innovation and a safer, more secure cyberspace.

There’s a long history of progressive ideas emanating from the East.  For example, western ideas of governance by a professional civil service are Chinese in origin.  The concept of a civil service meritocracy originated in China in 207 BCE.  Prior to that, official appointments were based on aristocratic recommendations and the majority of bureaucrats were titled peers.  As the empire grew and nepotism became rampant, the system broke down and government became increasingly inefficient and ineffective. 

The solution was the “Imperial Examination,” a sweeping testing system designed to select the best and brightest candidates for civil service.  Initiated in the Han dynasty, this system of examination and appointment became the primary path to state employment during the Tang dynasty (618 – 907 CE), and remained so until 1905. 

The examination curriculum ultimately covered five areas of study: military strategy, civil law, revenue and taxation, agriculture and geography, and the Confucian classics.  There were five testing levels, each increasing in scope and difficulty.  This hierarchy was intended to match candidates to levels of responsibility associated with prefecture, provincial, national and court-level appointments respectively.  This examination is regarded by historians as the first merit-based, standardized government occupational testing mechanism.

Unfortunately, innovative ideas for government travel less rapidly than the Trades.  More than a millennium passed before a comparable civil service meritocracy was implemented in the United States.  The Pendleton Civil Service Reform Act was passed in 1883 in response to the assassination of President Garfield by a civil service applicant, Charles Guiteau, who had been rejected under the previous patronage (or spoils) system.  The Act required applicants to pass an exam in order to be eligible for civil service jobs.  It also afforded protections against retaliatory, partisan or political dismissal, freeing civil servants from the influences of political patronage.  As a result (so the theory went), civil servants would be selected based on merit and the career service would operate in a politically neutral manner.

Another critically relevant Eastern innovation addresses the creation, nurturing and maintenance of an innovative, technically astute cyber workforce.  The Israeli Talpiot program is one of the most successful examples of national investment in cyber education and training in the world.

In the 1973 Yom Kippur War, Israeli forces were surprised by Egyptian use of sophisticated technology, including surface to air missiles and guided antitank missiles.  In response, the Israeli government set out to ensure that its forces would have a dominant technological superiority in all future conflicts.  In 1979, Israel implemented Talpiot.  Talpiot creates a synergy between the Israeli national defense infrastructure and the country’s most prestigious universities that produces an elite talent pool dedicated to the most pressing security technology issues.

Approximately 50 students (out of a candidate pool of 3,000 to 5,000) who demonstrate excellence in the natural sciences and mathematics are selected for Talpiot annually.  Their university tuition and fees are sponsored by the Israel Defense Force (IDF) (specifically the Israeli Air Force (IAF)) and they graduate with an officer’s commission.

The Talpiot application process begins after the equivalent of junior year in high school.  After an initial down selection, candidates are tested on basic knowledge as well as reasoning and analytic abilities.  Applicants who pass these tests then go through advanced screening.

Successful applicants enter a three year training cycle, which accounts for the three years of mandatory military service required of Israeli citizens.  While university classes are in session they pursue academic studies.  Military training takes place during the rest of the year.  Upon graduation and commissioning, the candidates spend an additional six years in regular IDF units where they assume senior roles in organizations dedicated to technical research and development (R&D). 

Talpiot builds on a unique, three-part curriculum that features academic, military and ethical cores.  The academic core is based around a bachelor’s degree in physics and mathematics.  The military core includes combat and specialized military occupational professional training, projects emphasizing both basic and applied research, and a thorough grounding in the Israeli technology and defense establishment.  The ethical core stresses Israeli culture, geography and history, leadership, the IDF mission, and core IAF and IDF values.

The academic core is rigorous.  Graduates earn a Bachelor of Science degree from the Faculty of Mathematics and Natural Sciences at the Hebrew University.  The course of study includes a degree in physics augmented with mathematics and computer science based studies. 

Upon completion of their studies, Talpiot graduates take positions with operational technology development organizations within the IDF.  In these roles, they conduct advanced technology research, develop advanced weapons, design algorithms and computer applications, or conduct systems analysis.  While most Talpiot alumni serve in R&D units, there is an operational option available.  Those choosing this option serve in army ground combat units, on naval ships and submarines, or as air force pilots.  After approximately three years of service in operational units, graduates are assigned to R&D organizations where they contribute the perspectives and insight gained in the field to the R&D effort.

Through Talpiot, Israel has gained a well-trained, highly competent cadre of technical specialists conducting R&D that is extraordinarily responsive to national security needs.  Talpiot research leads to rapid fielding of advanced technical solutions to both physical and cybersecurity problems.  Talpiot alumni are actively courted by global venture capital firms and have created a significant number of successful technology startups that have benefitted both the Israeli and global economies.

As with ideas of a civil service meritocracy, the notion of state-sponsored training and education of a technocratic elite to meet public and private sector needs has moved west.  On January 1st, 2015, British national morning newspaper The Independent reported on ideas coming out of Whitehall and Government Communications Headquarters (GCHQ), the UK's counterpart to the US National Security Agency (NSA). 

Impressed by Talpiot’s success in the defense and commercial sectors, the British government is seeking to emulate Talpiot with a variant of the successful Teach First program (itself an offshoot of the Teach for America program in the United States).  The program’s (informally known as “Spook First”) goal is to convince promising young university graduates to work for and with GCHQ to develop new technologies that can be transitioned to the commercial sector, driving economic growth.  After a two year commitment, program alumni would be free to move to the private sector.

Unfortunately, something appears to have been garbled in translation as the Talpiot concept moved west.  Britain’s best and brightest technical graduates, already courted by prospective employers, have little incentive to take a relatively low paying government position.  The British proposal does not cover a candidate’s educational expenses, which are not trivial.  University tuition in the UK averages approximately $14,000 per year.  And that’s without accounting for the cost of room, board, books and other living expenses.  Given that the UK does not have mandatory military service, there is little incentive to drive quality candidates into the Spook First program.  From a student’s perspective, Spook First just doesn’t add up.

 The keys to Talpiot’s success are clear: 
  •  a fully funded world-class education;
  •  leadership positions in exciting, relevant technical R&D organizations; and
  •  a high probability of venture capital funding for technology startups.

 The translation in Britain yields:  “We’ll let you play with us, in a low paying job, after you’ve paid your own way.”  This does not set the UK up for success.

And what of the United States?  Innovation is part of the American DNA.  Unfortunately, so are skyrocketing education costs (average annual cost for a private university in the US is $32,000 per year), a national critical infrastructure that is increasingly vulnerable to cyber-attack and a desperate shortage of qualified cybersecurity professionals.  Given this perfect storm, bringing Talpiot even further west in a way that both replicates all of its key components and applies them in a uniquely American way makes a great deal of sense.  Providing qualified students a means to afford higher education and a mechanism to translate drive and innovation into private sector growth is a winning proposition for students, the economy and the nation.

Isuru PereraJava Mission Control & Java Flight Recorder

Last year, I got two opportunities to talk about Java Mission Control & Java Flight Recorder.

I first talked about "Using Java Mission Control & Java Flight Recorder" as an internal tech talk at WSO2. I must thank Srinath for giving me that opportunity.

After that, Prabath also invited me to do a talk at Java Colombo Meetup. Prabath, Thank you for inviting me and giving me the opportunity to talk at the Java Colombo Meetup!

I'm also very excited to see that Marcus Hirt, the Team Lead for Java Mission Control, mentioned the Java Colombo Meetup in his blog post: "My Favourite JMC Quotes". It's so nice to see "Sri Lanka" mentioned in his blog post! :)

Not to mention that there were recently JMC presentations from Houston to Sri Lanka.
From Marcus' Blog

Here are the slides used at the meetup.

Marcus Hirt's blog posts really helped me to understand JMC & JFR concepts and his tutorials were very helpful for the demonstrations.

In this blog post, I want to note down important instructions on using JFR and other tools.

Java Experimental Tools

I first started the talk by mentioning the various tools provided in the JDK.

Examples of using some Monitoring Tools: jps, jstat

# List java processes.
jps

# Print a summary of garbage collection statistics.
jstat -gcutil <pid>

Examples of using some Troubleshooting Tools: jmap, jhat, jstack

# Print a summary of heap
sudo jmap -heap <pid>

# Dump the Java heap in hprof binary format
sudo jmap -F -dump:format=b,file=/tmp/dump.hprof <pid>

# Analyze the heap dump
jhat /tmp/dump.hprof

# Print java stack traces
jstack <pid>

Java Flight Recorder

We need to use the following options to enable Java Flight Recorder.

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder

To produce a Flight Recording from the command line, you can use the "-XX:StartFlightRecording" option. For example:
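An illustrative invocation (the application jar name is a placeholder) might look like this:

```shell
# Start a 60-second recording 20 seconds after JVM startup and write it to a file
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:StartFlightRecording=delay=20s,duration=60s,name=MyRecording,filename=/tmp/myrecording.jfr \
     -jar app.jar
```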


The relevant settings are in $JAVA_HOME/jre/lib/jfr.

Note that the above command starts a "Time Fixed Recording".

You can also use the following to change log levels in JFR.


Use the default recording option to start a "Continuous Recording"


Default recording can be dumped on exit. Only the default recording can be used with the dumponexit and dumponexitpath parameters.
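Putting the default recording, dump-on-exit and log-level notes together, an illustrative command line (option names as in the Oracle JDK 7/8 JFR options; verify against your JDK version) could be:

```shell
# Continuous ("default") recording, dumped to a file when the JVM exits,
# with more verbose JFR logging
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder \
     -XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,dumponexitpath=/tmp/exit.jfr,loglevel=info \
     -jar app.jar
```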


The "jcmd" command

The "jcmd" tool is the JVM Diagnostic Commands tool. It is very useful; we can use it to send various diagnostic commands to a Java process.

# View Diagnostic Commands
jcmd <pid> help

As you can see, we can use this "jcmd" command to start a flight recording

# Start Recording
jcmd <pid> JFR.start delay=20s duration=60s name=MyRecording filename=/tmp/recording.jfr settings=profile

#Check recording
jcmd <pid> JFR.check

#Dump Recording
jcmd <pid> JFR.dump filename=/tmp/dump.jfr name=MyRecording

Kalpa WelivitigodaDate time format conversion with XSLT mediator in WSO2 ESB

I recently came across a requirement where an xsd:dateTime in the payload needed to be converted to a different date-time format, as follows:

Original format : 2015-01-07T09:30:10+02:00
Required date: 2015/01/07 09:30:10

In WSO2 ESB, I found that this transformation can be achieved with an XSLT mediator, a class mediator or a script mediator. In overview, the XSLT mediator uses an XSL stylesheet to format the XML payload passed to the mediator, whereas with the class mediator and the script mediator we use Java code and JavaScript code respectively to manipulate the message context. In this blog post I am going to present how this transformation can be achieved by means of the XSLT mediator.

XSL Stylesheet
<?xml version="1.0" encoding="UTF-8"?>
<localEntry xmlns="http://ws.apache.org/ns/synapse" key="dateTime.xsl">
   <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" version="2.0">
      <xsl:output method="xml" omit-xml-declaration="yes" indent="yes"/>
      <xsl:param name="date_time"/>
      <xsl:template match="/">
         <xsl:value-of select="format-dateTime(xs:dateTime($date_time), '[Y0001]/[M01]/[D01] [H01]:[m01]:[s01] [z]')"/>
      </xsl:template>
   </xsl:stylesheet>
   <description/>
</localEntry>

Proxy configuration
<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="DateTimeTransformation" transports="https http" startOnLoad="true" trace="disable">
   <target>
      <inSequence>
         <property name="originalFormat" expression="$body/dateTime/original"/>
         <xslt key="dateTime.xsl">
            <property name="date_time" expression="get-property('originalFormat')"/>
         </xslt>
         <log level="full"/>
      </inSequence>
   </target>
</proxy>

The dateTime.xsl XSL stylesheet is stored as an inline XML local entry in the ESB.

In the proxy, the original date is passed as a parameter ("date_time") to the XSL stylesheet. I have used the format-dateTime function, an XSL 2.0 function, to do the transformation.

Sample request
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="">
<soap:Header />

Console output
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="">
<dateTime xmlns="" xmlns:xs="">
<required>2015/01/07 09:30:10 GMT+2</required>

Pavithra MadurangiEmail based login for tenants - For WSO2 Carbon based Products.

This simple blog post explains how to configure WSO2 Carbon based servers to support email authentication for tenants.

e.g.: If a tenant user's email user name takes the form user@tenantDomain, then that user should be able to log in to the management console of WSO2 products with it.

1) To support email authentication, enable the following property in user-mgt.xml (CARBON_HOME/repository/conf):

2) Change the following two properties in the primary user store manager:

3) Remove the following property:

After this configuration, tenants will be able to log in with the email attribute (email@tenantDomain).

e.g :

References :

Kavith Thiranga LokuhewageHow to use DTO Factory in Eclipse Che


Data transfer objects (DTOs) are used in Che for communication between client and server. At the code level, a DTO is just an interface annotated with @DTO (com.codenvy.dto.shared.DTO). This interface should contain getters and setters (following bean naming conventions) for each field we need in the object.

For example, following is a DTO with a single String field.

@DTO
public interface HelloUser {

    String getHelloMessage();

    void setHelloMessage(String message);
}

By convention, we need to put these DTOs to shared package as it will be used by both client and server side.

DTO Factory

DTO Factory is a factory, available on both client and server sides, which can be used to serialize/deserialize DTOs. It internally uses generated DTO implementations (described in the next section) to get the job done. Yet it has a properly encapsulated API, and developers can simply use a DtoFactory instance directly.

For client side   : com.codenvy.ide.dto.DtoFactory  
For server side : com.codenvy.dto.server.DtoFactory

HelloUser helloUser = DtoFactory.getInstance().createDto(HelloUser.class);
Initializing a DTO

Above code snippet shows how to initialize a DTO using DTOFactory. As mentioned above, proper DtoFactory classes should be used by client or server sides.

Deserializing in client side

Unmarshallable<HelloUser> unmarshaller = ...
helloService.sayHello(sayHello, new AsyncRequestCallback<HelloUser>(unmarshaller) {
    @Override
    protected void onSuccess(HelloUser result) {
        // ...
    }

    @Override
    protected void onFailure(Throwable exception) {
        // ...
    }
});
Deserializing in client side

When invoking a service that returns a DTO, the client side should register a callback created using the relevant unmarshaller factory. Then the onSuccess method will be called with a deserialized DTO.

Deserializing in server side

    public ... sayHello(SayHello sayHello) {
        ... sayHello.getHelloMessage() ...
    }
Deserializing in server side
Everest (the JAX-RS implementation of Che) automatically deserializes DTOs when they are used as parameters in REST services. It identifies a serialized DTO by the marked type - @Consumes(MediaType.APPLICATION_JSON) - and uses the generated DTO implementations to deserialize it.

DTO maven plugin

As mentioned earlier, for DtoFactory to function properly, it needs generated code that contains the concrete logic to serialize/deserialize DTOs. The GWT compiler should be able to access the generated code for the client side, and the generated code for the server side should go into the jar file.

Che uses a special Maven plugin called “codenvy-dto-maven-plugin” to generate this code. The plugin configuration contains separate executions for the client and server sides; we have to input the correct package structures and the file paths to which the generated files should be copied.

    Other dependencies should be added if the DTOs in the current project need them.

package - the package in which the DTO interfaces reside
outputDirectory - the directory to which the generated files should be copied
genClassName - the class name for the generated class

You should also configure your Maven build to use these generated classes as a resource when compiling and packaging. Just add the following line to the resources in the build section.


Kavith Thiranga LokuhewageGWT MVP Implementation in Eclipse Che

MVP Pattern

Model View Presenter (aka MVP) is a design pattern that attempts to decouple the logic of a component from its presentation. This is similar to the popular MVC (model view controller) design pattern, but has some fundamentally different goals. The benefits of MVP include more testable code, more reusable code, and a decoupled development environment.

MVP Implementation in Che

Note : Code example used in this document are from a sample project wizard page for WSO2 DSS .

There are four main java components used to implement a Che component that follows MVP.

  1. Interface for View functionality
  2. Interface for Event delegation
  3. Implementation of View
  4. Presenter


To reduce the number of files created for each MVP component, No. 1 and No. 2 are created within a single Java file. To be more precise, the event delegation interface is defined as a sub-interface within the view interface.

View interface should define methods that will be used by presenter to communicate with view implementation. Event delegation interface should define methods that will be implemented by presenter so that view can delegate events to presenter using these methods.

Following code snippet demonstrates these two interfaces that we created for DSS project wizard page.

public interface DSSConfigurationView extends View<DSSConfigurationView.ActionDelegate> {

    String getGroupId();
    void setGroupId(String groupId);

    String getArtifactId();
    void setArtifactId(String artifactId);

    String getVersion();
    void setVersion(String version);

    interface ActionDelegate {
        void onGroupIdChanged();
        void onArtifactIdChanged();
        void onVersionChanged();
    }
}
View and Event Handler interfaces

The interface for the view should extend the com.codenvy.ide.api.mvp.View interface, which defines only a single method - void setDelegate(T var1).

…interface DSSConfigurationView extends View<DSSConfigurationView.ActionDelegate>..

Using generics, we need to inform this super interface about our event handling delegation interface.

View Implementation

The view implementation can often extend any abstract widget such as Composite. It may also use UiBinder to implement the UI if necessary. Any approach and any GWT widget can be used to implement the view. The only must is that it implements the view interface (created in the previous step) and the IsWidget interface (or extends any subclass of IsWidget).

public class DSSConfigurationViewImpl extends ... implements DSSConfigurationView {

    // Maintain a reference to the presenter
    private ActionDelegate delegate;

    // Provide a setter for the presenter
    @Override
    public void setDelegate(ActionDelegate delegate) {
        this.delegate = delegate;
    }

    // Implement methods defined in the view interface
    @Override
    public String getGroupId() {
        return groupId.getText();
    }

    @Override
    public void setGroupId(String groupId) {
        // ...
    }

    // Notify the presenter of UI events using the delegation methods
    public void onGroupIdChanged(KeyUpEvent event) {
        // ...
    }

    public void onArtifactIdChanged(KeyUpEvent event) {
        // ...
    }
}
View implementation

As shown in the above code snippet (see full code), the main things to do in the view implementation can be summarised as below.
  1. Extend any widget from GWT and implement user interface by following any approach
  2. Implement view interface (created in previous step)
  3. Manage a reference to action delegate (presenter - see next section for more info)
  4. Upon any UI event, inform the presenter using the delegation methods so that the presenter can execute business logic accordingly   


The presenter can extend any of the many available abstract presenters, such as AbstractWizardPage, AbstractEditorPresenter and BasePresenter - anything that implements com.codenvy.ide.api.mvp.Presenter. It should also implement the action delegation interface so that the delegation methods are called upon UI events.

public class DSSConfigurationPresenter extends ... implements DSSConfigurationView.ActionDelegate {

    // Maintain a reference to the view
    private final DSSConfigurationView view;

    public DSSConfigurationPresenter(DSSConfigurationView view, ...) {
        this.view = view;
        // Set this as the action delegate for the view
        view.setDelegate(this);
    }

    // Init the view and set the view in the container
    @Override
    public void go(AcceptsOneWidget container) {
        // ...
    }

    // Execute necessary logic upon UI events
    @Override
    public void onGroupIdChanged() {
        // ...
    }

    @Override
    public void onArtifactIdChanged() {
        // ...
    }
}

Depending on the presenter you extend, there may be various abstract methods that need to be implemented. For example, if you extend AbstractEditorPresenter, you need to implement initializeEditor(), isDirty(), doSave(), etc. If it is AbstractWizardPage, you need to implement isCompleted(), storeOptions(), removeOptions(), etc.

Yet, as shown in the above code snippet (see full code), the following are the main things you need to do in a presenter.
  1. Extend an abstract presenter as needed and implement its abstract methods / override behaviour as needed
  2. Implement the action delegation interface
  3. Maintain a reference to the view
  4. Set the presenter as the action delegate of the view using the setDelegate method
  5. Init the view and set the view in the parent container (go method)
  6. Use the methods defined in the view interface to communicate with the view

The go method is the one that will be called by the Che UI framework when this particular component needs to be shown in the IDE. This method is called with a reference to the parent container.

Dedunu DhananjayaOpenJDK is not bundled with Ubuntu by default

This is not a technical blog post; it was about a bet. One of my ex-colleagues claimed that OpenJDK is installed on Ubuntu by default. I installed a fresh virtual machine and showed him that it isn't. Then I earned pancakes. We went to The Mel's Tea Cafe.

Dedunu DhananjayaThat Cafe on That Day

This was a treat from Jessi (President of BVT). Actually we earned it by helping with her course work. According to her this was the best place, and we were excited. We planned to go there at 5 pm. I was the first to arrive, at around 4 pm. Then I waited till someone came. Aliza came next, and we waited for our honorable president. 

I liked the atmosphere, though it was a little bit hard to find That Cafe. You can see Jessi's favorite drink, Ocean Sea Fossil. BVT didn't want to leave the place, and we also decided on the next BVT tour. Wait for the next BVT tour. ;)

Dedunu DhananjayaSimply Strawberries on 14th Jan

We went to have Strawberry waffles. And all of us wanted it with chocolate sauce. And my friend Jessi always want to take photographs of food. Jessi, Aliza and myself went there. So I got this photo because of her. Waffle was awesome. Also I love the setting there. 

This is the beginning of the "Bon Viveur" team, and we decided to go out and try different foods and places more often. Oh my god, I forgot to mention the shop. It is Simply Strawberries. We had a walk to the place and it was fun!!!

Dedunu DhananjayaSunday or Someday on 27th Dec

Three of us wanted to go somewhere, and then we tried to pick a date, but we couldn't. Finally we just agreed to go out on Sunday. Then we went to Lavinia Breeze and had fun. We were acting like kids, screaming, laughing. We don't mind what others think. That's us!!!

Then we went to Majestic City Cinema to watch Hobbit. And we laughed like idiots when we were supposed to be serious. ;) Then finally we went to Elite Indian Restaurant.

Good Best friends!!! :D

Dedunu DhananjayaThe Sizzle on 17th Dec

Recently I started visiting places with my friends to enjoy. So last month, I went to The Sizzle with one of my best friends. The receptionist asked "table for two?" and I nodded. He brought the two of us to a table for two, which looked a little bit embarrassing. But the food was good. And this was the second time I visited "The Sizzle".

And this Sizzle visit will be remarkable. ;)

Ajith VitharanaRead the content stored in registry- WSO2 ESB

1. Let's say we have stored an XML file (order-id.xml) in the registry.

2. I'm going to use a mock service (Mockproxy) to read the content and send it back as the response (using the respond mediator, ESB 4.8.x).

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="Mockproxy" transports="http,https" startOnLoad="true">
   <target>
      <inSequence>
         <property name="order" expression="get-property('registry', 'conf:/order-id.xml')" type="OM"/>
         <payloadFactory media-type="xml">
            <format>
               <response xmlns="">
                  <id>$1</id>
                  <symbol>$2</symbol>
               </response>
            </format>
            <args>
               <arg evaluator="xml" expression="$ctx:order//id"/>
               <arg evaluator="xml" expression="$ctx:order//symbol"/>
            </args>
         </payloadFactory>
         <respond/>
      </inSequence>
   </target>
</proxy>
3. Mockproxy test.

Request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body/>
</soapenv:Envelope>

Response :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <response xmlns="">
         ...
      </response>
   </soapenv:Body>
</soapenv:Envelope>

sanjeewa malalgodaWSO2 API Manager - Troubleshoot common deployment issues

Here are some useful tips for working on deployment-related issues.

-Dsetup doesn't work for usage tables, because the usage table definitions are not in the setup scripts.
In a WSO2 API Manager and BAM deployment we need the user manager, registry and API manager databases. We have setup scripts for those databases under the dbscripts folder of the product distribution. There is no need to create any tables in the stats database (manually or with a setup script), as the API Manager toolbox (deployed in BAM) will create them when the hive queries are executed. The -Dsetup option does not apply to the hive scripts inside the toolbox deployed in BAM.

Understand connectivity between components in distributed API manager deployment.
This is important when you work on issues related to a distributed API Manager deployment. The following steps illustrate the connectivity between components; it is useful to list them here.

1. Changed the admin password
2. Tried to log in to publisher and got the insufficient privilege error
3. Then changed the admin password in authManager element in api-manager.xml
4. Restarted and I was able to login to API publisher. Then I created an API and tried to publish. Got a permission error again.
5. Then, I changed password under API Gateway element in api-manager.xml
6. Restarted and published the API. Then, tried to invoke an API using an existing key. Got the key validation error.
7. Then, I changed the admin password in KeyManager element in api-manager.xml and all issues got resolved.

Thrift key validation does not work when a load balancer fronts the key manager.
The reason is that most load balancers are not capable of routing traffic in a session-aware manner. In such cases it is always recommended to use the WS key validation client.

Usage data related issues.
When you work on usage-data-related issues, first check the data source configurations in BAM and API Manager. Then check the created tables in the usage database. Most reported issues turn out to be configuration problems. The same applies to the billing sample as well.

sanjeewa malalgodaWSO2 API Manager - How to customize API Manager using extension points

In this article I will discuss the common extension points available in API Manager and how we can use them.

Dedunu DhananjayaHow to run Alfresco Share or Repository AMP projects

In the previous post, I explained how to generate an Alfresco AMP project using Maven. When you have an AMP project you can run it by deploying it to an existing Alfresco Repository or Share. But if you are a developer you will not find that an effective way to run Alfresco modules. The other way is to run the AMP project using the Maven plug-in.

In this post, I'm not going to talk about the first method. As I said earlier, we can run an Alfresco instance using Maven. To do that, from your terminal move to the Alfresco AMP project folder and run the command below.

mvn clean package -Pamp-to-war

It may take a while: if you are running this command for the first time, it will download the Alfresco binary to your local Maven repository. If you run an instance again, your changes will still be available on that Alfresco instance.

If you want to discard all the previous data, use the command below.

mvn clean package -Ppurge -Pamp-to-war

The above command will discard all the changes and data, and start a fresh instance.

Enjoy Alfresco Development!!!

Pavithra MadurangiConfiguring Active Directory (Windows 2012 R2) to be used as a user store of WSO2 Carbon based products

The purpose of this blog post is not to explain the steps to configure AD as the primary user store; that information is covered in the WSO2 Documentation. My intention is to give some guidance on configuring an AD LDS instance to work over SSL and on exporting/importing certificates to the trust store of WSO2 servers.

To achieve this, we need to

  1. Install AD on Windows 2012 R2
  2. Install AD LDS role in Server 2012 R2
  3. Create an AD LDS instance
  4. Install Active Directory Certificate Service in the Domain Controller (Since we need to get AD LDS instance work over SSL)
  5. Export certificate used by Domain Controller.
  6. Import the certificate to client-truststore.jks in WSO2 servers.

This information is already covered in the following two great blog posts by Suresh. My post is an updated version of them, filling some gaps and linking some missing bits and pieces.

1. Assume you have only installed Windows 2012 R2 and now you need to install AD too. The following article clearly explains all the steps required.

Note : As mentioned in the article itself, it is written assuming that there is no existing Active Directory Forest. If you need to configure the server to act as the Domain Controller for an existing Forest, then the following article will be useful.

2. Now you've installed Active Directory Domain Services, and the next step is to install the AD LDS role.

- Start -> Open Server Manager -> Dashboard, and Add roles and features.

- In the popup wizard, under Installation Type, select the Role-based or feature-based installation option and click Next.

- In the Server Selection, select current server which is selected by default. Then click Next.

- Select the AD LDS (Active Directory Lightweight Directory Services) check box in Server Roles and click Next.

- Next you'll be taken through the wizard, which includes AD LDS related information. Review that information and click Next.

- Now you'll be prompted to select optional features. Review them, select the optional features you need (if any), and click Next.

- Review installation details and click Install.

- After successful AD LDS installation you'll get a confirmation message.

3. Now let's create an AD LDS instance. 

- Start -> Open Administrative Tools.  Click Active Directory Lightweight Directory Service Setup Wizard.

- You'll be directed to the welcome page of the Active Directory Lightweight Directory Services Setup Wizard. Click Next.

- Then you'll be taken to Setup Options page. From this step onwards, configuration is same as mentioned in 

4. As explained in the above blog, if you pick the Administrative account for the service account selection, you won't have to specifically create certificates and assign them to the AD LDS instance. Instead, the default certificates used by the Domain Controller can be accessed by the AD LDS instance.

To achieve this, let's install a certificate authority on the Windows 2012 server (if it's not already installed). Again, I'm not going to explain it in detail because the following article covers all the required information.

5. Now let's export the certificate used by the Domain Controller.

- Go to MMC (Start -> Administrative tools -> run -> MMC)
- File -> Add or Remove Snap-ins
- Select certificates snap-in and click add.

- Select the Computer account radio button and click Next.
- Select Local computer and click Finish.
- In MMC, click on Certificates (Local Computer) -> Personal -> Certificates.
- There you'll find a bunch of certificates.
- Locate the root CA certificate, right-click on it -> All Tasks and select Export.

Note : The intended purpose of this certificate is <All> (not purely server authentication). It's possible to create a certificate just for server authentication and use it for LDAPS authentication; [1] and [2] explain how that can be achieved.

For the moment I'm using the default certificate for LDAPS authentication.

- In the Export wizard, select Do not export private key option and click Next.
- Select DER encoded binary X.509 (.cer) format and provide a location to store the certificate.

6. Import the certificate to trust store in WSO2 Server.

Use the following command to import the certificate into client-truststore.jks, found inside CARBON_HOME/repository/resources/security.

keytool -import -alias adcacert -file /cert_home/cert_name.cer -keystore CARBON_HOME/repository/resources/security/client-truststore.jks -storepass wso2carbon

After this, configuring user-mgt.xml and tenant-mgt.xml is same as explained in WSO2 Documentation.

Madhuka UdanthaPython with CSV

CSV (Comma Separated Values) is one of the most common formats for exporting and importing data. In Python, the csv module implements classes to read and write tabular data in CSV format without needing to know the precise details of the CSV dialect used by Excel.

The example below reads a CSV file, filters the data in it, creates a new CSV file, and writes the filtered data to it.


import csv

data = []

# reading csv file
with open('D:/Research/python/data/class.csv', 'rb') as f:
    reader = csv.reader(f)
    # checking file is open fine
    print f.closed
    count = 0
    for row in reader:
        print row
        # catching the header row
        if count == 0:
            data += [row]
        # collecting over 99 marks only
        else:
            if int(row[1]) > 99:
                data += [row]
        count += 1
# f.close() is not needed; the with statement closes the file

# writing to csv file
with open('D:/Research/python/data/some.csv', 'wb') as f1:
    writer = csv.writer(f1)
    for row in data:
        print row
    writer.writerows(data)


Output:

%run D:/Research/python/
['name', 'marks']
['dilan', '100']
['jone', '98']
['james', '100']
['jack', '92']
['dilan', '100']
['james', '100']



Malintha AdikariHow to send string content as the response from WSO2 ESB Proxy/API

We can send XML content or JSON content as the response from our WSO2 ESB proxy/REST API. But there may be a scenario where we want to send string content (which is not in XML format) as the response of our service. The following synapse code snippet shows you how to do it.

As an example, suppose you have to send the following string content to your client service:


Note :
  • The above is not in XML format, so we cannot generate it directly through the payload factory mediator.
  • We have to send <, > symbols inside the response, but WSO2 ESB doesn't allow you to keep those characters as-is inside your synapse configuration.
1. First you have to encode the above expected response. You can use this tool to encode your XML. We get the following after encoding in our example:


Note : If you want to encode dynamic payload content, you can use a script mediator or a class mediator for that task.
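Whether you do the encoding by hand or in a script/class mediator, the transformation itself is a plain XML escape of the reserved characters. As a small sketch (standard-library Python, just to show what the encoding step produces):

```python
from xml.sax.saxutils import escape

# The string payload we want to return, containing raw < and > characters.
payload = "<result1>malintha</result1>+<result2>adikari</result2>"

# escape() replaces &, < and > with their XML entities, making the string
# safe to embed inside the synapse configuration.
encoded = escape(payload)
print(encoded)
# &lt;result1&gt;malintha&lt;/result1&gt;+&lt;result2&gt;adikari&lt;/result2&gt;
```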

2. Now we can attach the required string content to our payload as follows:

            <payloadFactory media-type="xml">
               <format>
                  <ms11:text xmlns:ms11="http://ws.apache.org/commons/ns/payload">$1</ms11:text>
               </format>
               <args>
                  <arg value="&lt;result1&gt;malintha&lt;/result1&gt;+&lt;result2&gt;adikari&lt;/result2&gt;"/>
               </args>
            </payloadFactory>
            <property name="messageType" value="text/plain" scope="axis2"/>

Here we are using the payload factory mediator to create our payload. You can see that our media type is still XML. Then we load our string content as the argument value, and finally we change the message type to "text/plain", so this returns string content as its response.

sanjeewa malalgodaWSO2 API Manager visibility, subscription availability and relation between them

When we create APIs we need to be aware of API visibility and subscriptions. Normally API visibility is directly coupled with subscription availability (simply because you cannot subscribe to something you don't see in the store). See the following diagram for more information about the relationship between them.

Visibility - controls how other users can see our APIs

Subscription availability - controls how other users can subscribe to APIs created by us

Chintana WilamunaReliable messaging pattern

Reliable messaging involves sending a message successfully from one system to another over unreliable protocols. Although TCP/IP gives you reliability at a lower level, reliable messaging provides delivery guarantees at a higher level. If the recipient is unavailable, messages will be retransmitted over a defined period until they are successfully delivered.


The traditional SOA method of handling reliable messaging is through a framework/library that implements the WS-ReliableMessaging specification. The pattern is illustrated here. A framework like Apache Sandesha provides reliable delivery guarantees according to the specification. From the reliable messaging specification (PDF), the message exchange sequence is as follows:

At each step there will be an XML message going back and forth on the wire. This creates a lot of additional overhead, and as a result performance suffers.

Alternative approach using JMS

Looking at the communication overhead and complexity involved in creating/maintaining WS-ReliableMessaging capable clients, an alternative approach using JMS is very popular. The simplicity of JMS and easy maintainability are key factors in JMS's success as a de facto solution for reliable message delivery. You put messages to a queue and process messages from a queue.

Messaging with WSO2 platform

WSO2 ESB has a concept of message stores and message processors. Message stores do what you expect: they store messages. The default message store implementation is in-memory, but you also have the option of pointing the message store to a queue in an external broker. There are also standard extension points for writing a custom message store implementation.

Message processors are responsible for reading messages from a message store and sending them to another system. You can configure parameters on the message processor such as the interval to poll for new messages, the retry interval, the number of delivery attempts, and so on.
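The store/processor division of labour can be sketched in a few lines of plain Python. This is an illustration of the concept only, not the WSO2 API; all names are invented:

```python
from collections import deque


class MessageStore:
    """Stores messages; nothing more, mirroring the ESB message store role."""
    def __init__(self):
        self._queue = deque()

    def put(self, message):
        self._queue.append(message)

    def poll(self):
        return self._queue.popleft() if self._queue else None


class MessageProcessor:
    """Polls the store and retries delivery, like an ESB message processor."""
    def __init__(self, store, send, max_attempts=3):
        self.store = store
        self.send = send              # callable that delivers to the backend
        self.max_attempts = max_attempts

    def process_one(self):
        message = self.store.poll()
        if message is None:
            return False              # nothing to deliver
        for _ in range(self.max_attempts):
            if self.send(message):    # delivered successfully
                return True
        self.store.put(message)       # give up for now; keep the message safe
        return False


# Usage: a backend that fails on the first attempt, then succeeds.
attempts = []

def flaky_send(message):
    attempts.append(message)
    return len(attempts) > 1

store = MessageStore()
store.put("order-42")
processor = MessageProcessor(store, flaky_send)
print(processor.process_one())  # True: delivered on the second attempt
```

The important property is that an undeliverable message goes back into the store instead of being lost, which is what gives the pattern its delivery guarantee.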

Advantages of using WSO2 for reliable messaging

  1. ESB follows a configuration-driven approach. It is easy to configure, and you don't need to write code for a large set of integration patterns
  2. Protocol conversion comes naturally without extra work - accepting an HTTP request and sending it to a JMS queue requires you to specify only the JMS endpoint
  3. Can take advantage of a large set of enterprise integration patterns
  4. Simple configuration and deployment for simple scenarios. Complex scenarios are possible just by extending/integrating external brokers
  5. Production deployment flexibility (single node, multiple nodes, on-prem, cloud, hybrid deployments)

Hasitha Aravinda[Oracle] How to get row counts of all tables at once

SQL Query : 

    select table_name,
           to_number(extractvalue(
               xmltype(dbms_xmlgen.getxml('select count(*) c from '||table_name)),
               '/ROWSET/ROW/C')) as row_count
      from user_tables;

Lahiru SandaruwanRun WSO2 products with high available Master-Master Mysql cluster

This blog post explains a simple setup of a WSO2 product with a Master-Master MySQL cluster. Normal database configurations can be found here. In this small note, we configure the Carbon product to use two MySQL master nodes, which ensures high availability of the setup.

This is an extremely easy guide to set up the MySQL cluster. Assume the hostnames of the MySQL master nodes are "mysql-master-1" and "mysql-master-2". Use the following sample datasource in master-datasources.xml of the Carbon product; the JDBC URL lists both master hosts so the driver can fail over between them. You can use the same format for any of the databases used by WSO2 Carbon products, as explained in the above-mentioned guide.

<?xml version="1.0" encoding="UTF-8"?>
<datasource>
   <name>WSO2_UM_DB</name>
   <description>The datasource used by user manager</description>
   <definition type="RDBMS">
      <configuration>
         <url>jdbc:mysql://mysql-master-1:3306,mysql-master-2:3306/WSO2_UM_DB?autoReconnect=true</url>
         <username>db_user</username>
         <password>db_password</password>
         <driverClassName>com.mysql.jdbc.Driver</driverClassName>
         <validationQuery>SELECT 1</validationQuery>
      </configuration>
   </definition>
</datasource>


Ajith VitharanaHow WS-Trust STS works in WSO2 Identity Server.

WS-Trust STS (Security Token Service) provides the facility for secure communication between a web service client and server.

Benefits of WS-Trust STS

1. Identity delegation.
2. Service consumers do not need to worry about token-specific implementation details.
3. Secure communication across  the web services.

Work flow.

1. The service client provides credentials to the STS and requests a security token (RST - Request Security Token).

2. The STS validates the client credentials and replies with a security token (SAML) to the client (RSTR - Request Security Token Response).

3. The client invokes the web service along with the token.

4. The web service validates the token against the STS.

5. The STS sends the validation decision to the web service.

6. If the token is valid, the web service allows access to the protected resource(s).

Use Case

Invoke a secured web service (hosted in WSO2 Application Server) using a security token issued by WSO2 Identity Server.

1. Download the latest version of WSO2 AS (5.2.1) and WSO2 Identity Server(5.0.0).
2. In AS, change the port offset value in carbon.xml to 1 (default is 0).
3. Start both servers.
4. The "HelloService" sample web service which is already deployed in AS.

5. Once you click on the "HelloService" name, you should see the service endpoints.

6. In this use case we are going to use the "wso2carbon-sts" service of the Identity Server for issuing and validating tokens. Therefore the Identity Server acts as the identity provider, so we need to configure the "Resident Identity Provider" first.

7. Go to Home ---> Identity -----> Identity Provider -----> List, then click on the "Resident Identity Provider" link.

8. Add a name for the resident Identity provider. (Eg: "WSO2IdentityProvider")

9. Expand the "WS-Trust / WS-Federation (Passive) Configuration". Now you should see the "wso2carbon-sts" endpoint.

10. Click on the "Apply Security Policy" link and enable the security. Then select the security scenario which is need to be applied for the wso2carbon-sts service. (Eg: select UsernameToken). Once you select the security scenario, the relevant policy will be applied automatically to the "wso2carbon-sts" service.

- Select the user group(s) that are allowed to access the "wso2carbon-sts" service for requesting tokens.

11. Click on the "wso2carbon-sts" service link, now you should  see the wsdl including the applied policy.


12. To add a service provider for the web service client, enter a name (e.g. HelloServiceProvider) for the new service provider and update.

13. Edit the "HelloServiceProvider" and configure the web service.

14. Apply the security for the "HelloService" deployed in AS.

15. Select the  "Non-Repudiation" as the security scenario.

   The image below is captured from the Identity Server product.

16. Now  "HelloService" WSDL should have the applied policy.

17. Download the sts-client project from the following git repository location.
(This is the same sample included in the WSO2 Identity Server project, with a few changes for this use case.)

git :

18. The README of the sts-client project describes how to execute the client.

(The underlined values should be changed according to your environment.)

19. The key store of the web service client should contain the public certificates of the STS and AS. Therefore it uses the wso2carbon.jks which these servers already use.

20. You can enable the SOAP tracer to capture the request and response on each server.

Dedunu DhananjayaHow to generate Alfresco 5 AMP project

Recently I have been working as an Alfresco developer. When you are developing Alfresco modules, you need a proper project with the correct directory structure. Since Alfresco uses Maven, you can generate an Alfresco 5 AMP project using an archetype.

First you need Java and Maven installed on your Linux/Mac/Windows computer. Then run the command below to start the project.

mvn archetype:generate -DarchetypeCatalog= -Dfilter=org.alfresco:

Then you will see the text below.

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO] >>> maven-archetype-plugin:2.2:generate (default-cli) > generate-sources @ standalone-pom >>>
[INFO] <<< maven-archetype-plugin:2.2:generate (default-cli) < generate-sources @ standalone-pom <<<
[INFO] --- maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
[INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven.archetypes:maven-archetype-quickstart:1.0)
Choose archetype:
1: -> org.alfresco.maven.archetype:alfresco-allinone-archetype (Sample multi-module project for All-in-One development on the Alfresco plaftorm. Includes modules for: Repository WAR overlay, Repository AMP, Share WAR overlay, Solr configuration, and embedded Tomcat runner)
2: -> org.alfresco.maven.archetype:alfresco-amp-archetype (Sample project with full support for lifecycle and rapid development of Repository AMPs (Alfresco Module Packages))
3: -> org.alfresco.maven.archetype:share-amp-archetype (Share project with full support for lifecycle and rapid development of AMPs (Alfresco Module Packages))

Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): :

Now you have 3 options to select.
  1. All-in-One (This includes the Repository Module, Share Module, Solr configuration and Tomcat runner: a one-stop solution for Alfresco development. I don't recommend that beginners start with it.)
  2. Alfresco Repository Module (This will generate AMP for Alfresco Repository.)
  3. Alfresco Share Module (This will generate AMP for Alfresco Share.)
Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 2
Choose org.alfresco.maven.archetype:alfresco-amp-archetype version: 
1: 2.0.0-beta-1
2: 2.0.0-beta-2
3: 2.0.0-beta-3
4: 2.0.0-beta-4
5: 2.0.0
Choose a number: 5: 

In this example I used the Alfresco Repository Module. Then it prompts for the SDK version; by pressing Enter you get the latest (default) SDK version. Then Maven prompts for the groupId and artifactId. Please provide suitable IDs for them.

Define value for property 'groupId': : org.dedunu
Define value for property 'artifactId': : training
[INFO] Using property: version = 1.0-SNAPSHOT
[INFO] Using property: package = (not used)
[INFO] Using property: alfresco_target_groupId = org.alfresco
[INFO] Using property: alfresco_target_version = 5.0.c
Confirm properties configuration:
groupId: org.dedunu
artifactId: training
version: 1.0-SNAPSHOT
package: (not used)
alfresco_target_groupId: org.alfresco

alfresco_target_version: 5.0.c
 Y: : 

Then Maven prompts for your target Alfresco version. At the moment the latest Alfresco version is 5.0.c; if you hit Enter it will continue with the latest version, otherwise you can customize the target Alfresco version. Then it will generate a Maven project for Alfresco.

[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: alfresco-amp-archetype:2.0.0
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: org.dedunu
[INFO] Parameter: artifactId, Value: training
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: (not used)
[INFO] Parameter: packageInPathFormat, Value: (not used)
[INFO] Parameter: package, Value: (not used)
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: groupId, Value: org.dedunu
[INFO] Parameter: alfresco_target_version, Value: 5.0.c
[INFO] Parameter: artifactId, Value: training
[INFO] Parameter: alfresco_target_groupId, Value: org.alfresco
[INFO] project created from Archetype in dir: /Users/dedunu/Documents/workspace/training
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:33 min
[INFO] Finished at: 2015-01-19T23:58:38+05:30
[INFO] Final Memory: 14M/155M

[INFO] ------------------------------------------------------------------------

sanjeewa malalgodaHow to run WSO2 API Manager 1.8.0 with Java Security Manager enabled

In Java, the Security Manager is available for applications to enforce various security policies. The Security Manager helps prevent untrusted code from performing malicious actions on the system.

In this post we will see how to run WSO2 API Manager 1.8.0 with the Security Manager enabled.

To sign the jars, we need a key. We can use the keytool command to generate one.

sanjeewa@sanjeewa-ThinkPad-T530:~/work/wso2am-1.8.0-1$ keytool -genkey -alias signFiles -keyalg RSA -keystore signkeystore.jks -validity 3650 -dname "CN=Sanjeewa,OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK"
Enter keystore password:
Re-enter new password:
Enter key password for <signFiles>
(RETURN if same as keystore password):
Scripts to sign the jars are given below. Create the following two scripts and grant them execute permissions (the names signJars.sh and signJar.sh are assumed here, matching how they are invoked later).

signJars.sh script:

    #!/bin/bash
    if [[ ! -d $1 ]]; then
       echo "Please specify a target directory"
       exit 1
    fi
    for jarfile in `find . -type f -iname \*.jar`
    do
      ./signJar.sh $jarfile
    done

signJar.sh script:

    #!/bin/bash
    set -e
    jarfile=$1
    # Keystore details; replace the passwords with the values used when generating the key.
    keystore_file="signkeystore.jks"
    keystore_keyalias="signFiles"
    keystore_storepass="<keystore-password>"
    keystore_keypass="<key-password>"
    signjar="$JAVA_HOME/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -keystore $keystore_file -storepass $keystore_storepass -keypass $keystore_keypass"
    verifyjar="$JAVA_HOME/bin/jarsigner -keystore $keystore_file -verify"
    echo "Signing $jarfile"
    $signjar $jarfile $keystore_keyalias
    echo "Verifying $jarfile"
    $verifyjar $jarfile
    # Check whether the verification is successful.
    if [ $? -eq 1 ]
    then
       echo "Verification failed for $jarfile"
    fi
Then sign all the jars using the scripts created above:
    ./signJars.sh ./repository/ > log

Add the following system properties to the server startup script (wso2server.sh), pointing the policy at \$CARBON_HOME/repository/conf/sec.policy:

 -Djava.security.manager=org.wso2.carbon.bootstrap.CarbonSecurityManager \
 -Djava.security.policy=$CARBON_HOME/repository/conf/sec.policy \
 -Drestricted.packages=sun.,com.sun.xml.internal.ws.,com.sun.xml.internal.bind.,com.sun.imageio.,org.wso2.carbon. \
 -Ddenied.system.properties=javax.net.ssl.trustStore,javax.net.ssl.trustStorePassword \

Exporting the signFiles public key certificate and importing it to wso2carbon.jks

We need to import the signFiles public key certificate into wso2carbon.jks, as the security policy file refers to the signFiles signer certificate in wso2carbon.jks (as specified by the first line of the policy file).

    $ keytool -export -keystore signkeystore.jks -alias signFiles -file sign-cert.cer
    sanjeewa@sanjeewa-ThinkPad-T530:~/work/wso2am-1.8.0-1$ keytool -import -alias signFiles -file sign-cert.cer -keystore repository/resources/security/wso2carbon.jks
    Enter keystore password: 
    Owner: CN=Sanjeewa, OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK
    Issuer: CN=Sanjeewa, OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK
    Serial number: 5486f3b0
    Valid from: Tue Dec 09 18:35:52 IST 2014 until: Fri Dec 06 18:35:52 IST 2024
    Certificate fingerprints:
    MD5:  54:13:FD:06:6F:C9:A6:BC:EE:DF:73:A9:88:CC:02:EC
    SHA1: AE:37:2A:9E:66:86:12:68:28:88:12:A0:85:50:B1:D1:21:BD:49:52
    Signature algorithm name: SHA1withRSA
    Version: 3
    Trust this certificate? [no]:  yes
    Certificate was added to keystore

Then add the following sec.policy file:
    keystore "file:${user.dir}/repository/resources/security/wso2carbon.jks", "JKS";

    // ========= Carbon Server Permissions ===================================
    grant {
       // Allow socket connections for any host
       permission java.net.SocketPermission "*:1-65535", "connect,resolve";
       // Allow to read all properties. Use the denied system properties setting in the startup script to restrict properties
       permission java.util.PropertyPermission "*", "read";
       permission java.lang.RuntimePermission "getClassLoader";
       // CarbonContext APIs require this permission
       permission java.lang.management.ManagementPermission "control";
       // Required by any component reading XMLs. For example: org.wso2.carbon.databridge.agent.thrift:4.2.1.
       permission java.lang.RuntimePermission "accessClassInPackage.com.sun.xml.internal.bind.v2.runtime.reflect";
       // Required by org.wso2.carbon.ndatasource.core:4.2.0. This is only necessary after adding above permission.
       permission java.lang.RuntimePermission "accessClassInPackage.com.sun.xml.internal.bind";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/localhost/publisher/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/localhost/publisher/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/store/localhost/store/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/store/localhost/store/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/store/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/store/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/store/site/conf/site.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission java.io.FilePermission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission javax.management.MBeanServerPermission "findMBeanServer,createMBeanServer";
      permission "-#-[-]", "queryNames";
      permission "*[java.lang:type=Memory]", "queryNames";
      permission "*[java.lang:type=Memory]", "getMBeanInfo";
      permission "*[java.lang:type=Memory]", "getAttribute";
      permission "*[java.lang:type=MemoryPool,name=*]", "queryNames";
      permission "*[java.lang:type=MemoryPool,name=*]", "getMBeanInfo";
      permission "*[java.lang:type=MemoryPool,name=*]", "getAttribute";
      permission "*[java.lang:type=GarbageCollector,name=*]", "queryNames";
      permission "*[java.lang:type=GarbageCollector,name=*]", "getMBeanInfo";
      permission "*[java.lang:type=GarbageCollector,name=*]", "getAttribute";
      permission "*[java.lang:type=ClassLoading]", "queryNames";
      permission "*[java.lang:type=ClassLoading]", "getMBeanInfo";
      permission "*[java.lang:type=ClassLoading]", "getAttribute";
      permission "*[java.lang:type=Runtime]", "queryNames";
      permission "*[java.lang:type=Runtime]", "getMBeanInfo";
      permission "*[java.lang:type=Runtime]", "getAttribute";
      permission "*[java.lang:type=Threading]", "queryNames";
      permission "*[java.lang:type=Threading]", "getMBeanInfo";
      permission "*[java.lang:type=Threading]", "getAttribute";
      permission "*[java.lang:type=OperatingSystem]", "queryNames";
      permission "*[java.lang:type=OperatingSystem]", "getMBeanInfo";
      permission "*[java.lang:type=OperatingSystem]", "getAttribute";
      permission "org.wso2.carbon.caching.impl.CacheMXBeanImpl#-[org.wso2.carbon:type=Cache,*]", "registerMBean";
      permission "org.apache.axis2.transport.base.TransportView#-[org.apache.synapse:Type=Transport,*]", "registerMBean";
      permission "org.apache.axis2.transport.base.TransportView#-[org.apache.axis2:Type=Transport,*]", "registerMBean";
      permission "org.apache.axis2.transport.base.TransportView#-[org.apache.synapse:Type=Transport,*]", "registerMBean";
      permission java.lang.RuntimePermission "modifyThreadGroup";
      permission java.io.FilePermission "${carbon.home}/repository/database", "read";
      permission java.io.FilePermission "${carbon.home}/repository/database/-", "read";
      permission java.io.FilePermission "${carbon.home}/repository/database/-", "write";
      permission java.io.FilePermission "${carbon.home}/repository/database/-", "delete";
    // Trust all super tenant deployed artifacts
    grant codeBase "file:${carbon.home}/repository/deployment/server/-" {
    grant codeBase "file:${carbon.home}/lib/tomcat/work/Catalina/localhost/-" {
     permission java.io.FilePermission "/META-INF", "read";
     permission java.io.FilePermission "/META-INF/-", "read";
     permission java.io.FilePermission "-", "read";
     permission org.osgi.framework.AdminPermission "*", "resolve,resource";
     permission java.lang.RuntimePermission "*", "";
    // ========= Platform signed code permissions ===========================
    grant signedBy "signFiles" {
    // ========= Granting permissions to webapps ============================
    grant codeBase "file:${carbon.home}/repository/deployment/server/webapps/-" {
       // Required by webapps. For example JSF apps.
       permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
       // Required by webapps. For example JSF apps require this to initialize com.sun.faces.config.ConfigureListener
       permission java.lang.RuntimePermission "setContextClassLoader";
       // Required by webapps to make HttpsURLConnection etc.
       permission java.lang.RuntimePermission "modifyThreadGroup";
       // Required by webapps. For example JSF apps need to invoke annotated methods like @PreDestroy
       permission java.lang.RuntimePermission "accessDeclaredMembers";
       // Required by webapps. For example JSF apps
       permission java.lang.RuntimePermission "";
       // Required by webapps. For example JSF EL
       permission java.lang.RuntimePermission "getClassLoader";
       // Required by CXF app. Needed when invoking services
       permission javax.xml.bind.JAXBPermission "setDatatypeConverter";
       // File reads required by JSF (Sun Mojarra & MyFaces require these)
       // MyFaces has a fix  
        permission java.io.FilePermission "/META-INF", "read";
        permission java.io.FilePermission "/META-INF/-", "read";
        // OSGi permissions are required to resolve bundles. Required by JSF
       permission org.osgi.framework.AdminPermission "*", "resolve,resource";


Start server
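For context, a policy file like the one above only takes effect when the server JVM runs with the Java security manager enabled. In WSO2 servers this is typically done by adding system properties to the startup script; the property values below, including the policy file path, are illustrative assumptions rather than verbatim configuration:

```sh
# illustrative: enable the security manager with a policy file (path is an assumption)
-Djava.security.manager \
-Djava.security.policy="$CARBON_HOME/repository/conf/sec.policy"
```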

Chintana Wilamuna: What is microservices architecture?

Microservice architecture, to me, is a term that doesn’t convey anything new. Martin Fowler’s article about microservices compares it with a monolithic application. Breaking a monolithic application into a set of services comes with several benefits. That’s true regardless of the term being used, microservices or SOA. Quoting from the article:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies

The article also goes on to explain a set of characteristics of a microservices-based architecture. These characteristics are general to any architecture that’s based on services, regardless of the implementation technology used. Associating ESBs with SOA and labeling it as heavyweight is not right. It’s a flimsy argument against the widely available, high-quality open source tools we have today. We can certainly do SOA without an ESB. A lot of the bad rap for SOA stems from trying to use expensive and bloated tools to do the wrong job. With the open source middleware tools available these days, we can implement distributed services with ease. A lot of these tools don’t require developers to lock down on a particular messaging format. It’s about understanding the customers’ problems and using the right technology and toolset to solve them with minimum effort and time. So microservices doesn’t give you anything new. You need to understand the problems you have and decide the best possible path to take. After that, a proof of concept can be done in weeks. Once you can prove your architecture, progressing towards broader goals becomes easy.

Challenges and possible solutions

I really like this article about the challenges of microservices. Again, this is not specific to “microservices”; those challenges exist in every software system that’s based on services. Let’s see whether we can address some of them.

Significant Operations Overhead

Any moderately large distributed deployment with 10+ servers (with HA) will have a significant operational overhead, and it increases when there are multiple environments for development, testing and production. There are multiple ways to reduce this complexity.

  1. Deployment automation - Tools like Puppet give a lot of power to automate the deployment of environments as well as the artifacts that should be deployed. All the components of the deployment, such as load balancers, message brokers and other services, can be successfully automated through Puppet.
  2. Monitoring running instances - Although there are open source tools like Nagios/Ganglia/ELK, they’re still a bit behind tools like Amazon CloudWatch, IMO.
  3. Real-time alerting - Again, there are multiple open source products that you can configure to send alerts based on disk/CPU/memory utilization.
  4. Rolling into production - Open source tools like WSO2 AppFactory play a significant role here, giving developers the ability to roll out to different environments easily.

We can utilize some of the above tools to reduce operational overhead.

Substantial DevOps Skills Required

Standing up the environment in a private data center or on top of an IaaS vendor does require some DevOps skill. You can get around this by using cloud-hosted platforms such as WSO2 Public Cloud. Then again, there can be organization policies around data locality.

Implicit Interfaces

This is one of the major challenges in any RESTful architecture. If you’re using SOAP, the service interface is available through WSDL. Now WADL is becoming popular for providing a service definition for HTTP-based services. Even though standards are emerging, there are still challenging aspects when it comes to securing and versioning these services.

Interface changes of services can be handled with a mediation layer when the need arises. This can be an alternative to introducing contracts for services, where differences in interfaces are masked at the mediation layer.

Duplication Of Effort

Sanjiva’s article about API Management explains the importance of having an API management strategy for removing duplication of effort as well as for service reuse. In order to reuse, there should be an efficient way to find what’s available. Having a catalog of APIs helps to find what can be reused and reduces duplication of effort.

Distributed System Complexity

Although distributed systems add complexity, the flexibility you get from having a set of independently operating services is significant. The granularity of service decomposition should be decided according to the requirements of the application and the performance characteristics expected.

Asynchronicity Is Difficult!

This, IMO, is a more difficult problem to solve than the other points the author has mentioned. Maintaining complex distributed transaction semantics across service boundaries is hard.

Testability Challenges

While individual service testing is important, integration testing of the entire interaction from end to end is more important in a distributed environment. So more emphasis should be given to service integrations that involve several services being called to produce a single result.

Chamila Wijayarathna: Creating an RPC Communication using Apache Thrift

Apache Thrift is an open source, cross-language Remote Procedure Call (RPC) framework with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and other languages.
In one of my previous blogs, I wrote about my contributions to Apache Thrift during Google Summer of Code 2014. In this blog, I'm going to write about how to create a simple RPC communication using Apache Thrift.
To do this, you need to have Apache Thrift installed on your computer.
Then the first step is writing the .thrift file which defines the service you are implementing. In this file you need to define the methods, structs and exceptions you are going to use. The following is Thrift code that defines the very simple service I'm going to implement here.

    namespace java hello

    service mytest1 {
        i32 add(1:i32 num1, 2:i32 num2)
    }


This defines a service with a single method, add, which takes two integers as input and returns their sum.
Thrift documentation describes more about defining services using thrift. There are many data types defined in thrift that can be used when defining the service.
Let's save this in a file named mytest1.thrift. Then, by running "thrift -r --gen java mytest1.thrift", we can generate the bean files required to create the service. This creates a gen-java/hello/ directory containing the classes and methods needed for creating the server and client. You can use any language that is supported by Thrift here.

The next step is creating the server. First we need to create a Handler class which contains the method bodies of the methods we defined in the .thrift definition. In our case, the handler class looks like the following.

import org.apache.thrift.TException;

public class Handler implements mytest1.Iface {

    public int add(int num1, int num2) throws TException {
        System.out.println("Entered handler");
        return num1 + num2;
    }
}

Then we need to create the server class, which starts the server and keeps listening for incoming requests. It should look like the following.

import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TServer.Args;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TServerTransport;

public class Processor {

    public static Handler handler;
    public static mytest1.Processor<mytest1.Iface> processor;

    public static void main(String[] args) {
        try {
            handler = new Handler();
            processor = new mytest1.Processor<mytest1.Iface>(handler);

            Runnable simple = new Runnable() {
                public void run() {
                    simple(processor);
                }
            };
            new Thread(simple).start();
        } catch (Exception x) {
            x.printStackTrace();
        }
    }

    public static void simple(mytest1.Processor<mytest1.Iface> processor) {
        try {
            TServerTransport serverTransport = new TServerSocket(9090);
            TServer server = new TSimpleServer(new Args(serverTransport).processor(processor));

            System.out.println("Starting the simple server...");
            server.serve();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The hierarchy of the project I created in Eclipse looks like the following.

You will have to add some thrift dependency jar files to this as well. A simple server I created is available at

The next task is to create the client, which is easier than creating the server. The following is a client class written in Java.

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class Client {

    public static void main(String[] args) {
        try {
            TTransport transport = new TSocket("localhost", 9090);
            transport.open();

            TProtocol protocol = new TBinaryProtocol(transport);
            mytest1.Client client = new mytest1.Client(protocol);

            perform(client);

            transport.close();
        } catch (TException x) {
            x.printStackTrace();
        }
    }

    private static void perform(mytest1.Client client) throws TException {
        int sum = client.add(24, 31);
        System.out.println("24+31=" + sum);
    }
}

After starting the server, if you run this client, you will see that the service on the server runs and returns the desired outcome. From the same .thrift definition, we can generate files in any supported language and create servers and clients using them.

Chintana Wilamuna: Moving the blog

I’ve just moved the blog to a new location. I might move the archives a little later. Moved a couple of posts to adjust the look and feel of the blog. Also hoping that this will be a bit more photo friendly.

Lakmali Baminiwatta: Troubleshooting Swagger issues in WSO2 API Manager

WSO2 API Manager provides this functionality through the integration of Swagger. Swagger-based interactive documentation allows you to try out APIs from the documentation itself, and it is available as the "API Console" in the API Store.

There are certain requirements that need to be satisfied in order for the Swagger try-it functionality to work. The first requirement is to enable CORS in the API Manager Store. This documentation describes how that should be done.

However, many users face issues getting the Swagger try-it feature to work. So this blog post describes common issues faced by users with Swagger and how to troubleshoot them.


Issue -1

API Console keeps on loading the response forever, as shown below.

Cause -1

The API resource does not support the OPTIONS HTTP verb.


Solution

Add the OPTIONS HTTP verb for the API resources as shown below. Then save the API and try again.

Cause -2 

Backend endpoint not supporting OPTIONS HTTP verb. 

Note: You can verify this by directly invoking the backend with the OPTIONS verb. If the backend does not support the OPTIONS verb, a "403 HTTP method not allowed" response will be returned.


Solution

If you have control over the backend service/API, enable the OPTIONS HTTP verb.

Note: You can verify this by directly invoking the backend with the OPTIONS verb. If the backend supports the OPTIONS verb, a 200/202/204 success response should be returned.
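Both of the notes above can be checked with a plain HTTP client such as curl; the URL below is only a placeholder for your backend endpoint:

```sh
# Expect a 200/202/204 response if the backend supports OPTIONS,
# or "403 HTTP method not allowed" if it does not.
curl -i -X OPTIONS http://backend.example.com/api/resource
```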

If you can't enable the OPTIONS HTTP verb in the backend, then what you can do is modify the API's Synapse sequence so that it responds immediately, without forwarding the request to the backend, when the request is an OPTIONS request.
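One way to sketch that idea (this is an illustrative snippet, not the exact sequence shipped with the product; verify mediator placement inside your API's in-sequence) is a Synapse filter mediator that answers OPTIONS requests at the gateway:

```xml
<filter source="get-property('axis2', 'HTTP_METHOD')" regex="OPTIONS">
    <then>
        <!-- answer the preflight at the gateway without calling the backend -->
        <property name="HTTP_SC" value="200" scope="axis2"/>
        <respond/>
    </then>
</filter>
```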


Issue -2

API Console completes request execution, but no response is returned.


Cause -1

The authentication type enabled for the OPTIONS HTTP verb is not 'None'.

If this is the cause, an error trace containing "Required OAuth credentials not provided" will be shown in the wso2carbon.log:
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(
at org.apache.axis2.engine.AxisEngine.receive(
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(
at org.apache.axis2.transport.base.threads.NativeWorkerPool$
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
at java.util.concurrent.ThreadPoolExecutor$


Solution

Set the Auth Type to 'None' for the OPTIONS HTTP verb as shown below. Then save & publish the API.

Note: If you are seeing Issue -2 even though the authentication type is shown as 'None' for OPTIONS, please re-select 'None' as shown above, then save & publish the API. There is a UI bug in API Manager 1.7.0 where, although some authentication type other than 'None' is set for the OPTIONS verb, the UI shows it as 'None'.


Cause -2

The API Store domain/port from which you are currently using the Swagger API Console is not included in the CORS Access-Control-Allow-Origin configuration. For example, in the CORS configuration below, only localhost domain addresses are allowed for the API Store, but the API Console is accessed using the IP address.


Solution

Include the domain/port in the CORS Access-Control-Allow-Origin configuration. For the above example, we have to include the IP address as shown below. Then restart the server and try the API Console again.


Issue -3

API Console keeps on loading the response forever, as shown below, when the API Store is accessed over HTTPS, while HTTP works properly.


Cause

The browser blocks the request because the API Gateway is accessed over HTTP from an HTTPS page.


Solution

Go to the API Publisher, edit the "Swagger API Definition" of the API, and change the basePath to the HTTPS gateway address as shown below. Then save the "Swagger API Definition" and try again.
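For example, the basePath entry in the Swagger definition would change roughly as follows; the host and context are placeholders, and 8243 is the default HTTPS gateway port:

```json
{
    "basePath": "https://<gateway-host>:8243/yourcontext/1.0.0"
}
```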


If you are still getting the issue even after applying the above, the cause can be that the security certificate issued by the server is not trusted by your browser.


Solution

Access the HTTPS gateway endpoint directly from your browser and accept the security certificate. Then try again.

Amila Maharachchi: Why you should try WSO2 App Cloud - Part 1

I wrote a blog post on how to get started with WSO2 App Cloud a couple of months ago; it can be found at Lets get started with WSO2 App Cloud. My intention in writing this post is to explain why you should try it. I'll be listing the attractive features of WSO2 App Cloud which make it stand out from other app cloud offerings available.

  1. Onboarding process
  2. Creating your first application
  3. Launching your first application
  4. Editing and re-launching the application

Onboarding process

WSO2 App Cloud allows you to sign-up by providing your email address. Then you get an email with a link to click and confirm your account. Once you click it, as the next step, you are asked to provide an organisation name which you want to create.

WSO2 App Cloud Sign-Up page

WSO2 App Cloud has an organization concept which allows a set of developers, QA engineers, DevOps persons to work collaboratively. When you create an account in WSO2 App Cloud, it creates an organisation for you. In other words, this is a tenant. Then you can invite other members to this organisation. You can assign members to different applications developed under this organisation entity. I'll explain this structure in a future post (to keep you reading through this :))

Some of the other app clouds available today do not have the organisation concept. They only have the application entity. You sign-up and create an application. Then you are the owner of that app. You can invite others to collaborate with your app. But, if you create another app, you need to invite them again for collaboration.

Creating your first application

After you sign up and sign in successfully, let's have a look at the first app creation experience. WSO2 App Cloud supports creating multiple application types:
  • Java web applications
  • Jaggery web applications (Jaggery is a Javascript framework introduced by WSO2 itself)
  • JAX-WS services
  • JAX-RS services
  • WSO2 Data Services
  • It also allows you to upload existing Java web apps, JAX-WS, JAX-RS and Jaggery apps
WSO2 App Cloud is planning to support other app types such as PHP, node.js in the future.

In WSO2 App Cloud, creating an application is a two-click process. After you log in, you click the "Add New Application" button, fill in the information and click the "Create Application" button. This will create the skeleton of the app, a git repository for it, and build space for the app. Then it will build the app and deploy it for you. All you have to do is wait a couple of minutes, go to the app's overview page and click the "Open" button to launch your app. This has the following advantages.
  1. You don't need to know anything about the folder structure of the app type you are creating. WSO2 App Cloud does it for you.
  2. Within a couple of minutes, you have an up and running application. This is the same for new users and users who are familiar with WSO2 App Cloud.
See this video on WSO2 App Cloud's app creation experience.

Other offerings have different approaches to creating applications. Some of them need users to download SDKs. Users also need to have knowledge about the structure of the app which they want to create. After a user creates and uploads/pushes an app, they have to start up instances to run it. Some offerings provide a default instance, but if you want high availability for your app, the user has to start additional instances. WSO2 App Cloud takes care of this by default, so the user does not need to worry about it.

Launching your first application

Let's see how a user can launch the application they have created.

When you create the app in WSO2 App Cloud, after it goes through the build process, it is automatically deployed. Within seconds of creation, it presents you the URL to launch your app. A user with very little knowledge can get an app created and running easily.

Editing and re-launching the application

It's obvious that a user wants to edit the application he/she creates by adding their own code. WSO2 App Cloud dominates the app cloud offerings when it comes to development support. It provides you a cloud IDE: you can just click the "Edit Code" button and your app code will open in the browser. Not only can you edit the code in the browser, you can also build and run it before pushing the code to the App Cloud. Very cool, isn't it?

Second option is to edit the code using WSO2 Developer Studio. To do this, you need to have a working installation of WSO2 Developer Studio.

Third option is to clone the source code and edit it using your favourite IDE. 
WSO2 App Cloud IDE
See this video on WSO2 App Cloud's cloud IDE.

A special thing to note is that there is no other app cloud offering which provides a cloud IDE for developers.

I'll be writing the second part of this post by discussing some more attractive features of WSO2 App Cloud and I am expecting to discuss
  • Using resources within the app (databases etc.)
  • Collaborative development
  • Lifecycle management of the app
  • Monitoring logs
Your feedback is welcome. Stay tuned :) 

Chandana Napagoda: Configurable Governance Artifacts - CRUD Operations

Please refer to my previous post, which explains Configurable Governance Artifacts in WSO2 Governance Registry.

Once you have added an RXT, it will generate an HTML-based List and Add view. It will also be deployed as an Axis2-based service with CRUD operations. For example, when you upload contact.rxt, a Contact service will be exposed with CRUD operations. Using the provided CRUD operations, external client applications (PHP, .NET, etc.) can perform registry operations.

Below are the CRUD operations provided for the Contact RXT which we created in my previous post (RXT Source).
  • addContact - creates an artifact of type Contact.
  • getContact - retrieves an artifact of type Contact.
  • updateContact - updates an artifact of type Contact.
  • deleteContact - deletes an artifact of type Contact.
  • getContactArtifactIDs - gets all the artifact IDs of artifacts of type Contact.

To retrieve the WSDL of the above service, we need to disable "HideAdminServiceWSDLs" in the "carbon.xml" file. After that, you need to restart the WSO2 Governance Registry server. Then the contract (WSDL) will be exposed like this:

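As a rough illustration of calling the generated service from an external client, the SOAP body simply names one of the operations listed above. The namespace here is a made-up placeholder, so take the real one from the exposed WSDL:

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://example.org/placeholder-namespace">
    <soapenv:Body>
        <!-- fetch the IDs of all Contact artifacts -->
        <ser:getContactArtifactIDs/>
    </soapenv:Body>
</soapenv:Envelope>
```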

Please refer to the Service Client Example for more details:

Chandana Napagoda: WSO2 API Manager - Changing the default token expiration time

In WSO2 API Manager, the expiration period of all access tokens is set to 60 minutes (3600 seconds) by default. However, you can modify the default expiration period using the identity.xml file located in the <APIM_HOME>/repository/conf/ directory.

In the identity.xml file you can see separate configurations to modify default expirations of User tokens and application access tokens.


If you are planning to modify the validity period of the application access token, then you have to modify the default value of the <ApplicationAccessTokenDefaultValidityPeriod> element in the identity.xml file. Changing this value will not affect existing applications which have already generated application tokens; when you regenerate the application token, it will pick the token validity time from the UI. Therefore, for applications which have already generated tokens, the token validity period needs to be changed from the UI as well. However, when you are creating a new application, or when you generate the token for the first time, it will pick the token validity period from the identity.xml file.


If you are planning to modify the validity period of the user token, then you need to update the value of the <UserAccessTokenDefaultValidityPeriod> element in the identity.xml file. The user token validity period gets updated when the user generates or refreshes the token.
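Putting the two settings together, the relevant elements in identity.xml look roughly like this, with the 3600-second default mentioned above (surrounding elements omitted):

```xml
<!-- validity period (in seconds) for application access tokens -->
<ApplicationAccessTokenDefaultValidityPeriod>3600</ApplicationAccessTokenDefaultValidityPeriod>

<!-- validity period (in seconds) for user access tokens -->
<UserAccessTokenDefaultValidityPeriod>3600</UserAccessTokenDefaultValidityPeriod>
```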

Chintana Wilamuna: Selecting a Strategy for New IT

There are many advantages that you can gain from moving to new IT that connects business data with customers, partners and the general public. “New IT” here refers to pretty much any process/mechanism you adopt to move away from siloed applications toward a more integrated experience that makes day-to-day business more efficient. From a business and project management standpoint this can mean gaining more visibility into ongoing projects and the business impact they make. From an engineering standpoint this can mean tracking the versions of each project in production, projects that will be going to production soon, critical bugs affecting a release schedule and so forth. This blog tries to explore a couple of existing problems and how API management helps to move forward with adopting “new IT” for a connected business.

Existing processes

Everyone has some sort of a process/methodology that works for them. No two places or teams are alike. When there are already systems and certain ways of doing things in place, it’s hard to introduce change. First and foremost, the battle seems to be with the developers who are accustomed to doing certain things in certain ways. When introducing something new, it’s always helpful to position it as filling a gap in the current process, doing incremental improvements.

Possible problems

Let’s look at some of the usual problems and how API management can solve them.

Standard method for writing services

Having a rigid standard for introducing a new service might not be optimal. Rigid in the sense of being tied to a particular architectural style, like everything being REST or everything being a SOAP service. There are services and scenarios where a SOAP service makes sense, and other situations where an HTTP service accepting a JSON message would be much simpler and more efficient. Having one true standard which forces non-optimal creation of services will lead to unhappy developers/consumers, and the services will end up not being reused at all.

Challenges in service versioning/consistent interfaces

When a new version of a service is to be released, there has to be a way of versioning the service. If you can disrupt all the clients and force them to upgrade, then that’s easy (in case of an API change). Otherwise there has to be a way of deprecating an existing service and moving consumers to the new one.

Finding an existing service. Encouraging service reuse

If there are services available, then there should be a way to discover them. Not having a place to discover existing services is going to make service reuse a nightmare. This is not about the holy grail of service reuse that most SOA literature talks about; think about how a developer in another team or a new hire discovers services.


Metering and monitoring

When you have a service, you need to find out who’s using it, what the usage patterns are, whether certain clients should be throttled to give priority to other clients, and so on. If you don’t have any metering, it’s near impossible to determine what’s going on, or what problems (if any) service consumers face. It also helps to have historic data for doing projections that will result in resource expansions accordingly.


Here I would like to present a solution and how it addresses some of the problems listed earlier: make APIs the interfaces for services. There is a difference between a service and an API. The API will be the consistent interface to everything outside (users within the organization, partners, suppliers, customers, etc.), anything or anyone who will consume the service.

Having an API façade will be the first step. You can expose a SOAP based service without any problem. Exposing a REST service falls naturally. Exposing a service is not bound by any imposed standard anymore. A service can be written by analysing the best approach based on the problem at hand.

Versioning is enabled at the service definition level, the API layer. The following picture shows how to create a new version of an existing API.


The picture above shows the current API version at the top, and you can give a version when creating the new one. Once you create the API, at the publishing stage you can choose whether to deprecate the existing API or to have them coexist so that the new version is treated as a separate API, as the following picture shows.


Another cool thing is that you can choose whether to revoke existing security tokens, forcing all clients to re-subscribe.

Next up is having a place for developers to find out about existing services. API Manager has a web application named API Store that lists published APIs. Here’s an example of a store that’s developed on top of API Manager. Monitoring is equally simple and powerful.

This is how API Manager helps to make life easy as well as helping you make the right technical choices: allowing developers and other stakeholders to choose the right service type and message formats, encouraging service implementations to be diverse while still having a consistent interface where monitoring, security, versioning, etc. can be applied with ease.

Chintana Wilamuna: Implementing an SOA Reference Architecture

A reference architecture tries to capture best practices and common knowledge for a particular subject area or business vertical. Although you can define a reference architecture at any level, for the purpose of this blog we’ll be talking about an SOA reference architecture. When tasked with implementing an SOA architecture, blindly following a reference architecture might not give optimal results. If business requirements are not considered, sometimes it will not be the right fit for the issues at hand.

When an SOA architecture is going to be implemented, close attention should be given to business requirements and any other technological constraints: having to work with a specific ERP system, having to work with a legacy datastore that is too expensive to replace with a whole new system, and so on. Based on these facts, a solution architecture should be developed that’s aligned with the business requirements.

Looking at the solution architecture, a toolset should be evaluated that maximizes ROI and accommodates possible future enhancements that might come in the next 1-2 years. Evaluating an existing architecture every 1-2 years and making small refinements saves the time and effort of doing big-bang replacements and modifications of critical components. While selecting a toolset, having a complete unified platform helps to build the bits you need right away and still have room for additions later. If you’re looking for a complete platform, you probably want to consider the WSO2 middleware platform, which provides a complete open source solution.

I’m biased for open source solutions and having a platform that makes connected business possible is mighty cool.

Isuru Perera: Monitoring WSO2 products with logstash JMX input plugin

These days, I got the chance to play with ELK (Elasticsearch, Logstash & Kibana). These tools are a great way to analyze & visualize all logs.

You can easily analyze all wso2carbon.log files with ELK. However, we also needed to use ELK for monitoring WSO2 products, and this post explains the essential steps to use the logstash JMX input plugin to monitor WSO2 servers.

Installing Logstash JMX input plugin

Logstash has many input plugins, and the JMX input plugin is available under "contrib".

We can use the "plugin install contrib" command to install the extra plugins.

cd /opt/logstash/bin
sudo ./plugin install contrib

Note: If you use logstash 1.4.0 and encounter issues loading jmx4r, please refer to the Troubleshooting section below.

Logstash JMX input configuration

When using the JMX input plugin, we can use a configuration similar to the following. We are keeping the logstash configs in "/etc/logstash/conf.d/logstash.conf".

input {
  jmx {
    path => "/etc/logstash/jmx"
    polling_frequency => 30
    type => "jmx"
    nb_thread => 4
  }
}

output {
  elasticsearch { host => "localhost" }
}

Note that the path points to a directory. We have the JMX configuration in "/etc/logstash/jmx/jmx.conf"

{
  //The WSO2 server hostname
  "host" : "localhost",
  //jmx listening port
  "port" : 9999,
  //username to connect to jmx
  "username" : "jmx_user",
  //password to connect to jmx
  "password": "jmx_user_pw",
  "alias" : "jmx.dssworker1.elasticsearch",
  //List of JMX metrics to retrieve
  "queries" : [
  {
    "object_name" : "java.lang:type=Memory",
    "attributes" : [ "HeapMemoryUsage", "NonHeapMemoryUsage" ],
    "object_alias" : "Memory"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=Code Cache",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolCodeCache"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Perm Gen",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolPermGen"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Old Gen",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolOldGen"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Eden Space",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolEdenSpace"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Survivor Space",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolSurvivorSpace"
  }, {
    "object_name" : "java.lang:type=GarbageCollector,name=*MarkSweep",
    "attributes" : [ "Name", "CollectionCount", "CollectionTime" ],
    "object_alias" : "GarbageCollectorMarkSweep"
  }, {
    "object_name" : "java.lang:type=GarbageCollector,name=ParNew",
    "attributes" : [ "Name", "CollectionCount", "CollectionTime" ],
    "object_alias" : "GarbageCollectorParNew"
  }, {
    "object_name" : "java.lang:type=ClassLoading",
    "attributes" : [ "LoadedClassCount", "TotalLoadedClassCount", "UnloadedClassCount" ],
    "object_alias" : "ClassLoading"
  }, {
    "object_name" : "java.lang:type=Runtime",
    "attributes" : [ "Uptime", "StartTime" ],
    "object_alias" : "Runtime"
  }, {
    "object_name" : "java.lang:type=Threading",
    "attributes" : [ "ThreadCount", "TotalStartedThreadCount", "DaemonThreadCount", "PeakThreadCount" ],
    "object_alias" : "Threading"
  }, {
    "object_name" : "java.lang:type=OperatingSystem",
    "attributes" : [ "OpenFileDescriptorCount", "FreePhysicalMemorySize", "CommittedVirtualMemorySize", "FreeSwapSpaceSize", "ProcessCpuLoad", "ProcessCpuTime", "SystemCpuLoad", "TotalPhysicalMemorySize", "TotalSwapSpaceSize", "SystemLoadAverage" ],
    "object_alias" : "OperatingSystem"
  } ]
}

This is all we need to configure logstash to get JMX details from WSO2 servers. Note that we have given a directory as the path in the JMX input configuration. This means that all the configs inside "/etc/logstash/jmx" will be loaded, so we need to make sure there are no other files in that directory.

I'm querying only the required attributes for now. It is possible to add as many queries as you need.

Securing JMX access of WSO2 servers.

WSO2 servers start the JMX service by default, and you should be able to see the JMX Service URL in wso2carbon.log.

For example:
TID: [-1234] [] [DSS] [2014-05-31 01:09:11,103]  INFO {org.wso2.carbon.core.init.JMXServerManager} -  JMX Service URL  : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi

You can see the JMX configuration in <CARBON_HOME>/repository/conf/etc/jmx.xml and the JMX ports in <CARBON_HOME>/repository/conf/carbon.xml

<!-- The JMX Ports -->
<JMX>
    <!--The port RMI registry is exposed-->
    <RMIRegistryPort>9999</RMIRegistryPort>
    <!--The port RMI server should be exposed-->
    <RMIServerPort>11111</RMIServerPort>
</JMX>

You may change the ports in this configuration. (The port values above match the JMX Service URL shown in the log line earlier.)

It is recommended to create a role with only the "Server Admin" permission and assign it to the "jmx_user". Then the "jmx_user" will have the required privileges to monitor WSO2 servers.

Also, if we enable the Java Security Manager, we need the following permissions. Usually the WSO2 servers are configured to use the security policy file at <CARBON_HOME>/repository/conf/sec.policy when the Security Manager is enabled.

grant {
  // JMX monitoring requires the following permissions. Check the Logstash JMX input configurations.
  permission javax.management.MBeanPermission "-#-[-]", "queryNames";
  permission javax.management.MBeanPermission "*[java.lang:type=Memory]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=MemoryPool,name=*]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=GarbageCollector,name=*]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=ClassLoading]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=Runtime]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=Threading]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=OperatingSystem]", "queryNames,getMBeanInfo,getAttribute";
};

That's it. You should be able to push JMX stats via logstash now.


First of all, you can check whether the configurations are correct by running the following command.

logstash --configtest

This should report that the configuration is OK.

However, I encountered the following issue in logstash 1.4.0 when running the logstash command.

LoadError: no such file to load -- jmx4r
require at org/jruby/
require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /opt/logstash/lib/logstash/JRUBY-6970.rb:27
require at /opt/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
thread_jmx at /opt/logstash/bin/lib//logstash/inputs/jmx.rb:132
run at /opt/logstash/bin/lib//logstash/inputs/jmx.rb:251

For this issue, we need to extract the contrib plugins into the logstash installation directory instead of using the contrib plugin installation. I got help from the #logstash IRC channel to figure this out. Thanks terroNZ!

I did the following steps:

wget  --no-check-certificate -O logstash-contrib-1.4.0.tar.gz
tar -xvf logstash-contrib-1.4.0.tar.gz
sudo rsync -rv --ignore-existing logstash-contrib-1.4.0/* /opt/logstash/

Please note that the JMX input plugin works fine in logstash-1.4.1 after installing the contrib plugin, so the above steps are not required there.

The next issue can occur when connecting to the WSO2 server. Check <CARBON_HOME>/repository/logs/audit.log and see whether the user can connect successfully. If not, you should check the user's permissions.

Another possible issue is the failure of JMX queries. You can run logstash with the "--debug" option and check the debug logs.

I noticed following.
{:timestamp=>"2014-05-30T00:09:29.373000+0000", :message=>"Find all objects name java.lang:type=Memory", :level=>:debug, :file=>"logstash/inputs/jmx.rb", :line=>"165"}
{:timestamp=>"2014-05-30T00:09:29.392000+0000", :message=>"No jmx object found for java.lang:type=Memory", :level=>:warn, :file=>"logstash/inputs/jmx.rb", :line=>"221"}
{:timestamp=>"2014-05-30T00:09:29.393000+0000", :message=>"Find all objects name java.lang:type=Runtime", :level=>:debug, :file=>"logstash/inputs/jmx.rb", :line=>"165"}
{:timestamp=>"2014-05-30T00:09:29.396000+0000", :message=>"No jmx object found for java.lang:type=Runtime", :level=>:warn, :file=>"logstash/inputs/jmx.rb", :line=>"221"}

This issue arose because we had enabled the Java Security Manager; after adding the permissions mentioned above, the logstash JMX input plugin worked fine.

Next is to create dashboards in Kibana using this data. Hopefully I will be able to write a blog post on that as well.

Prabath SiriwardenaWSO2 Identity Server 5.0.0 - Service Pack 1 is now publicly available

You can now download the WSO2 Identity Server 5.0.0 - Service Pack 1 from

This Service Pack (SP) contains all the fixes issued for WSO2 Identity Server 5.0.0 up to now.

Installing the service pack into a fresh WSO2 Identity Server 5.0.0 release, prior to the first start up. 

1. Download WSO2 Identity Server 5.0.0 and extract it to a desired location.

2. Download the service pack, extract it, and copy the WSO2-IS-5.0.0-SP01 directory to the same location where you have wso2is-5.0.0.

3. The directory structure, from the parent directory will look like following.


4. In the command line, from the 'WSO2-IS-5.0.0-SP01' directory run the following command.

   On Microsoft Windows: \> install_sp.bat
   On Linux/Unix: $ sh install_sp.sh

5. Start the Identity Server.

   Linux/Unix : $ sh wso2server.sh -Dsetup
   Windows : \> wso2server.bat -Dsetup

 6. Open the file wso2is-5.0.0/repository/logs/patches.log and look for the following line. If you find it, the service pack has been applied successfully.

INFO {org.wso2.carbon.server.util.PatchUtils} - Applying - patch1016

Installing the service pack into a WSO2 Identity Server 5.0.0 release, already in production.

If you have an Identity Server instance already running in your production environment and want to update it with the service pack, please refer to the README file which comes with the service pack itself.

Lakmali BaminiwattaHow to invoke APIs in SOAP style in Swagger

WSO2 API Manager has integrated Swagger to allow API consumers to explore APIs through an interactive console known as the 'API Console'.

This Swagger-based API Console supports invoking APIs in REST style out of the box. This post shows how to invoke APIs in SOAP style in the API Console of WSO2 API Manager 1.7.0. For that we need a few extra configurations.

1. Send SOAPAction and Content-Type header in the request
2. Enable sending SOAPAction header in the CORS configuration

First, create an API for a SOAP service. In this example I am using the HelloService sample SOAP service of WSO2 Application Server. This HelloService has an operation named greet which accepts a payload as below.


1. Create API

Figure-1 : Design API 

Figure-2 : Implement API by giving SOAP endpoint

Figure-3 :Save and Publish API

2. Update Swagger API Definition

Now we have to edit the default Swagger content and add the SOAPAction and Content-Type headers. For that, go to the 'Docs' tab and click 'Edit Content' for the API definition.

Figure-4: Edit Swagger API definition

Since we have to send a payload in the request, locate the POST HTTP method in the content. Then add the content below into the 'parameters' section of the POST HTTP method.

                        {
                            "name": "SOAPAction",
                            "description": "SOAPAction header for the SOAP operation",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        },
                        {
                            "name": "Content-Type",
                            "description": "Content-Type header",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        }

Then the complete POST HTTP method definition would look like below.

Figure-5: Edited Swagger API definition

After doing the above changes Save & Close.

Now, if you go to the API Store, click on the created API, and then go to the API Console, you should see that the SOAPAction and Content-Type fields have been added to the Swagger UI.

Figure-6: API Console with new header parameters

3. Add New Headers to CORS Configuration

Although we have added the required headers, they will be sent in the request only if they are set as allowed headers in the CORS configuration.

For that, open the APIM_HOME/repository/conf/api-manager.xml file and locate CORSConfiguration. Then add SOAPAction to the available list of Access-Control-Allow-Headers as below (Content-Type is added by default, so we only have to add SOAPAction).

Figure-7: API Console with new header parameters
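For reference, the resulting CORSConfiguration fragment would look roughly like this (abbreviated sketch of the default api-manager.xml; only SOAPAction is newly added, the other values are the defaults):

```xml
<CORSConfiguration>
    <!--Enable CORS header handling at the gateway-->
    <Enabled>true</Enabled>
    <Access-Control-Allow-Origin>*</Access-Control-Allow-Origin>
    <!--SOAPAction appended to the default list of allowed headers-->
    <Access-Control-Allow-Headers>authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction</Access-Control-Allow-Headers>
</CORSConfiguration>
```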

After adding the headers, restart API Manager and invoke the API through the API Console.
When invoking the API, set the SOAPAction according to your SOAP service. Also set the Content-Type header to 'text/xml'.

Figure-8: API Console Invoke API

If you face any issues with swagger invoke, please go through this.

Lali DevamanthriProductivity in SOA Development team

I have frequently noticed that the SOA development team is always on the critical path in projects. Is there a problem of capacity, skills, life cycle, design, or maturity? If your development team is on the critical path for everything, that’s a PROCESS choice and a POLICY choice. If you don’t think that choice is working for you, you need to examine alternatives. SOA, if implemented in a couple of different ways, actually eliminates the need for bottlenecked development. SOA will not give you any answers about people management; get inspiration from Agile methodology (Alistair Cockburn, for instance) in order to drive the output value from a team.

If you are a team lead who is experiencing the same kind of atmosphere ...

#1 SOA strategy

In our post-cold-war world, the only thing that decides is still MONEY. Technology, not yet.
– SOA is not an intellectual thing.
– SOA MUST make your enterprise more competitive on the market.
– SOA must maximize the ROI of your enterprise.
– At your level, SOA must help for productivity (after some impediments removal)

Reusing services means factoring team skills and factoring the costs spent on research/education related to some technology within the organization.

#2 Ownership

Are you really not the owner of the team? As an architect, you have to give good advice to the team so that they are more productive. For sure, you’re challenged / monitored by your management on those aspects.

If the architect of your house makes unreadable plans for the brickies and carpenters, the result will be a disaster, even if the architect of your house followed an SOA initiative.

The IT architect role is intertwined with team / people management.
What is the added value of giving advice if your team is not hearing/following you?

#3 Organization

Your problem is not technical. This is an organizational issue.

Indeed, you have to take on those problems first: proper life cycle, artifact definition.
This is your job, but it has nothing to do with SOA.
When those are clear, make a plan and discuss it with the team.
And then, for the issues related to ownership confusion / team leading, you have to prepare to expose the situation to your management. Be very careful in your communication; choose your words, and expose the facts, not problems, otherwise you’ll be designated as the source of the problem (!). Do not tell management about SOA or anything technical. Just state the facts. And then, if (and only if) you are asked, propose different solutions.

Manula Chathurika ThantriwatteHow to work with Apache Stratos 4.1.0 with Kubernetes

Below are the simple steps to configure a Kubernetes cluster and make it work with Apache Stratos.

Setup Kubernetes host cluster by cloning and setting up the virtual machines      
    Login to Kubernetes master and pull the following Docker image
    • cd [vagrant-kubernetes-setup-folder]
    • vagrant ssh master
    • sudo systemctl restart controller-manager
    • docker pull stratos/php:4.1.0-alpha

    Verify Kubernetes cluster status, once the following command is run there should be at least one minion listed
    • cd [vagrant-kubernetes-setup-folder]
    • vagrant ssh master
    • kubecfg list minions

    Start Stratos instance and tail the log
    • cd [stratos-home-folder]
    • sh bin/stratos.sh start
    • tail -f repository/logs/wso2carbon.log

    Set the Message Broker and Complex Event Processor IP addresses to the Stratos host IP address in the Kubernetes cluster
    • cd [stratos-samples-folder]
    • vim single-cartridge/kubernetes/artifacts/kubernetes-cluster.json


    Once the server has started, run one of the Kubernetes samples available in the Stratos samples checked out above
    • cd [stratos-samples-folder]
    • cd single-cartridge/kubernetes
    • ./

    Monitor Stratos log and wait until the application activated log is printed
    • INFO {org.apache.stratos.autoscaler.applications.topic.ApplicationsEventPublisher} -  Publishing Application Active event for [application]: single-cartridge-app [instance]:single-cartridge-app-1

    Ajith VitharanaAdding custom password policy enforcer to WSO2 Identity Server.

    1. Let's say the user password should meet the following requirements:

    * password should have at least one lower case
    * password should have at least one upper case
    * password should have at least one digit
    * password should have at least one special character (!@#$%&*).
    * password should have 6-8 characters.

     You can write a new custom password enforcer by extending the AbstractPasswordPolicyEnforcer class.

    1. You can download the java project from following git repository location [i]


    2. Build the project (Follow the README.txt).

    3. Copy the jar file in to <IS5.0.0_HOME>/repository/components/lib directory.

    4. Open the file (<IS5.0.0_HOME>/repository/conf/security/

    5. Enable the  identity listener.


    6. Disable the default Password.policy.extensions configurations.


    7. Add new configuration for custom policy enforcer.


    8. Restart the server.

    9. Test.

    i) user : ajith  password : 1Acws@d  (this password meets the above policy).

    ii) user : ajith1 password : 1Acws@dgggg (this password doesn't meet the above policy because its length is 11.)
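    The five rules above can be sanity-checked outside the server too. Here is an illustrative shell sketch of the same policy (this is not the enforcer code itself, which lives in the custom Java class):

```shell
# Illustrative only: the password policy as a shell function.
check_password() {
  pw="$1"
  [ ${#pw} -ge 6 ] && [ ${#pw} -le 8 ] || return 1     # 6-8 characters
  printf '%s' "$pw" | grep -q '[a-z]'     || return 1  # at least one lower case
  printf '%s' "$pw" | grep -q '[A-Z]'     || return 1  # at least one upper case
  printf '%s' "$pw" | grep -q '[0-9]'     || return 1  # at least one digit
  printf '%s' "$pw" | grep -q '[!@#$%&*]' || return 1  # at least one special character
}

check_password '1Acws@d'     && echo "valid"    # meets the policy
check_password '1Acws@dgggg' || echo "invalid"  # rejected: length is 11
```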

    Ajith VitharanaSOAP web service as REST API using WSO2 API Manager -1


    Download the latest versions of WSO2 ESB, API Manager, Application Server and WSO2 Developer Studio from the web site.

    1. Deploy sample web service.

     1.1 Download this sample web service archive [i] (SimpleStockQuoteService.aar) and deploy it on WSO2 Application Server.

    [i] :

    1.2 Open the carbon.xml file and change the port offset value to 1, then start the server. (The carbon.xml file is under the wso2as-5.2.1/repository/conf directory.)

    1.3. Log in to the administrator console (admin/admin) and deploy the SimpleStockQuoteService.aar file using the aar file deployment wizard. (Please check the image below.)

    https://[host or IP]:9444/carbon/

     1.4 After a few seconds, refresh the service "List" page; now you should see the "SimpleStockQuoteService" service in the services list. (Please see the image below.)

    1.5 Click on the "SimpleStockQuoteService" name; now you should see the WSDL locations (1) and endpoints (2) of that service along with some other features.

    1.6 Create a SOAP UI project and invoke some operation. (as an example I'm going to invoke the getQuote operation)

    1.7 Now I'm going to expose this operation(getQuote)  using WSO2 API Manager.

    POST : http://<Host or IP>:<port>/stock/1.0.0/getquote/

    request payload

    {
        "getQuote": {
            "request": { "symbol": "IBM" }
        }
    }

    1.8 Expose the SimpleStockQuoteService service as a proxy service using WSO2 ESB, because when we call the above operation in a RESTful manner we need to create the SOAP payload expected by the back-end web service. That conversion can be easily achieved using WSO2 ESB.

    2.0 Create ESB configuration project

    WSO2 Developer Studio (DevS) provides a rich graphical editor to create a message mediation flow without writing the XML configuration by hand.

    2.1 File --> New project ---> Other , then select the "ESB Config Project" and create new esb config project called 'ESBConfigProject'.

    [You can find my project in following git repository location [i]. (Download and import that to DevS)]


    3.0 Message mediation flow.

    (1) API will forward the JSON request to the "StockQuoteProxy"

    JSON request:

    {
        "getQuote": {
            "request": { "symbol": "IBM" }
        }
    }
    Complete proxy configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="http https" startOnLoad="true" trace="disable">
       <target>
          <inSequence>
             <property name="messageType" value="text/xml" scope="axis2" type="STRING"/>
             <enrich>
                <source clone="true" xpath="$body/jsonObject/getQuote"/>
                <target type="body"/>
             </enrich>
             <xslt key="in_transform"/>
             <send>
                <endpoint>
                   <address uri="http://localhost:9764/services/SimpleStockQuoteService/" format="soap12"/>
                </endpoint>
             </send>
          </inSequence>
          <outSequence>
             <property name="messageType" value="application/json" scope="axis2" type="STRING"/>
             <send/>
          </outSequence>
       </target>
    </proxy>

    (2) Once you define the property mediator message will be converted to XML.

    <soapenv:Envelope xmlns:soapenv="">

    (3) The Enrich mediator will remove the extra <jsonObject> tag.

    <soapenv:Envelope xmlns:soapenv="">

    The XSLT configuration (in_transform) has been added as a local entry.

    (4) XSLT mediator will add the required namespace to the message.

    <soapenv:Envelope xmlns:soapenv="">
    <m0:getQuote xmlns:m0="http://services.samples">

    (5) Send mediator will send  the above message to the address endpoint.

    (6) ESB will get the following response at the out sequence.

    <?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="">
    <ns:getQuoteResponse xmlns:ns="http://services.samples">
    <ns:return xmlns:ax23="http://services.samples/xsd" xmlns:xsi="" xsi:type="ax23:GetQuoteResponse">
    <ax23:lastTradeTimestamp>Sun Jan 11 21:24:45 EST 2015</ax23:lastTradeTimestamp>
    <ax23:name>IBM Company</ax23:name>

    (7) The property mediator will set the content-type to "application/json"; then the outgoing message from ESB to API Manager will be converted to JSON.

    (8) API Manager gets the JSON message and sends it back to the original client who called the REST endpoint.

    {
        "getQuoteResponse": {
            "return": {
                "@type": "ax23:GetQuoteResponse",
                "change": -2.7998649191202554,
                "earnings": -8.327004353136367,
                "high": 64.81427412887071,
                "last": 62.87761070119457,
                "lastTradeTimestamp": "Sun Jan 11 21:56:23 EST 2015",
                "low": 65.364164349526,
                "marketCap": 56492843.72473255,
                "name": "IBM Company",
                "open": 65.48021093533967,
                "peRatio": -18.06979273115794,
                "percentageChange": 4.597394502400419,
                "prevClose": -60.90112383565025,
                "symbol": "IBM",
                "volume": 15407
            }
        }
    }
    3.1 Create a "Composite Application Project" using DevS. (File ---> New --> Composite Application Project.) The name of that project is "ESBCARApp".

    3.2. Right click on the ESBCARApp , then "Export Composite Application Project".

    3.3 Select the file location to export the deployable archive file. Then add the "ESBConfigProject" as dependency in next window.

    4.0 Deploy the CAR file in ESB.

    Extract (unzip) the WSO2 ESB distribution and change the port offset value to 2 in the carbon.xml file, then start the server.


    4.1 Log in to the WSO2 ESB management console and deploy the Carbon Application Archive(CAR) file.

      4.2 Now you should see the following info logs in WSO2 ESB startup console (Or in wso2carbon.log  file)

    4.3 After a few seconds you should see the deployed "StockQuoteProxy" in the services list.

    4.4 Click on the proxy name, then you should see the proxy endpoint. That is the endpoint you should invoke inside your API.

    5.0 Create and Publish API

    Log in to the publisher web portal to create new API.

    https://<Host or IP>:9443/publisher

    5.1 Fill the required fields with the following values and click on the "Add New Resource" button.

    Name    :  StockQuoteAPI
    Context :  stock
    Version :  1.0.0

    URL Pattern : getquote
    Check the resource type POST

     5.2 Go to the "Implement" wizard and add the endpoint related details.

    Endpoint Type : select the "Address Endpoint" from drop down.

    Production URL: https://localhost:8245/services/StockQuoteProxy

     5.3 Go to the "Manage" wizard and select the required "Transport" and "Tier Availability". Finally, click on the "Save & Publish" button to publish this API to the Store.

    5.4 Go to the API store,  then you should see the deployed "StockQuoteAPI".
    https://<Host or IP>:9443/store/

    5.5 Log in to the Store and go to "My Applications", then add a new application called "StockQuoteApplication".

    5.6. Go to the "My Subscriptions" page, then generate a new application token.

    5.7. Go to the API page and select "StockQuoteAPI", then subscribe that API against the  "StockQuoteApplication".

    5.8 Invoke the API providing the REST endpoint, the application token, and the JSON payload. (Click on the API name to get the endpoint URL.)

    5.9. You can use the SOAP UI to invoke that API.
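    Scripted, the call in step 5.8 would look roughly like the sketch below. The host, port and token are placeholders to fill in from your own setup; the context, version and resource are the values used above:

```shell
# Sketch of step 5.8; fill in the placeholders before running.
TOKEN="<application-token>"                            # from My Subscriptions
ENDPOINT="http://<host>:<port>/stock/1.0.0/getquote/"  # gateway endpoint of the API
PAYLOAD='{"getQuote": {"request": {"symbol": "IBM"}}}'

# The request is only sent once the placeholders have been replaced.
if [ "$TOKEN" != "<application-token>" ]; then
  curl -X POST "$ENDPOINT" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
fi
```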

    Hiranya JayathilakaCreating Eucalyptus Machine Images from a Running VM

    I often use the Eucalyptus private cloud platform for my research, and very often I need to start Linux VMs in Eucalyptus and install a whole stack of software on them. This involves a lot of repetitive work, so in order to save time I prefer creating machine images (EMIs) from fully configured VMs. This post outlines the steps one should follow to create an EMI from a VM running in Eucalyptus (tested on Ubuntu Lucid and Precise VMs).

    Step 1: SSH into the VM running in Eucalyptus, if you haven't already.

    Step 2: Run euca-bundle-vol command to create an image file (snapshot) from the VM's root file system.
    euca-bundle-vol -p root -d /mnt -s 10240
    Here "-p" is the name you wish to give to the image file. "-s" is the size of the image in megabytes. In the above example, this is set to 10GB, which also happens to be the largest acceptable value for "-s" argument. "-d" is the directory in which the image file should be placed. Make sure this directory has enough free space to accommodate the image size specified in "-s". 
    This command may take several minutes to execute. For a 10GB image, it may take around 3 to 8 minutes. When completed, check the contents of the directory specified in argument "-d". You will see an XML manifest file and a number of image part files in there.

    Step 3: Upload the image file to the Eucalyptus cloud using the euca-upload-bundle command.
    euca-upload-bundle -b my-test-image -m /mnt/root.manifest.xml
    Here "-b" is the name of the bucket (in Walrus key-value store) to which the image file should be uploaded. You don't have to create the bucket beforehand. This command will create the bucket if it doesn't already exist. "-m" should point to the XML manifest file generated in the previous step.
    This command requires certain environment variables to be exported (primarily access keys and certificate paths). The easiest way to do that is to copy your eucarc file and the associated keys into the VM and source the eucarc file into the environment.
    This command also may take several minutes to complete. At the end, it will output a string of the form "bucket-name/manifest-file-name".

    Step 4: Register the newly uploaded image file with Eucalyptus.
    euca-register my-test-image/root.manifest.xml
    The only parameter required here is the "bucket-name/manifest-file-name" string returned from the previous step. I've noticed that in some cases, running this command from the VM in Eucalyptus doesn't work (you will get an error saying 404 not found). In that case you can simply run the command from somewhere else -- somewhere outside the Eucalyptus cloud. If all goes well, the command will return with an EMI ID. At this point you can launch instances of your image using the euca-run-instances command.

    Dinusha SenanayakaHow to enable login to WSO2 API Manager Store using Facebook credentials

    The WSO2 Identity Server 5.0.0 release provides several default federated authenticators, such as Google, Facebook and Yahoo. It is also possible to write a custom authenticator, in addition to the default authenticators provided.

    In this post we are going to demonstrate how to configure WSO2 API Manager with WSO2 Identity Server so that users coming to the API Store can use their Facebook accounts to log in to the API Store.

    Step 1 : Configure SSO between API Store and API Publisher

    First you need to configure SSO between publisher and store as mentioned in this document.

    Step 2 : You need an App Id and App Secret key pair generated for an application registered on the Facebook developers site. This can be done by logging in to the Facebook developer site and creating a new app.

    Step 3 : Log in to the Identity Server and register an IdP with the Facebook authenticator

    This can be done by navigating to Main -> Identity Providers -> Add. This will prompt the following window. In the "Federated Authenticators" section expand the "Facebook Configuration" and provide the details.

    The App Id and App Secret generated in step two map to the Client Id and Client Secret values asked for in the form.

    Step 4 : Go to the two service providers created in step 1 and associate the IdP created above with them.

    This configuration is available under "Local & Outbound Authentication Configuration" section of the SP.

    Step 5 : If you try to access the Store URL (i.e. https://localhost:9443/store), it should redirect to the Facebook login page.

    Step 6: In order for Store users to be able to use their Facebook account as a login, they need to follow this step and associate their Facebook account with their user account in the API Store.

    The Identity Server provides a dashboard which gives users multiple features for maintaining their user accounts. Associating a social login with their account is one option provided in this dashboard.

    This dashboard can be accessed in the following url .

    eg: https://localhost:9444/dashboard

    Note: If you are running the Identity Server with a port offset, you need to make the changes mentioned here in order to get this dashboard working.

    Login to the dashboard with API Store user account. It will give you a dashboard like follows.

    Click on the "View details" button provided in the "Social Login" gadget. In the prompted window, there is an option to "Associate Social Login". Click on this and give your Facebook account id as follows.

    Once the account is registered, it will be listed as follows.

    That's all we have to configure. This user should now be able to log in to the API Store using his Facebook account.

    Note: This post explained how users who already have a user account in the API Store can associate their Facebook account to authenticate to the API Store. If someone needs to enable API Store login for all Facebook accounts without having a user account in the API Store, that should be done through a custom authenticator added to the Identity Server, i.e. provision the user using the JIT (Just In Time provisioning) functionality provided in the IdP and, using the custom authenticator, associate the "subscriber" role with the provisioned user.

    Ajith VitharanaAdding namespace and prefix to only root tag using XSLT

    1. Let's assume that we have the following XML content.

    <soapenv:Envelope xmlns:soapenv="">
    2. Add the namespace and prefix only to the root tag.

    <soapenv:Envelope xmlns:soapenv="">
          <ns0:ByExchangeRateQuery xmlns:ns0="">

    3. Define the XSLT file.

      <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
        <xsl:output indent="yes"/>
        <xsl:strip-space elements="*"/>
        <!--Match all the nodes and attributes and copy them as they are-->
        <xsl:template match="node()|@*">
            <xsl:copy>
                <xsl:apply-templates select="node()|@*"/>
            </xsl:copy>
        </xsl:template>
        <!--Select the element to which the namespace and prefix need to be applied-->
        <xsl:template match="ByExchangeRateQuery">
            <!--Define the namespace with prefix ns0-->
            <xsl:element name="ns0:{name()}" namespace="">
                <!--Apply to the node selected above-->
                <xsl:apply-templates select="node()|@*"/>
            </xsl:element>
        </xsl:template>
      </xsl:stylesheet>
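To sanity-check the stylesheet, it can be applied with the JDK's built-in XSLT engine. The sketch below is illustrative only: the target namespace urn:example is a placeholder (the post leaves the actual URI out), and the stylesheet is written as version 1.0 since the JDK ships an XSLT 1.0 processor.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class RootNamespaceDemo {

    // The stylesheet from step 3. The target namespace urn:example is a
    // placeholder assumption; version is 1.0 for the JDK's Xalan processor.
    static final String XSLT =
        "<xsl:stylesheet xmlns:xsl='http://www.w3.org/1999/XSL/Transform' version='1.0'>"
      + "  <xsl:output indent='yes'/>"
      + "  <xsl:strip-space elements='*'/>"
      + "  <xsl:template match='node()|@*'>"
      + "    <xsl:copy><xsl:apply-templates select='node()|@*'/></xsl:copy>"
      + "  </xsl:template>"
      + "  <xsl:template match='ByExchangeRateQuery'>"
      + "    <xsl:element name='ns0:{name()}' namespace='urn:example'>"
      + "      <xsl:apply-templates select='node()|@*'/>"
      + "    </xsl:element>"
      + "  </xsl:template>"
      + "</xsl:stylesheet>";

    // Applies the stylesheet to the given XML string and returns the result.
    static String transform(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSLT)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String input = "<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'>"
                     + "<ByExchangeRateQuery><rate>1.25</rate></ByExchangeRateQuery>"
                     + "</soapenv:Envelope>";
        // Only ByExchangeRateQuery gains the ns0 prefix; everything else is copied as-is.
        System.out.println(transform(input));
    }
}
```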

    Kalpa WelivitigodaWSO2 Carbon Kernel 4.3.0 Released

    I am a bit late to announce, but here it is...

    Hi Folks,

    WSO2 Carbon team is pleased to announce the release of Carbon Kernel 4.3.0.

    What is WSO2 Carbon

    WSO2 Carbon redefines middleware by providing an integrated and componentized middleware platform that adapts to the specific needs of any enterprise IT project - on premise or in the cloud. 100% open source and standards-based, WSO2 Carbon enables developers to rapidly orchestrate business processes, compose applications and develop services using WSO2 Developer Studio and a broad range of business and technical services that integrate with legacy, packaged and SaaS applications.

    WSO2 Carbon kernel, the lean, modular, OSGi-based platform, is the base of the WSO2 Carbon platform. It is a composable server architecture which inherits modularity and dynamism from the OSGi framework. WSO2 Carbon kernel can be considered a framework for server development. All the WSO2 products are composed as a collection of reusable components running on this kernel. These products/components inherit all the core services provided by the Carbon kernel, such as registry/repository, user management, transports, caching, clustering, logging and deployment-related features.

    You can download the released distribution from the product home page:

    How to Contribute 

    What's New In This Release

    • Simplified logging story with pluggable log provider support.
    • Upgraded versions of Hazelcast, Log4j, BouncyCastle.
    • Improved Composite application support.

    Key Features

    • Composable Server Architecture - Provides a modular, light-weight, OSGi-based server development framework.
    • Carbon Application (CApp) deployment support.
    • Multi-Profile Support for Carbon Platform - This enables a single product to run on multiple modes/profiles.
    • Carbon + Tomcat JNDI Context - Provides the ability to access both Carbon-level and Tomcat-level JNDI resources to applications using a single JNDI context.
    • Distributed Caching and Clustering functionality - Carbon kernel provides a distributed cache and clustering implementation which is based on Hazelcast, a group communication framework.
    • Pluggable Transports Framework - This is based on the Axis2 transports module.
    • Registry/Repository API - Provides the core registry/repository API for component developers.
    • User Management API - Provides a basic user management API for component developers.
    • Logging - Carbon kernel supports both Java logging as well as Log4j. Logs from both these sources will be aggregated to a single output.
    • Pluggable artifact deployer framework - The kernel can be extended to deploy any kind of artifact, such as Web services, Web apps, business processes, proxy services, user stores etc.
    • Deployment Synchronization - Provides synchronization of deployed artifacts across a product cluster.
    • Ghost Deployment - Provides a lazy loading mechanism for deployed artifacts.
    • Multi-tenancy support - The roots of multi-tenancy in the Carbon platform lie in the Carbon kernel. This feature includes tenant-level isolation as well as lazy loading of tenants.

    Fixed Issues

    Known Issues

    Contact Us

    WSO2 Carbon developers can be contacted via the mailing lists:

    Reporting Issues
    You can use the Carbon JIRA issue tracker to report issues, enhancements and feature requests for WSO2 Carbon.

    Thank you for your interest in WSO2 Carbon Kernel.

    --The WSO2 Carbon Team--

    Amila MaharachchiLets get started with WSO2 API Cloud

    A few weeks ago, I wrote a blog post on "Getting started with WSO2 App Cloud". The intention of writing it was to line up some of the screencasts we have done and published which help you use the WSO2 App Cloud.

    Couple of weeks after we started publishing App Cloud videos, we started publishing API Cloud videos too. Intention of this blog post is to introduce those screencasts we have added.

    WSO2 API Cloud provides you the API management service in the cloud. You can create and publish APIs very easily using it, and it is powered by a well-established product of the WSO2 product stack, WSO2 API Manager. If you already have an account in WSO2 Cloud, you can log in to WSO2 Cloud and navigate to the API Cloud.

    First thing you would do, when you go to the API Cloud is, creating and publishing an API. Following screencast will help you on how to do it.

    After publishing your API, it appears in your API store. If you advertise your API store to others, they can subscribe to the APIs and invoke them. The next screencast shows how to do that.
    You may have a very promising API created and published, now available in your API store. But, for it to be successful, you need to spread the word. To do that, WSO2 API Cloud has some useful social and feedback channels. You can allow the users to rate your API, comment on it, share it using social media and also start interactive discussions via the forums. The following tutorial showcases those capabilities.
    For a developer to use your API easily, its documentation is important. Therefore, WSO2 API Cloud allows you to add documentation for your API. There are several options, such as inline documentation, uploading already available docs and pointing to docs available online. This video showcases the documentation features.
    We will keep adding more screencasts/tutorials to our API Cloud playlist. Stay tuned and enjoy experiencing the power of WSO2 API Cloud.

    John Mathon7 Reasons to get API Management, 7 Features to look for in API Management and 7 Advanced Features you may need

    Over the last couple of years, the logic of using an API Management service or software has become more and more compelling. I would say that API Management continues to be the hottest technology trend in the market today and is having a tremendous impact on every aspect of technology development.

    Many organizations have already had a service-oriented infrastructure for a decade and may be providing external APIs to some customers today. There are many reasons I've found that organizations want to go beyond this and invest in API Management.

    7 triggering reasons why customers decide to deploy API Management

    1. Need to provide APIs to customers or partners.

    2. Want to generate revenue from APIs.

    3. Desire to provide APIs for building their own mobile applications.

    4. Desire to reuse existing services in the organization better.

    5. Desire to build new reusable services using modern RESTful services.

    6. Security concerns with existing external services.

    7. Conviction that the API-centric paradigm is the way forward.


    7 top features that customers want in API Management

    Whatever the reason a customer comes to the realization that API Management will be useful, they are looking at the following features as critical in the implementation:


    1) Deploy quickly and cheaply

    Sometimes companies have hundreds of APIs they want to register soon.  Sometimes it is one, but in any case they will want something that makes deploying services fast, easy and cheap.

    2)  Lifecycle of APIs

    One of the most complicated parts of managing an API is handling version change and migration of customers. Migrating customers gracefully, supporting multiple simultaneous versions and communicating with customers is hard. A key feature of API Management is making this easier.


    3) Security

    It is frequently the case that a customer built an API or set of APIs and has poor, outmoded or not well-thought-out security. We still see customers who are using basic authentication. An API Management platform should allow integration of state-of-the-art OAuth2, OpenID, XACML and other security protocols as well as API keys and more. API Management can help you secure your existing services much better.


    4) Tracking usage

    A major requirement is tracking everything about usage of APIs by customers to enable rapid improvement, phase-out of unused features and discovery of new opportunities. Tracking can also be used to bill customers, allocate costs and to figure out how service load changes, in order to plan and meet demand better. API Management software should give you all the big data information to do analytics, produce dashboards for executive analysis, and allow you to learn about your services and customers.

    5) A community quickly

    Some customers are looking to attract a large number of potential developers to their APIs, so they need a social environment that can enable rapid adoption of the APIs. A key benefit of API Management is to provide a social environment that makes it easy for people to learn about the services available, comment and interact with people. It's important that the API Management solution provide a user-friendly way to use the APIs.

    6) Integrating existing services into APIs from legacy systems quickly

    Sometimes the services you have are legacy and require adapters and mediation to create RESTful APIs. Sometimes you want to run the service locally and sometimes you would like a turnkey cloud-based solution. API Management should be available as an on-premise solution if you want it, or a cloud solution if you want it, and should come with powerful integration tools like ESBs, MBs, BPSs and connectors/adapters to facilitate mediation and integration.

    7) Throttling and Tier Management

    Not all customers are equal and some customers get special treatment. An important feature for most API Management hopefuls is being able to put customers into tiers depending on their value and need, so that you can give each the level of service they are expecting, tell them what they will get, and monitor what service they are getting. This could be a number of calls per minute or per day, a number of messages before they are throttled, sizes of messages, or specific features or APIs available. You may have constraints on certain times of day or you may want to guarantee a certain up-time. You need to be able to allocate to each customer the service expected and track it to be sure you are delivering it.

    7 Advanced Features

    Some customers for API Management are looking for something special from their API Management solution:


    1) Platform as a Service (PaaS)

    Customers want to make it easy for themselves or their customers to use APIs to build mobile or other applications without having to spin up a sophisticated development environment.   They want a PaaS environment where customers can build applications without having to leave their environment.   This makes their APIs stickier.   It also allows them to see who is using their APIs and enforce additional security.   PaaS environments allow an API provider to control which applications get built or shared.  This can be very valuable in Financial, regulated or secure services.

    2) Sophisticated multi-server, environment lifecycle and production

    Some customers want to be able to manage the lifecycle of their APIs from development, staging, testing, deployment all within API Management software.

    3) Phased deployment and migration

    More advanced customers want to be able to deploy a new version of an API to selected customers or to a subset initially, to ensure that any problems will be caught before their whole customer base is exposed to them.


    4) Pluggable security

    Larger organizations have numerous security systems already or may have advanced security requirements.  Having a flexible security architecture in the API management software is critical for them.


    5) Auto-discovery

    Customers frequently ask me for this. They want to be able to populate their API stores with existing APIs and discover metadata automatically. They want to be able to detect automatically when new services are available or being used. This is only partially possible today but is a desirable feature.


    6) IoT (Internet of Things)  support

    IoT devices have an API and certain devices use different protocols for those APIs.  IoT support can include specialized support for certain protocols as well as enhanced capabilities to manage lifecycle of IoT devices, to deliver upgrades or device management capabilities.    Collecting data is critical to IoT device functionality so it plays an important part not just of the API but also the device and diagnostics of the device and the function the device performs.

    7) Advanced analytics

    API Management should support BigData collection and rudimentary statistics collection and eventing.   However, customers frequently want to have much more sophisticated analysis tools, graphics, real-time capability and CEP (Complex Event Processing.)



    API Management is one of the top priorities of almost every organization I talk to. Thinking through what you need or will need from an API Management vendor is an important CIO-level decision today. The API management function becomes key to the organization's renewal, enterprise refactoring, and new products and services, and it leads the digitization trend by supplying the analytics to understand the digital world most corporations are working towards.

    Chathurika Erandi De SilvaUpdating the Secure Vault after changing the default keystore

    In this post, I will be explaining a simple technique that WSO2 Carbon provides as a feature.

    Before following this post, please take time to read on Enabling Secure Vault for Carbon Server  

    Let's consider the following scenario

    Tom has secure vault applied to his WSO2 Carbon Server. Then he decided to change the default keystore. Now he needs to update the Secure Vault with the new keystore.

    How can he achieve this?

    Answer: WSO2 Carbon provides the -Dchange option for the ciphertool to achieve this.

    E.g. after the keystore is changed, if we need to update the secure vault, we need to run the ciphertool with the -Dchange option:

    sh ciphertool.sh -Dchange

    As shown in Figure 1 below, when the above command is run, it provides us the facility to re-encrypt the passwords that the secure vault had encrypted before.

    Figure 1: Running Ciphertool with -Dchange option

    Saliya EkanayakeRunning C# MPI.NET Applications with Mono and OpenMPI

    I wrote an earlier post on the same subject, but just realized it's not detailed enough even for me to retry, hence the reason for this post.
    I've tested this on FutureGrid with Infiniband to run our C# based pairwise clustering program on real data up to 32 nodes (I didn't find any restriction to go above this many nodes - it was just the maximum I could reserve at that time).
    What you'll need
    • Mono 3.4.0
    • MPI.NET source code revision 338.
        svn co -r 338
    • OpenMPI 1.4.3. Note this is a retired version of OpenMPI and we are using it only because that's the best that I could get MPI.NET to compile against. If in the future the MPI.NET team provides support for a newer version of OpenMPI, you may be able to use that as well.
    • Automake 1.9. Newer versions may work, but I encountered some errors in the past, which made me stick with version 1.9.
    How to install
    1. I suggest installing everything to a user directory, which will avoid you requiring super user privileges. Let's create a directory called build_mono inside home directory.
       mkdir ~/build_mono
      The following lines added to your ~/.bashrc will help you follow the rest of the document (the exact values below are assumptions; adjust them to your layout):

       export BUILD_MONO=$HOME/build_mono
       export PATH=$BUILD_MONO/bin:$PATH
       export LD_LIBRARY_PATH=$BUILD_MONO/lib:$LD_LIBRARY_PATH
       export ac_cv_path_ILASM=$BUILD_MONO/bin/ilasm

      Once these lines are added do,
       source ~/.bashrc
    2. Build automake by first going to the directory that contains automake-1.9.tar.gz and doing,
       tar -xzf automake-1.9.tar.gz
      cd automake-1.9
      ./configure --prefix=$BUILD_MONO
      make install
      You can verify the installation by typing which automake, which should point to automake inside $BUILD_MONO/bin
    3. Build OpenMPI. Again, change directory to where you downloaded openmpi-1.4.3.tar.gz and do,
       tar -xzf openmpi-1.4.3.tar.gz
      cd openmpi-1.4.3
      ./configure --prefix=$BUILD_MONO
      make install
      Optionally if Infiniband is available you can point to the verbs.h (usually this is in /usr/include/infiniband/) by specifying the folder /usr in the above configure command as,
       ./configure --prefix=$BUILD_MONO --with-openib=/usr
       If building OpenMPI is successful, you'll see the following output for the mpirun --version command,
       mpirun (Open MPI) 1.4.3

      Report bugs to
      Also, to make sure the Infiniband module is built correctly (if specified) you can do,
       ompi_info|grep openib
      which, should output the following.
       MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.3)
    4. Build Mono. Go to directory containing mono-3.4.0.tar.bz2 and do,
       tar -xjf mono-3.4.0.tar.bz2
      cd mono-3.4.0
      Mono 3.4.0 release is missing a file, which you'll need to add by pasting the following content into a file called ./mcs/tools/xbuild/targets/Microsoft.Portable.Common.targets
       <Project xmlns="">
      <Import Project="..\Microsoft.Portable.Core.props" />
      <Import Project="..\Microsoft.Portable.Core.targets" />
      </Project>
      You can continue with the build by following,
       ./configure --prefix=$BUILD_MONO
      make install
      There are several configuration parameters that you can play with and I suggest going through them in ./configure --help. One parameter, in particular, that I'd like to test with is --with-tls=pthread
    5. Build MPI.NET. If you were wondering why we had that ac_cv_path_ILASM variable in ~/.bashrc, this is where it'll be used. MPI.NET by default tries to find the Intermediate Language Assembler (ILASM) at /usr/bin/ilasm2, which does not exist because 1. we built Mono into $BUILD_MONO and not /usr, and 2. newer versions of Mono call this ilasm, not ilasm2. Therefore, after digging through the configure file I found that we can specify the path to the ILASM by exporting the above environment variable.
      Alright, back to building MPI.NET. First copy the downloaded patch to the subversion checkout of MPI.NET. Then change directory there and do,
       patch MPI/ <
      This will say some hunks failed to apply, but that should be fine. It only means that those are already fixed in the checkout. Once patching is completed continue with the following.
      ./configure --prefix=$BUILD_MONO
      make install
      At this point you should be able to find MPI.dll and MPI.dll.config inside MPI directory, which you can use to bind against your C# MPI application.
    How to run
    • Here's a sample MPI program written in C# using MPI.NET.
        using System;
       using MPI;

       namespace MPINETinMono
       {
           class Program
           {
               static void Main(string[] args)
               {
                   using (new MPI.Environment(ref args))
                   {
                       // print the rank, size and processor name of this MPI process
                       Console.Write("Rank {0} of {1} running on {2}\n",
                                     Communicator.world.Rank,
                                     Communicator.world.Size,
                                     MPI.Environment.ProcessorName);
                   }
               }
           }
       }
    • There are two ways that you can compile this program.
      1. Use Visual Studio referring to MPI.dll built on Windows
      2. Use mcs from Linux referring to MPI.dll built on Linux
        mcs Program.cs -reference:$MPI.NET_DIR/tools/mpi_net/MPI/MPI.dll
        where $MPI.NET_DIR refers to the subversion checkout directory of MPI.NET
        Either way you should be able to get Program.exe in the end.
    • Once you have the executable you can use mono with mpirun to run this in Linux. For example you can do the following within the directory of the executable,
        mpirun -np 4 mono ./Program.exe
      which will produce,
        Rank 0 of 4 running on i81
      Rank 2 of 4 running on i81
      Rank 1 of 4 running on i81
      Rank 3 of 4 running on i81
      where i81 is one of the compute nodes in FutureGrid cluster.
      You may also use other advanced options with mpirun to determine process mapping and binding. Note: the syntax for such control is different in the latest versions of OpenMPI. Therefore, it's a good idea to look at the different options from mpirun --help. For example you may be interested in specifying the following options,

      mpirun --display-map --mca btl ^tcp --hostfile $hostfile --bind-to-core --bysocket --npernode $ppn --cpus-per-proc $cpp -np $(($nodes*$ppn)) ...
      where --display-map will print how processes are bound to processing units and --mca btl ^tcp forces TCP to be turned off.
    That's all you'll need to run C# based MPI.NET applications in Linux with Mono and OpenMPI. Hope this helps!

    Hiranya JayathilakaDeveloping Web Services with Go

    Golang facilitates implementing powerful web applications and services using a very small amount of code. It can be used to implement both HTML rendering webapps as well as XML/JSON rendering web APIs. In this post, I'm going to demonstrate how easy it is to implement a simple JSON-based web service using Go. We are going to implement a simple addition service, that takes two integers as the input, and returns their sum as the output.

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type addReq struct {
        Arg1, Arg2 int
    }

    type addResp struct {
        Sum int
    }

    func addHandler(w http.ResponseWriter, r *http.Request) {
        decoder := json.NewDecoder(r.Body)
        var req addReq
        if err := decoder.Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        jsonString, err := json.Marshal(addResp{Sum: req.Arg1 + req.Arg2})
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        w.Write(jsonString)
    }

    func main() {
        http.HandleFunc("/add", addHandler)
        http.ListenAndServe(":8080", nil)
    }
    Let's review the code from top to bottom. First we need to import the JSON and HTTP packages into our code. The JSON package provides the functions for parsing and marshaling JSON messages. The HTTP package enables processing HTTP requests. Then we define two data types (addReq and addResp) to represent the incoming JSON request and the outgoing JSON response. Note how addReq contains two integers (Arg1, Arg2) for the two input values, and addResp contains only one integer (Sum) for holding the total.
    Next we define an HTTP handler function which implements the logic of our web service. This function simply parses the incoming request and populates an instance of the addReq struct. Then it creates an instance of the addResp struct and serializes it into JSON. The resulting JSON string is then written out using the http.ResponseWriter object.
    Finally, we have a main function that ties everything together and starts executing the web service. This main function simply registers our HTTP handler with the "/add" URL context and starts an HTTP server on port 8080. This means any requests sent to the "/add" URL will be dispatched to the addHandler function for processing.
    That's all there is to it. You may compile and run the program to try it out. Use Curl as follows to send a test request.

    curl -v -X POST -d '{"Arg1":5, "Arg2":4}' http://localhost:8080/add
    You will get a JSON response back with the total.

    Chandana NapagodaWSO2 G-Reg Modify Subject of the Email Notification

    WSO2 Governance Registry generates notifications for events triggered by various operations performed on resources and collections stored in the repository. Notifications can be consumed in a variety of formats including e-mail. The sample "E-mail Notification Customization" shows how we can modify the content of emails. It describes how to edit the email body and restrict email notifications to certain email addresses.

    Here I am going to extend that sample to modify the subject of the email.

    private void editSubject(MessageContext msgContext, String newSubject) {
        Map headers = (Map) msgContext.getOptions().getProperty(MessageContext.TRANSPORT_HEADERS);
        headers.put(MailConstants.MAIL_HEADER_SUBJECT, newSubject);
    }

    You will have to import the org.apache.axis2.transport.mail.MailConstants class and other related Java util classes as well.

    When you are building the sample code, please follow the instructions available in the Governance Registry Sample Documentation.
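Stripped of the Axis2 plumbing, the change boils down to replacing one entry in the transport-headers map. A minimal standalone sketch, assuming MailConstants.MAIL_HEADER_SUBJECT resolves to the literal header name "Subject":

```java
import java.util.HashMap;
import java.util.Map;

public class SubjectEditSketch {

    // Stand-in for MailConstants.MAIL_HEADER_SUBJECT; the value "Subject"
    // is an assumption made so the sketch runs standalone.
    static final String MAIL_HEADER_SUBJECT = "Subject";

    // Mirrors editSubject(): replace the subject entry in the transport-headers map.
    static Map<String, String> editSubject(Map<String, String> transportHeaders, String newSubject) {
        transportHeaders.put(MAIL_HEADER_SUBJECT, newSubject);
        return transportHeaders;
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        headers.put(MAIL_HEADER_SUBJECT, "[G-Reg] Resource updated"); // hypothetical default subject
        editSubject(headers, "Custom subject: resource updated");
        System.out.println(headers.get(MAIL_HEADER_SUBJECT));
    }
}
```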

    Umesha GunasingheUse Case scenarios with WSO2 Identity Server 5.0.0 - Part 2

    Hi All,

    Today let's talk about database connectivity with the WSO2 Identity Server. As you know, WSO2 Identity Server can be deployed over any LDAP, AD or JDBC user store. In fact, you can write a custom user store manager and connect to any legacy databases.

    WSO2 IS has the concept of a primary database and secondary databases. If you are to change the primary database, you will have to change the configuration files and restart the server. But if you are going to add secondary databases, you can do this through the IS management console. This is some background information on the product.

    Now, let's talk about a common use case scenario.

    Say you need to connect the IS server to many databases. Clearly you can do this by connecting all the databases as secondary databases. Then, when a user is trying to get authenticated, the user will be authenticated by checking all the connected databases.

    Solution 1
    If your user bases are located in different geographical locations - say, for example, you have three offices located in three countries, and you need to connect the Identity Server to the three user databases located in these countries - what you can do is connect these databases as secondary databases via VPN connections.

    Solution 2
    Another solution would be to have three Identity Servers, one in each of these countries, and one central Identity Server to which users from the other three servers are provisioned and against which they will be authenticated.

    Please check the following resource links for implementations of these scenarios:


    Cheers ! Last post for year 2014...have a wonderful 2015 ahead...see you in the next year ;)

    Malintha AdikariHow to grant access permission for user from external machine to MySQL database

    I faced this problem while doing a deployment. I created a database on the MySQL server running on one machine. Then I wanted to grant access permission to this database from another machine. This is a two-step process.

    Say you want to grant access permission for a remote machine, and your DB server is running on another machine.

    1. Create a user for the remote machine with the preferred username and password

    mysql> CREATE USER `abcuser`@`` IDENTIFIED BY 'abcpassword'; 

     Here "abcuser" is the username

               "abcpassword" is the password for that user

    2. Then grant permissions for that user on your database

    GRANT ALL PRIVILEGES ON registry.* TO 'abcuser'@''; 

    Here "registry" is the DB name

    That's it!


    Sohani Weerasinghe

    Get a JSON response with WSO2 DSS

    WSO2 Data Services Server supports both XML and JSON output, and in order to receive the message as JSON you need to change the following configurations.
    You can achieve this by enabling the 'content negotiation' property in the axis2.xml file and the axis2_client.xml file. You should also send the requests to the server with the "Accept: application/json" header; as a result, DSS will return the response in JSON format. Please follow the steps below.

    1. In axis2.xml file at <DSS_HOME>/repository/conf/axis2, include the following property 

    <parameter name="httpContentNegotiation">true</parameter> 

    2. In axis2_client.xml file at <DSS_HOME>/repository/conf/axis2 enable the following property 

    <parameter name="httpContentNegotiation">true</parameter> 

    3. When sending the request, please make sure to add "Accept: application/json" to the request header 

    Note that if you are using tenants, then the above parameter needs to be set in 'tenant-axis2.xml' as well.
    Now you can use the ResourcesSample, a sample available in your DSS distribution, to test the result. Send a request to the server using curl, adding "Accept: application/json" to the request header as shown below.

    curl -X GET -H "Accept:application/json" http://localhost:9763/services/samples/ResourcesSample.HTTPEndpoint/product/S10_1678

    Sohani Weerasinghe

    XML to JSON conversion using a proxy in WSO2 ESB

    This post describes how to convert an XML payload into JSON. Basically this can be achieved by using the property "messageType" in the synapse configuration.

    The property is as follows.

    <property name="messageType" value="application/json" scope="axis2"/>

    I have created a proxy called TestProxy with the SimpleStockQuoteService endpoint as follows.

     <?xml version="1.0" encoding="UTF-8"?>  

     <proxy xmlns="http://ws.apache.org/ns/synapse"
            name="TestProxy"
            transports="https,http"
            startOnLoad="true">
        <target>
           <outSequence>
              <property name="messageType" value="application/json" scope="axis2"/>  
              <log level="full"/>  
              <send/>
           </outSequence>
           <endpoint>
              <address uri="http://localhost:9000/services/SimpleStockQuoteService/"/>  
           </endpoint>
        </target>
     </proxy>

    In order to invoke this proxy you can follow below steps.

    1. Build the SimpleStockQuoteService at <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService by running the "ant" command

    2. Then start the sample backend that serves the response, which is provided with WSO2 ESB, by navigating to <ESB_HOME>/samples/axis2Server and entering the command "sh axis2server.sh"

    3. Send the below SOAP request to the proxy service using SOAP UI.

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:m0="http://services.samples" xmlns:xsd="http://services.samples/xsd">  
       <soapenv:Header/>
       <soapenv:Body>
          <m0:getQuote xmlns:m0="http://services.samples" id="12345">  
             <m0:request>
                <!-- any stock symbol; IBM is illustrative -->
                <m0:symbol>IBM</m0:symbol>
             </m0:request>
          </m0:getQuote>
       </soapenv:Body>
    </soapenv:Envelope>


    Response should be as follows:

     HTTP/1.1 200 OK  
     Host: sohani-ThinkPad-T530:8280  
     SOAPAction: "urn:getQuote"  
     Accept-Encoding: gzip,deflate  
     Content-Type: application/json  
     Date: Fri, 29 Dec 2014 09:00:00 GMT  
     Server: WSO2-PassThrough-HTTP  
     Transfer-Encoding: chunked  
     Connection: Keep-Alive  


    Lakmali BaminiwattaCustomizing workflows in WSO2 API Manager

    In WSO2 API Manager, workflow extensions allow you to attach a custom workflow to various operations in the API Manager for:

    • User Signup
    • Application Creation
    • Application Registration
    • Subscription

    By default, the API Manager workflows have Simple Workflow Executor engaged in them. The Simple Workflow Executor carries out an operation without any intervention by a workflow admin. For example, when the user creates an application, the Simple Workflow Executor allows the application to be created without the need for an admin to approve the creation process.

    In order to enforce intervention by a workflow admin, you can engage the WS Workflow Executor. It invokes an external Web service when executing a workflow and the process completes based on the output of the Web service. For example, when the user creates an application, the request goes into an intermediary state where it remains until authorized by a workflow admin.

    You can try out the default workflow extensions provided by WSO2 API Manager to engage business processes with API management operations as described here.

    There are two extension points exposed with WSO2 API Manager to customize workflows.

    Customizing the Workflow Executor
    • When you need to change the workflow logic
    • When you need to change the Data Formats

    Customizing the Business Process
    • When you are happy with the Data Formats and need to change only the business flow
    This blog post provides a sample of how we can customize workflow executors and change the workflow logic.

    First, let's look at the WorkflowExecutor class, which each WS workflow executor extends.

    /**
     * This is the class that should be extended by each workflow executor implementation.
     */
    public abstract class WorkflowExecutor {
        // common workflow logic omitted
    }

    /**
     * The Application Registration Web Service Executor.
     */
    public class ApplicationRegistrationWSWorkflowExecutor extends WorkflowExecutor {

        //Logic to execute the workflow
        public void execute(WorkflowDTO workflowDTO) { }

        //Logic to complete the workflow
        public void complete(WorkflowDTO workflowDTO) { }

        //Returns the workflow type - ex: WF_TYPE_AM_USER_SIGNUP
        public String getWorkflowType() { }

        //Used to get workflow details
        public List getWorkflowDetails(String workflowStatus) { }
    }


    As an example scenario, let's consider the Application Registration workflow of WSO2 API Manager.
    After an application is created, you can subscribe to available APIs, but you get the consumer key/secret and access tokens only after registering the application. There are two types of registrations that can be done to an application: production and sandbox. You can change the default application registration workflow in situations such as the following:

    • To issue only sandbox keys when creating production keys is deferred until testing is complete.
    • To restrict untrusted applications from creating production keys, allowing only the creation of sandbox keys.
    • To make API subscribers go through an approval process before creating any type of access token.
    Step-by-step instructions on how to configure the Application Registration workflow can be found here.

    Sending an email to Administrator upon Application Registration

    As an extension of this Application Registration workflow, we are going to customize the workflow executor and send an email to the Administrator once the workflow is triggered.

    1. First write a new executor extending ApplicationRegistrationWSWorkflowExecutor

    public class AppRegistrationEmailSender extends ApplicationRegistrationWSWorkflowExecutor {

    2. Add private String attributes and public getters and setters for email properties (adminEmail, emailAddress, emailPassword)

    private String adminEmail;
    private String emailAddress;
    private String emailPassword;

    public String getAdminEmail() {
        return adminEmail;
    }

    public void setAdminEmail(String adminEmail) {
        this.adminEmail = adminEmail;
    }

    public String getEmailAddress() {
        return emailAddress;
    }

    public void setEmailAddress(String emailAddress) {
        this.emailAddress = emailAddress;
    }

    public String getEmailPassword() {
        return emailPassword;
    }

    public void setEmailPassword(String emailPassword) {
        this.emailPassword = emailPassword;
    }
    3. Override the execute(WorkflowDTO workflowDTO) method and implement the email sending logic. Finally, invoke super.execute(workflowDTO).

    public void execute(WorkflowDTO workflowDTO) throws WorkflowException {

        ApplicationRegistrationWorkflowDTO appDTO = (ApplicationRegistrationWorkflowDTO) workflowDTO;

        String emailSubject = appDTO.getKeyType() + " Application Registration";

        String emailText = "Application " + appDTO.getApplication().getName() + " is registered for " +
                appDTO.getKeyType() + " key by user " + appDTO.getUserName();

        try {
            EmailSender.sendEmail(emailAddress, emailPassword, adminEmail, emailSubject, emailText);
        } catch (MessagingException e) {
            throw new WorkflowException("Error while sending the notification email", e);
        }

        super.execute(workflowDTO);
    }

    Find the complete source code of the custom workflow executor below.

    package org.wso2.sample.workflow;

    import javax.mail.MessagingException;

    import org.wso2.carbon.apimgt.impl.dto.ApplicationRegistrationWorkflowDTO;
    import org.wso2.carbon.apimgt.impl.dto.WorkflowDTO;
    import org.wso2.carbon.apimgt.impl.workflow.ApplicationRegistrationWSWorkflowExecutor;
    import org.wso2.carbon.apimgt.impl.workflow.WorkflowException;

    public class AppRegistrationEmailSender extends ApplicationRegistrationWSWorkflowExecutor {

        private String adminEmail;
        private String emailAddress;
        private String emailPassword;

        public void execute(WorkflowDTO workflowDTO) throws WorkflowException {

            ApplicationRegistrationWorkflowDTO appDTO = (ApplicationRegistrationWorkflowDTO) workflowDTO;

            String emailSubject = appDTO.getKeyType() + " Application Registration";

            String emailText = "Application " + appDTO.getApplication().getName() + " is registered for " +
                    appDTO.getKeyType() + " key by user " + appDTO.getUserName();

            try {
                EmailSender.sendEmail(emailAddress, emailPassword, adminEmail, emailSubject, emailText);
            } catch (MessagingException e) {
                throw new WorkflowException("Error while sending the notification email", e);
            }

            super.execute(workflowDTO);
        }

        public String getAdminEmail() {
            return adminEmail;
        }

        public void setAdminEmail(String adminEmail) {
            this.adminEmail = adminEmail;
        }

        public String getEmailAddress() {
            return emailAddress;
        }

        public void setEmailAddress(String emailAddress) {
            this.emailAddress = emailAddress;
        }

        public String getEmailPassword() {
            return emailPassword;
        }

        public void setEmailPassword(String emailPassword) {
            this.emailPassword = emailPassword;
        }
    }


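    The EmailSender helper referenced above is not part of API Manager; it is a class you supply in the same package. A minimal sketch using the JavaMail API might look like the following. The SMTP host, port, and TLS settings are assumptions for a Gmail-style server and must be adapted to your mail provider.

    ```java
    package org.wso2.sample.workflow;

    import java.util.Properties;
    import javax.mail.Authenticator;
    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.PasswordAuthentication;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class EmailSender {

        // Sends a plain-text email from 'from' (authenticating with 'password') to 'to'.
        public static void sendEmail(String from, String password, String to,
                                     String subject, String text) throws MessagingException {
            // SMTP settings - assumed values, replace with your mail server's details
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.gmail.com");
            props.put("mail.smtp.port", "587");
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");

            // Authenticate as the sending account
            Session session = Session.getInstance(props, new Authenticator() {
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(from, password);
                }
            });

            // Build and send the message
            Message message = new MimeMessage(session);
            message.setFrom(new InternetAddress(from));
            message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
            message.setSubject(subject);
            message.setText(text);
            Transport.send(message);
        }
    }
    ```

    Note that sendEmail blocks until the SMTP server accepts the message, so the workflow trigger will wait on the mail server; for production use you may want to send asynchronously.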
    Now modify the existing ProductionApplicationRegistration workflow configuration to use the new executor. You can make the same modification to the SandboxApplicationRegistration workflow.
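    The workflow configuration lives in the registry resource _system/governance/apimgt/applicationdata/workflow-extensions.xml. A sketch of the modified entries might look like this; the email addresses and password are placeholder values you must replace with your own, and the executor properties are injected into the matching setters of AppRegistrationEmailSender:

    ```xml
    <WorkFlowExtensions>
        <!-- ... other workflow executor entries remain unchanged ... -->
        <ProductionApplicationRegistration
                executor="org.wso2.sample.workflow.AppRegistrationEmailSender">
            <Property name="adminEmail">admin@example.com</Property>
            <Property name="emailAddress">notifications@example.com</Property>
            <Property name="emailPassword">password</Property>
        </ProductionApplicationRegistration>
        <SandboxApplicationRegistration
                executor="org.wso2.sample.workflow.AppRegistrationEmailSender">
            <Property name="adminEmail">admin@example.com</Property>
            <Property name="emailAddress">notifications@example.com</Property>
            <Property name="emailPassword">password</Property>
        </SandboxApplicationRegistration>
    </WorkFlowExtensions>
    ```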

    With this change, the Application Registration workflows will be triggered through AppRegistrationEmailSender, which sends an email to the adminEmail address and then invokes the default ApplicationRegistrationWSWorkflowExecutor.

    Nadeeshaan Gunasinghe: MySQL Performance Testing with MySQLSLAP and Apache Bench

    When we create a database, it is always difficult to determine how the database system will perform under heavy data sets. In real-world use there are situations in which more than one user tries to access your application or web site concurrently, and in such situations the performance degradation can have a great effect on your database. Therefore it is always necessary to put your database under a stress test before you put it to work in production.
    In this article I try to give a brief explanation of how to use some effective tools to test your database system under heavy loads.


    MySQLSLAP

    This tool comes with your MySQL installation and can be found in PATH_TO_MYSQL/mysql/bin. Inside this directory you can find a script called mysqlslap.exe (on Windows). After locating the script, change to that location; you can then invoke it as mysqlslap [options].

    The most widely used options for testing a query with mysqlslap are the following. With --query you specify the location of the SQL file containing your query. With --create-schema you specify the name of your database. With --concurrency you specify the number of clients that run the query concurrently, and with --iterations the number of times the test is run. If you include more than one query in the SQL file, you need a delimiter to separate the queries and must specify it with --delimiter.
    With a concurrency of 50 and 5 iterations, the query runs 5 times for each concurrent client, so the same query runs 250 times in total for this configuration.
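    Putting those options together, an invocation matching the numbers above (50 concurrent clients, 5 iterations) might look like the following; the database name and file path are placeholders for your own:

    ```shell
    # Run the queries in query.sql against the 'employees' schema:
    # 50 concurrent clients, 5 iterations, queries separated by ';'
    mysqlslap --user=root --password \
      --create-schema=employees \
      --query=/path/to/query.sql \
      --delimiter=";" \
      --concurrency=50 \
      --iterations=5
    ```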
    After the benchmark you can see the results and analyze them against your requirements. Beyond the options mentioned above, there are others, which you can find in the official mysqlslap documentation.

    Apache Bench

    With mysqlslap you specify the individual queries, or set of queries, to be stress tested. Apache Bench, by contrast, makes it easy to test your web application directly: you do not need to list the queries, which lets you measure the performance of your application as it executes its set of queries when a page loads.
    After installing Apache Bench on your system, change to the location where you installed it.
    Now you can simply issue the command ab -n 500 -c 100 <Enter Url here> to test the page at the given URL. The -n option sets the number of requests to perform and -c sets the number of concurrent requests.
    You can find the other available set of options in Apache Bench Options.

    Sanjiva Weerawarana: North Korea, The Interview and Movie Ethics

    It's been quite a while since I blogged .. I'm going to try to write a bit more consistently from now on (try being the key word!). I thought I'd start with a light topic!

    So I watched the now infamous The Interview two nights ago. I'm no movie critic, but I thought it was a cheap, crass, stupid movie with no depth whatsoever. More of a dumbass slapstick movie than anything else.

    Again, I'm no movie critic so I don't recommend you listen to me; watch it and make up your own mind :-). I have made up mine!

    HOWEVER, I do think the Internet literati's reaction to this movie is grossly wrong, unfair and arrogant.

    Has there ever been any other Hollywood movie where the SITTING president of a country is made to look like a jackass and assassinated in the most stupid way? I can't think of any movies like that. In fact, I don't think Bollywood or any other movie system has produced such a movie.

    When Hollywood movies have US presidents in them they're always made out to be the hero (e.g. White House Down) and they pretty much never die. If they do die, then they die a hero (e.g. 2012) in true patriotic form.

    I don't recall seeing a single movie where David Cameron or Angela Merkel or Narendra Modi or any other sitting head of government was made to look like a fool and killed as the main point of the movie (or in any other fashion).

    I believe the US Secret Service takes ANY threats against the US president very seriously. According to Wikipedia, a threat against the US president is a class D felony (presumably a bad thing). I've heard of students who send anonymous (joking) email threats get tracked down and get a nice visit.

    So, suppose Sony Pictures decided to make a movie which shows President Obama being a jackass and then being killed? How far would that go before the US Secret Service shuts it down?

    In my view the fact that this movie was conceived, funded and made just goes to show how little r