WSO2 Venus

Nandika Jayawardana: How to configure IBM MQ 8 with WSO2 ESB

In this blog post, we will look at how to configure IBM MQ version 8 with WSO2 ESB and implement a proxy service to consume messages from a queue in IBM MQ.

The following are the steps we need to follow in order to configure the ESB and implement our proxy service.

1. Create the relevant JMS Administrative objects in IBM MQ.
2. Generate the JNDI binding file from IBM MQ
3. Configure WSO2 ESB JMS transport with the generated binding file and connection factory information.
4. Implement the proxy service and deploy it.
5. Publish a message to MQ and observe how it is consumed by ESB.

Create a Queue Manager, a Queue, and a Server Connection Channel in MQ


Start the WebSphere MQ Explorer. If you are not running under an administrator account, right-click on the icon and select the Run as Administrator option.

Click on Queue Managers and select New => Queue Manager to create a new queue manager.

We will name the queue manager ESBQManager. Select the create server connection channel option as you step through the wizard with the Next button. You will get the option to specify the port this queue manager will use. Since we do not have any other queue managers at the moment, we can use the default port, 1414.

Now we have created a queue manager object. Next we need to create a local queue which we will use to publish messages for the ESB to consume. Let's name this queue LocalQueue1.

Expand the newly created ESBQManager, click on Queues and select New => Local Queue.

We will use default options for our local queue.

Next we need to create a server connection channel, which will be used to connect to the queue manager.

Select Channels => New => Server-connection Channel, name the channel mychannel, and accept the default options when creating the channel.

Now we have created our queue manager, queue and server connection channel.

Generating the binding file

Next we need to generate the binding file, which will be used by the IBM MQ client libraries for JNDI lookups. For that, we first need to create a directory where this binding file will be stored. I have created a directory named G:\jndidirectory for this purpose.

Now go to MQ Explorer, click on JMS Administered Objects and select Add Initial Context.

In the connection details wizard, select the File System option, browse to our newly created directory, click Next and then Finish.

Now, under the JMS Administered objects, we should be able to see our file initial context.

Expand it and click on Connection Factories to create a new connection factory.

We will name our connection factory MyQueueConnectionFactory. For the connection factory type, select Queue Connection Factory.

Click Next and then Finish. Now click on the newly created connection factory and select Properties. Click on the Connection option, then browse and select our queue manager. You can also configure the port and the host name for the connection factory. Since we used the default values, we do not need to make any changes here.

For the other options, go with the defaults. Next, we need to create a JMS destination. We will use the same name, LocalQueue1, for our destination and map it to our queue LocalQueue1. Click on Destinations, select New => Destination, and provide the name LocalQueue1. When you get the option to select the queue manager and queue, browse and select ESBQManager and LocalQueue1.

Now we are done with creating the Initial Context. If you now browse to the directory we specified, you should be able to see the newly generated binding file.

In order to connect to the queue, we need to configure channel authentication. For ease of use, let's disable channel authentication for our scenario. For that, run the runmqsc tool from the command line and execute the following two commands. Note that you have to start the command prompt as an admin user. (These are the commonly used MQSC commands for disabling channel authentication; adjust them to your own security requirements.)

runmqsc ESBQManager

ALTER QMGR CHLAUTH(DISABLED)
REFRESH SECURITY TYPE(CONNAUTH)

Now we are done with configuring the IBM MQ.  

Configuring WSO2 ESB JMS Transport. 

Open the axis2.xml file found in the wso2esb-4.8.1\repository\conf\axis2 directory and add the following entries near the commented-out JMS transport receiver section.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
  <parameter name="default" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
  <parameter name="myQueueConnectionFactory1" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
</transportReceiver>

Similarly, add the JMS transport sender section as follows.

<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender">
  <parameter name="default" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
  <parameter name="myQueueConnectionFactory1" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
</transportSender>

Since we are using the IBM MQ queue manager's default configuration, it expects username/password client authentication. Here, the username and password are the login credentials of your logged-in operating system account.

Copy the MQ client libraries to the respective directories.

Copy jta.jar and jms.jar to the repository/components/lib directory.
Copy fscontext_1.0.0.jar to the repository/components/dropins directory. Download the jar files from here.

Deploy the JMSListener Proxy Service.

Now start the ESB and deploy the following simple proxy service. This proxy service acts as a listener on our queue LocalQueue1; whenever we put a message into the queue, the proxy service will pull that message out of the queue and log it.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="JMSListenerProxy" transports="jms">
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">LocalQueue1</parameter>
</proxy>

Testing our proxy service

Go to MQ Explorer and add a message to the local queue.

Now you will be able to see the message logged in the ESB console as well as in the log file.

Enjoy JMS with IBM MQ

Sanjeewa Malalgoda: How to use the API Manager Application workflow to automate the token generation process

Workflow extensions allow you to attach a custom workflow to various operations in the API Manager such as user signup, application creation, registration, subscription etc. By default, the API Manager workflows have Simple Workflow Executor engaged in them.

The Simple Workflow Executor carries out an operation without any intervention by a workflow admin. For example, when the user creates an application, the Simple Workflow Executor allows the application to be created without the need for an admin to approve the creation process.

Sometimes we may need to perform additional operations as part of a workflow.
In this example we will discuss how to generate access tokens once application creation has finished. By default, you need to generate keys after creating an application in the API Store. With this sample, that process is automated and access tokens are generated for your application automatically.

You can find more information about workflows in this document.

Let's first see how we can intercept the workflow-completion process and act on it.

The ApplicationCreationSimpleWorkflowExecutor.complete() method executes after we resume the workflow from BPS.
We can therefore write our own workflow executor implementation and do whatever we need there.
We will have the user name, application id, tenant domain and the other parameters required to trigger subscription/key generation.
If needed, we can directly call the DAO or APIConsumerImpl to generate the token (call getApplicationAccessKey).
In this case we generate the tokens from the workflow executor.

Below is the code for ApplicationCreationExecutor. This class is essentially the same as ApplicationCreationSimpleWorkflowExecutor, but it additionally generates the keys in ApplicationCreationExecutor.complete().
In this way the token is generated as soon as the application is created.

If needed, a user can call OAuthAdminService.getOAuthApplicationDataByAppName on the BPS side using a SOAP call to get these details. If you want to send a mail with the generated tokens, you can do that as well.


import java.util.List;
import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.apimgt.api.APIManagementException;
import org.wso2.carbon.apimgt.impl.APIConstants;
import org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO;
import org.wso2.carbon.apimgt.impl.dto.ApplicationWorkflowDTO;
import org.wso2.carbon.apimgt.impl.dto.WorkflowDTO;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowException;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowExecutor;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowConstants;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowStatus;

import org.wso2.carbon.apimgt.impl.dto.ApplicationRegistrationWorkflowDTO;
import org.wso2.carbon.apimgt.impl.APIManagerFactory;
import org.wso2.carbon.apimgt.api.APIConsumer;

public class ApplicationCreationExecutor extends WorkflowExecutor {

    private static final Log log = LogFactory.getLog(ApplicationCreationExecutor.class);

    private String userName;
    private String appName;

    public String getWorkflowType() {
        return WorkflowConstants.WF_TYPE_AM_APPLICATION_CREATION;
    }

    /**
     * Execute the workflow executor
     *
     * @param workFlowDTO
     *            - {@link ApplicationWorkflowDTO}
     * @throws WorkflowException
     */
    public void execute(WorkflowDTO workFlowDTO) throws WorkflowException {
        if (log.isDebugEnabled()) {
            log.debug("Executing Application creation Workflow..");
        }
        // Approve immediately and move on to complete(), as the simple
        // workflow executor does.
        workFlowDTO.setStatus(WorkflowStatus.APPROVED);
        complete(workFlowDTO);
    }

    /**
     * Complete the external process status.
     * Based on the workflow status we will update the status column of the
     * Application table.
     *
     * @param workFlowDTO - WorkflowDTO
     */
    public void complete(WorkflowDTO workFlowDTO) throws WorkflowException {
        if (log.isDebugEnabled()) {
            log.debug("Complete Application creation Workflow..");
        }

        String status = null;
        if ("CREATED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_CREATED;
        } else if ("REJECTED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_REJECTED;
        } else if ("APPROVED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_APPROVED;
        }

        ApiMgtDAO dao = new ApiMgtDAO();

        try {
            // Update the application status, as in ApplicationCreationSimpleWorkflowExecutor.
            dao.updateApplicationStatus(Integer.parseInt(workFlowDTO.getWorkflowReference()), status);
        } catch (APIManagementException e) {
            String msg = "Error occurred when updating the status of the Application creation process";
            log.error(msg, e);
            throw new WorkflowException(msg, e);
        }

        ApplicationWorkflowDTO appDTO = (ApplicationWorkflowDTO) workFlowDTO;
        userName = appDTO.getUserName();
        appName = appDTO.getApplication().getName();

        System.out.println("UserName : " + userName + "   --- appName = " + appName);

        Map mapConsumerKeySecret = null;

        try {
            APIConsumer apiConsumer = APIManagerFactory.getInstance().getAPIConsumer(userName);
            String[] appliedDomains = {""};
            // Key generation
            mapConsumerKeySecret = apiConsumer.requestApprovalForApplicationRegistration(
                    userName, appName, "PRODUCTION", "", appliedDomains, "3600");
        } catch (APIManagementException e) {
            throw new WorkflowException("An error occurred while generating token.", e);
        }

        for (Object obj : mapConsumerKeySecret.entrySet()) {
            Map.Entry entry = (Map.Entry) obj;
            System.out.println("Key : " + entry.getKey() + "   ---  value = " + entry.getValue());
        }
    }

    public List getWorkflowDetails(String workflowStatus) throws WorkflowException {
        return null;
    }
}

Then configure this class as the application creation workflow.

Lakmali Baminiwatta: CSV to XML transformation with the WSO2 ESB Smooks Mediator

This post provides a sample CSV to XML transformation with WSO2 ESB. WSO2 ESB supports executing Smooks features through 'Smooks Mediator'. 

Latest ESB can be downloaded from here

We are going to transform the below CSV to an XML message. For example (the records are illustrative; any rows with the fields firstname, lastname, gender, age and country will do):

john,smith,male,34,USA
jane,doe,female,28,UK

This is the format of the XML output message:

<people>
   <person>
      <firstname>john</firstname>
      <lastname>smith</lastname>
      <gender>male</gender>
      <age>34</age>
      <country>USA</country>
   </person>
   <person>
      <firstname>jane</firstname>
      <lastname>doe</lastname>
      <gender>female</gender>
      <age>28</age>
      <country>UK</country>
   </person>
</people>

First, let's write the Smooks configuration that transforms the above CSV into the given XML message (smooks-csv.xml):

<smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd">
    <resource-config selector="org.xml.sax.driver">
        <resource>org.milyn.csv.CSVReader</resource>
        <param name="fields">firstname,lastname,gender,age,country</param>
        <param name="rootElementName">people</param>
        <param name="recordElementName">person</param>
    </resource-config>
</smooks-resource-list>
Now let's write a simple proxy service to take the CSV file as the input message and process it through the Smooks mediator. For that, you first need to enable the VFS transport sender and receiver.

Below is the service synapse configuration. Make sure to change the following parameters according to your file system. You can find more information about the parameters from here.

  •   transport.vfs.FileURI
  •   transport.vfs.MoveAfterProcess
  •   transport.vfs.ActionAfterFailure

<proxy xmlns="http://ws.apache.org/ns/synapse" name="CSVTransformProxy" transports="vfs">
   <target>
      <inSequence>
         <smooks config-key="smooks-csv">
            <input type="text"/>
            <output type="xml"/>
         </smooks>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.PollInterval">5</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.FileURI">file:///home/lakmali/dev/test/smooks/in</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">file:///home/lakmali/dev/test/smooks/original</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">file:///home/lakmali/dev/test/smooks/original</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*.csv</parameter>
   <parameter name="transport.vfs.ContentType">text/plain</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
</proxy>

You have to create an ESB local entry with the key 'smooks-csv' and give the path to the smooks-csv.xml file we created above. In the Smooks mediator above, we load the Smooks config through that local entry key (smooks-csv).
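A local entry sketch for this (the src path is an assumption; point it at wherever you saved smooks-csv.xml):

```xml
<localEntry key="smooks-csv" src="file:repository/samples/resources/smooks/smooks-csv.xml"/>
```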

To perform the transformation, all you need to do is drop the input message file into the transport.vfs.FileURI location. In the log you can see the transformed message in XML. Now you have the CSV message as XML in your Synapse sequence, so you can perform any further mediation on it, such as sending it to an endpoint, a database, or a file.

Ajith Vitharana: Connect a WSO2 server to PostgreSQL

I'm going to install PostgreSQL on an Ubuntu machine and connect WSO2 API Manager 1.8.0 to it.

1. Install PostgreSQL.
sudo apt-get install -y postgresql postgresql-contrib
2. Open the postgresql.conf  file (/etc/postgresql/9.3/main/postgresql.conf ) and change the listen_addresses.
listen_addresses = 'localhost'
3. Log in to PostgreSQL.
sudo -u postgres psql
4. Create a user(eg : vajira/vajira123).
CREATE USER <user_name> WITH PASSWORD '<password>';
5. Create a database (eg: wso2carbon_db).
CREATE DATABASE <database_name>;
6. Grant the user all privileges on that database.
GRANT ALL PRIVILEGES ON DATABASE <database_name> to <user_name>;
7. Open the pg_hba.conf file (/etc/postgresql/9.3/main/pg_hba.conf) and change the peer authentication method to md5. (With peer authentication, only the matching operating system user can log in to the database.) That is, change

    local      all                all                        peer

to

    local      all                all                        md5
8.  Restart the PostgreSQL.
sudo service postgresql restart
9. Run the script to create the registry and user manager tables.
psql -U vajira -d wso2carbon_db -a -f wso2am-1.8.0/dbscripts/postgresql.sql
10. Log in to the database.
psql -U <user_name> -d <database_name> -h localhost
11. Use the \dt command to view the table list of wso2carbon_db.
12. Download the PostgreSQL JDBC driver and copy it to <server_home>/repository/components/lib.

13. Open the master-datasources.xml file (server_home/repository/conf/datasources/master-datasources.xml) and change the data source configuration as below.
            <description>The datasource used for registry and user manager</description>
            <definition type="RDBMS">
                    <validationQuery>SELECT 1</validationQuery>
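The fragment above shows only part of the entry. A complete datasource entry for this setup might look like the following (the JNDI name, pool settings, port and credentials are typical values for the example user created earlier, not taken from the original post):

```xml
<datasource>
    <name>WSO2_CARBON_DB</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://localhost:5432/wso2carbon_db</url>
            <username>vajira</username>
            <password>vajira123</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```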

14. Start the server.
15. Using the same steps, you can create the other databases in PostgreSQL.

16. You can use the pgAdmin tool to connect to the databases.

Dimuthu De Lanerolle

Useful Git commands

Q: How can I merge a specific pull request into my local git repo?

A: You can easily merge a desired pull request using the following command. If you are doing this merge for the first time, clone a fresh checkout of the master branch to your local machine and run the command from the console (substitute the repository URL for <repo_url>).
git pull <repo_url> +refs/pull/78/head

Q: How do we get the current repo location of my local git repo?

A: The below command will give the git repo location your local repo is pointing to.

git remote -v

Q: Can I change my current repo URL to a different remote repo URL?

A: Yes. You can point to another repo url as below.

git remote set-url origin <new_repo_url>

Q: What is the git command to clone directly from a non-master branch? (eg: given two branches, master and release-1.9.0, how do I clone from the release-1.9.0 branch directly, without switching to release-1.9.0 after cloning from master?)

A: Use the following git command.

git clone -b release-1.9.0 <repo_url>


Q: I need the build to go ahead no matter what build failures I get. Can I do that with a Maven build?

A: Yes. Try building like this.

mvn clean install -fn 

Q: Can I directly clone a tag of a particular git branch?

A: Yes. Let's imagine your tag is 4.3.0. The following command will let you directly clone the tag instead of the branch.

Syntax: git clone --branch <tag_name> <repo_url>

git clone --branch carbon-platform-integration-utils-4.3.0 <repo_url>

Jayanga Dissanayake: Everyday Git (Git commands you need in your everyday work)

Git [1] is one of the most popular version control systems. In this post I am going to show you how to work with GitHub [2]. On GitHub there are thousands of public repositories. If you are interested in a project, you can start working on it and contributing to it. The following are the steps and commands you will use while working with GitHub.

1. Forking a repository
This is done via the GitHub [2] web site.

2. Clone a new repository
git clone <repo_url>

3. Get updates from the remote repository (origin/master)
git pull origin master

4. Push the updates to the the remote repository (origin/master)
git push origin master

5. Add updated files to staging
git add <file_name>

6. Commit the staged changes locally
git commit -m "Modifications to" --signoff

7. Set the upstream repository
git remote add upstream <upstream_repo_url>

8. Fetch from upstream repository
git fetch upstream

9. Fetch from all the remote repositories
git fetch --all

10. Merge new changes from upstream repository for the master branch
git checkout master
git merge upstream/master

11. Merge new changes from upstream repository for the "otherbranch" branch
git checkout otherbranch
git merge upstream/otherbranch

12. View the history of commits
git log

13. If you need to discard some commits in the local repository
First find the commit ID of the commit you want to revert back to, then use the following command
git reset --hard <commit_id>


Jayanga Dissanayake: WSO2 Carbon: Get notified just after the server starts and just before it shuts down

WSO2 Carbon [1] is a 100% open source, integrated and componentized middleware platform which enables you to develop your business and enterprise solutions rapidly. WSO2 Carbon is based on the OSGi framework [2], and it inherits modularity and dynamism from OSGi.

In this post I am going to show you how to get notified, when the server is starting up and when the server is about to shut down. 

In OSGi, the bundle start-up sequence is random, so you can't rely on the order in which bundles start.

There are real-world scenarios where you have dependencies among bundles, and hence need to perform some actions before the bundles you depend on get deactivated during server shutdown.

Eg. Let's say you have to send messages to an external system. Your message-sending module uses your authentication module to authenticate each request before sending it to the external system, and it tries to send all the buffered messages before the server shuts down.

Bundle unloading in OSGi does not happen in a guaranteed sequence. So what happens if your authentication bundle gets deactivated before your message-sending bundle? The message-sending module can't send its messages.

To help with these types of scenarios, the WSO2 Carbon framework provides a special OSGi service which can be used to detect server startup and server shutdown.

1. How to get notified of server startup

Implement the interface org.wso2.carbon.core.ServerStartupObserver [3], and register it as a service via the bundle context.

When the server is starting you will receive notifications via completingServerStartup() and completedServerStartup().

2. How to get notified of server shutdown

Implement the interface org.wso2.carbon.core.ServerShutdownHandler [4], and register it as a service via the bundle context.

When the server is about to shut down you will receive the notification via invoke().


protected void activate(ComponentContext componentContext) {
    try {
        componentContext.getBundleContext().registerService(ServerStartupObserver.class.getName(),
                new CustomServerStartupObserver(), null);
    } catch (Throwable e) {
        log.error("Failed to activate the bundle ", e);
    }
}
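The CustomServerStartupObserver registered above could be a minimal sketch like the following. The interface is redeclared locally here only so the example is self-contained; in a real Carbon bundle you would implement org.wso2.carbon.core.ServerStartupObserver instead, and the method bodies shown are illustrative.

```java
// Redeclared locally for illustration only; a real bundle implements
// org.wso2.carbon.core.ServerStartupObserver.
interface ServerStartupObserver {
    void completingServerStartup();
    void completedServerStartup();
}

class CustomServerStartupObserver implements ServerStartupObserver {

    private boolean serverStarted = false;

    @Override
    public void completingServerStartup() {
        // Invoked while the server is completing its startup.
        System.out.println("Server startup is completing...");
    }

    @Override
    public void completedServerStartup() {
        // Invoked once the server has fully started; it is now safe to
        // start work that depends on other components being available.
        serverStarted = true;
        System.out.println("Server startup completed.");
    }

    boolean isServerStarted() {
        return serverStarted;
    }
}
```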

Sajith Kariyawasam: Convert a request parameter to an HTTP header in WSO2 ESB

Say you have a requirement where a request parameter (named key) comes in to an ESB proxy service and needs to be passed to the backend service as an HTTP header (named APIKEY).

The request to the ESB proxy service would be as follows.
      curl -X GET "http://localhost:8280/services/MyPTProxy?key=67876gGXjjsskIISL"

In that case you can make use of the XPath expression $url:[param_name].

In your in-sequence you can add a header mediator as follows to set the HTTP header from the key request parameter.
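A minimal header mediator sketch for the in-sequence, following the parameter and header names in the scenario above:

```xml
<header name="APIKEY" scope="transport" expression="$url:key"/>
```

Placed in the proxy's in-sequence, this reads the key query parameter and sets its value as the APIKEY transport header on the outgoing request.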

John Mathon: IoT: Should you do the automation for your disparate IoT devices locally or in the cloud? A review of hubs

IOT publish subscribe


Here is my blog series on this IoT project to automate some key home functions using a variety of IOT devices from many manufacturers.

Here is my “Buy/no buy/might buy list and IOT landscape article here”

Here is my “What are the integration points for IoT article here”

Here is my “Why would you want to integrate IoT devices article here”

Combining Services from disparate devices

There is a question where to integrate different IOT devices.  Conceptually integrating different devices would be best done locally since there would be less latency and there would be less dependency on outside systems or loss of connectivity to the cloud.   However, almost everyone wants to be able to control their devices from the cloud so cloud connectivity is needed anyway.   Also, if any of your devices collect data you may want to have that data in the cloud because of issues around backup at home.   If you build the automation locally you will still want to have access in the cloud so you will have to build or have access to your automation in the cloud.

Whether you decide to do your integration locally or in the cloud, the decision is also affected by what is available. Devices have a variety of compatibility with local hubs. Some hubs can support some devices. Nothing can support all devices, not even the Homey, which seems to support virtually every protocol out there. The reason is that device manufacturers still feel they have the option to invent their own protocols and hardware, or there may be devices from the ancient past that use something predating the latest craze that still need to be integrated.

Some IOT devices offer SDKs for development of applications on computers or for SmartPhone apps.    Some offer APIs to talk to the device directly.   Some offer APIs to talk to a Web Service in the cloud.  Some offer “standard protocols” such as Z-Wave or Zigbee, CoAp, etc which provide a way to talk to numerous devices in a standard way locally.    Some only offer a Web Service.   Some offer all of these and some offer only a few.   So, how do you build automation across numerous devices of different protocols and approaches and where?

One way is to buy only devices which fit a certain profile so that you are sure that all your devices can be communicated with in a way you will support.   This is almost certainly to some extent required as the variety of different protocols and device integrations can be extremely costly and time consuming beyond the utility of a particular manufacturers device features.

For integration locally we have the hubs on the market.  For integration in the cloud we have a service like IFTTT.  Let’s discuss each.


Local Integration Compatibility of Protocols

The IOT market is beset with hubs galore at this time.      Here are 13 hubs that are either in the market today or soon to be.  7 of them are shipping today and 6 are soon to be shipped.

Manufacturer Protocol Programmability
Shipping Wifi 802.11 Bluetooth Low Energy Near Field ZigBee Z-Wave Lutron 433Mhz nrf24l01+ Insteon Infrared Cell Apple HomeKit Other
SmartThings Y    Y    Y    Y Groovy
StaplesConnect Y    Y    Y    Y    Y NO
Wink Y    Y    Y    Y    Y    Y Robots
Insteon Hub – Microsoft Y    Y    Y    Y Insteon, dual band, wireline YES
Honeywell Lynx Y    Y    Y    Y 345Mhz, Power Backup NO
Mi Casa Verde Vera Y    Y    Y    Y    Y YES
HomeSeer Y    Y    Y    Y    Y LOCAL ONLY YES
New Smartthings N    Y    Y    Y    Y    Y Power Backup Groovy
Ninja Sphere N    Y    Y    Y    Y    Y Gestures and Trilateralization YES
Apple Hub N    Y    Y    Y ??
Homey – Kickstarter N    Y    Y Y    Y    Y    Y    Y    Y JavaScript
Oort N    Y    Y NO
Almond+ N    Y    Y    Y NO


If you want to integrate your devices locally, you face the issue of whether they are all available through a single hub device. Wifi devices tend to be powered from a plug, since 802.11 requires more power, although it is ubiquitous. BLE is gaining popularity but few devices are BLE in my experience. Z-Wave is the most popular protocol and is supported by almost all hubs, with ZigBee close behind. A requirement for doing sophisticated integration is that the hub supports programming. The Wink hub looks promising but its robot language may not be sophisticated enough for some applications. This leaves the SmartThings, MiCasaVerde, HomeSeer, Ninja Sphere and Homey as contenders. I discarded the Insteon as it refuses to support recent protocols. It may be a viable entry if you have a lot of legacy X10 and other devices from the past.

The first three of these, SmartThings, Vera and HomeSeer, are the only currently shipping products.

In its infinite wisdom Lutron which has a history of making lots of lights and other devices decided to use a 433Mhz frequency protocol which is not well supported by all the hubs.  NRF24L01+ is a hobbyist protocol.   The infrared protocol can be mitigated by purchasing irBeam kickstarter devices.   These devices support BLE and allow you to automate transmission of IR commands to control all your IR based home entertainment devices.   Therefore, by having a BLE enabled hub and some automation you can probably support IR devices without having support in the hub.   Some devices have a backup Cell Phone connection capability in case your internet connection fails.  The new SmartThings has power backup as does the Honeywell Lynx.  However, the Lynx isn’t programmable.

It is possible that like the Infrared option there may be “point solutions” that could offer additional protocols through a proxy.  A company is proposing to build such point solution products with no user interface so you can buy them individually.  However, this is a kickstarter project not quite past the conceptualization stage.    The Davis weatherstation uses a proprietary protocol and hardware for communicating between its weather sensors so this required me to purchase the weather envoy which consolidates all their devices and allows me to deliver that locally and to the internet through an application on my Mac Mini.   Similarly Honeywell has a history of devices using the 433Mhz frequency that work very well for security and won’t interact with anything but a Honeywell device.

The Ninja Sphere deserves mention because it has 2 very cool technologies built into it besides the standard ones.  The Ninja has a capability to detect the location in the house of the devices it is talking to.   So, by attaching a “Tag” device to anything in the house or from the movement of devices it is listening to it can triangulate an approximate position.   It uses this to detect if things are moving in the house.  Another feature of the Ninja is the ability to detect hand gestures around the device itself.  So, they can support the idea that tapping the device will turn on things or a movement of your arm a certain way above the device will cause the temperature of the house to go up  or something else.   The Ninja also has the ability theoretically to support Spheramid’s which they say can be used to attach any future protocol that someone wants to.   It’s also kinda cool looking.

I bought a MYO armband, which requires you to wear something on your arm, but the MYO works through BLE; if your hub uses BLE and had an integration with the MYO, then you could support gestures anywhere, not just near the Ninja.

[Image: IOT automation example code]

Local Integration Automation

Once you have a hub selected to do all your automation you need to be able to program it to the devices you are supporting.   In my case there are several devices on my list that aren’t supported by any hub so a hub can’t do it all for me.  In addition you may require, like me, the use of external web services to support some of the functionality and intelligence.

For instance, I need to go to the PG&amp;E website to find rates and times of rate changes.   I need to go to a weather service to find predictions for future weather, which I want to use in my automation.   I also want to look at my cell phone’s movement to detect my own movements, and I want to automate some things around the car (a Tesla); all of these require external web services to work.  So my hub must provide a programming environment powerful enough to let me include these external services as well as the data coming from the devices connected to the hub.   Even if I do the integration on a local device, I will need internet connectivity to implement all the intelligence I want.

The SmartThings, Homey, Vera and Ninja claim to support a full language that can allow the complexity I would want.


Cloud Integration Compatibility of Protocols

All of my devices connect to the internet, either directly or through a local device.   My electric meter is monitored by PG&amp;E, which stores the data on the Opower website.  I also collect real-time data via a HAN-compatible device, which funnels the real-time information to the internet as well as locally.   The Davis weather station data is available locally but is also delivered to WeatherUnderground to be accessible from the internet.  All my Z-Wave devices talk to the Lynx 7000 locally but also deliver their status to the cloud.   So, whether I want it or not, everything is in the cloud anyway.   To be fair, some of the devices could be set up so they didn’t go to the cloud, but most of the time people want to be able to access devices from the cloud, so it makes sense to have the data and control in the cloud too.

The spreadsheet of supported services in the cloud is much sparser than the hub market above:

Service Integration of
F=Future Planned Prod? WEMO Google Mail Drop Box SMS TESLA GPS Following HAN Energy UP Jawbone Weather Hub Support MYO Carrier irBeacon ATT DVR MyQ Liftmaster
Google Home N
myHomeSeer N
ATT Digital Life
Vera Y Y Y Vera Y
SmartThings Y Y SmartThings
Ninja N Ninja
Homey N Homey


Google Home, or Android@Home, is a concept at this point; I am not aware of a delivery date.   myHomeSeer is a service that promises to let you control your HomeSeer devices from the cloud. ATT Digital Life is still a concept from what I can see.  I am surprised there are not more IFTTT-like clones out there.
Basically, only IFTTT at this time seems to support doing automation in the cloud.   There are a number of IFTTT-like services, including Zapier, Yahoo! Pipes, CloudWork and CloudHQ.     None of them seems to provide any IOT support at this time, although they could all in theory help in building the automation I am considering.

You could of course create your own virtual server in the cloud, run your own application and implement whatever automation you wanted, using any of a number of PaaS services that let you build and run applications in the cloud.   This requires complete development knowledge and building your own application from the ground up.

You could also build your own iOS or Android application and run it on your phone.   That’s even more complicated.

IFTTT allows you to specify a channel as a trigger.   The channel can be any of 70 different services they support today.  So, I can say that when I send an SMS message to IFTTT it will turn on a WEMO switch which turns on something in my house.

IFTTT is far from perfect as well.   It supports very few devices directly, and I will need to leverage some clever tricks to get IFTTT to do some of the automation I want.   For instance, IFTTT lets you trigger an action when an email with a specific subject comes in.   So I can build some automation, either locally or in some PaaS service, that performs part of the automation I want, gathers information, and then sends a message to IFTTT through an SMS message, Google email or another channel that already exists in IFTTT to trigger the functionality I want.

IFTTT is working on a development platform that will allow people to build their own channels and actions.   They also have a large number of Future items that many people seem to be working on.  I suspect in a year the IFTTT story will be very much more complete and compelling looking at the comments and momentum it seems to have garnered.


Could this be simpler?

Of course I am tackling a difficult problem, because I am trying to stretch what IOT can do today and how smart it can be.   There are choices I could have made to make life much simpler.   If I had stuck to virtually everything being a Z-Wave device, it would have eliminated a number of complexities.  Relaxing some of the requirements, like knowing my location or integrating my Tesla, BodyMedia, MYO and Carrier devices, would also make things easier.

Virtuous Circle

Things are rapidly evolving


The state of things is rapidly evolving.   Some of the hubs I am talking about are due out soon.  I expect there will be a lot more automation available on all the platforms, and more compatibility.   There will still be numerous legacy things that can’t be changed, and there will be vendors who refuse to be helpful.   A number of new entrants will undoubtedly confuse things.  It’s early in the IOT space, and many vendors are trying to stake out ownership of the whole space.   More protocols and models of how things can be integrated exist than can possibly be supported.   I suspect in a year things will be better, but I also don’t think the wars between the participants are anywhere near won.

Where to put the automation?

I am going to decide on a hub and put some integration and intelligence into the hubs and some into the cloud.   I will describe more of how I propose to split the work and the data in the next blog.  I will also address how to get around some of the thornier problems if possible.


Other articles you may find interesting:

Why would you want to Integrate IOT (Internet of Things) devices? Examples from a use case for the home.

Integrating IoT Devices. The IOT Landscape.

Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

Siri, SMS, IFTTT, and Todoist.


A Reference Architecture for the Internet of Things



 Alternatives to IFTTT

Home Automation Startups

sanjeewa malalgodaHow to fine tune API Manager 1.8.0 to get maximum TPS and minimum response time

In this post I will discuss API Manager 1.8.0 performance tuning. I tested this on the deployments described below. Please note that the results can vary depending on your hardware, server load and network. This is not even a fully optimized environment; you may well go beyond these numbers with a better hardware, network and configuration combination for your use case.

Server specifications

System Information
Manufacturer: Fedora Project
Product Name: OpenStack Nova
Version: 2014.2.1-1.el7.centos

4 X CPU cores
Processor Information
Socket Designation: CPU 1
Type: Central Processor
Family: Other
Manufacturer: Bochs
Max Speed: 2000 MHz
Current Speed: 2000 MHz
Status: Populated, Enabled

Memory Device
Total Width: 64 bits
Data Width: 64 bits
Size: 8192 MB
Form Factor: DIMM
Type: RAM

Deployment Details

Deployment 01.
2 gateways (each run on dedicated machine)
2 key managers(each run on dedicated machine)
MySQL database server
1 dedicated machine to run Jmeter

Deployment 02.
1 gateways
1 key managers
MySQL database server
1 dedicated machine to run Jmeter 


Configuration changes.

Gateway changes.
Enable WS key validation for key management.
Edit /home/sanjeewa/work/wso2am-1.8.0/repository/conf/api-manager.xml with the following configurations.
[Default value is ThriftClient]

[Default value is true]
Other than this, all configurations remain at their defaults.
However, please note that each gateway should be configured to communicate with the key manager.
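The actual XML entries did not survive in this copy of the post; as an illustrative sketch (assuming the standard element names in API Manager 1.8.0's api-manager.xml, with a placeholder key manager host), the gateway-side change looks roughly like this:

```xml
<APIKeyValidator>
    <!-- Use the WS client instead of the default ThriftClient -->
    <KeyValidatorClientType>WSClient</KeyValidatorClientType>
    <!-- Placeholder URL: point this at your key manager node -->
    <ServerURL>https://km.example.com:9443/services/</ServerURL>
    <!-- The gateway does not need to run its own Thrift server -->
    <EnableThriftServer>false</EnableThriftServer>
</APIKeyValidator>
```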

Key Manager changes.
Edit /home/sanjeewa/work/wso2am-1.8.0/repository/conf/api-manager.xml with the following configurations.
[Default value is true]
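The stripped-out entry above presumably corresponds to disabling the Thrift server; a sketch, assuming the standard api-manager.xml element name in AM 1.8.0:

```xml
<APIKeyValidator>
    <!-- Key validation is served over the SOAP (WS) endpoint, so the
         Thrift server (enabled by default) can be switched off -->
    <EnableThriftServer>false</EnableThriftServer>
</APIKeyValidator>
```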
There is no need to run the Thrift server there, as we use the WS client for key validation calls.
Both gateway and key manager nodes are configured against MySQL servers. For this I configured the user manager, API manager and registry databases on MySQL.
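For reference, a MySQL datasource entry in repository/conf/datasources/master-datasources.xml would look roughly like this (hostname, database name and credentials are placeholders, not the values used in this test):

```xml
<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig><name>jdbc/WSO2AM_DB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Placeholder MySQL host and schema -->
            <url>jdbc:mysql://mysql.example.com:3306/apimgt</url>
            <username>apimuser</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
```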

Tuning parameters applied.

Gateway nodes.

01. Change synapse configurations. Add the following entries to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/ file.

02. Disable HTTP access logs
Since we are testing gateway functionality here, we need not worry much about HTTP access logs, although you may need to enable them to track access. For this deployment we assume the servers are running in a DMZ with no need to track HTTP access. For gateways this is usually not required anyway, since we do not expose servlet ports to the outside (normally only 8243 and 8280 are open).
Add the following entry to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/ file.
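The exact entry has not survived in this copy of the post; one common way to disable HTTP access logs on a Carbon-based server (an assumption, not necessarily the method the author used) is to comment out the access log valve in repository/conf/tomcat/catalina-server.xml:

```xml
<!-- Commenting out the AccessLogValve disables HTTP access logging -->
<!--
<Valve className="org.apache.catalina.valves.AccessLogValve"
       directory="${carbon.home}/repository/logs"
       prefix="http_access_" suffix=".log" pattern="combined"/>
-->
```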

03. Tune parameters in the axis2_client.xml file. We use the Axis2 client to communicate from the gateway to the key manager for key validation. Edit wso2am-1.8.0/repository/conf/axis2/axis2_client.xml and update the following entries.

    <parameter name="defaultMaxConnPerHost">1000</parameter>
    <parameter name="maxTotalConnections">30000</parameter>

Key manager nodes.

01. Disable HTTP access logs
Since we are testing gateway functionality here, we need not worry much about HTTP access logs, although you may need to enable them to track access. For this deployment we assume the key managers are running in a DMZ with no need to track HTTP access.

Add the following entry to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/ file.

02. Change DBCP connection parameters / datasource configurations.
There can be arguments about these parameters, especially about disabling the validation query. But with high concurrency and well-performing database servers we can disable it, since created connections are heavily used; besides, a connection may pass validation yet still fail when it is actually used. So, as I understand it, there is no issue with disabling it in a high-concurrency scenario.

I also added the following additional parameters to optimize the database connection pool.

If you don't want to disable the validation query, you may use the following configuration (here I increased the validation interval to avoid frequent validation queries).

<validationQuery>SELECT 1</validationQuery>
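Putting that together, the relevant part of the datasource configuration would look roughly like this (the pool sizes are illustrative values, not the exact ones used in the test):

```xml
<maxActive>150</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<!-- Validate a borrowed connection at most once every 30 seconds,
     instead of running the validation query on every borrow -->
<validationInterval>30000</validationInterval>
```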

03. Tuning Tomcat parameters in the key manager node.
This is important because the gateway calls the key validation web service on the key manager.
Change the following properties in the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/tomcat/catalina-server.xml file.

Here is a brief description of the changed parameters; the description of each field is copied from this document for your reference.

I updated acceptorThreadCount to 4 (the default was 2) because my machine has 4 cores.
After this change I noticed a considerable reduction in the CPU usage of each core.

Increased maxThreads to 750(default value was 250)
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.

Increased minSpareThreads to 250 (default value was 50)
The minimum number of threads always kept running. If not specified, the default of 10 is used.

Increased maxKeepAliveRequests to 400 (default value was 200)
The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.

Increased acceptCount to 400 (default value was 200)
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.

Disabled compression. However, this might not be effective, since we do not use a compressed data format.

<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
               server="WSO2 Carbon Server"
               acceptorThreadCount="4"
               maxThreads="750"
               minSpareThreads="250"
               maxKeepAliveRequests="400"
               acceptCount="400"
               compression="off"
               noCompressionUserAgents="gozilla, traviata"/>



Test 01 - Clustered gateway/key manager test(2 nodes)

For this test we used 10,000 tokens and 150 concurrent users per gateway server. The test ran for 20 minutes to avoid caching effects skewing the performance figures.

Denis WeerasiriMeasuring the Livability

Every year, I come back to Sri Lanka for 5-6 weeks to meet my family during the new year season. Every time I am back in Sri Lanka, I wonder: what has changed compared to the previous year? Which aspects have improved? And do such improvements affect the livability of Sri Lanka?

When it comes to livability, I am concerned with, and believe in, three things which are absolutely important. They are:
  • Do people respect other people?
  • Do people respect the law of the country?
  • Do people respect money?
But how do I measure them? I just observe three simple things when I commute via public transportation.
  • Do motorists stop when pedestrians wait beside crosswalks?
  • Do motorists and pedestrians obey traffic lights and signs?
  • Do bus conductors give the right amount of change back to commuters?
If I get “Yes” for at least one question, I would say "Yes, the livability is improving".

Chathurika Erandi De SilvaValidating a WSDL

If you want to validate a WSDL, you can use one of the following two methods:

1. Validate it using WSO2 ESB
2. Validate it using Eclipse Web Tools

Validate it using WSO2 ESB

WSO2 ESB has an inbuilt WSDL validator. It can be used to validate Axis2-based WSDLs. You can either specify the file or the source URL to provide the WSDL.

More on WSO2 ESB Validator

Validate it using the Eclipse

Follow the steps below to validate the WSDL through Eclipse:

1. Open Eclipse
2. Go to help -> Install New Software
3. Search for "Eclipse Web Tools"
4. Install the latest version of "Eclipse Web Tools"
5. Create a new WSDL (File -> New -> WSDL)
6. Go to the source view of the WSDL and copy paste the WSDL content you want to validate
7. Save the file
8. Right click -> Validate

By this step the WSDL can be validated.

Yumani RanaweeraJWT test cases and queries related to WSO2 Carbon 4.2.0

JWT stands for JSON Web Token. It is used for authentication as well as resource access.

Decoding and verifying signed JWT token

In a default WSO2 server, the signature of a JWT is generated using the primary keystore of the server; the private key of wso2carbon.jks is used to sign it. Therefore, to verify the signature you need to use the public certificate of the wso2carbon.jks file.

Sample Java code to validate the signature is available here -

Is token generation and authentication happening in a different webapp in WSO2 IS?

In WSO2 Identity Server 5.0, token generation is done via a RESTful web app hosted inside WSO2 IS. It is accessible via https://$host:9443/oauth2/token, and you can call it with any REST client. This blog post shows how to retrieve an access token via curl -

Token validation is performed via a SOAP service; it is not a web application.

How to generate custom tokens?

How custom JWT tokens can be generated is explained here -

From where does the JWT read user attributes?

User attributes for the JWT are retrieved from the user store connected to APIM. Taking the APIM scenario as an example, the user must be in the user store that the APIM Key Manager is connected to. If claims are stored somewhere else (not in the user store, but in some custom database table, for instance), then you need to write an extension point to retrieve the claims. As mentioned here [], you can implement a new "ClaimsRetrieverImplClass" for this.

Use cases:

There was a use case where the claims contained in a SAML response needed to be used to enrich an HTTP header at API invocation time.

Claims contained in a SAML2 assertion are not stored by default, so they cannot be retrieved when the JWT is created. But we can create a new user in the user store and store these attributes based on the details in the SAML2 assertion. If claims are stored in the user store by creating a new user, those attributes are added to the JWT by default and no customization is required.

Optionally, we can store them in a custom database table. To achieve this you may need to customize the current SAML2 Bearer grant handler implementation, as it does not store any attributes of the SAML2 assertion. Details on customizing an existing grant type can be found here [].

Ashansa PereraNeed to run many instances of your project in Visual Studio?

Solution is simple.

Right click on your solution
Set Startup Projects
Select ‘Multiple startup projects’
And select the ‘Start’ option for your projects in the list.

Now when you hit the start button, all the projects you selected will be started.

What if you need several instances of the same project to be started?
Right click on the project
Debug > Start New Instance

(This option is a fun way to try out the Chat Application we developed previously, because with more chat clients it gets more interesting.)

Shelan PereraWhat should you do when your mobile phone is lost?

Have you ever lost your mobile phone? I have lost two, and yes, it is not a great position to be in. But these two incidents had two different implications. The first phone I lost was a Nokia (an N71, to be exact), which was quite a smart phone back around 2007. I wanted to find the phone but could not; the story was over and life went on. But...  the second time, I lost a Nexus 4, which is an Android.

So what is the big deal?

"Oh my gmail account"

When I first sensed that I had lost my mobile phone, the first thing that came to mind was "oh gosh, all my accounts are on there". But fortunately I had thought about this topic before I lost the phone, so the next steps were obvious to me. You might find this interesting, and you may have better suggestions as well. :)

Three important things

1) An Android phone has your email account. If you use Gmail as your primary account, then you probably use it for most of your other online accounts as well (Facebook, Twitter, eBay and so on...).

So if anyone gets hold of it, you are doomed, because they can use the "forgot my username" or password-reset features to take control of your other accounts.

2) An Android phone has all your data :). There is little that Google does not back up; I think the only thing that cannot be backed up and restored later is your life.

3) If your phone is not PIN-protected, or not protected by any other mechanism, you are at the highest level of vulnerability.

So what I should do?

1) Change your email account password immediately. This is the most crucial and important step.

You can check your login history here; look for any recent login just after you lost the phone.

2) You should visit . Using this, you can erase your mobile phone data (in other words, perform a remote wipe).

This will only happen if the device is online.

If you need to locate the device, do it before the wipe, using the same link above.

3) Finally, you may register a police complaint.

Finding the phone is more important, but for someone who needs to protect data, other accounts and privacy, the steps above are vital.

If you have other suggestions please do share in comments :)

Madhuka UdanthaCouchDB-fauxton introduction

Here we will be using CouchDB (developer preview 2.0). You can build CouchDB by following the developer-preview guideline here.

After the build succeeds, you can start CouchDB with 'dev/run'


The above command starts a three-node cluster on ports 15984, 25984 and 35984. The corresponding node-local (backdoor) ports are 15986, 25986 and 35986.


Use the backdoor port to check the nodes at http://localhost:15986/nodes


Then run HAProxy.

HAProxy provides load balancing and proxying for TCP and HTTP-based applications by spreading requests across multiple servers/nodes.

haproxy -f rel/haproxy.cfg


Now look at Fauxton

If you are on Windows, you can follow this post to build it.

cd src/fauxton

Run the two lines below:

npm install

grunt dev


Go to http://localhost:8000


Adding Document


Now it is time to enjoy the Fauxton UI for CouchDB and explore the CouchDB features in a more interactive manner.

Sajith KariyawasamJava based CLI client to install WSO2 carbon features

WSO2 products are based on the WSO2 Carbon platform plus a set of features (the Carbon platform itself is a collection of features), so WSO2 products are released with a set of pre-bundled features. For example, WSO2 Identity Server comes with system and user identity management features, entitlement management features, etc. API Manager comes with API management features and features related to publishing and managing APIs. There can be a requirement where the key management features found in API Manager need to be hosted in the Identity Server itself, without going for a dedicated API Manager (Key Manager) node. In such a scenario, you can install the key management features into Identity Server.

Feature installation / management can be done via the Management Console UI [1] or using pom files [2].

In this blog post I'm presenting another way of installing features: via a Java-based client.

This method can be used in situations where you have an automated setup to provision nodes with the required features. You can find the code for the client on GitHub.


sanjeewa malalgodaHow to write custom handler for API Manager

In this post we have explained how to add a handler to API Manager.

Here I will add the sample code required for a handler. You can import it into your favorite IDE and start implementing your logic.

Please find the sample code here[]

The dummy class would look like this. You can implement your logic there.

package org.wso2.carbon.test.gateway;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.rest.AbstractHandler;

public class TestHandler extends AbstractHandler {

    private static final String DIRECTION_OUT = "Out";
    private static final Log log = LogFactory.getLog(TestHandler.class);

    // Helper used by both directions; logs a marker line so the handler's
    // invocation is visible in the gateway logs.
    public boolean mediate(MessageContext messageContext, String direction) {
        log.info("===============================================================================");
        return true;
    }

    public boolean handleRequest(MessageContext messageContext) {
        log.info("===============================================================================");
        return true;
    }

    public boolean handleResponse(MessageContext messageContext) {
        return mediate(messageContext, DIRECTION_OUT);
    }
}
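Once the handler class is built and its jar dropped into the server, it is engaged by adding it to the handler chain in the API's synapse configuration; a minimal sketch (the default API Manager handlers are elided):

```xml
<handlers>
    <!-- ... default API Manager handlers go here ... -->
    <handler class="org.wso2.carbon.test.gateway.TestHandler"/>
</handlers>
```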

John MathonMicroServices – Martin Fowler, Netflix, Componentized Composable Platforms and SOA (Service Oriented Architecture)


As enterprise architects, we are faced with constructing solutions for our companies that must be implemented quickly, need to scale to almost arbitrary capacity when demand materializes, and must stand the test of time, because it is almost impossible to rebuild enterprise applications once they are successful and in production.

There are many examples in the history of really successful enterprise applications and how they evolved.  At TIBCO we had to build several generations of our publish/subscribe technology.  Each generation involved a major architectural change reflecting our need to scale and interact with more things than the previous generation could possibly handle.   Each time we did this it was a company “turning point,” because a failed attempt could mean the death of the company.  So it is with great pride that I can say we made these transitions successfully during those years. It is a very hard thing to build software designed and architected to be flexible enough to adapt to the changing technology around it.


Having built many enterprise applications, we start with good ideas of how to break things into pieces, but after 5 years or more it becomes readily apparent that we broke some things into pieces the wrong way.  The book “Zen and the Art of Motorcycle Maintenance” discusses how you can look at a motorcycle engine from many vantage points, each giving you a different way to break down the structure of the engine and a different understanding of its workings.  This philosophical truth applies to motorcycle engines, string theory and software development.  There are always many ways to break a problem down.  Depending on your purpose, one may work better, but it is hard to know in advance which way will work best for future problems you have not anticipated.

Today's world of software development is 10x more productive than the software development of just a decade ago.   The use of more powerful languages, DevOps/PaaS, APIs and container technology such as Docker has revolutionized the world of software development.  I call it Platform 3.0.

A key aspect of Platform 3.0 is building reusable services and components.   You can think of Services as instances of components.   Micro-Services is the idea that we build services to be smaller entities in a lightweight container that are reusable and fast.

Micro-Services and SOA

There is some confusion about the term micro-services.   It can be a couple of different things depending on the person using it and their intention.    One definition that must be taken seriously is proposed by Martin Fowler: 

“The term “Microservice Architecture” has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.”

Martin goes on to describe micro-services as a way of building software applications as a combination of lightweight services (APIs) that are independent.   However, IMO this is simply the default way all software is being written today.  It is the default architecture for Platform 3.0.

However, micro-services has emerged recently as a counterpoint to SOA-type architecture, the idea being that SOA architecture is heavyweight and has intermediate processes that slow things down and increase complexity.   Micro-service architecture is in some cases proposed as a shortcut: you simply hardcode services into your applications, and this is faster and in the long term better than the SOA approach of a mediation component or message broker to facilitate agility and best practices.

I have no beef with, nor concern about, services being broken into smaller and smaller pieces that are more and more independent of each other; this is good architecture in general.   There is also no beef with the idea that micro-services are lightweight and can be replicated and instanced quickly as demand builds.   Micro-services can be encapsulated in containers which, woven together by a “container orchestration layer,” let you specify how the micro-services behave for fault tolerance and when scaling up and down.    The orchestration does not have to become an intermediary process or communications layer that slows anything down.

Many of the principles of SOA have to do with agility to know what applications and services use what other applications and services.  This facilitates being able to change those dependencies quickly and reliably in an enterprise situation.   These concerns are not lessened but magnified because of a micro-services architecture.

Another definition of Micro-Services

Over the last 5 years APIs have grown dramatically in popularity and importance.   These are the “services” that are being reused in Platform 3.0 around the world in applications for mobile devices and in enterprises.     These API services are becoming ubiquitous and driving enormous innovation and growth.   See my article on 10^16 API calls.

What has evolved is that these APIs are becoming heavier and heavier weight as naturally happens with success.   As vendors get more customers with more diverse needs the APIs become bulkier and more powerful.   This is the standard evolution of software.

A counter to this is Micro-services.   The idea is that APIs should be leaner and that getting the functionality you want should come from a variety of smaller APIs that each serve a particular function.   In order to facilitate efficient operation of applications that utilize micro-services as opposed to the larger “API services” concessions have to be made to the overhead associated with invoking a micro-service.

Netflix is one of the major proponents of this idea of micro-services.   Netflix has been learning, as it builds its enormous presence in the cloud, that breaking its offerings into more and more independent services, each of which is ultra-efficient and easily instanced, is the architecture that works best.   They are even challenging one of the most powerful API concepts of the last 10 years: that APIs should be RESTful.   They suggest that “simpler” protocols with less overhead work better in many situations, and they are actually trying to convince people that REST is not the end of the story for APIs.   Paradoxically, REST itself was promoted because of its simplicity compared to, for instance, SOAP.

The advantage of micro-services under the Netflix definition is that services can be scaled and de-scaled more rapidly because they are so lightweight.   Also, the overhead associated with conformance to heavy protocols like HTTP is not worth it for micro-services; they need to communicate in a more succinct, purpose-driven way.   Another advantage of the Netflix approach is the ability to short-circuit unavailable services more quickly.  Please see the article on Chaos Monkey and Circuit Breakers.


As with the first definition of micro-services, breaking these services down into hyper-small entities doesn’t mean they are not still containerized, with their interfaces documented and managed, albeit less obtrusively than some API management approaches would entail.   If you look at the commandments of the Netflix development paradigm, it includes key aspects of good SOA practices and API management concepts.

API Management and MicroServices

API management is, in my mind, simply an evolution of the SOA registration service with many additional important capabilities, most importantly SOCIAL capabilities that enable widespread reuse and transparency.   If the goal of micro-services were to avoid transparency or social capabilities, I would have a serious argument with the micro-services approach.   As I mention above, Netflix’s approach doesn’t suggest API management is defunct in a micro-services architecture.

Part of the problem with API management and micro-services, or with the mediation / message broker middleware technologies, is that these can impose a software layer that is burdensome on a micro-services architecture.   In the mediation / message broker paradigm, this layer, if placed between two communicating parties, can seriously impede performance (either latency or overall throughput), especially if it isn’t architected well.   In fact, this layer could potentially turn the entire benefit of a micro-services architecture into a negative.

Let us say one approach to implementing micro-services would be to put them into a traditional API management service as offered by a number of vendors.   There would indeed be overhead introduced, because the API layers impose a stiff penalty of authentication, authorization, and load balancing before a message can be delivered.   What is needed is a lightweight version of API management for trusted services behind a firewall that are low risk and high performance.  The only vendor I know that offers such a capability is WSO2.

WSO2 implements the interfaces to all its products in API management by default.  The philosophy is called API Everywhere, and it enables you to see and manage all services, even micro-services, in your architecture with an API management layer.   During the debug phase you can choose to impose a heavier-weight API management layer that gives more logging of what is happening, to help you understand potential issues with a component or micro-service and to monitor performance and other characteristics more closely; when you are satisfied something is production-worthy, you can move to a minimal in-process gateway that minimizes the impact on performance but still gives you some API management best practices.

Other SOA components and Micro-services

Other components in the SOA architecture are mediation services and message brokers.   These components provide agility by letting you easily swap components in and out, version components or services, and integrate with new services without having to change applications, among other benefits.   They are still important in any enterprise application world, and are particularly important in the fast-changing world of Platform 3.0 we live in today, where services change rapidly and applications are built rapidly using existing services.

One of the primary goals of the SOA architectural components was to facilitate the eradication of the productivity-killing point-to-point architecture that made “the pile” become impossibly hard to whittle down.

More important than that is the lesson we have learned in the last 10 years which is that social is a critical component of reuse.  In order to facilitate reuse we must have transparency and the ability to track performance, usage of each component and have feedback in order to create the trust to reuse.  If the result of micro-services is to destroy transparency or some ability to track usage and performance then I would be against micro-services.

I believe the answer is in some cases to use API management traditionally and in other cases to use container orchestration services such as Kubernetes, Docker Swarm, Docker Compose, Helios and other composition services.   I will be writing about these container composition services, as they are an important part of the new architecture of Platform 3.0.


Micro-services is the new terminology being espoused as the replacement architecture for everything from SOA to REST. However, the concepts of micro-services are well founded in reuse and componentization, and as long as good architectural practices from the lessons learned over the last two decades of programming are followed and implemented inside a framework of Platform 3.0, micro-services are a good thing and a good way to build applications.

Micro-services provide an infrastructure that is, in principle, easier to scale, more efficient and more stable.   Once written, micro-services are less likely to see many modifications over time because their functionality is constrained.   This means fewer bugs, more reusability and more agility.

Micro-services, however, should still be implemented in an API management framework, and ESBs, message brokers and other SOA architectural components still make sense in this micro-services world, especially when augmented with a container composition tool.    In fact, these components should be used to make micro-services transparent and reusable.

Other Articles of interest you may want to read:

Using the cloud to enable microservices

Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?

Microservices : Building Services with the Guts on the Outside

Event Driven Architecture in the age of Cloud, Mobile, Internet of Things(IoT), Social

Is 10,000,000,000,000,000 (= 10^16 ) API calls / month a big deal?

What is Chaos Monkey?

Microservices – Martin Fowler

Merging Microservice Architecture with SOA Practices

7 Reasons to get API Management, 7 Features to look for in API Management 

Things that cost millions of dollars are costing thousands. Things that took years are taking months.

RDBs (Relational Databases) are too hard. NoSQL is the future for most data.

John Mathon: Why would you want to Integrate IOT (Internet of Things) devices? Examples from a use case for the home.


Using the Network Effect to build automation at home between IOT devices

The network effect is the value gained from the existence of a growing number of devices in a network.  I talk about the network effect in this article. I am proposing to leverage multiple IOT devices in a coordinated way to provide more value and intelligence to my home in particular.  In the same way, a business could think about integrating hardware to add value and intelligence to how it operates.

What kinds of automation can I do that would be useful at home?   There are dozens of things that could be done that might be marginally useful.    My main goals with this project are to:

1) Reduce energy use and energy cost

2) Provide improved security

3) Enhance the comfort of the home

4) Cool Stuff

I discuss a set of IOT devices I have reviewed: some I have bought, some I have decided you shouldn’t buy, and some I am not sure about.   I also discuss the market for IOT in general here.

I have also looked at ways these devices can be interfaced and integrated with in the article here.

In this article I discuss what kinds of functionality can be achieved by integrating a diverse set of web services and IoT devices in ways that would prove useful to an average person.  You should read the previous two articles to get the ideas behind the automation I propose, but it is not necessary.

Energy Efficiency

I want my house to be as energy efficient as possible.  This is a more complex problem than you might think.  My energy plan charges me different rates for energy usage at different times of the day.   Daytime energy rates are 4 times the nighttime energy rates.   The primary drivers of my energy use are heating and cooling the house, pool cleaning and heating, various appliances and fans, and finally charging the Tesla and the lights.    I have already reduced my costs by trying to move house heating, pool cleaning and Tesla charging to the night time.   The combination of doing these things has cut my electric energy bill in half.    I further gained substantial benefit from moving to a variable speed pump on the pool.   The power consumed by an electric motor goes up as the cube of the speed of the motor.    Therefore being able to reduce the motor speed during certain functions by half cuts the cost by 7/8!   One problem is that heating the pool with solar panels is possible only during the daytime, and therefore part of the challenge is deciding whether to spend the energy to pump water through the solar panels during peak or off-peak energy periods.
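The cube-law claim above is easy to verify with a couple of lines of Python (this is just the pump affinity law applied to the numbers in the text):

```python
def pump_power_fraction(speed_fraction):
    """Pump affinity law: power draw scales with the cube of motor speed."""
    return speed_fraction ** 3

# Halving the speed leaves only (1/2)^3 = 1/8 of the power draw,
# i.e. a 7/8 (87.5%) saving, as stated above.
saving = 1 - pump_power_fraction(0.5)
print(saving)  # → 0.875
```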

I love my pool hot, and I want to utilize the sun to the maximum to get it hot, yet I don’t want to waste energy when it doesn’t make sense to heat it.  The automation system should know my schedule and where I am.   There is no point making the effort to heat the pool if the pool is 60 degrees and we get one hot 80-degree day, so a dumb system that simply looks at sun irradiance to decide when to heat is going to waste a lot of energy pumping when there is no point.

In the same way, if the temperature today will hit 85 and the house is cool in the morning, the last thing I want to do is heat the house knowing that by afternoon I may want it cooler.   So the forecast temperature and the current temperatures inside and outside the house are important.   If it is cool outside but the sun is out, the windows will produce a lot of warmth if the shades are open; however, if it is hot outside and the sun is blaring, the house would need the shades down to reduce the heating from the windows.  Substantial gains in the efficiency of the house can be had by controlling a few shades.

There is no point in heating the house above a certain point if I am away from the house.   On the other hand if I am on my way back to the house it would be useful if the house knew that and heated itself prior to my arrival.  I wouldn’t want to do that if the energy cost would be too high but in general it would be useful to know my location to figure out how much to heat or cool the house.

A NEST thermostat claims to be smart, but understanding all these things is beyond what I believe a NEST can do or know about.   The NEST is supposed to learn my patterns, but I don’t have any patterns.

Here are some of the rules I have come up with to help achieve these aims with the heating system:

If off peak, heat to 72
If lowest energy cost, raise heat to 75 to reduce heating at other times
When I am >100 miles from home, minimize heating
If I’m coming home and it’s not peak energy cost time, raise temp to 72 and send message
If I’m coming home and it is peak energy time, send message
When hitemp today will be >80, do not engage heater during the day at all
If it is peak energy time, minimize heat
If I am >15 miles from home and it is a workday, minimize heat
If hitemp today > 80 and inside temp > 65, lower blinds
If inside temp < 75 and hitemp today < 76, raise blinds
If after sunset or before sunrise, lower blinds to conserve heat
If current outside temp is > 80 and inside temp is < 65
If guests are in the house, keep temp at 72 except at night, reduce to 65
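A few of the heating rules above could be encoded as ordinary predicates; the sketch below is purely illustrative (the `state` dictionary, its field names and the rule ordering are my assumptions, not any real home-automation API):

```python
# A minimal sketch of a rule engine for a few of the heating rules above.
# All field names and the evaluation order are illustrative assumptions.

def target_heat(state):
    """Return a target temperature (°F), or None meaning 'minimize heat'."""
    if state["miles_from_home"] > 100:
        return None                      # vacation/business trip: minimize heating
    if state["coming_home"] and not state["peak_rate"]:
        return 72                        # pre-heat before arrival on cheap power
    if state["hitemp_forecast"] > 80:
        return None                      # warm day ahead: do not engage the heater
    if state["peak_rate"]:
        return None                      # peak energy time: minimize heat
    if state["lowest_rate"]:
        return 75                        # bank heat while energy is cheapest
    return 72                            # default off-peak setting

print(target_heat({"miles_from_home": 5, "coming_home": True,
                   "peak_rate": False, "hitemp_forecast": 70,
                   "lowest_rate": False}))  # → 72
```

In a real system the rules would also need conflict resolution and the SMS confirmation step described below.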


These rules depend on knowing the times of day when energy rates change.  They depend on knowing the current temperature outside and inside the house, as well as predicted temperatures for today and the future.   They depend on how far I am from home and which direction I am going.   Where there is a question about what to do, the system depends on having an SMS chat with me to resolve the condition.   I also want to be able to override the state of the system for special conditions, like guests being in the house.    From these rules and the types of information needed to perform the automation I wanted, I picked the services and devices I needed to interface to.   For instance, I need the following devices and services to perform the automation I want.


Followmee seems like a good service to grab my current location.   The app records my cell phone’s location every minute.  I can use this to help automate some functions.   For instance, if my location over a period of 10 minutes moves 5 miles closer to home, then I will assume I am coming home.   If I am >100 miles from home, then I am on vacation or a business trip.    If I am a mile from home and 5 minutes ago I was at home, then the garage door and deadbolts should be secured.
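The “coming home” heuristic just described can be sketched in a few lines; the sample format `(minute, miles_from_home)` and the function name are made up for illustration, not part of the Followmee API:

```python
# Illustrative heuristic from the text: if the distance to home has dropped
# by 5+ miles over the last 10 minutes of samples, assume I'm coming home.
# Samples are hypothetical (minute, miles_from_home) pairs.

def coming_home(samples, window_min=10, threshold_miles=5):
    latest = samples[-1][0]
    recent = [d for t, d in samples if t >= latest - window_min]
    return len(recent) >= 2 and (recent[0] - recent[-1]) >= threshold_miles

track = [(0, 22), (5, 18), (10, 14)]   # one sample every few minutes
print(coming_home(track))  # → True
```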


IF THIS THEN THAT (IFTTT) is a great service that allows you to build automation across all kinds of services.   It will be the hub of my automation framework, allowing me to create the rules above to control the house.

PG&E

They have information on rates at different times of the day, and also, through Opower and my smart Zigbee Rainforest Eagle, they can give me information on my electricity usage.

Weatherunderground, Davis Weather Station, Weathersnoop and Weatherbug

I have a weather station from Davis that records all kinds of useful information I will need to automate numerous functions.   The weather station information is sent up to the cloud to Weatherunderground using  Weathersnoop.   Forecasts can be obtained easily from Weatherbug.   I can also get internal temperature information from the Davis equipment and my pool temp.

Carrier Wifi Thermostat

The Carrier is an ultra-efficient heating and cooling system that also has a digital service for monitoring and controlling the house.   I will utilize its capabilities to control my home thermostat.

Pool Automation

I have another set of rules for controlling the pool pumping system.   I will expound on that in the next article and the rules for other automations.


Security and Safety

I want to be able to use intelligence to make sure my house is secure even if I forget something.     This includes things like making sure the deadbolts are locked when I am away from the house and the doors and windows are closed.   I want to be notified when conditions at the house change: when winds exceed a certain range, temperatures exceed certain limits, my electricity or water usage goes beyond certain points, there is an electrical interruption, or the doors, windows or garage are tampered with.   I also might want to get video of the house; although I haven’t purchased cameras yet, I expect I will at some point and want them integrated.


There are numerous things I have figured out would be easy to automate that would make life better, reduce mistakes, or save me a trip home or having to ask a friend to do something.   The irrigation system for the plants might be a good target, but I haven’t decided to tackle that yet; there are interesting tools for doing so.   I have read that one of the things that can harm my Tesla’s battery is leaving it in a discharged state for a long time.   I would like to be notified if my Tesla battery is below 70 miles for more than 4 hours.

I have a Gazebo with an automated motorized shade on the front.  I would like to control the shade based on the time of sunset and wind conditions, rain or factors such as whether I am home or not.

These are things that could be classified as “Cool.”   Since I am and always was a computer geek such things are kind of “pride” issues and it is important to have a few cool things to talk about.

Additional Services Needed for all this other Automation

Exponential Value from Connectedness

Lynx 7000 Home Security system and connected IOT devices that allow me to monitor security of the home, status of doors, garage and Z-Wave devices

Honeywell has led the way in home security cost reductions, and with the 7000 and some other models it provides a low-cost service for IOT home automation.    The Honeywell Lynx 7000 can control Z-Wave devices such as the shade operations, both for the Gazebo and in-home; the deadbolt automation is through a Yale Z-Wave enabled device.   All these devices are controlled by the TotalConnect2 service.  Unfortunately this service does not have an API yet, but it can be hacked through access to the web site.   I hope Honeywell adds an API soon; it would make this part much easier.


The Tesla car has an API that can be used to gather information and even operate certain functions on the car.


WeMo devices from Belkin provide the ability to control electrical switches.   There are certain things I find useful to automate through these switches, and WeMo provides a nice API and service for doing this.


The BodyMedia armband is the premier workout and life band in my opinion.   By using skin capacitance, motion, heat flux and heartbeat, the BodyMedia gives a better view of energy expended in workouts and of sleep patterns.  BodyMedia was acquired by Jawbone, which has created a uniform API for all its devices and is promoting numerous applications to control and monitor them.


The Myo armband is able to detect gestures of my fingers, wrist and elbow to create events to control things.   I hope to link the armband to my system so that I can do things like raise and lower shades, turn on and off electronics, raise temperatures and even do some things to the Tesla based on arm gestures.


You may find some of the things I am trying to do stupid or not very relevant to you.   Please excuse my geekiness, but I believe some of these things could be considered useful by a lot of us who simply want to, for instance, minimize energy usage, or who have specialized needs for safety, security or automation.   There are so many IOT devices and things being offered now that combining devices to accomplish some level of automation is achievable.

However, as I write this article there are great difficulties in combining devices.   Many of the services and interfaces to these devices differ dramatically.   Some have integrated with IFTTT to make automation easy and some have not.   Some have easy APIs to use and some have to be hacked to gain access.   Some of the APIs are described in Python, Java or other languages, and some aren’t even REST.   It will take a good programmer or hacker to put all this automation together and combine these devices.   Ideally, services like IFTTT would become a central hub, making it easy to create automations from many different IOT devices.   For now I will have to build my own automation in the cloud and leverage IFTTT.

In the next blog I will take the next step in this automation project by describing the APIs I am using for each service, the specific rules, and how to prioritize the rules and automations.   What I hope I have described is an example of how, using the network effect of multiple IoT devices, you can build intelligence into a system that makes your life better.    In a similar way, any company can conceive of how to leverage the devices mentioned here, or others, to provide better support to customers, higher productivity for workers, or reduced costs in its infrastructure.   The proof of that is left to the reader at this point.

Other Articles you may find interesting:

Integrating IoT Devices. The IOT Landscape.

Tesla Update: How is the first IoT Smart-car Connected Car faring?

Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

WSO2 – Platform 3.0 – What does the name mean? What does WSO2 offer that would be useful to your business?

A Reference Architecture for the Internet of Things


Breakout MegaTrends that will explode in 2015.

Tooling Up for the Marriage of the Internet of Things, Big Data, and Cloud Computing

The Internet of Things, Communication, APIs, and Integration

M2M integration platforms enable complex IoT systems

Madhuka Udantha: Building Apache Zeppelin

Apache Zeppelin (incubating) is a collaborative data analytics and visualization tool for Apache Spark and Apache Flink. It is a web-based tool for data scientists to collaborate over large-scale data exploration. Zeppelin is independent of the execution framework. Zeppelin has integrated full support for Apache Spark, so I will try a sample with Spark itself. The Zeppelin interpreter concept allows any language/data-processing backend to be plugged into Zeppelin (Scala with Apache Spark, SparkSQL, Markdown and Shell).

Let’s build from the source.

1. Get repo to machine.

git clone

2. Build code

mvn clean package

You can use '-DskipTests' to skip the tests in the build.


For cluster mode
mvn install -DskipTests -Dspark.version=1.1.0 -Dhadoop.version=2.2.0

Change spark.version and hadoop.version to match your cluster's versions.


3. Add jars, files

The spark.jars and spark.files properties in ZEPPELIN_JAVA_OPTS add jars and files to the SparkContext.

ZEPPELIN_JAVA_OPTS="-Dspark.jars=/mylib1.jar,/mylib2.jar -Dspark.files=/myfile1.dat,/myfile2.dat"

4. Start Zeppelin
bin/ start

In the console you will see ‘Zeppelin start’ printed, so go to http://localhost:8080/


Go to NoteBook –> Tutorial

There you can see the charts and graphs with queries. You can pick chart attributes in drag-and-drop mode.



Srinath Perera: Introducing WSO2 Analytics Platform: Note for Architects

WSO2 has had several analytics products, WSO2 BAM and WSO2 CEP (or Big Data products if you prefer the term), for some time.  We are adding WSO2 Machine Learner, a product to create, evaluate, and deploy predictive models, to that mix very soon. This post describes how all of those fit into a single story. 

The following picture summarises what you can do with the platform. 

Let's look at each stage depicted in the picture in detail. 

Stage 1: Collecting Data

There are two things for you to do.

Define Streams - Just like you create tables before you put data into a database, you first define streams before sending events. Streams are descriptions of what your data looks like (the schema). You will use the same streams to write queries in the second stage. You do this via the CEP or BAM admin console (https://host:9443/carbon) or via the Sensor API described in the next step.

Publish Events - Now you can publish events. We provide a single Sensor API to publish events to both the batch and realtime pipelines. The Sensor API is available as Java clients (Thrift, JMS, Kafka), JavaScript clients (WebSocket and REST), and 100s of connectors via WSO2 ESB. See How to Publish Your own Events (Data) to WSO2 Analytics Platform (BAM, CEP) for details on how to write your own data publisher. 
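To make the two steps concrete, here is a sketch of a stream definition (the schema) and one event that conforms to it. The JSON shape shown is illustrative only, not the exact Sensor API wire format; the field names mirror the temperature example used in the queries below:

```python
import json

# Hypothetical stream definition: name, version, and the payload schema.
stream_def = {
    "name": "TemperatureStream",
    "version": "1.0.0",
    "payloadData": [
        {"name": "roomNo", "type": "INT"},
        {"name": "temp", "type": "DOUBLE"},
    ],
}

# An event conforming to that schema; values are positional against the schema.
event = {"streamName": "TemperatureStream",
         "streamVersion": "1.0.0",
         "payloadData": [26, 22.5]}

# A publisher would serialise this and send it to the receiver endpoint.
print(json.dumps(event))
```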

Stage 2: Analyse Data

Now it is time to analyse the data. There are two ways to do this: analytics and predictive analytics. 

Write Queries

For both batch and realtime processing you can write SQL-like queries. For batch queries we support Hive SQL, and for realtime queries we support the Siddhi Event Query Language.

Example 1: Realtime Query (e.g. Calculate Average Temperature over 1 minute sliding window from the Temperature Stream) 

from TemperatureStream#window.time(1 min)
select roomNo, avg(temp) as avgTemp
insert into HotRoomsStream ;
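The semantics of the Siddhi query above can be sketched in plain Python: a per-room average over a 1-minute sliding time window, recomputed as events arrive. Siddhi does this incrementally inside the engine; this toy class only illustrates what is computed, not how:

```python
from collections import deque

class SlidingAvg:
    """Toy model of a 1-minute sliding-window average per room."""
    def __init__(self, window_sec=60):
        self.window_sec = window_sec
        self.events = deque()            # (timestamp_sec, room, temp)

    def on_event(self, ts, room, temp):
        self.events.append((ts, room, temp))
        # Expire events that have fallen out of the time window.
        while self.events and self.events[0][0] <= ts - self.window_sec:
            self.events.popleft()
        temps = [t for _, r, t in self.events if r == room]
        return sum(temps) / len(temps)   # the avgTemp the query would emit

w = SlidingAvg()
w.on_event(0, "r1", 20.0)
print(w.on_event(30, "r1", 24.0))  # → 22.0
```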

Example 2: Batch Query (e.g. Calculate Average Temperature per each hour from the Temperature Stream)

insert overwrite table TemperatureHistory
select hour, average(t) as avgT, buildingId
from TemperatureStream group by buildingId, getHour(ts);

Build Machine Learning (Predictive Analytics) Models

Predictive analytics lets us learn “logic” from examples where that logic is complex. For example, we can build “a model” to find fraudulent transactions. To that end, we can use machine learning algorithms to train the model with historical data about fraudulent and non-fraudulent transactions.

The WSO2 Analytics platform supports predictive analytics in multiple forms:
  1. Use the WSO2 Machine Learner (2015 Q2) wizard to build machine learning models, and use them with your business logic. For example, WSO2 CEP, BAM and ESB will support running those models.
  2. R is a widely used language for statistical computing; we can build models in R, export them as PMML (an XML description of machine learning models), and use the models within WSO2 CEP. You can also call R scripts directly from CEP queries.
  3. WSO2 CEP also includes several streaming regression and anomaly detection operators.

Stage 3: Communicate the Results

OK, now we have some results, and we communicate those results to the users or systems that care about them. That communication can take three forms.
  1. Alerts detect special conditions and cover the last mile to notify the users (e.g. email, SMS, push notifications to a mobile app, a pager, or triggering a physical alarm). This can easily be done with CEP.
  2. Dashboards visualise data to give the overall idea at a glance (e.g. a car dashboard). They support customising and creating a user's own dashboards. Also, when there is a special condition, they draw the user's attention to it and let him drill down to find details. The upcoming WSO2 BAM and CEP 2015 Q2 releases will have a wizard to start from your data and build custom visualisations, with support for drill-downs as well.
  3. APIs expose data to users external to the organisational boundary, and are often consumed by mobile phones. WSO2 API Manager is one of the leading API solutions, and you can use it to expose your data as APIs. In later releases, we plan to add support for exposing data as APIs via a wizard.

Why choose WSO2 Analytics Platform?

Reason 1: One platform for realtime, batch, and combined processing - with a single API for publishing events, and with support for combined usecases like the following:
  1. Run similar queries in the batch pipeline and the realtime pipeline (a.k.a. the Lambda Architecture)
  2. Train a machine learning model (e.g. a fraud detection model) in the batch pipeline, and use it in the realtime pipeline (usecases: fraud detection, segmentation, predicting the next value, predicting churn)
  3. Detect conditions in the realtime pipeline, then switch to detailed analysis using the data stored in the batch pipeline (e.g. fraud, or giving deals to customers on an e-commerce site)
Reason 2: Performance - WSO2 CEP can process 100K+ events per second and is one of the fastest realtime processing engines around. WSO2 CEP was a finalist in the DEBS Grand Challenge 2014, where it processed 0.8 million events per second with 4 nodes.

Reason 3: A scalable realtime pipeline with support for running SQL-like CEP queries on top of Storm - Users can write queries in the SQL-like Siddhi Event Query Language, which provides higher-level operators for building complex realtime queries. See SQL-like Query Language for Real-time Streaming Analytics for more details. 
For batch processing we use Apache Spark (from the 2015 Q2 release forward), and for realtime processing users can run those queries in one of two modes.
  1. Run the queries on two CEP nodes, with one node acting as the HA backup for the other. Since WSO2 CEP can process in excess of a hundred thousand events per second, this choice is sufficient for many usecases.
  2. Partition the queries and streams, build an Apache Storm topology running CEP nodes as Storm spouts, and run it on top of Apache Storm. Please see the slide deck Scalable Realtime Analytics with declarative SQL like Complex Event Processing Scripts. This enables users to run complex queries, as supported by Complex Event Processing, while still scaling the computation for large data streams. 
Reason 4: Support for predictive analytics - building machine learning models, comparing them and selecting the best one, and using them within real-life distributed deployments.

Almost forgot: all of these are open source under the Apache Licence. Most design decisions are discussed publicly at

If you find this interesting, please try it out. Please reach out to me or through if you want to know more information.

Heshan Suriyaarachchi: Fixing MySQL replication errors in a master/slave setup

If you have a MySQL master/slave replication setup and have run into replication errors, you can follow the instructions below to fix the replication break and sync up the data.
1) Stop mysql on the slave.
service mysql stop
2) Login to the master.

3) Lock the master by running the following command
mysql> flush tables with read lock;
NOTE: No data can be written at this time to the master, so be quick in the following steps if your site is in production. It is important that you do not close mysql or this running ssh terminal. Login to a separate ssh terminal for the next part.

4) Now, in a separate ssh screen, type the following command from the master mysql server.
rsync -varPe ssh /var/lib/mysql root@IP_ADDRESS:/var/lib/ --delete-after
5) After this is done, now you must get the binlog and position from the master before you unlock the tables. Run the following command on master
mysql> show master status\G
Then you’ll get some information on master status. You need the file and the position because that’s what you’re going to use on the slave in a moment. See step 10 on how this information is used, but please do not skip to step 10.

6) Unlock the master. It does not need to be locked now that you have the copy of the data and the log position.
mysql> unlock tables;
7) Now log in to the slave via ssh. First remove the two files: /var/lib/mysql/ and /var/lib/mysql/

8) Start mysql on the slave.
service mysqld start
9) Immediately login to mysql and stop slave
mysql> stop slave; 
10) Now, you have to run the following query filling in the information from the show master status above (Step 5.)
mysql> CHANGE MASTER TO MASTER_USER='replicate', MASTER_PASSWORD='replicate',
mysql> MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1234512351;
11) Now start the slave.
mysql> start slave;
12) Check slave status.
mysql> show slave status\G

Heshan Suriyaarachchi: Fixing "Error: Could not find a suitable provider" in Puppet

I'm quite new to Puppet, and I had a Puppet script which configures a MySQL database working fine on a Puppet learning VM on VirtualBox. This issue happened when I installed and set up Puppet on a server of my own. I kept seeing the following error, and it was driving me crazy for some time.

[hesxxxxxxx@xxxxxxpocx ~]$ sudo puppet apply --verbose --noop /etc/puppet/manifests/site.pp 
Info: Loading facts
Info: Loading facts
Info: Loading facts
Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
Notice: Compiled catalog for xxxxxxxxxxxxxxxxxxx in environment production in 0.99 seconds
Warning: The package type's allow_virtual parameter will be changing its default value from false to true in a future release. If you do not want to allow virtual packages, please explicitly set allow_virtual to false.
(at /usr/lib/ruby/site_ruby/1.8/puppet/type/package.rb:430:in `default')
Info: Applying configuration version '1426611197'
Notice: /Stage[main]/Mysql::Server::Install/Package[mysql-server]/ensure: current_value absent, should be present (noop)
Notice: /Stage[main]/Mysql::Server::Install/Exec[mysql_install_db]/returns: current_value notrun, should be 0 (noop)
Notice: Class[Mysql::Server::Install]: Would have triggered 'refresh' from 2 events
Notice: /Stage[main]/Mysql::Server::Config/File[mysql-config-file]/content: current_value {md5}8ace886bbe7e274448bc8bea16d3ead6, should be {md5}d0d209eb5ed544658b3f1a72274bc3ed (noop)
Notice: /Stage[main]/Mysql::Server::Config/File[/etc/my.cnf.d]/ensure: current_value absent, should be directory (noop)
Notice: Class[Mysql::Server::Config]: Would have triggered 'refresh' from 2 events
Notice: /Stage[main]/Mysql::Server::Service/Service[mysqld]/ensure: current_value stopped, should be running (noop)
Info: /Stage[main]/Mysql::Server::Service/Service[mysqld]: Unscheduling refresh on Service[mysqld]
Notice: /Stage[main]/Mysql::Server::Service/File[/var/log/mysqld.log]/ensure: current_value absent, should be present (noop)
Notice: Class[Mysql::Server::Service]: Would have triggered 'refresh' from 2 events
Error: Could not prefetch mysql_grant provider 'mysql': Command mysql is missing
Notice: /Stage[main]/Main/Node[default]/Mysql_grant[m_user@localhost/lvm.*]: Dependency Mysql_user[m_user@localhost] has failures: true
Warning: /Stage[main]/Main/Node[default]/Mysql_grant[m_user@localhost/lvm.*]: Skipping because of failed dependencies
Notice: Stage[main]: Would have triggered 'refresh' from 3 events
Error: Could not find a suitable provider for mysql_user
Error: Could not find a suitable provider for mysql_database

The issue was that I was running Puppet in --noop mode. When my Puppet script tried to configure the MySQL setup, it gave errors because there was no MySQL setup to configure, since I had --noop. Removing this did the trick.

Although this is trivial, I thought of blogging this because someone might find this useful when facing the same issue as I did.

John Mathon: How could Tesla eliminate “Range” Anxiety?


The Tesla 6.2 upgrade will consist of improvements in managing the existing range of the car, not improvements to the range itself.

The car will do this by:

1) Being aware of charging stations (unclear if only supercharging stations)

2) Understanding traffic, weather, elevation changes and other factors so that the car will be able to estimate with much greater accuracy how much power is needed to get to your destination.

3) Warning you if your destination is unreachable without additional power or you are driving out of reach of a Supercharging station.

These are useful convenience software features but don't really address range anxiety in my opinion, because most of us cannot travel 50 or 100 miles out of the way on the spur of the moment to find the closest supercharging station.   We also can't sit at a conventional charging station for 3 hours to get 50 miles of extra range.   While I am a huge fan of Elon and Tesla, this does not address "range anxiety" for me: I was already aware of the limitations of the car's battery and the options I had, so a software feature that helps people who can't think ahead doesn't really help me.

If range anxiety is the fear that I literally will be abandoned on the side of the road without power because I was too stupid to look at the “battery level” and I need a computer to prompt me to tell me that you can’t go 80 miles with only 60 miles left in the battery then he has solved it.   If eliminating range anxiety means that my solution to having insufficient charge at my work to get home is that I can drive 30 miles (out of the way) to Fremont from my work location in Mountain View to get to a supercharging station, spend an hour there and then face a commute that is an additional hour for a total of 2 1/2 hours when my initial commute was 30 minutes then I will never use it.     I also don’t want to be told that to get from my house to Las Vegas I should take a route that is efficient in terms of supercharging stations but doesn’t let me go through Big Sur and visit my friend in Newport Beach without taking 100 mile detours.   These are useful features to supplement normal operation and possibly warn you from making a stupid mistake but if the “remedy” is to spend 2 hours going out of my way then it doesn’t solve range anxiety for me.

I don’t want to make a bad impression of the Tesla.   The fact is I don’t really fear range anxiety in my car.   I always keep the battery within my normal daily driving parameters.   I am aware of charging stations near me and if I have a problem I know where to get a boost or I can find a charging station fairly close to where I am going using Chargepoint or other services available.   I have not had a problem of worrying ever about if I was going to run out of charge but that’s because I think ahead.  It’s not rocket science.

What would have worked for me?

What would have made my life simpler and truly ended range anxiety would be having enough supercharging stations (thousands at least) that I could find one within no more than 5 miles from my planned itinerary.   If Elon would simply say we are going to build many more supercharging stations or we have a deal with Costco or some other similar company with a large number of outlets that could offer supercharging on the side then I would be much happier.   Costco and companies with big infrastructure such as Walmart already have very large power infrastructure.  Adding 5 or so supercharging stations would require practically no change in their infrastructure and would encourage customers to spend the half hour or so it takes to charge shopping at their outlet.   Who couldn’t use a half hour or hour at Costco or Walmart to fill up on the essentials every now and then while getting a full charge of your car for free?

This is what Elon does with his supercharging network already by placing them at locations with stores such as Starbucks, etc….   However, a company like Costco for instance would have a ready made customer base of well off customers who would spend an hour at their store every week possibly loading up on stuff while they charge their car.   Elon gets no benefit from positioning his charging stations at these locations but Costco would see a benefit and if it was smart would consider adding electric charging alongside their inexpensive gas stations it offers at some locations.

I think you get my point.

Another viable thing I think would be possible soon

I was hoping that Elon had figured out how to use the existing circuitry in the car to charge Li-ion batteries the way some others have, enabling many more charge cycles (up to 10x) from the same batteries.   Such an improvement would make battery life less of an issue, making battery replacement a much more realizable and cost-effective option.   Battery replacement as Elon has promoted it would cut the time to a fully charged battery from an hour to 90 seconds, faster than refilling a gas vehicle.   If batteries had longer lifetimes, the cost of a refill would simply be the cost of the energy difference between the battery you drop off and the battery you pick up.   If a battery's lifetime is limited to a certain number of cycles, then the battery replacement facility has to factor in the life of the battery and the number of cycles, and charge you for replacing batteries much sooner, which raises the cost.   This of course is just reality.

I am an optimist.   I expect that lifetime of the batteries will be much longer and replacement costs for batteries will fall dramatically making the cost of driving a mile close to the 100mpg that the car sticker proclaims.    The original Tesla batteries in the roadsters are holding up better than initially estimated.   They were guaranteed for 50,000 miles but have been routinely getting closer to 100,000 before dropping to the 80% level.   Consider that after 100,000 miles many ICE engines are losing their oats too and don’t perform quite the way they did at purchase.

I did a TCO analysis before buying the Tesla and it came out for me very favorably for the Tesla.  Part of this is the factors above and part is the fact I live in California which gives extra credit and PGE which gives me a big benefit by allowing me to change how I am charged for electricity.  I also assume that Elon and Tesla will prove to have built a reliable car.

TCO Analysis (Is Range Anxiety an issue in terms of overall cost of a Tesla?)

To say a Tesla gets 100mpg is correct if you don’t consider the cost of the batteries in the mpg.    If the cost of the batteries is $8,000 and the batteries last 125,000 miles then the fuel economy drops significantly.   A regular car would cost possibly $18,000-$25,000 to drive those miles with maintenance and all costs included while a Tesla might cost $15,500 including the cost of replacing batteries.   If the battery life is 250,000 miles then the cost drops to $11,000 or twice the fuel economy and if the battery is cheaper the costs fall further.
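To make the per-mile arithmetic concrete, here is a rough sketch. The $8,000 pack price and two battery-life figures are from this post; the 300 Wh/mile consumption figure appears later in the post, and the electricity price is my own assumption:

```java
public class BatteryAmortization {
    public static void main(String[] args) {
        double packCost = 8_000.0;   // quoted replacement cost, USD
        double whPerMile = 300.0;    // typical consumption, Wh per mile
        double usdPerKwh = 0.12;     // assumed electricity price

        for (double lifeMiles : new double[]{125_000, 250_000}) {
            // amortized battery cost per mile, plus energy cost per mile
            double batteryPerMile = packCost / lifeMiles;
            double energyPerMile = whPerMile / 1000.0 * usdPerKwh;
            System.out.printf("%,.0f mi pack life: %.1f c/mi battery + %.1f c/mi energy = %.1f c/mi%n",
                    lifeMiles, batteryPerMile * 100, energyPerMile * 100,
                    (batteryPerMile + energyPerMile) * 100);
        }
    }
}
```

At a 125,000-mile pack life, battery amortization alone (6.4 cents/mile) dominates the energy cost; double the pack life and the picture changes substantially, which is why battery longevity matters so much to the TCO.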

This is not an entirely fair comparison because if we really are fair an ICE car in 8 years or 125,000 or more miles may need an engine change, transmission change or other major repair.  These cars have far more items to maintain and break than a Tesla. If those costs are factored in to an ICE car then the Tesla looks much cheaper.

Another thing which is harder to factor into any TCO is safety.  The Tesla is the safest car ever built.  Hands down.  How do you factor that into TCO?

My TCO analysis of a Tesla vs equivalent BMW M series cars was that the Tesla cost 1/3 less over 8 years.  This included factors such as residual value (I estimated the BMW would have 150% of the residual value of the Tesla, I chose the BMW 6 year maintenance plan, and I generously only charged the BMW $2,000/year for maintenance after year 6).  If you've owned a BMW you know I am being very light on the BMW, and it still doesn't perform as well and costs a lot more.  Forget the environment.  Forget the OTA (over the air) upgrades you can expect from a Tesla over 8 years compared to the lack/cost of anything you can do to your BMW.   My analysis did not make any assumptions about battery cost reductions or battery life extension.  I simply did the math on the 8 years and gave the Tesla a ding at the end just because of risk.  Yet it was much less costly than an equivalent luxury car.

On Sunday March 15, 2015 Elon Musk tweeted:

[Screenshot of Elon Musk's tweet]


There are numerous ways Tesla could extend the performance of the existing Teslas to eliminate “Range” anxiety.   It depends what problem of “Range” anxiety Elon will attempt to mollify.

This would definitely end Range anxiety

[Screenshots of the Tesla app, before and after the extended-range update]

Elon Musk has said that on Thursday Morning 9AM PST he will announce elimination of range anxiety via an OTA (Over the Air) update to all Tesla owners.

Substantial speculation about what he could be thinking of has many people guessing.   Here are the possibilities he will announce  from the point of view of a technologist.

A) A deal is announced with a major chain retailer with 10s of thousands of locations to offer supercharging locally for all Tesla owners.

B) “Insane conservation” mode allows Tesla to extend the range of Teslas by 100 miles effectively making it nearly impossible to “run out” of charge before you need to fill up.

C) The ability to charge Teslas to full charge or even 110% of full on a regular basis increasing the range of the cars by 50-100 miles (depending on model) and making it less likely you will run out of charge

D) The ability to recharge the batteries indefinitely and retain 95% of their original capacity giving the batteries a 50 year lifetime.

E) The announcement of dramatic cost reductions in the battery program making replacement batteries much cheaper.

Here are the issues that cause  “Range” anxiety

1) It takes a long time to charge the Tesla.

Typical charging takes a long time (8 hours at home starting close to empty), which makes managing the charge very different from an ICE (internal combustion engine) vehicle.   Elon recently said that San Francisco to LA battery replacements were working well but that long term prospects were still focused on making the charging of batteries faster and getting more range from batteries.

New technologies are finding ways to rapidly charge batteries.  There are commercial solutions now that charge batteries 2 to 10 times faster.   Some of these require special batteries; some claim to manage the charge process better.  Unfortunately no charging scheme can violate the laws of physics, and filling an 85 kWh battery will still take 85 kWh of energy.  Even if you could put that energy into the battery in 60 seconds you would need to deliver over 20,000 amps at 240 volts during that period. Since most households in the US have 200 amp service it is not feasible to deliver this much current safely in normal environments.  Therefore it seems unlikely that Elon has made any changes in the charging time of Teslas at home.    Improvements could be substantial but would also require homeowners to upgrade their electricity service.  So, I expect improvements in this area, although easily obtained, can't be what Elon is talking about.  He said so as well:
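A back-of-the-envelope check of those limits, assuming an 85 kWh pack and a typical US 240 V / 200 A residential supply:

```java
public class ChargeMath {
    public static void main(String[] args) {
        double packWh = 85_000.0;   // 85 kWh pack, in watt-hours
        double volts = 240.0;       // US split-phase residential voltage
        double houseAmps = 200.0;   // typical whole-house service

        // Power and current needed to deliver 85 kWh in 60 seconds
        double watts60s = packWh / (60.0 / 3600.0);   // Wh divided by hours = W
        double amps60s = watts60s / volts;
        System.out.printf("60-second charge needs %.1f MW, i.e. %.0f A at 240 V%n",
                watts60s / 1e6, amps60s);

        // Theoretical floor if the entire home service were dedicated to charging
        double houseWatts = volts * houseAmps;        // 48 kW
        double hoursAtHome = packWh / houseWatts;
        System.out.printf("Whole-house-service minimum: %.1f hours%n", hoursAtHome);
    }
}
```

Even devoting an entire 200 A service to the car gives a hard floor of nearly two hours for a full charge, which is why faster home charging is not a realistic avenue for this announcement.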

[Screenshot of Elon Musk's tweet]

2) There are only 100 or so supercharging stations in the US.

The fact that high speed charging and battery replacement are limited to 100 or so locations in the United States means that rapid recharging is not as simple as finding a gas station close by.

It would be simple for Elon to strike a deal with Chevron, Shell, McDonalds or any company with points of presence ubiquitous in the US to offer supercharging capability.  Increasing the number of locations with supercharging to 10,000 or 50,000 would essentially end range anxiety by making 20 minute charging like putting fuel in an ICE, ubiquitous and easy.


3)  The "normal" driving ranges of a 60 kWh or 85 kWh Tesla are about 170 and 250 miles respectively. While longer than other electric vehicles, road trips in excess of these limits require thinking ahead and possibly diverting your route to reach your destination.  The expected energy use of the car is 300 Wh/mile of driving, and therefore there are 2 ways of extending range:

3a) Can the efficiency of the car be substantially improved beyond 300 Wh/mile?

I don't believe the car's energy consumption can be substantially improved without a substantial rework of the car and its components.  The vast majority of the car's energy is used to drive the motor, but it is possible to imagine that an "insane" mode for energy conservation could be implemented that would cut the car's energy use 30-40% by putting hard limits on the drain of the battery (reducing the maximum energy draw), turning off all electronics and power consuming devices, and limiting acceleration.

If this could be achieved, attaining a 30-40% reduction in power consumption, you could imagine that when the car hits a quarter charge (70 miles) it automatically warns you it is going into "insane conservation" mode and gives you 100-150 miles, effectively increasing the car's range by 50 or more miles when you are low on energy.  This may quell people's range anxiety by making them feel they can get to their destination, albeit in a hobbled state.

While this might work, it would be at the expense of the anxiety that the $100,000 car you bought acts more like a $10,000 car for a substantial amount of the time you use it.   Of course most people might not use this mode very much, but it would make the "anxiety" of being stuck with no charge considerably less.

3b) Can the capacity of the battery be increased (without replacing the battery)?

There are numerous technologies that enable charging Li-ion cells to 100% or even greater (110%) while still getting the full life from the batteries.    If this could be achieved, Tesla owners who now charge their car to 85% or less might gain 15-25% in typical range.   It would be impossible to store much more energy than a Li-ion cell was designed for without physically damaging the chemistry of the battery.  New battery technologies such as magnesium and lithium-air as well as others may be on the way, but without replacing the battery itself it would be impossible to improve the capacity of existing batteries substantially, so I doubt this is what is up Elon's sleeve in this announcement.

4) The battery is guaranteed for 8 years and 125,000 miles or more, depending on model. With a replacement battery costing on the order of $5,000 or more, a substantial financial cliff exists that could itself produce range anxiety.

4a) Increased lifetime of the battery from 8 years to 40 or more years

The most serious degradation of Li-ion cells happens because the lithium in the cells expands and contracts as it is discharged or charged.   The physical damage from repeated full discharges or full charges comes about from physical stress on the materials in the battery; the material actually develops scar tissue that limits future charging and discharging capability.  Numerous papers have come out, and commercial technologies have implemented algorithms, that dramatically improve the number of charges one can obtain from a Li-ion cell by a factor of 10 by carefully controlling the charging and discharging of the cells.   As the cells are charged and discharged, mechanical stress can be minimized by controlling this process with a feedback mechanism that still allows mostly full use of the battery.   This technology is seeing more widespread adoption and could easily be what Elon is talking about.   If the battery life could be made to be 40 years or more then the cost efficiency of the Tesla dramatically changes and the anxiety about battery management decreases.   If the batteries can be used indefinitely it may make the battery replacement option at supercharging stations much more reasonable.

4b) Dramatically reduced cost of the battery

Elon's main goal in building a massive battery factory in Nevada is to drastically cut the cost of the battery from the reported $8,000 for an 85 kWh pack to the $3,000-4,000 range or lower.   If Elon is quite confident in his ability to achieve this, due to testing of new battery technologies, he could promise substantial cost reductions to existing Tesla owners replacing the battery, also dramatically improving the economics of the Tesla.

In summary, from my analysis these are the options for Thursday morning's 9 AM release from Tesla regarding the "end of range anxiety."


A) A deal is announced with a major chain retailer with 10s of thousands of locations to offer supercharging locally for all Tesla owners.

B) “Insane conservation” mode allows Tesla to extend the range of Teslas by 100 miles effectively making it nearly impossible to “run out” of charge before you need to fill up.

C) The ability to charge Teslas to full charge or greater (110%) on a regular basis increasing the typical range of the cars by 50-100 miles (depending on model) and making it less likely you will run out of charge

D) The ability to recharge the batteries indefinitely and retain 95% of their original capacity giving the batteries a 50 year lifetime.

E) The announcement of dramatic cost reductions in the battery program making replacement batteries much cheaper.

I believe that what Elon means by “eliminating Range anxiety” is about reducing the immediate concern that you will be stranded if you run out of juice.   This means options A, B, C are the only ones to be considered.  I would consider D and E to be likely improvements anyway that further improves the ROI of a Tesla but I don’t believe those could be what Elon will announce on Thursday.

So, that leaves us with A, B, C.   A) is simple and simply a matter of Elon working a deal with a major retailer.    B and C could be done with software and a combination could be accomplished giving a combined increase in range of the car by 50% or more.   Option B requires operating in a limited functionality which is undesirable and seems unlikely to be such an exciting option that would merit a big announcement.  Option C is possible but would not give a huge increase in range implied by his Tweet.

I therefore think it is possible that option A will be a major part of the Thursday call, or a combination of A, B and C; substantial range increases combined with increased numbers of supercharging stations would effectively quell "Range" anxiety.

 If I had to bet I would say it’s a combination of A, B and C.

Ushani BalasooriyaHow to monitor the created tcp connections for a particular proxy service in WSO2 ESB

As an example if we want to monitor the number of tcp connections created during a proxy invocation, the following steps can be performed.

1. Assume you need to monitor the number of tcp connections created for the following proxy service :

 <?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="Proxy1" transports="http,https">
  <target>
    <inSequence>
      <property name="NO_KEEPALIVE" value="true" scope="axis2"/>
      <clone>
        <target><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></target>
        <target><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></target>
        <target><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></target>
      </clone>
    </inSequence>
    <outSequence>
      <aggregate><onComplete xmlns:m0="http://services.samples" expression="//m0:getQuoteResponse"><send/></onComplete></aggregate>
    </outSequence>
  </target>
</proxy>

You will see there are three clone targets. Therefore the proxy should create only three TCP connections.

2. We have used SimpleStockQuoteService service as the backend.
3. Since you need to monitor the connections created, we should delay the response coming from the backend. Therefore we need to change the code slightly in the SimpleStockQuoteService.

We have included a Thread.sleep() of 10 seconds so that we have time to monitor the number of connections.
Therefore go to <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService/src/samples/services/

and add Thread.sleep(10000); as below to hold the response for some time.

 public GetQuoteResponse getQuote(GetQuote request) throws Exception {
    if ("ERR".equals(request.getSymbol())) {
        throw new Exception("Invalid stock symbol : ERR");
    }
    System.out.println(new Date() + " " + this.getClass().getName() +
            " :: Generating quote for : " + request.getSymbol());
    // Hold the response so the open connections can be observed
    Thread.sleep(10000);
    return new GetQuoteResponse(request.getSymbol());
}

4. Build the SimpleStockQuoteService once again by “ant” in here, <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService

5. Now start the axis2server
6. Now you have to open a terminal and provide the following netstat command to get the process id.

sudo netstat --tcp --listening --programs

You will see the relevant SimpleStockQuoteService which is up in port 9000 like below.

tcp6 0 0 [::]:9000 [::]:* LISTEN 20664/java

So your process ID will be 20664.

7. Then open a terminal and provide the below command to view the tcp connections for the particular process id.

watch -n1 -d "netstat -n -tap | grep 20664"

8. Now open your soapui and send the following request to Proxy1
 <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <ser:getQuote>
         <ser:request>
            <xsd:symbol>IBM</xsd:symbol>
         </ser:request>
      </ser:getQuote>
   </soapenv:Body>
</soapenv:Envelope>

9. View the TCP connections created in the terminal which you have been monitoring as soon as you send the request. You should see only three connections, since that is how we configured the proxy with the clone mediator.

tcp6 0 0 ESTABLISHED 20664/java
tcp6 0 0 ESTABLISHED 20664/java
tcp6 0 0 ESTABLISHED 20664/java

Srinath PereraHow to Publish Your own Events (Data) to WSO2 Analytics Platform (BAM, CEP)

We collect data via a Sensor API (a.k.a. agents), send them to servers: WSO2 CEP and WSO2 BAM, process them, and do something with the results. You can find more information about the big picture from the slide deck

This post describes how you can collect data.

We provide one Sensor API to publish events for both batch and realtime pipelines. The Sensor API is available as Java clients (Thrift, JMS, Kafka), JavaScript clients (WebSocket and REST) and 100s of connectors via WSO2 ESB.

Let's see how we can use the Java Thrift client to publish events.

First of all, you need CEP or BAM running. Download, unzip, and run WSO2 CEP or WSO2 BAM (via bin/ 

Now, let's write a client. Add the jars given in Appendix A, or add the POM dependencies given in Appendix B to your Maven POM file, to set up the classpath.

The Java client would look like the following.

Just like you create tables before you put data into a database, you first define streams before sending events to the WSO2 Analytics Platform. Streams are a description of what your data looks like (a.k.a. a schema). Then you can publish events. In the code, the "Event Data" is an array of objects, and it must match the types and parameters given in the event stream definition.
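As a sketch (not the exact sample shipped with the product), a Thrift client could look roughly like this. The stream name and payload fields are invented for illustration; 7611 is the default Thrift port and admin/admin the default credentials, and the databridge classes come from the jars listed in Appendix A:

```java
import org.wso2.carbon.databridge.agent.thrift.DataPublisher;

public class SensorClient {
    public static void main(String[] args) throws Exception {
        // Connect to the CEP/BAM Thrift endpoint (default port 7611)
        DataPublisher publisher = new DataPublisher("tcp://localhost:7611", "admin", "admin");

        // Define the stream (the "schema") before publishing any events
        String streamId = publisher.defineStream("{" +
                "  'name':'org.example.sensor.stream'," +
                "  'version':'1.0.0'," +
                "  'payloadData':[" +
                "    {'name':'sensorId','type':'STRING'}," +
                "    {'name':'value','type':'DOUBLE'}" +
                "  ]" +
                "}");

        // Publish one event; the payload Object[] must match the
        // payloadData definition above (no meta/correlation data here)
        publisher.publish(streamId, System.currentTimeMillis(),
                null, null, new Object[]{"sensor-1", 42.0});

        publisher.stop();
    }
}
```

The key point is the two-step flow: define the stream once, then publish any number of events against the returned stream id.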

You can find an example client from /samples/producers/pizza-shop from WSO2 CEP distribution. 

Appendix A: Dependency Jars

You can find the jars from the location ${cep.home}/repository/components/plugins/ of CEP or BAM pack.

  1. org.wso2.carbon.logging_4.2.0.jar
  2. commons-pool_1.5.6.*.jar
  3. httpclient_4.2.5.*.jar
  4. httpcore_4.3.0.*.jar
  5. commons-httpclient_3.1.0.*.jar
  6. commons-codec_1.4.0.*.jar
  7. slf4j.log4j*.jar
  8. slf4j.api_*.jar
  9. axis2_1.6.1.*.jar
  10. axiom_1.2.11.*.jar
  11. wsdl4j_1.6.2.*.jar
  12. XmlSchema_1.4.7.*.jar
  13. neethi_*.jar
  14. org.wso2.securevault_*.jar
  15. org.wso2.carbon.databridge.agent.thrift_*.jar
  16. org.wso2.carbon.databridge.commons.thrift_*.jar
  17. org.wso2.carbon.databridge.commons_*.jar
  18. libthrift_*.jar

Appendix B: Maven POM Dependencies

Add the following WSO2 nexus repo and dependencies to pom.xml at the corresponding sections.



Kavith Thiranga LokuhewageHow to use DTO Factory in Eclipse Che

What is a DTO?

Data transfer objects are used in Che for the communication between client and server. At the code level, a DTO is just an interface annotated with @DTO (com.codenvy.dto.shared.DTO). The interface should contain getters and setters (following bean naming conventions) for each and every field we need in the object.

 For example, following is a DTO with a single String field.

@DTO
public interface HelloUser {
    String getHelloMessage();
    void setHelloMessage(String message);
}

By convention, we need to put these DTOs in a shared package as they will be used by both the client and server side.

DTO Factory 

DTO Factory is a factory available on both the client and server sides, which can be used to serialize/deserialize DTOs. It internally uses generated DTO implementations (described in the next section) to get this job done. Yet it has a properly encapsulated API, and developers can simply use a DtoFactory instance directly.

For client side   : com.codenvy.ide.dto.DtoFactory
For server side  : com.codenvy.dto.server.DtoFactory

HelloUser helloUser = DtoFactory.getInstance().createDto(HelloUser.class);

Above code snippet shows how to initialize a DTO using DTOFactory. As mentioned above, proper DtoFactory classes should be used by client or server sides. 

Deserializing in client side

//important imports

//invoke helloService
Unmarshallable<HelloUser> unmarshaller = unmarshallerFactory.newUnmarshaller(HelloUser.class);

helloService.sayHello(sayHello, new AsyncRequestCallback<HelloUser>(unmarshaller) {
    @Override
    protected void onSuccess(HelloUser result) {
        // called with the deserialized DTO
    }

    @Override
    protected void onFailure(Throwable exception) {
        // handle the failure
    }
});

When invoking a service that returns a DTO, the client side should register a callback created using the relevant unmarshaller factory. Then the onSuccess method will be called with a deserialized DTO.

De-serializing in server side

public ... sayHello(SayHello sayHello) {
    ... sayHello.getHelloMessage() ...
}

Everest (the JAX-RS implementation in Che) automatically deserializes DTOs when they are used as parameters in REST services. It identifies a serialized DTO by the declared media type - @Consumes(MediaType.APPLICATION_JSON) - and uses the generated DTO implementations to deserialize it.

DTO maven plugin

As mentioned earlier, for DtoFactory to function properly, it needs some generated code containing the concrete logic to serialize/deserialize DTOs. The GWT compiler should be able to access the generated code for the client side, and the generated code for the server side should go into the jar file.

Che uses a special maven plugin called “codenvy-dto-maven-plugin” to generate these codes. Following figure illustrates a sample configuration of this plugin. It contains separate executions for client and server sides. 

We have to input the correct package structures and the file paths to which these generated files should be copied, along with any other dependencies the DTOs in the current project need.

package - the package in which the DTO interfaces reside
outputDirectory - the directory to which the generated files should be copied
genClassName - the class name for the generated class

You should also configure your Maven build to use these generated classes as a resource when compiling and packaging. Just add the following line to the resources in the build section.


Paul FremantleGNU Terry Pratchett on WSO2 ESB / Apache Synapse

If any of you are following the GNU Terry Pratchett discussion on Reddit, the BBC or the Telegraph, then you might be wondering how to do this in WSO2 ESB or Apache Synapse. It's very simple. Here you go. Enjoy.
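A sketch of how this can be done with the standard Synapse header mediator (the sequence name is illustrative; attach it wherever your responses flow out):

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="out">
  <!-- Add the GNU Terry Pratchett clacks header to every response -->
  <header name="X-Clacks-Overhead" value="GNU Terry Pratchett" scope="transport"/>
  <send/>
</sequence>
```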

Madhuka UdanthaCouchDB with Fauxton in windows 8

This post is mainly about installing and running 'Fauxton' in a Windows environment. Fauxton is the new web UI for CouchDB. For this post I will be using Windows 8 (64-bit).

Prerequisite for Fauxton

1. nodejs (Download from here)

2. npm (now npm comes with node)

3. CouchDB (installed from binaries or sources; I will have a post on installing CouchDB from source on Windows later)

4. git (git or tortoisegit tool)


To test the prerequisites, open a Windows cmd and type the following to check each of their versions. Here are mine.

node  --version

npm --version

git --version


5. Start CouchDB. Open up Futon to test that it works fine: go to http://localhost:5984/_utils/


Here you should see the Futon UI.

Now we have all the things we need for 'Fauxton'.

Build ‘Fauxton’

1. Get git clone from ‘

2. Go to the directory (cd couchdb-fauxton)
3. enter ‘npm install’ on cmd

4. Install the grunt-cli by typing

npm install -g grunt-cli



There is a small change to be done to run on Windows.

'./node_modules/react-tools/bin/jsx -x jsx app/addons/ app/addons/',
update to

'node ./node_modules/react-tools/bin/jsx -x jsx app/addons/ app/addons/',

You can try 'uname -a' to find OS information and, depending on it, switch the code.



Dev Server

As I am looking to do some development here, I will start the dev server, which is the easiest way to use Fauxton. Type

grunt dev


go to http://localhost:8000/#/_all_dbs



Here you have a nice way to interact with CouchDB, and you can also see the REST calls it makes. Work on the Fauxton implementation is ongoing and making good progress in preparation for CouchDB 2.0. The new Fauxton sidebar redesign has just been merged [1,2].

For example, http://localhost:8000/_users etc. Here is the API being called from the UI:


Enjoy CouchDB with the new UI, Fauxton.




sanjeewa malalgodaHow to cleanup old and unused tokens in WSO2 API Manager

When we use WSO2 API Manager over a few months, we may accumulate a lot of expired, revoked and inactive tokens in the IDN_OAUTH2_ACCESS_TOKEN table.
As of now we do not clear these entries, for logging and audit purposes.
But as the table grows over time we may need to clear it, since having a large number of entries will slow down token generation and validation.
So in this post we will discuss clearing unused tokens in API Manager.

Most importantly, we should not try this against the actual deployment first, to prevent data loss.
First take a dump of the running server's database.
Then perform these instructions against the dump.
Then start a server pointing to the updated database and test thoroughly to verify that there are no issues.
Once you are confident with the process you may schedule it for a server maintenance window.
Since deleting table entries may take a considerable amount of time, it is advisable to test with the dumped data before the actual cleanup task.

Stored procedure to cleanup tokens

  • Back up the existing IDN_OAUTH2_ACCESS_TOKEN table.
  • Turn off SQL_SAFE_UPDATES.
  • Delete the non-active tokens other than a single record for each state for each combination of CONSUMER_KEY, AUTHZ_USER and TOKEN_SCOPE.
  • Restore the original SQL_SAFE_UPDATES value.

DROP PROCEDURE IF EXISTS `cleanup_tokens`;

CREATE PROCEDURE `cleanup_tokens` ()


-- 'Turn off SQL_SAFE_UPDATES'

-- 'Keep the most recent INACTIVE key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'

-- 'Keep the most recent REVOKED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'

-- 'Keep the most recent EXPIRED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'

-- 'Restore the original SQL_SAFE_UPDATES value'
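A sketch of how one of the deletion steps in that outline could be written in MySQL — shown for the EXPIRED state; INACTIVE and REVOKED are analogous. The column names follow the API Manager schema of that era, so verify them against your version before running anything:

```sql
-- Delete EXPIRED tokens, keeping only the most recent one per
-- CONSUMER_KEY / AUTHZ_USER / TOKEN_SCOPE combination
DELETE t FROM IDN_OAUTH2_ACCESS_TOKEN t
JOIN (SELECT CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE,
             MAX(TIME_CREATED) AS LATEST
      FROM IDN_OAUTH2_ACCESS_TOKEN
      WHERE TOKEN_STATE = 'EXPIRED'
      GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE) latest
  ON t.CONSUMER_KEY = latest.CONSUMER_KEY
 AND t.AUTHZ_USER   = latest.AUTHZ_USER
 AND t.TOKEN_SCOPE  = latest.TOKEN_SCOPE
WHERE t.TOKEN_STATE = 'EXPIRED'
  AND t.TIME_CREATED < latest.LATEST;
```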


Schedule event to run cleanup task per week
DROP EVENT IF EXISTS `cleanup_tokens_event`;
CREATE EVENT `cleanup_tokens_event`
      ON SCHEDULE EVERY 1 WEEK STARTS '2015-01-01 00:00:00'
      DO CALL `WSO2AM_DB`.`cleanup_tokens`();

-- 'Turn on the event_scheduler'
SET GLOBAL event_scheduler = ON;

Muhammed ShariqTroubleshooting WSO2 server database operations with log4jdbc-log4j2

If you are using WSO2 Carbon based servers and are facing issues related to the database, there are a few steps you can take to rectify them. Since Carbon 4.2.0 based products use the Tomcat JDBC Connection Pool, the first thing you could do is try tuning the datasource parameters in the master-datasources.xml (or *-datasources.xml) file located in the ${CARBON_HOME}/repository/conf/datasources/ directory. Some of the parameters you might want to double check are;

  1. Set the "validationQuery" parameter
  2. Set "testOnBorrow" to "true"
  3. Set a "validationInterval" and try tuning it to fit your environment
For a detailed explanation about those properties and also addition parameters that can be used to tune the JDBC pool, please visit the Tomcat site listed above.

Even though these parameters might help fix some of the JDBC issues you'd encounter, there might be instances where you'd want additional information to understand what's going on between the WSO2 server and the underlying database. 

We can use log4jdbc-log4j2, an improvement over log4jdbc, to do an in-depth analysis of the JDBC operations between the WSO2 server and the database. In this post I'll explain how to configure log4jdbc-log4j2 with WSO2 servers.

To set up a WSO2 server with log4jdbc-log4j2, follow the steps below (I am assuming that the server has already been configured to point to the external database and set up with the necessary JDBC driver etc.):
  1. Download the log4jdbc-log4j2 jar and copy it to the ${CARBON_HOME}/repository/components/lib directory. 
  2. Add "log4jdbc" to the JDBC URL (the <url> parameter in the datasource configuration), so the URL looks like:
  3. jdbc:log4jdbc:mysql://localhost:3306/governance

  4. Change the "driverClassName" to "net.sf.log4jdbc.sql.jdbcapi.DriverSpy" as follows:
  5. net.sf.log4jdbc.sql.jdbcapi.DriverSpy

  6. To direct the log4jdbc-log4j2 output to a separate log file, add the following entries to the log4j configuration file located in the conf/ directory:
  7. log4j.logger.jdbc.connection=DEBUG, MySQL
    log4j.logger.jdbc.audit=DEBUG, MySQL

  8. Finally, you need to start the server with the system property.

Note: You can set the system property in the startup script located in the bin/ directory for ease of use.
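Putting steps 2 and 4 together, the modified datasource entry in master-datasources.xml would look roughly like the following. Only the two changed elements matter here; the datasource name and credentials are illustrative:

```xml
<datasource>
    <name>WSO2_GOV_DB</name>
    <definition type="RDBMS">
        <configuration>
            <!-- URL prefixed with "log4jdbc" so the proxy driver intercepts JDBC calls -->
            <url>jdbc:log4jdbc:mysql://localhost:3306/governance</url>
            <!-- log4jdbc-log4j2 proxy driver; it delegates to the real MySQL driver -->
            <driverClassName>net.sf.log4jdbc.sql.jdbcapi.DriverSpy</driverClassName>
            <username>wso2carbon</username>
            <password>wso2carbon</password>
        </configuration>
    </definition>
</datasource>
```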

Now that you have the log4jdbc-log4j2 library and the required configurations in place, you can start using the server. The JDBC debug logs will be printed to the mysql-profile.log file located in the logs/ directory. There are six different loggers you can use to troubleshoot different types of problems; check section 4.2.2 of the log4jdbc-log4j2 documentation for more information on the different logging options.

Good luck !!!

Sriskandarajah SuhothayanBecoming a Master in Apache Maven 3

Writing programs in Java is cool; it's a very powerful language with the right amount of flexibility, which makes a developer's life easy. But when it comes to compiling, building and managing releases of a project, it is not that easy: Java has the same issues encountered by other programming languages.

To solve this problem, build tools like Apache ANT and Apache Maven have emerged. ANT is a very flexible tool which allows users to do almost anything when it comes to builds, maintenance and releases. Having said that, since it is so flexible, it is quite hard to configure and manage; every project using ANT uses it in its own way, and hence projects using ANT lose their consistency. Apache Maven, on the other hand, is not as flexible as ANT by default, but it follows an amazing concept, "Convention over configuration", which gives the right mix of convention and configuration for you to easily create, build, deploy and even manage releases at an enterprise level.

For example, Maven always works with defaults, and you can easily create and build a Maven project with just the following snippet in the pom.xml file of your project.
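A minimal pom.xml of the kind described, containing nothing but the mandatory coordinates (the groupId/artifactId/version values here are illustrative), looks like:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>sample-app</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</project>
```

Everything else — source locations, packaging type, output directories — is filled in by Maven's conventions.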


And this little configuration is tied to many conventions:

  • The Java source code is available at {base-dir}/src/main/java
  • Test cases are available at {base-dir}/src/test/java
  • A JAR file type of artifact is produced
  • Compiled class files are copied into {base-dir}/target/classes
  • The final artifact is copied into {base-dir}/target

But there are cases where we need to go a step further and break the rules, with reason! If we need to change the above defaults, it is just a matter of adding the Maven build plugin and the artifact type to the project tag as below.

I came across this great book, "Mastering Apache Maven 3" by Prabath Siriwardena, which gives you all the bits and pieces, from getting started to eventually becoming a master of Maven. From it you will get to know the fundamentals and learn when to break the conventions, with reasons. It helps you develop and manage large, complex projects with confidence by providing enterprise-level knowledge of the whole Maven infrastructure.

This book covers Maven configuration from the basics: how to construct and build a Maven project, manage build lifecycles, introduce useful functionality through Maven plugins, and write your own custom plugins when needed. It also provides steps for building distributable archives with Maven assemblies that adhere to a user-defined layout and structure, demonstrates the use of Maven archetypes to easily construct Maven projects, and shows how to create new archetypes so your developers and customers can quickly start on your project type without any configuration or replicated work. Further, it helps you host and manage your Maven artifacts in repositories using Maven repository management, and, most importantly, explains the best practices to keep your projects in line with enterprise standards.

Srinath PereraWhy We need SQL like Query Language for Realtime Streaming Analytics?

I was at O'Reilly Strata last week, and interest in realtime analytics was certainly at its peak.

Realtime analytics comes in two flavours:
  1. Realtime Streaming Analytics (static queries, given once, that do not change; they process data as it comes in, without storing it. CEP, Apache Storm, Apache Samza, etc. are examples of this.)
  2. Realtime Interactive/Ad-hoc Analytics (users issue ad-hoc dynamic queries and the system responds. Druid, SAP HANA, VoltDB, MemSQL, and Apache Drill are examples of this.)
In this post, I am focusing on Realtime Streaming Analytics. (Ad-hoc analytics uses a SQL-like query language anyway.)

Still, when thinking about realtime analytics, people think only of counting use cases. However, that is the tip of the iceberg. Due to the time dimension inherent in realtime use cases, there is a lot more you can do. Let us look at a few common patterns.
  1. Simple counting (e.g. failure count)
  2. Counting with Windows ( e.g. failure count every hour)
  3. Preprocessing: filtering, transformations (e.g. data cleanup)
  4. Alerts , thresholds (e.g. Alarm on high temperature)
  5. Data Correlation, Detect missing events, detecting erroneous data (e.g. detecting failed sensors)
  6. Joining event streams (e.g. detect a hit on soccer ball)
  7. Merge with data in a database, collect, update data conditionally
  8. Detecting Event Sequence Patterns (e.g. small transaction followed by large transaction)
  9. Tracking - follow some related entity’s state in space, time etc. (e.g. location of airline baggage, vehicle, tracking wild life)
  10. Detect trends – Rise, turn, fall, Outliers, Complex trends like triple bottom etc., (e.g. algorithmic trading, SLA, load balancing)
  11. Learning a Model (e.g. Predictive maintenance)
  12. Predicting next value and corrective actions (e.g. automated car)

Why do we need a SQL-like query language for Realtime Streaming Analytics?

Each of the above has come up in real use cases, and we have implemented them using SQL-like CEP query languages. Knowing the internals of implementing core CEP concepts like sliding windows and temporal query patterns, I do not think every streaming use case developer should rewrite them. The algorithms are not trivial, and they are very hard to get right!

Instead, we need higher levels of abstraction. We should implement those once and for all, and reuse them. The best lesson we can learn here comes from Hive and Hadoop, which do exactly that for batch analytics. I have explained Big Data with Hive many times; most people get it right away. Hive has become the major programming API for most Big Data use cases.

The following is a list of reasons for a SQL-like query language:
  1. Realtime analytics are hard. Not every developer wants to hand-implement sliding windows, temporal event patterns, etc.
  2. It is easy to follow and learn for people who know SQL, which is pretty much everybody.
  3. SQL-like languages are expressive, short, sweet and fast!!
  4. SQL-like languages define core operations that cover 90% of the problems.
  5. Yet experts can dig in when they like!
  6. Realtime analytics runtimes can better optimize execution with a SQL-like model. Most optimisations have already been studied, and there is a lot you can borrow from database optimisations.
Finally, what are such languages? There are a lot defined in the world of Complex Event Processing (e.g. WSO2 Siddhi, Esper, Tibco StreamBase, IBM InfoSphere Streams, etc.). SQLstream has a fully ANSI SQL compliant version. Last week I did a talk at Strata discussing this problem in detail and how CEP could fit the bill. You can find the slide deck below.

Nuwan BandaraMulti-tenant healthcare information systems integration

Scenario: A single healthcare information system needs to be exposed to different healthcare providers (hospitals). The system needs to pass HL7 messages that come in via HTTP (API calls) to an HL7 receiver (over TCP) reliably. To do this, enable the HL7 transport senders in axis2.xml and axis2_blocking_client.xml in WSO2 ESB. The following config shows the ESB configuration for a tenant.

sanjeewa malalgodaConfigure WSO2 API Manager 1.8.0 with reverse proxy (with proxy context path)

Remove current installation of Nginx
sudo apt-get purge nginx nginx-common nginx-full

Install Nginx
sudo apt-get install nginx

Edit configurations
sudo vi /etc/nginx/sites-enabled/default

Create SSL certificates and copy them to the ssl folder:
sudo mkdir -p /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

 Sample configuration:

server {

       listen 443;
       ssl on;
       ssl_certificate /etc/nginx/ssl/nginx.crt;
       ssl_certificate_key /etc/nginx/ssl/nginx.key;

       location /apimanager/carbon {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/carbon/;
           proxy_redirect  https://localhost:9443/carbon/  https://localhost/apimanager/carbon/;
           proxy_cookie_path / /apimanager/carbon/;
       }

       location ~ ^/apimanager/store/(.*)registry/(.*)$ {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       }

       location ~ ^/apimanager/publisher/(.*)registry/(.*)$ {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       }

       location /apimanager/publisher {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/publisher;
           proxy_redirect  https://localhost:9443/publisher  https://localhost/apimanager/publisher;
           proxy_cookie_path /publisher /apimanager/publisher;
       }

       location /apimanager/store {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/store;
           proxy_redirect https://localhost:9443/store https://localhost/apimanager/store;
           proxy_cookie_path /store /apimanager/store;
       }
}
To start or stop Nginx, use the following commands:

sudo /etc/init.d/nginx start
sudo /etc/init.d/nginx stop

API Manager configurations

Add the following API Manager configurations:

In the API Store, edit the wso2am-1.8.0/repository/deployment/server/jaggeryapps/store/site/conf/site.json file and add the following:

  "reverseProxy" : {
       "enabled" : true,
       "host" : "localhost"
  }

In the API Publisher, edit the wso2am-1.8.0/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json file and add the following:

   "reverseProxy" : {
       "enabled" : true,
       "host" : "localhost"
   }

Edit /repository/conf/carbon.xml and update the following properties.


Then start API Manager.
The server URLs will then look something like this:


Ajith VitharanaAdd registry permission for roles using an admin service - WSO2 products.

You can use the ResourceAdminService to perform that task.

i) Open carbon.xml and enable the WSDL view for admin services.
ii) Use the following WSDL endpoint to generate the SoapUI project.


Select the "addRolePermission" request.
<soap:Envelope xmlns:soap="" xmlns:ser="">
pathToAuthorize    - The registry path on which permissions need to be set.
roleToAuthorize     - The role name that needs to be authorized.
actionToAuthorize - This can be one of the following values:

 2 - READ
 3 - WRITE

permissionType - This can be one of the following values:

 1 - Allowed
 2 - Denied
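Putting those parameters together, the body of an addRolePermission request takes roughly the following shape. The ser prefix is bound by SoapUI from the WSDL; the path and role values here are illustrative (this example would grant READ/Allowed):

```xml
<soap:Body>
   <ser:addRolePermission>
      <ser:pathToAuthorize>/_system/governance/myresource</ser:pathToAuthorize>
      <ser:roleToAuthorize>testrole</ser:roleToAuthorize>
      <ser:actionToAuthorize>2</ser:actionToAuthorize>
      <ser:permissionType>1</ser:permissionType>
   </ser:addRolePermission>
</soap:Body>
```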

Ajith VitharanaAggregate two REST responses- WSO2 ESB

There are two mock APIs (ListUsersAPI and UserRolesAPI).

 1. ListUsersAPI
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="" name="ListUsersAPI" context="/services/users">
   <resource methods="GET" url-mapping="/*">
         <payloadFactory media-type="json">
            <format>{     "persons":{        "person":[           {              "Id":"1",            "givenName":"ajith",            "lastName":"vitharana",            "age":"25",            "contactInfos":[                 {                    "InfoId":"1",                  "department":"1",                  "contactType":"email",                  "value":""               },               {                    "InfoId":"2",                  "department":"1",                  "contactType":"mobile",                  "value":"111111111"               },               {                    "InfoId":"3",                  "department":"1",                  "contactType":"home",                  "value":"Magic Dr,USA"               }            ]         },         {              "Id":"2",            "givenName":"shammi",            "lastName":"jagasingha",            "age":"30",            "contactInfos":[                 {                    "InfoId":"1",                  "department":"1",                  "contactType":"email",                  "value":""               },               {                  "InfoId":"2",                  "department":"1",                  "contactType":"mobile",                  "value":"2222222222"               },               {                    "InfoId":"3",                  "department":"1",                  "contactType":"home",                  "value":"Magic Dr,USA"               }            ]         }      ]   }}</format>
            <args />
         <property name="NO_ENTITY_BODY" scope="axis2" action="remove" />
         <property name="messageType" value="application/json" scope="axis2" type="STRING" />
         <respond />
Sample output.
curl -X GET  http://localhost:8280/services/users

                  "value":"Magic Dr,USA"
                  "value":"Magic Dr,USA"
2. UserRolesAPI
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns=""
   <resource methods="GET" uri-template="/{personid}">
         <filter source="get-property('uri.var.personid')" regex="1">
               <payloadFactory media-type="json">
<format>{     "Id":1,   "roles":[        {           "roleId":1,         "personKey":1,         "role":"Developer"      },      {           "roleId":2,         "personKey":1,         "role":"Engineer"      }   ]}</format>
               <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
               <property name="messageType"
         <filter source="get-property('uri.var.personid')" regex="2">
               <payloadFactory media-type="json">
                  <format>{"personId": 2,"roles": [{ "personRoleId": 1, "personKey": 2, "role": "Manager" },{ "personRoleId": 2, "personKey": 2, "role": "QA" }]}</format>
               <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
               <property name="messageType"
Sample output.
curl -X GET  http://localhost:8280/services/roles/1 


3. UserDetailsAPI is the aggregated API
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns=""
   <resource methods="GET">
               <http method="get" uri-template="http://localhost:8280/services/users"/>
         <iterate xmlns:soapenv=""
                  <property xmlns:ns="http://org.apache.synapse/xsd"
                  <property xmlns:ns="http://org.apache.synapse/xsd"
                        <http method="get"
                  <payloadFactory media-type="xml">
                        <combined xmlns="">                        $1$2                        </combined>
                        <arg xmlns:ns="http://org.apache.synapse/xsd"
                        <arg xmlns:ns="http://org.apache.synapse/xsd"
         <property name="ENCLOSING_ELEMENT" scope="default">
            <wrapper xmlns=""/>
         <aggregate id="it1">
               <messageCount min="2" max="-1"/>
            <onComplete xmlns:s12=""
               <property name="messageType"
Sample output
curl -X GET  http://localhost:8280/userdetails

                     "value":"Magic Dr,USA"
                     "value":"Magic Dr,USA"

Chanaka FernandoEnabling audit logs for WSO2 carbon based servers

Audit logs provide very useful information about the users who have tried to access the server. By default, most WSO2 Carbon based products (ESB, APIM, DSS) do not have this logging enabled. In production environments, it is always better to enable audit logs, for various reasons.

All you need to do is add the following section to the log4j configuration file which resides in the <CARBON_HOME>/repository/conf directory.

# Configure audit log for auditing purposes
log4j.appender.AUDIT_LOGFILE.layout.ConversionPattern=[%d] %P%5p - %x %m %n
log4j.appender.AUDIT_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
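The two layout lines above belong to a larger appender definition. A typical complete block, modelled on the stock WSO2 log4j.properties (verify the exact class and property names against your distribution), looks like this:

```
# Configure audit log for auditing purposes
log4j.logger.AUDIT_LOG=INFO, AUDIT_LOGFILE
log4j.appender.AUDIT_LOGFILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.AUDIT_LOGFILE.File=${carbon.home}/repository/logs/audit.log
log4j.appender.AUDIT_LOGFILE.Append=true
log4j.appender.AUDIT_LOGFILE.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.AUDIT_LOGFILE.layout.ConversionPattern=[%d] %P%5p - %x %m %n
log4j.appender.AUDIT_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
log4j.appender.AUDIT_LOGFILE.threshold=INFO
```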

Once you enable this, you can see the audit log file is created under <CARBON_HOME>/repository/logs directory. It will contain information similar to below mentioned lines.

[2015-03-12 10:44:01,565]  INFO -  'admin@carbon.super [-1234]' logged in at [2015-03-12 10:44:01,565-0500]
[2015-03-12 10:44:45,825]  INFO -  User admin successfully authenticated to perform JMX operations.
[2015-03-12 10:44:45,826]  INFO -  User : admin successfully authorized to perform JMX operations.
[2015-03-12 10:44:45,851]  WARN -  Unauthorized access attempt to JMX operation.
java.lang.SecurityException: Login failed for user : jmx_user. Invalid username or password.
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(
at sun.reflect.DelegatingMethodAccessorImpl.invoke(
at java.lang.reflect.Method.invoke(
at sun.rmi.server.UnicastServerRef.dispatch(
at sun.rmi.transport.Transport$
at sun.rmi.transport.Transport$
at Method)
at sun.rmi.transport.Transport.serviceCall(
at sun.rmi.transport.tcp.TCPTransport.handleMessages(
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(
at sun.rmi.transport.tcp.TCPTransport$
at java.util.concurrent.ThreadPoolExecutor.runWorker(
at java.util.concurrent.ThreadPoolExecutor$

Nuwan BandaraWSO2 API Manager distributed deployment architecture

The solutions architecture (WSO2 API-M v1.8.0 with WSO2 BAM v2.5.0)
The deployment architecture (WSO2 API-M v1.8.0 with WSO2 BAM v2.5.0)

Srinath PereraEmbedding WSO2 Siddhi from Java

Siddhi is the CEP engine that powers WSO2 CEP. WSO2 CEP is the server, which can accept messages over the network via a long list of protocols such as Thrift, HTTP/JSON, JMS, Kafka, and WebSocket.

Siddhi, in contrast, is a Java library. That means you can use it from a Java class, or a Java main method. I personally do this to debug CEP queries before putting them into WSO2 CEP. The following describes how to do it. You can also embed it and create your own apps.

First, add the following jars to the classpath. (You can find them in the WSO2 CEP pack. The jar versions might change with new packs, but whatever is in the same CEP pack will work.)
  1. siddhi-api-2.1.0-wso2v1.jar
  2. antlr-runtime-3.4.jar
  3. log4j-1.2.14.jar
  4. mvel2-2.0.19.jar
  5. siddhi-query-2.1.0-wso2v1.jar
  6. antlr-2.7.7.jar
  7. siddhi-core-2.1.0-wso2v1.jar
  8. commons-pool-1.5.6.jar
  9. stringtemplate-3.2.1.jar

Now you can use Siddhi with the following code. You define a Siddhi engine, add queries, register callbacks to receive results, and send events.

SiddhiManager siddhiManager = new SiddhiManager();
//define stream
siddhiManager.defineStream("define stream StockQuoteStream (symbol string, value double, time long, count long);");
//add CEP queries
siddhiManager.addQuery("from StockQuoteStream[value>20] insert into HighValueQuotes;");
//add callbacks to see results
siddhiManager.addCallback("HighValueQuotes", new StreamCallback() {
    public void receive(Event[] events) {
        EventPrinter.print(events);
    }
});
//send events in to Siddhi
InputHandler inputHandler = siddhiManager.getInputHandler("StockQuoteStream");
inputHandler.send(new Object[]{"IBM", 34.0, System.currentTimeMillis(), 10L});

The events you send in must agree with the event stream you have defined. For example, StockQuoteStream events must carry a string, a double, a long, and a long, as per the stream definition.

See my earlier blog for examples of more queries.

Please see [1] and [2] for more information about the Siddhi query language. If you create a complicated query, you can check intermediate results by adding callbacks to intermediate streams.

Enjoy! Reach us via the wso2 tag on Stack Overflow if you have any questions, or send us a mail.


Madhuka UdanthaInstalling Flask in Windows8

Flask is a lightweight web application framework written in Python. Flask depends on two external libraries, Werkzeug and Jinja2.

  • Werkzeug is a toolkit for Web Server Gateway Interface (WSGI)
  • Jinja2 renders templates

Flask is called a microframework because it keeps the core simple. It has no database abstraction layer, but supports extensions (object-relational mappers, form validation, open authentication technologies). It has Google App Engine compatibility and RESTful request dispatching.


Let's install Flask.
Here we will be using Windows 8 and Python 2.7.
To install all of them at once we will use 'virtualenv', though there are other ways. 'virtualenv' provides a clever way to keep different project environments isolated.

Since we are on Windows, we don't have the easy_install command, so first we have to install it.

1. install pip

1.1 This assumes you are using Python 2.7 on the default path; if you have not added Python to your PATH, add it now:


1.2 Now you have easy_install, so we can use it to install pip with the command below:
easy_install pip

2 Create virtualenv environment

2.1 Install virtualenv in windows

pip install virtualenv

2.2 After virtualenv is installed, we can create our own environment. Browse to your project location from CMD; we usually create a project folder with a venv folder within it:

virtualenv venv

2.3 To activate the corresponding environment:

venv\Scripts\activate

You should now be using your virtualenv (notice how the prompt of your shell has changed to show the active environment).


3. Installing Flask

Run the following command to get Flask installed in our virtualenv:

pip install Flask
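To verify the installation, a minimal Flask app can be dropped into a file. The route and message here are just an illustration:

```python
from flask import Flask

# Create the application object; Flask uses the module name for resource lookup
app = Flask(__name__)

@app.route("/")
def hello():
    # The return value becomes the HTTP response body
    return "Hello, Flask!"

# To serve it, call app.run(debug=True) and browse to http://127.0.0.1:5000/
```

Flask's built-in test client lets you exercise the route without starting the development server.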


Next post, we will write simple REST services.

Nuwan BandaraMulti data center deployment of WSO2 API Manager

Hope this diagram helps you understand the multi data center deployment architecture of WSO2 API Manager.

Dedunu DhananjayaAlfresco: How to write a simple Java based Alfresco web script?

If you want to develop a new feature for Alfresco, the best way is a web script! Let's start with a simple Alfresco web script. First you need to create an Alfresco AMP Maven project using an archetype. In this example I'll use the latest Alfresco version, 5.0.

First I generated an Alfresco All-in-One AMP. (Please refer to my blog post on generating AMP projects.)

If you go through the file structure which is generated, you will find a sample web script. It is a JavaScript based web script. In this example, I'm going to explain how to write a simple Java based Hello World web script. It needs four files: the Java class, service-context.xml, helloworld.get.desc.xml and helloworld.get.html.ftl.
Create the above files in the below locations of your Maven project.
  • - repo-amp/src/main/java/org/dedunu/alfresco/
  • helloworld.get.desc.xml - repo-amp/src/main/amp/config/alfresco/extension/templates/webscripts/org/dedunu/alfresco/helloworld.get.desc.xml
  • helloworld.get.html.ftl - repo-amp/src/main/amp/config/alfresco/extension/templates/webscripts/org/dedunu/alfresco/helloworld.get.html.ftl
  • service-context.xml - repo-amp/src/main/amp/config/alfresco/module/repo-amp/context/service-context.xml
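The descriptor helloworld.get.desc.xml binds the script to a URL. A typical sketch is shown below; the short name, URL and authentication level are assumptions, not taken from the generated project:

```xml
<webscript>
   <shortname>Hello World</shortname>
   <description>Simple Java-backed Hello World web script</description>
   <url>/dedunu/helloworld</url>
   <format default="html">argument</format>
   <authentication>user</authentication>
</webscript>
```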
Use the below command to run the Maven project.

mvn clean install -Prun 

It may take a while to run the project. After that, open a browser window and visit the below URL.


Lali DevamanthriThe Fedora Project for GSoC 2015

The Fedora Project has been selected as a participating organization for GSoC 2015. Refer to the organization Idea Page and contact Org Administrators for more info.

The Fedora project has been represented in the Google Summer of Code program for 9 years and is participating in the 2015 program as well. For students who are looking for challenges and would like to contribute to the world's leading and innovative Linux distro, this could be the chance. Refer to the material and start contacting mentors.

Follow the Main wiki page for Fedora GSoC 2015 for more details.

Sohani Weerasinghe

Setup an SVN server on Ubuntu

This post describes setting up an SVN server on Ubuntu.
  • Install Subversion
$ sudo apt-get install subversion libapache2-svn apache2

  • Configure Subversion
Create a directory where you want to keep your Subversion repositories:
$ sudo mkdir /subversion
$ sudo chown -R www-data:www-data /subversion/

  • Open the Subversion config file dav_svn.conf and add the lines below at the end. Comment out or delete all the other lines in the config file:
<Location /subversion>
DAV svn
SVNParentPath /subversion
AuthType Basic
AuthName "Subversion Repository"
AuthUserFile /etc/apache2/dav_svn.passwd
Require valid-user
</Location>
  • Now create an SVN user using the following command. Here I create a new SVN user called sohani with password msc12345:

$ sudo htpasswd -cm /etc/apache2/dav_svn.passwd sohani
New password: 
Re-type new password: 
Adding password for user sohani

Use the -cm parameters for the first time only; to create another user, use only the -m parameter.
  • Create Subversion repository called sohani.repo under /subversion directory:

$ cd /subversion/
$ sudo svnadmin create sohani.repo
Restart Apache service:

$ sudo service apache2 reload
  • Open the browser and navigate to http://ip-address/subversion/sohani.repo. Enter the SVN username and password which you have created in the earlier step. In my case, username is sohani and password is msc12345.

Sohani Weerasinghe

Maven private remote repository setup

1. Download nexus-2.11.1-01-bundle.tar.gz or the latest version of Nexus OSS.

2. Extract the tar file in your home directory:

$ tar -xvf nexus-2.11.1-01-bundle.tar.gz
Now you will get two directories - nexus-2.11.1-01 and sonatype-work in your home directory.

3. Copy these two directories to /usr/local/ directory 

$ cp -r nexus-2.11.1-01 /usr/local/
$ cp -r sonatype-work /usr/local/

The executable/configuration files related to Nexus are stored in the nexus-2.11.1-01 directory, and the jar files mentioned in pom.xml are stored in the sonatype-work directory.

These jar files are a mirror of your ~/.m2/repository. The first time you issue a mvn package command, all the jars are stored here. From then on, when you issue mvn package again, the jars are downloaded from the Nexus repository instead of from the central repository.

4. Go to the /usr/local/ directory 

$ cd /usr/local/  

5. Create a link to nexus-2.11.1-01 

$ sudo ln -s nexus-2.11.1-01 nexus

6. Now to run nexus, execute the following 

$ bash nexus/bin/nexus console  

Here Nexus is attached to your console. If you close your console, the Nexus server will be terminated. 

7. Then grant permissions as follows:

$ sudo chmod -R 777 nexus-2.11.1-01/
$ sudo chmod -R 777 sonatype-work/

8. Now, in a browser, go to http://localhost:8081/nexus/. The default login user name is admin and the password is admin123.

9. To stop Nexus, just close the terminal or press Ctrl+C. Then copy the following content into your ~/.m2/settings.xml file so that Maven resolves artifacts through Nexus:

        <!--This sends everything else to /public -->

            <!--Enable snapshots for the built in central repo to direct -->
            <!--all requests to nexus via the mirror -->


    <!--make the profile active all the time -->
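The comments above come from the canonical Sonatype mirror configuration; the full settings.xml looks roughly like the following (adjust the URL if Nexus is not running on localhost:8081):

```xml
<settings>
  <mirrors>
    <mirror>
      <!--This sends everything else to /public -->
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://localhost:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
  <profiles>
    <profile>
      <id>nexus</id>
      <!--Enable snapshots for the built in central repo to direct -->
      <!--all requests to nexus via the mirror -->
      <repositories>
        <repository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>central</id>
          <url>http://central</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>true</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <!--make the profile active all the time -->
    <activeProfile>nexus</activeProfile>
  </activeProfiles>
</settings>
```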

10. Finally, add the following lines to your project's pom.xml file:

        <name>My internal repository</name>

        <name>My internal repository</name>
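The two name elements above come from repository declarations. A sketch of the full section, assuming the Nexus public group on localhost and an illustrative repository id, would be:

```xml
<repositories>
    <repository>
        <id>my-internal-repo</id>
        <name>My internal repository</name>
        <url>http://localhost:8081/nexus/content/groups/public</url>
    </repository>
</repositories>
<pluginRepositories>
    <pluginRepository>
        <id>my-internal-repo</id>
        <name>My internal repository</name>
        <url>http://localhost:8081/nexus/content/groups/public</url>
    </pluginRepository>
</pluginRepositories>
```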

Sohani Weerasinghe

Building/Releasing projects with maven release plugin

This post describes building/releasing projects with Maven, using the maven release plugin together with org.wso2.maven:wso2-release-pre-prepare-plugin.

1. Check out the svn repo and create two folders, tag and trunk.

2. Then create a Maven Multi Module project called 'trunk' and add the following sections to the main pom.xml



<!-- This must be set! The plugin throws a NPE when not set -->
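The sections referred to typically include the SCM location and the release plugin configuration, along the lines of the sketch below. The URLs are placeholders, and the comment above most likely refers to one of these configuration elements:

```xml
<scm>
    <connection>scm:svn:https://svn.example.com/repo/trunk</connection>
    <developerConnection>scm:svn:https://svn.example.com/repo/trunk</developerConnection>
    <url>https://svn.example.com/repo/trunk</url>
</scm>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-release-plugin</artifactId>
            <configuration>
                <!-- This must be set! The plugin throws a NPE when not set -->
                <tagBase>https://svn.example.com/repo/tag</tagBase>
                <preparationGoals>clean install</preparationGoals>
            </configuration>
        </plugin>
    </plugins>
</build>
```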



3. In the settings.xml that resides in .m2, you need to add the below sections.



      <name>Human Readable Name for this Mirror.</name>

            <!--Enable snapshots for the built in central repo to direct -->
            <!--all requests to nexus via the mirror -->



4. Then navigate to the checkout location (trunk) of the svn repo and execute the below commands:

mvn release:prepare -Dusername=<svn_uname> -Dpassword=<svn_password>

mvn release:perform

5. If you want to revert the changes, you can execute the below commands:

mvn release:rollback
mvn release:clean

Dedunu DhananjayaAlfresco: .gitignore for Alfresco Maven Projects

If you are an Alfresco developer, you have to develop projects using Alfresco AMP modules. Previously Alfresco used Ant to build projects, but the latest Alfresco SDK uses Apache Maven. AMP Maven projects generate a whole lot of temporary files — files you don't want in your version control system.

Nowadays almost everyone is using Git. If I say Git is the most popular version control system today, I hope a lot of people would agree. In Git you can use a .gitignore file to specify which files should not be added to the repository. So if you list the patterns in .gitignore, Git won't commit unwanted files. For that you need a good .gitignore file. Last year I wrote a blog post which has almost all the file patterns you should omit from a Java project.
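As an illustration, a .gitignore along those lines for an Alfresco Maven project (an assumed subset, not the author's exact list) might contain:

```
# Maven build output
target/
# Compiled classes and log files
*.class
*.log
# IDE metadata (Eclipse, IntelliJ)
.classpath
.project
.settings/
.idea/
*.iml
# Alfresco SDK local runtime data
alf_data_dev/
```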

You can create a file called .gitignore in the root folder of your Git repository, copy the above content into it, and commit it to your Git repository. Now you don't have to worry about unwanted files.

Lali DevamanthriLinux kernel (Trusty HWE) vulnerabilities regression

A flaw was discovered in the Kernel Virtual Machine's (KVM) emulation of
the SYSENTER instruction when the guest OS does not initialize the
SYSENTER MSRs. A guest OS user could exploit this flaw to cause a denial of
service of the guest OS (crash) or potentially gain privileges on the guest
OS. (CVE-2015-0239)

Andy Lutomirski discovered an information leak in the Linux kernel’s Thread
Local Storage (TLS) implementation allowing users to bypass the espfix to
obtain information that could be used to bypass the Address Space Layout
Randomization (ASLR) protection mechanism. A local user could exploit this
flaw to obtain potentially sensitive information from kernel memory.

A restriction bypass was discovered in iptables when conntrack rules are
specified and the conntrack protocol handler module is not loaded into the
Linux kernel. This flaw can cause the firewall rules on the system to be
bypassed when conntrack rules are used. (CVE-2014-8160)

A flaw was discovered with file renaming in the Linux kernel. A local user
could exploit this flaw to cause a denial of service (deadlock and system
hang). (CVE-2014-8559)

A flaw was discovered in how supplemental group memberships are handled in
certain namespace scenarios. A local user could exploit this flaw to bypass
file permission restrictions. (CVE-2014-8989)

A flaw was discovered in how Thread Local Storage (TLS) is handled by the
task switching function in the Linux kernel for x86_64 based machines. A
local user could exploit this flaw to bypass the Address Space Layout
Randomization (ASLR) protection mechanism. (CVE-2014-9419)

Prasad J Pandit reported a flaw in the rock_continue function of the Linux
kernel’s ISO 9660 CDROM file system. A local user could exploit this flaw
to cause a denial of service (system crash or hang). (CVE-2014-9420)

A flaw was discovered in the fragment handling of the B.A.T.M.A.N. Advanced
Meshing Protocol in the Linux kernel. A remote attacker could exploit this
flaw to cause a denial of service (mesh-node system crash) via fragmented
packets. (CVE-2014-9428)

A race condition was discovered in the Linux kernel’s key ring. A local
user could cause a denial of service (memory corruption or panic) or
possibly have unspecified impact via the keyctl commands. (CVE-2014-9529)

A memory leak was discovered in the ISO 9660 CDROM file system when parsing
Rock Ridge ER records. A local user could exploit this flaw to obtain
sensitive information from kernel memory via a crafted iso9660 image.

A flaw was discovered in the Address Space Layout Randomization (ASLR) of
the Virtual Dynamically linked Shared Objects (vDSO) location. This flaw
makes it easier for a local user to bypass the ASLR protection mechanism.

Dmitry Chernenkov discovered a buffer overflow in eCryptfs’ encrypted file
name decoding. A local unprivileged user could exploit this flaw to cause a
denial of service (system crash) or potentially gain administrative
privileges. (CVE-2014-9683)

Update instructions:

The problem can be corrected by updating your system to the following
package versions:

Ubuntu 12.04 LTS:
linux-image-3.13.0-46-generic   3.13.0-46.77~precise1
linux-image-3.13.0-46-generic-lpae  3.13.0-46.77~precise1

After a standard system update you need to reboot your computer to make
all the necessary changes.


Package Information:

John MathonIntegrating IoT Devices. The IOT Landscape.

The IOT Landscape:


How do you integrate IoT devices


Let’s take an example of the devices I am talking about in my review here.  Here is a summary of the functionality and interoperability of a set of IoT devices:


Some of the devices can be operated from a common device or app, but a lot of devices have separate controls.  Just like in the home electronics world, a lot of IoT devices are not compatible with each other in terms of being easily interoperable.  If my goal is to make my life better by leveraging multiple IoT devices to provide “intelligent” functions, it is not obvious how to do this.

Communication Protocols

IoT devices come with one or more protocols they speak.  Some devices can connect with only a certain controller type.  Some devices connect directly to the internet.  The types of connectivity that devices support include Bluetooth Low Energy, Z-Wave, Zigbee, Wifi protocols, CoAP and sometimes a proprietary protocol.  Some devices fall within a standard like Z-Wave or Zigbee, where numerous devices operate according to a predefined set of standards that allows multiple devices to share a similar controller and be operated easily in a common way.

It would obviously make life a lot simpler if everything spoke a single standard IoT protocol such as Z-Wave, which defines a good low power, inexpensive physical transport as well as high interoperability, but each vendor has its own reasons for the particular set of protocols it has chosen to support.  A single standard would also be less necessary if higher level functionality provided sufficient abstraction of the underlying devices, making interoperability less of a concern.  Some hub manufacturers, such as Ninja, are doing that.

The Ninja Sphere is slated to provide support for Z-Wave, Zigbee, Bluetooth, Wifi and other protocols but even it does not support some of the protocols listed in the above list of devices.

Even if a single device could provide integrated local connectivity to all the devices in the above table, it wouldn’t solve several other critical problems.  To operate the devices or leverage the information from them you need higher level services, sometimes in the cloud, or specific applications that don’t support integration.  In that case we need to look at programmatic interfaces to leverage multiple devices.

User Interfaces

Each IoT device usually offers a mobile app for controlling the device.  In some cases, like Z-Wave devices, there are a number of applications that know how to operate and leverage the devices.  There are other standards, like UP (from Jawbone), that allow multiple apps to leverage several IoT devices.

Many IoT devices offer a local interface.  This is particularly true if the device is large enough that you would interact with it, for instance a car.

Almost all IoT devices also offer an internet service that allows you to control the device and operate it remotely.  This is one of the biggest reasons to have IoT devices, so most IoT devices have this.  Unfortunately, if you have 10 devices you may find you have 10 different web services to sign in to, 10 web services that don’t necessarily know about each other, meaning you still don’t have a way to easily integrate devices that aren’t in a common web service.

Programmatic Interfaces

This leaves us with the low level programmatic interfaces and APIs provided by the device manufacturers.  Most manufacturers provide SDKs (software development kits) or APIs for their devices so that 3rd parties can build interfaces to them.  However, with thousands of devices out there and many using their own SDKs, it is hard to imagine a lot of this integration happening across a lot of applications.

Nonetheless if you are your own programmer you may find this a powerful way for you to build some cross device integration.

More desirable than an SDK is using an API to talk to the device or to the device's Web service.  APIs are a relatively uniform way to build integration between devices.  This still requires a programmer to build an application that integrates the devices.

Integration Services / Approaches

The next level of integration is to use an existing Integration service in the cloud to combine different IoT devices.   An example of this is IFTTT (If this then that.)

IFTTT already provides some cross IoT device API integration.


Other articles in this series:

Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

Integrating IoT Devices. The IOT Landscape.

Why would you want to Integrate IOT devices?  Examples from a use case for the home (coming)

An example of integration of diverse sets of IOT devices  (coming)


Dedunu DhananjayaAlfresco: Calculate folder size using Java based WebScript

I was assigned a training task to write a web script that calculates the size of a folder or a file. To do this you need to go through all the nodes recursively; if you don't calculate folder sizes recursively you won't get an accurate result. You will need the following:

  • Java Development Kit 1.7 or later
  • Text Editor or IDE (Eclipse/Sublime Text/Atom)
  • Apache Maven 3 or later
  • Web Browser (Chrome/Firefox/Safari)

For this project, I generated an Alfresco 5 All-in-One Maven project. You don't really need the Alfresco Share module in this project, but I included it because you may need to find a NodeRef ID, and that is easier with Share.  The source code of this project is available on GitHub.
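As a quick sanity check of the recursive-size idea outside Alfresco (this is only an analogy for the web script's logic, not part of it), a shell pipeline can sum file sizes under a folder recursively:

```shell
# Analogy only: recursively sum file sizes under a folder, the same
# idea the Java web script applies to Alfresco nodes.
mkdir -p sizedemo/sub
printf 'abc' > sizedemo/a.txt        # 3 bytes
printf 'defgh' > sizedemo/sub/b.txt  # 5 bytes
total=$(find sizedemo -type f -printf '%s\n' | awk '{s += $1} END {print s}')
echo "total: $total bytes"
```

Just as here every file in every subfolder contributes to the total, the web script must visit every child node of a folder node.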

Create the following files in these locations in your Maven project:
  • size.get.desc.xml - repo-amp/src/main/amp/config/alfresco/extension/templates/webscripts/org/dedunu/alfresco/size.get.desc.xml
  • size.get.html.ftl - repo-amp/src/main/amp/config/alfresco/extension/templates/webscripts/org/dedunu/alfresco/size.get.html.ftl
  • - repo-amp/src/main/java/org/dedunu/alfresco/
  • service-context.xml - repo-amp/src/main/amp/config/alfresco/module/repo-amp/context/service-context.xml
How to test the web script?
Open a terminal, navigate to the project folder, and type the command below.

mvn clean install -Prun -Dmaven.test.skip

It may take a while to start the Alfresco Repository and Share server. Wait till it finishes completely. 

Then open a web browser and go to http://localhost:8080/share. Then login. Go to Document library.

Find a folder and click on "View Details". Then copy NodeRef from browser as shown below.

Open a new tab and type below URL. (Replace <NodeRef> with the NodeRef you copied from Alfresco Share interface.)

If you have followed instruction properly, you will get a page like below.

If you have any questions regarding this example, please comment! Enjoy Alfresco!

Malintha AdikariGenerate random number in WSO2 ESB

This is how we can generate a random number and store it in a property for later use inside a WSO2 ESB sequence.

Here we are using the script mediator to generate a random number with JavaScript.

   <script language="js">
      var randomnumber1 = Math.floor((Math.random() * 10000) + 1);
      mc.setProperty("refcodenumber", randomnumber1);
   </script>
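To see what range this produces, here is a rough shell analogue of the JavaScript expression above (an illustration only, not ESB configuration):

```shell
# Rough shell analogue of Math.floor((Math.random() * 10000) + 1):
# an integer in [1, 10000]. $RANDOM alone is only 0..32767, so two
# draws are combined before taking the modulus.
refcodenumber=$(( (RANDOM * 32768 + RANDOM) % 10000 + 1 ))
echo "$refcodenumber"
```

In the ESB, the equivalent value ends up in the "refcodenumber" property for later mediators to read.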

Malintha AdikariHow to send string content as the response from WSO2 ESB Proxy/API

We can send XML or JSON content as the response from our WSO2 ESB proxy/REST API. But there may be a scenario where we want to send string content (which is not in XML format) as the response of our service. The following synapse code snippet shows how to do it.

As an example, suppose you have to send the following string content to your client:

<result1>malintha</result1>+<result2>adikari</result2>


Note :
  • The above is not in XML format, so we cannot generate it directly through the payload factory mediator.
  • We have to send < and > symbols inside the response, but WSO2 ESB doesn't allow those characters directly inside your synapse configuration.
1. First you have to encode the above expected response. You can use this tool to encode your xml. We get the following after encoding in our example:

&lt;result1&gt;malintha&lt;/result1&gt;+&lt;result2&gt;adikari&lt;/result2&gt;
Note : If you want to encode dynamic payload content you can use script mediator or class mediator for that task

2. Now we can attach the required string content to our payload as follows:

<payloadFactory media-type="xml">
   <format>
      <ms11:text xmlns:ms11="">$1</ms11:text>
   </format>
   <args>
      <arg value="&lt;result1&gt;malintha&lt;/result1&gt;+&lt;result2&gt;adikari&lt;/result2&gt;"/>
   </args>
</payloadFactory>
<property name="messageType" value="text/plain" scope="axis2"/>

Here we are using the payload factory mediator to create our payload. Notice that the media-type is still XML. We load our string content as the argument value and finally change the message type to "text/plain", so the proxy returns the string content as its response.

Malintha AdikariString concatenation in WSO2 ESB

There may be occasions where we want to concatenate two strings to create a new string (the value of a property) in WSO2 ESB. Here I take two property values and join them into a new property.

1. Property #1

 <property name="myFirstProperty" value="WSO2"/>  

2. Property #2

 <property name="mySecondProperty" value="ESB"/>  

3. Concatenating above two properties

  <property name="ConcatenatedProperty" expression="fn:concat(get-property('myFirstProperty'),get-property('mySecondProperty'))"/>  

Note: You can provide any number of values to be concatenated to the fn:concat() function.

Value of "ConcatenatedProperty" property is "WSO2ESB"

Malintha AdikariHow to setup WSO2 p2 repo in your local machine

You can point to the WSO2 p2 repository online through its URL (you can use any release by replacing turing). But if you cannot fetch features from the internet, a better solution is to set up the whole p2 repo locally and then point to your local installation. Follow these steps to install and use it.

1. Go to your preferred folder to install the p2 repo in.

Mine is "/home/malintha/my_p2".

2. Open a terminal there and execute the command below.

wget -r  -l inf -p -e robots=off

Note: Here "" is the URL of the p2 repo. Change it according to the release.

3. Now everything in the p2 repo will be downloaded to your local location.

4. After the download completes, you can use the local path

"/home/malintha/my_p2/" to point to the locally installed p2 repo when you need it.

Malintha AdikariHow to grant access permission for user from external machine to MySQL database

I faced this problem while doing a deployment. I created a database on a MySQL server running on one machine, then wanted to grant access to this database from another machine. This is a two-step process.

Suppose you want to grant access for a remote machine, and your DB server is running on another machine.

1. Create a user for the remote machine with your preferred username and password.

mysql> CREATE USER `abcuser`@`` IDENTIFIED BY 'abcpassword'; 

 Here "abcuser" is the username and "abcpassword" is the password for that user.

2. Then grant permission for that user to your database 

GRANT ALL PRIVILEGES ON registry.* TO 'abcuser'@''; 

Here "registry" is the DB name

That's it!


sanjeewa malalgodaHow to pass Basic authentication headers to backend server via API Manager

First let me explain how authorization headers work in API Manager. When a user sends an Authorization header along with an API request, we use it for API authentication and drop it from the outgoing message.
If you want to pass the client's auth headers to the back end server without dropping them at the gateway, you can disable that behavior with the following parameter.
Update the following property in /repository/conf/api-manager.xml and restart the server (in API Manager 1.8.0 this property is named as shown below).

<RemoveOAuthHeadersFromOutMessage>false</RemoveOAuthHeadersFromOutMessage>

Then the gateway will not drop authorization headers sent by the user, so whatever the user sends will go to the back end as well.

Send API request with Basic Auth header.

Incoming message to the API gateway. Note that we do not use API Manager authentication here; for this we can set the resource auth type to None when we create the API, and then send the Basic auth header that needs to be passed to the back end server.
[2015-02-27 18:08:05,010] DEBUG - wire >> "GET /test-sanjeewa1/1.0.0 HTTP/1.1[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "User-Agent: curl/7.32.0[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "Host:[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "Authorization: Basic 2690b6dd2af649782bf9221fa6188[\r][\n]"
[2015-02-27 18:08:05,011] DEBUG - wire >> "[\r][\n]"

Outgoing message from the gateway. You can see the client's Basic auth header is present in the outgoing message.
[2015-02-27 18:08:05,024] DEBUG - wire << "GET http://localhost/apim1/ HTTP/1.1[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Authorization: Basic 2690b6dd2af649782bf9221fa6188[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Host: localhost:80[\r][\n]"
[2015-02-27 18:08:05,025] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"
[2015-02-27 18:08:05,026] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"
[2015-02-27 18:08:05,026] DEBUG - wire << "[\r][\n]"

The other possible option is setting Basic auth headers at the API gateway. For this we have 2 options.

01. Define Basic auth headers in API when you create API(see attached image). 

In the API implement phase you can provide the required basic auth details. The API Manager gateway will then send the provided authorization details as a Basic auth header to the back end. Here we can let the client send a Bearer token Authorization header with the API request; the gateway will drop it (after Bearer token validation) and pass the Basic auth header to the back end.
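As a side note, the value of a Basic Authorization header is simply base64 of username:password; for example, admin:admin encodes to the value that appears in the gateway wire logs:

```shell
# A Basic Authorization header value is base64("username:password").
# admin:admin yields the value visible in the gateway's outgoing message.
printf 'admin:admin' | base64
# prints YWRtaW46YWRtaW4=
```

This is handy for checking which credentials a gateway is actually forwarding.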

Incoming message to the API gateway. Here the user sends a Bearer token to the gateway. The gateway validates it and drops it from the outgoing message.
[2015-02-27 17:36:15,580] DEBUG - wire >> "GET /test-sanjeewa/1.0.0 HTTP/1.1[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "User-Agent: curl/7.32.0[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "Host:[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "Authorization: Bearer 2690b6dd2af649782bf9221fa6188-[\r][\n]"
[2015-02-27 17:36:15,595] DEBUG - wire >> "[\r][\n]"

Outgoing message from the gateway. You can see the Basic auth header added to the outgoing message.
[2015-02-27 17:36:20,523] DEBUG - wire << "GET http://localhost/apim1/ HTTP/1.1[\r][\n]"
[2015-02-27 17:36:20,539] DEBUG - wire << "Authorization: Basic YWRtaW46YWRtaW4=[\r][\n]"
[2015-02-27 17:36:20,539] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-02-27 17:36:20,540] DEBUG - wire << "Host: localhost:80[\r][\n]"
[2015-02-27 17:36:20,540] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"
[2015-02-27 17:36:20,540] DEBUG - wire << "User-Agent: Synapse-PT-HttpComponents-NIO[\r][\n]"

02. This is the same as the previous sample, but if needed you can set the API resource authorization type to None. Then clients don't need to send anything in the request, but APIM will still add Basic auth headers to the outgoing message.
You can understand the message flow and headers by looking at the following wire logs.

Incoming message to API gateway
[2015-02-27 17:37:10,951] DEBUG - wire >> "GET /test-sanjeewa/1.0.0 HTTP/1.1[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "User-Agent: curl/7.32.0[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "Host:[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "Accept: */*[\r][\n]"
[2015-02-27 17:37:10,953] DEBUG - wire >> "[\r][\n]"

Outgoing message from the gateway. You can see the Basic auth header is present in the outgoing message.
[2015-02-27 17:37:13,766] DEBUG - wire << "GET http://localhost/apim1/ HTTP/1.1[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Authorization: Basic YWRtaW46YWRtaW4=[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Accept: */*[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Host: localhost:80[\r][\n]"
[2015-02-27 17:37:13,766] DEBUG - wire << "Connection: Keep-Alive[\r][\n]"

Isuru PereraJava Flight Recorder Continuous Recordings

When we are trying to find performance issues, it is sometimes necessary to do continuous recordings with Java Flight Recorder.

Usually we debug issues in an environment similar to a production setup. That means we don't have a desktop environment and we cannot use Java Mission Control for flight recording.

That also means we need to record and get dumps using the command line on the servers. We can of course use remote connection methods, but it's easier to get recordings on the server itself.

With continuous recordings, we need to figure out how to get dumps. There are a few options.
  1. Get a dump when the Java application exits. For this, we need to use dumponexit and dumponexitpath options.
  2. Get a dump manually from JFR.dump diagnostic command via "jcmd"
Note: The "jcmd" command is in $JAVA_HOME/bin. If you use the Oracle Java Installation script for Ubuntu, you can directly use "jcmd" without including  $JAVA_HOME/bin in $PATH.

Enabling Java Flight Recorder and starting a continuous recording

To demonstrate, I will use WSO2 AS 5.2.1. First of all we need to enable Java Flight Recorder in WSO2 AS. Then we will configure it to start a default recording.

$ cd wso2as-5.2.1/bin
$ vi

In the VI editor, press SHIFT+G to go to the end of the file. Add the following lines between "-Dfile.encoding=UTF8 \" and "org.wso2.carbon.bootstrap.Bootstrap $*"

    -XX:+UnlockCommercialFeatures \
-XX:+FlightRecorder \
-XX:FlightRecorderOptions=defaultrecording=true,settings=profile,disk=true,repository=./tmp,dumponexit=true,dumponexitpath=./ \

As I mentioned in my previous blog post on Java Mission Control, we use the default recording option to start a "Continuous Recording". Please look at java command reference to see the meanings of each Flight Recorder option.

Please note that I'm using the "profile" setting and using disk=true to write a continuous recording to the disk. I'm also using ./tmp directory as the repository, which is the temporary disk storage for JFR recordings.

It's also important to note that the default value of "maxage" is set to 15 minutes.

To be honest, I couldn't figure out exactly how this maxage works. For example, if I set it to 1m, I see events for around 20 mins. If I use 10m, I see events for around 40 mins to 1 hour. Then I found an answer in the Java Mission Control forum. See the thread Help with maxage / limiting default recording disk usage.

What really happens is that the maxage threshold is checked only when a new recording chunk is created. We haven't specified "maxchunksize" above, so the default value of 12MB is used. It might take a considerable time to fill that much data and trigger removal of chunks.

If you need infinite recordings, you can set maxage=0 to override the default value.

Getting Periodic Java Flight Recorder Dumps

Let's use "jcmd" to get a Java Flight Recorder Dump. For this, I wrote a simple script (jfrdump)

#!/bin/bash
now=`date +%Y_%m_%d_%H_%M_%S`
if ps -p $1 > /dev/null
then
    echo "$now: The process $1 is running. Getting a JFR dump"
    # Dump
    jcmd $1 JFR.dump recording=0 filename="recording-$now.jfr"
else
    echo "$now: The process $1 is not running. Cannot take a JFR dump"
    exit 1
fi

You can see that I have used "JFR.dump" diagnostic command and the script expects the Java process ID as an argument.

I have used the recording id 0 because the default recording started by the JVM options above has recording id 0.

You can check JFR recordings via the JFR.check diagnostic command.

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ jcmd `cat` JFR.check
Recording: recording=0 name="HotSpot default" maxage=15m (running)

I have also used the date in the recording file name, which helps us keep multiple files with the date and time of each dump. Note that the recordings will be saved in the CARBON_HOME directory, which is the working directory of the Java process.
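The timestamped file name logic from the script can be checked on its own:

```shell
# Reproduce the timestamped JFR dump file name used in the jfrdump script,
# e.g. recording-2015_02_27_15_02_27.jfr
now=$(date +%Y_%m_%d_%H_%M_%S)
filename="recording-$now.jfr"
echo "$filename"
```

Because the timestamp sorts lexicographically, the dump files line up chronologically in a directory listing.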

Let's test jfrdump script!

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ jfrdump `cat`
2015_02_27_15_02_27: The process 21674 is running. Getting a JFR dump
Dumped recording 0, 2.3 MB written to:


Since we have a working script to get a dump, we can use it as a task for Cron.

Edit the crontab.

$ crontab -e

Add following line.

*/15 * * * * (/home/isuru/programs/sh/jfrdump `cat /home/isuru/test/wso2as-5.2.1/`) >> /tmp/jfrdump.log 2>&1

Now you should get a JFR dump every 15 minutes. I used 15 minutes as the maxage is 15 minutes. But you can adjust these values depending on your requirements.

See also: Linux Crontab: 15 Awesome Cron Job Examples

Troubleshooting Tips

  • After you edit the startup script, always run the server once in the foreground to see whether there are issues in the script syntax. If the server runs successfully, you can start it in the background.
  • If you want to get a dump at shutdown, do not kill the server forcefully. Always allow the server to gracefully shutdown. Use "./ stop"
  • You may not be able to connect to the server if you are running jcmd as a different user. Unless you own the process, the following error may occur with jcmd.

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1$ sudo jcmd `cat` help
[sudo] password for isuru:
21674: well-known file is not secure
at Method)

Madhuka UdanthaBasic Functionality of Series or DataFrame in Pandas

Throughout this post I will walk you through the fundamental mechanics of interacting with the data contained in a Series or DataFrame in pandas (Python).


Reindexing is a critical method on pandas objects. 'Reindexing' means creating a new object with the data conformed to a new index. Here is the object I will be using for this post.

import numpy as np
import pandas as pd

obj = pd.Series([4.1, 2.6, 1.1, 3.7], index=['d', 'b', 'a', 'c'])


Calling reindex on this Series rearranges the data according to the new index, introducing missing values for any index values that were not already present in the original series.

obj2 = obj.reindex(['a', 'b', 'c', 'd', 'e'])

You can fill the missing value by passing fill_value as below:

obj.reindex(['a', 'b', 'c', 'd', 'e'], fill_value=0)

You can also fill values forward ('ffill') or backward ('bfill'). Here obj3 is an example integer-indexed series:

obj3 = pd.Series(['blue', 'purple', 'yellow'], index=[0, 2, 4])
obj3.reindex(range(6), method='ffill')

In a DataFrame, reindex can alter the (row) index, the columns, or both.
When you pass just a sequence, the rows are reindexed in the result.

frame = pd.DataFrame(np.arange(27.0, 31.5, 0.5).reshape((3, 3)), index=['a', 'c', 'd'], columns=['Colombo', 'Negombo', 'Gampaha'])


frame2 = frame.reindex(['a', 'b', 'c', 'd'])

Reindexing rows and columns of a DataFrame together:
cities = ['Colombo', 'Negombo', 'Kandy']

frame.reindex(index=['a', 'b', 'c', 'd'], method='ffill', columns=cities)


Dropping entries

In a Series, we can drop one or more entries from an axis:
new_obj = obj.drop('c')

new_obj2 = obj.drop(['d', 'c'])

With DataFrame, index values can be deleted from either axis:

Deleting entries along the column axis:

data.drop('Negombo', axis=1)

data.drop(['Negombo', 'Kandy'], axis=1)

Ajith VitharanaHow to delete API from publisher with active subscriptions - WSO2 API Manager.

WSO2 API Manager doesn't allow you to delete APIs from the publisher while they have active subscriptions. However, if you have a strong requirement to delete such an API, you can follow the steps below to remove it.

Let's say we have an API with active subscriptions.

Name : MobileAPI
Context : /mobile
Version : 1.0.0

1. First we should change the lifecycle state to BLOCKED. The API will then be invisible in the store and can no longer be invoked.

2. Browse the  AM database and find the API_ID.

3. Delete all the subscriptions related to that API.
4. Now you can delete that API from publisher.

sanjeewa malalgodaHow to modify API Manager publisher to remove footer - API Manager 1.8.0

1. Go to publisher jaggery app (/repository/deployment/server/jaggeryapps/publisher)

2. Go to subthemes folder in publisher (site/themes/default/subthemes)

3. Create a folder with the name of your subtheme. For example "nofooter"

4. Create a folder called 'css' inside 'nofooter' folder

5. Copy the "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/css/localstyles.css" to the new subtheme's css location " /repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/nofooter/css/"

6. Copy the "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/images" folder to the new subtheme location " /repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/nofooter/"

7. add following css to localstyles.css file in "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/nofooter/css/" folder


8. Edit the "/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json" file as below in order to make the new sub theme the default theme.
         "theme" : {
               "base" : "default",
               "subtheme" : "nofooter"
         }

Lali DevamanthriLatest header files for FreeBSD 9 kernel

Mateusz Kocielski and Marek Kroemeke discovered that an integer overflow
in IGMP processing may result in denial of service through malformed
IGMP packets.

For the stable distribution (wheezy), this problem has been fixed in
version 9.0-10+deb70.9.

We recommend that you upgrade your kfreebsd-9 packages.

Further information about Debian Security Advisories, how to apply
these updates to your system and frequently asked questions can be
found at:

Srinath PereraWSO2 Demo Videos from O'reilly Strata 2015 Booth

We just came back from O’Reilly Strata. It was great to see most of the Big Data world gathered in one place.

WSO2 had a booth, and following are the demos we showed there.

Demo 1: Realtime Analytics for a Football Game played with Sensors

This demo shows realtime analytics on a dataset created by playing a football game with sensors in the ball and the boots of the players. You can find more information in the earlier post.

Demo 2: GIS Queries using Public Transport for London Data Feeds 

TFL (Transport for London) provides several public data feeds about London public transport. We used those feeds within WSO2 CEP's Geo Dashboard to implement "Speed Alerts", "Proximity Alerts", and Geo Fencing.

Please see this slide deck for more information. 

Srinath PereraIntroduction to Large Scale Data Analysis with WSO2 Analytics Platform

Slide deck for the talk I did at Indiana University, Bloomington. It walks through WSO2's Big Data offering, providing example queries.

Isuru PereraMonitor WSO2 Carbon logs with Logstash

The ELK stack is a popular stack for searching and analyzing data. Many people use it for analyzing logs. WSO2 also has a full-fledged Big Data Analytics Platform, which can analyze logs and do many more things.

In this blog post, I'm explaining how to monitor logs with Elasticsearch, Logstash and Kibana. I will mainly explain the logstash configuration. I will not show how to set up Elasticsearch and Kibana; those are very easy to set up and there is not much configuration. You can figure it out very easily! :)

If you want to test an elasticsearch server, you can just extract the elasticsearch distribution and start an elasticsearch server. If you are using Kibana 3, you need to use a web server to host the Kibana application. With Kibana 4, you can use the standalone server provided in the distribution.

Configuring Logstash

Logstash is a great tool for managing events and logs. See Getting Started with Logstash if you haven't used logstash.

First of all, we need to configure logstash to get the wso2carbon.log file as an input. Then we need to use a filter to parse the log messages and extract data to analyze.

The wso2carbon.log file is written using log4j and the configurations are in $CARBON_HOME/repository/conf/

For WSO2 log message parsing, we will use the grok filter to extract the details configured via the log4j pattern layout.

For example, following is the pattern layout configured for wso2carbon.log in WSO2 AS 5.2.1 (wso2as-5.2.1/repository/conf/):

log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m {%c}%n

In this pattern, the class name ("{%c}") is logged twice. So, let's remove the extra class name. (I have created a JIRA to remove the extra class name from log4j configuration. See CARBON-15065)

Following should be the final configuration for wso2carbon.log.

log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m %n

Now when we start WSO2 AS 5.2.1, we can see all log messages have the pattern specified in log4j configuration.

For example:

isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1/bin$ ./ start
isuru@isurup-ThinkPad-T530:~/test/wso2as-5.2.1/bin$ tail -4f ../repository/logs/wso2carbon.log
TID: [0] [AS] [2015-02-25 18:02:00,345] INFO {org.wso2.carbon.core.init.JMXServerManager} - JMX Service URL : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi
TID: [0] [AS] [2015-02-25 18:02:00,346] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - Server : Application Server-5.2.1
TID: [0] [AS] [2015-02-25 18:02:00,347] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 29 sec
TID: [0] [AS] [2015-02-25 18:02:00,701] INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} - Mgt Console URL :

Let's write a grok pattern to parse a log message (single line). Please look at the grok filter docs for the basic syntax of grok patterns. Once you are familiar with the grok syntax, it's easy to write patterns.

There is also an online "Grok Debugger" application to test grok patterns.

Following is the Grok pattern written for parsing the above log lines.

TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[%{WORD:server_type}\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}{%{JAVACLASS:java_class}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}
You can test this Grok pattern with Grok Debugger. Use one of the lines in above log file for the input.


We are now parsing a single log line in logstash. Next, we need to look at how we can group exceptions or multiline log messages into one event.

For that we will use the multiline filter. As mentioned in the docs, we need a pattern to identify whether a particular log line is part of the previous line. As configured in log4j, all log messages must start with "TID"; if a line does not, we can assume that it belongs to the previous log message.

Finally we need to configure logstash to send output to some destination. We can use "stdout" output for testing. In a production setup, you can use elasticsearch servers.

Logstash Config File

Following is the complete logstash config file. Save it as "logstash.conf"

input {
  file {
    add_field => {
      "instance_name" => "wso2-worker"
    }
    type => "wso2"
    path => [ '/home/isuru/test/wso2as-5.2.1/repository/logs/wso2carbon.log' ]
  }
}

filter {
  if [type] == "wso2" {
    grok {
      match => [ "message", "TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[%{WORD:server_type}\]%{SPACE}\[%{TIMESTAMP_ISO8601:timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}{%{JAVACLASS:java_class}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}" ]
    }
    multiline {
      pattern => "^TID"
      negate => true
      what => "previous"
    }
  }
}

output {
  # elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}

Please note that I have used "add_field" in file input to show that you can add extra details to the log event.

Running Logstash

Now it's time to run logstash!

$ tar -xvf logstash-1.4.2.tar.gz 
$ cd logstash-1.4.2/bin

We will first test whether the configuration file is okay.

$ ./logstash --configtest -f ~/conf/logstash.conf 
Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see {:level=>:warn}
Configuration OK

Let's start logstash

$ ./logstash -f ~/conf/logstash.conf

Now start the WSO2 AS 5.2.1 server. You will now see log events from logstash.

For example:

"message" => "TID: [0] [AS] [2015-02-26 00:31:41,389] INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} - WSO2 Carbon started in 17 sec",
"@version" => "1",
"@timestamp" => "2015-02-25T19:01:49.151Z",
"type" => "wso2",
"instance_name" => "wso2-worker",
"host" => "isurup-ThinkPad-T530",
"path" => "/home/isuru/test/wso2as-5.2.1/repository/logs/wso2carbon.log",
"tenant_id" => "0",
"server_type" => "AS",
"timestamp" => "2015-02-26 00:31:41,389",
"level" => "INFO",
"java_class" => "org.wso2.carbon.core.internal.StartupFinalizerServiceComponent",
"log_message" => "WSO2 Carbon started in 17 sec"

Troubleshooting Tips

  • You need to restart the server, since the file input reads only new lines. However, if you want to test a log file from the beginning, you can use the following input configuration.

input {
  file {
    add_field => {
      "instance_name" => "wso2-worker"
    }
    type => "wso2"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    path => [ '/home/isuru/test/wso2as-5.2.1/repository/logs/wso2carbon.log' ]
  }
}

  • When you are using the multiline filter, the last line of the log file may not be processed. This is an issue in logstash 1.4.2; I didn't notice the same issue in logstash-1.5.0.beta1.

I hope these instructions are clear. I will try to write a blog post on using Elasticsearch & Kibana later. :)

Shelan PereraWhen should you give up ?

 When should I give up something?

 You should never give up on something until you find something you really want. Yes... until you find it. :) We usually draw the give-up line at what society believes, not at what we really believe. We often create boundaries around what society believes is achievable, until someone who truly believes in himself or herself steps in and expands the limits.

We never thought someone could fly until the Wright brothers flew. We never imagined speaking with someone thousands of miles away until Alexander Graham Bell invented the first practical telephone. We may remain prisoners of society unless we are brave enough to reach outside.

So when should we give up on something? We should give up on the day we win the game. It seems simple and obvious, yet senseless.

Watch the following video if you need to breathe life into what I mentioned.

"It's Not OVER Until You Win! Your Dream is Possible - Les Brown"

Sajith RavindraWhy WSO2 ESB returns a 202 response when an API is called!


Suppose there's a REST API hosted in WSO2 ESB. When you invoke it, the ESB only returns a 202 Accepted response similar to the following, no processing is done on the request, and there are no errors printed in wso2carbon.log either.

HTTP/1.1 202 Accepted
Date: Wed, 25 Feb 2015 13:43:14 GMT
Server: WSO2-PassThrough-HTTP
Transfer-Encoding: chunked
Connection: keep-alive


The reason for this is that there's no API or resource that matches the request URL. Let me elaborate more on this. Let's say the request URL is as follows (the host and port are illustrative):

http://localhost:8280/myapicontext/myresource/myvar/

If this request returns a response similar to above, following are the possible causes,
  1. There's no API with the context="/myapicontext"
  2. If there's an API with  context="/myapicontext", it has no resource with uri-template or a url-mapping which matches /myresource/myvar/ portion of the request URL.
Therefore, to fix this issue we should make sure target API and the resource exists in ESB.

In order for the ESB to send a more meaningful response in case 2 ONLY, add the following sequence to the ESB.

<sequence xmlns="http://ws.apache.org/ns/synapse" name="_resource_mismatch_handler_">
   <payloadFactory media-type="xml">
      <format>
         <tp:fault xmlns:tp="">
            <tp:type>Status report</tp:type>
            <tp:message>Not Found</tp:message>
            <tp:description>The requested resource (/$1) is not available.</tp:description>
         </tp:fault>
      </format>
      <args>
         <arg xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd" expression="$axis2:REST_URL_POSTFIX" evaluator="xml"></arg>
      </args>
   </payloadFactory>
   <property name="NO_ENTITY_BODY" action="remove" scope="axis2"></property>
   <property name="HTTP_SC" value="404" scope="axis2"></property>
</sequence>

So that the ESB will return a response as follows if there's no matching resource in API,

<tp:fault xmlns:tp="">
    <tp:type>Status report</tp:type>
    <tp:message>Not Found</tp:message>
    <tp:description>The requested resource (//myresource/myvar/) is not available.</tp:description>
</tp:fault>

Madhuka UdanthaPandas for Data Manipulation and Analysis

Pandas is a software library written for the Python programming language for data manipulation and analysis. In many organizations, it is common to research, prototype, and test new ideas using a more domain-specific computing language like MATLAB or R, and later port those ideas to be part of a larger production system written in, say, Java, C#, or C++. What people are increasingly finding is that Python is a suitable language not only for doing research and prototyping but also for building production systems.

Pandas contains data structures and operations for manipulating numerical tables and time series. I noticed pandas while I was researching big data. It saved me hours of research, so I thought of writing some blog posts on it. It contains:

  • Data structures
  • Date range generation Index objects (simple axis indexing and multi-level / hierarchical axis indexing)
  • Data Wrangling (Clean, Transform, Merge, Reshape)
  • Grouping (aggregating and transforming data sets)
  • Interacting with the data/files (tabular data and flat files (CSV, delimited, Excel))
  • Statistical functions (Rolling statistics/ moments)
  • Static and moving window linear and panel regression
  • Plotting and Visualization


Let's do some coding. First we import as follows:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

1. Date Generation

1.1 Creating a Series by passing a list of values

series  = pd.Series([1,2,4,np.nan,5,7])
print series





1.2 Random sample values

The numpy.random module supplements the built-in Python random module with functions for efficiently generating whole arrays of sample values from many kinds of probability distributions.

samples = np.random.normal(size=(4, 4))
print samples



1.3 Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

dates = pd.date_range('20150201',periods=5)

df = pd.DataFrame(np.random.randn(5,3),index=dates,columns=list(['stock A','stock B','stock C']))
print "Colombo Stock Exchange Growth - 2015 Feb"
print (45*"=")
print df



1.4 Statistic summary

We can view a quick statistical summary of the data with describe():

print df.describe()


1.5 Sorting

Now we want to sort by the values in one or more columns. Therefore we have to pass one or more column names to the 'by' option.
eg: We can sort the data in increasing order of 'stock A' as below:

df.sort_index(by='stock A')



To sort by multiple columns, pass a list of names:
df.sort_index(by=['stock A','stock B'])

df.sort(columns=None, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last') [2]
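Note: sort_index(by=...) and DataFrame.sort were later deprecated and removed from pandas; in newer releases the equivalent call is sort_values. A minimal sketch (the data here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'stock A': [3, 1, 2], 'stock B': [9, 8, 7]})

# Equivalent of df.sort_index(by=['stock A', 'stock B']) in modern pandas
print(df.sort_values(by=['stock A', 'stock B']))
```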

1.6 Ranking
DataFrame can compute ranks over the rows or the columns


df.rank(axis=0, numeric_only=None, method='average', na_option='keep', ascending=True, pct=False) [3]

[NOTE] Tie-breaking methods with rank:

  • 'average' - default: assign the average rank to each entry in the equal group.
  • 'min' - use the minimum rank for the whole group.
  • 'max' - use the maximum rank for the whole group.
  • 'first' - assign ranks in the order the values appear in the data.
  • 'dense' - like 'min', but the rank always increases by 1 between groups.
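A quick illustration of how these tie-breaking methods differ on a small Series (the values are invented for demonstration):

```python
import pandas as pd

s = pd.Series([7, 7, 4, 2, 7])

# 'average' (default): the three tied 7s would occupy ranks 3, 4 and 5,
# so each gets (3 + 4 + 5) / 3 = 4.0
print(s.rank())

# 'min': every member of the tied group gets the group's lowest rank (3)
print(s.rank(method='min'))

# 'first': ties are broken by order of appearance (ranks 3, 4, 5)
print(s.rank(method='first'))
```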

1.7 Descriptive Statistics Methods
pandas objects are equipped with a set of common mathematical and statistical methods. Most of these fall into the category of reductions or summary statistics: methods that extract a single value (like the sum or mean).



To get each day's total increment across stock A, stock B and stock C:
print df.sum(axis=1)

To get the day (date) of the highest increment for each stock:
print df.idxmax()



Descriptive and summary statistics:

count - Number of non-NA values
describe - Compute set of summary statistics for Series or each DataFrame column
min, max - Compute minimum and maximum values
argmin, argmax - Compute index locations (integers) at which minimum or maximum value obtained, respectively
idxmin, idxmax - Compute index values at which minimum or maximum value obtained, respectively
quantile - Compute sample quantile ranging from 0 to 1
sum - Sum of values
mean - Mean of values
median - Arithmetic median (50% quantile) of values
mad - Mean absolute deviation from mean value
var - Sample variance of values
std - Sample standard deviation of values
skew - Sample skewness (3rd moment) of values
kurt - Sample kurtosis (4th moment) of values
cumsum - Cumulative sum of values
cummin, cummax - Cumulative minimum or maximum of values, respectively
cumprod - Cumulative product of values
diff - Compute 1st arithmetic difference (useful for time series)
pct_change - Compute percent changes
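As a small worked example of the time-series helpers above (the values are invented for illustration):

```python
import pandas as pd

s = pd.Series([100.0, 110.0, 99.0])

print(s.cumsum())      # running total: 100.0, 210.0, 309.0
print(s.diff())        # first difference: NaN, 10.0, -11.0
print(s.pct_change())  # percent change: NaN, then approximately 0.10 and -0.10
```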

There are many more features; I will go through them in my next posts.

Ajith VitharanaRegistry resource indexing in WSO2 server.

The registry resources are stored in the underlying database as blob content. If we want to search for a resource by the value of its content or attribute(s), we have to read the entire content and scan through it to find the matching resource(s). When the resource volume is high, that affects the performance of search operations. To get rid of that issue, we have introduced Apache Solr based content indexing/searching.

A set of WSO2 products (WSO2 API Manager, WSO2 Governance Registry, WSO2 User Engagement Server, etc.) use this feature to list and search the registry artifacts.

An embedded Solr server is configured and scheduled to start after the server start-up. The Solr configuration can be found in the registry.xml file.

Eg: registry.xml file for WSO2 API Manager 1.8.0

        <!--number of resources submit for given indexing thread -->
         <!--number of worker threads for indexing -->
        <!-- location storing the time the indexing took place-->
        <!-- the indexers that implement the indexer interface for a relevant media type/(s) -->
            <indexer class="org.wso2.carbon.registry.indexing.indexer.MSExcelIndexer" mediaTypeRegEx="application/"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.MSPowerpointIndexer" mediaTypeRegEx="application/"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.MSWordIndexer" mediaTypeRegEx="application/msword"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.PDFIndexer" mediaTypeRegEx="application/pdf"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.XMLIndexer" mediaTypeRegEx="application/xml"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/wsdl\+xml" profiles ="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/x-xsd\+xml " profiles ="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/policy\+xml" profiles ="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.governance.registry.extensions.indexers.RXTIndexer" mediaTypeRegEx="application/vnd.(.)+\+xml" profiles ="default,uddi-registry"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.XMLIndexer" mediaTypeRegEx="application/(.)+\+xml"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.PlainTextIndexer" mediaTypeRegEx="text/(.)+"/>
            <indexer class="org.wso2.carbon.registry.indexing.indexer.PlainTextIndexer" mediaTypeRegEx="application/x-javascript"/>
            <exclusion pathRegEx="/_system/config/repository/dashboards/gadgets/swfobject1-5/.*[.]html"/>
            <exclusion pathRegEx="/_system/local/repository/components/org[.]wso2[.]carbon[.]registry/mount/.*"/>

When the server is starting (at the time the registry indexing service is registered in OSGi), the indexing task is scheduled with a 60s starting delay.
The indexing task runs every 5 seconds to index the newly added, updated and deleted resources.
The configuration also sets the maximum resource count that one indexing task can handle, and the indexing task thread pool size.
The last indexing task execution time is stored in the configured location. (The next indexing task will then fetch only the resources which were updated/added after the last indexing time. This prevents re-indexing a resource which is already indexed but not updated/deleted.)
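For reference, the indexing configuration block in registry.xml looks roughly like the following. The element names and values here are illustrative, written from memory; check the registry.xml shipped with your product for the exact names:

```xml
<indexingConfiguration>
    <startingDelayInSeconds>60</startingDelayInSeconds>
    <indexingFrequencyInSeconds>5</indexingFrequencyInSeconds>
    <!-- number of resources submitted to a given indexing thread -->
    <batchSize>50</batchSize>
    <!-- number of worker threads for indexing -->
    <indexerPoolSize>50</indexerPoolSize>
    <!-- location storing the time the indexing took place -->
    <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime</lastAccessTimeLocation>
    ...
</indexingConfiguration>
```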

There is a set of indexers to index different types of resources/artifacts based on the media type, e.g.:
<indexer class="org.wso2.carbon.registry.indexing.indexer.MSExcelIndexer" mediaTypeRegEx="application/"/>

You can also exclude from indexing the resources stored in given path(s), e.g.:
<exclusion pathRegEx="/_system/config/repository/dashboards/gadgets/swfobject1-5/.*[.]html"/>

How are resources selected for indexing?

1. The registry indexing task reads the activity logs from the REG_LOG table and filters the logs which were added/updated after the timestamp stored in lastAccessTimeLocation.

2. Then it finds the relevant indexer (configured in registry.xml) matching the media type. If a matching media type is found, it creates the indexable resource file and sends it to the Solr server to index.

3. The Governance API (GenericArtifactManager) and Registry API (ContentBasedSearchService) provide APIs to search the indexed resources through the indexing client.


WSO2 API Manager stores the API metadata as configurable governance artifacts in the registry. This API metadata is indexed using the RXTIndexer. The Governance API provides an API to search the indexed API artifacts. The following client code searches for APIs whose state is "PUBLISHED" and visibility is "public".

/*
 *  Copyright (c) 2005-2010, WSO2 Inc. ( All Rights Reserved.
 *
 *  WSO2 Inc. licenses this file to you under the Apache License,
 *  Version 2.0 (the "License"); you may not use this file except
 *  in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *  Unless required by applicable law or agreed to in writing,
 *  software distributed under the License is distributed on an
 *  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 *  KIND, either express or implied.  See the License for the
 *  specific language governing permissions and limitations
 *  under the License.
 */

import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.wso2.carbon.base.ServerConfiguration;
import org.wso2.carbon.governance.api.generic.GenericArtifactManager;
import org.wso2.carbon.governance.api.generic.dataobjects.GenericArtifact;
import org.wso2.carbon.governance.api.util.GovernanceUtils;
import org.wso2.carbon.governance.client.WSRegistrySearchClient;
import org.wso2.carbon.registry.core.Registry;
import org.wso2.carbon.registry.core.pagination.PaginationContext;
import org.wso2.carbon.registry.core.session.UserRegistry;

import org.wso2.carbon.registry.ws.client.registry.WSRegistryServiceClient;

import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SampleWSRegistrySearchClient {

    private static ConfigurationContext configContext = null;

    private static final String CARBON_HOME = "/home/ajith/wso2/packs/wso2am-1.8.0/";
    private static final String axis2Repo = CARBON_HOME + File.separator + "repository" +
            File.separator + "deployment" + File.separator + "client";
    private static final String axis2Conf = ServerConfiguration.getInstance().getFirstProperty("Axis2Config.clientAxis2XmlLocation");
    private static final String username = "admin";
    private static final String password = "admin";
    private static final String serverURL = "https://localhost:9443/services/";
    private static Registry registry;

    private static WSRegistryServiceClient initialize() throws Exception {

        System.setProperty("javax.net.ssl.trustStore", CARBON_HOME + File.separator + "repository" +
                File.separator + "resources" + File.separator + "security" + File.separator +
                "wso2carbon.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
        System.setProperty("javax.net.ssl.trustStoreType", "JKS");
        System.setProperty("carbon.repo.write.mode", "true");
        configContext = ConfigurationContextFactory.createConfigurationContextFromFileSystem(
                axis2Repo, axis2Conf);
        return new WSRegistryServiceClient(serverURL, username, password, configContext);
    }

    public static void main(String[] args) throws Exception {
        try {
            registry = initialize();
            Registry gov = GovernanceUtils.getGovernanceUserRegistry(registry, "admin");
            // The governance artifacts should be loaded first.
            GovernanceUtils.loadGovernanceArtifacts((UserRegistry) gov);
            // Initialize the pagination context.
            PaginationContext.init(0, 20, "", "", 10);
            // This should be executed to initialize the AttributeSearchService.
            WSRegistrySearchClient wsRegistrySearchClient =
                    new WSRegistrySearchClient(serverURL, username, password, configContext);
            // Initialize the GenericArtifactManager
            GenericArtifactManager artifactManager = new GenericArtifactManager(gov, "api");
            // Create the search attribute map
            Map<String, List<String>> listMap = new HashMap<String, List<String>>();
            listMap.put("overview_status", new ArrayList<String>() {{
                add("PUBLISHED");
            }});
            listMap.put("overview_visibility", new ArrayList<String>() {{
                add("public");
            }});
            // Find the results.
            GenericArtifact[] genericArtifacts = artifactManager.findGenericArtifacts(listMap);

            for (GenericArtifact artifact : genericArtifacts) {
                System.out.println(artifact.getQName());
            }
        } finally {
            PaginationContext.destroy();
        }
    }
}

Sohani Weerasinghe

Installing a new keystore into WSO2 Products

Basically, WSO2 Carbon based products are shipped with a default keystore (wso2carbon.jks) which can be found in the <CARBON_HOME>/repository/resources/security directory. This has a private/public key pair which is mainly used to encrypt sensitive information.

When the products are deployed in a production environment, it is better to replace this default keystore with one containing a self-signed or CA-signed certificate.

1). Create a new keystore with a private and public key pair using the keytool utility that ships with the JDK installation.

Go to <CARBON_HOME>/repository/resources/security directory and type the following command

keytool -genkey -alias testcert -keyalg RSA -keysize 1024 -keypass testpassword -keystore testkeystore.jks -storepass testpassword

Then you have to provide the necessary information to construct the DN of the certificate.
After you enter the information, the created keystore can be found at the above location.

Note: You can view the contents of the generated keystore by:

keytool -list -v -keystore testkeystore.jks -storepass testpassword

2). In order to get the public certificate signed, you have two options: use it as a self-signed certificate, or send a certificate signing request (CSR) to a Certificate Authority (CA) and have it signed.

3). Export your public certificate from the keystore and import it into the trust store.

In WSO2 Carbon products, this trust store is set as client-truststore.jks which resides in the same above directory as the keystore.

Now we have to import the new public certificate into this trust store for front-end and back-end communication.

  • Export the new public certificate:

keytool -export -alias testcert -keystore testkeystore.jks -storepass testpassword -file testcert.pem

This will export the public certificate into a file called testcert.pem in the same directory.

  • Import it into client-truststore.jks with following command:

keytool -import -alias testnewcert -file testcert.pem -keystore client-truststore.jks -storepass wso2carbon

(Password of client-truststore.jks keystore is: wso2carbon)

4). Change the below configuration files:

Go to <CARBON_HOME>/repository/conf and point to the new keystore as below:

  •  carbon.xml 
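The keystore entries to update live under the Security section of carbon.xml. The snippet below is illustrative (based on the default wso2carbon.jks entry, with the values from the keytool command above substituted in); verify the exact element layout against your carbon.xml:

```xml
<Security>
    <KeyStore>
        <!-- Keystore file location -->
        <Location>${carbon.home}/repository/resources/security/testkeystore.jks</Location>
        <!-- Keystore type (JKS etc.) -->
        <Type>JKS</Type>
        <!-- Keystore password -->
        <Password>testpassword</Password>
        <!-- Private key alias -->
        <KeyAlias>testcert</KeyAlias>
        <!-- Private key password -->
        <KeyPassword>testpassword</KeyPassword>
    </KeyStore>
    ...
</Security>
```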


  •  axis2.xml (only for WSO2 ESB): WSO2 ESB uses a separate HTTPS transport sender and receiver for accessing services exposed over HTTPS, as shown below, and the keystore used for this purpose is specified in this configuration.

<transportReceiver class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener" name="https">
    <parameter locked="false" name="port">8243</parameter>
    <parameter locked="false" name="non-blocking">true</parameter>
    <parameter locked="false" name="httpGetProcessor">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter>
    <parameter locked="false" name="keystore">...</parameter>
    <parameter locked="false" name="truststore">...</parameter>
</transportReceiver>
<transportSender class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLSender" name="https">
    <parameter locked="false" name="non-blocking">true</parameter>
    <parameter locked="false" name="keystore">...</parameter>
    <parameter locked="false" name="truststore">...</parameter>
</transportSender>
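The bodies of the keystore and truststore parameters were lost in publishing; they follow roughly this shape (illustrative, using the keystore created earlier; verify against the axis2.xml shipped with your ESB):

```xml
<parameter locked="false" name="keystore">
    <KeyStore>
        <Location>repository/resources/security/testkeystore.jks</Location>
        <Type>JKS</Type>
        <Password>testpassword</Password>
        <KeyPassword>testpassword</KeyPassword>
    </KeyStore>
</parameter>
<parameter locked="false" name="truststore">
    <TrustStore>
        <Location>repository/resources/security/client-truststore.jks</Location>
        <Type>JKS</Type>
        <Password>wso2carbon</Password>
    </TrustStore>
</parameter>
```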

Chandana NapagodaMonitor WSO2 carbon Instance using New Relic

While I was doing a performance analysis of WSO2 Governance Registry, I was looking for a way to monitor information about Apache Solr and its performance numbers. While reading the "Apache Solr 3 Enterprise Search Server" book, I found a very good real-time monitoring tool (site) called New Relic.

So I integrated New Relic with the WSO2 Governance Registry server and was able to monitor a lot of information about the server instance. I found that the Java agent self-installer was not working for my scenario, so I had to set the Java agent information in JAVA_OPTS. After a few minutes (around 2 min) I was able to view my server-related information in the New Relic console.

Here is the JAVA_OPTS entry which I used:

export JAVA_OPTS="$JAVA_OPTS -javaagent:/home/chandana/Documents/g-reg/newrelic/newrelic.jar"

newrelic Java agent self-installer :

Chandana NapagodaRemove duplicate XML elements using XSLT

Today I faced an issue where I was receiving an XML message with duplicate elements. I wanted to remove those duplicate elements using some condition. For that I came up with an XSLT which does exactly that.

My XML input:

<OurGuestsCollection  xmlns="">
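The sample input did not survive publishing; a minimal hypothetical input with duplicate <OurGuests> entries (guest names invented, and the document namespace was elided in the original) would look like:

```xml
<OurGuestsCollection xmlns="">
    <OurGuests>
        <firstname>Alice</firstname>
    </OurGuests>
    <OurGuests>
        <firstname>Alice</firstname>
    </OurGuests>
    <OurGuests>
        <firstname>Bob</firstname>
    </OurGuests>
</OurGuestsCollection>
```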


<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:output omit-xml-declaration="yes" indent="yes"/>

    <!-- identity template: copy everything through unchanged -->
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
    </xsl:template>

    <!-- copy an OurGuests element only if no later element has the same firstname -->
    <xsl:template match="m0:OurGuests" xmlns:m0="">
        <xsl:if test="not(following::m0:OurGuests[m0:firstname=current()/m0:firstname])">
            <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
        </xsl:if>
    </xsl:template>
</xsl:stylesheet>

XML Output :

<OurGuestsCollection xmlns="">

Sohani Weerasinghe

Testing secured proxies using a security client 

Please follow the below steps to test a secured proxy

1. Create a Java project with and

2. Add the following configuration parameters to the properties file

clientRepo = Path of the client repository location. A sample repo can be found in the ESB_HOME/samples/axis2Server/repository location.

clientKey = Path of the client's key store. Here I am using the same key store (wso2carbon.jks). You can find it in ESB_HOME/resources/security.

securityPolicyLocation = Path of the client-side security policy files. You can find the 15 policy files from here.

trustStore = The trust store that is used for SSL communication over HTTPS. You can use the same key store for this. (wso2carbon.jks)

securityScenarioNo = Security scenario number that was used to secure the service (eg: if it is non-repudiation, it is 2)

SoapAction = You can find it from the WSDL

endpointHttp = HTTP endpoint of the proxy service

endpointHttpS = HTTPS endpoint of the proxy service

body = Body part of your SOAP message

Sample configuration

clientKey =/home/sohani/Downloads/Desktop/ServerUP/new/wso2esb-4.8.1/repository/resources/security/wso2carbon.jks
SoapAction =urn:mediate
endpointHttp =http://localhost:8280/services/SampleProxy
endpointHttpS =https://localhost:8243/services/SampleProxy

3. Copy the following Java code

import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;
import org.apache.axiom.om.util.AXIOMUtil;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.neethi.Policy;
import org.apache.neethi.PolicyEngine;
import org.apache.rampart.RampartMessageData;
import org.apache.rampart.policy.model.CryptoConfig;
import org.apache.rampart.policy.model.RampartConfig;
import org.apache.ws.security.WSPasswordCallback;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class SecurityClient implements CallbackHandler {

    public static void main(String[] args) {

        SecurityClient securityCl = new SecurityClient();
        OMElement result = null;
        try {
            result = securityCl.runSecurityClient();
            System.out.println(result);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public OMElement runSecurityClient() throws Exception {

        Properties properties = new Properties();
        // Path to the properties file (the file name was elided in the original post)
        File file = new File("/home/sohani/workspace_new/TestClient/src/ ");
        FileInputStream freader = new FileInputStream(file);
        properties.load(freader);
        String clientRepo = properties.getProperty("clientRepo");
        String endpointHttpS = properties.getProperty("endpointHttpS");
        String endpointHttp = properties.getProperty("endpointHttp");
        int securityScenario = Integer.parseInt(properties.getProperty("securityScenarioNo"));
        String clientKey = properties.getProperty("clientKey");
        String SoapAction = properties.getProperty("SoapAction");
        String body = properties.getProperty("body");
        String trustStore = properties.getProperty("trustStore");
        String securityPolicy = properties.getProperty("securityPolicyLocation");

        OMElement result = null;

        System.setProperty("javax.net.ssl.trustStore", trustStore);
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");

        ConfigurationContext ctx = ConfigurationContextFactory.createConfigurationContextFromFileSystem(clientRepo, null);
        ServiceClient sc = new ServiceClient(ctx, null);

        Options opts = new Options();
        // Pick the endpoint appropriate for the scenario (HTTPS for transport-level security)
        if (securityScenario == 1) {
            opts.setTo(new EndpointReference(endpointHttpS));
        } else {
            opts.setTo(new EndpointReference(endpointHttp));
        }
        opts.setAction(SoapAction);

        try {
            String securityPolicyPath = securityPolicy + File.separator + "scenario" + securityScenario + "-policy.xml";
            opts.setProperty(RampartMessageData.KEY_RAMPART_POLICY, loadPolicy(securityPolicyPath, clientKey));
        } catch (Exception e) {
            e.printStackTrace();
        }
        sc.setOptions(opts);
        sc.engageModule("rampart");
        result = sc.sendReceive(AXIOMUtil.stringToOM(body));
        return result;
    }

    public Policy loadPolicy(String xmlPath, String clientKey) throws Exception {

        StAXOMBuilder builder = new StAXOMBuilder(xmlPath);
        Policy policy = PolicyEngine.getPolicy(builder.getDocumentElement());

        RampartConfig rc = new RampartConfig();
        rc.setUser("admin");
        rc.setPwCbClass(SecurityClient.class.getName());

        CryptoConfig sigCryptoConfig = new CryptoConfig();
        sigCryptoConfig.setProvider("org.apache.ws.security.components.crypto.Merlin");
        Properties prop1 = new Properties();
        prop1.put("org.apache.ws.security.crypto.merlin.keystore.type", "JKS");
        prop1.put("org.apache.ws.security.crypto.merlin.file", clientKey);
        prop1.put("org.apache.ws.security.crypto.merlin.keystore.password", "wso2carbon");
        sigCryptoConfig.setProp(prop1);
        rc.setSigCryptoConfig(sigCryptoConfig);

        CryptoConfig encrCryptoConfig = new CryptoConfig();
        encrCryptoConfig.setProvider("org.apache.ws.security.components.crypto.Merlin");
        Properties prop2 = new Properties();
        prop2.put("org.apache.ws.security.crypto.merlin.keystore.type", "JKS");
        prop2.put("org.apache.ws.security.crypto.merlin.file", clientKey);
        prop2.put("org.apache.ws.security.crypto.merlin.keystore.password", "wso2carbon");
        encrCryptoConfig.setProp(prop2);
        rc.setEncrCryptoConfig(encrCryptoConfig);

        policy.addAssertion(rc);
        return policy;
    }

    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {

        WSPasswordCallback pwcb = (WSPasswordCallback) callbacks[0];
        String id = pwcb.getIdentifer();
        int usage = pwcb.getUsage();

        if (usage == WSPasswordCallback.USERNAME_TOKEN) {
            if ("admin".equals(id)) {
                pwcb.setPassword("admin");
            }
        } else if (usage == WSPasswordCallback.SIGNATURE || usage == WSPasswordCallback.DECRYPT) {
            if ("wso2carbon".equals(id)) {
                pwcb.setPassword("wso2carbon");
            }
        }
    }
}
4. Add relevant libraries to your class path

It is easy: go to ESB_HOME/bin and run the ant command. You will see the created jar files in the ESB_HOME/repository/lib directory. Do not forget to add saxon9he.jar, which is in the ESB_HOME/lib/endorsed directory.

5. Then run your secured client

Chanaka JayasenaDisplaying a resource in the WSO2 registry with jaggery

var carbon = require('carbon');
var server = new carbon.server.Server();
var options = {
    username: 'admin',
    tenantId: -1234
};
var reg = new carbon.registry.Registry(server, options);
var path = '/_system/es/cartoon001gossiplankanews.png';
var resource = reg.get(path);
response.contentType = resource.mediaType;

sanjeewa malalgodaHow to monitor WSO2 server CPU usage and generate a thread dump on high CPU usage using a simple shell script

When we deploy WSO2 servers in production, we may need to monitor them for high CPU and memory usage. In this article I will describe how we can use a simple shell script to monitor the server CPU usage and generate a thread dump using the jstack command.

First you need to create a script file using the following command.

Then paste the following script content.

#!/bin/bash
# 1: ['command\ name' or PID number(,s)] 2: MAX_CPU_PERCENT
[[ $# -ne 2 ]] && exit 1
PID_NAMES=$1
MAX_CPU=$2
# get all PIDS as nn,nn,nn
if [[ ! "$PID_NAMES" =~ ^[0-9,]+$ ]] ; then
    PIDS=$(pgrep -d ',' -x $PID_NAMES)
else
    PIDS=$PID_NAMES
fi
#  echo "$PIDS $MAX_CPU"
MAX_CPU="$(echo "($MAX_CPU+0.5)/1" | bc)"
LOOP=1
while [[ $LOOP -eq 1 ]] ; do
    sleep 0.3s
    # Depending on your 'top' version and OS you might have
    #   to change head and tail line-numbers
    LINE="$(top -b -d 0 -n 1 -p $PIDS | head -n 8 \
        | tail -n 1 | sed -r 's/[ ]+/,/g' | \
        sed -r 's/^\,|\,$//')"
    # If multiple processes in $PIDS, $LINE will only match
    #   the most active process
    CURR_PID=$(echo "$LINE" | cut -d ',' -f 1)
    # calculate cpu limits
    CURR_CPU_FLOAT=$(echo "$LINE" | cut -d ',' -f 9)
    CURR_CPU=$(echo "($CURR_CPU_FLOAT+0.5)/1" | bc)
    echo "PID $CURR_PID: $CURR_CPU""%"
    if [[ $CURR_CPU -ge $MAX_CPU ]] ; then
        now=$(date)
        echo "PID $CURR_PID ($PID_NAMES) went over $MAX_CPU""% on $now"
        jstack $CURR_PID > "./$now+jlog.txt"
        echo "[[ $CURR_CPU""% -ge $MAX_CPU""% ]]"
        LOOP=0
    fi
done
echo "Stopped"

Then we need to get process id of running WSO2 server by running following command.

sanjeewa@sanjeewa-ThinkPad-T530:~/work$ jps
30755 Bootstrap
8543 Jps
4892 Main

Now we know the carbon server is running with process ID 30755. Then we can start our script by providing the initial parameters (process ID and CPU limit). It will keep printing the CPU usage in the terminal, and once it reaches the limit it will take a thread dump using the jstack command. It creates a new file with the current date and time embedded in its name and writes the jstack output to it.
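The date-stamped dump file naming can be sketched on its own like this (the PID is a placeholder; with a real carbon PID from jps you would uncomment the jstack line):

```shell
# Build a date-and-time stamped file name for the thread dump, the way the
# script above does.
now=$(date +%Y-%m-%d_%H-%M-%S)
dumpfile="./${now}+jlog.txt"
# jstack 30755 > "$dumpfile"   # uncomment with a real carbon process ID
echo "$dumpfile"
```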

We can start the script like this.
 sh <processId> <CPU Limit>

sanjeewa@sanjeewa-ThinkPad-T530:~/work$ sh 30755 10
PID 30755: 0%
PID 30755: 0%
PID 30755: 0%
PID 30755: 0%
PID 30755 (30755) went over 10% on Thursday, 19 February 2015 14:44:55 +0530
[[ 13% -ge 10% ]]

As you can see, when CPU usage goes above 10% it creates a log file and appends the thread dump to it.

Sivajothy VanjikumaranWSO2 ESB SOAP headers lost

While working with a connector, we experienced that some of the SOAP header information was being dropped by the ESB.

Please find the retrieved SOAP headers from direct API call and ESB call below.

Response from Direct API call
Response from ESB call
         <wsu:Timestamp wsu:Id="Timestamp-abd7433b-821f-4a23-861e-83ade6857961">
         <wsu:Timestamp wsu:Id="Timestamp-ec0a6c73-4633-437a-a555-6482f6a72f5d">

We can observe in the wire log that the API returns the complete set of header information to the ESB, yet the ESB returns only a selected subset of it, as shown above.

The reason for this issue is that WS-Addressing headers are removed while sending out. This can be solved by introducing the Synapse property "PRESERVE_WS_ADDRESSING" ( <property name="PRESERVE_WS_ADDRESSING" value="true" scope="default" /> ).

Further detail can be found in [1]

Chathurika Erandi De SilvaForking with WSO2 App Factory - Part 2

This is a continuation of the previous blog post, where we discussed forking the main repository, changing the master of the fork, and merging the changes from the fork master into the main repository's master.

In this post we will discuss using WSO2 App Factory to fork the main repository, change a branch of the fork, and merge the changes from the fork branch into the main repository's branch.

From Fork branch to Main branch

1. Go to your application in WSO2 App Factory and fork the main repository. When forked, a separate repository is created for you.

2. Clone the forked repository into your local machine.

E.g. git clone

3. Change to the relevant branch where you need to work. In this case it will be 1.0.0

E.g. git checkout 1.0.0

4. Do the needed code changes and save them

5. Add the changes to the GIT repo

E.g. git add * (this will add all the changes)

6. Commit the changes to the GIT repo

E.g. git commit -am "Commit"

7. Push the commits to the remote fork branch

E.g. git push

Now we have the needed code changes in the remote fork repository. We can now merge the changes into the relevant branch of the main repository. In order to do this, follow the steps below.

1. Clone the main repository to your local machine

E.g. git clone

2. Go inside the cloned folder

3. Change to the relevant branch. In this instance it will be 1.0.0. E.g. git checkout 1.0.0

4. Now we have to add our forked repository as a remote repository in the main repository.

E.g. git remote add -f b

5. After this command if you issue a git branch -r you will see the remote repository added as shown below

  origin/HEAD -> origin/master

6. Now we have to get the difference between the remote branch and our main  repository's branch.

E.g. git diff origin/1.0.0 remotes/b/1.0.0 > jagapp8.diff

7. Now we can apply the diff to our main master repository

E.g. git apply jagapp8.diff

8. Next add the changes by git add *

9. Commit the changes by git commit -am "commit"

10. Push the changes to the remote branch. git push
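The steps above can be sketched end-to-end. This is a self-contained demo using two local repositories in place of the real App Factory URLs; the repo names, the branch 1.0.0 and the remote name `b` mirror the steps, everything else is illustrative:

```shell
set -e
work=$(mktemp -d)

# "main" repository with a 1.0.0 branch
git init -q "$work/main"
cd "$work/main"
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add app.txt
git commit -qm "initial"
git branch 1.0.0

# the "fork" is a clone; commit a change on its 1.0.0 branch
git clone -q "$work/main" "$work/fork"
cd "$work/fork"
git config user.email demo@example.com
git config user.name demo
git checkout -q 1.0.0
echo "fix" >> app.txt
git commit -qam "fix on fork branch"

# back in main: add the fork as remote "b", diff the branches, apply, commit
cd "$work/main"
git checkout -q 1.0.0
git remote add -f b "$work/fork" > /dev/null 2>&1
git diff 1.0.0 remotes/b/1.0.0 > jagapp8.diff
git apply jagapp8.diff
git add app.txt
git commit -qm "commit"
grep fix app.txt
```

With real repositories you would `git push` at the end, as in step 10 above.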

Chathurika Erandi De SilvaForking with WSO2 App Factory - Part 1

In WSO2 App Factory, we can fork a git repository and later merge it to the main repository. There are two ways in which this merging can be done.

1. From Fork master to Main master
2. From Fork branch to Main branch


It helps to know a bit about what a git fork is before reading this post :-)

From Fork master to Main master

1. Go to your application in WSO2 App Factory and fork the main repository. When forked, a separate repository is created for you.

2. Clone the forked repository into your local machine.

E.g. git clone

3. Since we are working in the fork master we are not changing to a branch

4. Do the needed code changes and save them

5. Add the changes to the GIT repo

E.g. git add * (this will add all the changes)

6. Commit the changes to the GIT repo

E.g. git commit -am "Commit"

7. Push the commits to the remote fork branch

E.g. git push

Now we have the needed code changes in the remote fork repository. We can now merge the changes into the main master repository. In order to do this, follow the steps below.

1. Clone the main repository to your local machine

E.g. git clone

2. Go inside the cloned folder

3. Now we have to add our forked repository as a remote repository in the main repository.

E.g. git remote add -f b

4. After this command if you issue a git branch -r you will see the remote repository added as shown below

  origin/HEAD -> origin/master

5. Now we have to get the difference between the remote master and our main master repository.

E.g. git diff origin/master remotes/b/master > jagapp8.diff

6. Now we can apply the diff to our main master repository

E.g. git apply jagapp8.diff

7. Next add the changes by git add *

8. Commit the changes by git commit -am "commit"

9. Push the changes to the remote branch. git push

In my next post I will discuss the second method, which is merging the changes of the fork branch into the main branch.

Shelan PereraHow to use python BOTO framework to connect to Amazon EC2

 The Python Boto framework is an excellent tool to automate things with Amazon. You have almost everything you need to automate Amazon EC2 using this comprehensive library.

A quick guide on how to use it in your project

1) Configure your EC2 credentials to be used by your application using one of the following.

  • /etc/boto.cfg - for site-wide settings that all users on this machine will use
  • ~/.boto - for user-specific settings
  • ~/.aws/credentials - for credentials shared between SDKs
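For example, a minimal Boto config file looks like this (the section and key names are Boto's standard ones; the values are placeholders, and the file is written locally here — in practice you would save it as ~/.boto):

```shell
# Write a minimal Boto config; the credential values are dummies.
cat > ./boto.sample <<'EOF'
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF
cat ./boto.sample
```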

2) Refer to the API and choose what you need to do.

3) Here is sample code for a simple autoscaler written using the Boto framework. You may reuse the code in your projects quickly. This autoscaler spawns new instances based on the spike pattern of the load.

Dimuthu De Lanerolle

Java Tips .....

To get directory names inside a particular directory ....

private String[] getDirectoryNames(String path) {

        File fileName = new File(path);
        String[] directoryNamesArr = fileName.list(new FilenameFilter() {
            public boolean accept(File current, String name) {
                return new File(current, name).isDirectory();
            }
        });
        log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
        return directoryNamesArr;
}

To retrieve links on a web page ......

 private List<String> getLinks(String url) throws ParserException {
        Parser htmlParser = new Parser(url);
        List<String> links = new LinkedList<String>();

        NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
        for (int x = 0; x < tagNodeList.size(); x++) {
            LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
            String linkName = loopLinks.getLink();
            links.add(linkName);
        }
        return links;
}

To search for all files in a directory recursively from the file/s extension/s ......

private File[] getFilesWithSpecificExtensions(String filePath) {

        // extension list - Do not specify "."
        List<File> files = (List<File>) FileUtils.listFiles(new File(filePath),
                new String[]{"txt"}, true);

        File[] extensionFiles = new File[files.size()];

        Iterator<File> itFileList = files.iterator();
        int count = 0;

        while (itFileList.hasNext()) {
            extensionFiles[count++] = itFileList.next();
        }
        return extensionFiles;
}

Reading files in a zip

     public static void main(String[] args) throws IOException {
        final ZipFile file = new ZipFile("Your zip file path goes here");
        final Enumeration<? extends ZipEntry> entries = file.entries();
        while (entries.hasMoreElements()) {
            final ZipEntry entry = entries.nextElement();
            System.out.println("Entry " + entry.getName());
            readInputStream(file.getInputStream(entry));
        }
        file.close();
    }

    private static int readInputStream(final InputStream is) throws IOException {
        final byte[] buf = new byte[8192];
        int read = 0;
        int cntRead;
        while ((cntRead = is.read(buf, 0, buf.length)) >= 0) {
            read += cntRead;
        }
        return read;
    }

Converting Object A to Long[]

 long[] myLongArray = (long[]) oo;   // oo is the source Object
        Long[] myLongObjArray = new Long[myLongArray.length];
        int i = 0;

        for (long temp : myLongArray) {
            myLongObjArray[i++] = temp;
        }

Getting cookie details on HTTP clients

import org.apache.http.impl.client.DefaultHttpClient;
HttpClient httpClient = new DefaultHttpClient();
((DefaultHttpClient) httpClient).getCookieStore().getCookies(); 

 HttpPost post = new HttpPost(URL);
        post.setHeader("User-Agent", USER_AGENT);
        post.addHeader("Referer",URL );
        List<NameValuePair> urlParameters = new ArrayList<NameValuePair>();
        urlParameters.add(new BasicNameValuePair("username", "admin"));
        urlParameters.add(new BasicNameValuePair("password", "admin"));
        urlParameters.add(new BasicNameValuePair("sessionDataKey", sessionKey));
        post.setEntity(new UrlEncodedFormEntity(urlParameters));
        return httpClient.execute(post);

Ubuntu Commands

1. Getting the process listening to a given port (eg: port 9000) 

sudo netstat -tapen | grep ":9000 "

Ushani BalasooriyaA sample on a WSO2 ESB proxy with a DBLookup mediator and a dedicated faultSequence to execute during an error

Databases :

The following database should be created with the required table.

DB : customerDetails
table :  customers

Proxy :

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="TestProxyUsh"
       transports="https,http"
       startOnLoad="true">
   <target>
      <inSequence onError="testError">
         <dblookup>
            <connection>
               <pool>
                  <driver>com.mysql.jdbc.Driver</driver>
                  <url>jdbc:mysql://localhost:3306/customerDetails</url>
                  <user>username</user>
                  <password>password</password>
               </pool>
            </connection>
            <statement>
               <sql>SELECT * FROM customers WHERE name =?</sql>
               <parameter value="WSO2" type="VARCHAR"/>
               <result name="company_id" column="id"/>
            </statement>
         </dblookup>
         <log level="custom">
            <property name="text" value="An unexpected error occurred for the service"/>
         </log>
      </inSequence>
   </target>
</proxy>

onError targeting sequence :

<sequence xmlns="http://ws.apache.org/ns/synapse" name="testError">
      <property name="error" value="Error Occurred"/>
</sequence>

Invoke :

curl -v http://host:8280/services/TestProxyUsh

Ushani BalasooriyaOne possible reason and solution for getting Access denied for user 'username'@'host' in mysql when accessing from outside

Sometimes we might get the below exception when creating a DB connection.

org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Access denied for user 'username'@''

Reasons :

The particular user might not have been granted access from that outside host/IP.

Therefore you need to grant privileges with the below command in mysql:

GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY 'password';


Sometimes you might try out below :


Even though it says Query OK, the issue can still be there since you have not used the password.

Also you must run FLUSH PRIVILEGES to update it.
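The two statements together can be kept in a small SQL file you feed to the mysql client (the user name, host wildcard and password are the placeholders from the example above):

```shell
# Write the GRANT plus FLUSH PRIVILEGES to a file; values are placeholders.
cat > grant-remote.sql <<'EOF'
GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EOF
# Apply with: mysql -u root -p < grant-remote.sql
cat grant-remote.sql
```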

Hope this might help sometimes.

Ushani BalasooriyaA possible reason and solution for getting 400 Bad Request even when the synapse configuration in WSO2 ESB is correct

Sometimes you might spend a lot of time analyzing why you get a 400 Bad Request when you invoke an up-and-running backend service through WSO2 ESB.

When you configure and save the synapse configuration of your proxy, you will not see any issue. But sometimes there can be a subtle reason for this problem.

One good example is explained below :

Look at the below synapse configuration of WSO2 ESB proxy service.

 <proxy name="statuscodeProxy"
        transports="https http">
    <target>
       <inSequence>
          <log level="full"/>
          <send>
             <endpoint>
                <address uri=""/>
             </endpoint>
          </send>
       </inSequence>
       <outSequence>
          <log level="full"/>
          <send/>
       </outSequence>
    </target>
 </proxy>

By the look of it, this is alright and you are good to configure it in WSO2 ESB. But the endpoint defined is not exactly the endpoint the backend service expects.

Sometimes there can be missing parameters etc. But if you know for sure that your backend service does not expect any parameters and it still throws a 400 Bad Request, can you imagine the reason for this?

After spending hours, I accidentally found a possible reason:

 It was because the backend expects a / at the end of the endpoint, so it expects the URL with a trailing slash rather than without it.

If you configure a tcpmon in between the ESB and the backend, you will see the difference as below:

without the trailing slash : GET  HTTP/1.1  - 400 Bad Request
with the trailing slash : GET / HTTP/1.1  - 200 OK

So if you too come across this situation, please try out this option. Sometimes this might save your time! :)
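When scripting the call, one way to hedge against this is to normalize the URL before invoking it (the host and path below are placeholders):

```shell
# Append a trailing slash only when the URL does not already end with one.
url="http://host:8280/services/backend"
case "$url" in
  */) : ;;             # already ends with a slash
  *)  url="$url/" ;;   # add the slash such a backend expects
esac
echo "$url"
# The actual request would then be: curl -v "$url"
```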

Ushani BalasooriyaOne Possible Reason and solution for dropping the client's custom headers when sending through nginx

Problem :

Sometimes you may have experienced that custom header values defined and sent by the client get dropped when they pass through nginx, in a setup fronted by nginx.

Solution :

1. First you should check whether the custom headers  contain underscores (_)

2. If so you need to check whether you have configured underscores_in_headers in your nginx.conf under each server.

3. By default, underscores_in_headers is off.

4. Therefore you need to configure it to on.

Reason :

According to the nginx documentation, this directive:

"Enables or disables the use of underscores in client request header fields. When the use of underscores is disabled, request header fields whose names contain underscores are marked as invalid and become subject to the ignore_invalid_headers directive."

Ref: /http/ngx_http_core_module.html

Example :

Sample configuration would be as below :

server {
      listen 8280;
      underscores_in_headers on;

      location / {
             proxy_set_header X-Forwarded-Host $host;
             proxy_set_header X-Forwarded-Server $host;
      }
}

Ushani BalasooriyaReason and solution for not loading WSDL in a clustered setup fronted by NGINX

For more information about the nginx http proxy module, please refer to:

Issue :
Sometimes a WSDL in a clustered setup fronted by NGINX might not be loaded.

Reason :

By default, nginx uses HTTP version 1.0 for proxying. Version 1.1 is recommended for use with keepalive connections. Therefore sometimes the WSDL will not be loaded completely.

You can confirm it by running a curl command against the WSDL's nginx URL.

E.g.,  curl
Then you will get only half of the WSDL.

Solution :

To avoid this, you will have to set the HTTP version in the nginx configuration file:

       proxy_http_version 1.1;

Example :

As an example,

server {
      listen 8280;

      location / {
             proxy_set_header X-Forwarded-Host $host;
             proxy_set_header X-Forwarded-Server $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

             proxy_http_version 1.1;

             proxy_pass http://esbworkers;
             proxy_redirect http://esbworkers;
      }
}


This configuration is applicable for nginx versions 1.1.4 and above.

Krishanthi Bhagya SamarasingheSteps to install Rails on Windows

Instigation : Currently I am having training on ROR (Ruby on Rails) for a project that is more concerned with productivity than performance. Since Rails is based on Ruby, which is a scripting language, its productivity is very high. However, its heavy memory consumption directly affects its performance.


  • If you have successfully installed Ruby, you can skip steps 1 and 2.
  • If you have successfully configured Ruby development kit, you can skip steps 3 and 4.

Step 1: Download the required ruby version from the following location.

Step 2: Run the executable file and select 3 options while running.

By selecting those options, it will help you to configure ruby on your environment.

Step 3: Download and run the development kit which is relevant to the ruby installation, from the following link.

This will ask you to extract files and it is better to give the same root location where ruby is installed.

Step 4: Open a command prompt and go to the development kit extracted location. Run the following commands.
  1. ruby dk.rb init
  2. ruby dk.rb install
Step 5: Open a command prompt and run following command to install Rails. You can run it from any location. It will find the Ruby installed location and will install Rails into the Ruby. Through this it will install gems which are related to run Rails on Ruby.

gem install rails

Step 6: On the command prompt go to a location where you like to create your Rails project and type:
rails new <ProjectName>

This will create the project with its folder structure.

Step 7: cd <ProjectName> and run "bundle install". There you might get an SSL certificate issue. Open the "Gemfile" file in the project folder, edit the source value's protocol to "http", and run the command again.

Step 8: cd <ProjectName>/bin, and run "rails server".

Step 9: Go to the page http://localhost:3000 from your browser. If you see the following welcome page, you have successfully installed Rails and your server is running successfully.

Jayanga DissanayakeCustom Authenticator for WSO2 Identity Server (WSO2IS) with Custom Claims

WSO2IS is one of the best identity servers; it enables you to offload your identity and user entitlement management burden totally from your application. It comes with many features, supports many industry standards, and most importantly allows you to extend it according to your security requirements.

In this post I am going to show you how to write your own authenticator, which uses a custom claim to validate users, and how to invoke your custom authenticator from your web app.

Create your Custom Authenticator Bundle

WSO2IS is based on OSGi, so if you want to add a new authenticator you have to create an OSGi bundle. Following is the source of the OSGi bundle you have to prepare.

This bundle will consist of three files,
1. CustomAuthenticatorServiceComponent
2. CustomAuthenticator
3. CustomAuthenticatorConstants

CustomAuthenticatorServiceComponent is an OSGi service component; it basically registers the CustomAuthenticator service. CustomAuthenticator is an implementation of org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator, which actually provides our custom authentication.

1. CustomAuthenticatorServiceComponent

package org.wso2.carbon.identity.application.authenticator.customauth.internal;

import java.util.Hashtable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator;
import org.wso2.carbon.identity.application.authenticator.customauth.CustomAuthenticator;
import org.wso2.carbon.user.core.service.RealmService;

/**
 * @scr.component name="identity.application.authenticator.customauth.component" immediate="true"
 * @scr.reference name="realm.service"
 * interface="org.wso2.carbon.user.core.service.RealmService" cardinality="1..1"
 * policy="dynamic" bind="setRealmService" unbind="unsetRealmService"
 */
public class CustomAuthenticatorServiceComponent {

    private static Log log = LogFactory.getLog(CustomAuthenticatorServiceComponent.class);

    private static RealmService realmService;

    protected void activate(ComponentContext ctxt) {

        CustomAuthenticator customAuth = new CustomAuthenticator();
        Hashtable<String, String> props = new Hashtable<String, String>();

        ctxt.getBundleContext().registerService(ApplicationAuthenticator.class.getName(), customAuth, props);

        if (log.isDebugEnabled()) {
            log.info("CustomAuthenticator bundle is activated");
        }
    }

    protected void deactivate(ComponentContext ctxt) {
        if (log.isDebugEnabled()) {
            log.info("CustomAuthenticator bundle is deactivated");
        }
    }

    protected void setRealmService(RealmService realmService) {
        log.debug("Setting the Realm Service");
        CustomAuthenticatorServiceComponent.realmService = realmService;
    }

    protected void unsetRealmService(RealmService realmService) {
        log.debug("UnSetting the Realm Service");
        CustomAuthenticatorServiceComponent.realmService = null;
    }

    public static RealmService getRealmService() {
        return realmService;
    }
}

2. CustomAuthenticator

This is where your actual authentication logic is implemented

package org.wso2.carbon.identity.application.authenticator.customauth;

import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.identity.application.authentication.framework.AbstractApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.AuthenticatorFlowStatus;
import org.wso2.carbon.identity.application.authentication.framework.LocalApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.config.ConfigurationFacade;
import org.wso2.carbon.identity.application.authentication.framework.context.AuthenticationContext;
import org.wso2.carbon.identity.application.authentication.framework.exception.AuthenticationFailedException;
import org.wso2.carbon.identity.application.authentication.framework.exception.InvalidCredentialsException;
import org.wso2.carbon.identity.application.authentication.framework.exception.LogoutFailedException;
import org.wso2.carbon.identity.application.authentication.framework.util.FrameworkUtils;
import org.wso2.carbon.identity.application.authenticator.customauth.internal.CustomAuthenticatorServiceComponent;
import org.wso2.carbon.identity.base.IdentityException;
import org.wso2.carbon.identity.core.util.IdentityUtil;
import org.wso2.carbon.user.api.UserRealm;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.utils.multitenancy.MultitenantUtils;

/**
 * Username/Password based Authenticator
 */
public class CustomAuthenticator extends AbstractApplicationAuthenticator
        implements LocalApplicationAuthenticator {

    private static final long serialVersionUID = 192277307414921623L;

    private static Log log = LogFactory.getLog(CustomAuthenticator.class);

    public boolean canHandle(HttpServletRequest request) {
        String userName = request.getParameter("username");
        String password = request.getParameter("password");

        if (userName != null && password != null) {
            return true;
        }
        return false;
    }

    public AuthenticatorFlowStatus process(HttpServletRequest request,
            HttpServletResponse response, AuthenticationContext context)
            throws AuthenticationFailedException, LogoutFailedException {

        if (context.isLogoutRequest()) {
            return AuthenticatorFlowStatus.SUCCESS_COMPLETED;
        } else {
            return super.process(request, response, context);
        }
    }

    protected void initiateAuthenticationRequest(HttpServletRequest request,
            HttpServletResponse response, AuthenticationContext context)
            throws AuthenticationFailedException {

        String loginPage = ConfigurationFacade.getInstance().getAuthenticationEndpointURL();
        String queryParams = FrameworkUtils.getQueryStringWithFrameworkContextId(
                context.getQueryParams(), context.getCallerSessionKey(),
                context.getContextIdentifier());

        try {
            String retryParam = "";

            if (context.isRetrying()) {
                retryParam = "&authFailure=true&";
            }

            response.sendRedirect(response.encodeRedirectURL(loginPage + ("?" + queryParams))
                    + "&authenticators=" + getName() + ":" + "LOCAL" + retryParam);
        } catch (IOException e) {
            throw new AuthenticationFailedException(e.getMessage(), e);
        }
    }

    protected void processAuthenticationResponse(HttpServletRequest request,
            HttpServletResponse response, AuthenticationContext context)
            throws AuthenticationFailedException {

        String username = request.getParameter("username");
        String password = request.getParameter("password");

        boolean isAuthenticated = false;

        // Check the authentication
        try {
            int tenantId = IdentityUtil.getTenantIdOFUser(username);
            UserRealm userRealm = CustomAuthenticatorServiceComponent.getRealmService()
                    .getTenantUserRealm(tenantId);

            if (userRealm != null) {
                UserStoreManager userStoreManager = (UserStoreManager) userRealm.getUserStoreManager();
                isAuthenticated = userStoreManager.authenticate(
                        MultitenantUtils.getTenantAwareUsername(username), password);

                Map<String, String> parameterMap = getAuthenticatorConfig().getParameterMap();
                String blockSPLoginClaim = null;
                if (parameterMap != null) {
                    blockSPLoginClaim = parameterMap.get("BlockSPLoginClaim");
                }
                if (blockSPLoginClaim == null) {
                    blockSPLoginClaim = "";
                }
                if (log.isDebugEnabled()) {
                    log.debug("BlockSPLoginClaim has been set as : " + blockSPLoginClaim);
                }

                String blockSPLogin = userStoreManager.getUserClaimValue(
                        MultitenantUtils.getTenantAwareUsername(username), blockSPLoginClaim, null);

                boolean isBlockSpLogin = Boolean.parseBoolean(blockSPLogin);
                if (isAuthenticated && isBlockSpLogin) {
                    if (log.isDebugEnabled()) {
                        log.debug("user authentication failed due to user is blocked for the SP");
                    }
                    throw new AuthenticationFailedException("SPs are blocked");
                }
            } else {
                throw new AuthenticationFailedException(
                        "Cannot find the user realm for the given tenant: " + tenantId);
            }
        } catch (IdentityException e) {
            log.error("CustomAuthentication failed while trying to get the tenant ID of the user", e);
            throw new AuthenticationFailedException(e.getMessage(), e);
        } catch (org.wso2.carbon.user.api.UserStoreException e) {
            log.error("CustomAuthentication failed while trying to authenticate", e);
            throw new AuthenticationFailedException(e.getMessage(), e);
        }

        if (!isAuthenticated) {
            if (log.isDebugEnabled()) {
                log.debug("user authentication failed due to invalid credentials.");
            }
            throw new InvalidCredentialsException();
        }

        context.setSubject(username);

        String rememberMe = request.getParameter("chkRemember");

        if (rememberMe != null && "on".equals(rememberMe)) {
            context.setRememberMe(true);
        }
    }

    protected boolean retryAuthenticationEnabled() {
        return true;
    }

    public String getContextIdentifier(HttpServletRequest request) {
        return request.getParameter("sessionDataKey");
    }

    public String getFriendlyName() {
        return CustomAuthenticatorConstants.AUTHENTICATOR_FRIENDLY_NAME;
    }

    public String getName() {
        return CustomAuthenticatorConstants.AUTHENTICATOR_NAME;
    }
}

3. CustomAuthenticatorConstants

This is a helper class to hold the constants you are using in your authenticator

package org.wso2.carbon.identity.application.authenticator.customauth;

/**
 * Constants used by the CustomAuthenticator
 */
public abstract class CustomAuthenticatorConstants {

    public static final String AUTHENTICATOR_NAME = "CustomAuthenticator";
    public static final String AUTHENTICATOR_FRIENDLY_NAME = "custom";
    public static final String AUTHENTICATOR_STATUS = "CustomAuthenticatorStatus";
}

Once you are done with these files, your authenticator is ready. Now you can build your OSGi bundle and place it inside <CARBON_HOME>/repository/components/dropins.

*sample pom.xml file [3]
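Once built, the deployment itself is just a copy into the dropins directory. In this sketch CARBON_HOME and the jar name are placeholders for your WSO2IS installation and whatever artifact your build produces (the directory is created locally here just to simulate the step):

```shell
# Simulate the deployment copy; CARBON_HOME and the jar name are placeholders.
CARBON_HOME=./wso2is-demo
mkdir -p "$CARBON_HOME/repository/components/dropins"
touch org.wso2.carbon.identity.application.authenticator.customauth-1.0.0.jar
cp org.wso2.carbon.identity.application.authenticator.customauth-1.0.0.jar \
   "$CARBON_HOME/repository/components/dropins/"
ls "$CARBON_HOME/repository/components/dropins"
```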

Create new Claim

Now you have to create a new claim in WSO2IS. To do this, log into the management console of WSO2IS and follow the steps described in [1]. In this example, I am going to create a new claim, "Block SP Login".

So, go to the configuration section of the management console, click on "Claim Management", then select the "" dialect.

Click on "Add New Claim Mapping", and fill the details related to your claim.

Display Name   Block SP Login
Description   Block SP Login
Claim Uri
Mapped Attribute (s)  localityName
Regular Expression
Display Order 0
Supported by Default true
Required false
Read-only false

Now your new claim is ready in WSO2IS. As you selected "Supported by Default" as true, this claim will be available in your user profile. So you will see this field appear when you try to create a user, but this field is not mandatory as you didn't mark it as "Required".

Change application-authentication.xml

There is another configuration change you have to do, as the authenticator takes the claim name from the configuration file (lines 107-114 of the authenticator code). Add the information about your new claim into repository/conf/security/application-authentication.xml:

<AuthenticatorConfig name="CustomAuthenticator" enabled="true">
    <Parameter name="BlockSPLoginClaim"></Parameter>
</AuthenticatorConfig>

If you check lines 107-128 of the code, you will see that in processAuthenticationResponse, in addition to authenticating the user against the user store, it checks for the new claim.

So this finishes the basic steps to set up your custom authentication. Now you have to set up a new Service Provider in WSO2IS and attach your custom authenticator to it, so that whenever your SP tries to authenticate a user via WSO2IS, it will use your custom authenticator.

Create Service Provider and set the Authenticator

Follow the basic steps given in [2] to create a new Service Provider.

Then go to "Inbound Authentication Configuration" -> "SAML2 Web SSO Configuration", and make the following changes:

Issuer* = <name of your SP>
Assertion Consumer URL = <http://localhost:8080/your-app/samlsso-home.jsp>
Enable Response Signing = true
Enable Assertion Signing = true
Enable Single Logout = true
Enable Attribute Profile = true

Then go to the "Local & Outbound Authentication Configuration" section,
select "Local Authentication" as the authentication type, and select your authenticator, here "custom".

Now you have completed all the steps needed to set up your custom authenticator with your custom claims.

You can now start WSO2IS and start using your service. Meanwhile, change the value of "Block SP Login" for a particular user and observe the effect.


Chanaka FernandoSimple tutorial to work with WSO2 GIT source code for developers

Some important Git commands for branching with remotes and upstream projects

First create a fork from an existing repository on wso2 repo

Then create a cloned repository on your local machine from that forked repo. Replace the URL with your own repo URL.

git clone

Now add an upstream repository for your local cloned repository. This should be the upstream project which you forked from.

git remote add upstream

You can remove an upstream if you needed.
git remote rm upstream

Now you can see all the available branches in your git metadata repository. Here you need to keep in mind that only metadata is available for all the branches except the master (which is the local branch created by default).

git branch -a

* master
 remotes/origin/HEAD -> origin/master

If you cannot see the upstream branches list from the above command, just fetch them with the following command.

git fetch upstream

Now you should see the branched list as above.

This will list down all the available branches. Then decide on which branch you are going to work on. Let’s say you are going to work on release-2.1.3-wso2v2 branch. Now you should create a tracking branch in your local repository for that branch.

git checkout --track origin/release-2.1.3-wso2v2

Here we are creating a tracking local branch to track the origin (your forked repository, not the upstream). Now if you look at the branch list, you can see the newly created branch.

git branch -a

* release-2.1.3-wso2v2
 remotes/origin/HEAD -> origin/master

Now you need to see what is the link between your remote branches and local branches. You can do that with the following command.

git remote show origin

This will give you the information about the connection between your local and remote branches.

* remote origin
 Fetch URL:
 Push  URL:
 HEAD branch: master
 Remote branches:
    master               tracked
    release-2.1.3-wso2v1 tracked
    release-2.1.3-wso2v2 tracked
 Local branches configured for 'git pull':
    master               merges with remote master
    release-2.1.3-wso2v2 merges with remote release-2.1.3-wso2v2
 Local refs configured for 'git push':
    master               pushes to master               (up to date)
    release-2.1.3-wso2v2 pushes to release-2.1.3-wso2v2 (up to date)

Chandana NapagodaWriting a Create API Executor for API.RXT

One of the use cases of the WSO2 Governance Registry is storing metadata about different artifacts. In an organization, there can be metadata for different kinds of artifacts, such as REST APIs, SOAP services, etc. In such a scenario, you can use the API and Service RXTs available in the WSO2 Governance Registry to store that metadata.

Using the API metadata stored in the WSO2 Governance Registry, you can publish APIs into the WSO2 API Manager without accessing the API Manager's web interface. This API creation is handled through a lifecycle executor in the WSO2 Governance Registry. Once the relevant lifecycle state is reached, the executor invokes the Publisher REST API of the WSO2 API Manager and creates the API. The "Integrating with WSO2 API Manager" documentation explains how to create an API using SOAP service metadata.

If you want to create an API from the REST API metadata available in the WSO2 Governance Registry, you have to write your own executor. I have written an example implementation of such a lifecycle executor, which can be used to create APIs in the API Manager.
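
To make the executor's job concrete, here is a rough sketch, in Python rather than the Java a real G-Reg executor would use, of the kind of REST call such an executor makes. The endpoint path, payload field names, and token handling below are illustrative assumptions, not the exact Publisher API schema:

```python
import json
import urllib.request

def build_api_payload(name, context, version, endpoint_url):
    """Assemble a minimal API-creation payload from RXT metadata fields.
    Field names here are illustrative, not the exact Publisher API schema."""
    return {
        "name": name,
        "context": context,
        "version": version,
        "endpointConfig": {"production_endpoints": {"url": endpoint_url}},
    }

def create_api(base_url, access_token, payload):
    """POST the payload to a publisher endpoint (the path is an assumption)."""
    req = urllib.request.Request(
        base_url + "/api/am/publisher/apis",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": "Bearer " + access_token,
                 "Content-Type": "application/json"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_api_payload("OrderAPI", "/order", "1.0.0",
                            "https://backend.example.com/order")
print(payload["context"])  # → /order
```

An actual executor would read these values from the api.rxt artifact fields during the lifecycle transition instead of hard-coding them.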

John Mathon: Tesla Update: How is the first IoT Smart-car Connected Car faring?

(Image: Tesla S’s at a Supercharger station)

Here is my previous article on the Tesla IoT Case Study

How’s it going – a year later?

I have owned my Tesla for a year, and it is a year since I published my first review of it as an IoT case study along with a list of 13 beefs on the Tesla forums.  I have 14,000 miles on the car, which is a lot more than I planned.  I thought I would use my ICE car a lot more; instead it has sat in the driveway unused, collecting dust month after month, because the Tesla is so much more fun to drive and cheaper.  My fiancée insists on driving it whenever I don’t.

Tesla owners are pretty feisty. When I published my 13 small beefs in the Tesla forums, instead of high-fives from people who felt similarly, I got a lot of criticism.  I was only offering some helpful suggestions; I was happy with the car and just trying to help Tesla improve.  They came down on me for the slightest criticism as if I were a foreign agent spreading enemy propaganda.  The ferociousness of their response reflects a loyalty similar to what Apple owners had in the early days of Apple.

Tesla, whether it looked at my list or not, has fixed 12 of the 13 beefs through software upgrades over the last year! That is something you don’t see every day.  The 13th beef: I didn’t like the placement of the turn-signal lever. Everybody told me this is where Mercedes places its turn-signal lever, so I was wrong.  I still find it awkward to use, but other than that, the last 12 months have been nothing short of an amazing car experience.

Besides fixing 12 of my 13 beefs, they have provided service above and beyond anything I imagined.  They installed titanium underbody bars for free to prevent damage when the car is driving low on the highway, and they replaced a windshield for free ($1,800) when it was damaged by a rock.

So far the worries about hacking the Tesla, or possible corruption of apps, have not materialized.   That’s obviously good.  Part of that is undoubtedly the low number of Teslas on the road, and the fact that Tesla has not enabled many apps to run in the car and has limited the functions of the API and mobile app.  I hope this year will bring more capabilities without any security issues.

It is reported that Tesla’s workforce is about 60% software engineers; the typical car company has 2%.   You can imagine this results in a drastically different kind of car.   No wonder it boasts incredible flexibility and upgradeability.  When you ask what is disruptive about Tesla, what its core competence is, the typical response is that the battery has been its most unique capability.   Indeed, surprisingly, nobody has really replicated the Tesla battery capability.   Maybe there is more to this than meets the eye.   However, with 60% of the workforce being software engineers, the kinds of things Tesla cars can do and are doing are BEYOND THE CAPABILITY of other car manufacturers to compete with in the near term, possibly ever.   If cars will be predominantly software in the future, then car companies which are 98% mechanical workers and engineers will be hopeless against the integrated software powerhouse that Tesla is building.

Think about the clunky user interfaces of most cars.   Do you imagine they will replicate the integrated experience of the Tesla anytime soon?  Where the suspension system is linked to the GPS so it knows when to raise or lower itself?   Where my calendar is integrated with the navigation, heating, and cooling systems for my comfort and convenience?     I think we are looking at a long-term sustainable advantage that may be impossible for almost any car manufacturer to compete with.    In the end it will depend on whether the smart connected car really is what consumers want.   If so, then Tesla will be able to manufacture a wide variety of cars and win the market because of its tremendous advantage in software integration.

One of Tesla’s software advantages is gathering information from the car and feeding it back into the car’s serviceability.   Tesla can fine-tune performance much faster and better than regular car manufacturers.  It will depend on whether Tesla can execute, but the fact that the car is basically a few components (motor, steering, braking, suspension, heating/cooling), each composed of relatively few parts, means that less is spent on engineering the car’s parts to work together physically and more on making them work together logically.   Everything becomes adjustable virtually.   An example is the suspension system, which can become smarter because it can know the experience of other drivers on the road.  This is the network effect applied to cars.  Watch out, car industry.  If Tesla can scale, it may be impossible to stop.

Elon Musk is a disruptive individual

His business strategy seems to be to throw as many monkey wrenches as possible into the ways we have thought things “had to be”.    I mentioned some of the disruptive things he did with the Tesla.  (I won’t belabor the disruptive ideas of SpaceX or SolarCity, or building a thousands-of-mph tunnel between LA and SF, etc.)

Tesla’s biggest problem is that, with battery production capacity limiting it to 30,000 or so cars a year against annual US consumption of 16 MILLION ICE cars, it is not making the slightest dent in the car market or in challenging the ICE makers.   The new battery factory will allow Tesla to hit 500,000 cars a year, which would still be smaller than virtually any other car manufacturer in the world.   Musk also expects to be able to reduce the cost of the 60 kWh battery to below $3,000.

When you realize the scale of the world’s commitment to ICE, you realize this is not something that will change overnight.   Even in the most optimistic scenario, Tesla is a 1% car manufacturer for the next decade and cannot create disruption unless additional car manufacturers and additional battery capacity come on board more quickly.

Let’s review those disruptive aspects of the Tesla and how successful they’ve been:

1) At 1/3 the cost to operate, due to the efficiency of centralized electricity production and the electric motor, the car has less ecological impact than other cars


The move to electric cars and to replicating Elon Musk’s Tesla has made waves, but the ICE car manufacturers haven’t conceded defeat yet, and they don’t seem to be quaking in their boots.   Other electric cars are not selling well at all.  Tesla seems to be the only electric car succeeding.

I think the problem is similar to the Steve Jobs effect.  Steve Jobs understood that you had to solve the holistic ecosystem problem.   When Apple came out with the iPod, they also did the user interface, made it fast and simple, and backed it up with service.   The result was 70% market share in the consumer electronics industry, which is unheard of.   When I tried an iPod competitor I was astonished how stupid it was: a user interface of 12 indecipherable buttons and a 2-line, barely readable screen, and it took me 24 hours to upload a few songs.    Needless to say, I returned it and just got the next model iPod.    I couldn’t fathom how, when shown the exact thing they needed to make, the competitors still produced crappy alternatives.

The car’s cost of operation has met all my expectations and more.

Service costs:  $0

Energy Costs:  -$450/month

How could I be paying $450 a month less?   I save $250/month on my electric bill because I could switch to a time-based rate plan in California, moving my energy use to evening hours and cutting my costs dramatically.  I save at least $200/month in gas costs I no longer have.   I charge my car at home only half the time, which costs about $20/month; the other half is charged at garages and other places where I can hook up for free.  I don’t know how long many places will let me charge for free, but I expect corporations will increasingly provide free chargers for employees.
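
A quick sanity check of those numbers (these are the author's own rough estimates from above, not measured data):

```python
# Monthly savings as itemized above (rough estimates).
electric_bill_savings = 250   # time-of-use rate plan in California
gas_savings = 200             # "at least" $200/month of gas no longer bought

gross_savings = electric_bill_savings + gas_savings
print(gross_savings)          # → 450, the "-$450/month" energy figure

home_charging = 20            # the only recurring energy cost mentioned
print(gross_savings - home_charging)  # → 430 net, if home charging is deducted
```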

It is clearly the most ecological full-sized vehicle ever produced in any quantity, and it is cheap to operate.  I understand that if the car doesn’t last or has serious service problems, this prognosis will change, but so far it is virtually free to drive (other than the capital expenditure).

2) The size of the battery, the charge at home or in garages all over, superstations across the country for free and battery replacement options disrupts the gas service station business as well as other electric vehicles


Tesla is not only potentially disruptive to ICEs; it is disrupting the electric competition today.  Sales are in decline for almost every other electric car.

Elon Musk solved the distance-traveling problem, the lifetime problems, and the time-to-charge problems.  He gave us a solution that is elegant, powerful, and competitive in every aspect.  When other companies put up cars with 80-mile batteries, slow charging options, and poor performance, is it any wonder nobody wants to buy one?   My fiancée was thinking of an electric car after seeing the Tesla.  We went through all the alternatives and concluded they were all crap.  We wanted to buy another electric car, but none of them met the simple requirements.   I’m sorry, there is no comparison.   I don’t think they have a clue.   The European carmakers seem to be making the most effort to compete, but even they are coming up far short. Mercedes has licensed the Tesla battery; it will be interesting to see how that evolves.

3) The low maintenance and maintenance at home and the ability to update the car remotely with new features and fix the car, the ability of Tesla to capture real time data to improve service is transformative and disrupts the entire service business for cars.

There have been zero maintenance costs or issues with the car.  (Unless you count a loose piece of molding that had to be glued back, also done gratis by Tesla while they looked over my car, adjusted the tire pressure, and checked it out electronically.)

As I mentioned above, cost of service: $0.  However, because there are only 30,000 cars on the road, Tesla’s ability to disrupt the service business is ZERO at the scale it can achieve in the next decade.   Still, aspects of what Tesla has done are having an impact on car makers and consumers, who have adopted the connected-car concept wholesale.   Over the next few years I believe Tesla will have succeeded 100% in proving that a connected car is a better, cheaper, and safer car.

On items 5 and 6 below, Tesla has in two years disrupted the industry beyond expectations. I think it is succeeding in its disruption of service partly because most car manufacturers now acknowledge that gathering information from the car during operation, rather than only at service calls, provides valuable input that lets them improve their cars faster.   We will see whether other manufacturers use the connected options below to improve their service as well as Tesla has.

4) The user interface design, big screen with apps and ability to control the car with an app is precedent setting including the smartness of the car, anticipating your schedule, finding alternate routes, raising and lowering suspension based on experience demonstrate a smart car.  Future upgrades include self-driving capabilities all of which is precedent setting


No other car manufacturer has implemented the user interface, nor have I seen other car manufacturers commit to such a digital version of a car, large screen full control user interface concept.  No disruption yet.

5) The IOT capabilities including the ability to manage, find, operate the car remotely is precedent setting


I believe TESLA has achieved disruption with the connected IOT car

I love the IoT aspects of the car.   I love checking in to see how charging is going, I love being able to remotely turn on the air or heating before I get to the car.   I love being able to find my car anytime.  I love tracking when it is being driven by other people and knowing how fast they are driving or when it is about to meet me.   I love knowing I could operate the car without the key.

Virtually every manufacturer has agreed and is committed to full time internet in cars.  Some have committed their entire fleets to the idea in the next year.   The biggest problem will be how much of the car is available for IoT operation or viewing, how much is actually useful?  Is this simply a matter of making it easier to browse the web in your car or about the car itself? If they are just making internet connectivity in your car more available I think they will find this is not necessarily a big winner.   Connectivity costs money and if you have it on your phone already it is not clear everybody will sign up for this alone.

I believe that part of the success of the connected car is the obvious benefits of these features and the upgrades I talk about in the next point.


6) The ability to upgrade the car over the air or fix it is precedent setting

These are some of the improvements they have downloaded to my car in the last 12 months:

1) Better support for starting and stopping on hills

2) Smart suspension that seems to magically figure out when my suspension should be lower or higher based on experience

3) Smart Calendar, Phone and Car integration which makes it easier to get to appointments and interact with my calendar, destination, conference call support, notes from the big screen

4) Smart commute pre-planning and pre-conditioning (figuring out the best route to work or back even when I didn’t ask it to saving me from stupidly taking the route with the accident on it.), pre-heating/cooling my car automatically before my commute

5) Better backup guide lines and backup camera integration, better support for parallel parking and object detection around the car

6) Improved bluetooth functionality

7) Expanded Media Options, Better Media Buffering for Internet Media, Improved USB playing

8) Improved route planning and traffic integration; telling me how much “gas” I’ll have at my destination and how much if I return to my start point; better ways to see how much “fuel” I was using during a trip compared to the estimate

9) automatically remembering charge stations I’ve used and finding Tesla charge stations easily

10) Traffic aware cruise control

11) The key fob can now also open the charge port remotely – super cool

12) Improvements in controlling the screen layout

13) Improvements in the Tesla app to allow operating the car without the key and controlling more car functions remotely

Is it any wonder that Tesla Owner satisfaction is at 98-99% for 2 years in a row?

I can’t imagine living with a car that didn’t constantly improve itself.  I believe other car manufacturers will implement this feature albeit with a lot less utility since the cars aren’t fully digital.

7) Self-driving car


This wasn’t on my original list.   At the time a year ago I didn’t know Tesla was so committed to self-driving features.

Everybody talks about Google self-driving cars.  Everybody talks about how Europeans are ahead of us in self-driving regulation and self-driving features.   The fact is, Tesla is implementing these things in part this year and is delivering today, in every car, the ability to become self-driving.

Google says they will have self-driving out for production in 2017.  People say that in the US this means US car manufacturers will start delivering self-driving features in 2020.

Tesla is delivering many of those features this year and next.

Tesla X, due mid-year 2015: 4-wheel-drive, 7-person crossover


(okay, it’s an S with 4-wheel drive, a little taller roof and cool doors)

My laundry list for Tesla for 2015

1) Tesla has promised app integration with Android and support for Chrome browser.


I hope this year it comes and with it the following apps supported:  Pandora, Waze, Chargepoint, StarChart, Yelp, Weather (along my route), Fandango, theScore, Camera Recording or photo taking from the car, RecorderPro, youtube, audible, audio read google news, facebook audio check in with camera, audio SMS both receipt and send, audio banking, clock with timer alarm etc…, skype, email, chrome, contacts.

Some cool things would be 2-way easy integration with Waze to allow easy reporting of police and accidents.

Being able to see the names of the stars and planets around me at night while driving would be cool on the big screen.

Weather would be a cool integration allowing me to see how weather along my route will change as I drive.

Facebook integration with camera would be ultra cool.  I promise to use it very infrequently.

Using the camera for more than driving functions would be pretty cool.  How many times have you taken a lousy picture from your car through the window?

Many of the other functions above would come with Android integration promised.

2) More self-driving features

3) More ability to control the car from the app including opening or closing windows

4) Ability to stream video from the cameras or sensors to the app

5) I want the video cameras and sensors to detect cops in the vicinity

6) Calendar “driving” event the car prepares (warm or cool, sets destination up)

7) Better efficiency (Hey I can hope!)

8) Integration with iWatch or other personal devices


The Future of the Connected Car

These are things I think are reasonable to expect

1) High speed communications

There is talk lately of the connected car having a huge impact, possibly even being the main motivation for the next generation of cell-phone communication speed improvements, what is called 5G.  The reason for this is to support self-driving capabilities and to make communication between cars seamless and instant.

Cars need to sense their environment in order to self-drive, but many believe that alone is not enough.  The idea is that if cars can communicate with each other instantly (with very low delays) they can coordinate themselves better.  A car stopping ahead can notify cars behind it to slow down, potentially much farther back than can be sensed with sensors (or our eyes).   If there is traffic, the flow can be regulated to produce optimum throughput on freeways.

Many people don’t know that freeway stops are caused by a wave effect.   When too many cars are on a freeway, a simple slowdown by one car causes a wave backwards that can eventually cause a stop.   This is why you sometimes stop on the freeway for no apparent reason.    LA implemented metering lights at freeway on-ramps to slow ingestion, which reduces the waves.   However, feedback from cars could work much better.
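
The backward-traveling wave is easy to see in a toy model (my own illustration, not from any traffic-engineering source): give every driver a one-step reaction delay to the car ahead, and a single braking event marches backward through the line of cars one position per step:

```python
def simulate(n_cars=50, steps=10, brake_car=40):
    """Each driver's speed this step is the speed the car ahead of them
    (higher index) had last step -- a one-step reaction delay.
    1.0 = free-flow speed."""
    speed = [1.0] * n_cars
    speed[brake_car] = 0.0            # one car brakes for a single instant
    history = [speed[:]]
    for _ in range(steps):
        new = speed[:]
        for i in range(n_cars - 1):
            new[i] = speed[i + 1]     # react, one step late, to the car ahead
        speed = new
        history.append(speed[:])
    return history

hist = simulate()
# The stopped "slot" moves backward one car per step, even though the
# car that originally braked has long since resumed speed:
print([row.index(0.0) for row in hist[:5]])   # → [40, 39, 38, 37, 36]
```

Real traffic waves decay and spread rather than propagating losslessly like this, but the backward drift of the jam is the same phenomenon.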

I don’t know if this will be the motivation for next generation cell communications.  I think it will happen anyway because of other IoT needs and general needs but this could be helpful too.

Cars talking to each other could be cool and something people want, or it could be obtrusive and annoying, a privacy problem, or even dangerous.

2) All Cars talking to the factory

It is obvious that cars sending real-time information back to manufacturers can help them provide better service, design cars better, and prevent breakdowns.   It will allow us to build safer, easier-to-maintain, more efficient cars.   However, a lot of people might not like having their driving habits reported.

3) Cars being upgraded

I am sold on cars upgrading themselves.

4) Cars being smart

A lot of the features Tesla has added this year prove how a car can be integrated better into our lives and our technology.  The ability of the car to anticipate things or to remember things that happened at certain locations is very powerful and smart. Tesla is absolutely being disruptive in showing the way here.

a) the car plans a route knowing the altitude changes and the effect on consumption

b) the car knows your calendar and scheduled events making it easy to anticipate your destination and tell you about driving problems without having to ask

c) the car makes it easy to access and integrate phone and car so that you need to take your hands off the wheel less

d) the car knows when it needs to be at high suspension and when it can be at low suspension

e) it knows when to heat or cool itself as I’m about to get in the car to go someplace in my calendar

These are things Tesla has already done and point to the idea that “smart” is really useful.   I believe that other manufacturers will see the utility and start adding these things as well.

If a car has sensors and somebody knocks it, or it senses a danger, it could notify you and give you visual information so you can decide whether to call the police.  In the event of an accident, the sensors could definitively identify which driver made the mistake.

5) Cars driving themselves

This is the biggie, obviously, that is talked about a lot.   I am a bit of a skeptic: I don’t see completely self-driving cars being able to work safely; I see more and more driving-assist features.   When cars are completely self-driving it will be a huge change.   Car design will probably change radically with it, because what’s the point of even having a driver’s seat if the car drives itself?  I see this as still a decade or more away.   However, improvements that let cars drive themselves on highways seem possible in the much shorter term.

Other articles:

At $18B, the Connected Car is an Ideal Market for the Internet of Things

Progressive Car Dongle creates IoT security and safety risk 

Tesla:  IOT Case Study

John Mathon: Deep Learning Part 3 – Why CNN and DBN are far away from human intelligence


How do current CNN compare to the scale of the human brain?

We know that the human brain consists of about 21 billion neurons and somewhere between 1 trillion and 1,000 trillion dendrite connections.  The closest species is the elephant, which has half as many neurons as a human brain.    We can therefore postulate that higher-level learning requires at least 12 billion neurons and probably more than 10 trillion dendrites.   The human cortex is composed of 6 layers, which might correspond to the layers in a CNN, though probably not.   The human cortex is what is called “recurrent”, which CNN can also be configured to be.   In recurrent neural nets the output of higher-level neurons is fed back into lower levels to produce a potentially infinite number of levels of abstraction.

An information-theoretic approach to this problem would suggest that until we are able to build CNN with billions of neurons, we are not going to get “human-challenging” intelligence.   The largest CNN built today has 650,000 neurons, about 1/20,000th of the postulated quantity needed for minimal human intelligence.   However, if the equivalent of dendrites is the connections between neurons in our neural nets, then we are 1/20-millionth of the way to having a sufficient CNN.   The problem with making larger CNN is that the computational and memory costs grow so quickly with size that even if we could design a CNN with 20,000 or 20,000,000 times as many neurons, we would need all the compute power ever built in the world to operate one functioning brain slightly smarter than an elephant’s.
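
The ratios quoted above check out as rough orders of magnitude, using the article's own figures:

```python
postulated_neurons = 12_000_000_000      # "at least 12 billion neurons"
largest_cnn_neurons = 650_000            # largest CNN cited above

# neuron gap: → 18461, i.e. roughly the "1/20,000th" quoted
print(postulated_neurons // largest_cnn_neurons)

postulated_dendrites = 10_000_000_000_000  # ">10 trillion dendrites"

# connection gap: → 15384615, i.e. roughly the "1/20 millionth" quoted
print(postulated_dendrites // largest_cnn_neurons)
```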

Even 10 years hence, assuming massive improvements in compute power, it simply wouldn’t be economical to use nearly all the compute power of the world to build one brain slightly dumber than a human’s, and that assumes the CNN or DBN technique proves scalable to much larger levels.

Some will object to this simplistic approach of saying CNN are far away from human intelligence.

CNN lacks key features of intelligence

CNN, DBN or any XNN technology have NOT demonstrated the following characteristics of intelligence and awareness:

1) Plasticity – the capability to adapt to new data and revise old abstractions.  In these CNN systems the neurons are essentially frozen after the learning phase, because if they are not, additional data doesn’t reinforce the neurons but instead makes them less and less discriminating.   Clearly this is not a characteristic of the human brain.

2) Many levels of abstraction – we have so far demonstrated only a few levels of abstraction with XNN.  Going from 3 or 4 layers to 100 layers or more has not been demonstrated, and many difficulties can be expected.   It has been shown that expanding the layers too much tends to cause too much backpropagation and the network becomes less facile.

3) Planning - no CNN does anything close to “planning”, whereas we know that planning is a separate function in the brain and is not the same as recognition.

4) Memory in general – CNN neurons are in a sense memory, in that they recognize based on past data, but they are an invisible memory: the underlying original data fed into the XNN is not remembered.   In other words, the XNN doesn’t remember that it saw a face or text before; it cannot go back and relive that memory or use an old memory to create a new level of abstraction.   We have no idea how to incorporate such memory into a CNN.

5) Experiential Tagged Memory – CNN do not experience time in the sense of relating events to time or to other experiences or memories.   The human brain tags all memories with lots of synthetic information: smells we had at the same time as we experienced the event, feelings we had, other things that happened at the same time.   Our brains can extract memories by searching through any of the numerous tags we have associated with them.  We have no such mechanism in CNN.   For instance, if we had 3 CNNs – a text-recognition CNN, an object-recognition CNN, and a voice-recognition CNN – we have no idea how combining them could produce anything better.

6) Self-perception – CNN have no feedback of their own processes or their own state, such that they could perceive the state of their neurons or the state of themselves changing from one moment to the next.  A human has an entire nervous system dedicated to perceiving itself: proprioception is an incredibly powerful capability of living things, letting them tell where their arm is, how they feel physically, whether they are injured, etc.   These innate systems are fed into all our learning systems, enabling us to do things such as throw a ball, look at ourselves, and perceive ourselves.  No CNN has anything like this.

7) Common sense – CNN have no sense of what is correct and what is wrong.  For example, if we create a “self-driving” CNN and it discovers the rules of driving, it will not be able to make a judgment about a situation it hasn’t been explicitly taught.   So, presented with a new situation at a stop light, it might decide the best thing to do is charge through the stop light.  There is no way to know a priori how a CNN will operate in all situations, because until it has been trained on all possible input scenarios we don’t know whether it will make a big mistake in some of them.  It may see a face in numerous pictures, but for a face we humans easily see, the CNN may say it is a tractor.

8) Emotions – As much as emotions don’t seem to be part of our cognitive mind they are a critical part of how we learn, how we decide what to learn.  Emotions depend on having needs and we have not figured out how to represent emotions in a CNN.

9) Self-preservation – like emotion, a basic factor that drives all creatures’ learning is self-preservation.  CNN don’t learn based on self-preservation; they simply do the math to learn patterns of input.

10) Qualia – CNN have no way of determining if something smells good, looks nice or if things are going well today.   We have no way of knowing if a CNN could ever have a concept of beauty.

11) Consciousness - consciousness is a complicated topic but CNN are clearly not conscious.

12) Creativity – There is an element of creativity in CNN in the sense that they see patterns and make new abstractions.   Abstractions are some evidence of creativity, but in the case of CNN it is a mechanical creativity: we know in advance the range of creativity possible.  It has not been shown that higher-level abstractions and the CNN mechanism would allow a CNN to make a leap of insight to discover or think of something it wasn’t presented with earlier.

Some of these things may be possible with CNN; we just don’t know.   Some may turn out to be easy to add.  But the sheer number of unknowns, and the complexity of all these other mechanisms clearly involved in higher-level abstraction, thinking, and consciousness, simply are not demonstrated in CNN.  Therefore I state unequivocally that we are NOT on the precipice of higher-level intelligence any time soon, as Elon Musk, Bill Gates, and Stephen Hawking suggest.   CNN are cool technology, but they are nowhere near able to represent higher-level thinking yet.


CNN, DBN, and XNN technology is definitely important and is helping us build smarter devices, a smarter cloud, and smarter applications.  We should leverage these advances as much as possible, but I would say it is clear we are nowhere near the danger some fear from this technology.  These XNN technologies today lack a huge amount of the basic machinery of intelligent thought: planning, logic, understanding.   We simply have no idea whether this technology will scale or get much better in 5 years.   Like other AI techniques before it, it may, despite the recent advances, simply reach a wall in a couple of years, leaving us stuck with no better way to move the ball forward toward building intelligent machines.

To get links to more information on these technologies check this:


Articles in this series on Artificial Intelligence and Deep Learning


Is Deep Learning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained

Deep Learning Part 3 – Why CNN and DBN are far away from human intelligence


Isuru Perera: Enabling Java Security Manager for WSO2 products

Why is the Java Security Manager needed?

In Java, the Security Manager allows applications to enforce various security policies. It helps prevent untrusted code from performing malicious actions on the system.

You need to enable Security Manager, if you plan to host any untrusted user applications in WSO2 products, especially in products like WSO2 Application Server.

The security policy should explicitly allow the actions performed by each code base. If an action is not allowed by the security policy, a SecurityException is thrown.

For more information on this, you can refer to the Java SE 7 Security Documentation.

Security Policy Guidelines for WSO2 Products

When enabling the Security Manager for WSO2 products, it is recommended to give all permissions to all jars inside the WSO2 product. For that, we plan to sign all jars using a common key and grant all permissions to the signed code by using a "signedBy" grant as follows.

grant signedBy "<signer>" {
    permission java.security.AllPermission;
};

We also recommend allowing all property reads; WSO2 has a customized Carbon Security Manager to deny certain system properties.

One of the main reasons is that in a Java security policy, we need to explicitly mention which properties are allowed, and if there are various user applications, we cannot have a pre-defined list of system properties. Therefore the Carbon Security Manager's approach is to define a list of denied properties using the "denied.system.properties" system property. This approach basically changes the Java Security Manager's rule of "deny all, allow specified" to "allow all, deny specified".
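The "allow all, deny specified" rule can be sketched as a simple deny-list check. This is my own simplified illustration, not Carbon's actual implementation; the denied property names below are just examples:

```java
import java.security.Permission;
import java.util.Arrays;
import java.util.List;
import java.util.PropertyPermission;

public class DenyListCheck {

    // Example deny list (illustrative; the real list is configured via a system property)
    static final List<String> DENIED = Arrays.asList(
            "javax.net.ssl.trustStore", "javax.net.ssl.trustStorePassword");

    // "Allow all, deny specified": permit everything except reads of denied properties
    static boolean isAllowed(Permission perm) {
        if (perm instanceof PropertyPermission) {
            return !DENIED.contains(perm.getName());
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isAllowed(new PropertyPermission("user.home", "read")));
        System.out.println(isAllowed(new PropertyPermission("javax.net.ssl.trustStore", "read")));
    }
}
```

A standard policy file must instead enumerate every allowed property, which is why a deny-list is more practical when arbitrary user applications are hosted.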

There is another system property named "restricted.packages" to control package access. However, this "restricted.packages" system property is not working in the latest Carbon, and I have created the CARBON-14967 JIRA to fix that properly in a future Carbon release.

Signing all JARs inside the WSO2 product

To sign the jars, we need a key. We can use the keytool command to generate a key.

$ keytool -genkey -alias signFiles -keyalg RSA -keystore signkeystore.jks -validity 3650 -dname "CN=Isuru,OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK"
Enter keystore password:
Re-enter new password:
Enter key password for
(RETURN if same as keystore password):

The above keytool command creates a new keystore file. If you omit the -dname argument, you will be prompted for all key details.

Now extract the WSO2 product. I will be taking WSO2 Application Server as an example.

$ unzip -q  ~/wso2-packs/

Let's create two scripts to sign the jars. The first script (let's say signjars.sh) finds all the jars and calls the second script (let's say signjar.sh), which signs a given jar using the keystore we created earlier. signjars.sh script:

#!/bin/bash
if [[ ! -d $1 ]]; then
    echo "Please specify a target directory"
    exit 1
fi
for jarfile in `find $1 -type f -iname \*.jar`
do
    ./signjar.sh $jarfile
done

signjar.sh script:


#!/bin/bash
set -e

jarfile=$1
keystore_file="signkeystore.jks"
keystore_keyalias="signFiles"
keystore_storepass="<keystore-password>"
keystore_keypass="<key-password>"

signjar="$JAVA_HOME/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -keystore $keystore_file -storepass $keystore_storepass -keypass $keystore_keypass"
verifyjar="$JAVA_HOME/bin/jarsigner -keystore $keystore_file -verify"

echo "Signing $jarfile"
$signjar $jarfile $keystore_keyalias

echo "Verifying $jarfile"
$verifyjar $jarfile

# Check whether the verification is successful.
if [ $? -eq 1 ]
then
    echo "Verification failed for $jarfile"
    exit 1
fi

Now we can see the following files.

$ ls -l
-rwxrwxr-x  1 isuru isuru  602 Dec 9 13:05 signjar.sh
-rwxrwxr-x  1 isuru isuru  174 Dec 9 12:56 signjars.sh
-rw-rw-r--  1 isuru isuru 2235 Dec 9 12:58 signkeystore.jks
drwxr-xr-x 11 isuru isuru 4096 Dec 6  2013 wso2as-5.2.1

When we run the first script (signjars.sh), all JARs found inside WSO2 Application Server will be signed using the "signFiles" key.

$ ./signjars.sh wso2as-5.2.1/ > log

Configuring WSO2 Product to use Java Security Manager

To configure the Java Security Manager, we need to pass a few arguments to the main Java process.

The Java Security Manager can be enabled by using the "java.security.manager" system property. We will specify the WSO2 Carbon Security Manager using this argument.

We also need to specify the security policy file using the "java.security.policy" system property.

As I mentioned earlier, we will also set the "restricted.packages" & "denied.system.properties" system properties.

Following is the recommended set of values to be used in wso2server.sh. (Edit the startup script and add the following lines just before the line " org.wso2.carbon.bootstrap.Bootstrap $*")

-Djava.security.manager=org.wso2.carbon.bootstrap.CarbonSecurityManager \
-Djava.security.policy=$CARBON_HOME/repository/conf/sec.policy \
-Drestricted.packages=sun.,com.sun.xml.internal.ws.,com.sun.xml.internal.bind.,com.sun.imageio.,org.wso2.carbon. \
-Ddenied.system.properties=javax.net.ssl.trustStore,javax.net.ssl.trustStorePassword,denied.system.properties \

Exporting signFiles public key certificate and importing it to wso2carbon.jks

We need to import the signFiles public key certificate into wso2carbon.jks, as the security policy file refers to the signFiles signer certificate from wso2carbon.jks (as specified by the first line of the policy file).

$ keytool -export -keystore signkeystore.jks -alias signFiles -file sign-cert.cer
$ keytool -import -alias signFiles -file sign-cert.cer -keystore wso2as-5.2.1/repository/resources/security/wso2carbon.jks

Note: the wso2carbon.jks keystore password is "wso2carbon".

The Security Policy File

As specified in the "java.security.policy" system property, we will keep the security policy file at $CARBON_HOME/repository/conf/sec.policy.

The following policy file should be enough for starting up WSO2 Application Server and deploying sample JSF & CXF webapps.

keystore "file:${user.dir}/repository/resources/security/wso2carbon.jks", "JKS";

// ========= Carbon Server Permissions ===================================
grant {
// Allow socket connections for any host
permission java.net.SocketPermission "*:1-65535", "connect,resolve";

// Allow reading all properties. Use denied.system.properties in wso2server.sh to restrict properties
permission java.util.PropertyPermission "*", "read";

permission java.lang.RuntimePermission "getClassLoader";

// CarbonContext APIs require this permission
permission javax.management.ManagementPermission "control";

// Required by any component reading XMLs. For example: org.wso2.carbon.databridge.agent.thrift:4.2.1.
permission java.lang.RuntimePermission "";

// Required by org.wso2.carbon.ndatasource.core:4.2.0. This is only necessary after adding above permission.
permission java.lang.RuntimePermission "";
};

// ========= Platform signed code permissions ===========================
grant signedBy "signFiles" {
    permission java.security.AllPermission;
};

// ========= Granting permissions to webapps ============================
grant codeBase "file:${carbon.home}/repository/deployment/server/webapps/-" {

// Required by webapps. For example JSF apps.
permission java.lang.reflect.ReflectPermission "suppressAccessChecks";

// Required by webapps. For example JSF apps require this to initialize com.sun.faces.config.ConfigureListener
permission java.lang.RuntimePermission "setContextClassLoader";

// Required by webapps to make HttpsURLConnection etc.
permission java.lang.RuntimePermission "modifyThreadGroup";

// Required by webapps. For example JSF apps need to invoke annotated methods like @PreDestroy
permission java.lang.RuntimePermission "accessDeclaredMembers";

// Required by webapps. For example JSF apps
permission java.lang.RuntimePermission "";

// Required by webapps. For example JSF EL
permission java.lang.RuntimePermission "getClassLoader";

// Required by CXF app. Needed when invoking services
permission javax.xml.bind.JAXBPermission "setDatatypeConverter";

// File reads required by JSF (Sun Mojarra & MyFaces require these)
// MyFaces has a fix
permission java.io.FilePermission "/META-INF", "read";
permission java.io.FilePermission "/META-INF/-", "read";

// OSGi permissions are required to resolve bundles. Required by JSF
permission org.osgi.framework.AdminPermission "*", "resolve,resource";
};
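To see how wildcard grants like the ones in this policy cover concrete actions, the standard Permission.implies() check can be exercised directly. A small sketch (the hostname, port and file name are illustrative):

```java
import java.io.FilePermission;
import java.net.SocketPermission;
import java.util.PropertyPermission;

public class PolicyImplies {
    public static void main(String[] args) {
        // The socket grant covers a connect to any host on any port
        SocketPermission sockets = new SocketPermission("*:1-65535", "connect,resolve");
        System.out.println(sockets.implies(new SocketPermission("localhost:9443", "connect")));

        // PropertyPermission "*", "read" covers reading any system property
        PropertyPermission props = new PropertyPermission("*", "read");
        System.out.println(props.implies(new PropertyPermission("carbon.home", "read")));

        // FilePermission "/META-INF/-", "read" covers any file under /META-INF
        FilePermission files = new FilePermission("/META-INF/-", "read");
        System.out.println(files.implies(new FilePermission("/META-INF/faces-config.xml", "read")));
    }
}
```

This is the same implication logic the Security Manager applies at runtime when it compares a requested permission against the grants in sec.policy.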

The security policies may vary depending on your requirements. I recommend testing your application thoroughly in a development environment.

NOTE: There are risks in allowing some Runtime Permissions. Please look at the java docs for RuntimePermission. See Concerns below.

Troubleshooting Java Security

Java provides the "java.security.debug" system property to set various debugging options and monitor security access.

I recommend adding the following line to wso2server.sh whenever you need to troubleshoot an issue with Java Security.

-Djava.security.debug="access,failure"

After adding that line, all the debug information will be printed to standard output. To check the logs, we can start the server using nohup.

$ nohup ./wso2server.sh &

Then we can grep the nohup.out and look for access denied messages.

$ tailf nohup.out | grep denied

Concerns with Java Security Policy

There are a few concerns with the current permission model in WSO2 products.

  • Use of ManagementPermission instead of Carbon-specific permissions. The real ManagementPermission is used for a different purpose. I created the CARBON-14966 JIRA to fix that.
  • Ideally the permission 'javax.management.ManagementPermission "control"' should not be specified in the policy file, as it is only required for privileged actions in Carbon. However, due to indirect usage of such privileged actions within Carbon code, we need to specify that permission. This also needs to be fixed.
  • In the above policy file, JSF webapps etc. require some risky runtime permissions. I recommend using a Custom Runtime Environment (CRE) in WSO2 Application Server for JSF webapps and signing the jars inside the CRE. You can also grant permissions based on the jar names (use grant codeBase). However, signing jars and using a CRE is a better approach with WSO2 AS.
If you encounter any issues when using the Java Security Manager, please discuss them on our developer mailing list.

Ajith VitharanaJAX-WS client to authenticate to a WSO2 server.

We are going to use the AuthenticationAdmin service to generate the JAX-WS client.

1. Open the carbon.xml file and set the value of the following parameter to false: <HideAdminServiceWSDLs>.
2. Start the server and point your browser to,
https://[host or ip]:<port>/services/AuthenticationAdmin?wsdl
3. Save the AuthenticationAdmin.wsdl file to a new folder (let's say code-gen).
4. Go to the code-gen directory and execute the following command to generate the client code.
wsimport -p org.wso2.auth.jaxws.client AuthenticationAdmin.wsdl
5. Now you will end up with the following errors.
parsing WSDL...

[ERROR] operation "logout" has an invalid style
  line 1090 of file:/home/ajith/Desktop/auth-client-jaxws/AuthenticationAdmin.wsdl

[ERROR] operation "logout" has an invalid style
  line 1127 of file:/home/ajith/Desktop/auth-client-jaxws/AuthenticationAdmin.wsdl

[ERROR] operation "logout" has an invalid style
  line 1208 of file:/home/ajith/Desktop/auth-client-jaxws/AuthenticationAdmin.wsdl

[ERROR] missing required property "style" of element "operation"
    Failed to parse the WSDL.
6. The java2wsdl tool doesn't generate some important elements which are expected by the wsimport tool (e.g., the output element in a wsdl operation for a void return type). So adding the following elements at the missing places (the line numbers in the above errors) will solve the issue.

<wsdl:output message="ns:logoutResponse" wsaw:Action="urn:logoutResponse"></wsdl:output>

<soap:body use="literal"></soap:body>

<xs:element name="logoutResponse">
    <xs:complexType>
        <xs:sequence>
            <xs:element minOccurs="0" name="return" type="xs:boolean"></xs:element>
        </xs:sequence>
    </xs:complexType>
</xs:element>

<wsdl:message name="logoutResponse">
    <wsdl:part name="parameters" element="ns:logoutResponse"></wsdl:part>
</wsdl:message>
7. You can find the updated wsdl file here.
8. Execute the above command again to generate the client code.
9. Execute the following command to generate the jar file.
jar cvf org.wso2.auth.jaxws.client.jar *

10. Add the above jar file to the class path and build the client project.
 (Before executing the client, change the trust store path.)

package org.sample.jaxws.client;

import org.wso2.auth.jaxws.client.AuthenticationAdmin;
import org.wso2.auth.jaxws.client.AuthenticationAdminPortType;

import javax.xml.ws.BindingProvider;
import javax.xml.ws.handler.MessageContext;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Created by ajith on 2/8/15.
 */
public class LoginClient {

    public static void main(String[] args) {
        System.setProperty("javax.net.ssl.trustStore", "<product_home>/repository/resources/security/client-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
        AuthenticationAdmin authenticationAdmin = new AuthenticationAdmin();
        AuthenticationAdminPortType portType = authenticationAdmin.getAuthenticationAdminHttpsSoap11Endpoint();

        try {
            portType.login("admin", "admin", "localhost");

            // Read the session cookie set by the login call
            Map<String, List<String>> responseHeaders = (Map<String, List<String>>) ((BindingProvider) portType)
                    .getResponseContext().get(MessageContext.HTTP_RESPONSE_HEADERS);
            List<String> cookie = responseHeaders.get("Set-Cookie");

            // Attach the cookie to subsequent requests on the same port
            Map<String, List<String>> requestHeaders = (Map<String, List<String>>) ((BindingProvider) portType)
                    .getRequestContext().get(MessageContext.HTTP_REQUEST_HEADERS);
            if (requestHeaders == null) {
                requestHeaders = new HashMap<String, List<String>>();
            }
            requestHeaders.put("Cookie", Collections.singletonList(cookie.get(0)));
            ((BindingProvider) portType).getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, requestHeaders);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

John MathonIs DeepLearning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained


2010 Convolutional Neural Nets

Convolutional Neural Nets operate on raw data from the senses using some basic numerical techniques which aren’t that hard to understand.  It is possible these same techniques are employed in our brain.  Therefore the concern is that if CNNs have an ability to abstract similar to the human brain’s, then it is simply a matter of scale to take a CNN system to the point where it can abstract reality well enough to appear to be a general-purpose learning machine, i.e. a human intelligence.

Part of this makes sense.  The way these CNNs work is that every 2 layers of the CNN “brain” create abstractions and then select among them.  The more layers, the more abstractions.    This is clearly how the brain works.  The brain is an abstraction building machine.  It takes raw sensory data and keeps building higher and higher level abstractions that we use to understand the world around us.  We also know the cortex of the brain is composed of layers (6 in the human brain), so layers seem like a workable hypothesis for a way to build a brain.

In a facial recognition algorithm using a CNN, the first 2 layers of the “artificial brain” match raw stimuli and try to find first-level abstractions.   At this first layer, the neurons will be able to recognize local phenomena like an eye, a nose, or an ear of different shapes and perspectives.  The more neurons, the more abstractions can be formed and the more classifications the CNN can generate.  However, a large number of these abstractions will be meaningless, poorly performing abstractions we should discard (they don’t work time after time to help us understand but are just random coincidences in the data we sent to the brain).   The next layer of the CNN does a reduction (filter), essentially allowing the use of only the best performing abstractions from the previous level.   So, the first 2 layers give us the ability to abstract (recognize) some basic features of a face.  The next 2 layers operate to recognize combinations of these features that make up a face and then select the ones that produce the most reliable “face” recognition.   In the facial recognition example, those next 2 layers may recognize that combinations of eyes, lips, ears and other shapes are consistent with a face, a tractor, or a spoken word.   So, by using a 4-layer CNN we can recognize a face from something else.  The next 2 layers may abstract up facial characteristics that are common among groups of people, such as female or male, ethnicity, etc.

The more layers the more abstractions we can learn.  This seems to be what the human brain does.  So, if you could make enough layers would you have a human brain?

Each neuron in that first layer postulates a “match” by looking at local raw data that is in proximity to the neuron.  The neuron tries to find a formula that best recognizes a feature of the underlying stimuli and that consistently returns a better match on the next set of data presented.   This neuron is presented with spatially invariant data across the entire field of data from the lower layer.   In a Deep Belief Network (a type of CNN) a Markov random mechanism is used to try to assemble possible candidate abstractions.   As stimuli are presented to the system, these postulates evolve and the second layer “selects” the best performing matching algorithms, basically reducing the number of possible patterns down to a more workable number of likely abstractions that match better than others.  It does this simply by choosing the best performing neuron of all the local neurons it sees from the lower level.    The algorithm each neuron uses estimates the error of the match from a known correct answer, and the neuron adjusts its matching criteria using a mathematical algorithm to rapidly approach the correct best answer.   Such a mechanism requires what is called a “labeled” dataset, in which the actual answer is known so that the error function can be calculated.

The neural net algorithms work best when trained with labeled datasets, of course, because the system is given the correct answer and can rapidly approach it.  This way of learning is not that much different from what we do with Machine Learning (another branch of AI).
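The error-driven adjustment described above can be sketched numerically. The following toy example is my own illustration (a single weight trained by gradient descent, not a real CNN): the known label lets us compute an error, and the weight is repeatedly nudged to shrink it.

```java
public class NeuronSketch {

    // Train one weight so that w * x matches the labeled answer, by
    // repeatedly stepping against the squared-error gradient.
    static double train(double x, double label) {
        double w = 0.0;
        double learningRate = 0.1;
        for (int i = 0; i < 100; i++) {
            double prediction = w * x;
            double error = prediction - label;   // known answer => computable error
            w -= learningRate * error * x;       // adjust the matching criteria
        }
        return w * x;
    }

    public static void main(String[] args) {
        // The prediction converges toward the labeled answer 6.0
        System.out.println(train(2.0, 6.0));
    }
}
```

A real network does this across millions of weights at once, but the principle per neuron is the same: measure the error against the label, then adjust toward the correct answer.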

Another important advance in neural network design happened in 1997, with the creation of the Long Short-Term Memory neuron, which can remember data.   This neuron can remember things indefinitely or forget them when needed.   Adding this neuron type to a recurrent network has allowed us to build much better cursive text recognition and phoneme recognition in voice systems.

Deep Belief Networks

A variation of the CNN called the Deep Belief Network operates by using probabilistic techniques to reproduce a set of inputs without labels.  A DBN is trained first with unlabeled data to develop abstractions and then tuned with a second phase of training using labeled data.   Improvements of this type of neural net have enabled the best results of any CNN, quickly and without the massive labeled training datasets that are otherwise a problem to obtain.   A further enhancement of this technique involves, during the second phase of learning, actively managing the learning of each set of layers in the neural network to produce the best results from each layer.   This would be equivalent to learning your arithmetic before learning your algebra.

In humans we do have an analog to the DBN and “labeled” dataset approach to learning, in the sense that we go to school.  In school they tell us the correct answers so our brains can learn which lower level abstractions, which neurons, to select as producing the better answers.  As we go through school we fine tune our higher level abstractions.  All day the human brain is confronted with learning tasks in which it learns the correct answer and could be training its neurons to select the best performing abstraction from the candidates we build in our brains.  Labeled datasets by themselves are not a discriminating difference between human brains and CNNs.


CNNs and DBNs, and variations on these neural network patterns, are producing significantly better results for recognition problems.

To get links to more information on these technologies check this:


Articles in this series on Artificial Intelligence and Deep Learning


Is DeepLearning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained

Deep Learning Part 3  – Why CNN and DBN are far way from human intelligence


Ajith VitharanaSSL termination - API Gateway.

SSL Termination-API Gateway

The WSO2 API Manager product consists of the following four components.

1. API Publisher
2. API Store
3. API Gateway
4. API Key Manager

API Publisher:

The API Publisher provides a rich three-step wizard (Design / Implement / Publish) to create an enterprise API.

API Store:

The published APIs are visible in the API Store, which provides all the enterprise API store capabilities like subscribing, token generation, rating, API docs, client tools, etc.

API Gateway

The published APIs will be deployed in the API Gateway. All inbound and outbound API requests are accepted by the API Gateway. You can publish APIs using HTTP and/or HTTPS.

API Key Manager

Once a request comes to the API Gateway, it will be redirected to the API Key Manager to validate the authentication token.

If the token is valid, the API Gateway forwards the request to the actual back-end API or service through a non-TLS (HTTP) or TLS (HTTPS) connection.

If the token is invalid, the API Gateway terminates the request and sends an authentication failure response back to the client that invoked the API.
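The token check described above boils down to a simple branch. The following is a hypothetical sketch; the class, method and token names are illustrative stand-ins, not API Manager code:

```java
public class GatewaySketch {

    // Stand-in for the Key Manager's token validation call
    static boolean keyManagerValidate(String token) {
        return "valid-token".equals(token);
    }

    // Stand-in for forwarding to the back-end API over HTTP or HTTPS
    static String forwardToBackend() {
        return "200 OK (backend response)";
    }

    static String handle(String token) {
        if (keyManagerValidate(token)) {
            return forwardToBackend();          // valid: forward to the back end
        }
        return "401 Authentication Failure";    // invalid: terminate the request
    }

    public static void main(String[] args) {
        System.out.println(handle("valid-token"));
        System.out.println(handle("bogus"));
    }
}
```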


# - All the APIs (exposed by the API Gateway) use the public certificate of the API Gateway. The client (who invokes the API) will use that certificate to establish the TLS connection with the API Gateway.

# - All four components of API Manager can be clustered separately to achieve high availability and load balancing in each layer.

# - The API metadata is stored in the registry database (REG_DB) and the API Manager database (AM_DB); therefore those two databases should be shared across the publisher, store and key manager components.

# - Reverse proxy servers are placed in front of the API Gateway and API Store to add more security and load balancing.


John MathonIs DeepLearning going to result in human intelligence or better anytime soon – the setup?

Does a significant advance like the recent advances in AI presage a massive new potentially dangerous robotic world?


Elon Musk, Stephen Hawking, Bill Gates and others have stated that recent advances in AI, specifically around CNNs (Convolutional Neural Nets), also called DeepLearning, have the potential to finally represent real AI.

This is exciting and worrisome if true.   I have been interested in this problem from the beginning of my career.   When I started doing research at MIT into AI, I had kind of a “depressing” feeling about the science.   It seemed to me that the process the brain used to think couldn’t be that hard, and that it wouldn’t take computer scientists very long, trying lots of possible approaches, to learn the basic operation and process of how people “abstract and learn” and eventually give computers the ability to compete with us humans.   Well, this has NOT been easy.   Decades later we had made essentially zero progress in figuring out how the brain did this “learning” thing, let alone what consciousness was and how to achieve that.    I have a blog about that sorry state of affairs from the computer science perspective and the pathetic results of biologists trying to solve the problem from the other side, i.e. to figure out how the brain works.    Recent discoveries indicate the brain may be far more complicated than we thought even a few years ago.  This is not surprising.  My experience in almost every scientific discipline is that the more we look into something in nature, the more inevitably we discover it is a lot more complicated than it first seemed.

Things are simplified by a discovery but rapidly get more complicated

Let’s take genetics, for instance, and the amazingly simple discovery of DNA.  DNA is composed of 4 principal chemicals distributed across a number of chromosomes in a helical fashion.   This simple model seemed like it would lead to some simple experiments to understand these DNA strands and how they function.   Suffice it to say, the simple idea has become enormously complex, to the point that we now know DNA gets created during the lifespan of humans that can affect their functioning in their life, and some of it can be passed on to subsequent generations of children, contrary to a fundamental belief of evolution postulated by Darwin.   We learned that DNA was composed of 99% junk, only to discover it wasn’t junk but a whole different language than the language of genes.   Each of these things breaks down into more and more complex patterns.

This same pattern, in which a simplifying discovery leads to some advances, we think it’s all so simple, and then we find it’s way more complicated as we look deeper, is found over and over in physics, in chemistry and in almost every subject I am aware of.   So it is not surprising that the more we look at brains and intelligence, the more we learn it is more complicated than we initially thought.   We have a conceit as humans to believe the world must fundamentally break down into simple concepts that we can figure out and eventually understand, but every time we make advances in understanding a part, we seem to find a whole new set of areas that put our understanding farther away than ever.   Every simplification is followed by an incredible array of complications that lead us to more questions than we knew existed when we started.

Interestingly, this philosophical problem, that as we look at something through a simplification or organizational mechanism the problem simply becomes more complex, was discussed in the book “Zen and the Art of Motorcycle Maintenance,” an incredibly insightful philosophy book published decades ago and still relevant.

I am therefore 100% skeptical that the recent discovery of a new way to operate neural nets is somehow the gateway to achieving the 50-year goal of human-level intelligence.   Nevertheless, the advance has moved us substantially forward in a field that had been more or less dead for decades.   We can suddenly claim much improved recognition rates for voices, faces and text, and lots of new discoveries.  It’s an exciting time.   Yet the goal of human-level intelligence is far, far from where we are.

Will neural nets produce applications that will take our jobs away or make decisions to end human life?   Maybe, but it’s nothing different from what has been going on for centuries.

Neural nets were a technology first imagined in the 1950s and 1960s.  The first implementations of neural nets were tried in the 70s and 80s and produced less than impressive results.  For 30 years virtually no progress was made, until 2010, 4 years ago.   Some simple advances in neural nets have resulted in the best performing pattern recognition algorithms we have for speech, text, facial and general visual recognition problems, among others.  This has brought neural nets, and in particular a form called Convolutional Neural Nets (CNN), sometimes also called DeepLearning, to prominence as a discovery that some say will lead to human intelligence soon.

Some of these new recognition machines are capable of defeating humans at specific skills.  This has always been the case with computers in general.   For a specific problem we can design, using our understanding of the problem, a computer algorithm which, when executed by a high speed computer, performs better at that specific task than any human.  One of the first that was impressive and useful was Mathematica.  Built in the 80s, Mathematica was able to solve complex algebraic and differential equation problems that even the best mathematicians couldn’t solve.  IBM’s Watson is a recent example of something similar, but as I’ve pointed out, such things are not examples of general purpose learning machines.   Mathematica will never help you plan a vacation to Mexico, nor will Watson be able to solve mathematical equations.   Neither will ever have awareness of the world or be able to go beyond its limited domain.  They may be a threat to humans in the sense that they might cost jobs if we develop enough of these special purpose systems.  They could eventually do a lot of the rote work that many humans do.   If such systems are put in charge of dangerous systems, such as our nuclear arsenal, and they make a decision to kill humans inadvertently, it is not because of evil AI; it is because some human was stupid enough to put a computer with no common sense in charge.    Such problems should be blamed on the humans who wrote those algorithms or who put such systems in charge of dangerous elements.

So, the idea that computers and AI could be a physical threat to humans or a job risk is real, but it is within the domain of our current abilities to control.

To get links to more information on these technologies check this:


Articles in this series on Artificial Intelligence and Deep Learning


Is DeepLearning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained

Deep Learning Part 3  – Why CNN and DBN are far way from human intelligence


Chintana WilamunaScalable deployments - Choosing middleware components

Different user groups have different performance expectations from a system they’re involved with. Given a platform, be it a middleware/application development or integration, there are 3 sets of stakeholders we can consider,


Developers always want the platform/tools they’re working with to be robust. Some developer expectations include,

  1. A robust platform on which applications can be developed and deployed with minimum effort and time.
  2. Use of different 3rd-party libraries to ease development as much as possible. When library usage increases, the resources available to shared tenants go down. No amount of autoscaling can solve uncontrolled library usage.
  3. Fast service calls regardless of business logic complexity in services, sort of ignoring network boundaries, external DBs and system calls. Another aspect is that when stricter security measures are utilized, more time is spent processing security-related headers, reducing response times.
  4. Fast application loading times as applications grow in size and complexity. Without designing the application to keep a low footprint, or to process large data chunks efficiently, the underlying platform cannot solve this. When the size and complexity of applications grow, autoscaling times of instances will also increase.

Operations or devops folks have a different set of performance expectations from a platform.

  1. They want instant instance spawning and start-up times.
  2. Fast log collection. This can be collecting platform-level logs to identify performance problems as well as collecting application-level logs for developers.
  3. No sane devops folks want to manually deploy a complex system. The entire deployment should be automated.
  4. Seamless autoscaling of services from the deployment itself, without having to wait until one server gets bogged down and then doing some manual patching.
  5. Available resources should be equally distributed among active nodes for efficient utilization of shared resources.

Customers are probably the most difficult lot to please. Quoting from a Guardian article based on a Mozilla blog post of metrics,

32% of consumers will start abandoning slow sites between one and five seconds

So applications exposed to customers should be fast and have short response times.

Users expect application developers to write fast responsive applications. App developers look at devops/operations to provide a scalable platform for their apps. Devops in turn rely on architects to provide a robust deployment architecture that scales based on application requirements.

Expectations on architects are,

  1. Identify and use correct reference architecture/s
  2. Use correct deployment architecture based on application requirements as well as non functional requirements
  3. Deployment architecture should be aligned with business expectations of the system
  4. Should take SLAs into account
  5. Identify correct SOA/middleware components for the business use cases involved

Getting the right architecture

Identifying the right SOA components is an important step. In a typical middleware stack there are multiple components to choose from. Usually each component has very different performance characteristics. There are many ways to implement the same scenario.

Let’s see a simple example scenario: withdrawing money from a bank. This can be implemented in several different ways.

Example - Method 1

Here the scenario is implemented as a web service or web application which talks to a DB, hosted in an application container.

Example - Method 2

This example exposes DB operations through a Data Services Server, uses an ESB for mediation, and then exposes an API for withdrawing money.

Example - Method 3

In this example, a Business Process Server orchestrates the process. A Business Activity Monitor monitors business transactions and a Governance Registry acts as a repository.

A given scenario can be implemented in many ways, and the complexity and choice of components will vary. Based on the scenario at hand, the right components should be selected.

Chanaka FernandoGIT Cheat Sheet for beginners

Clone an existing repository
$ git clone ssh://
Create a new local repository
$ git init

Changed files in your working directory
$ git status
Changes to tracked files
$ git diff
Add all current changes to the next commit
$ git add .
Add some changes in <file> to the next commit
$ git add -p <file>
Commit all local changes in tracked files
$ git commit -a
Commit previously staged changes
$ git commit
Change the last commit
Don't amend published commits!
$ git commit --amend

Show all commits, starting with newest
$ git log
Show changes over time for a specific file
$ git log -p <file>
Who changed what and when in <file>
$ git blame <file>

List all existing branches
$ git branch
Switch HEAD branch
$ git checkout <branch>
Create a new branch based on your current HEAD
$ git branch <new-branch>
Create a new tracking branch based on a remote branch
$ git checkout --track <remote/branch>
Delete a local branch
$ git branch -d <branch>
Mark the current commit with a tag
$ git tag <tag-name>

List all currently configured remotes
$ git remote -v
Show information about a remote
$ git remote show <remote>
Add new remote repository, named <remote>
$ git remote add <remote> <url> 
Download all changes from <remote>,
but don't integrate into HEAD
$ git fetch <remote>
Download changes and directly merge/ integrate into HEAD
$ git pull <remote> <branch>
Publish local changes on a remote
$ git push <remote> <branch>
Delete a remote-tracking branch
$ git branch -dr <remote/branch>
Publish your tags
$ git push --tags

Merge <branch> into your current HEAD
$ git merge <branch>
Rebase your current HEAD onto <branch>
Don't rebase published commits!
$ git rebase <branch>
Abort a rebase
$ git rebase --abort
Continue a rebase after resolving conflicts
$ git rebase --continue
Use your configured merge tool to solve conflicts
$ git mergetool
Use your editor to manually solve conflicts and (after resolving) mark the file as resolved
$ git add <resolved-file>
(or, if the resolution is to delete the file)
$ git rm <resolved-file>

Discard all local changes in your working directory
$ git reset --hard HEAD
Discard local changes in a specific file
$ git checkout HEAD <file>
Revert a commit (by producing a new commit with contrary changes)
$ git revert <commit>
Reset your HEAD pointer to a previous commit
...and discard all changes since then
$ git reset --hard <commit>
...and preserve all changes as unstaged changes
$ git reset <commit>
...and preserve uncommitted local changes
$ git reset --keep <commit>
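To make the difference between the reset variants concrete, here is a minimal sketch. It assumes git is installed and runs in a throwaway temp directory; the file name and commit messages are made up for illustration:

```shell
set -e
# Work in a throwaway repository so nothing real is touched
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo one > file.txt
git add file.txt
git commit -qm "first"
first=$(git rev-parse HEAD)

echo two > file.txt
git commit -qam "second"

# --hard: move HEAD back AND discard the later change from the working tree
git reset -q --hard "$first"
grep -q one file.txt   # file.txt is back to its first-commit content
```

With `git reset <commit>` (the default, `--mixed`) the later edit would instead survive as an unstaged change in the working directory.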

Chanaka FernandoLearning GIT from level zero

Git is one of the most heavily used version control systems in the software industry. In this blog post, I cover the fundamental concepts of the Git VCS for a beginner-level user.

create a new repository

create a new directory, open it and perform a
git init
to create a new git repository.

checkout a repository

create a working copy of a local repository by running the command
git clone /path/to/repository

when using a remote server, your command will be
git clone username@host:/path/to/repository


your local repository consists of three "trees" maintained by git. the first one is your Working Directory which holds the actual files. the second one is the Index which acts as a staging area and finally the HEAD which points to the last commit you've made.

add & commit

You can propose changes (add it to the Index) using
git add <filename>
git add *

This is the first step in the basic git workflow. To actually commit these changes use
git commit -m "Commit message"
Now the file is committed to the HEAD, but not in your remote repository yet.
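The basic workflow above can be sketched end to end as follows. This is a minimal example in a throwaway temp directory; the file name and commit message are made up for illustration:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "hello" > readme.txt
git add readme.txt            # stage the file (moves it into the Index)
git commit -qm "Add readme"   # record the staged snapshot in HEAD
count=$(git rev-list --count HEAD)   # number of commits reachable from HEAD
```

After the commit, `git status` reports a clean working directory because the Working Directory, Index and HEAD all agree.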

pushing changes

Your changes are now in the HEAD of your local working copy. To send those changes to your remote repository, execute
git push origin master
Change master to whatever branch you want to push your changes to.

If you have not cloned an existing repository and want to connect your repository to a remote server, you need to add it with
git remote add origin <server>
Now you are able to push your changes to the selected remote server
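To try the push workflow without a real server, a local bare repository can stand in for the remote. This is a hedged sketch in a temp directory; `server.git` and the file names are made up, and `git symbolic-ref` is used only so the initial branch is reliably named `master` across git versions:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# A local bare repository stands in for the remote server
git init -q --bare server.git
git init -q work; cd work
git config user.email "demo@example.com"
git config user.name "Demo"
git symbolic-ref HEAD refs/heads/master  # ensure the unborn branch is named master

echo v1 > app.txt
git add app.txt
git commit -qm "v1"
git remote add origin ../server.git      # connect the repo to the "server"
git push -q origin master                # publish local commits
```

After the push, the bare repository's `master` points at the same commit as the local one.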


Branches are used to develop features isolated from each other. The master branch is the "default" branch when you create a repository. Use other branches for development and merge them back to the master branch upon completion.

create a new branch named "feature_x" and switch to it using
git checkout -b feature_x

switch back to master
git checkout master

and delete the branch again
git branch -d feature_x

a branch is not available to others unless you push the branch to your remote repository
git push origin <branch>
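The full branch lifecycle described above — create, work, switch back, merge, delete — can be sketched like this. It runs in a throwaway temp directory and detects the default branch name rather than assuming it, since newer git versions may use `main` instead of `master`:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo base > main.txt
git add main.txt
git commit -qm "base"
base_branch=$(git rev-parse --abbrev-ref HEAD)  # master or main, depending on git version

git checkout -q -b feature_x      # create the feature branch and switch to it
echo feature > feature.txt
git add feature.txt
git commit -qm "add feature"

git checkout -q "$base_branch"    # switch back to the default branch
git merge -q feature_x            # fast-forward merge of the finished feature
git branch -d feature_x           # safe delete: the branch is fully merged
```

`git branch -d` refuses to delete a branch with unmerged commits, which is why it is safe to run after the merge.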

update & merge

to update your local repository to the newest commit, execute
git pull
in your working directory to fetch and merge remote changes.

to merge another branch into your active branch (e.g. master), use
git merge <branch>

in both cases git tries to auto-merge changes. Unfortunately, this is not always possible and results in conflicts. You are responsible for merging those conflicts manually by editing the files shown by git. After changing them, you need to mark them as merged with
git add <filename>

before merging changes, you can also preview them by using
git diff <source_branch> <target_branch>


it's recommended to create tags for software releases. this is a known concept, which also exists in SVN. You can create a new tag named 1.0.0 by executing
git tag 1.0.0 1b2e1d63ff

the 1b2e1d63ff stands for the first 10 characters of the commit id you want to reference with your tag. You can get the commit id by looking at the log.
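Tagging can be sketched as follows in a throwaway repository; here the short commit id is taken from `git rev-parse` rather than read off the log, and the tag name matches the text:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo v1 > app.txt
git add app.txt
git commit -qm "release candidate"

commit=$(git rev-parse --short=10 HEAD)  # first 10 characters of the commit id
git tag 1.0.0 "$commit"                  # lightweight tag pointing at that commit
git tag                                  # lists existing tags
```

`git rev-parse 1.0.0` now resolves to the same commit as `HEAD`, confirming the tag points where intended.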


in its simplest form, you can study repository history using git log
You can add a lot of parameters to make the log look like what you want. To see only the commits of a certain author:
git log --author=bob

To see a very compressed log where each commit is one line:
git log --pretty=oneline

Or maybe you want to see an ASCII art tree of all the branches, decorated with the names of tags and branches:
git log --graph --oneline --decorate --all

See only which files have changed:
git log --name-status

These are just a few of the possible parameters you can use. For more, see git log --help
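A quick sketch of the log variants above, run against a two-commit throwaway repository (the author name "bob" mirrors the example in the text):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email "bob@example.com"
git config user.name "bob"
echo a > f; git add f; git commit -qm "first"
echo b > f; git commit -qam "second"

# one line per commit, newest first
lines=$(git log --pretty=oneline | wc -l)
# only commits by a certain author
authored=$(git log --author=bob --oneline | wc -l)
```

Both counts come back as 2 here, since every commit in this toy repository was authored by bob.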

replace local changes

In case you did something wrong (which for sure never happens ;) you can replace local changes using the command
git checkout -- <filename>

this replaces the changes in your working tree with the last content in HEAD. Changes already added to the index, as well as new files, will be kept.

If you instead want to drop all your local changes and commits, fetch the latest history from the server and point your local master branch at it like this
git fetch origin
git reset --hard origin/master
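The first recovery path above — discarding an unwanted edit in a single file with `git checkout -- <filename>` — can be sketched as follows in a throwaway repository (file name and contents are made up for illustration):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo good > notes.txt
git add notes.txt
git commit -qm "good state"

echo oops > notes.txt        # an unwanted local edit
git checkout -- notes.txt    # restore the file from HEAD, discarding the edit
```

The heavier `fetch` + `reset --hard origin/master` variant throws away all local commits as well, so it should only be used when you really want to mirror the server's state.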

useful hints

built-in git GUI
gitk

use colorful git output
git config color.ui true

show log on just one line per commit
git config format.pretty oneline

use interactive adding
git add -i

This blog post was prepared with reference to this content.

John MathonIoT (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look


Internet of Things market to hit $7.1 trillion by 2020: IDC

Mar 14, 2013 – SAN JOSE, Calif. – Cisco Systems is gearing up for what it claims could be a $14 trillion opportunity with the Internet of Things.

The Internet of Things: 19 Trillion Dollar Market – Bloomberg

There is a lot being made of IoT.  Claims range from billions of devices to trillions of dollars in value.  A lot of this value is in the industrial part of IoT, not the consumer side. The industrial side of IoT is actually well along and pretty much a given in my opinion.  There are a lot of good economic reasons to instrument and be able to control almost everything on the commercial-industrial side of the world.  Industry is about fine-tuning costs, managing operations, reuse and longevity.  They don’t usually buy the cheap version of things.  They buy the $1,000 hammers, the $13 light bulbs, switches that are instrumented so they can turn anything on or off automatically.  Robotics, vehicles, tools of all types, everything in commercial buildings is potentially more useful when connected to the cloud, with demonstrable payback and value.  I have no doubt this market alone is trillions of dollars and billions of devices.  That’s a no-brainer to me.

IoT Technology

At WSO2 we have customers doing things such as instrumenting industrial tools as IoT devices to optimize construction workplaces, connected car projects, connected UAVs and other IoT solutions for business.  IoT devices need special software connectors, back-end software to manage their flaky connectedness, and technology to help with IoT security.  WSO2 has a powerful open source IoT story that you should know about if you are building IoT for consumers or industry.  I will write a blog entry about the whole IoT space describing how I see the layers of the technology, security aspects, how to manage communication and battery life, and how to design an IoT base layer that will enhance security and yet give powerful network effects.  Apple has patented security technology that would allow you to access your devices more intimately when you are at home than when you aren’t.  There are a lot of ideas floating around about how this technology should work and how it should be secured.  I will try to make sense of it.  The business side of IoT is a well-established and exciting business already.

I categorize IoT into the following categories:

Connected car:  There are many reasons to believe a connected car makes an incredible amount of sense.   Tesla has demonstrated the value of cars that can be operated by an app remotely, cars that can update themselves, and collecting big data from cars to improve their service and performance.  A connected car can be smart.   A non-Tesla company doing this is Automatic, which is gaining a lot of interest among ICE car owners.   Numerous manufacturers have announced connected cars, with some manufacturers saying their entire fleets will be connected soon.   There is reason to believe this movement will be extremely successful, and it will have legs because of improvements in car sensors and self-driving capabilities.   There is no doubt Tesla has led the way and proven the connected car makes tremendous sense.

Connected home: It is yet to be seen how far the consumer will take the connected home.   A fully connected home could have its power, water, gas, wifi and sensors for intrusion along windows and doors, locks for doors or safes, pool, HVAC, lighting, entertainment equipment, computers and all the appliances connected.   In addition, things like IoT hubs, shades, window transparency control, vacuum cleaners, lawnmowers, irrigation systems could present an amazing high-tech lifestyle.   How much convenience these things might actually provide is unknown.   The cost of such a retrofit to a home might be exorbitant but specific items will undoubtedly prove economical and power significant growth.   I have a friend in Geneva who took a smart lawn mower and made it even smarter and more responsive by turning it into a connected lawnmower.  I have no doubt many of our at home devices will become connected as a default.

Connected business:  Business will adopt IoT rapidly because most businesses already have automated systems for lighting, HVAC, security and other functions.  Many businesses have automated, connected factories.  Many of these will convert to the new IoT standards if they emerge, due to the cost improvements and improved functionality from interconnectedness.   An important addition for businesses is getting the overall health and status of all their IoT devices.   Also, businesses will probably greatly expand things like connected shipping and logistics.  Big data connected to IoT in the business can help them improve products, improve performance and select vendors.

Connected health:  Health was one of the original IoT hot spots.  Fitness devices were early sellers, but it is thought that many of these markets are saturated now.    I have hope that new devices for monitoring patients will reduce hospital stays, improve quality of care and dramatically lower the cost of things like drug trials.  However, I doubt that 2015 will see this come about.   Apple will introduce health features on its watch that will invigorate the market a bit, but I think this is more a 2016 story than a 2015 story.

Connected person:  This refers to individual devices.  Who knows what is in this category?  Toys?  Drones?  Sports devices, devices to make life more convenient or more sociable.   We always seem to buy up the social.   One reason I knew for sure that smartphones would be a massive thing was that I understood people are fundamentally social, and the ability to be social all the time would be soaked up by some.   Any devices that enhance the social will be scarfed up, in my opinion.   2015 may be the year of the social IoT device.


IoT I have bought:

My Smartphone (buy)

… I believe this is an IoT device, but it is a “centerpiece” and I’m not sure it counts.  Jeff Clavier, a noted VC in the IoT space, says the smartphone is the whole reason for IoT and that it wouldn’t exist without the ability of IoT devices to talk directly or indirectly with a smartphone.   I am not 100% in sync with Jeff, but I would agree it is a central piece of the puzzle.

Tesla (buy)

I love my IoT Tesla.  I could never go back to an ICE after owning the Tesla.   The mere thought of having to buy gasoline and deal with all the problems of ICE technology seems like a step back.  I love the fact my Tesla self-updates, and I look forward to my next set of improvements free of charge.  I love the fact it is always connected and I can find out its status and control the car remotely.   From my perspective there is no downside to the Tesla other than initial cost, but that is no different from other luxury sedans.  I love the fact that Tesla gets information from my car that helps them provide better service and build more reliable cars.   I am 100% sure that Elon has made the case for connected cars, and I expect over the next 5 years almost every car will be connected.  He has proven this makes complete sense.

This is a good example of a consumer IoT device done right that recommends the whole idea.

Wemo smart switches (buy)

I have found these to be useful in limited scenarios for things I want to schedule or control regularly, like outside lighting.   I had automated a previous house I owned long before the “IoT” idea, the smartphone or, embarrassingly, even the internet came about.  I was able to control everything from my computer.   I had written a cute program with which I could set up light configurations, heating programs and anything else I wanted.  Every light and plug in the house was automated.  The thermostat was programmable through a novel device which fooled the manual heater into going on or off.  Programming the heat was pretty handy and saved money.  I had the ability to type a command at my computer command line which would turn the house into “party mode” or “living” or “go to bed.”   By the side of my bed I had a button I could press to turn off every light in the house.  Similar buttons on tables around the house allowed me to easily turn all the lights off or on in any area of the house.  I could also program dimmers.    Those were pretty useful functions, but I am not sure they were worth the cost of thousands of dollars.   Being recently out of college, I found it amusing to turn the lights off when people were in the bathroom and wait for the scream.  Chuckles.  (I admit it was juvenile.)   I don’t think this was mass-market utility.  It was a demonstration that you could do things like that, but I have not had the desire to automate subsequent houses in this way.   I just don’t think it was worth all that much.

BodyMedia LINK armband (buy)

There are a whole bunch of fitness bands that basically work by looking at motion.  I have used the BodyMedia LINK band and I give it a high rating by comparison.  It measures not only motion but skin capacitance and heat flux, and I believe a derivative of heat flux.  That enables the device to produce much more accurate numbers for the energy consumed by an activity and for sleep quality.  I find the motion-only bands useless by comparison, and deceptive.   They are not worth the money.  Armbands and clip-type devices can be forgotten and are not worn as long.  They are more obtrusive, I think.  The result is that the ones I tried never got used consistently.  Only the BodyMedia did I wear all the time.   I also like the integration with multiple websites for fitness and food consumption.    BodyMedia was acquired by Jawbone and it interfaces via the UP-compatible applications, APIs and SDKs.

PG&E SmartMeter (buy)

The SmartMeter is included with PG&E service.  Using analytics provided by PG&E through a partner, Opower, I have lowered my average PG&E bill by 20-30%.  It is Zigbee compatible!

AT&T DVR (buy)

This is another gimme in that AT&T includes this, as do other cable companies, and it allows you to do a myriad of things remotely with your DVR, including watching programs remotely.

Davis Vantage Vue Wireless Solar Powered Weather Station (buy)

This remarkable device allows you to measure everything you can imagine about the weather, is expandable with numerous options and publishes to Weather Underground so you can track the weather at your home from anywhere.   What you need for a complete weather station IoT is the following:   Davis Vantage Vue, Davis Wireless Weather Envoy, USB data logger, a purchase of WeatherSnoop and the Weather Underground service.   Why have your own weather station?   For one, you have information on your local environment you can use to trigger interesting automation events.   For instance, if the wind, temperature, luminance, rain or pool temperature reaches certain limits you can trigger appropriate actions or warnings.   You can use this information to affect your shades, watering of plants, heating or cooling system parameters or other functional parts of your house.



Automated Secure Home (buy)

I chose a Z-wave based system over proprietary and other systems because Z-wave seems to have the most complete set of devices for security.   The Lynx 7000 from Honeywell supports Z-wave as well as its own proprietary devices.   The Lynx has the advantage that it can be linked to the internet with a 3rd-party security company at far lower prices than the traditional ADT approach.   TotalConnect2 is available from $10 to $30/month depending on the level of service that you want.   The Z-wave technology is incredibly impressive in how reliable it is.  It works over large distances and devices seem to be very reliable when connected.   I have chosen to focus on security automation first.   I was initially impressed with the Kwikset lock system, but after reading more reviews I decided on a company with more experience and reputation for deadbolts, Yale.  I also felt the garage is an important area for security and automation, as well as doors and windows.   I have not decided on the camera system yet but am leaning towards Dropcam after hearing some good feedback from someone who installed it recently.


Somfy Z-wave compatible motorized shade (buy)


Motorized blinds can be a way to reduce energy costs and improve security and privacy.   They are also pretty cool.   Unfortunately, they are expensive.   A typical motorized blind is $300 more than a non-motorized version.  This can increase the cost of your window coverings substantially.   I have a gazebo in the back of my house where I want a motorized shade to provide shade, wind and rain protection at times.   The area to be covered is large enough that a manual system would be hard to work.   So, I have decided to go with Somfy and Z-wave.   The Somfy Z-wave module can control 16 shades, so even if it is pricey for one shade, it is much more economical if you install multiples.   I expect to automate the shade function based on API calls to my weather station.

Carrier Infinity Wireless Thermostat and Online service (buy)


The previous owner of my house purchased a Carrier HVAC system.  When I tried to hook up a standard Z-wave compatible thermostat it became apparent something was awry.   The Carrier system is digital.  The thermostat talks to the furnace using a protocol, not simply wires to “turn on and turn off” the furnace or air.   Fortunately, Carrier has a wireless addition to their thermostat that allows remote operation.   It is only a $100 option, so I am pretty happy about that.


Myo (buy)

This device looks like it could make controlling everything pretty cool.   I am more worried that I will actually find it useful.   If I do, then I may be forced to consider how to make this device an acceptable fashion accessory and how not to annoy people around me.   It is not as intrusive as Google Glass, but I suspect there will be problems programming it and having it recognize motions, sort of like voice recognition where I repeat a motion over and over with bizarre things happening while I swear at it.   If it turns out to be really fun and easy to use, very functional and helpful, then I will be faced with the much harder issue of how to fit it into my life and what changes it will cause.

Lynx 7000 Smart Home Security and Z-wave control

Garage Automation


IoT I have placed orders for

Tile (buy)

I have lost a lot of things during my life.  I have great hopes that Tile will actually work and provide a way to find things that I typically misplace.  I plan to attach them to a variety of things, even my cat.  One thing I hope to track is sunglasses.  I love nice sunglasses but they always disappear.  At one point I had a $1000/year sunglass budget.  It was ridiculous.  I recently discovered my fiancée has a gift for finding sunglasses.  We will be walking along trails in Hawaii or Colorado and expensive sunglasses she finds on the trails will show up out of nowhere.  So, I now have a negative sunglass budget and sunglasses are returning to me reborn. However, even given this good fortune, I would like to see if there is a way I could actually not lose sunglasses in the first place, or if there is a cosmic devourer of sunglasses that will still make my sunglasses disappear.

Noke (buy)

I like this idea.  At the gym, the house, bike, lots of potential uses.

BlueSmart Luggage (buy)

Having lost luggage a few times in the last year, and because I haven’t upgraded my luggage in a long time, I think this is a smart buy.  I can’t wait to make my first call to Lufthansa or United Airlines and tell them I know where my luggage is and I want it delivered.

IoT I am not going to Buy

$60 connected light bulbs (don't buy)

There is no way I am paying $60 for a light bulb no matter how smart or controllable it is!!

Comcast Hub and connected devices (don't buy)

Comcast screwed me with poor service that is indescribably bad.  I will never buy anything from them ever again if I can help it.

Nest thermostat and related devices (don't buy)

I have no faith that Nest can learn when to heat my house.  I would like it to be programmable instead; possibly the kind of automation I am looking for would be possible with IFTTT.  For instance, I want the thermostat to see that I am heading home by talking to my car.  I don’t want to have to tell the thermostat to go up.  However, it is more complicated than that.   If the time of day is during peak energy hours, when I am charged 4 times as much for energy, then it should not heat the house, or should heat it to a lower temperature.  It should know if my cat is home and heat the home to the minimum temperature the cat can tolerate when others are not there.    If a door is open it should not heat the house wastefully, but should tell me a door is open and not waste energy.   I want it to know that if temperatures are expected to rise substantially during the day, it should not heat the house in the morning.  I would prefer not to waste energy during a heat wave by heating the house.   I would prefer it knew that my cheapest energy lasts until 7am, and that it got the house warm enough that the temperature will not fall during the day to the point that it needs to heat the house during peak hours when energy costs more.    When I can do that with a thermostat, I will buy a connected thermostat.

Wearable Shirt with buzzing shoulders (don't buy)

To help me navigate – seriously???   I am no technophobe, but it is way too geeky to wear electronic clothes.  My iWatch could buzz me when it’s time to take a turn.  One buzz for right, two buzzes for left.  I don’t want my clothing buzzing.

Microsoft Band (don't buy)

Complaints:  Complicated to manage.  Bulky, heavy, stiff.

Smart Pool Pump (don't buy)

I bought a variable-speed, intelligently controlled pool pumping system recently.  It is NOT connected to the internet.  That is fine!!!   I don’t need my pool pump connected to the internet.    It knows to reduce the speed of the pump to the minimum it needs during various functions and to increase it when needed.   I don’t need it connected to the internet or to fine-tune its operation remotely.   This system has reduced my cost of electricity for the pool by 50% and raised the average temperature of my pool.  I don’t know why I would need it connected to the internet, other than for the occasional ability on trips to change schedules.

IoT I might buy

Vessyl (good idea)

The Vessyl will measure the amount of different liquids you consume in a day.  It can distinguish liquids with alcohol in them and sugar, and can even measure the amounts of some things.  It charges wirelessly and will report to various apps.   It may seem stupid, but I am a bit of a numbers guy and love having hard numbers to back up what I think is working or not working.   I might get it.   At $110 it isn’t TOO expensive, although it is right on the edge of what I might pay for something like this.


Kolibree IoT Toothbrush (good idea)

Really!  At first glance I totally thought this product sounded like the stupidest IoT device ever.   However, reading about it I realized it actually could make sense.   I am a big believer in my electric toothbrush.  There is no question my gums and teeth have improved markedly since using the double-headed Oral-B.  It’s like having a washing machine in my mouth.   I love my electric toothbrush.  However, the Kolibree promises to find the few little spots I sometimes miss.  If it really improves things, it could easily be worth it, and cool.  I’ll wait to see more reviews.

Apple Hub and connected devices (good idea)

It’s hard to argue with success.  Especially if this device combines several previous Apple media products I have not already purchased, it may be enough to put me over to deciding on their hub over the Ninja or Almond or other hubs that are on the horizon.

Apple iWatch (good idea)

It’s hard to argue with success.

AirQuality Egg (good idea)

Probably not unless it is improved to support particulate counts as well as gases.  Particulates are the real danger from air pollution.

Ninja Sphere (good idea)

This is totally cool in that it has gestures, support for multiple protocols and other cool things; besides, it looks cool too.  I may do this instead of the Apple hub when I see what Apple’s device is capable of.  This device has the ability to use triangulation to detect where something is with great accuracy in its environment and to communicate with an extendable platform of spheramids.  Wow.  Cool.  It can apparently be programmed with complex IFTTT-like functionality to, say, heat the house when you are heading home.   It looks like the Apple hub has a serious competitor.

Dropcam (good idea)

I would like to be able to check in on the house sometimes and see what’s going on.  This seems marginally useful and the cost is reasonable.   This version comes with a lot of useful features that made previous “cams” seem like a pain.  The integration with the cloud is especially useful.   The Dropcam can’t do this out of the box, but a recent article in Gizmodo referred to the fact that software can be written to watch a plant or a glass of water and, without actually recording any sound, reproduce the sound in the room, including voices intelligible enough to understand what people are saying.

Toys (good idea)

Like quadcopters, environmental sensors for weather, etc. – sure, sounds fun.


I don’t know if you agree with my personal take on these consumer IoT devices or if my shopping list is useful, but it shows me that some of these things are definitely worthwhile and some may be fads with an initial “geek” appeal but no real lasting value.   I have a feeling the consumer side of IoT will have some successes and failures, but I hope that nothing fails so dramatically, or has such a serious security problem, that consumers lose interest in where this could go.

The next article, Integrating IoT Devices: The IoT Landscape, discusses how all these devices interplay and how you can integrate them.





John MathonWhy Open Source has changed from the cheapest software to the best software




Over the last 5 years we have seen a significant transformation in the way open source is viewed in enterprises.

1.  Open source has undergone massive growth over the last 5 years with the help of organizations such as Google, Twitter, Facebook, Netflix and many others.

The number of new projects continues to grow exponentially, and many senior people at mainstream companies are writing open source on the side.   Developers and technologists consider it important to stay abreast of, and contribute to, open source.

2. The virtuous circle of rapid evolution of new technologies has forced open source into a critical position in software development

The rapid pace of evolution in open source has overtaken the closed-source world.  The closed-source world is now FOLLOWING open source.  Back in the 2000 time frame, open source was creating projects that were replicas of closed-source software.  Now closed-source companies are copying open source components and ideas.

3.  The Enterprise Sales model continues to come under increasing pressure because of Open Source

The benefits of component technology, whether APIs or open source components, are so great that licensing component by component is unworkable.  You may use one open source project today and switch tomorrow.  You may use dozens of open source components.   The software licensing model is too cumbersome and limiting.

4. The closed-source Enterprise License Model of software is NOT aligned with the customer

Closed-source companies are interested in you re-upping your license to the next paid version.  They will put features into new versions to force you to commit, and they will make you wait for features you could get in open source today.

They want to write components because they make money by you using their closed-source software, NOT by making it easy for you to use open source projects.  They will want to lock you into their versions of these things, and if they don’t have them yet they will make you wait.  The whole closed-source model is opposed to the rapid-adoption and reuse paradigm of the new era.

Lastly, it is hard to understand why anyone would pay a license fee for a commodity product, and many enterprise closed-source products at this point are commodity products that are available as multiple different, and in many cases better, open source projects for no license fee.

Organizations are coming to realize that Open Source is NOT:

1) low-quality but cheap software

2) a small part of the problem

3) a risky way to get software


Instead they are realizing that Open Source is:

1) The only way to get certain kinds of key software

2) The best software for some major pieces they need

3) Something they can benefit from participating in

4) More aligned with their business than closed-source software

5) Critical to their technology evolution and transformation

6) Critical to having the agility they need


Major companies are adopting an “open source first” policy or a “must look at the open source alternative” policy. Many major companies have in the last year made open source the reference architecture in their companies.  Some companies are saying they must use an open source solution if one is available, or that they must consider open source in any software purchasing decision.

I have seen this more and more with bigger customers, talking to CIOs and even lower-level people who say they have to talk to us (WSO2) because they need to consider open source in their decision-making process.  That is a huge change from 5 years ago, when many corporations thought open source was “risky.”

Where will this lead?

The bigger question is whether “Open Source” is the way we should be doing software.  Will we ultimately suffer for doing open source, or for using open source?  I don’t think so.

I have thought about this quite a bit over the years.  For most enterprise software there is no need for software companies to use the closed-source model.  There is very little to be gained by a software company choosing to keep its software secret, in my opinion, and that advantage is diminishing every day.  Software, like hardware, is moving to a pay-per-use model.

The open source software issue is simply a variant of the entire problem of intellectual property in the digital age. Since all IP can be represented digitally, whether news and reporting, music, film, education or software, it all has the same problem.  When IP can be copied infinitely for free, how do you maintain income to support the creation of IP?  The market is evolving to find answers for each domain in different ways.  I believe that no matter how easy it is to copy and use IP, the users of content will be motivated to pay for it in some way to cause the creation of the content (IP) they want.  It is simply a matter of the market figuring out how to do this, but as long as people need, like and value music, film, software, etc., we will find a way to compensate the people who make it.  The fact that this may not be the way traditional existing companies do business is irrelevant.

Many people bemoaned the music industry’s collapse, but the fact that traditional music companies were vertically integrated to provide 100% of the value chain for a musician, and had control of the musician and the listener, was an artifact of history.  By disintermediating the components of the vertical industry into segments we still get our music.  In fact the amount of music has exploded, and I would say our ability to listen to and enjoy music has expanded, while musicians are not starving any more than they were.

You can take that example and see how all the IP domains in the digital world need to undergo a transition that will be painful for some.  Universities are one of my favorite topics in this space.  What will happen to universities?  Another blog.

Other Articles on Open Source:

Open Source is just getting started, you have to adopt or die

Inner Source, Open source for the enterprise

Software Development today, Cool APIs, Open Source, Scala, Ruby, Chef, Puppet, IntelliJ, Storm, Kafka

The technology “disruption” occurring in today’s business world is driven by open source and APIs and a new paradigm of enterprise collaboration

The Virtuous Circle is key to understanding how the world is changing – Mobile, Social, Cloud, Open Source, APIs, DevOps

Enterprise Application Platform 3.0 is a Social, Mobile, API Centric, Bigdata, Open Source, Cloud Native Multi-tenant Internet of Things IOT Platform


John MathonBreakout MegaTrends that will explode in 2015, Continuation Perceptive APIs, Cognitive Technologies, Deep Learning, Convolutional Neural Networks


This is a continuation of the series on the Disruptive Megatrends for 2015 Series

12. Perceptive APIs / Cognitive Technologies / Deep AI – smart technologies become more and more important as differentiators

I strongly believe rapid adoption of Deep Learning technologies and broader adoption of various AI sub-disciplines will show dramatic growth in 2015.  This is because of companies’ need to extract greater intelligence from BigData and the need to put intelligence into social and all applications in general.

Many people are not aware of the significant changes that have happened in Artificial Intelligence in the last few years.  Several areas of AI have made great strides, and combined with some hardware advances we are seeing a third wave of AI; this time may be the magic time that sticks and leads to mass adoption.  AI was my original study at MIT and I have a lot of thoughts on conventional AI approaches, which have generally failed and which I was skeptical of from the beginning.  However, in the last couple of years we have seen the emergence of true AI, or what is being called Deep Learning or Deep AI.

Deep Learning involves the use of Convolutional Neural Networks, which are a synthetic form of “brain” based on virtual neurons configured in layers.  Each layer of a convolutional neural network either amplifies or selects from the previous layer.  How many layers to use, which layers follow which others, how they are connected and how to configure them is a matter of experience.  The number of layers determines how deep the learning is, and if you feed the output back into the input of the neural net you have potentially unlimited depth of learning.  It’s like designing your own brain.  The more you need the neural net to learn, the deeper the layers and the more neurons you must use.  The amount of processing for all the neurons grows exponentially, thus the interest in GPUs.  There are patterns that work in different scenarios.  You initially feed a lot of data into the CNN and it learns.  After the training period you feed in new data, and the system reports or acts on the data in whatever way you have trained it to operate.
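This is nothing like DeepMind's system, of course, but a minimal sketch of one fully-connected "layer of virtual neurons" in plain Python makes the layering idea concrete (all names, sizes and weights here are illustrative stand-ins, not trained values):

```python
import math
import random

random.seed(0)

def layer(inputs, weights):
    # each neuron takes a weighted sum of the previous layer's outputs,
    # then "amplifies or selects" via a sigmoid squashing function
    return [1.0 / (1.0 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
            for ws in weights]

# a toy net: 3 inputs -> 2 hidden neurons -> 1 output neuron
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_output = [[random.uniform(-1, 1) for _ in range(2)]]

hidden = layer([0.5, -0.2, 0.1], w_hidden)
output = layer(hidden, w_output)
print("%d hidden, %d output" % (len(hidden), len(output)))
```

Stacking more calls to `layer` is what "deeper" means; training (adjusting the weights from data) is the part this sketch leaves out.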

Neural networks were first designed in the 1980s but didn’t seem to work that well, and neural nets made only uninteresting advances over the next 30 years.  The principal recent advance was the introduction of LSTM (long short-term memory), which led to several impressive achievements by CNNs that suddenly made them better than any other approach to recognition we have seen.  DeepMind (a British company acquired by Google) has, among other things, refined the use of LSTM in each neuron, which seems to have greatly improved the ability of neural networks to recognize and abstract information.  DeepMind demonstrated results interesting enough to get Google’s attention; they were bought and have been employed in some of Google’s visual and voice recognition projects.

One of the claims of the DeepMind folks was that they could feed the visual feed of Atari games into DeepMind and it could learn, from the video feed alone, to play Pong and other games well enough to defeat good human players in some cases.  Now that does sound impressive.  Other examples of Deep Learning applied to pattern recognition have shown significant improvements over previous approaches, leading to much higher recognition rates for text than ever before.

Elon Musk, who has seen more of DeepMind than any of us, is worried.  He claims that the technology at Google has the potential to become dangerous AI.  Dangerous AI is AI which may be able to gain some form of innate intelligence or perception that might be a threat to us.  Since then other luminaries such as Bill Gates and even Stephen Hawking have expressed reservations about Deep Learning AI.

Whether or not you believe the concerns stated by these people, the basic convolutional neural network technology is available in numerous open source projects and as APIs.  A lot of work and experience is needed to configure the layers and the parameters of each layer.  The amount of processing and memory required can be prodigious, and some dedicated GPUs are being developed for CNNs.  Several projects are underway to determine the limits of CNN learning capabilities.  It is exciting.

The technology is in use at numerous companies and in numerous applications already.

Given the way advances like this make their way into the industry, rapid adoption of Deep Learning technology through open source and APIs is likely to reach numerous applications and underlying technologies in 2015.

I want to make a distinction between the various disciplines of AI which have been around for a while and are being applied to applications, and Deep Learning.  I believe the Machine Learning examples below will become Deep Learning before long this year.

D-Wave and Quantum Computer Technology is advancing rapidly


I don’t believe everyone will be buying a D-Wave anytime soon.  The newest version coming out in March will have 1,152 qubits and represent a dramatic advance in quantum computer technology.  This is now at the point where people should become aware of it and consider what impact it will have.

I discuss quantum computers more deeply in an article on Artificial Intelligence here.

Google is currently using D-Wave for some recognition tasks.  The prognosis is positive, but they aren’t buying hundreds of these yet.  This is a technology that is rapidly evolving.  As of last year the D-Wave was powerful enough to compete with the best processors built today.  That’s quite an achievement: a company in the business of developing a completely new technology has built a processor that is as fast as, or maybe 5 times faster than, the current state-of-the-art processors, though at $10 million it’s a bit pricey compared to what is charged for state-of-the-art silicon processors.  The real point is that if they have accomplished this, and the number of qubits scales at a Moore’s-law pace every year, which is extremely likely, then the D-Wave will quickly (<10 years) be faster than all computers on the earth today, at least for solving some set of problems.

The set of problems the D-Wave is good at are problems that involve optimization, i.e. logistics, recognition problems and many other problems that don’t look like optimization problems but can be translated into them.  The D-Wave leverages the fuzzy quantum fog to calculate in roughly the square root of the time a normal processor would need for such problems.  As the number of qubits rises, the size of problem that can be solved grows exponentially and eventually supersedes all existing computing ability to solve these problems.
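To get a feel for what "the square root of the time" means, here is a small back-of-the-envelope sketch (the step counts are illustrative, not benchmarks of any real machine):

```python
import math

# a task that takes N steps on a classical processor would take on the
# order of sqrt(N) steps under the claimed square-root speedup
for n in [10**6, 10**9, 10**12]:
    print("%-14d -> %d" % (n, int(math.sqrt(n))))
```

The gap widens as problems grow: a trillion-step problem shrinks to about a million steps, which is why the advantage compounds as qubit counts rise.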

The big wins for D-Wave and quantum computers are in the areas of security, better recognition for voice, visual and other big data, and smarter query responses.


CNN / Deep Learning is being applied here:

 Facebook is using it in their face recognition software

Google apparently can transcribe house street addresses from pictures

Visual Search Engines for instance in Google+

Check Reading machines used by banks and treasury

IBM’s Watson


CNN and AI with BigData is a big theme.

Conventional AI is being used with BigData, but I expect over the next year we will see more uses of CNNs.  Here are some articles about companies doing this:

Machine Learning and BigData






CNN / Deep Learning resources


Atari Games Play

Deep Learning Resources


Madhuka UdanthaPython For Beginners

Python is an interpreted, dynamically typed language with very straightforward syntax. Python comes in two major versions, 2.x and 3.x, and Python 2 and 3 are quite different. This post is based on Python 2.x.

In Python 2, "print" is a keyword. In Python 3, "print" is a function and must be invoked with parentheses. There are no curly braces, no begin and end keywords, and no need for semicolons at the ends of lines; the only thing that organizes code into blocks, functions, or classes is indentation.

To mark a comment to the end of a line, use a pound sign, ‘#’.  Use a triple-quoted string (3 single or 3 double quotes) for comments spanning multiple lines.
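A quick sketch of both comment styles (written so it runs on Python 2 or 3):

```python
# a single-line comment runs from the pound sign to the end of the line
x = 1  # comments can also follow code on the same line

"""
A triple-quoted string spans multiple lines.
Used on its own like this, it serves as a
block comment or a docstring.
"""

print(x)
```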

Python Data Types

  • int
  • long
  • float
  • complex
  • bool

Python object types (builtin)

  • list  : Mutable sequence, in square brackets
  • tuple : Immutable sequence, in parentheses
  • dict  : Dictionary of key/value pairs, in curly braces
  • set   : Unordered collection of unique elements
  • str   : Immutable sequence of characters
  • unicode : Immutable sequence of Unicode characters
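A quick sketch of these types as literals (the variable names are just for illustration; written so it runs on Python 2 or 3):

```python
nums = [1, 2, 3]                 # list: mutable sequence
point = (4, 5)                   # tuple: immutable sequence
ages = {'ann': 30, 'bob': 25}    # dict: key/value pairs
unique = set([1, 2, 2, 3])       # set: duplicates collapse, unordered
name = 'python'                  # str: immutable character sequence

nums.append(4)                   # lists can grow in place
print("%d %d" % (len(nums), len(unique)))
```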


Sequence indexes

  • x[0]   : First element of a sequence
  • x[-1]  : Last element of a sequence
  • x[1:]  : Second element through the last element
  • x[:-1] : First element up to (but NOT including) the last element
  • x[:]   : All elements - returns a copy of the list
  • x[1:3] : Second element up to (but NOT including) the fourth
  • x[0::2] : Start at the first element, then every second element
seq = ['A','B','C','D','E']
print seq
print seq[0]
print seq[-1]
print seq[1:]
print seq[:-1]
print seq[:]
print seq[1:3]
print seq[0::2]

output of above code:

['A', 'B', 'C', 'D', 'E']
A
E
['B', 'C', 'D', 'E']
['A', 'B', 'C', 'D']
['A', 'B', 'C', 'D', 'E']
['B', 'C']
['A', 'C', 'E']


Function and Parameters

Functions are defined with the “def” keyword, with parentheses after the function name.

#defining a function
def my_function():
    """ to do """
    print "my function is called"

#calling a defined function
my_function()

Parameters can be passed in many ways

  • Default parameters:
    def foo(x=3, y=2):
        print x

  • By position:
    foo(1, 2)

  • By name:
    foo(x=1, y=2)


  • As a list:
    def foo(*args):
        print args
    foo(1, 2, 3)

  • As a dictionary:

def foo(a, b=2, c=3):
    print a, b, c

d = {'a':5, 'b':6, 'c':7}
foo(**d)

#need to pass one parameter
foo(1)
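Putting the calling styles together in one runnable sketch (the function name foo mirrors the snippets above; it returns what it received so each style is visible, and runs on Python 2 or 3):

```python
def foo(a, b=2, *args, **kwargs):
    # return everything received so each calling style can be inspected
    return a, b, args, kwargs

assert foo(1) == (1, 2, (), {})                   # positional, default b
assert foo(1, b=9) == (1, 9, (), {})              # by name
assert foo(1, 2, 3, 4) == (1, 2, (3, 4), {})      # extras captured in args
assert foo(**{'a': 5, 'b': 6}) == (5, 6, (), {})  # dictionary unpacking
print("all calling styles behave as expected")
```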



John MathonBreakout MegaTrends that will explode in 2015, part 2 CIO Priorities

This year has been a breakout year for a number of key technology trends, part 2 CIO Priorities

9.  CIO priorities will shift from cost abatement to “digitization”

2013 and 2014 were for most large organizations an experimental period with the cloud and many new technologies.  One of the biggest benefits many CXOs saw from this new technology was cost reduction.  They have been pushing CIOs and IT departments to cut costs, cut costs, cut costs, and to do more with less.  They heard about how many companies were drastically reducing costs by moving to the cloud.  While not every organization can see immediate gains from such a move, some companies could gain dramatic efficiencies with very little effort.  One company I know dropped the costs of its services by a factor of 50 by moving to the cloud.

With the price reductions we saw in cloud services mid-year, the very favorable cost/value equation for using the cloud has become apparent to everyone.  Cost abatement will still be a factor in decisions to move to new technology, but the pace of change has accelerated and the expectations of the market are changing.  The fact is that today you can get virtual hardware with what you need to make your application run fast and efficiently in the cloud at unbeatable prices.  If you need SSD disk drives, Google and Amazon will sell you servers with SSDs at cheap prices.  If you need high-speed networking or high-speed processors, or alternatively if you don’t need those things but simply want to run your service at the least cost possible for the hours you use it in a day, nothing can beat the cloud today.

The flexibility to order it when you need it, at incremental levels, with the features you need for that application, makes this an unstoppable force.  It’s hard to think of a reason why a corporation would buy hardware these days.  The move to virtualization will take giant strides this year.

The need to adopt “digitization” or “platform 3.0”, the new paradigm emerging, will dominate over cost concerns.  The productivity gains from adopting the open source, API-centric way of operating and PaaS/DevOps deployment are overwhelming, and I can’t imagine anyone building successful applications using other approaches in today’s competitive environment.  Organizations in 2015 will see more necessity to mainstream their digitization plans regardless of cost.  I believe IT spend in 2015 will stabilize and rise significantly, around 5%.

Gartner predicts 2.1% growth overall, with 5-7% growth in software spending.  This is astounding, as the rapid improvements in productivity and efficiency would reduce costs substantially if adoption remained constant.  However, there is tremendous demand for mobile, APIs, cloud services and new innovative approaches to IT.  The growth in IoT will help fuel a lot of corporate initiatives.

What I think is happening is that enterprises are focusing more on the top-line benefits of technology than on the bottom-line cost reduction advantages.  They are realizing that this new technology is not an option anymore; it is not something to research.  This is life or death now.  An article that supports this thesis is here.

Other Articles in this series on 2015 Megatrends:

Here is the list of all 2015 Technology Disruptive Trends

1. APIs - Component as a Service (CaaS) fuels major changes in applications and enterprise architecture

2. SaaS – the largest cloud segment continues to grow and change

3. PaaS – is poised to take off in 2015 with larger enterprises

4. IaaS – massive changes continue to rock and drive adoption

5. Docker (Application containerization) Componentization has become really important

6. Personal Cloud – increasing security solidifies this trend

7. Internet of Things (IoT) will face challenges

8. Open Source Adoption becomes broader, leading to the death of the traditional enterprise sales model

9.  CIO priorities will shift from cost abatement to “digitization” – new technology renewal

10.  Cloud Security and Privacy –  the scary 2014 will improve.  Unfortunately privacy is going in the opposite direction

11. Platform 3.0 – disruptive change in the way we develop software will become mainstream

12. Perceptive APIs / Cognitive Technologies / Deep Learning / Convolutional Neural Networks

13.  Big Data – 2015 may see a 10 fold increase of data in the cloud

14. Enterprise Store – Enterprises will move to virtual enterprises and recognize a transformation of IT focused on services and less on hardware

15. App consolidation – the trend in 2015 will be to fewer apps that will drive companies to buddy up on apps

Here is more description of these changes.   Some of these I will talk more about in future blogs.

Related Articles you may find interesting:

Cloud Adoption and Risk Report

Paul FremantleWSO2 \ {Paul}

I have an announcement to make: as of this month, I am stepping down as CTO of WSO2, in order to concentrate on my research into IoT secure middleware.

I have been at WSO2 since the start: our first glimmer of an idea came in August 2004 and the first solid ideas for the company came about in December 2004. We formally incorporated in 2005, so this has been more than 10 years.

The company has grown to more than 400 employees in three different continents and has evolved from a startup into a multi-million dollar business. I'm very proud of what has been achieved and I have to thank all of the team I've worked with. It is an amazing company.

In late 2013, I started to research into IoT middleware as part of a PhD programme I'm undertaking at the University of Portsmouth. You can read some of my published research here. I plan to double down on this research in order to make significantly more progress.

Let me divert a little. You often meet people who wish to ensure they are irreplaceable in their jobs, to ensure their job security. I've always taken the opposite view: to make sure I am replaceable, so that I can move on to the next chapter in my career. I've never thought I was irreplaceable as CTO of WSO2 - I've simply tried to make sure I was adding value to the company and our customers. In 2013, I took a sabbatical to start my PhD and I quickly realised that the team were more than ready to fill any gaps I left.

WSO2 is in a great position: the company has grown significantly year over year over year, creating a massive compound growth. The technology is proven at great customers like Fidelity, eBay, Trimble, Mercedes-Benz, Expedia and hundreds of others.

I am an observer on the board and I plan to continue in that role going forwards.

Once again I'd like to thank all the people who I've worked with at WSO2 who have made it such a productive and life-changing experience.

Shelan PereraPerfect Memory... A Skill Developed or a Gifted Supernatural Ability?

 In the previous post I discussed the importance of developing one of the most important gifts a human has inside the brain... memory.

In this world we have externalized most of the important stuff to the external digital world. In a way it has made things evolve faster. We are gathering much more knowledge than before. Wait... is it knowledge, or knowledge pointers?

The most suitable term is knowledge pointers, using which we can retrieve knowledge easily. Imagine a world without the internet or any other form of knowledge reference. How far could we survive? We are losing our capability of retaining information in our brains day by day, while making external storage cost effective at a similar pace.

I am striving to regain that capability, if I have lost it, and to see how far I can succeed. The following video will be an eye opener if you are one of the crazy people who would like to travel back in history and master one of the key aspects of a perfect human being.

This is a TED talk which you will be fascinated to watch.

"There are people who can quickly memorize lists of thousands of numbers, the order of all the cards in a deck (or ten!), and much more. Science writer Joshua Foer describes the technique — called the memory palace — and shows off its most remarkable feature: anyone can learn how to use it, including him"

Shelan PereraDo we need to master memory in Google's Era?

 If you need any information you just type it into Google. Simple, isn't it? So do we have to bother memorizing anything in this world? Is there a payback for what you memorize? Can we rely on the internet, just learn how to search, and still be successful?

 I am sure most people have forgotten the importance of memorizing things in this information age. But the people who master memorizing things excel more often than the people who do not. The simple reason is that you can apply only what you know.

As a Software Engineer and a Masters student in computer science, we often think that for any problem we can just Google and find answers. But how fast? You may say that within seconds you have thousands of results before you.

But... you need to type, go to the first answer, and if you are lucky you have a hit. If not, repeat until you find the correct answer. Imagine you have things in memory... it will take only a fraction of a second to retrieve them. It is like comparing a cache retrieval to a hard disk access.

People often think: there is so much information, how can we memorize all these things? Yes, it is true that in this digital world content is being produced super fast. But you have to be super smart to filter out what is important to you and memorize those things. We often underestimate the capability of retaining information in our brains and lazily forget techniques to master memory.

If you have not read it, the following book is a great source of encouragement as well as a learning tool, by Memory Champion Kevin Horsley.

The following video is one of his world-record-breaking videos [a successful world record attempt in pi matrix memorization].

Shelan PereraLearning Math and Science - Genius Mind

 Do you think learning math and science is something alien from another universe? Do you struggle to solve complex mathematics or science problems? If you keep on adding questions there will be a lot to add, because even I have some of those problems in my mind.

I excelled in college and was able to get into the top engineering university in Sri Lanka. At present I am reading for my Masters degree in distributed computing in Europe, which involves science and mathematics heavily.

My post Want to Learn Anything Faster was a spark to ignite my habit of learning. I was often comfortable with theoretical subjects which needed rote memorization or comprehension. I wanted to understand the reasons, so anyone who suffers the same can benefit from it.  Further, there is a common misunderstanding that maths is hard and complex, formed even before attempting to realize the beauty of it.

As human beings we are computing machines. We do a lot of maths in our heads to survive in this world: we measure distance by contrasting with other objects, cross a road safely without being hit by a vehicle, and the list is endless. So why can't we do it in the classroom?

I often find that the way we approach math and science is wrong. Here are several observations I have made about why we think maths is complex.

1) If we relate a story to a problem and try to solve it, it will be easier than denoting it with x, y or any other mathematical notation. When we have a story we have a solid mapping of the problem in our mind, so understanding the problem and working towards a solution becomes more realistic. Good mathematicians and scientists make problems vivid in their minds. They live in them as real worlds. Symbols are just notation to express what they understood in common language.

2) In classrooms or tests we try to rote memorize concepts. Maths and science are super easy when you understand the fundamentals. You need to feel, or live with, the concepts to solve problems. Learning an equation will not give you the ability to solve problems unless the problem is just a substitution.
If you observe an equation clearly, it is a complete story: a story of how the elements on the left side come to a common agreement on the right hand side.

3) Connect what you learn with what you know. Our brain is structured as a web. If you do not want to lose newly learnt concepts you have to link and bind them with what you know. Isolated memory islands disappear soon. Try to relate new material to a concept you have really understood and construct on top of that. If you find anything hard at first, try it repeatedly in different ways, but always take breaks. Our brain needs time to digest and assimilate.

I am still researching and trying to apply these concepts in practice to see how they work in the real world. But it has been producing interesting results so far. I find myself learning more complex math and science problems than before, since I changed my approach.

I am currently reading this book, which is an excellent resource for anyone who wants to develop a "mind for numbers". Happy learning.

Heshan SuriyaarachchiResolving EACCES error when using Angular with Yeoman

In one of my earlier posts, I discussed installing NodeJS, NPM and Yeoman. Although the setup was good to start off my initial work, it gave me the following error when trying to install the Angular generator. The following post describes how to resolve this error.


| | .------------------------------------------.
|--(o)--| | Update available: 1.4.5 (current: 1.3.3) |
`---------´ | Run npm install -g yo to update. |
( _´U`_ ) '------------------------------------------'
| ~ |
´ ` |° ´ Y `

? 'Allo Heshan! What would you like to do? Install a generator
? Search NPM for generators: angular
? Here's what I found. Install one? angular
npm ERR! Darwin 14.0.0
npm ERR! argv "node" "/usr/local/bin/npm" "install" "-g" "generator-angular"
npm ERR! node v0.10.33
npm ERR! npm v2.1.11
npm ERR! path /Users/heshans/.node/lib/node_modules/generator-angular/
npm ERR! code EACCES
npm ERR! errno 3

npm ERR! Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/'
npm ERR! { [Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/']
npm ERR! errno: 3,
npm ERR! code: 'EACCES',
npm ERR! path: '/Users/heshans/.node/lib/node_modules/generator-angular/' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.
npm ERR! error rolling back Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/'
npm ERR! error rolling back { [Error: EACCES, unlink '/Users/heshans/.node/lib/node_modules/generator-angular/']
npm ERR! error rolling back errno: 3,
npm ERR! error rolling back code: 'EACCES',
npm ERR! error rolling back path: '/Users/heshans/.node/lib/node_modules/generator-angular/' }

npm ERR! Please include the following file with any support request:
npm ERR! /Users/heshans/Dev/projects/myYoApp/npm-debug.log

This was due to a permission error in my setup. I tried giving my user permission to access those files, but it still didn't resolve the issue. Then I decided to remove my existing NodeJS and NPM installations and used the following script by isaacs, with slight modifications. It worked like a charm. After that I was able to successfully install the AngularJS generator for Yeoman.

PS : Also make sure that you have updated your PATH variable in your ~/.bash_profile file.
export PATH=$HOME/local/bin:~/.node/bin:$PATH
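If you would rather not reinstall Node, a common alternative (not the route taken above; the prefix path here is just an example) is to configure npm's global prefix to a user-writable directory, so global installs no longer need root:

```shell
# use a directory the current user owns for global npm packages
mkdir -p ~/.node
npm config set prefix ~/.node

# make globally installed binaries visible (add to ~/.bash_profile to persist)
export PATH=~/.node/bin:$PATH

# the generator should then install without EACCES
npm install -g generator-angular
```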

Waruna Lakshitha JayaweeraUse Account Lock feature to block user token generation


In this blog I will describe how we can block a user from generating tokens. I am using API Manager 1.6 with the IS 4.6.0 features installed.


Step 1

Step 2

Step 3


Create a new user named testuser and grant the subscriber permission. Then go to Users and select the required user (testuser).
Go to User Profiles > Lock Account (set Account Locked to FALSE) > Update.

Step 4

After this restart the servers.

Test the scenario

Step 1

Log in as the test user and subscribe to any API.
Then try to generate a token like this:

curl -k -d "grant_type=password&username=waruna&password=test123&scope=PRODUCTION" -H "Authorization: Basic b3ZKMEtvVGd4YlJ5c2dBSDVQdGZnOUpJSmtJYTpBVjVZVFJlQkNUaGREUWp2NU0wbUw2VHFkdjhh, Content-Type: application/x-www-form-urlencoded" https://localhost:9443/oauth2/token 

You will get tokens.

Step 2

Log in as admin.
Then go to Users, select the required user (testuser), and go to User Profiles > Lock Account (set Account Locked to TRUE) > Update.

Step 3

As in Step 1, try to generate a token.
You will get the following message:
{"error":"invalid_grant","error_description":"Provided Authorization Grant is invalid."}
Now you're done.


Ashansa PereraIssue in put files to hadoop - 0 datanodes running

I set up Hadoop as described in my previous posts. While I was working on some interesting Map-Reduce projects, I faced an issue when putting some of my local files into HDFS. I had already started DFS, but the exception says:
14/10/25 17:46:20 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException: File /user/ashansa/input._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(...)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(...)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(...)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(...)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(...)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$...
I faced this issue several times, so I think it will be helpful to explain how to overcome it. (I believe this is a common issue when running HDFS, since I hit it several times myself and found that many others have faced it too.)

What worked for me?

1. Stop the Hadoop cluster.

2. Clean the Hadoop tmp directory
    You can find the path to your Hadoop tmp directory in hdfs-site.xml: check for the hadoop.tmp.dir property.

3. Format the NameNode
             bin/hadoop namenode -format

4. Start the Hadoop cluster.
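The four steps above can be sketched as one script. This is only a sketch assuming a Hadoop 2.x layout: the HADOOP_HOME and HADOOP_TMP_DIR defaults below are assumptions, so point HADOOP_TMP_DIR at your own hadoop.tmp.dir value before running, and note that step 2 deletes all HDFS data.

```shell
#!/usr/bin/env bash
# Recovery sketch for the "0 datanodes running" error.
# HADOOP_HOME and HADOOP_TMP_DIR below are illustrative defaults.
HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"
HADOOP_TMP_DIR="${HADOOP_TMP_DIR:-/tmp/hadoop-$USER}"

# 1. Stop HDFS (the scripts live in sbin/ on Hadoop 2.x layouts)
if [ -x "$HADOOP_HOME/sbin/stop-dfs.sh" ]; then "$HADOOP_HOME/sbin/stop-dfs.sh"; fi

# 2. Clean the Hadoop tmp directory (WARNING: this wipes HDFS data;
#    the :? guard aborts if the variable is accidentally empty)
rm -rf "${HADOOP_TMP_DIR:?}"/*

# 3. Re-format the NameNode
if [ -x "$HADOOP_HOME/bin/hadoop" ]; then "$HADOOP_HOME/bin/hadoop" namenode -format; fi

# 4. Start HDFS again
if [ -x "$HADOOP_HOME/sbin/start-dfs.sh" ]; then "$HADOOP_HOME/sbin/start-dfs.sh"; fi
```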

With these few simple steps, I was able to overcome this issue.
Hope this will help you too.

Dmitry Sotnikov: Download Links for PowerGUI and QAD cmdlets

With Dell's acquisition of Quest and all the IT reorganization that followed, it is actually not that easy to find these two popular free PowerShell tools any longer. So here are the links that work today (January 30, 2015):


The download is freely available from Dell’s PowerGUI community.

The community itself also got moved to a new home.

Dell Software is still maintaining the product – as I am writing this the latest version is 3.8 released in April 2014.

Quest / QAD cmdlets

This one is a little more tricky to find:

If this link changes for some reason, all of Dell's freeware and trial links can be found in this catalog:

Happy PowerShelling!

Sivajothy Vanjikumaran: WSO2's 6th Office opened in Jaffna

WSO2 has opened an office in Jaffna with 10 employees, including 9 software engineers.

Sivajothy Vanjikumaran: GIT 101 @ WSO2


Git is yet another source code management system, like SVN, Mercurial, and so on!

Why GIT?

Why Git instead of SVN in WSO2?
I do not know why! It might be an off-site meeting decision taken in Trinco after landing from an adventurous flight trip ;)

  • Awesome support for the automation story
  • Easy to manage
  • No need to worry about backups and other infrastructure issues
  • User friendly
  • Your code reputation is publicly visible

GIT in WSO2.

WSO2 has two different repositories.
  • Main repository
    • Its main purpose is to maintain an unbreakable code base, actively built for the continuous delivery story and incorporated with integrated automation.
  • Development repository
    • The development repository is where teams play around with their active development.
    • wso2-dev is a fork of the wso2 repo!


Note: this statement is now invalid, as WSO2 changed its process in Dec 2014.


  1. Developers should not fork the wso2 repo.
    1. Technically they can, but the pull request will not be accepted.
    2. If something happens and the build breaks, they have to take full responsibility, fix the issue, and answer the mail thread that follows the build break :D
  2. Developers should fork the respective wso2-dev repo.
    1. They can work on development in their forked repo, and when they feel the build won't break, they send a pull request to wso2-dev.
    2. The pull request should be reviewed and merged by the respective repo owners.
    3. On the merge, the Integration TG builder machine is triggered. If the build passes, no problem. If it fails, they will get a nice e-mail from Jenkins ;) so do not spam or filter it :D. The respective person should quickly take action to solve it.
  3. When the wso2-dev repository is in a stable condition, the team lead / release manager / responsible person has to send a pull request from wso2-dev to wso2.
    1. WSO2 has a pre-builder machine to verify whether the pull request is valid.
      1. If the build passes and the person who sent the pull request is whitelisted, the pull request gets merged into the main repository.
      2. If the build fails, the pull request is terminated and a mail is sent to the person who sent it. The respective team then has to work out and fix the issue.
      3. If the build passes but the sender is not whitelisted, the pre-builder marks it as needing review by an admin. But ideally the admin will close that ticket and ask the person to send the pull request to wso2-dev ;)
      4. If everything merges peacefully into the main repo, the main builder machine (aka the continuous delivery machine) builds it. If that fails, the TEAM needs to get into action and fix it.
  4. You do not need to build anything in upstream; ideally everything you need should be fetched from Nexus.
  5. Always sync with the forked repository.

GIT Basics

  1. Fork the respective code base to your Git account
  2. git clone <your fork's URL>
  3. git add <changed files>
  4. git commit -m "blah blah blah"
  5. git commit -m "Find my code if you can" -a
  6. git push

Git Beyond the Basics

  • Always sync with upstream before pushing code to your own repository
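The sync itself is just a fetch plus a merge from an upstream remote. Below is a self-contained sketch that uses throwaway local repositories in place of real GitHub URLs: "upstream" stands in for the wso2-dev repo you forked, "fork" for your own copy.

```shell
set -e
WORK=$(mktemp -d)

# Stand-in for the wso2-dev repository you forked
git init -q "$WORK/upstream"
git -C "$WORK/upstream" config user.email "dev@example.com"
git -C "$WORK/upstream" config user.name "Dev"
git -C "$WORK/upstream" commit -q --allow-empty -m "upstream work"

# Your fork (a clone of it, really)
git clone -q "$WORK/upstream" "$WORK/fork"
cd "$WORK/fork"

# Add the upstream remote once, then fetch + merge before pushing your work
git remote add upstream "$WORK/upstream"   # in real life: the wso2-dev GitHub URL
git fetch -q upstream
BRANCH=$(git rev-parse --abbrev-ref HEAD)
git merge -q "upstream/$BRANCH"
```

After the merge, `git push` sends your synced branch back to your own fork.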

WSO2 GIT with ESB team

ESB team owns

Nobody other than the ESB team has merge rights :P for these code repositories. So whenever somebody tries to screw with our repo, please take a careful look before merging!
The first principle is that no one is supposed to build anything other than the currently working project.

Good to read

[Architecture] Validate & Merge solution for platform projects

Maven Rules in WSO2

Please find the POM restructuring guidelines below, in addition to the things we discussed during today's meeting.

  1. The top-level POM file is the 'parent POM' for your project; there is no real requirement to have a separate Maven module to host the parent POM file.
  2. Eliminate the POM files in the 'component', 'service-stub' and 'features' directories, as there is no gain from them; instead, directly call the real Maven modules from the parent POM file (REF - [1]).
  3. You must have a <dependencyManagement> section in the parent POM, where you define all your project dependencies along with their versions.
  4. You CAN'T have <dependencyManagement> sections in any POM file other than the parent POM.
  5. In each submodule, make sure you declare Maven dependencies WITHOUT versions.
  6. When you introduce a new Maven dependency, define its version under the <dependencyManagement> section of the parent POM file.
  7. Make sure you have defined the following repositories and plugin repositories in the parent POM file. These will be used to pull SNAPSHOT versions of other Carbon projects that are used as dependencies of your project.
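As a sketch of rules 3 to 6 (the commons-io artifact and its version below are purely illustrative), the parent POM pins every version once in a <dependencyManagement> section, while submodules declare dependencies without versions:

```xml
<!-- Parent POM: versions are pinned once, here only -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
      <version>2.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

```xml
<!-- Any submodule: no <version> element; it is inherited from the parent -->
<dependencies>
  <dependency>
    <groupId>commons-io</groupId>
    <artifactId>commons-io</artifactId>
  </dependency>
</dependencies>
```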

Prabath Siriwardena: Mastering Apache Maven 3

Maven is the number one build tool used by developers, and it has been around for more than a decade. Maven stands out among other build tools due to its extremely extensible architecture, built on top of the concept of convention over configuration. That has in fact made Maven the de facto tool for managing and building Java projects. It is widely used by many open source Java projects under the Apache Software Foundation, SourceForge, Google Code, and many more. Mastering Apache Maven 3 provides a step-by-step guide showing readers how to use Apache Maven in an optimal way to address enterprise build requirements.

By following the book, readers will gain a thorough understanding of the following key areas:
  • Apply Maven best practices in designing a build system to improve developer productivity.
  • Customize the build process to suit your enterprise needs by developing custom Maven plugins, lifecycles, and archetypes.
  • Troubleshoot build issues with greater confidence.
  • Implement and deploy a Maven repository manager to manage the build process more smoothly.
  • Design the build to avoid maintenance nightmares, with proper dependency management.
  • Optimize Maven configuration settings.
  • Build your own distribution archive using Maven assemblies.
  • Build custom Maven lifecycles and lifecycle extensions.

Chapter 1, Apache Maven Quick Start, focuses on giving a quick start on Apache Maven. If you are an advanced Maven user, you can simply jump into the next chapter. Even for an advanced user it is highly recommended that you at least brush through this chapter, as it will be helpful to make sure we are on the same page as we proceed.

Chapter 2, Demystifying Project Object Model (POM), focuses on core concepts and best practices related to POM, in building a large-scale multi-module Maven project.

Chapter 3, Maven Configurations, discusses how to customize Maven configuration at three different levels – the global level, the user level, and the project level for the optimal use.

Chapter 4, Build Lifecycles, discusses Maven build lifecycle in detail. A Maven build lifecycle consists of a set of well-defined phases. Each phase groups a set of goals defined by Maven plugins – and the lifecycle defines the order of execution.

Chapter 5, Maven Plugins, explains the usage of key Maven plugins and how to build custom plugins. All the useful functionalities in the build process are developed as Maven plugins. One could also easily call Maven, a plugin execution framework.

Chapter 6, Maven Assemblies, explains how to build custom assemblies with Maven assembly plugin. The Maven assembly plugin produces a custom archive, which adheres to a user-defined layout. This custom archive is also known as the Maven assembly. In other words, it’s a distribution unit, which is built according to a custom layout.

Chapter 7, Maven Archetypes, explains how to use existing archetypes and to build custom Maven archetypes. Maven archetypes provide a way of reducing repetitive work in building Maven projects. There are thousands of archetypes out there available publicly to assist you building different types of projects.

Chapter 8, Maven Repository Management, discusses the pros and cons in using a Maven repository manager. This chapter further explains how to use Nexus as a repository manager and configure it as a hosted, proxied and group repository.

Chapter 9, Best Practices, looks at and highlights some of the best practices to be followed in a large-scale development project with Maven. It is always recommended to follow best practices since it will drastically improve developer productivity and reduce any maintenance nightmare.

Dedunu Dhananjaya: Do you want Unlimited history in Mac OS X Terminal?

Back in 2013, I wrote a post about making terminal history unlimited. Recently I moved from Linux to Mac OS, and I wanted unlimited history there too. By default, Mac OS X keeps only 500 entries in history, and new entries replace old ones.

Open a terminal window and type the command below.

open ~/.bash_profile

or

vim ~/.bash_profile

Most probably you will get an empty file. Add the lines below to it; if the file is not empty, add them at the end.

export HISTSIZE=
export HISTFILESIZE=

Next, save the file. Close the terminal and open a new terminal window. From now on, your whole history will be stored in the ~/.bash_history file.
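The manual edit above can also be scripted. Here is a small sketch that appends the settings to ~/.bash_profile only when they are not already present (in Bash, an empty HISTSIZE/HISTFILESIZE means no limit):

```shell
# Append unlimited-history settings to ~/.bash_profile, idempotently:
# the grep guard skips the append if HISTSIZE is already configured.
if ! grep -q 'HISTSIZE=' ~/.bash_profile 2>/dev/null; then
  printf '%s\n' 'export HISTSIZE=' 'export HISTFILESIZE=' >> ~/.bash_profile
fi
```

Running it a second time leaves the file unchanged, so it is safe to re-run.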