WSO2 Venus

Sanjiva Weerawarana: WSO2 at 10

Today, August 4th 2015, is WSO2's unofficial official birthday: we complete 10 years of existence.

I guess it's been a while.

It's unofficial because not a whole lot happened on the 4th of August 2005 itself. Starting a global setup like WSO2 involved many steps: registering a company in Sri Lanka (in early July 2005, IIRC), registering a company in the US, getting money to the US company, "selling" the LK company to the US company, etc. We officially "launched" the company at OSCON 2005 in Portland, Oregon the first week of August.

However, I gave a talk there on the 4th on Open Source and Developing Countries. The talk abstract refers to the opportunity that open source gives to "fundamentally change the dynamics of the global software industry".

That's what we've been up to for 10 years - taking on the enterprise middleware part of the software industry with open source and Sri Lanka as the major competitive weapons. We can't claim victory yet but we're making progress. Getting into nearly 20 Gartner Magic Quadrants and Forrester Waves as a Visionary is not a bad track record from zero.

This is of course only possible because of the people we have and the way we do things (our culture), which allows people to do what they do best and do it well. To me, as the person at the helm, it's been an incredible ride to work with such awesome people and to have such an awesome work environment that births and nurtures cool stuff just as effectively as it chews up and spits out stupid stuff and BS.

We're now somewhat sizable ... just about crossing 500 full-time employees globally on August 1 this year. I am still (and will be for the next 10 years) the last interview for every employee .. no matter what level and no matter what country they're in (yeah, that means Skype sometimes). I don't check for ability to do the job - it's all about what the person's about, what they want to achieve in their life, and how well I think they will fit into our culture, approach and value system. I have vetoed many hires when my gut feeling was that the person was not the right fit for us.

Here's a graph of how the team has grown:

(The X axis is the number of months since August 1, 2005.)

A key to our ability to continue to challenge the world by taking on audacious tasks is the "so what if we fail" mindset that's integral to our culture. Another part is being young and stupid in terms of not knowing how hard some things apparently are. When I started WSO2 I was 38 .. not that young, but definitely stupid in my understanding of how hard it is/was to take on the IBM/Oracle-owned enterprise middleware market, and ultimately stupid about the technical complexities of the problems we needed to solve. BUT what has worked for us so far is the "so what if we fail" part being applied by young people who are regularly thrown in the deep end to get stuff done. I am still utterly stupid about how hard certain things are supposed to be - and I love that. Most of us in WSO2 are very stupid that way - but we're not afraid to try, nor are we afraid to fail. Shit happens, life goes on (oh yeah, and then we all die anyway at some point .. so why not give it a shot).

I have little or no respect for the "way things are done" or the "way things work" - we've challenged and re-envisioned almost every part of our business relative to the way a normal software company works, and I'm very proud of my team for having done that over and over and over again. And I'm of course grateful that they still talk to me despite all the grief I give them daily on every little-to-big aspect of every side of the company - from colors to cleanliness to marketing to architecture to pricing to paying taxes.

The amazing thing is after 10 years we've managed to become slightly younger as a company over time! How is that possible? This is the average age of employees of WSO2 over time (same X axis as above):

We apparently had some old farts (like me) hired at the beginning and then again a few more around 3 years in .. but since then the average age has hovered between 30 and 32! Not bad for a 10-year-old company where very few people leave ...

To me the actual physical age is not the issue - after all, I'm now 48 years old but I don't hesitate to think and act like a 25-year-old, either mentally or physically (come and play basketball with me and let's see who hurts more at the end). It's all about how you think and act and how you accept "experience". I view experience and assumptions as things to question, and to assume false until proven true in our context. That frustrates a lot of senior people, but that is exactly what has allowed WSO2 to keep growing, keep challenging the world of middleware and get to its front. I view any assumption (e.g. "this is the way others do it") as a likely point of failure until proven otherwise. My challenge is to keep WSO2 "young" - in thinking and in age as much as possible (without age discrimination of course). I love this Jeff Bezos quote:
If your customer base is aging with you, then eventually you are going to become obsolete or irrelevant. You need to be constantly figuring out who are your new customers and what are you doing to stay forever young.
Technology will never stop - it may be SOA, ESB, REST, CEP, Mashups, Cloud, APIs, IoT, Microservices, Docker, Clojure, NodeJS, whatever ... and more will come. We need to keep on top of every new thing that comes along, be the ones to create a bunch of these, and still deliver real stuff that works.

If we as a team continue to challenge every assumption, continue to treat each other with respect but not fear, continue to fight for doing the right long term thing instead of hype-chasing then we will never lose.

WSO2 is nowhere near the goal I set for us - take over the world (of middleware!). But 10 years later, we're on a solid foundation to build WSO2 into a much stronger position in the next 10 years. Thank you to all the wonderful people who are still in WSO2, and to those who have moved on but did their part, for helping us get here. It's been my honor and privilege to lead this incredible bunch of crazies.

Jayanga Dissanayake: Binding a Process to a CPU in Ubuntu

In this post I'm going to show you how to bind a process to a particular CPU in Ubuntu. Usually the OS manages processes and schedules threads; there is no guarantee which CPU your process runs on, as the OS schedules it based on resource availability.

But there is a way to specify the CPU and bind your process to it:

taskset -cp <CPU ID | CPU IDs> <Process ID>

The following is a sample that demonstrates how you can do that.

1. Sample code which consumes 100% CPU (for demo purposes)

class Test {
    public static void main(String args[]) {
        int i = 0;
        while (true) {
            i++; // busy loop keeps one CPU at 100%
        }
    }
}

2. Compile and run the above simple program

javac Test.java
java Test

3. Use 'htop' to view the CPU usage

In the above screenshot you can see that my sample process is running on CPU 2. But it's not guaranteed that it will always remain on CPU 2; the OS might assign it to another CPU at some point.

4. Run the following command; it will pin process 5982 to the 5th CPU (CPU numbering starts at zero, hence index 4 refers to the 5th CPU).

taskset -cp 4 5982

In the above screenshot you can see that 100% CPU usage is now shown on CPU 5. You can confirm the binding at any time with 'taskset -cp 5982', which prints the process's current affinity list.
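To make the whole flow reproducible end to end, here is a small shell sketch of the same steps. It uses a plain 'sleep' process as a stand-in for the Java sample, and pins to CPU 0 (the 1st CPU) since every machine has at least that one:

```shell
# Start a long-running process and capture its PID.
sleep 30 &
PID=$!

# Pin it to CPU 0 (index 0 = 1st CPU).
taskset -cp 0 "$PID"

# Read the affinity back; taskset prints a line like:
#   pid 1234's current affinity list: 0
AFFINITY=$(taskset -cp "$PID" | awk -F': ' '{print $2}')
echo "affinity=$AFFINITY"

kill "$PID"
```

The same two commands work for any PID, including the Java sample above.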

Danushka Fernando: WSO2 App Factory - Developing a New Application Type

This post describes how to use extension points to develop an application type. In [1] you can find a step-by-step guide to adding a new application type and its runtime. In this post I will explain the following.
  • Adding new Application Event Handlers
  • Maven Archetype creation for App Factory Applications
  • Extending the Application Type Processor Interface
  • Extending Deployer Interface

Adding new Application Event Handlers

You can find the ApplicationEventsHandler interface in [3]. Let's say you need to do something special for your application type (like creating a database when an application is created, to support your apptype): you can extend the methods of this interface to do that. For example, we create a datasource using a handler when we create a Data Service type application; you can find that code in [4].

After you develop your handler, you can add it to an OSGi bundle and then deploy it to the following location on the AppFactory server.


Then you can set the order of your handler by giving it the necessary priority in the AppFactory configuration, which can be found in the following location.


In this configuration file you can find a tag named EventHandlers. Insert a new tag into it with your handler's class name and set the desired priority value, as is done for the other handlers.

Figure 1 : Event Handlers Priority Settings
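As a rough sketch only — the EventHandlers tag name comes from the text above, but the child element and attribute names below are assumptions, so check the configuration shipped with your AppFactory version for the exact shape — the entry might look like:

```xml
<!-- Hypothetical fragment: the handler element and priority attribute
     names are illustrative assumptions, not the exact schema. -->
<EventHandlers>
    <!-- existing handlers ... -->
    <Handler class="org.example.appfactory.MyCustomHandler" priority="50"/>
</EventHandlers>
```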

Maven Archetype creation for App Factory Applications

In WSO2 App Factory, when we create an application we generate sample code for it. This is done using maven archetypes; we have created a maven archetype for every apptype. Maven archetypes can be created for non-maven applications as well: for example, even the .NET apptype has a maven archetype although it is built using MSBuild.

[5] describes how to create a maven archetype, and you can also take some hints from my previous post [7] on maven archetype creation. When we create an archetype for WSO2 AppFactory we have to add a few more resources to it, for the initial deployment. This initial deployment phase was introduced to reduce the time spent creating an application: we include built artifacts in the maven archetype, bundle them using the assembly plugin [6], and deploy the created artifact.

Figure 2 shows the tree structure of the web application archetype; I use it here since it is the simplest and most common structure. The archetype-resources folder contains the content of the archetype. Apart from the src directory, you can see a directory named built_artifact: this is where we keep the built artifacts used to create the initial artifact with the maven assembly plugin. The assembly.xml and bin.xml placed at the same level are used to run the maven assembly plugin and create the initial artifact.

Figure 2 : Tree structure of Web app archetype

You can see the assembly.xml and bin.xml here. When we trigger the following command in this folder, it creates a .war file with the content of the built_artifact folder and copies it to a folder named ${artifactId}_deploy_artifact, located one level outside the application code. In the initial deployment phase this artifact is picked up and deployed.

mvn clean install -f assembly.xml

Figure 3 : bin.xml

Figure 4 : assembly.xml
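Figures 3 and 4 are screenshots, so as a stand-in, here is a minimal maven-assembly-plugin descriptor of the kind a bin.xml like this would contain. The id and paths are illustrative assumptions, not AppFactory's exact files:

```xml
<!-- Illustrative assembly descriptor: packages the contents of
     built_artifact/ into a single .war. Id and paths are assumptions. -->
<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2">
  <id>deploy_artifact</id>
  <formats>
    <format>war</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <directory>built_artifact</directory>
      <outputDirectory>/</outputDirectory>
    </fileSet>
  </fileSets>
</assembly>
```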

Figure 5 shows the structure of the maven archetype for the ESB apptype, the most complex apptype created in WSO2 AppFactory so far. Here you can see there is no assembly.xml and bin.xml at the top level; instead there is a pom.xml inside the built_artifact folder, with a bin.xml at the same level. So we run the following command inside the built_artifact folder; how this extension is done is described later in this article.

mvn clean install

Figure 5 : Tree structure of ESB maven archetype

Extending the Application Type Processor Interface

The Application Type Processor interface can be found at [8]. AbstractApplicationTypeProcessor [9] is an abstract implementation of the interface and is the recommended one to extend, though there could be cases where the interface needs to be implemented from scratch. This processor is triggered in several places to do something specific to an application type.

When we create an application, as the repository gets initialized [2], the generateApplicationSkeleton method is invoked. [9] calls a method named initialDeployArtifactGeneration which is not declared in the interface, but anyone extending [9] can override it; this is the mechanism behind the ESB apptype initial artifact generation mentioned in the earlier section.

Extending Deployer Interface

The Deployer interface [10] is used to deploy the artifacts to the PaaS artifact repository; the underlying PaaS is responsible for copying them to the actual running server instance. If you need to do something special for your apptype you can use this interface, but again the recommended class to extend is AbstractDeployer [11]. The most common use is changing which artifacts get deployed: override the getArtifact method in [11] so it selects the artifacts to deploy according to your algorithm.

The next thing is initial artifact deployment. As I explained earlier, WSO2 AppFactory deploys an initial artifact generated directly from the archetype-generated code. This is usually done by the InitialArtifactDeployer class [12], but if you want to change its behavior you can extend this class, add it to WSO2 AppFactory, and mention it in the apptype definition (apptype.xml) under the parameter name "InitialDeployerClassName".
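As a sketch only — the parameter name "InitialDeployerClassName" comes from the text, but the element shape and the class name below are assumptions; check the apptype.xml files shipped with your AppFactory version:

```xml
<!-- Hypothetical apptype.xml fragment; only the parameter name is
     from the text above, the rest is illustrative. -->
<Property name="InitialDeployerClassName">org.example.appfactory.MyInitialDeployer</Property>
```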


[1] Adding a new App Type and its Runtime
[2] WSO2 AppFactory - Life Cycle of an Application
[3] org.wso2.carbon.appfactory.core.ApplicationEventsHandler
[5] Guide: Creating Archetypes
[6] Maven Assembly Plugin
[7] How to include artifactId in folder name or content of a file
[8] org.wso2.carbon.appfactory.core.apptype.ApplicationTypeProcessor
[10] org.wso2.carbon.appfactory.core.Deployer
[11] org.wso2.carbon.appfactory.deployers.AbstractDeployer
[12] org.wso2.carbon.appfactory.deployers.InitialArtifactDeployer

Danushka Fernando: WSO2 App Factory - Life Cycle of an Application

This post explains the general life-cycle of an application in WSO2 AppFactory. WSO2 AppFactory is a place where multiple project teams can collaboratively create, run and manage enterprise applications [1]. With WSO2 AppFactory, users can create a complex application and push it to production within a couple of hours.

Figure 1 - App Cloud Create Application Page

If you go to WSO2 AppFactory and create an application, it will perform several operations that generate the resources needed for your application, such as:
  • Create Repository
  • Generate Sample Code for the Application
  • Create Build Job
  • Create Issue Tracker Project
  • Deploy the initial artifact into the PaaS artifact repository
  • etc.
When you hit create application, it creates an instance of the application RXT installed in WSO2 AppFactory and then calls a BPEL hosted in the BPS. Anyone can edit this BPEL to add their own workflow to it. Currently it just triggers the AF service to fire the on-creation event of the Application Event Handlers. There is a set of Application Event Handlers registered in AF. You can develop a new Application Event Handler class, add it to an OSGi bundle, copy it to the following location, and start AF; the new handler will then get invoked.


You can configure the order of the handlers by setting their priority in the AppFactory configuration placed in the following location.


I will explain this in detail in my next post [2].

These handlers are responsible for the operations mentioned above. Figure 2 shows the flow of this Application Creation process.

Figure 2 : Create Application Flow

When the application is created, a trunk version is created in the repository. The next step is to create a new version from trunk in order to promote it through the next stages.

Figure 3 : Repositories and Builds page

When you click create branch, it triggers the same kind of flow and creates the new version and new build jobs.

Figure 4 : Flow of creating a new version in an Application

Once you create a new version, you can promote it through the stages (Development -> Testing -> Production). When you promote, the artifacts are deployed to the next stage.


Danushka Fernando: WSO2 AppFactory - Using the ESB Apptype

The next version of WSO2 AppFactory (2.2.0) introduces a new apptype, the ESB apptype, which allows users to develop WSO2 ESB C-Apps (CAR files) with WSO2 AppFactory. In this article I will give you guidelines to follow when developing a C-App using WSO2 AppFactory.

The ESB apptype is the first multi-module apptype supported by WSO2 AppFactory. The sample project contains 4 modules, as below.

  1. Resources module
  2. Resources CAR module
  3. Synapse Config (Proxy Service) module
  4. Main Car module


There are several rules that developers should follow when developing an ESB type application:

  • Developers can add any number of modules to the project, but there should always be exactly two CAR modules: the resources CAR and the main CAR, which contains all the synapse configs.
  • All synapse config names should contain the version number (e.g. foosequence-1.0.0), and each synapse config file name should contain the version number in the same manner.
  • All module names should start with the application ID. This rule prevents artifact conflicts between applications: if two applications contained artifacts with the same name, they could conflict.
  • The main CAR module's artifact ID should match the application ID.

It is recommended to use WSO2 Developer Studio to develop a WSO2 AppFactory ESB type application; Developer Studio validates the project structure and helps the developer follow the above rules. If someone edits the project by some other method, AppFactory checks whether the required structure is present before accepting the commit, and accepts or rejects it accordingly.

LifeCycle Management and Resources Management

When the ESB application is promoted, it will still use the development endpoints and resources mentioned in the Resources CAR module, so the users in the next stage (QA / DevOps) will need to update this resources CAR. There will be a UI to upload a resources CAR for ESB applications. QAs and DevOps will need to check out the code from the source code location mentioned in the Application Home page, edit the registry resources and endpoints to match their environment, build it, and upload their Resources CAR; it will then get deployed and the main CAR will switch to the new endpoints.


Lali Devamanthri: GPU Research Center at the University of Peradeniya

The University of Peradeniya is recognized as a GPU Research Center by NVIDIA Corporation for the GPU-based High-Performance Computing Research carried out at the Department of Computer Engineering.

The major focus of the research center is the investigation of the High-Performance Computing (HPC) aspect of GPU in various domains, including bio-computing, computer security, machine learning and data-mining, and physics.

The GPU Research Center at the University of Peradeniya is part of the Embedded Systems, and Computer Architecture Lab (ESCAL) at the Department of Computer Engineering, supervised by Dr. Roshan Ragel.

Senaka Fernando: The Polarizer Pattern

A polarizer is a filter used in optics to control beams of light by blocking some light waves while allowing others to pass through. Polarizers are found in some sunglasses, LCDs and photographic equipment.

When it comes to managing an API, there is often the need to control which bits of an API get exposed and which do not. This kind of control is generally done at a Gateway that supports API Management, rather than at the back-end API, which may provide many more functions that never get exposed to a consumer.

A good example is a SOAP or REST service designed to support a web portal, which you also want to expose as an API so that 3rd parties can build their own applications on it. Though your API may provide many functions in support of your web portal, you may not want all of them available to the 3rd-party application developer, for any of many reasons. As in this example, you'll find this pattern most useful when you have an existing capability that is currently used for one purpose, which also has to be exposed for another purpose but with restricted functionality.

While the Polarizer might be treated as a special kind of Adapter, the key to differentiating the two in terms of API Management is how the patterns are implemented. An adapter may expose a new interface to an existing implementation, with new capabilities or with logic combining two or more existing capabilities; a polarizer simply restricts the set of methods exposed by an existing implementation without altering any of its functionality. The outcome of the Polarizer may also resemble a Remote Facade. But unlike the remote facade, the purpose of a polarizer is not to expose a convenient, performance-oriented coarse-grained interface to a fine-grained implementation with an extensive number of methods; it is purely to restrict methods from being accessible in a given context.
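A minimal code sketch may make the distinction concrete; all the type and method names below are invented for illustration (they are not from any WSO2 codebase). A polarizer is just a narrower interface over an existing implementation, delegating without adding logic:

```java
// Existing back-end service: more capability than we want to expose.
// All names here are illustrative assumptions.
interface AccountService {
    long getBalance(String accountId);
    void credit(String accountId, long amount);
    void closeAccount(String accountId);
}

// The "polarized" view: a strict subset of the back-end's methods.
interface ReadOnlyAccountApi {
    long getBalance(String accountId);
}

// The polarizer only delegates. Unlike an adapter it adds no new logic,
// and unlike a remote facade it does not coarse-grain; it only restricts.
class AccountPolarizer implements ReadOnlyAccountApi {
    private final AccountService backend;

    AccountPolarizer(AccountService backend) {
        this.backend = backend;
    }

    @Override
    public long getBalance(String accountId) {
        return backend.getBalance(accountId);
    }
}
```

An adapter, by contrast, would change or combine the delegated calls; the polarizer's only job is leaving methods out.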

The polarizer also fits alongside patterns such as Model-View-ViewModel (MVVM) and Model-View-Presenter (MVP). Unlike these patterns, which are designed to build integration layers connecting front-ends with back-ends, the focus of the polarizer is to control what is exposed from a back-end without any consideration in favour of a front-end. We may, however, find situations where an implementation of the MVVM or MVP pattern also performs the tasks of a polarizer.

The graphic below explains how The Polarizer pattern can be implemented by a typical API Management platform. In such an implementation, the Gateway component will simply polarize all incoming requests through some sort of filter, which may or may not be based on some configurable policy.

The WSO2 API Manager provides the capability to configure what API resources are being exposed to the outside world and thereby polarize the requests to the actual implementation. Polarization may not necessarily be a one-time activity for an API. You may decide on a later date to change what methods are exposed. The WSO2 API Manager allows you to do such reconfiguration via the Publisher portal.

sanjeewa malalgoda: How to generate a custom error message with a custom HTTP status code for throttled-out messages in WSO2 API Manager

In this post we will discuss how to override the HTTP status code of the throttled-out message. The APIThrottleHandler.handleThrottleOut method executes the _throttle_out_handler.xml sequence if it exists. If you need to send a custom message with a custom HTTP status code, you can execute an additional sequence that generates the new error message; there you can override the message body, the HTTP status code, etc.

Create convert.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="convert">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:code>$1</am:code>
                <am:type>Status report</am:type>
                <am:message>Runtime Error</am:message>
                <am:description>$2</am:description>
            </am:fault>
        </format>
        <args>
            <arg evaluator="xml" expression="$ctx:ERROR_CODE"/>
            <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
        </args>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="555" scope="axis2"/>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="ContentType" scope="axis2" action="remove"/>
    <property name="Authorization" scope="transport" action="remove"/>
    <property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
    <property name="Host" scope="transport" action="remove"/>
    <property name="Accept" scope="transport" action="remove"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <property name="messageType" value="application/json" scope="axis2"/>
</sequence>

Then copy it to the wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences directory, or use the source view to add it to the synapse configuration.
If it deploys properly you will see the following message in the system logs; check the logs to see whether there is any issue in the deployment process.

[2015-04-13 09:17:38,885]  INFO - SequenceDeployer Sequence named 'convert' has been deployed from file : /home/sanjeewa/work/support/wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences/convert.xml

Now that the sequence is deployed, we can use it in the _throttle_out_handler_ sequence. Add it as follows:

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="_throttle_out_handler_">
    <sequence key="_build_"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <sequence key="convert"/>
</sequence>

Once the _throttle_out_handler_ sequence is deployed properly you will see the following message in the carbon logs. Check the carbon console for any deployment errors.

[2015-04-13 09:22:40,106]  INFO - SequenceDeployer Sequence: _throttle_out_handler_ has been updated from the file: /home/sanjeewa/work/support/wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences/_throttle_out_handler_.xml

Then invoke the API repeatedly until requests get throttled out. You will see the following response.

curl -v -H "Authorization: Bearer 7f855a7d70aed820a78367c362385c86"

* About to connect() to port 8280 (#0)
*   Trying
* Adding handle: conn: 0x17a2db0
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x17a2db0) send_pipe: 1, recv_pipe: 0
* Connected to ( port 8280 (#0)
> GET /testam/sanjeewa/1.0.0 HTTP/1.1
> User-Agent: curl/7.32.0
> Host:
> Accept: */*
> Authorization: Bearer 7f855a7d70aed820a78367c362385c86
< HTTP/1.1 555
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< Date: Mon, 13 Apr 2015 05:30:12 GMT
* Server WSO2-PassThrough-HTTP is not blacklisted
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
* Connection #0 to host left intact
{"fault":{"code":"900800","type":"Status report","message":"Runtime Error","description":"Message throttled out"}}

John Mathon: IIoT and IoT combined in an Airport Use Case


One of the most interesting domains for IoT applications is airports. WSO2 has been working with some airports worldwide, and these are truly among the most challenging environments for IoT and IIoT going forward. Imagine the airport of the future: what does the IIoT and IoT vision for it look like?

Airports have an enormous array of devices that are part of the infrastructure: sensors, valves, security systems of all types, plus a mixture of facility energy management, vehicles for transportation inside and around the airport, robot devices for baggage handling and other purposes, and authorization systems for personnel access. In addition, it is conceived that airplanes themselves would be IoT devices: like a car, you could imagine an API for accessing information about the plane for use by airport, traffic control and security personnel. In fact, many of these things already exist.

IoT and IIoT are about much more than the devices themselves; they are about the network effect of having many devices connected. Uber demonstrated that a dramatic improvement was possible in an old business with a simple application of some IoT ideas. It is the combination of the network effect and the orchestration of connected devices that can make our world better.

In addition to the highly secure and industrial operational uses of IIoT, we would expect the airport of the future to be a hospitable place for consumers to access services and information that make the travel experience vastly easier, smoother and less error-prone. Consumers would like to know about plane arrivals and baggage processing, find services in the airport or request services, and have the service be able to contact the customer individually. We would expect that, over time, airports and airlines will learn to interact with our devices to make this future better.


Airports are contemplating a highly IoT-driven future in which both customers and the airport itself would have many novel devices to facilitate operations, efficiency and convenience for the customer. It is exciting to think of all the novel ways this might evolve, and airports want to be ready for innovation and diversity in the future.

Here are 3 use cases I am going to explore in depth that I think you will find interesting both personally and from a technology perspective. They may provide fodder for your own ideas on how to make travel easier.

1) Judy is planning on taking a flight today and she is having a bit of bad luck.   Her friend Linda is picking her up.

2) A plane is arriving and needs some servicing before it can take off again.

3) Jonathon is having a medical emergency while at the airport.

I have purposely chosen use cases with more of a consumer focus, although there are lots of interesting use cases around the operational aspects of the airport that are just as complicated and involve many more devices. Possibly this could be a second blog on this topic.

Use Case 1:  Judy is taking a flight – things don’t go well

Judy has an IoT watch with a silver band that has an airport app for her local airport. Linda, her friend in Paris who is picking her up, has a similar watch with an antique brass band. The watch could just as easily be a phone or some other always-connected device. Judy is taking an international flight today from LAX to Paris. Hours before the flight, Judy is informed that weather is impacting the flights into and out of Paris. She is given the option to reschedule her flight.


Backstory: These events can be major hassles and costs for people, airports and airlines. The passenger should be given specific information, as it becomes available, about possible cancellations and delays of their flight, and recommendations to reschedule or delay the flight if possible. In many cases airlines will waive fees to reschedule in such scenarios, considering the costs of handling inconvenienced passengers. Reducing the inconvenience and costs of these kinds of regular incidents would be a very desirable goal. This implies that the APIs of the airport and the airlines are in sync, so that each is aware of situations and conditions that affect the other, always up to date and showing the same information. Ideally this applies to all the vendors and other organizations involved in the airport, whether infrastructure vendors or outward customer-servicing vendors.

The airport should have APIs, accessible by applications on watches, phones, cars or other IoT devices, that query airport status, known flight status, delays, and potential delays for individual flights, as well as general airport conditions. Data can be fed to these services automatically from the IoT devices at the airport as well as from airlines and other providers. If a security situation requires a person to leave or avoid the airport, this could also be provided, so that applications on Judy's or Linda's phone or watch can warn them of such situations. Today Judy receives a warning that the airport she is flying into has a potential weather situation that may affect her flight. She is on alert to see if her flight will be cancelled. The flight is not cancelled before she leaves for the airport.

As the flight time comes closer Judy is automatically asked to check in if she hasn’t done so already.  

If the flight destination requires special visas or has other restrictions, she will be informed automatically before she heads to the airport. The airport has also told her that congestion at the airport is high and, knowing her current location, it is able to tell her the latest she can leave from where she is right now to get to the airport in sufficient time.


Judy decides that she will take the chance and go to the airport. She leaves when the watch suggests, giving herself time to allow for the congestion at the airport. She also approves her friend Linda to get updates on her status and location.

As she approaches the airport she is automatically informed of which parking garages are full and which are available. If she has an electric car, the system may be able to tell her which charging ports are open. The system will tell her which terminal her flight is leaving from and the status of the plane arriving at LAX.

All of this requires personalization, so Judy needs to tell the app her preferences and the specific things she would like. She needs to tell it whether she is traveling by car or mass transit, among many other things. She may walk slowly and want the app to give her extra time to get places. She may want an audible alarm in addition to a buzz on her wrist or phone.


When she parks her car and enters the airport, her watch synchronizes with the airport, which establishes a secure channel to her device and the application on it.

Knowing the airline for her first segment, Judy's IoT device informs her as she enters the terminal that she can use desks 112-120 to check in for her flight.


Her IoT device tells the airport her location, and this is relayed to the friends and others she has designated to share this information with. When she gets to the check-in desk, the system automatically brings up her flight record for the airline agent because it has been tracking her. The agent informs her that the problem in the destination city has improved, but that her flight may still be delayed.


Judy is issued electronic tickets to her airport watch app, and her luggage is linked to her watch identity through NFC devices attached at the airport. Judy is authorized to use the upper-class waiting area, since she may have to spend extra time at the airport.


She can use her phone's or watch's fingerprint reader to go directly through the security system without having her identity checked manually. She puts her bag through security, and the airport knows she has passed through security successfully.


So far Judy's path has been highly streamlined and made more efficient for her, the airline and the airport in a difficult situation. There is also enhanced security in knowing where Judy is, backed by biometric data.

While Judy is in the airport, her watch app tells her the status of the airplane, and knowing her location and boarding priority it can estimate when she needs to head to the gate to board.

Judy has ordered several presents from duty-free to take to her friend. Her watch app is informed that she has duty-free items to pick up. When they are ready, she is told to go to duty-free to pick up her products and is directed how to get there.


Similarly, she is led to her gate via whatever buses, trains or paths she needs to take. When Judy gets to the gate, her ticket and boarding priority are in her watch passbook, so she can simply walk through the gate at the right time and onto the walkway to her plane without needing a gate agent to handle her ticket. If the plane requires an ID check before boarding, she can simply use her biometric reader to bypass it.

During her traversal of the airport, Judy's IoT device has determined her location and sent that information to the airport app, but there are times when a more precise location is required; NFC or similar short-range identification is needed at various interactive devices, gates and ticket counters.


Judy is on the plane, so it seems the worst part of her journey may be over. She sees her luggage made it as well. Unfortunately, this is not Judy's lucky day: before departure a physical problem with the plane is detected. Judy is informed that the airline is preparing an alternate plane to take her. She must de-board the plane and proceed to a different gate, and she is informed that her seat will be slightly different because of the new plane's different configuration. Judy gets off the plane and proceeds to the new gate.


She takes her seat on the new plane just as before, without having to carry a ticket; since her ticket is electronic, it knows her plane and gate have been reassigned. Judy left something in her seat on the first plane. Don't tell me you haven't done this.

Since the item carries her identity, the system knows where she is and the new plane and seat she is now in. The attendant simply dispatches an automated courier robot, which picks up the item and delivers it to Judy on her new plane.


Judy is very happy she didn’t end up forgetting the duty free items.  Whew!  

The airport knows that Judy and her luggage should be on the same plane. However, her luggage cannot fit onto the smaller replacement plane. The baggage system automatically transfers her luggage to the next flight to that destination.

Judy is informed before takeoff that her luggage will be delayed and rescheduled onto the next flight to her destination, in 6 hours. She is automatically registered as having delayed luggage and is issued a kit of consumables to help her get by without it.


Linda, of course, is aware of all this and texts Judy that she is sorry her baggage will be late and that she can lend her some things. Judy's plane finally takes off. During the flight, the special request she had made a week earlier on the airline web site, for a Domaine Carneros Le Rêve champagne, is fulfilled, which makes the flight a lot more enjoyable. She puts on her virtual-reality goggles and watches an engrossing version of an old movie she had never seen, Avatar by James Cameron.

To Judy's relief, the pilot is able to make up some time en route, but not enough to arrive on time.

En route, Judy can see through the plane's Internet-connected IoT systems that her luggage did indeed make it onto the next flight. She is able to tell the baggage system where to send the luggage at her destination. This time, as she exits the plane, Judy doesn't forget the duty-free items. :)

In the meantime, Linda is aware the plane is delayed. Her watch app tells her that, given congestion on the streets and at the airport, she must leave 47 minutes prior to landing to pick up her friend. This includes the time Judy would wait for baggage (in this case 0) and her traversal through passport control and customs at this time of day. The calculation is quite precise, and Linda arrives at the airport knowing that Judy is out of customs and walking to door 7 within a few minutes of the expected time.
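The departure-time calculation above can be sketched as simple arithmetic. This is a minimal illustration, not any real airport API; the 75-minute drive and 28-minute customs figures are made-up numbers chosen so the result matches the 47 minutes in the story.

```java
// Sketch (illustrative numbers only): how "leave 47 minutes before landing"
// can fall out of drive time minus the passenger's post-landing processing time.
class PickupPlanner {

    /** Minutes before landing that the driver must leave to meet the passenger. */
    public static int leaveBeforeLandingMinutes(int driveMinutes,
                                                int baggageWaitMinutes,
                                                int customsMinutes) {
        // The passenger exits (baggage + customs) minutes after landing,
        // so the driver leaves driveMinutes before that exit moment.
        return driveMinutes - (baggageWaitMinutes + customsMinutes);
    }
}
```

With a hypothetical 75-minute drive, no baggage wait and 28 minutes for customs, Linda would leave 47 minutes before landing.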


Linda is aware of Judy's progress at all times. The airport police, who push along drivers who don't keep moving, ask Linda to leave. Linda shows them her watch app, which shows her friend is en route, has her baggage and is within a minute of exiting the building. The police allow her to wait for her friend.


Judy pops out into the miserable weather and the happy embrace of her friend, each knowing exactly where the other is.

They drive out of the airport, and the airport disengages, sending Judy a welcome message to her new city along with any information it deems useful to her, including traffic problems or other security information.


Use Case II: Flight 909 is a long-haul flight from LAX to Paris. Flight 909 is sitting at the gate waiting for the automated baggage and other systems to finish loading the plane.

The plane itself is a giant set of IoT devices: entertainment systems, baggage, GPS, food systems and of course the operation and controls of the plane itself. All of these systems report telemetry to the airline, airport and various service providers. When the plane landed, airport personnel and vendors already knew what consumables had been used during the last flight and what special needs there were for the new flight, so that it could be stocked appropriately. The plane knew, for instance, that one of the passengers wanted a special champagne to be available for this flight.

In the final checkup of the plane, inspectors at LAX determine that it has a defect which will not allow it to fly to the destination immediately. This is a rare occurrence, because the plane normally detects these situations in flight and reports them prior to arrival at the destination. The decision is made to make a different plane available.

A special complex orchestration is initiated which automates much of the work needed to transfer to the new airplane, including the robots needed to move the luggage and help move the airplanes. Appropriate paperwork has to be filed with the tower and flight agencies, flight plans updated, passenger seats reassigned and the new plane requisitioned. As soon as this decision is final, all passengers on the flight are informed of the need to disembark and go to the new gate.

Some passengers who will miss connections due to the delay are automatically informed of alternative routes and times, and are reassigned once they agree to the new routing. Passengers who will have to spend the night are automatically told which hotels to go to and given directions; the hotel is booked and the room paid for. If they have special assistance needs, an autonomous vehicle is dispatched to pick them up at the gate and transport them where they need to go.

During the transfer to the new, smaller plane, some luggage does not make it. The passengers with displaced luggage are informed, and at the appropriate time the luggage is routed onward by the automated baggage-system robots.

The new plane is boarded, and as it goes through its final manifest and route approval, the whole process is automated to make the rerouting seamless and fast.

As the new plane leaves the gate, all the IoT devices on the tarmac are automatically reassigned to other duties. Each device on the tarmac and in other operational areas of the airport is instrumented so that it can be automatically fetched and dispatched where needed. Each device has a health status so that any abnormal behavior or needed service is handled as far in advance as possible.

Autonomous operation is a much easier job at an airport with very limited paths and destinations.

Every door and access point has biometric readers to validate that personnel are allowed in that area of the airport. If any device is commandeered and moved outside of its geofence, it is de-activated, all data is immediately wiped from the device as needed, and the system is informed.

In flight

The plane takes off, and telemetry from the plane is transmitted constantly to various authorized consumers: airlines and airplane manufacturers, service vendors who need to understand the operation of the plane itself, detailed location information for air traffic control, and less detailed information for other parties. Consumables are recorded as they are used. Most importantly, any safety or mechanical trouble is reported instantly along with logs. Cockpits are instrumented to provide visual confirmation of their condition and can be remotely operated in case the pilots become unable to fly the plane.

En route, the airplane detects an asymmetrical weighting on one wing. A camera captures the image of an ape-like man tearing at the wing. Such things occur regularly, so the pilot authorizes electrifying the wing, causing the gremlin to jump off. Soon after that, the airplane detects a minor condition in one engine that needs to be looked at. Three parts are involved, and this information is relayed to suppliers so that the parts can be made available, if needed, the instant the plane arrives at the gate in Paris.

Use Case III: Jonathon is having a medical emergency while at the airport.

In Paris, in Terminal 1, a passenger is feeling ill. He has COPD, a progressive disease, and has been issued a medical monitor armband which detects conditions under which Jonathon may be in danger. As Jonathon is walking to his next flight, he feels light-headed.

His monitor detects that Jonathon's blood spectrum indicates insufficient oxygen is being absorbed, that his heart rate is accelerating and his perspiration increasing, all with little physical movement, and it concludes something is wrong. The monitor broadcasts to nearby IoT devices that there is a person in distress. These signals are relayed to the airport's emergency medical technicians, who are dispatched to Jonathon's exact location within the terminal within 2 minutes. An autonomous medical vehicle is also routed in case it is needed.
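The monitor's decision rule combines several weak signals before raising an alarm. A minimal sketch of that rule follows; the class name, thresholds and parameters are all hypothetical, chosen only to illustrate the "multiple signals plus near-stillness" logic described above, not taken from any real medical device.

```java
// Sketch (hypothetical thresholds): raise a distress event only when low
// blood oxygen, elevated heart rate and high perspiration coincide with
// the wearer being nearly motionless.
class DistressMonitor {

    public static boolean inDistress(double spO2Percent, int heartRateBpm,
                                     boolean perspirationHigh, double movementG) {
        boolean lowOxygen = spO2Percent < 90.0;          // illustrative cutoff
        boolean elevatedHeartRate = heartRateBpm > 110;  // illustrative cutoff
        boolean nearlyStill = movementG < 0.1;           // little physical movement
        return lowOxygen && elevatedHeartRate && perspirationHigh && nearlyStill;
    }
}
```

Requiring all conditions together is what keeps a brisk walk (high heart rate, high movement) from triggering a false alarm.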

The technicians are able to help Jonathon with some simple medications which help him breathe better and ensure that, if he does have a heart attack, his heart will suffer minimal damage. Within 20 minutes, Jonathon is able to proceed on his journey after the EMTs determine he is okay to travel.

What is needed at an airport to enable this kind of service and efficiency?

There are obviously hardware requirements to support all these functions. This blog will not concern itself with specific hardware sources or choices, but will simply specify the requirements in terms of standards and the software components and capabilities needed to facilitate the functionality described.

Categorization of IoT devices at the airport

An airport in this scenario will have devices that require high security and fast response times, along with many diverse devices for numerous other functions. It could easily be the case that there are tens of thousands of IIoT devices in and around the airport that have to be managed. This number of devices requires a large amount of automation to be practical. There have to be well-defined security protocols and standards, as well as policies and rules for devices, so that they can be managed in this highly complex environment.

It's important to note that, because of the low cost of IoT hardware and the ubiquity of standards and other technology, the airport's infrastructure and security devices will rapidly evolve into IoT devices whether or not the airport plans for it.

An airport will need IoT devices designed for high-security, high-reliability operation. Some protocols, such as CoAP, are evolving rapidly to support more secure applications. In other cases, hardwired connections or high-security WiFi connections will be used.

Let us establish some terms for different types of environments that devices need to operate within.

INFRASTRUCTURE:  Devices related to HVAC, energy management, pumps, watering systems and anything that has to do with the basic operation of the buildings and environment.

OPERATIONS: Devices related to operating the airport, including transportation vehicles, robots, baggage handling and conveyors. This category also includes things like the airplanes themselves.

SECURITY: Devices related to locks, authentication systems, entitlement control systems, security monitoring devices, cameras, and detectors of dangerous situations. This might include cameras that can recognize known persons who are not supposed to be traveling, and the devices at security checkpoints.

SERVICE: Devices designed to provide services to customers. This might include monitors that display or dispense information to consumers, beacons, and devices that help consumers with baggage or travel assistance, such as wheelchairs and carts.

CUSTOMER:  Devices that customers bring into the airport that need to access services within the airport.

It is expected that devices in the INFRASTRUCTURE and OPERATIONS categories are always-connected devices that have large batteries or are directly connected to a power source, with a physical network connection that may be replicated in some cases for reliability and security.

We would hope the same high level of service were possible for SECURITY devices, but this category is expected to include things like NFC and RFID devices. Security devices might need to be mobile, so hardwired power and networking may not be possible in all cases.

Devices in the first three categories will also have to be manageable by a device manager that can establish geofences, wipe device contents and de-authorize devices if they are tampered with or accessed outside their assigned geofence. Devices in these categories should have health APIs to report failure or imminent failure, battery loss or other concerns. They should support an authentication mechanism, and all data at rest on the devices, as well as data in motion over whatever protocol the device uses, must be encrypted. The certificates used by such encryption systems should be managed automatically, so they can be revoked and re-issued periodically without manual intervention.
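The geofence rule above (de-authorize and wipe on exiting the fence) can be sketched in a few lines. This is a hypothetical illustration: the class, method names and rectangular fence are stand-ins, since a real device manager would use proper geofence polygons and a management protocol such as LWM2M.

```java
// Sketch (hypothetical API): geofence enforcement for managed airport devices.
// A device reporting a position outside its assigned fence is de-authorized
// and flagged for a remote wipe, as described in the text.
class GeofenceGuard {

    /** Axis-aligned bounding box as a minimal stand-in for a real geofence polygon. */
    public static boolean insideFence(double lat, double lon,
                                      double minLat, double maxLat,
                                      double minLon, double maxLon) {
        return lat >= minLat && lat <= maxLat && lon >= minLon && lon <= maxLon;
    }

    /** Returns the action the device manager should take for a position report. */
    public static String onPositionReport(double lat, double lon,
                                          double minLat, double maxLat,
                                          double minLon, double maxLon) {
        return insideFence(lat, lon, minLat, maxLat, minLon, maxLon)
                ? "OK"
                : "DEAUTHORIZE_AND_WIPE";
    }
}
```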

Security, Integration (Orchestration), IoT, BigData and other requirements of Airport Infrastructure

Due to the large number of people coming and going from an airport at any time, and the sheer number of devices with telemetry, the amount of data captured by the systems and the number of transactions are very large. It is anticipated that the system for an average big-city airport would have to handle tens of thousands of messages per second, possibly many more. That is roughly a billion messages per day, which is quite reasonable for today's systems to handle.
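As a sanity check on those numbers, a sustained ten thousand messages per second works out to a bit under a billion messages per day:

```java
// Back-of-envelope check: sustained message rate to daily volume.
class MessageVolume {
    // 86,400 seconds in a day, so 10,000 msg/s ~= 864 million msg/day.
    public static long messagesPerDay(long messagesPerSecond) {
        return messagesPerSecond * 86_400L;
    }
}
```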


A comprehensive, full-featured Identity and Security Manager component is required which supports all of the following:

OPENID, SAML2, Kerberos
Multi-factor authentication
credential mapping across different protocols
federation via OPENID, SAML2
account locking on failed user attempts
Account recovery with email and secret questions
bio-metric authentication
User/Group Management
LDAP, Active Directory or any database including Cassandra to support large user stores
SCIM support
RBAC Role Based Access Control
Fine Grained Policies via XACML
Entitlement Management for APIs – REST or SOAP
preventing login outside defined geofences
logging integration with BAM and CEP for KPI’s and suspicious activity eventing
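Two of the items above, role-based access control and "preventing login outside defined geofences", compose naturally into a single entitlement decision. The sketch below is hypothetical (it is not a XACML engine; the role names and resource paths are invented) and only illustrates how the two checks combine.

```java
import java.util.Map;
import java.util.Set;

// Sketch (hypothetical, not a real policy engine): combine RBAC with the
// "no login outside defined geofences" rule from the feature list.
class EntitlementCheck {

    // role -> resources that role may call (illustrative entries)
    static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
            "gate-agent", Set.of("/flights/manifest", "/passengers/checkin"),
            "baggage-robot", Set.of("/baggage/route"));

    public static boolean isAllowed(String role, String resource, boolean insideGeofence) {
        if (!insideGeofence) {
            return false; // deny any attempt made outside the assigned geofence
        }
        return ROLE_PERMISSIONS.getOrDefault(role, Set.of()).contains(resource);
    }
}
```

A real deployment would express the same decision as XACML policies evaluated by the identity server rather than hard-coded maps.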


A bigdata infrastructure is required because of the data-flow requirements as well as the scale of data being collected. Each device will be polled frequently, and that data will be logged both for use in security applications and for analysis to improve efficiency, discover loopholes, discover new automations and implement improvements in functionality quickly.

Support for storage technologies such as Cassandra is required.

Bigdata Collection
must collect to a common bigdata store for analysis
collect information on all API usage
collect information on data from devices or services that require polling
collect data from devices or services that publish themselves
easy to add new streams of unstructured or semi-structured data
discover new sources for data dynamically and collect data
support 10000s of messages / second easily
collect metadata from some services or devices where the raw data repository is elsewhere
collect data on the system itself, such as metrics produced, actions taken, events generated
Support Apache Thrift, HTTP, MQTT, JMS, SOAP, Kafka and Web services
ability to add GPS, other information to stream
ability to also send data to other loggers as needed
real-time analytics
non-programmers should be able to add new metrics, KPI’s or continuously calculated quantities
time-based pattern matching (Complex event processing) for real-time eventing
machine learning capability to learn behaviors to be monitored
ability to create events based on exceeding limits, falling below limits, breaking a geofence, entering a geofence or other conditions
ability to aggregate data in an event from other events or data streams
ability to process rule based analytics in real time
easy to create dashboards of visualizations of any event or data in any stream
easy to create maps of events, devices, data on any event stream
easy to use tools like Google gadgets to create visualizations
ability to aggregate data from multiple sources including bigdata, conventional databases, file systems or other sources
batch analytics
ability to integrate with Hadoop and open-source batch bigdata analytics tools such as Pentaho
ability to manage which batch analytics to perform
ability to manage large clusters of Cassandra or other bigdata storage databases automatically
ability to scale rapidly on increasing demand

Governance Registry / API/App/Web Store

A Governance Registry and Enterprise Store capability is needed to ensure security and configuration consistency, manage the lifecycle of APIs, and promote services to vendors, partners and the public. A governance registry provides lifecycle management as well as fault-tolerance configuration and some security services, but the typical governance registry doesn't provide a friendly interface for the public, vendors or even internal developers. An Enterprise Store designed to make it easy to find services, documentation and helpful hints, and so promote a community of users of those services, is required. This is an essential part of the Platform 3 message I talk about that is key to productivity and agility.

Some of the features required in these components:

Governance Registry / Enterprise Store
Need to be able to support any type of asset as a governed entity, including APIs for services, APIs for devices, different types of mobile devices, mobile phones, keys for APIs, credentials for devices, certificates for services, GPS coordinates for fixed devices and geofencing zones
easy to add devices and services
easy to find status of all entities
easy to find and view devices and services
must support APIs, Devices, APIs for devices and Apps
easy to find documentation about any asset
ability to create new asset classes
lifecycle creation for each class
easy to manage lifecycle of each class
certificate management for services and devices
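The "lifecycle creation for each class" requirement above can be illustrated with a tiny governed-asset sketch. The class, state names and promotion rule are all hypothetical; a real registry would let each asset class define its own lifecycle states and transition checks.

```java
import java.util.List;

// Sketch (hypothetical): a governed asset moving through a per-class
// lifecycle, as the registry/store feature list calls for.
class GovernedAsset {
    static final List<String> LIFECYCLE =
            List.of("Created", "Testing", "Published", "Deprecated", "Retired");

    final String name;
    final String type;   // e.g. "API", "Device", "App"
    int stage = 0;       // index into LIFECYCLE

    GovernedAsset(String name, String type) {
        this.name = name;
        this.type = type;
    }

    /** Promote to the next lifecycle state; returns the new state name. */
    public String promote() {
        if (stage < LIFECYCLE.size() - 1) {
            stage++;
        }
        return LIFECYCLE.get(stage);
    }
}
```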

Internet of Things (IoT)

The Internet of Things components have mainly to do with device management. In the past, device management has focused on cell phones. New device management capabilities must include the ability to manage devices of all types. The purpose of device management is to register devices, configure them in a uniform way, detect anomalous behaviors, and handle device upgrades, replacement, failures, theft, maintenance and even tampering.

In order to support the wide variety of devices in both the IIoT and IoT spheres, it is necessary to support a Connected Device Management Framework which abstracts the basic functions and services associated with any device. An important aspect of device management in an environment as complex as an airport is grouping devices in intelligent ways that allow management and contextual analysis of data.

IP-type devices connected over wired interfaces
IP-type devices connected over wireless interfaces (WiFi)
IP-type devices connected over CoAP/Zigbee
Protocols: REST and SOAP
Device Management
Device Profile
API registration
Owner Registration
GPS and GeoFencing
Beacon Security Profile
Supported Security
Health Status monitoring
Entitlement / Authorization Profile
Data Logging
Device Wipe
Upgrade Status and Upgrade
functional tagging
Groups: non-hierarchical lists of connected devices that depend on each other and have a set of analytics or services that operate across the group. Groups can contain other groups.
Group Profile
Group APIs
Group Geofencing
Group Health
Group part of Groups, tags
Group Data Logging
Connected Device Management Framework
Support for OMA
Support for LWM2M
Extensible Support for Devices that don’t work under these
Extensible Support for all the device management semantics above

Integration and Orchestration

A key capability of the proposed IoT infrastructure is the intelligent working together of multiple services to produce intelligent behavior. In the past, individual devices were handled as standalone functions, and not much thought was given to devices working together autonomously or to using information from multiple devices to drive automation.

In order to make the airport function efficiently with tens of thousands of devices, and with many services involving access to multiple devices, it is necessary to integrate all these devices so they can work together, and to establish rules, processes and simple workflows and dataflows between devices and people. As a result, there is a need for all three types of orchestration tools we have used before: rules engines, integration patterns in enterprise service buses, and business process engines. These three patterns provide an orthogonal and complete set of functionality for specifying behaviors: simple integration of information, distribution of information from different devices to all the parties that need it, and rules for complex behaviors or business process logic that may involve humans and take longer than a microsecond to fully process.

the full suite of enterprise integration patterns supported
JMS, AMQP, HTTP(S), Files
support for visual process management scripting orchestration
support for business rules orchestration
support for ETL from wide variety of sources
APIs and Enterprise Integration Patterns
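One of the enterprise integration patterns referenced above is the content-based router. A minimal sketch follows; the event types and service names are invented for illustration, and in practice this routing would be configured in an ESB rather than hard-coded.

```java
// Sketch (hypothetical): a content-based router -- device events are routed
// to different downstream services by event type, with a dead-letter channel
// for anything unrecognized.
class ContentBasedRouter {

    public static String route(String eventType) {
        switch (eventType) {
            case "BAGGAGE_SCAN":   return "baggage-service";
            case "HEALTH_ALERT":   return "medical-dispatch";
            case "DEVICE_FAILURE": return "maintenance-queue";
            default:               return "dead-letter-channel";
        }
    }
}
```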

Scalability, Performance

Since this system must support high message rates and highly variable demand, it is important that it can scale as needed to provide services. In addition, it is highly desirable to have automatic failure detection and service replacement. A cloud-based architecture is well suited to this, as has been shown by numerous companies such as Google, Yahoo and Twitter.

In order to support a cloud infrastructure, a DevOps/PaaS strategy should be employed to automate the management and deployment of services and to move, scale or upgrade them. PaaS platforms provide many features that would take months or years to custom-build, with the custom builds ending up inflexible.

For example, at an airport it may be possible to anticipate load at certain times of the day, or based on flight arrival and departure times and on events in the area that may increase or decrease the base load. To keep services responsive, it should be possible to allocate instances to specific devices or regions.

Fault Tolerance: The entire system and every component must support active/active and active/passive FT
Disaster Recovery: Data must be replicated to an alternate site, and applications should be runnable in the alternate environment through replicated governance registry contents
Scale: The system should support dynamic scaling, allowing for peak-period data flows and stress conditions 10 times the average flow
The system should support 1000s of messages / second at average flow
The system should be able to dynamically add instances of services if required to meet demand
Load Balancing
Hybrid: Support multiple clouds in different locations
Polyglot: Ability to support different development environments, development tools and applications
Container Support: Support for Docker, Xen and other container/virtualization technologies
Orchestration: Support for Kubernetes
Operations: Operational management capabilities, including performance monitoring
AutoScaling: The ability to scale a process based on numerous factors, not just queue lengths
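The AutoScaling item above boils down to a sizing decision. The sketch below is a hypothetical, deliberately simple rule (observed rate divided by per-instance capacity, with a fault-tolerance floor); real autoscalers would weigh queue lengths, latency and anticipated load as the text suggests.

```java
// Sketch (hypothetical rule): size the service pool from the observed message
// rate and per-instance capacity, keeping a minimum pool for fault tolerance.
class AutoScaler {

    public static int instancesNeeded(long observedMsgPerSec,
                                      long capacityPerInstance,
                                      int minInstances) {
        // Ceiling division: enough instances to absorb the observed rate.
        long needed = (observedMsgPerSec + capacityPerInstance - 1) / capacityPerInstance;
        return (int) Math.max(needed, minInstances);
    }
}
```

For example, at 10,000 msg/s with instances rated at 1,500 msg/s each, seven instances are needed; at quiet times the pool never drops below the configured minimum.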


The modern airport will be one of the most challenging environments for IoT and IIoT applications. Efficient operation and security go hand in hand in this new world. Many of the new devices and capabilities will make customers' lives dramatically better, especially in stressful situations. People can be better informed and have more of the basics handled automatically.

The purpose of all these "things" is to make our lives easier and better. If the complexity overwhelms the system, the purpose of IoT is lost. The "things" enable intelligent behavior through the ability to sense and act. However, if each thing has to be managed individually, we will spend our lives managing the things rather than enjoying them. So the key is "intelligence." The key to implementing intelligence is interoperability and orchestration. And knowing what the intelligent action is requires bigdata: we must be able to discover the patterns and actions that will save time and effort. We must figure out how to respond to security situations intelligently, and how to handle typical failures and events like weather or plane outages (or worse) in as automated and smooth a way as possible, or the complexity of the system will overwhelm the people who are meant to manage it and produce good results.

The technology to build the modern airport I have described exists today in open source. Nothing I suggested or described is beyond what we have today. It is simply a matter of the desire to build a better airport and a better world for ourselves.

I hope you enjoyed these use cases.

Evanthika Amarasiri: How to print the results summary when JMeter is running in headless mode

When you run JMeter in GUI mode, you can easily view the results through the Summary report or the Aggregate report.

But how do you view the summary if you are running JMeter in headless (non-GUI) mode?

This can be configured through the jmeter.properties file, which resides in the $JMETER_HOME/bin folder.

Note that this is enabled by default in the latest versions of JMeter, e.g. version 2.13.

#---------------------------------------------------------------------------
# Summariser - Generate Summary Results - configuration (mainly applies to non-GUI mode)
#---------------------------------------------------------------------------
#
# Define the following property to automatically start a summariser with that name
# (applies to non-GUI mode only)
summariser.name=summary
#
# interval between summaries (in seconds) default 3 minutes
summariser.interval=180
#
# Write messages to log file
summariser.log=true
#
# Write messages to System.out
summariser.out=true

Ajith Vitharana: Invoke a file upload Spring service using WSO2 API Manager (1.9)

This is a use case I tried recently, so I thought to post it as a blog post :).

1. Deploy sample spring service to upload a file.

i. Download the sample service from this website (download link).

The class looks like below:

package hello;

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.multipart.MultipartFile;

@Controller
public class FileUploadController {

    @RequestMapping(value="/upload", method=RequestMethod.GET)
    public @ResponseBody String provideUploadInfo() {
        return "You can upload a file by posting to this same URL.";
    }

    @RequestMapping(value="/upload", method=RequestMethod.POST)
    public @ResponseBody String handleFileUpload(@RequestParam("name") String name,
            @RequestParam("file") MultipartFile file) {
        if (!file.isEmpty()) {
            try {
                byte[] bytes = file.getBytes();
                // Write the uploaded bytes to a file named after the "name" parameter
                BufferedOutputStream stream =
                        new BufferedOutputStream(new FileOutputStream(new File(name)));
                stream.write(bytes);
                stream.close();
                return "You successfully uploaded " + name + "!";
            } catch (Exception e) {
                return "You failed to upload " + name + " => " + e.getMessage();
            }
        } else {
            return "You failed to upload " + name + " because the file was empty.";
        }
    }
}


ii. Unzip the file and go to the gs-uploading-files-master\complete directory in a command window.

iii. Execute the following command to build the executable jar file. (You should have Maven 3.x installed.)

mvn clean install

(The default pom file is configured to build with Java 8. If you are using Java 6 or 7, change the following section in gs-uploading-files-master\complete\pom.xml.)
        <java.version>1.7</java.version>

iv.  Go to the target directory and execute the following command to run our file upload service.

java -jar gs-uploading-files-0.1.0.jar

v. When you point your browser to http://localhost:8080/, you should see the service UI for uploading a file.

vi. You can send a POST request to http://localhost:8080/upload to upload a file as well.

2. Create an API for this service using WSO2 API Manager.

i. Required fields to create an API.

API Name: FileUploadAPI
Context    : /fileupload
Version    : 1.0.0
URL pattern : /*
HTTP Method : POST
Production URL :  http://localhost:8080/upload
Tier Availability : Unlimited

ii. After publishing the API, log in to the Store, subscribe to the API, and generate a token.

iii. Generate a SOAP UI project using the API endpoint https://localhost:8243/fileupload/1.0.0

  • Change the HTTP method to POST.
  • Add a query parameter "name". (The handleFileUpload operation expects that parameter.)
  • Select the Media Type as "multipart/form-data". (We send the file to the upload service as an attachment.)
  • Add an Authorization header. (We invoke an OAuth2-protected resource exposed by WSO2 API Manager.)

iv. Add a file as an attachment in SOAP UI.

v. Now when you send a request, you should see an error message similar to this:

{
   "timestamp": 1437841327984,
   "status": 400,
   "error": "Bad Request",
   "exception": "org.springframework.web.bind.MissingServletRequestParameterException",
   "message": "Required MultipartFile parameter 'file' is not present",
   "path": "/upload"
}

vi. To avoid this issue, you need to set the ContentID to "file" in the attachment window.

The ContentID value depends on the name given in the @RequestParam annotation [e.g., @RequestParam("file")] of the MultipartFile parameter in the method signature:
public @ResponseBody String handleFileUpload(@RequestParam("name") String name, 
            @RequestParam("file") MultipartFile file){
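To see why the ContentID matters, here is a hypothetical sketch (the boundary, file name, and class name are illustrative, not taken from SOAP UI) of the multipart/form-data part that Spring binds to @RequestParam("file"). The part's form-data name — which is what SOAP UI's ContentID field supplies — must be "file":

```java
// Hypothetical illustration: how a multipart/form-data part is laid out.
// Spring matches @RequestParam("file") against the form-data "name" below.
public class MultipartSketch {

    static String buildPart(String boundary, String paramName, String fileName, String data) {
        return "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + paramName
                + "\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n"
                + "\r\n"
                + data + "\r\n"
                + "--" + boundary + "--\r\n";
    }

    public static void main(String[] args) {
        // With ContentID "file", this part binds to the MultipartFile parameter.
        System.out.println(buildPart("----demo-boundary", "file", "test.txt", "hello"));
    }
}
```

If the ContentID is anything other than "file", the name="..." in the part will not match the @RequestParam, producing exactly the MissingServletRequestParameterException shown above.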

vii. Go to the directory where you executed the jar file; you should see the uploaded file there.

Nirmal FernandoHow to tune hyperparameters?

Hyperparameter tuning is one of the key concepts in machine learning. Grid search, random search, and gradient-based optimization are a few techniques you could use to perform hyperparameter tuning automatically [1].

In this article, I am going to explain how you could do hyperparameter tuning manually by performing a few tests. I am going to use WSO2 Machine Learner 1.0 for this purpose (refer to [2] to understand what WSO2 ML 1.0 is capable of). The dataset I used for this analysis is the well-known Pima Indians Diabetes dataset [3], and the algorithm picked was logistic regression with mini-batch gradient descent. This algorithm has a few hyperparameters, namely:

  • Iterations - Number of times the optimizer runs before completing the optimization process
  • Learning rate - Step size of the optimization algorithm
  • Regularization type - Type of the regularization. WSO2 Machine Learner supports L2 and L1 regularization.
  • Regularization parameter - Controls the model complexity and hence helps to control model overfitting.
  • SGD Data Fraction - Fraction of the training dataset used in a single iteration of the optimization algorithm

From the above set of hyperparameters, what I wanted to find was the optimal learning rate and number of iterations, keeping the other hyperparameters at constant values. Specifically:

  • Finding the optimal learning rate and number of iterations which maximize AUC (area under the ROC curve [4])
  • Finding the relationship between the learning rate and AUC
  • Finding the relationship between the number of iterations and AUC


Firstly, the Pima Indians Diabetes dataset was uploaded to WSO2 ML 1.0. Then, I wanted to find a fair number of iterations so that I could search for the optimal learning rate. For that, the learning rate was kept at a fixed value (0.1) while the number of iterations was varied, and the AUC was recorded against each iteration count.

LR = 0.1


According to the plotted graph, it is quite evident that the AUC increases with the number of iterations. Hence, I picked 10000 as a fair number of iterations for finding the optimal learning rate (of course, I could have picked any number > 5000, where the AUC started to climb over 0.5). Increasing the number of iterations excessively would lead to an overfitted model.
Since I have picked a 'fair' number of iterations, the next step is to find the optimal learning rate. For that, the number of iterations was kept at a fixed value (10000) while the learning rate was varied, and the AUC was recorded against each learning rate.



According to the above observations, we can see that the AUC has a global maximum at a learning rate of 0.01 (to be precise, it is between 0.005 and 0.01). Hence, we could conclude that the AUC is maximized when the learning rate approaches 0.01, i.e., 0.01 is the optimal learning rate for this particular dataset and algorithm.

Now, we could change the learning rate to 0.01 and re-run the first test mentioned in the article.

LR = 0.01


The above graph depicts that the AUC increases ever so slightly as we increase the number of iterations. So, how do we find the optimal number of iterations? Well, it depends on how much computing power you have and what level of AUC you expect. The AUC will probably not improve drastically even if you increase the number of iterations.

How can I increase the AUC then? You could of course use another binary classification algorithm (e.g., support vector machines), or you could do some feature engineering on the dataset to reduce the noise in the training data.

This article tried to explain the process of tuning hyperparameters for a selected dataset and algorithm. The same approach could be used with different datasets and algorithms too.
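The manual sweep described above can be sketched in code. This toy example uses a hypothetical one-dimensional objective (not WSO2 ML or the Pima dataset): fix the iteration count, vary the learning rate, and keep the candidate with the best score:

```java
// Toy sketch of a manual learning-rate sweep. The objective f(w) = (w - 3)^2 and
// the candidate values are made up for illustration; the score plays the role of AUC.
public class LearningRateSweep {

    // Gradient descent on f(w) = (w - 3)^2, starting at w = 0.
    // Returns a "higher is better" score: the closer w gets to the optimum, the higher.
    static double train(double lr, int iterations) {
        double w = 0.0;
        for (int i = 0; i < iterations; i++) {
            w -= lr * 2 * (w - 3);
        }
        return -Math.abs(w - 3);
    }

    // Fix the iteration count and sweep the learning rate, keeping the best one.
    static double bestLearningRate(double[] candidates, int iterations) {
        double bestLr = candidates[0];
        double bestScore = Double.NEGATIVE_INFINITY;
        for (double lr : candidates) {
            double score = train(lr, iterations);
            if (score > bestScore) {
                bestScore = score;
                bestLr = lr;
            }
        }
        return bestLr;
    }

    public static void main(String[] args) {
        double[] candidates = {1.5, 0.5, 0.1, 0.01, 0.001};
        System.out.println("best learning rate: " + bestLearningRate(candidates, 100));
    }
}
```

Note how the toy objective shows the same qualitative behavior as the article: too large a learning rate (1.5) diverges, too small (0.001) has barely moved after the fixed iteration budget, and an intermediate value wins.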

Danushka FernandoHow to include the artifactId in a folder name, file name, or file content in a Maven archetype.

When we create Maven archetypes, especially multi-module ones, we might need to include the artifactId in a folder or file name, or in the content of a file. This can be done very easily.

To include the artifactId in a folder or file name, you just have to add the placeholder __rootArtifactId__.

**Note that there are two '_' characters before and after the word rootArtifactId.

So, for example, if you want a file whose name is the artifactId followed by -development.xml, you can simply name it __rootArtifactId__-development.xml. When a file name is given like this inside the archetype, running the archetype generate command will replace the placeholder with the artifactId provided.

The next thing is how to include this inside a file. This was tricky, and I couldn't find a way to do it at first. I kept trying things until it worked, and it's simple: you can simply add the placeholder ${rootArtifactId}.

In the same pattern, you can use other parameters such as version as well, using __version__ and ${version} respectively.
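As a toy illustration (plain string replacement, not Maven's actual template processing; the artifactId "myapp" and the XML snippet are made up), the two placeholder styles resolve like this:

```java
// Toy illustration of how the two archetype placeholder styles resolve,
// assuming the generated project's artifactId is "myapp".
public class ArchetypePlaceholders {

    // __rootArtifactId__ is the style used in file and folder names.
    static String resolveFileName(String template, String rootArtifactId) {
        return template.replace("__rootArtifactId__", rootArtifactId);
    }

    // ${rootArtifactId} is the style used inside file content.
    static String resolveContent(String template, String rootArtifactId) {
        return template.replace("${rootArtifactId}", rootArtifactId);
    }

    public static void main(String[] args) {
        System.out.println(resolveFileName("__rootArtifactId__-development.xml", "myapp"));
        System.out.println(resolveContent("<finalName>${rootArtifactId}</finalName>", "myapp"));
    }
}
```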

Enjoy !!!!

Srinath PereraMoved to wordpress at

Moved to wordpress, and find the new blog at

Isuru PereraJava CPU Flame Graphs

Brendan Gregg shared exciting news in his Monitorama talk: "JDK-8068945" is fixed in Java 8 Update 60 Build 19!

Without this fix, it was not possible to see the full stack in Java with Linux perf_events and a standard JDK (without any patches). For more information, see Brendan's Java CPU Flame Graphs page.

The Problem with Java and Perf

First of all, let's see what the problem is with using the current latest Java and perf.

For this example, I used the same program explained in my previous blog post about flame graphs.

java org.wso2.example.JavaThreadCPUUsage.App

Then I sampled on-CPU functions for Java program using perf. (See Brendan's Perf Examples)

sudo perf record -F 99 -g -p `pgrep -f JavaThreadCPUUsage`

After a few seconds, I pressed Ctrl+C to stop recording.

When I tried to list all raw events from the recorded data using the "sudo perf script" command, I saw the following message.

Failed to open /tmp/perf-29463.map, continuing without symbols

Perf tries to load the Java symbol table from the /tmp/perf-29463.map file. The 29463 part in the file name is the PID of the Java program.

Missing these Java symbols is one of the main problems with Java and perf. There are actually two specific problems here, and Brendan has explained them in Java CPU Flame Graphs.

As explained by Brendan, in order to solve these problems, we need to provide a Java symbol table for perf and instruct the JVM to preserve frame pointers. This is why we need the fix for "JDK-8068945".

Generating Java CPU Flame Graphs

I downloaded the latest JDK™ 8u60 Early Access Release, which is "JDK 8u60 Build b24" as of now.

I extracted the JDK to a temp directory in my home. So, my JAVA_HOME is now "~/temp/jdk1.8.0_60".

With this release, I can use the JVM argument "-XX:+PreserveFramePointer" with java command.

~/temp/jdk1.8.0_60/bin/java -XX:+PreserveFramePointer org.wso2.example.JavaThreadCPUUsage.App

Now we need to create the Java symbol file. I found two ways to do that.

  1. Use perf-map-agent
  2. Use perfj

Using perf-map-agent

We need to build "perf-java" from the perf-map-agent source.

git clone
cd perf-map-agent
export JAVA_HOME=~/temp/jdk1.8.0_60/
sudo apt-get install cmake
cmake .
make

Now, run "perf-java".

./perf-java `pgrep -f JavaThreadCPUUsage` 

This will create the Java symbol file in /tmp.

Please note that the perf-java command attaches to the Java process. Please use the same JAVA_HOME to make sure there are no errors when attaching to the Java process.

Now we can start a perf recording. (Same command as mentioned above)

sudo perf record -F 99 -g -p `pgrep -f JavaThreadCPUUsage`

Using perfj

PerfJ is a wrapper around the Linux perf command for Java programs. Download the latest release from the PerfJ releases page.

tar -xvf perfj-1.0.tgz
cd perfj-1.0/
sudo -u isuru JAVA_HOME=/home/isuru/temp/jdk1.8.0_60 ./bin/perfj record -F 99 -g -p `pgrep -f JavaThreadCPUUsage`

Please note that I'm running as the same user as the Java process. This is required because PerfJ also attaches to the Java process. JAVA_HOME must also be the same.

Generating Flame Graph

Now we have the recorded perf data and the Java symbols to generate the flame graph.

sudo perf script | ~/performance/brendangregg-git/FlameGraph/stackcollapse-perf.pl > /tmp/out.perf-folded
cat /tmp/out.perf-folded | ~/performance/brendangregg-git/FlameGraph/flamegraph.pl --color=java --width 550 > /tmp/perf.svg

When you open perf.svg in a browser, you can see complete stack traces.


This was a quick blog post on generating Java CPU flame graphs. To create the Java symbol file, we can use PerfJ or perf-map-agent.

When I tested, PerfJ seemed to provide better results.

Here is the flame graph generated with perfj java symbols!


If there are any questions, please ask in comments. 

Isuru PereraInstalling Oracle JDK 7 (Java Development Kit) on Ubuntu

There are many posts on this topic if you search on Google. This post just explains the steps I use to install JDK 7 on my laptop.

Download the JDK from Oracle. The latest version as of now is Java SE 7u51.

I'm on 64-bit machine, therefore I downloaded jdk-7u51-linux-x64.tar.gz

It's easy to get the tar.gz package as we just have to extract the JDK.

I usually extract the JDK to /usr/lib/jvm directory.

sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm/
sudo tar -xf ~/Software/jdk-7u51-linux-x64.tar.gz
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0_51/bin/javac" 1
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0_51/bin/java" 1
sudo update-alternatives --install "/usr/lib/mozilla/plugins/" "" "/usr/lib/jvm/jdk1.7.0_51/jre/lib/amd64/" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0_51/bin/javaws" 1

After installing, we should configure each alternative

sudo update-alternatives --config javac
sudo update-alternatives --config java
sudo update-alternatives --config
sudo update-alternatives --config javaws

Now we can configure JAVA_HOME. We can edit ~/.bashrc and add the following:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_51/

That's it! :)


Please check the Oracle Java Installation script for Ubuntu. All the above steps are now automated via an installation script.

Evanthika AmarasiriCommon SVN related issues faced with WSO2 products and how they can be solved

Issue 1

TID: [0] [ESB] [2015-07-21 14:49:55,145] ERROR {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} -  Error while attempting to create the directory: http://xx.xx.xx.xx/svn/wso2/-1234 {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
org.tigris.subversion.svnclientadapter.SVNClientException: org.tigris.subversion.javahl.ClientException: svn: authentication cancelled
    at org.tigris.subversion.svnclientadapter.javahl.AbstractJhlClientAdapter.mkdir(
    at org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository.checkRemoteDirectory(

Reason : The user is not authenticated to write to the provided SVN location, i.e., http://xx.xx.xx.xx/svn/wso2/. When you see this type of error, verify the credentials you have given under the svn configuration in carbon.xml.


Issue 2

TID: [0] [ESB] [2015-07-21 14:56:49,089] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} -  Deployment synchronization commit for tenant -1234 failed {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask}
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: A repository synchronizer has not been engaged for the file path: /home/wso2/products/wso2esb-4.9.0/repository/deployment/server/
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(
    at java.util.concurrent.Executors$
    at java.util.concurrent.FutureTask.runAndReset(
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(
    at java.util.concurrent.ScheduledThreadPoolExecutor$

Reasons and solutions:

(I) SVN version mismatch between the local server and the SVN server (Carbon 4.2.0 products support SVN 1.6 only).

Solution - Use the SVN kit 1.6 jar in the Carbon server.

(II) If you have configured your server with a different SVN version than what's on the SVN server, the issue will not get resolved even if you later use the correct svnkit jar on the Carbon server side.

Solution - Remove all the .svn files under the $CARBON_HOME/repository/deployment/server folder.

(III) A similar issue can be observed when the SVN server is not reachable.

Evanthika AmarasiriSolving the famous "java.sql.SQLException: Total number of available connections are less than the total number of closed connections" issue

While starting up your WSO2 product after configuring a registry mount, you may have come across the issue below.

    Caused by: java.sql.SQLException: Total number of available connections are less than the total number of closed connections
        at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCDatabaseTransaction$ManagedRegistryConnection.close(
        at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCTransactionManager.endTransaction(
        ... 46 more

When you see the above exception, the first thing you have to do is verify the mount configuration in registry.xml.

See below. In this config, if you accidentally refer to the dbConfig name of the local DB in the mount, you will get the exception mentioned in the $subject. The correct dbConfig name you should refer to is wso2Mount, where wso2Mount points to an external DB.

E.g.:-

<dbConfig name="wso2registry">
    <dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>

<dbConfig name="wso2Mount">
    <dataSource>jdbc/WSO2MountRegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>wso2Mount</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>

<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>

sanjeewa malalgodaHow to handle a distributed counter across the cluster when each node contributes to the counter - Distributed throttling

Handling throttling in a distributed environment is a bit tricky. For this we need to maintain a time window and counters per instance, and those counters should be shared across the cluster as well. Recently I worked on a similar issue, and I will share my thoughts about this problem.

Let's say we have 5 nodes, and each node can serve x requests per minute. So across the cluster we can serve 5x requests per minute. In some cases node1 may serve 2x while another serves 1x, but we still need 5x across the cluster. To address this we need a shared counter across the cluster, so that each node can contribute to it and maintain the counts.

To implement something like this, we may use the following approach.

We can maintain two Hazelcast IAtomicLong data structures (or a similar distributed counter) as follows. These are handled at the cluster level, and individual nodes do not have to do anything about replication.

• Shared counter : maintains the global request count across the cluster
• Shared timestamp : used for managing the time window across the cluster for a particular throttling period

In each instance we should maintain the following per counter object:
• A local global counter, which syncs up with the shared counter in the replication task (local global counter = shared counter + local counter)
• A local counter, which holds request counts until the replication task runs (after replication, local counter = 0)

We may use a replication task that runs periodically. During the replication task, the following happens: the shared counter is updated with the node's local counter, and then the local global counter is updated from the shared counter. When the throttling window is reset, the global counter is set back to zero.

In addition, we need to set the current time into the Hazelcast atomic long. When another server gets its first request, it reads the first-access time from Hazelcast according to the caller context ID, so all the servers share one first-access time. To check whether the time window has elapsed, we compare the current time against the first-access time from Hazelcast plus the unit time of the tier; if the window has elapsed, we set the previous caller context's global count to 0.
The assumption we made is that all nodes in the cluster have the same timestamp.
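The counter scheme above can be sketched as follows. This is a minimal single-process simulation: java.util.concurrent.atomic.AtomicLong stands in for the cluster-wide Hazelcast IAtomicLong, and all names are illustrative, not WSO2 throttle-core APIs:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the shared/local counter scheme described above.
// AtomicLong simulates the cluster-wide Hazelcast IAtomicLong.
public class DistributedCounterSketch {

    static final AtomicLong sharedCounter = new AtomicLong(); // cluster-wide count

    long localCounter = 0;       // requests seen since the last replication run
    long localGlobalCounter = 0; // this node's view of the cluster-wide count

    void countRequest() {
        localCounter++;
    }

    // What each node's periodic replication task would do: push the local delta
    // into the shared counter, then refresh the local view of the global count.
    void replicate() {
        localGlobalCounter = sharedCounter.addAndGet(localCounter);
        localCounter = 0;
    }

    boolean isThrottled(long limit) {
        // Un-replicated local hits are counted on top of the last known global count.
        return (localGlobalCounter + localCounter) > limit;
    }

    public static void main(String[] args) {
        DistributedCounterSketch node1 = new DistributedCounterSketch();
        DistributedCounterSketch node2 = new DistributedCounterSketch();
        for (int i = 0; i < 3; i++) {
            node1.countRequest();
            node2.countRequest();
        }
        node1.replicate();
        node2.replicate();
        System.out.println("shared count: " + sharedCounter.get());
    }
}
```

Between replication runs each node over- or under-counts only by its peers' un-replicated deltas, which is the accuracy trade-off this design accepts in exchange for not hitting the distributed counter on every request.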

See the following diagrams.

If you need to use the throttle core for your application/component running in a WSO2 runtime, you can import the throttle core into your project and use the following code to check access availability.

Here I have listed code that throttles messages using a handler, so you can write your own handler and call the doThrottle method in the message flow. First you need to import org.wso2.carbon.throttle.core into your project.

private boolean doThrottle(MessageContext messageContext) {
    boolean canAccess = true;
    boolean isResponse = messageContext.isResponse();
    org.apache.axis2.context.MessageContext axis2MC =
            ((Axis2MessageContext) messageContext).getAxis2MessageContext();
    ConfigurationContext cc = axis2MC.getConfigurationContext();
    synchronized (this) {
        if (!isResponse) {
            initThrottle(messageContext, cc);
        }
    }
    // If the access is successful through the concurrency throttle and this is a
    // request message, then do access-rate-based throttling
    if (!isResponse && throttle != null) {
        AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(messageContext);
        String tier;
        if (authContext != null) {
            AccessInformation info = null;
            try {
                String ipBasedKey = (String) ((TreeMap) axis2MC.
                        getProperty("TRANSPORT_HEADERS")).get("X-Forwarded-For");
                if (ipBasedKey == null) {
                    ipBasedKey = (String) axis2MC.getProperty("REMOTE_ADDR");
                }
                tier = authContext.getApplicationTier();
                ThrottleContext apiThrottleContext =
                        getApplicationThrottleContext(messageContext, cc, tier);
                // if (isClusteringEnable) {
                //     applicationThrottleContext.setConfigurationContext(cc);
                // }
                info = applicationRoleBasedAccessController.canAccess(apiThrottleContext,
                        ipBasedKey, tier);
                canAccess = info.isAccessAllowed();
            } catch (ThrottleException e) {
                handleException("Error while trying to evaluate the IP-based throttling policy", e);
            }
        }
    }
    if (!canAccess) {
        return false;
    }
    return canAccess;
}

private void initThrottle(MessageContext synCtx, ConfigurationContext cc) {
    if (policyKey == null) {
        throw new SynapseException("Throttle policy unspecified for the API");
    }
    Entry entry = synCtx.getConfiguration().getEntryDefinition(policyKey);
    if (entry == null) {
        handleException("Cannot find throttling policy using key: " + policyKey);
        return;
    }
    Object entryValue = null;
    boolean reCreate = false;
    if (entry.isDynamic()) {
        if ((!entry.isCached()) || (entry.isExpired()) || throttle == null) {
            entryValue = synCtx.getEntry(this.policyKey);
            if (this.version != entry.getVersion()) {
                reCreate = true;
            }
        }
    } else if (this.throttle == null) {
        entryValue = synCtx.getEntry(this.policyKey);
    }
    if (reCreate || throttle == null) {
        if (entryValue == null || !(entryValue instanceof OMElement)) {
            handleException("Unable to load throttling policy using key: " + policyKey);
            return;
        }
        version = entry.getVersion();
        try {
            // Creates the throttle from the policy
            throttle = ThrottleFactory.createMediatorThrottle(
                    PolicyEngine.getPolicy((OMElement) entryValue));
        } catch (ThrottleException e) {
            handleException("Error processing the throttling policy", e);
        }
    }
}
sanjeewa malalgodaHow to minimize Solr indexing time (registry artifact loading time) in a newly spawned instance

In an API Management platform, sometimes we need to add Store and Publisher nodes to the cluster. But if you have a large number of resources in the registry, Solr indexing will take some time.
Solr indexing is used to index registry data on the local file system. In this post we will discuss how to minimize the time taken by this loading process. Please note this applies to all Carbon kernel 4.2.0 or below versions. In G-Reg 5.0.0 we have handled this issue, and you do not need to do anything for this scenario.

You can minimize the time taken to list existing APIs in the Store and Publisher by copying an already-indexed solr/data directory to a fresh APIM instance.
However, note that you should NOT copy and replace solr/data directories between different APIM product versions. (For example, you can NOT copy the solr/data directory of APIM 1.9 to APIM 1.7.)

[1] First, create a backup of the Solr indexed files from the currently running API Manager instance: the [APIM_Home]/solr/data directory.
[2] Now copy and replace the [Product_Home]/solr/data directory in the new APIM instance(s) before puppet initializes it. The new instance will then list existing APIs, since by the time the new Carbon instance starts running, the Solr indexed files have already been copied to it.

If you are using an automated process, it's recommended to automate this as well.
You can follow these instructions to automate it:
01. Take a backup of the solr/data directory of a running server and push it to some artifact server (you can use rsync or SVN for this).
02. When a new instance is spawned, copy the updated Solr content from the remote artifact server before starting it.
03. Then start the new server.

If you need to manually re-index the data, you can follow the approach below.

Shut down the server if it is already started.
Rename the lastAccessTimeLocation in registry.xml.
Back up the solr directory and delete it.

Restart the server and keep it idle for a few minutes to re-index.

    Chanika GeeganageResolve - keytool error: java.lang.Exception: Failed to establish chain from reply

This blog post is related to my previous post, 'Add CA signed certificate to keystore'. When you import the CA signed certificate into your keystore, you may get the following error:

keytool error: java.lang.Exception: Failed to establish chain from reply

The cause of this error
This error occurs if
• the correct root certificate is not imported to the keystore
• the correct intermediate certificate is not imported to the keystore
The root cause is that when you import the signed certificate, keytool checks whether it can build a chain using the issuer and subject parameters in the imported certificate.

The solution is to

import the correct root and intermediate certificates, compatible with the CA and the certificate type. For example, if you are using VeriSign, you can find all the intermediate and root certificates from here.

    Chanika GeeganageAdd CA signed certificate to keystore

A keystore is a file that keeps private keys, certificates and symmetric keys as key-value pairs. Each certificate is uniquely identified by an identifier called an 'alias'. In this blog post I will go through a very common use case where we have to get a certificate signed by a CA and import it into the keystore.

As a prerequisite, we have to make sure that Java is installed correctly and the classpath is set. Then we can follow the steps below.

    1. Create a key store

    You can create your own keystore by executing the following command.

    keytool -genkey -alias democert -keyalg RSA -keystore demokeystore.jks -keysize 2048

    You will be prompted to give below required information and a password for the keystore.

    Enter keystore password:
    Re-enter new password:
    What is your first and last name?
      [Unknown]:  localhost
    What is the name of your organizational unit?
      [Unknown]:  wso2
    What is the name of your organization?
      [Unknown]:  wso2
    What is the name of your City or Locality?
      [Unknown]:  colombo
    What is the name of your State or Province?
      [Unknown]:  WP
    What is the two-letter country code for this unit?
      [Unknown]:  LK
    Is CN=localhost, OU=wso2, O=wso2, L=colombo, ST=WP, C=LK correct?
      [no]:  yes

    Enter key password for <democert>
    (RETURN if same as keystore password):

This generates a private key and the certificate with the alias specified in the command (e.g., democert).

Once you have executed the above command, a new file named demokeystore.jks will be created at the location where you executed the command.

    2. View the content in the created keystore.

    You can execute the following command in order to view the content of the created keystore in step 1.

    keytool -list -v -keystore demokeystore.jks -storepass password

    You will receive an output similar to

    Keystore type: JKS
    Keystore provider: SUN

    Your keystore contains 1 entry

    Alias name: democert
    Creation date: Jul 21, 2015
    Entry type: PrivateKeyEntry
    Certificate chain length: 1
    Owner: CN=localhost, OU=wso2, O=wso2, L=colombo, ST=WP, C=LK
    Issuer: CN=localhost, OU=wso2, O=wso2, L=colombo, ST=WP, C=LK
    Serial number: 2ef9b438
    Valid from: Tue Jul 21 18:46:12 IST 2015 until: Mon Oct 19 18:46:12 IST 2015
    Certificate fingerprints:
    MD5:  2F:1B:EF:8E:95:5D:0E:0F:81:34:FE:4A:27:A9:68:A8
    SHA1: FD:9D:98:A1:FB:36:DD:6B:D7:1A:F6:E8:AC:98:35:3A:5E:3C:7F:9A
    SHA256: CF:02:15:41:9E:CC:67:65:85:33:4A:E4:3D:B9:C4:C5:B2:04:CD:A8:FF:B6:63:D6:DB:DC:79:85:51:79:FA:1E
    Signature algorithm name: SHA256withRSA
    Version: 3


    #1: ObjectId: Criticality=false
    SubjectKeyIdentifier [
    KeyIdentifier [
    0000: 9B D4 69 A2 D9 A8 E0 22   02 D6 4F 57 71 3B 27 F4  ..i...."..OWq;'.
    0010: 18 8E 7F 4F                                        ...O


    3. Create CSR

The certificate in the created keystore is a self-signed certificate, but you need to get your certificate signed by a Certificate Authority (CA). For that, a Certificate Signing Request (CSR) has to be generated. You can use the following command:

    keytool -certreq -v -alias democert -file csr_request.pem -keypass password -storepass password -keystore demokeystore.jks

Then a csr_request.pem file is created in the location where you executed this command.

    4. Get the certificate signed by CA

In this blog post I'm going to use the VeriSign free trial version to get the certificate signed. If you are using this trial version, the certificate is valid for only 30 days. Follow the wizard here. When you are asked for the CSR, open the generated csr_request.pem in a text editor, copy the content, and paste it into the text area in the wizard. After you have completed the wizard, you will receive an email from VeriSign with the signed certificate.

    5. Import the root and intermediate certificates to the keystore

Before importing the signed certificate, you have to import the root and intermediate certificates to the keystore. The root certificate for the VeriSign trial version can be found here. Copy the text of the root certificate to a new file and save it as verisign_root.pem.

    Now you can import the root certificate to the keystore by executing the following command.

    keytool -import -v -noprompt -trustcacerts -alias verisigndemoroot -file verisign_root.pem -keystore demokeystore.jks -storepass password

Now the root cert is imported. You can verify that by listing the content of the keystore (step 2).

The next step is to import the intermediate certificate. For the VeriSign trial version you can get the intermediate certificate from here. Copy the text and save it in a new file (verisign_intermediate.pem).

    Import the intermediate certificate:

    keytool -import -v -noprompt -trustcacerts -alias verisigndemoim -file verisign_intermediate.pem -keystore demokeystore.jks -storepass password

    6. Import the CA signed certificate to keystore

Now we can import the signed certificate. You can find the certificate in the email you received from VeriSign. Copy the certificate text. For example,

-----END CERTIFICATE-----

Create a new text file, paste this content, and save it as verisign_signed.pem.
    Import that with the following command

    keytool -import -v -alias democert -file verisign_signed.pem -keystore demokeystore.jks -keypass password -storepass password

    Now you have the CA signed certificate in your keystore. You can verify that by listing the certificates in the keystore as you did in step 2.

    Sumedha KodithuwakkuHow to pass an Authorization token to the back-end server in WSO2 API Cloud

    There can be scenarios where the back-end service expects an Authorization token which is different from the Authorization token used in API Cloud. However, when a request is sent to WSO2 API Cloud with the Authorization header, the API Gateway will use it for API authentication/authorization, and it will be dropped from the outgoing message to the back-end service.

    This requirement can be achieved with the use of two headers: an Authorization header containing the API Cloud token, and a different header containing the token expected by the back-end. Using a custom mediation extension, the value of this second header can be extracted and set as the Authorization header, which will then be sent to the back-end. For example, the two headers Authorization (API Cloud's token) and Authentication (the token expected by the back-end) can be used.

    For this scenario, a per-API extension sequence can be used. The naming pattern of a per-API extension sequence is explained here.

    You can find the Organization Key on the Organization Manage page in WSO2 Cloud.

    Following is a sample synapse configuration of a per-API extension sequence for this scenario.

    <?xml version="1.0" encoding="UTF-8"?>
    <sequence xmlns="http://ws.apache.org/ns/synapse">
        <!-- Copy the value of the custom Authentication header into the
             Authorization header that is sent to the back-end -->
        <property name="Authorization"
                  expression="get-property('transport', 'Authentication')"
                  scope="transport"/>
        <!-- Drop the custom header from the outgoing message -->
        <property name="Authentication" scope="transport" action="remove"/>
    </sequence>

    An XML file should be created using the above configuration and uploaded to the Governance registry of API Cloud using the management console UI of the Gateway.

    You can get the user name from the top right corner of the Publisher; then enter your password and log in. Once you are logged in, select Resources (on the left hand side of the Management Console), click Browse, and navigate to the /_system/governance/apimgt/customsequences registry location. Since this sequence needs to be invoked in the In direction (the request path), navigate to the in collection. Click Add Resource, upload the XML file of the sequence configuration and add it. (Note: once you add the sequence it might take up to 15 minutes until it is deployed into the Publisher.)

    Now go to the Publisher, select the required API, open the edit wizard by clicking Edit, and navigate to the Manage section. Select the Sequences check box and pick the sequence we added for the In Flow. After that, Save and Publish the API.

    Now you can invoke the API passing the above two headers, and the back-end will receive the required Authorization header. A sample curl request would be as follows:

    curl -H "Authorization: Bearer a414d15ebfe45a4542580244e53615b" -H "Authentication: Bearer custom-bearer-token-value"

    What happens is as follows:

    Client (headers: Authorization, Authentication) -> 
               Gateway (drop: Authorization, convert: Authentication-Authorization) -> Backend 


    Saliya EkanayakeGet and Set Process Affinity in C

    The affinity of a process can be retrieved and set within a C program using the sched_getaffinity (man page) and sched_setaffinity (man page) routines available in sched.h. The following are two examples showing these two methods in action.

    Get Process Affinity

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char* argv[])
    {
        pid_t pid = getpid();
        cpu_set_t my_set;
        int ret;

        CPU_ZERO(&my_set);
        ret = sched_getaffinity(0, sizeof(my_set), &my_set);
        if (ret == -1) {
            perror("sched_getaffinity");
            return 1;
        }

        char str[1024];
        strcpy(str, " ");
        int count = 0;
        int j;
        for (j = 0; j < CPU_SETSIZE; ++j) {
            if (CPU_ISSET(j, &my_set)) {
                ++count;
                char cpunum[8];
                sprintf(cpunum, "%d ", j);
                strcat(str, cpunum);
            }
        }
        printf("pid %d affinity has %d CPUs ... %s\n", pid, count, str);
        return 0;
    }
    You can test this by using the taskset command in Linux to set the affinity of this program and checking whether the program reports the same affinity you set. For example, you could do something like:
    taskset -c 1,2 ./a.out
    Note: you could use the non-standard CPU_COUNT(&my_set) macro to retrieve how many cores are assigned to this process, instead of using a count variable within the loop as in the above example.

    Set Process Affinity

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <limits.h>
    #include <unistd.h>

    int main(int argc, char* argv[])
    {
        pid_t pid = getpid();
        cpu_set_t my_set;
        int ret;

        CPU_ZERO(&my_set);
        CPU_SET(1, &my_set);
        CPU_SET(2, &my_set);

        ret = sched_setaffinity(0, sizeof(my_set), &my_set);
        if (ret == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %d\n", pid);

        /* A busy loop to keep the program from terminating while
           you use taskset to check if affinity is set as you wanted */
        long x = 0;
        long i;
        for (i = 0; i < LONG_MAX; ++i) {
            x += i;
        }
        return 0;
    }
    The program binds itself to cores 1 and 2 (assuming you have that many cores) using the two CPU_SET macro calls. You can check whether this is set correctly using the taskset command again. The output of the program will include its process id, say pid. Use it as follows to check with taskset:
    taskset -pc pid
    Note: I've included a busy loop after printing the pid of the program just so that it keeps running while you check whether the affinity is set correctly.

    Chamara SilvaHow to disable HTTP Keep-Alive connections in WSO2 ESB / API Manager

    WSO2 API Manager and ESB use HTTP keep-alive connections by default. With keep-alive enabled, the same HTTP connection is reused to send requests and receive responses. If you want to disable keep-alive connections, the following property needs to be added to the in-sequence of the proxy service or API.
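    The property itself is missing above. On the ESB/API Manager transport, keep-alive for the outgoing connection is commonly disabled with the NO_KEEPALIVE property; a minimal sketch of such an in-sequence (the endpoint name here is illustrative) would be:

```xml
<inSequence>
    <!-- Disable HTTP keep-alive on the connection to the back-end -->
    <property name="NO_KEEPALIVE" value="true" scope="axis2"/>
    <send>
        <!-- "BackendEndpoint" is an illustrative endpoint name -->
        <endpoint key="BackendEndpoint"/>
    </send>
</inSequence>
```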

    Waruna Lakshitha JayaweeraRest Service with ODE process


    Web services come in two types: SOAP and REST. BPEL processes communicate using SOAP messages. This post describes how we can call a REST service within a BPEL process.

    Why we need to access a REST service via WSDL

    Since REST is lightweight, most social networks and public services are deployed as RESTful services. There are requirements where we need to provide a WSDL file to the client but still access a REST service through it, especially when writing BPEL processes.

    Apache ODE – WSDL 1.1 Extensions for REST

    The traditional HTTP binding in WSDL supports only the GET and POST operations, but REST requires the additional PUT and DELETE operations. This can be satisfied using the WSDL 1.1 Extensions for REST in Apache ODE.

    User Service Rest Process

    I wrote a sample REST process called the User REST process. This is the BPEL model of the process.

    Here the User Process takes a user ID and user name as input and creates the user in the service store with the PUT operation. POST will update the username, appending '' to it, and the GET operation provides the user object for the response, which will be the output. Before sending the output, the DELETE operation will delete the user. The process doesn't do a meaningful task, but it was modeled to cover all four operations of a REST service. Let's look at the sample REST requests made during the process.

    GET will give the user name for a given user ID.
    PUT will add a new user.
    POST will update the username for a given ID.
    DELETE will remove the user for a given ID.

    All these REST operations are specified to take user input rather than hard-coded values. We know BPEL accesses external services through partner links via WSDL files. When developing a BPEL process that calls a REST service, we need to create a separate WSDL file for it, which uses the WSDL 1.1 extensions to describe the REST service calls.
    For example, I will describe step by step how we can take user input and pass it into the PUT REST operation via WSDL. The same approach applies to the other REST operations.

    1) Here is the user schema, consisting of the username and user ID.

    <xsd:schema attributeFormDefault="qualified"
        elementFormDefault="unqualified" targetNamespace="">
        <xsd:element name="user">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element minOccurs="0" name="uid" nillable="true"
                        type="xsd:string" />
                    <xsd:element minOccurs="0" name="uname" nillable="true"
                        type="xsd:string" />
                </xsd:sequence>
            </xsd:complexType>
        </xsd:element>
        <xsd:element name="userID">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element minOccurs="0" name="uid" nillable="true"
                        type="xsd:string" />
                </xsd:sequence>
            </xsd:complexType>
        </xsd:element>
    </xsd:schema>

    2) The PUT messages can be defined as follows.
    <wsdl:message name="putUserNameRequest">
        <wsdl:part name="user" element="ignore:user" />
    </wsdl:message>
    <wsdl:message name="putUserNameResponse">
        <wsdl:part name="part" type="xsd:string" />
    </wsdl:message>

    3) The port type for the PUT operation can be specified as follows.
    <wsdl:portType name="UserServicePutPT">
        <wsdl:operation name="putUserName">
            <wsdl:input message="tns:putUserNameRequest" />
            <wsdl:output message="tns:putUserNameResponse" />
        </wsdl:operation>
    </wsdl:portType>

    4) The WSDL binding for the PUT operation can be specified as follows. Here I am using {} placeholders (URL replacement) to pass the request message data via the REST PUT URL.

    <wsdl:binding name="UserServicePutHTTP" type="tns:UserServicePutPT">
        <http:binding verb="PUT" />
        <wsdl:operation name="putUserName">
            <http:operation location="{uid}/{uname}" />
            <wsdl:input>
                <http:urlReplacement />
            </wsdl:input>
            <wsdl:output>
                <mime:content part="part" type="text/xml" />
            </wsdl:output>
        </wsdl:operation>
    </wsdl:binding>
    5) The service for the PUT operation can be written as follows.
    <wsdl:service name="UserServicePut">
        <wsdl:port binding="tns:UserServicePutHTTP" name="UserServicePutHTTP">
            <http:address location="" />
        </wsdl:port>
    </wsdl:service>

    You can check out the sample REST user process from here.

    Lakmal WarusawithanaWSO2 Hackathon in the Cloud

    UPDATE: The hackathon is postponed. We'll let you know the new date as soon as possible.

    # 100 Amazon EC2 Instances
    # 2000 Docker Containers
    # 70 Node Kubernetes Cluster
    # 4 Billion Events  
    # 4GB Data
    # 24 Hours
    # $10,000 in Prizes

    We are hosting our first ever 24-hour hackathon in the cloud to coincide with our 10th anniversary celebrations. Are you ready to take on our big data in the cloud challenge and show us how you can process over 4 billion events in the cloud? More details: WSO2 Hackathon in the Cloud.

    Here I’m going to share some inside information on how we are technically preparing for the WSO2 Hackathon in the Cloud.

    A single WSO2 Private PaaS environment to host the entire hackathon

    WSO2 Private PaaS has the multi-tenant capability that comes from the underlying Apache Stratos, allowing us to host the hackathon in a single PaaS environment while meeting all the scalability, isolation and security requirements. Beyond the PaaS capabilities, WSO2 CEP and WSO2 DAS provide the analytics platform needed to solve the big data challenge.

    Figure 01 - WSO2 Private PaaS
    For the PaaS environment we are using the following cutting-edge technologies and releases, which became generally available within July itself, making it all the more exciting!

    Apache Stratos 4.1.0 will be set up on top of 100 EC2 instances, which together provide 2580 virtual CPUs and 4425 GB of memory. It will create 10 Kubernetes clusters from 70 Kubernetes nodes, each node having 36 virtual CPUs and a 60 GB memory footprint. 2000 Docker containers will be orchestrated across the 70 Kubernetes nodes, allowing each team to scale up to 200 Docker containers in their application.

    Figure 02 - Apache Stratos with Kubernetes

    Each team will get a Stratos tenant with pre-configured WSO2 CEP and WSO2 DAS cartridge groups, including the following cartridges.

    • WSO2 DAS Group
      • WSO2 DAS Data Receiver/Publisher
      • WSO2 DAS Analytics (Spark)
      • WSO2 DAS Dashboard
      • Hadoop (HDFS)
      • Apache Hbase
      • Zookeeper
      • MySQL
    • WSO2 CEP Group
      • WSO2 CEP Manager
      • WSO2 CEP Worker
      • Apache Storm Supervisor
      • Apache Storm UI
      • Zookeeper
      • Nimbus
      • MySQL

    Team members should create a scalable application using the above two groups to process all 4 billion events and solve the given 2 queries within a very short period of time. The solution architecture of your application will definitely play a key role in winning the challenge, alongside writing the necessary CEP and DAS queries to solve the given problems. Teams can use the Apache Stratos web console to build the solution architecture of their application in a simple drag-and-drop manner.

    The WSO2 DAS cartridge group comes pre-configured to work in the following architecture, which we expect each team to use to solve query 02 - Outlier.


    Figure 03 - WSO2 DAS Architecture
    The WSO2 CEP cartridge group is pre-configured to work with Apache Storm; the parallel processing it enables helps solve query 01 - Load Prediction in a short period of time.
    Figure 04 - WSO2 CEP with Storm

    Metering and Monitoring

    Apache Stratos will monitor each Docker instance and auto-heal it, along with its dependencies, if any failure happens while processing the data. Application logs of all cartridges will be published to a central WSO2 DAS, which runs on separate AWS EC2 instances, so each team can view their application's logs on the central DAS dashboard in case they want to see each service's logs. Health statistics such as CPU and memory usage, published by the cartridge agents in the Docker containers, are recorded in DAS for later analysis, which can be useful to identify how each team's setup behaved while processing all 4 billion events.

    Are you ready for the Challenge?

    I highly recommend participating in the hackathon to anyone who is up for a challenge; it is going to be a rare occasion to show your colours while having tons of fun with these cutting-edge technologies.

    We will conduct several webinars, tutorials and samples on Apache Stratos and the WSO2 Analytics Platform to help you sharpen your knowledge in the pre-hackathon week, and during the hackathon IRC channels will be open 24 hours a day for any help.

    Srinath PereraWhy We Need a SQL-like Query Language for Realtime Streaming Analytics

    I was at O'Reilly Strata last week, and interest in realtime analytics was certainly at its peak.

    Realtime analytics has two flavours:
    1. Realtime Streaming Analytics (static queries given once that do not change; they process data as it comes in, without storing it). CEP, Apache Storm, Apache Samza, etc. are examples of this.
    2. Realtime Interactive/Ad-hoc Analytics (users issue ad-hoc dynamic queries and the system responds). Druid, SAP HANA, VoltDB, MemSQL and Apache Drill are examples of this.
    In this post, I am focusing on Realtime Streaming Analytics. (Ad-hoc analytics uses a SQL-like query language anyway.)

    Still, when thinking about realtime analytics, people think only of counting use cases. However, that is the tip of the iceberg. Due to the time dimension inherent in realtime use cases, there is a lot more you can do. Let us look at a few common patterns.
    1. Simple counting (e.g. failure count)
    2. Counting with Windows ( e.g. failure count every hour)
    3. Preprocessing: filtering, transformations (e.g. data cleanup)
    4. Alerts , thresholds (e.g. Alarm on high temperature)
    5. Data Correlation, Detect missing events, detecting erroneous data (e.g. detecting failed sensors)
    6. Joining event streams (e.g. detect a hit on soccer ball)
    7. Merge with data in a database, collect, update data conditionally
    8. Detecting Event Sequence Patterns (e.g. small transaction followed by large transaction)
    9. Tracking - follow some related entity’s state in space, time etc. (e.g. location of airline baggage, vehicle, tracking wild life)
    10. Detect trends – Rise, turn, fall, Outliers, Complex trends like triple bottom etc., (e.g. algorithmic trading, SLA, load balancing)
    11. Learning a Model (e.g. Predictive maintenance)
    12. Predicting next value and corrective actions (e.g. automated car)
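    As a taste of how compactly a SQL-like CEP language can capture the patterns above, here is a sketch of pattern 8 (detecting an event sequence) in Siddhi-style syntax; the stream and attribute names are made up for illustration:

```
from every a1 = TransactionStream[amount < 100]
     -> a2 = TransactionStream[cardNo == a1.cardNo and amount > 10000]
select a1.cardNo, a2.amount
insert into SuspiciousTransactionStream;
```

    Four lines express "a small transaction followed by a large transaction on the same card" - logic that would take pages of hand-written windowing and state management code.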

    Why do we need a SQL-like query language for Realtime Streaming Analytics?

    Each of the above has come up in real use cases, and we have implemented them using SQL-like CEP query languages. Knowing the internals of implementing core CEP concepts like sliding windows and temporal query patterns, I do not think every streaming use case developer should rewrite them. The algorithms are not trivial, and they are very hard to get right!
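    To make that concrete, here is a minimal sketch (in Python, with illustrative names) of just one such primitive, a time-based sliding-window average; getting the expiry and boundary semantics right across many query types is exactly the work a CEP engine does once on behalf of every query author:

```python
from collections import deque

class SlidingWindowAverage:
    """Average over events whose timestamps fall in the last `window_seconds`."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (timestamp, value) pairs, oldest first
        self.total = 0.0

    def add(self, timestamp, value):
        self.events.append((timestamp, value))
        self.total += value
        self._expire(timestamp)

    def _expire(self, now):
        # Drop events that have fallen out of the window.
        while self.events and self.events[0][0] < now - self.window:
            _, v = self.events.popleft()
            self.total -= v

    def average(self):
        return self.total / len(self.events) if self.events else None

w = SlidingWindowAverage(60)
w.add(0, 10.0)
w.add(30, 20.0)
w.add(90, 30.0)            # the event at t=0 has now expired
print(w.average())         # -> 25.0 (average of the events at t=30 and t=90)
```

    Even this toy version has subtle choices (when exactly a boundary event expires, how to avoid floating-point drift in the running total) that a query author should not have to re-solve for every use case.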

    Instead, we need higher levels of abstraction. We should implement those once and for all, and reuse them. The best lesson we can learn is from Hive and Hadoop, which do exactly that for batch analytics. I have explained Big Data with Hive many times; most people get it right away. Hive has become the major programming API for most Big Data use cases.

    Following is a list of reasons for a SQL-like query language:
    1. Realtime analytics is hard. Developers do not want to hand-implement sliding windows, temporal event patterns, etc.
    2. It is easy to follow and learn for people who know SQL, which is pretty much everybody.
    3. SQL-like languages are expressive, short, sweet and fast!!
    4. SQL-like languages define core operations that cover 90% of problems.
    5. Experts can dig in when they like!
    6. Realtime analytics runtimes can better optimize execution with a SQL-like model. Most optimisations are already studied, and there is a lot you can borrow from database optimisations.
    Finally, what are such languages? There are many defined in the world of Complex Event Processing (e.g. WSO2 Siddhi, Esper, TIBCO StreamBase, IBM InfoSphere Streams, etc.). SQLstream has a fully ANSI SQL compliant version. Last week I gave a talk at Strata discussing this problem in detail and how CEP could fit the bill. You can find the slide deck below.

    Srinath PereraIntroducing WSO2 Analytics Platform: Note for Architects

    WSO2 has had several analytics products for some time: WSO2 BAM and WSO2 CEP (or Big Data products, if you prefer the term). We are adding WSO2 Machine Learner, a product to create, evaluate, and deploy predictive models, to that mix very soon. This post describes how all of those fit into a single story.

    The following picture summarises what you can do with the platform.

    Let's look at each stage depicted in the picture in detail.

    Stage 1: Collecting Data

    There are two things for you to do.

    Define Streams - Just as you create tables before putting data into a database, you first define streams before sending events. Streams are a description of what your data looks like (the schema). You will use the same streams to write queries in the second stage. You do this via the CEP or BAM admin console (https://host:9443/carbon) or via the Sensor API described in the next step.

    Publish Events - Now you can publish events. We provide a single Sensor API to publish events to both the batch and realtime pipelines. The Sensor API is available as Java clients (Thrift, JMS, Kafka), JavaScript clients* (WebSocket and REST) and hundreds of connectors via WSO2 ESB. See How to Publish Your own Events (Data) to WSO2 Analytics Platform (BAM, CEP) for details on how to write your own data publisher.

    Stage 2: Analyse Data

    Now it is time to analyse the data. There are two ways to do this: analytics and predictive analytics.

    Write Queries

    For both batch and realtime processing you can write SQL-like queries. For batch queries we support Hive SQL, and for realtime queries we support the Siddhi Event Query Language.

    Example 1: Realtime Query (e.g. Calculate Average Temperature over 1 minute sliding window from the Temperature Stream) 

    from TemperatureStream#window.time(1 min)
    select roomNo, avg(temp) as avgTemp
    insert into HotRoomsStream ;

    Example 2: Batch Query (e.g. Calculate Average Temperature per each hour from the Temperature Stream)

    insert overwrite table TemperatureHistory
    select getHour(ts) as hour, avg(t) as avgT, buildingId
    from TemperatureStream group by buildingId, getHour(ts);

    Build Machine Learning (Predictive Analytics) Models

    Predictive analytics lets us learn “logic” from examples, where such logic is complex. For example, we can build “a model” to find fraudulent transactions. To that end, we can use machine learning algorithms to train the model with historical data about fraudulent and non-fraudulent transactions.

    The WSO2 Analytics platform supports predictive analytics in multiple forms:
    1. Use the WSO2 Machine Learner (2015 Q2) wizard to build machine learning models and use them with your business logic. For example, WSO2 CEP, BAM and ESB will support running those models.
    2. R is a widely used language for statistical computing; we can build models using R, export them as PMML (an XML description of machine learning models), and use the model within WSO2 CEP. You can also directly call R scripts from CEP queries.
    3. WSO2 CEP also includes several streaming regression and anomaly detection operators.

    Stage 3: Communicate the Results

    OK, now we have some results, and we communicate them to the users or systems that care about them. That communication can take three forms.
    1. Alerts detect special conditions and cover the last mile to notify users (e.g. email, SMS, push notifications to a mobile app, a pager, or triggering a physical alarm). This can easily be done with CEP.
    2. Dashboards visualise data to provide the overall picture at a glance (e.g. a car dashboard). They support customising and creating users' own dashboards. Also, when there is a special condition, they draw the user's attention to it and enable drilling down to the details. The upcoming WSO2 BAM and CEP 2015 Q2 releases will have a wizard to start from your data and build custom visualisations, with support for drill-downs as well.
    3. APIs expose data to users external to the organisational boundary, often consumed by mobile phones. WSO2 API Manager is one of the leading API solutions, and you can use it to expose your data as APIs. In later releases, we are planning to add support for exposing data as APIs via a wizard.

    Why choose WSO2 Analytics Platform?

    Reason 1: One platform for realtime, batch, and combined processing - with a single API for publishing events, and with support for combined use cases like the following:
    1. Run a similar query in the batch pipeline and the realtime pipeline (a.k.a. the Lambda Architecture).
    2. Train a machine learning model (e.g. a fraud detection model) in the batch pipeline, and use it in the realtime pipeline (use cases: fraud detection, segmentation, predicting the next value, predicting churn).
    3. Detect conditions in the realtime pipeline, but switch to detailed analysis using the data stored in the batch pipeline (e.g. fraud, giving deals to customers on an e-commerce site).
    Reason 2: Performance - WSO2 CEP can process 100K+ events per second and is one of the fastest realtime processing engines around. WSO2 CEP was a finalist in the DEBS Grand Challenge 2014, where it processed 0.8 million events per second with 4 nodes.

    Reason 3: A scalable realtime pipeline with support for running SQL-like CEP queries on top of Storm - Users can write queries using the SQL-like Siddhi Event Query Language, which provides higher-level operators to build complex realtime queries. See SQL-like Query Language for Real-time Streaming Analytics for more details.
    For batch processing we use Apache Spark (from the 2015 Q2 release onward), and for realtime processing users can run those queries in one of two modes:
    1. Run the queries on two CEP nodes, with one node as the HA backup for the other. Since WSO2 CEP can process in excess of a hundred thousand events per second, this choice is sufficient for many use cases.
    2. Partition the queries and streams, build an Apache Storm topology running CEP nodes as Storm spouts, and run it on top of Apache Storm. Please see the slide deck Scalable Realtime Analytics with declarative SQL like Complex Event Processing Scripts. This enables users to run complex queries as supported by Complex Event Processing, but still scale the computation for large data streams.
    Reason 4: Support for predictive analytics - building machine learning models, comparing them and selecting the best model, and using them within real-life distributed deployments.

    Almost forgot - all of this is open source under the Apache Licence. Most design decisions are discussed publicly at

    Refer to the following talk at WSO2Con Europe for more details (slides).

    If you find this interesting, please try it out. Please reach out to me or through if you want to know more information.

    Shani Ranasinghe[WSO2 APIM][Gateway] How to limit traffic to the WSO2 APIM Gateway using API Handlers

    Recently I had to find a solution to limit the traffic that passes through the WSO2 APIM Gateway, so that only requests from the testing team are allowed through. The architecture of the system is quite straightforward; please find a brief drawing of the system architecture below.
    When it comes to limiting the traffic to the Gateway, there are several ways to handle this. It can be handled at the network level (e.g. routing rules), or even by simply starting the server on another port that is not visible to the outside world. In this article I am going to describe how to do it from the Gateway itself; this method is just one option for achieving the goal.

     When an API is created in WSO2 APIM using the Publisher, the Publisher propagates the changes to the WSO2 APIM Gateway; that is, it creates a synapse configuration for the API on the Gateway. This synapse configuration holds a set of handlers, each placed to achieve different functionality. For example, the APIAuthenticationHandler is intended to validate the token. More information on the handlers can be found at [1]. The handlers are placed in order in the synapse configuration, and they are executed in the order they appear. Since the handlers are the API's first point of contact, we can use a handler to filter out requests depending on whether they come from a testing device.

     To achieve this, we need an identifier sent with the request. If we have this, filtering the requests is straightforward. First things first, we need to figure out what the identifier is. In my case, the request sends the device ID in a header, under the parameter "Auth". So in my handler I will read the header and check this Auth value.
     How do I tell which device IDs are allowed to continue? For this, I read the device IDs from a system property, so that the allowed device IDs can be passed on the command line when the server is started up.
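    Before wiring this into a Gateway handler, the core filtering logic can be sketched and tested in isolation. The class, method and header-value formats below are illustrative assumptions, not APIM APIs:

```java
import java.util.Arrays;

public class DeviceIdFilter {

    // The device ID is assumed to be the first '|'-separated token
    // of the Auth header value.
    static String extractDeviceId(String authHeader) {
        if (authHeader == null || authHeader.isEmpty()) {
            return null;
        }
        return authHeader.split("\\|")[0];
    }

    // Allow the request only if the device ID appears in the
    // comma-separated list passed via the -DdeviceIds system property.
    static boolean isAllowed(String authHeader, String allowedIds) {
        String deviceId = extractDeviceId(authHeader);
        if (deviceId == null || allowedIds == null || allowedIds.isEmpty()) {
            return false;
        }
        return Arrays.asList(allowedIds.split(",")).contains(deviceId);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("123|token-xyz", "123,456,789"));  // true
        System.out.println(isAllowed("999|token-xyz", "123,456,789"));  // false
    }
}
```

    The handler below uses exactly this logic, only reading the header from the synapse message context instead of a method parameter.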

     Okay, having given a brief description of what we are going to achieve, let's see how we can do it.

    1. Create the custom handler. To create the custom handler we need to create a Maven project and create a new class. This class must extend org.apache.synapse.rest.AbstractHandler. Please find the sample code below.

    package org.wso2.CustomHandler;

    import java.util.Map;

    import org.apache.synapse.MessageContext;
    import org.apache.synapse.core.axis2.Axis2MessageContext;
    import org.apache.synapse.rest.AbstractHandler;

    public class GatewayTrafficContrlHandler extends AbstractHandler {

        public boolean handleRequest(MessageContext arg0) {
            String deviceId = null;
            String identifier = null;
            String[] identifiers;

            // Obtain the identifier from the transport headers.
            Map headers = (Map) ((Axis2MessageContext) arg0).getAxis2MessageContext()
                    .getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
            if (headers != null) {
                identifier = (String) headers.get("Auth");
            }

            if (identifier != null && !identifier.isEmpty()) {
                // Get the first identifier, which is the device ID.
                identifiers = identifier.split("\\|");
                if (identifiers.length > 0) {
                    deviceId = identifiers[0];
                }
            }

            // Get the device ID list which is passed as a system property from
            // the command line. Only these device IDs will be allowed to pass
            // through the handler.
            String[] supportedDeviceIdList = null;
            String supportedDeviceIds = System.getProperty("deviceIds");
            if (supportedDeviceIds != null && !supportedDeviceIds.isEmpty()) {
                supportedDeviceIdList = supportedDeviceIds.split(",");
            }

            // Check if the device ID which is sent in the request is in the
            // list of device IDs passed as a system property.
            if (supportedDeviceIdList != null && supportedDeviceIdList.length > 0) {
                for (int index = 0; index < supportedDeviceIdList.length; index++) {
                    if (supportedDeviceIdList[index].equals(deviceId)) {
                        return true;
                    }
                }
            }
            return false;
        }

        public boolean handleResponse(MessageContext arg0) {
            // Let responses pass through unchanged.
            return true;
        }
    }

    2. Once we create the class to extract and check the identifier, we need to build the JAR.
    3. Copy the created JAR to the <APIM_HOME>/repository/components/lib folder.
    4. Then start the APIM server with the system property -DdeviceIds, and log into the APIM
         Gateway's management console. For example: ./wso2server.sh -DdeviceIds=123,456,789
    5. Go to Service Bus > Source View in the Main menu.
    6. In the configuration select the API, and then check the handlers section. In the handlers section,
        add a class element pointing to the newly created handler.

    <handler class="org.wso2.CustomHandler.GatewayTrafficContrlHandler"/>
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
        <property name="id" value="A"/>
        <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
    </handler>
    <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>
    <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler"/>
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>

    7. Make sure the newly created handler is the first handler in the list.
    8. Once the change is made, save, and observe the console for the reloading of the API.
    9. Test the handler by making a REST API call.

    And that's it. 

     The above solution will only let requests with device IDs 123, 456 or 789 pass through.



    sanjeewa malalgodaHow to get MD5SUM of all files available in conf directory

    We can use the following command to get the MD5SUM of every file in a directory. We can use this approach to check the configuration files of multiple servers and verify whether they are the same.
    find ./folderName -type f -exec md5sum {} \; > test.xml
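    As a sketch of comparing two servers' conf directories this way (the directory and file names below are made up for illustration), sort each checksum list and diff them:

```shell
# Create two illustrative conf directories with identical content
mkdir -p server1/conf server2/conf
echo "<config/>" > server1/conf/axis2.xml
echo "<config/>" > server2/conf/axis2.xml

# Compute a sorted checksum list for each server
(cd server1 && find ./conf -type f -exec md5sum {} \; | sort) > server1.md5
(cd server2 && find ./conf -type f -exec md5sum {} \; | sort) > server2.md5

# No diff output means the configuration files are identical
diff server1.md5 server2.md5 && echo "configs match"
```

    Sorting matters because find does not guarantee file order, so two identical trees could otherwise produce differently ordered checksum lists.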

    Prabath SiriwardenaUnderstanding Logjam and making WSO2 servers safe

    Logjam's discovery (May 2015) came as a result of follow-up investigations into the FREAK (Factoring attack on RSA-EXPORT Keys) flaw, which was revealed in March.

    Logjam involves vulnerabilities in the Diffie-Hellman key exchange that affect browsers and servers using TLS. Before we delve deep into the vulnerability, let's have a look at how Diffie-Hellman key exchange works.

    How does Diffie-Hellman key exchange work?

    Let's say Alice wants to send a secured message to Bob over an unsecured channel. Since the channel is not secured, the message itself has to be secured, to avoid being seen by anyone other than Bob.

    Diffie-Hellman key exchange provides a way to exchange keys between two parties over an unsecured channel, so the established key can be used to encrypt the messages later. First, both Alice and Bob have to agree publicly on a prime modulus (p) and a generator (g). These two numbers need not be protected. Then Alice selects a private random number (a) and calculates g^a mod p, which is also known as Alice's public secret - let's say it is A.

    In the same manner, Bob also picks his own private random number (b) and calculates g^b mod p, which is also known as Bob's public secret - let's say it is B.

    Now, both will exchange their public secrets over the unsecured channel, that is A and B - or g^a mod p and g^b mod p.

    Once Bob receives A from Alice, he calculates the common secret (s) as A^b mod p, and in the same way Alice calculates the common secret as B^a mod p.

    Bob's common secret: A^b mod p -> (g^a mod p ) ^b mod p -> g^(ab) mod p

    Alice's common secret: B^a mod p -> (g^b mod p ) ^a mod p -> g^(ba) mod p

    Here comes the beauty of modular arithmetic. The common secret derived at Bob's end is the same as the one derived at Alice's end. The bottom line is: to derive the common secret you must know either p, g, a and B, or p, g, b and A. Anyone intercepting the messages transferred over the wire would only know p, g, A and B - so they are not in a position to derive the common secret.
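    The exchange above can be sketched with toy numbers (p=23, g=5 - deliberately tiny; a real deployment uses primes of 2048 bits or more):

```shell
# powmod BASE EXP MOD - repeated multiplication, reducing mod MOD at each step
powmod() {
  r=1; i=0
  while [ "$i" -lt "$2" ]; do
    r=$(( (r * $1) % $3 )); i=$(( i + 1 ))
  done
  echo "$r"
}

p=23; g=5                         # agreed publicly
a=6                               # Alice's private random number
b=15                              # Bob's private random number

A=$(powmod "$g" "$a" "$p")        # Alice's public secret: g^a mod p
B=$(powmod "$g" "$b" "$p")        # Bob's public secret:   g^b mod p

s_alice=$(powmod "$B" "$a" "$p")  # Alice derives: B^a mod p
s_bob=$(powmod "$A" "$b" "$p")    # Bob derives:   A^b mod p

echo "A=$A B=$B shared(Alice)=$s_alice shared(Bob)=$s_bob"
```

An eavesdropper who sees only p, g, A and B cannot compute the shared value without solving the discrete logarithm, which is exactly the hardness assumption Logjam attacks by forcing a weak p.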

    How does Diffie-Hellman key exchange relate to TLS? 

    Let's have a look at how TLS works.

    I will not explain each and every message flow in the above diagram here - I will only focus on the Server Key Exchange and the Client Key Exchange messages.

    The Server Key Exchange message will be sent immediately after the Server Certificate message (or the Server Hello message, if this is an anonymous negotiation).  The Server Key Exchange message is sent by the server only when the Server Certificate message (if sent) does not contain enough data to allow the client to exchange a premaster secret. This is true for the following key exchange methods:
    • DHE_DSS 
    • DHE_RSA 
    • DH_anon
    It is not legal to send the Server Key Exchange message for the following key exchange methods:
    • RSA 
    • DH_DSS 
    • DH_RSA 
    Diffie-Hellman is used in TLS to exchange keys based on the cipher suite agreed upon during the Client Hello and Server Hello messages. If it is agreed to use DH as the key exchange protocol, then in the Server Key Exchange message the server will send over the values of p, g and its public secret (Ys) - and will keep its private secret (Xs) to itself. In the same manner, using the p and g shared by the server, the client will generate its own public secret (Yc) and private secret (Xc) - and will share Yc with the server via the Client Key Exchange message. In this way, both parties can derive their common secret.

    How would someone exploit this, and what in fact is the Logjam vulnerability?

    On 20th May, 2015, a group from INRIA, Microsoft Research, Johns Hopkins, the University of Michigan, and the University of Pennsylvania published a deep analysis of the Diffie-Hellman algorithm as used in TLS and other protocols. This analysis included a novel downgrade attack against the TLS protocol itself called Logjam, which exploits EXPORT cryptography.

    In the DH key exchange, the cryptographic strength relies on the prime number (p) you pick - not, in fact, on the random numbers picked by the server or the client. It is recommended that the prime number be 2048 bits long. The following table shows how hard it is to break the DH key exchange based on the length of the prime number.

    Fortunately, no one uses 512-bit prime numbers - except in EXPORT cryptography. During the crypto wars of the 90s it was decided to weaken ciphers used to communicate outside the USA, and these weaker ciphers are known as EXPORT ciphers. The law was later overturned, but unfortunately TLS was designed before that and still supports EXPORT ciphers. Under the EXPORT rules, DH prime numbers cannot be longer than 512 bits. If the client wants to use DH EXPORT ciphers with a 512-bit prime number, it has to send the DH_EXPORT cipher suite during the Client Hello message of the TLS handshake.

    No legitimate client wants to use a weak prime number, so a client will never suggest DH_EXPORT to the server - but most servers still support the DH_EXPORT cipher suite. That means, if someone in the middle manages to intercept the Client Hello initiated by the client and change the requested cipher suite to DH_EXPORT, the server will still accept it and the key exchange will happen using a weaker prime number. These types of attacks are known as TLS downgrade attacks - the cipher suite originally requested by the client is downgraded by changing the Client Hello message.

    But wouldn't this change ultimately be detected by the TLS protocol itself? TLS can detect whether any of the handshake messages were modified in the middle, by validating a hash of all the messages sent and received by both parties - at both ends. This happens at the end of the handshake. The client derives the hash of the messages it sent and received and sends that hash to the server - and the server validates it against the hash of all the messages it sent and received. Then the server derives the hash of the messages it sent and received and sends it to the client - and the client validates it in the same way. Since the common secret is established by this time, the hash is encrypted with the derived secret key - which, at this point, is known to the attacker who has broken the weak 512-bit prime. So the attacker can create a hash that is accepted by both parties, encrypt it, and send it to both the client and the server.

    To protect from this attack, the server should not respond to any of the weaker ciphers, in this case DHE_EXPORT.

    How to remove the support for weaker ciphers from WSO2 Carbon 4.0.0+ based products ?

    The cipher set used in a Carbon server is defined by the embedded Tomcat server (assuming JDK 1.7.*).
    • Open CARBON_HOME/repository/conf/tomcat/catalina-server.xml file. 
    • Find the Connector configuration corresponding to TLS. Usually there are only two connector configurations, and the connector corresponding to TLS has the connector property SSLEnabled=”true”. 
    • Add new property “ciphers” inside the TLS connector configurations with the value as follows.
      • If you are using tomcat version 7.0.34 :
      • If you are using tomcat version 7.0.59:
    • Restart the server. 
    Now to verify the configurations are all set correctly, you can run TestSSLServer.jar which can be downloaded from here

    $ java -jar TestSSLServer.jar localhost 9443 

    In the output you get by running the above command, there is a section called “Supported cipher suites”. If all configurations are done correctly, it should not contain any export ciphers. 

    Firefox v39.0 onwards does not allow access to web sites that support DHE with keys shorter than 1023 bits (not just DHE_EXPORT). Key lengths of 768 and 1024 bits are assumed to be attackable, depending on the computing resources the attacker has. Java 7 uses 768-bit keys even for non-export DHE ciphers. This will probably not be fixed until Java 8, so we cannot use these ciphers. It is recommended to remove not just the DHE_EXPORT cipher suites but all DHE cipher suites. In that case use the following for the 'ciphers' configuration.
    • If you are using tomcat version 7.0.34 :
    • If you are using tomcat version 7.0.59:
    The above is also applicable for Chrome v45.0 onwards.

        How to remove the support for weaker ciphers from WSO2 pre-Carbon 4.0.0 based products ?

        • Open CARBON_HOME/repository/conf/mgt-transports.xml
        • Find the transport configuration corresponding to TLS - usually this is having the port as 9443 and name as https.
        • Add the following new element.
          • <parameter name="ciphers">SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA</parameter>

        Jayanga DissanayakeWSO2 BAM : How to change the scheduled time of a scripts in a toolbox

        WSO2 Business Activity Monitor (WSO2 BAM) [1] is a fully open source, complete solution for monitoring and storing large amounts of business activity data and for understanding business activities within SOA and cloud deployments.

        WSO2 BAM comes with predefined set of toolboxes.

        A toolbox consists of several components:
        1. Stream definitions
        2. Analytics scripts
        3. Visualization components

        None of the above three components is mandatory.
        You can have a toolbox which has only stream definitions and analytics scripts but no visualization components.

        In WSO2 BAM, the toolbox always takes precedence. This means that if you manually change anything related to a component published via a toolbox, your change will be overridden once the server is restarted.

        If you update:
        1. Schedule time
        This will update the schedule time, but the newly updated value is only effective until the next restart. It will not be persisted; once the server is restarted, the schedule time reverts to the original value from the toolbox.

        2. Stream definition
        If you change anything related to a stream definition, it might cause consistency issues. When the server is restarted, it will find that a stream definition already exists with the given name but with different configurations, so an error will be logged.

        So it is highly discouraged to manually modify the components deployed via a toolbox.

        The recommended way to change anything associated with a toolbox is to:
        1. Unzip the toolbox.
        2. Make the necessary changes.
        3. Zip the files again.
        4. Rename the file to <toolbox_name>.tbox
        5. Redeploy the toolbox.

        So, if you need to change the scheduled time of the Service_Statistics_Monitoring toolbox,
        get a copy of the Service_Statistics_Monitoring.tbox file that resides in the [BAM_HOME]/repository/deployment/server/bam-toolbox directory.

        Unzip the file. Open the file Service_Statistics_Monitoring/analytics/

        Set the following configuration according to your requirement (the Quartz cron expression shown runs the script every 20 minutes):
        analyzers.scripts.script1.cron=0 0/20 * * * ?

        Create a zip file and change the name of the file to Service_Statistics_Monitoring.tbox.

        Then redeploy the toolbox.

        Now your change is embedded in the toolbox, and each time the toolbox is deployed it will have the modified value.
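    The edit itself boils down to changing one property. A minimal sketch of just that step, using a placeholder properties file, since the exact file name under Service_Statistics_Monitoring/analytics/ is not shown above:

```shell
# Stand-in for the unzipped toolbox's analytics folder (file name is a placeholder)
mkdir -p Service_Statistics_Monitoring/analytics
printf 'analyzers.scripts.script1.cron=0 0/2 * * * ?\n' \
    > Service_Statistics_Monitoring/analytics/script.properties

# Change the schedule to every 20 minutes
sed -i 's|^analyzers.scripts.script1.cron=.*|analyzers.scripts.script1.cron=0 0/20 * * * ?|' \
    Service_Statistics_Monitoring/analytics/script.properties

cat Service_Statistics_Monitoring/analytics/script.properties
```

After the edit you would zip the folder back up, rename it to Service_Statistics_Monitoring.tbox, and redeploy as described above.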


        Jayanga DissanayakePublishing WSO2 APIM Statistics to WSO2 BAM

        WSO2 API Manager (WSO2APIM) [1] is a fully open source, complete solution for creating, publishing and managing all aspects of an API and its lifecycle.

        WSO2 Business Activity Monitor (WSO2 BAM) [2] is a fully open source, complete solution for monitoring and storing large amounts of business activity data and for understanding business activities within SOA and cloud deployments.

        Users can use these two products together, which collectively gives total control over management and monitoring of APIs.

        In this post I'm going to explain how APIM stat publishing and monitoring happens in WSO2APIM and WSO2BAM.

        Configuring WSO2 APIM to publish statistics

        You can find more information on setting up statistics publishing in [3]. Once you have done the configuration, it should look like the following.


        <!-- Enable/Disable the API usage tracker. -->
        <BAMServerURL>tcp://<BAM host IP>:7614/</BAMServerURL>
        <!-- JNDI name of the data source to be used for getting BAM statistics. This data source should
        be defined in the master-datasources.xml file in conf/datasources directory. -->


        <description>The datasource used for getting statistics to API Manager</description>
        <definition type="RDBMS">
        <validationQuery>SELECT 1</validationQuery>

        Configuring WSO2 BAM

        You can find more information on setting up statistics publishing in [3].

        Note that you only need to copy API_Manager_Analytics.tbox into super tenant space. (No need to do any configuration in tenant space)

        The above diagram illustrates how stat data is published and eventually viewed through the APIM statistics view.

        1. Statistics information about APIs from all the tenants are published to the WSO2 BAM via a single data publisher.

        2. API_Manager_Analytics.tbox has the stream definitions and Hive scripts needed to summarize statistics. These Hive scripts are executed periodically, and the summarized data is pushed into an RDBMS.

        3. When you visit the statistics page in WSO2 APIM, it retrieves the summarized statistics from the RDBMS and shows them to you.

        Note: If you need to view the statistics of an API deployed in a particular tenant, log in to WSO2 APIM in that tenant and view the statistics page.
        (You don't need to do any additional configuration to support tenant specific statistics.)


        Niranjan KarunanandhamCreating a new Profile for WSO2 Products

        The logical grouping of a set of features is known as a Server Profile in WSO2 Carbon servers. All WSO2 products have at least one profile, the default profile, which consists of all the features in a particular product. Apart from the default profile, WSO2 Carbon servers can be run in multiple profiles.

        There can be a requirement where you want to run two profiles in the same instance. This can be done by creating a new profile using a Maven pom.xml and leveraging the carbon-p2-plugin. The prerequisites are Apache Maven 3.0.x and Apache Ant.

        Steps to create the pom.xml:
        1. Add the WSO2 Carbon Core zip of the Product you are creating the new profile for.
        2. Add the feature artifacts to generate the p2-repo
        3. Add the feature that you want to be grouped into the new profile.
        4. Create the carbon.product which is used to manage all aspects of the product.
        5. Now execute the "mvn clean install" command.
        6. This will create the "p2-repo" and "wso2carbon-core-<carbon kernel version>" in the target folder.
        7. Copy the newly created profile folder at target/wso2carbon-core-<carbon kernel version>/repository/components/<new profile name> and target/wso2carbon-core-<carbon kernel version>/repository/components/p2/org.eclipse.equinox/profileRegistry/<new profile name>.profile to the corresponding locations of the product in which you want to create the new profile.
        8. Now execute "./bin/ -Dprofile=<new profile name>" to start the product in the new profile.

        Sample new profile available for APIM 1.8.0.

        Darshana GunawardanaFREAK Vulnerability and Disabling weak export cipher suites in WSO2 Carbon 4.2.0 Based Products

        A group of researchers from Microsoft Research, INRIA and IMDEA have discovered a serious vulnerability in some SSL/TLS servers and clients that allows a man-in-the-middle attacker to downgrade the security of the SSL/TLS connection and gain access to all the encrypted data transferred between client and server.

        Web servers which support export ciphers are vulnerable to the FREAK attack. The attack is carried out in such a way that the attacker downgrades the connection to the web server from strong RSA to a (weak) export-grade RSA cipher, and gets a message signed with the weak RSA key. Quoting [2],

        Thus, if a server is willing to negotiate an export ciphersuite, a man-in-the-middle may trick a browser (which normally doesn’t allow it) to use a weak export key. By design, export RSA moduli must be less than 512 bits long; hence, they can be factored in less than 12 hours for $100 on Amazon EC2.

        Now you can understand the severity of this vulnerability. If you want to learn more about the FREAK attack, there are good references listed at the bottom of this post.

        Now let's have a look at the most important part: how to avoid FREAK in WSO2 Carbon products.

        In order to avoid FREAK vulnerability, the web server should avoid supporting weak export-grade RSA ciphers.

        How to disable weak export cipher suites in WSO2 Carbon 4.2.0 Based Products.

        The cipher set used in a Carbon server is defined by the embedded Tomcat server, so this configuration should be done in “catalina-server.xml”.

        1. Open the <CARBON_HOME>/repository/conf/tomcat/catalina-server.xml file.
        2. Find the Connector configuration corresponding to TLS. Usually there are only two connector configurations, and the connector corresponding to TLS has the connector property SSLEnabled=”true”.
        3. Add new property “ciphers” inside the TLS connector configurations with the value as follows,
          • If you are using tomcat version 7.0.34,
          • If you are using tomcat version 7.0.59,
        4. Restart the server.

        Now to verify whether all configurations are set correctly, you can run TestSSLServer.jar which can be downloaded from here.

        $ java -jar TestSSLServer.jar localhost 9443

        In the output you get by running the above command, there is a section called “Supported cipher suites”.

        Supported cipher suites (ORDER IS NOT SIGNIFICANT):
        (TLSv1.0: idem)

        If all configurations are set correctly, it should not contain any export ciphers (like “RSA_EXPORT_WITH_RC4_40_MD5”, “RSA_EXPORT_WITH_DES40_CBC_SHA” or “DHE_RSA_EXPORT_WITH_DES40_CBC_SHA”). The output should be similar to the above.

        The cipher suites supported by a Carbon product are defined by the Java version the server is running on, hence the output you see might differ from the above. The bottom line is that the list should not contain any cipher with “_EXPORT_” in its name.
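    That final check can be automated with a simple grep. The sketch below runs against a saved sample of the scanner output; in practice you would first redirect the real output, e.g. `java -jar TestSSLServer.jar localhost 9443 > scan.txt`:

```shell
# Simulated scanner output (stand-in for the real TestSSLServer.jar output)
cat > scan.txt <<'EOF'
Supported cipher suites (ORDER IS NOT SIGNIFICANT):
  TLSv1.0
     RSA_WITH_AES_128_CBC_SHA
     RSA_WITH_AES_256_CBC_SHA
EOF

# Fail loudly if any export-grade cipher is still advertised
if grep -q "_EXPORT_" scan.txt; then
  echo "WARNING: export ciphers still enabled"
else
  echo "No export ciphers found"
fi
```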

        References :

        [1] Washingtonpost article on FREAK :

        [2] “SMACK: State Machine AttaCKs” :

        [3] “Attack of the week: FREAK (or ‘factoring the NSA for fun and profit’)” :

        [4] “Akamai Addresses CVE 2015-0204 Vulnerability” :

        Nadeeshaan GunasingheIntegrating WSO2 ESB Connectors in real world integration Scenarios

        Consider for a moment some of the third-party services we use daily - Twitter, for example. As developers, we usually use APIs to connect to these services. WSO2 ESB Connectors allow you to interact with such third-party APIs from the ESB message flow; with a connector you can access an API and interact with its exposed functionality.

        E.g.: with the Twitter connector you can access the Twitter API and retrieve the tweets of a user.

        At the moment you can find more than a hundred connectors in the connector repo, and you can build each connector simply with the Maven build tool.

        Working with a single connector is fairly simple, but real scenarios rarely stop there. Consider a scenario in which we communicate with two such third-party APIs: such integrations are more complex to handle and messier to configure. With WSO2 ESB Connectors you can easily build such integrations for real-world and business scenarios.

        In this blog post, let's have a look at one such real-world scenario, which addresses an important project-management task.

        Integration Scenario with RedMine, Google Spreadsheet and Gmail.

        In this integration scenario my objective is to track overdue Redmine assignments for a particular user (or all users), retrieve that information, send an email notification, and log the entries to a Google spreadsheet.

        In order to accomplish this task we need the Redmine connector, the Google Spreadsheet connector and the Gmail connector. After building the connectors, and before you write the configuration, you need to upload them to the ESB. Make sure to enable each connector after uploading it.

        Enable Connectors

        Configuring connectors

        Gmail Connector
        <username>your_user_name</username> <oauthAccessToken>your_access_token</oauthAccessToken>

        Redmine Connector

        Google Spreadsheet Connector

        Note: For development purposes you can use the Google OAuth 2.0 Playground to get the access tokens for the Gmail and Google Spreadsheet connectors.

        <?xml version="1.0" encoding="UTF-8"?>
        <definitions xmlns="">
            <registry provider="org.wso2.carbon.mediation.registry.WSO2Registry">
                <parameter name="cachableDuration">15000</parameter>
            </registry>
            <taskManager provider="org.wso2.carbon.mediation.ntask.NTaskTaskManager"/>
            <import name="gmail" package="org.wso2.carbon.connector" status="enabled"/>
            <import name="redmine" package="org.wso2.carbon.connector" status="enabled"/>
            <import name="evernote" package="org.wso2.carbon.connector" status="enabled"/>
            <import name="googlespreadsheet" package="org.wso2.carbon.connectors" status="enabled"/>
            <proxy name="evernote_gtask_proxy" transports="https http" startOnLoad="true" trace="disable">
                <description/>
                <target>
                    <inSequence onError="faultHandlerSeq">
                        <redmine.listIssues>
                            <statusId>*</statusId>
                            <limit>2</limit>
                            <assignedToId>me</assignedToId>
                        </redmine.listIssues>
                        <property name="cur_date" expression="get-property('SYSTEM_DATE', 'yyyy-MM-dd')" scope="default"/>
                        <iterate continueParent="true" preservePayload="true" attachPath="//issues" expression="//issues/issue">
                            <target>
                                <sequence>
                                    <log level="custom">
                                        <property name="ITERATOR" value="Iterating Over the redmine feature list"/>
                                    </log>
                                    <property name="issue-id" expression="//issues/issue/id"/>
                                    <property name="project-name" expression="//issues/issue/id/@name"/>
                                    <property name="description" expression="//issues/issue/description"/>
                                    <property name="due-date" expression="//issues/issue/due_date"/>
                                    <!-- Mark the issue as due only when the current date is past the due date -->
                                    <script language="js">
                                        var current_date = mc.getProperty("cur_date").split("-");
                                        var due_date = mc.getProperty("due_date");
                                        if (due_date === null) {
                                            mc.setProperty("is_due", "false");
                                        } else {
                                            var due_date_arr = due_date.split("-");
                                            var due_date_obj = new Date(due_date_arr[0], due_date_arr[1], due_date_arr[2]);
                                            var cur_date_obj = new Date(current_date[0], current_date[1], current_date[2]);
                                            if (cur_date_obj &gt; due_date_obj) {
                                                mc.setProperty("is_due", "true");
                                            } else {
                                                mc.setProperty("is_due", "false");
                                            }
                                        }
                                    </script>
                                    <gmail.init>
                                        <username>username</username>
                                        <oauthAccessToken>access_token</oauthAccessToken>
                                    </gmail.init>
                                    <filter source="get-property('is_due')" regex="true">
                                        <then>
                                            <gmail.sendMail>
                                                <subject>Subject</subject>
                                                <toRecipients>recipients</toRecipients>
                                                <textContent>Email_Body</textContent>
                                            </gmail.sendMail>
                                        </then>
                                    </filter>
                                </sequence>
                            </target>
                        </iterate>
                        <!-- Build a CSV payload from the overdue issues -->
                        <script language="js">
                            var current_date = mc.getProperty("cur_date").split("-");
                            var issues = mc.getPayloadXML().issue;
                            var returnCsv = "Issue_ID,Project_Name,Due_Date\n";
                            for (i = 0; i &lt; issues.length(); i++) {
                                var id = issues[i].id;
                                var name = issues[i].project.@name;
                                var due_date = issues[i].due_date;
                                if (due_date != null) {
                                    var due_date_arr = due_date.split("-");
                                    var due_date_obj = new Date(due_date_arr[0], due_date_arr[1], due_date_arr[2]);
                                    var cur_date_obj = new Date(current_date[0], current_date[1], current_date[2]);
                                    if (cur_date_obj &gt; due_date_obj) {
                                        mc.setProperty("task_due", "true");
                                        returnCsv = returnCsv + id + "," + name + "," + due_date + "\n";
                                    } else {
                                        mc.setProperty("task_due", "false");
                                    }
                                }
                            }
                            mc.setPayloadXML(&lt;text&gt;{returnCsv}&lt;/text&gt;);
                        </script>
                        <log level="full"/>
                        <googlespreadsheet.oAuth2init>
                            <oauthConsumerKey>consumer_key</oauthConsumerKey>
                            <oauthConsumerSecret>consumer_secret</oauthConsumerSecret>
                            <oauthAccessToken>access_token</oauthAccessToken>
                            <oauthRefreshToken>refresh_token</oauthRefreshToken>
                        </googlespreadsheet.oAuth2init>
                        <filter source="get-property('task_due')" regex="true">
                            <then>
                                <googlespreadsheet.importCSV>
                                    <spreadsheetName>spread_sheet_name</spreadsheetName>
                                    <worksheetName>work_sheet_name</worksheetName>
                                    <batchEnable>true</batchEnable>
                                    <batchSize>10</batchSize>
                                </googlespreadsheet.importCSV>
                            </then>
                        </filter>
                        <respond/>
                    </inSequence>
                    <outSequence>
                        <log level="full"/>
                        <send/>
                    </outSequence>
                </target>
            </proxy>
            <localEntry key="csv_transform">
                <xsl:stylesheet xmlns:xsl="" version="1.0">
                    <xsl:output method="text" indent="no" encoding="UTF-8"/>
                    <xsl:template match="/">
                        Issue_ID,Project_Name,Due_Date
                        <xsl:value-of select="//issues/issue/id"/>
                        <xsl:text>,</xsl:text>
                        <xsl:value-of select="//issues/issue/project/@name"/>
                        <xsl:text>,</xsl:text>
                        <xsl:value-of select="//issues/issue/due_date"/>
                    </xsl:template>
                </xsl:stylesheet>
                <description/>
            </localEntry>
            <sequence name="fault">
                <log level="full">
                    <property name="MESSAGE" value="Executing default 'fault' sequence"/>
                    <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
                    <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
                </log>
                <drop/>
            </sequence>
            <sequence name="main">
                <in>
                    <log level="full"/>
                    <filter source="get-property('To')" regex="http://localhost:9000.*">
                        <send/>
                    </filter>
                </in>
                <out>
                    <send/>
                </out>
                <description>The main sequence for the message mediation</description>
            </sequence>
        </definitions>

        According to the above configuration, we first get the relevant Redmine issues via the Redmine connector. Then we use the iterate mediator to iterate through the result list. Depending on the result, the script mediator extracts the required data fields and appends the data to the payload accordingly. Finally, the Google Spreadsheet connector writes the data to the spreadsheet.

        Get the Connectors Here

        Chathurika Erandi De SilvaBits of being a QA

        The job of a QA is never easy: you have to be the middle man between the developer and the client. It is a responsible job, similar to a double-edged blade.

        In order to make the life of a QA easier, I have come up with the following points, based entirely on my experience.

        1. Know what you are testing

        It's very important for a QA to know what's being tested. As the QA has to put himself or herself in the client's shoes, it is crucial for the QA to know the functionality of the system or product entirely. It's very important to think out of the box and not limit yourself. As an example, if you are testing a product with a lot of configurations, it is very important to know why the configurations are there and how they will work.

        2. Plan what you test

        Don't try to be smart by simply starting to test as soon as the QA cycle begins. Plan the test. After getting the functionality into your head, build diagrams, etc. to help plan the test. Design the flow in which you will be testing, and set priorities for the tests so that your flow will be continuous and smooth.

        3. Derive test cases

        Maybe your company doesn't hand you test cases and you have to derive them yourself. Test cases are simply what the client does and expects from the system, so derive them yourself - making sure you think as the end user and cover the functionality.

        4. Adjust the test cases

        Adjust your test cases so that they adhere to the flow that you have created. That way testing will be quick and easy, and you will be able to find defects quickly.

        5. Execute the tests by being in customer's shoes.

        Be the customer for the product, before it reaches the real users.

        Happy Testing !!!!! :)

        Kalpa WelivitigodaHow to move a XML element to be the first child of the parent element

        I was using the enrich mediator in WSO2 ESB to add a child to a parent element in the payload. The new element got added as the last child, where I wanted it first. I tried action="sibling", but as per the Synapse code it also adds the element after the target element [1]. So I decided to make use of XSLT; following is a sample stylesheet along with the input and expected messages.

        Input XML
        <data>
            <B>value 1</B>
            <C>value 1</C>
            <D>value 1</D>
            <A>value 1</A>
        </data>
        Expected XML
        <data>
            <A>value 1</A>
            <B>value 1</B>
            <C>value 1</C>
            <D>value 1</D>
        </data>
        <xsl:stylesheet xmlns:xsl="" version="1.0">
            <xsl:output indent="yes"/>
            <xsl:strip-space elements="*"/>
            <!-- Identity template: copy everything as-is -->
            <xsl:template match="node() | @*">
                <xsl:copy>
                    <xsl:apply-templates select="node() | @*"/>
                </xsl:copy>
            </xsl:template>
            <!-- Drop A from its original position -->
            <xsl:template match="//A"/>
            <!-- Re-insert A as the first child of the parent element -->
            <xsl:template match="//data">
                <xsl:copy>
                    <xsl:copy-of select="//A"/>
                    <xsl:apply-templates/>
                </xsl:copy>
            </xsl:template>
        </xsl:stylesheet>

        Kalpa WelivitigodaDate time format conversion with XSLT mediator in WSO2 ESB

        I recently came across a requirement where an xsd:dateTime in the payload needed to be converted to a different date-time format as follows:

        Original format : 2015-01-07T09:30:10+02:00
        Required date: 2015/01/07 09:30:10
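        Since the required format keeps the original wall-clock time and only reshapes the text (dropping the zone offset), the expected result can be sanity-checked outside the ESB with a plain text substitution:

```shell
original="2015-01-07T09:30:10+02:00"

# Reshape yyyy-MM-ddTHH:mm:ss... into yyyy/MM/dd HH:mm:ss, discarding the offset
converted=$(echo "$original" | sed -E 's|^([0-9]{4})-([0-9]{2})-([0-9]{2})T([0-9:]{8}).*|\1/\2/\3 \4|')

echo "$converted"
```

This is only a quick verification aid; the post's actual mediation-time conversion is done with the XSLT mediator below.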

        In WSO2 ESB, I found that this transformation can be achieved through an XSLT mediator, a class mediator or a script mediator. In overview, the XSLT mediator uses an XSL stylesheet to format the XML payload passed to the mediator, whereas in the class mediator and script mediator we use Java code and JavaScript code respectively to manipulate the message context. In this blog post I am going to present how this transformation can be achieved by means of the XSLT mediator.

        XSL Stylesheet
        <?xml version="1.0" encoding="UTF-8"?>
        <localEntry xmlns="" key="dateTime.xsl">
            <xsl:stylesheet xmlns:xsl="" xmlns:xs="" version="2.0">
                <xsl:output method="xml" omit-xml-declaration="yes" indent="yes"/>
                <xsl:param name="date_time"/>
                <xsl:template match="/">
                    <xsl:value-of select="format-dateTime(xs:dateTime($date_time), '[Y0001]/[M01]/[D01] [H01]:[m01]:[s01] [z]')"/>
                </xsl:template>
            </xsl:stylesheet>
            <description/>
        </localEntry>

        Proxy configuration
        <?xml version="1.0" encoding="UTF-8"?>
        <proxy xmlns="" xmlns:xs="" name="DateTimeTransformation" transports="https http" startOnLoad="true" trace="disable">
            <target>
                <inSequence>
                    <property name="originalFormat" expression="$body/dateTime/original"/>
                    <xslt key="dateTime.xsl">
                        <property name="date_time" expression="get-property('originalFormat')"/>
                    </xslt>
                    <log level="full"/>
                </inSequence>
            </target>
        </proxy>

        The dateTime.xsl XSL stylesheet is stored as an inline XML local entry in the ESB.

        In the proxy, the original date is passed as a parameter ("date_time") to the XSL stylesheet. I have used the format-dateTime function, an XSLT 2.0 function, to do the transformation.

        Sample request
        <?xml version="1.0" encoding="UTF-8"?>
        <soap:Envelope xmlns:soap="">
        <soap:Header />

        Console output
        <?xml version="1.0" encoding="UTF-8"?>
        <soap:Envelope xmlns:soap="">
        <dateTime xmlns="" xmlns:xs="">
        <required>2015/01/07 09:30:10 GMT+2</required>

        Kalpa WelivitigodaWorkaround for absolute path issue in SFTP in VFS transport

        In WSO2 ESB, the VFS transport can be used to access an SFTP file system. The issue is that we cannot use absolute paths with SFTP; this affects WSO2 ESB 4.8.1 and prior versions. The reason is that SFTP uses SSH to log in, and by default it logs into the user's home directory, so the path specified is considered relative to the user's home directory.

        For example, consider the VFS URL below.
        The requirement is to refer to /myPath/file.xml, but it will refer to /home/kalpa/myPath/file.xml, where /home/kalpa is the user's home directory.

        To overcome this issue we can create a mount for the desired directory inside the home directory of the user on the FTP file system. Considering the example above, we can create the mount as follows:
        mount --bind /myPath /home/kalpa/myPath
        With this, the VFS URL above will actually refer to /myPath/file.xml.

        Chandana NapagodaConfigure External Solr server with Governance Registry

        In WSO2 Governance Registry 5.0.0, we have upgraded the Apache Solr version to the 5.0.0 release. With that, you can connect WSO2 Governance Registry to an external Solr server or Solr cluster. External Solr integration provides comprehensive administration interfaces, high scalability and fault tolerance, easy monitoring, and many more Solr capabilities.

        Let me explain how you can connect WSO2 Governance Registry server with an external Apache Solr server.

1). First, you have to download Apache Solr 5.x.x from the Apache Solr website.
Please note that we have verified this with the Solr 5.0.0, 5.2.0 and 5.2.1 versions only.

2). Then unzip the Solr zip file. Once unzipped, its contents will look like the below.

The bin folder contains the scripts to start and stop the server. Before starting the Solr server, you have to make sure the JAVA_HOME variable is set properly. Apache Solr ships with an inbuilt Jetty server.

3). You can start the Solr server by issuing the "solr start" command from the bin directory. Once the Solr server has started properly, the following message will be displayed in the console: "Started Solr server on port 8983 (pid=5061). Happy searching!"

By default, the server starts on port 8983, and you can access the Solr admin console by navigating to "http://localhost:8983/solr/".

4). Next, you have to create a new Solr Core (in SolrCloud mode this is called a Collection) which will be used by the WSO2 Governance Registry. To create a new core, execute the "solr create -c registry-indexing -d basic_configs" command from the Solr bin directory. This will create a new Solr core named "registry-indexing".

5). After creating the "registry-indexing" Solr core, you can see it in the Solr admin console as below.

6). To integrate the newly created Solr core with WSO2 Governance Registry, you have to modify the registry.xml file located in the <greg_home>/repository/conf directory. There you have to add "solrServerUrl" under indexingConfiguration as follows.

    <indexingConfiguration>
        <!-- This defines the index configuration which is used in the metadata search feature of the registry -->
        <solrServerUrl>http://localhost:8983/solr/registry-indexing</solrServerUrl>
        <!-- number of resources submitted for a given indexing thread -->
        <!-- number of worker threads for indexing -->
    </indexingConfiguration>

7). After completing the external Solr configuration as above, start the WSO2 Governance Registry server. If you have configured the external Solr integration properly, you will notice the log message below in the Governance Registry server startup logs (wso2carbon log).

        [2015-07-11 12:50:22,306] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Http Sorl server initiated at: http://localhost:8983/solr/registry-indexing

Further, you can view the indexed data by querying via the Solr admin console as well.

        Happy Indexing and Searching...!!!

Note: You can download the G-Reg 5.0.0 Alpha pack from the Product-Greg GitHub repo.

        Lali DevamanthriACES Hackathon 2015

The ACES Hackathon is a coding event that lasts three days, held annually at the Faculty of Engineering, University of Peradeniya. Yesterday (10th July) the 2015 Hackathon was launched, and I participated as a mentor.

The ACES Hackathon has proven to be a well-established and perfectly organized event that aims to bring together a large number of undergraduates and engage them in the development of creative and innovative IT solutions.

“ACES Hackathon 2015″ is the 4th event of its kind to be organized by ACES, following the success of ACES Hackathon 2014. This year ACES is planning to take the hackathon to the next level by calling for contributions from mentors, industry experts who help the teams out during the event. The main purpose behind this event is to bring fresh talent to the IT industry and provide motivation and experience to a large number of participants.

Yesterday was the first day of the hackathon, dedicated to pitching the students' ideas. The students came up with creative ideas in a short time. The following are some of the ideas that will be implemented over the weekend.

        1. CardioLab – [Software] – by Wijethunga W.M.D.A
        2. PIN POTHA- [Software] – by Wickramarachchi A.O
        3. Multiplayer “omi” card game – [Software] by Rajapakse K. G. De A.
        4. Smart DriveMate – The Driver’s best friend.- [software] by Geesara Prathap Kulathunga
        5. Unified web platform for consumer market [software] by Jayawardena J.L.M.M
        6. Android app: Auto message – [software] by Balakayan k
        7. Local Responsive Cloud Storage and a Service Platform – [Networks and Systems] by Nanayakkara NBUS
        8. Analytical Tool for Cluster Resources – [Networks and Systems] by Nanayakkara NBUS
        9. Algorithms runner – [Networks and Systems] by Nanayakkara NBUS
        10. Trust management API – [Networks and Systems] by Nanayakkara NBUS
        11. Location based trigger reminders – [Software] by Shantha E.L.W.
        12. Wifi data transfer – [Software] by Ukwattage U.A.I
        13. Fly healthy – [Software] by Dias E.D.L.
        14. Attendance marking with image processing- [Software] by Titus Nanda Kumara
        15. Centralize the photocopy center with internet – Network by Titus Nanda Kumara
        16. URL shortening service with lk domain [network] by Titus Nanda Kumara
        17. Video tutorials and past paper sharing web space by Titus Nanda Kumara
        18. Online Judging System by Mudushan Weerawardhana
        19. Smart reload by Prasanna Rodrigo

ACES alumni members will visit the engineering faculty and mentor the students in implementing these ideas. Leading IT organizations are also sponsoring the event.

        Happy coding guys!

        Evanthika AmarasiriDealing with "HTTP 502 Bad Gateway"

Recently, while configuring WSO2 products with NginX, we were struggling to connect to the management console of the product since it was returning an "HTTP 502 Bad Gateway" error. After doing some research, we found out that it was a problem with SELinux.

There are two SELinux boolean settings available by default. One of them is httpd_can_network_connect, which allows httpd to connect to anything on the network.

        So to enable this, I executed the below command and everything worked well.

        sudo setsebool -P httpd_can_network_connect 1
However, there are other similar booleans that can be enabled besides this one, and they are explained in detail in [1].

        [1] -

Dhananjaya jayasingheHow to deal with "java.nio.charset.MalformedInputException: Input length = 1" in WSO2 ESB

Sometimes we receive unusual characters in responses from various back ends. WSO2 ESB then has problems understanding those characters in the response and tends to throw the following exception.

        [2015-07-09 12:42:49,651] ERROR - TargetHandler I/O error: Input length = 1
        java.nio.charset.MalformedInputException: Input length = 1
        at java.nio.charset.CoderResult.throwException(
        at org.apache.http.impl.nio.reactor.SessionInputBufferImpl.readLine(
        at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(
        at org.apache.synapse.transport.http.conn.LoggingNHttpClientConnection$LoggingNHttpMessageParser.parse(
        at org.apache.synapse.transport.http.conn.LoggingNHttpClientConnection$LoggingNHttpMessageParser.parse(
        at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(
        at org.apache.synapse.transport.http.conn.LoggingNHttpClientConnection.consumeInput(
        at org.apache.synapse.transport.passthru.ClientIODispatch.onInputReady(
        at org.apache.synapse.transport.passthru.ClientIODispatch.onInputReady(
        at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$

In most cases, adding the following property to the "" file will solve the problem.

E.g., once I was getting a response like the following:

        [2015-07-09 12:47:55,236] DEBUG - wire >> "HTTP/1.1 100 Continue[\r][\n]"
        [2015-07-09 12:47:55,237] DEBUG - wire >> "[\r][\n]"
        [2015-07-09 12:47:55,274] DEBUG - wire >> "HTTP/1.1 201 Cr[0xe9]e[\r][\n]"
        [2015-07-09 12:47:55,274] DEBUG - wire >> "Set-Cookie: JSESSIONID=87101C43FCABBB97D049C0F0C8DD216D; Path=/api/; Secure; HttpOnly[\r][\n]"
        [2015-07-09 12:47:55,274] DEBUG - wire >> "Location:[\r][\n]"
        [2015-07-09 12:47:55,274] DEBUG - wire >> "Content-Type: application/json[\r][\n]"
        [2015-07-09 12:47:55,274] DEBUG - wire >> "Transfer-Encoding: chunked[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "Vary: Accept-Encoding[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "Date: Thu, 09 Jul 2015 16:47:55 GMT[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "Server: qa-byp[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "1f[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "{"person":john}[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "0[\r][\n]"
        [2015-07-09 12:47:55,275] DEBUG - wire >> "[\r][\n]"

Then it threw the above exception.

By adding the above entry as follows, I could get rid of that.
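To see why the stray byte upsets the transport, here is a minimal standalone Java sketch (not ESB code) that reproduces the exception with the 0xe9 byte seen in the wire log:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.MalformedInputException;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    // "Cré" encoded in ISO-8859-1: the lone 0xE9 byte is not a valid UTF-8 sequence
    static final byte[] LATIN1_BYTES = {0x43, 0x72, (byte) 0xE9};

    // A strict decoder (reporting malformed input) rejects the stray byte
    // with the same exception class that appears in the ESB log above
    static boolean strictUtf8Fails() {
        try {
            StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .decode(ByteBuffer.wrap(LATIN1_BYTES));
            return false;
        } catch (CharacterCodingException e) {
            return e instanceof MalformedInputException;
        }
    }

    // Decoding with the charset the backend actually used works fine
    static String decodeAsLatin1() {
        return StandardCharsets.ISO_8859_1.decode(ByteBuffer.wrap(LATIN1_BYTES)).toString();
    }

    public static void main(String[] args) {
        System.out.println("strict UTF-8 decode throws MalformedInputException: " + strictUtf8Fails());
        System.out.println("ISO-8859-1 decode: " + decodeAsLatin1());
    }
}
```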


Dhananjaya jayasingheසිතුවිල්ල (A Thought)

Back then I waited impatiently at the bus stand,
watching until you stepped off the bus in your white dress...
Now, to see my little son's adorable smile,
I sit watching the clock's hands turn until it's time to go home..
Time... what a wonder you are..

        Danushka FernandoWSO2 Products - How User Stores work

Today we keep our users and profiles in several forms: sometimes in an LDAP, sometimes in Active Directory (AD), sometimes in databases, and so on. WSO2 products are written so that any of these formats can be supported. If someone has their own way of storing users, they can easily plug it into WSO2 products by writing a custom user store. In this post I will explain how these user stores work and the other components connected to them.

        When we discuss about user management in WSO2 world, there are several key components. They are
        1. User Store Manager
2. Authorization Manager
        3. Tenant Manager
In simple user management we need to authorize a user for some action / permission. Normally we group these actions / permissions and assign these groups / roles to users. So there are two kinds of mappings that we need to consider:
1. User to Role Mapping
2. Role to Permission Mapping
The User to Role Mapping is managed by the user store implementation, and the Role to Permission Mapping is managed by the authorization manager implementation. These are configured in the configuration file at [1].
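As a rough sketch of how the two mappings combine during an authorization check (the user names, role names and permission strings are made up for illustration; this is not the WSO2 API):

```java
import java.util.Map;
import java.util.Set;

public class PermissionModelDemo {
    // User -> Roles (the kind of mapping a user store manages) and
    // Role -> Permissions (the kind of mapping an authorization manager manages)
    static final Map<String, Set<String>> USER_ROLES = Map.of(
            "alice", Set.of("admin"),
            "bob", Set.of("subscriber"));
    static final Map<String, Set<String>> ROLE_PERMS = Map.of(
            "admin", Set.of("/permission/admin/manage", "/permission/admin/login"),
            "subscriber", Set.of("/permission/admin/login"));

    // An authorization check resolves user -> roles -> permissions
    static boolean isAuthorized(String user, String permission) {
        for (String role : USER_ROLES.getOrDefault(user, Set.of())) {
            if (ROLE_PERMS.getOrDefault(role, Set.of()).contains(permission)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isAuthorized("alice", "/permission/admin/manage"));
        System.out.println(isAuthorized("bob", "/permission/admin/manage"));
    }
}
```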

The Tenant Manager comes into play when Multi Tenancy is considered. It is configured in [2]. Let's discuss this later.

        User Management

By default, WSO2 products (except WSO2 Identity Server) store everything in a database, using [3] as the user store manager implementation. WSO2 Identity Server ships with an internal LDAP, and users are stored in that LDAP, so it uses [4] as the User Store Manager implementation. By default, all WSO2 servers use the database to store the Role to Permission Mapping, with [5] as the authorization manager implementation.

I will explain the WSO2 Identity Server case since it contains most of the elements. Since it has an LDAP user store, all users, all roles and all user-to-role mappings in the system are saved in the LDAP. The user store manager implementation is based on the interface [6]. WSO2 products contain an abstract user store implementation [7] in which most of the common logic is already done, with extension points provided for plugging in external implementations. It is recommended to always use [7] as the base when writing a user store manager. All users, roles and user-role mappings are managed through [4], i.e. in the LDAP, while all role-to-permission mappings are persisted in the database and handled via [5].

        Figure 1 : User Stores and Permission Stores

        Figure 1 is about the relationships among users, roles and permissions and where they are stored and who is handling them.

        Multi tenancy

Let's get into multi tenancy now. Some of you may already know what multi tenancy is. For the benefit of the others: in multi tenancy we create a space (a tenant) which is isolated from everything else, and nobody outside the tenant knows of its existence.

WSO2 products support completely isolated tenants out of the box. Each tenant has its own artifact space, registry space and user store (we call this the user realm). Figure 2 graphically explains this story.

        Figure 2 : Each Tenant having their own user store

Since there could be lots of tenants in the system, WSO2 products won't load all the tenants into memory. A tenant is loaded into memory when it becomes active, and when it has been idle for some time it gets unloaded. When a tenant is loaded, the registry, user realm and artifacts belonging to that tenant are loaded.

The user realm contains all the users, roles, permissions and their mappings. The realm gets loaded by the implementation of [8] that we mention in [2]. So by providing your own implementation you can plug your tenant structure into WSO2 products.

        In next post I will explain plugging a custom LDAP structure to WSO2 Products.

        [1] $CARBON_HOME/repository/conf/user-mgt.xml
        [2] $CARBON_HOME/repository/conf/tenant-mgt.xml
        [3] org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager
        [4] org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager
        [5] org.wso2.carbon.user.core.authorization.JDBCAuthorizationManager
        [8] org.wso2.carbon.user.core.config.multitenancy.MultiTenantRealmConfigBuilder

        Ushani BalasooriyaEnable email login in WSO2 carbon products

To enable login with an email address, the steps below can be followed in any Carbon product.

1. Enable EnableEmailUserName in carbon.xml


2. Then provide the correct regex to allow email addresses in the user store configuration in user-mgt.xml for the JDBC user store

            <Property name="UsernameJavaRegEx">[a-
3. Create the admin user with an email address in user-mgt.xml.

With the above configuration, email-address login is enabled.

If you want to support both email addresses and usernames, you can include the property below in the user store configuration.

        4.  <Property name="
To learn how to do this for an LDAP, refer to this well-explained blog post [1] done for Identity Server, which is applicable to other Carbon products as well. This document also explains the properties [2].
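As an illustration only (the actual UsernameJavaRegEx value is truncated above, and the real one may differ), a Java-regex check for an email-style username could look like:

```java
import java.util.regex.Pattern;

public class EmailUsernameDemo {
    // Hypothetical UsernameJavaRegEx-style pattern allowing email-style usernames;
    // the real value configured in user-mgt.xml may differ
    static final Pattern EMAIL =
            Pattern.compile("[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}");

    static boolean isValid(String username) {
        return EMAIL.matcher(username).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("admin@wso2.com")); // email-style name accepted
        System.out.println(isValid("admin"));          // plain name rejected by this pattern
    }
}
```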

        Dhananjaya jayasingheWSO2 ESB API with JMS Queues - HTTP GET

We are going to discuss how we can handle the HTTP GET method with JMS proxy services in WSO2 ESB, using the following message flow.

The flow is as follows:

• The client invokes an API defined in the ESB
• The API sends the message to a JMS queue, specifying a "ReplyTo" queue
• A JMS proxy consumes the message from the above queue
• After consuming the message, it invokes the backend, and the response is sent back to the JMS "ReplyTo" queue
• The REST API consumes the message from that "ReplyTo" queue and sends the response back to the client
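The request/reply correlation in the steps above can be sketched with in-memory queues standing in for the two JMS destinations (a toy model with a stubbed backend, not the ESB implementation):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class JmsRequestReplyDemo {
    // In-memory stand-ins for the request queue (SMSStore) and the "ReplyTo" queue
    static String roundTrip(String request) throws InterruptedException {
        BlockingQueue<String> requestQueue = new LinkedBlockingQueue<>();
        BlockingQueue<String> replyQueue = new LinkedBlockingQueue<>();

        // The JMS proxy: consume the message, invoke the (stubbed) backend,
        // and put the response on the ReplyTo queue
        Thread proxy = new Thread(() -> {
            try {
                String msg = requestQueue.take();
                replyQueue.put("backend-response-for:" + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        proxy.start();

        requestQueue.put(request);        // the API puts the client request on the queue
        String reply = replyQueue.take(); // and blocks on the ReplyTo queue for the answer
        proxy.join();
        return reply;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(roundTrip("GET /getQuote/details"));
    }
}
```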

In order to have this flow working, you need to set up ActiveMQ with WSO2 ESB. You can find the steps in this documentation page.

Let's see the API:

<api xmlns="" name="WeatherAPI" context="/getQuote">
   <resource methods="POST GET" uri-template="/details?*">
      <inSequence>
         <property name="transport.jms.ContentTypeProperty" value="Content-Type" scope="axis2"/>
         <property name="httpMethod" expression="get-property('axis2','HTTP_METHOD')"/>
         <property name="HTTP_METHOD" expression="get-property('axis2','HTTP_METHOD')" scope="transport" type="STRING"/>
         <property name="TRANSPORT_HEADERS" scope="axis2" action="remove"/>
         <send>
            <endpoint>
               <address uri="jms:/SMSStore?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&amp;java.naming.provider.url=tcp://localhost:61616&amp;transport.jms.DestinationType=queue&amp;transport.jms.ContentTypeProperty=Content-Type&amp;transport.jms.ReplyDestination=SMSReceiveNotificationStore"/>
            </endpoint>
         </send>
      </inSequence>
   </resource>
</api>

In the above API, there are a few things to focus on.
• You can see that we are setting HTTP_METHOD to a property extracted from the incoming message, as follows. By default, API invocation works perfectly for HTTP POST without setting this property. But if we are doing an HTTP GET, we need the HTTP method at the consumer side of the queue in order to invoke the back end with an HTTP GET, so we set it here.

        <property name="HTTP_METHOD" expression="get-property('axis2','HTTP_METHOD')" scope="transport" type="STRING"></property>

• In the endpoint of the send mediator we are setting the following information, which you should have:
  • transport.jms.ContentTypeProperty=Content-Type - we are saying that we pass the content type in the JMS transport
  • transport.jms.ReplyDestination=SMSReceiveNotificationStore - we are saying that the ESB expects a response on this queue for this request

Then the JMS proxy in which we consume the message will look like the following.

<proxy xmlns="">
   <target faultSequence="fault">
      <inSequence>
         <property name="HTTP_METHOD" expression="$trp:HTTP_METHOD" scope="axis2"/>
         <send>
            <endpoint>
               <address uri="http://localhost:8080/foo"/>
            </endpoint>
         </send>
      </inSequence>
   </target>
   <parameter name="transport.jms.ContentType"></parameter>
   <parameter name="transport.jms.ConnectionFactory">myQueueConnectionFactory</parameter>
   <parameter name="transport.jms.DestinationType">queue</parameter>
   <parameter name="transport.jms.Destination">SMSStore</parameter>
</proxy>

        In the above proxy configuration, you can see following line

         <property name="HTTP_METHOD" expression="$trp:HTTP_METHOD" scope="axis2"/>

What we do here is read the HTTP_METHOD that we set in the API before sending the message to the JMS queue: we read it from the transport-level property and set it to the "HTTP_METHOD" property in the "axis2" scope.

With that, the proxy is able to figure out the HTTP method to use when invoking the actual endpoint.

After getting a response, this proxy sends the reply to the "SMSReceiveNotificationStore" queue. It is then consumed by the API above, and the response is sent back to the client.

This works perfectly for HTTP POST requests. If it is an HTTP GET request, you need to make some additional changes on the client side and the server side.

        Client Side Changes

        • We need to pass the content type from the client side as follows. You can set it as a header in SOAP UI
               contentType:  application/x-www-form-urlencoded

        Server Side Changes
• You need to change the builder and formatter for the above content type in the axis2.xml file located in the WSO2ESB/repository/conf/axis2 directory, as follows.


        <messageFormatter contentType="application/x-www-form-urlencoded"


        <messageBuilder contentType="application/x-www-form-urlencoded"


If you do not set the above, there will be message building and formatting errors.

This way you'll be able to get this working for HTTP GET methods.

This was tested with WSO2 ESB 4.8.1 with its latest patches and Apache ActiveMQ 5.10.0.

        Manoj KumaraInstall Citrix Receiver on Ubuntu 14.04

Last week I needed to install Citrix on my Ubuntu machine to access a remote server until I got the permissions for the network. I found [1] useful while setting up Citrix on Ubuntu.

Once installed, when opening a session I got the following error: '"Thawte Premium Server CA", the issuer of the server's security certificate (SSL error 61)'. While searching for a solution, I found the steps to resolve it in [2].

Hope this will be useful to me or anyone else in the future.


        Ushani BalasooriyaA reason for getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

Sometimes you end up getting an exception like the one below when starting a server pointed to a MySQL DB.

        ERROR - DatabaseUtil Database Error - Communications link failure

        The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

        The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server

One possible reason can be that you have mapped bind-address to a specific IP in /etc/mysql/my.cnf.

The default is 0.0.0.0, which means all interfaces. That default does not restrict which IPs can access the server; restrictions only apply if you bind to localhost or some other specific IP.

        bind-address            =

Once this is done, restart the MySQL server.

         sudo service mysql restart
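For example, a my.cnf fragment that listens on all interfaces might look like this (an illustrative sketch; [mysqld] is the standard section name, but check your distribution's config layout):

```ini
# /etc/mysql/my.cnf
[mysqld]
# 0.0.0.0 listens on all interfaces; binding to a specific IP here
# restricts which interface MySQL listens on and can cause the
# "Communications link failure" above when clients use another address
bind-address = 0.0.0.0
```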

        John MathonThe Internet of Things (IoT) Integration Disaster – The state of affairs today

        The Internet of Things (IoT) Integration Explosion

        Since I started my project to integrate my home I have discovered a lot of cool tools.   There are a lot of choices now to bridge the integration of the different technologies, but a lot of these tools are immature.   Let’s look at the types of tools available.   These tools can be used in combination to provide a variety of ways to solve the IoT usage and integration problem.


        APIs (Spotty)

        Six of the devices/services above have supported and well documented APIs.   Thank you.   Another 3 have documented APIs although no support.  10 devices don’t have an API and I may be able to do only partial integration or wait.    The Eagle has a simple http REST well defined API.   The Davis doesn’t, but fortunately there is a tool called Weathersnoop that provides a nice API to access data locally or remotely.  Followmee has a good API service well documented.  The ATT M2X is similar.    The Tesla has an “unsupported” but reasonably well documented API.  The WEMO has a similar unsupported SOAP / http interface.  The Bodymedia has a good API definition.  The Carrier does not although there are hints that a new device and service is coming shortly that will improve the situation significantly.   The IFTTT service does not have an API service, the same with Thalmic.  The Myq has an undocumented interface unsupported again.   The Lynx does not have any support for external interaction yet.

        I like the Followmee service and it works well.  They did make me pay $10/year for access to the API recently which several months ago when I first signed up for the service was free.   At least it is reasonably priced.

Each of these APIs is significantly different. A considerable amount of research is needed to find each API, write the interface to the device, and check it. This is especially true in the undocumented, unsupported cases. Most of these APIs have poor security. Tesla upgraded its API to use OAuth2, which is good, but its implementation is non-standard.

I believe this is a common situation. Getting all of these services written and working has been more work than I wanted, but I am also happy with the functionality I am getting so far.

The Tesla API lets me see data about my car that I can't see any other way, which can be very helpful. I have automated some functions, like "Announce", which during daytime honks the horn, waits 10 seconds, honks the horn again and opens the trunk, making the car very visible. At night the same function blinks the lights, waits 10 seconds and does it again. It also unlocks the car, turns on the heating or air conditioning depending on the internal temperature in the car, and enables the car for driving (starting it). :)

        Overall this has been much worse than I expected.   Wemo has some integration with IFTTT but none of the services I have explored has any knowledge of any of these devices to simplify integration.   So far, the integration requires writing my own code for everything I want to do.     All these services claim to be IoT services but then have not opened their APIs, documented them, they have not worked with other vendors to create interoperability.   I am very dissatisfied with the state of the industry.

        The APIs are very inconsistent.  Some use get for everything, some use get, post and put for different things but not in any logical way you would know intuitively.  Some require some things on the URI and some require headers or fields in the body.   Some use a security token that you have to remember and use in subsequent calls.   Some work through a proxy service available in the cloud and some can be accessed locally.

This is a pathetic situation. I am publishing the code for these devices as open source so at least someone might be able to leverage the experience I am gaining with this experiment. I should have that up in a week or less and will let you know the location.


        I am very happy so far with the ATT M2X service for recording all my data.  I have tested a number of these data storage solutions.   Some had no consumer pricing or limited trials that were not useful.  Some of the services wanted 500 dollars+/year  for data storage for a consumer which makes no sense.   Others limited me to a total of 8 sensors per device and a small number of devices.   The ATT service is well documented, works brilliantly, is free for consumer uses that are reasonable.   I have nearly 50 sensors in it and it is working well.   Some of the services I tried turned out to not be functional at this time.  They were more promise than fact.   The ecosystem is barely functional.   A lot of vendors are focusing on the Enterprise market.  They claim the ability to take many thousands of devices data and claim significant data analytics capability.

        I am still waiting for some of these analytics things to be useful at the consumer level.  I get real time data from my energy meter but the company that promises to be able to give me powerful analysis won’t be ready for some time to offer it.


        I am using google spreadsheets for my visualization at this time.   The scripting and functionality provided in google sheets is comprehensive and flexible.  The spreadsheet metaphor makes it easy for me to test out some of my automations first before implementing them and also providing a simple easily modified user interface for viewing the data and initiating actions anywhere.

        The spreadsheet solution is great for testing during this phase of adding devices and playing with the rules.  Once I have decided on what I want to do about visualization and control I will look at more of the options.  There are options to have physical special purpose devices, build a phone app or web service.


        Most of the vendors still seem to believe that IoT means I buy their device and then I download a phone app and connect to the cloud.  I am then supposed to bring up their app to control their device and only their device.   This is NOT making it easier.  I don’t know if these vendors really think this is the way IoT will work.   I think many of the consumer companies are suffering from not understanding the paradigm shift in the works.   I don’t want 20 different apps on my phone and switching between them.  I don’t want to control things myself anyway.   The whole point of IoT is to make things smarter so I don’t need to interact with the device as much.   I don’t want their app except as a last resort if my automation fails.

I am still debating where to put most of my automation. There are dozens of options in this category. I have downloaded some and played with them a little. I believe it would be ideal to keep my automation local and get a hub like SmartThings, Vera or Ninja Smartblocks, but I am also waiting to see what Apple does and what new versions of some of the products out there bring. To their credit, SmartThings and Vera do seem to be making big efforts to create more off-the-shelf integration. Discussion boards at these companies talk about integrating almost all of the devices I have listed above. However, at this time none of them has more than a couple of checkboxes yet. Still, their progress looks a lot better than the others'.

An issue is that even if I try to have local decision-making and cloud backup, a number of the devices I have don't allow that; they require the device to connect directly to the cloud, and I access functionality from the cloud. That really pisses me off. An example of this is MyQ from Chamberlain / Liftmaster. In retrospect I really wish I didn't have their garage door opener. Like other companies, they have decided I am going to build my entire IoT strategy on their garage door system and cloud service! NOT! They advertise this great internet gateway and claim a great, flexible, expandable service, but the API is unsupported and undocumented as well as being ridiculously verbose. It took more time to get it working because of the poor documentation. I had to tear the wall controller apart and solder some wires to provide the functionality I wanted.

        The Future

I am still hopeful that we are early in this revolution. I believe Apple and Google will drive many of these companies to much more interoperability. I believe most of these companies will realize their "do it alone" strategy is doomed. Nobody is going to base the IoT for their home or business on any one vendor. None of them provides a wide enough range of devices to make a compelling argument.

        Other articles you may find interesting:

        Why would you want to Integrate IOT (Internet of Things) devices? Examples from a use case for the home.

        Integrating IoT Devices. The IOT Landscape.

        Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

        Siri, SMS, IFTTT, and Todoist.


        A Reference Architecture for the Internet of Things



         Alternatives to IFTTT

        Home Automation Startups

        Evanthika AmarasiriSolving " An unsupported signature or encryption algorithm was used"

While trying out the scenario which I explained in my previous post, Accessing a non secured backend from a secured client with the help of WSO2 ESB, with security scenario 3 onward, you might come across an issue like the one below on the client side.

        org.apache.axis2.AxisFault: Error in encryption
            at org.apache.rampart.handler.RampartSender.invoke(...)
            at org.apache.axis2.engine.Phase.invokeHandler(...)
            at org.apache.axis2.engine.Phase.invoke(...)
            at org.apache.axis2.engine.AxisEngine.invoke(...)
            at org.apache.axis2.engine.AxisEngine.send(...)
            at org.apache.axis2.description.OutInAxisOperationClient.send(...)
            at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(...)
            at org.apache.axis2.client.OperationClient.execute(...)
            at org.apache.axis2.client.ServiceClient.sendReceive(...)
            at org.apache.axis2.client.ServiceClient.sendReceive(...)
            at SecurityClient.runSecurityClient(...)
            at SecurityClient.main(...)
        Caused by: org.apache.rampart.RampartException: Error in encryption
            at org.apache.rampart.builder.AsymmetricBindingBuilder.doSignBeforeEncrypt(...)
            at org.apache.rampart.handler.RampartSender.invoke(...)
            ... 11 more
        Caused by: An unsupported signature or encryption algorithm was used (unsupported key transport encryption algorithm: No such algorithm:; nested exception is:
            Cannot find any provider supporting RSA/ECB/OAEPPadding)
            at org.apache.rampart.builder.AsymmetricBindingBuilder.doSignBeforeEncrypt(...)
            ... 14 more
        Caused by: Cannot find any provider supporting RSA/ECB/OAEPPadding
            at javax.crypto.Cipher.getInstance(...)
            ... 17 more
        Exception in thread "main" java.lang.NullPointerException
            at SecurityClient.main(...)

        The reason for this is that the Bouncycastle jar required to run this scenario is not found in the classpath on the client side.

        To overcome this issue, you need to place the relevant bouncycastle jar, downloaded from the BouncyCastle site, in the client's classpath.

        E.g.: if you are running your client on JDK 1.7, the jar you need to download is bcprov-jdk15on-150.jar.
        A point to note: I tried this scenario by pointing the Eclipse project at a bouncycastle jar that was in a different location. For some reason, the jar was not loaded until I dropped it inside the $ESB_HOME/repository/plugins folder.

        NOTE: Sometimes, you will have to clear the Eclipse/IntelliJ Idea cache in order for the classes to pick up the jars properly.

        Nirmal FernandoSneak Peek into WSO2 Machine Learner 1.0

        This article is about one of the newest products from WSO2, WSO2 Machine Learner (WSO2 ML). These days we are working on the very first general availability release of WSO2 ML, which will go out in mid-July 2015. For those wondering when I moved from the Stratos team to the ML team: it happened in January this year (2015), at my request (yes, WSO2 was kind enough to accommodate it :-)). We are now a 7-member team (effectively 3 in R&D), led by Dr. Srinath Perera, VP of Research. We also get assistance from a member of the UX team and a member of the documentation team.

        What is Machine Learning?

        “Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.”

        A simpler definition, from Professor Andrew Ng of Stanford University:

        “Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.” (source:

        In simple terms, with machine learning we are trying to make the computer learn patterns from a vast amount of historical data and then use the learnt patterns to make predictions.
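        To make that concrete with a toy example (this is an illustration of the idea only, not WSO2 ML code), the sketch below "learns" a linear pattern from a handful of historical points using ordinary least squares, then uses the learned pattern to predict an unseen value:

        ```java
        public class LeastSquaresDemo {
            // Fit y = slope*x + intercept by ordinary least squares: the "learned pattern".
            static double[] fit(double[] x, double[] y) {
                double meanX = 0, meanY = 0;
                for (int i = 0; i < x.length; i++) { meanX += x[i]; meanY += y[i]; }
                meanX /= x.length;
                meanY /= y.length;
                double num = 0, den = 0;
                for (int i = 0; i < x.length; i++) {
                    num += (x[i] - meanX) * (y[i] - meanY);
                    den += (x[i] - meanX) * (x[i] - meanX);
                }
                double slope = num / den;
                return new double[]{slope, meanY - slope * meanX};
            }

            public static void main(String[] args) {
                // "historical data" (illustrative numbers, not a real dataset)
                double[] x = {1, 2, 3, 4};
                double[] y = {2, 4, 6, 8};
                double[] model = fit(x, y);
                // use the learned pattern to predict at an unseen point x = 5
                System.out.println(model[0] * 5 + model[1]); // prints 10.0
            }
        }
        ```

        A product like WSO2 ML does this at scale with far richer algorithms; the point here is only the learn-then-predict loop.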

        What is WSO2 Machine Learner?

        WSO2 Machine Learner is a product that helps you manage and explore your data, build machine learning models by analyzing the data with machine learning algorithms, compare and manage the generated models, and make predictions using the built models. The following image depicts the high-level architecture of WSO2 ML.

        WSO2 ML exposes all its operations via a REST API. We use the well-known Apache Spark to perform various operations on datasets in a scalable and efficient manner. Currently, we support a number of machine learning algorithms, covering the regression and classification types of supervised learning and the clustering type of unsupervised learning. We use Apache Spark's MLlib to provide support for all currently implemented algorithms.

        In this post, my main focus is to go through the feature list of the WSO2 ML 1.0.0 release, so that you can see whether it can improve the way you do machine learning.

        Manage Your Datasets

        We help you manage your data through our dataset versioning support. In a typical use case, you would have X amount of data now and would collect another Y amount of data within a month. With WSO2 ML you can create a dataset version 1.0.0 which points to the X data, and in a month's time create version 1.1.0 which points to the (X+Y) data. You can then pick these different dataset versions, run a machine learning analysis on top of them, and generate models.

        WSO2 ML accepts the CSV and TSV data formats, and the dataset files can reside in the file system or in HDFS. In addition to these storages, we support pulling data from a WSO2 Data Analytics Server generated data table [doc].

        Explore Your Data

        Once you have uploaded datasets into WSO2 ML, you can explore a few key aspects of your dataset: the feature set, scatter plots to understand the relationship between two selected features, a histogram of each feature, parallel sets to explore categorical features, trellis charts, and cluster diagrams [doc].

        Manage Your ML Projects

        WSO2 ML has a concept called a 'Project', which is basically a logical grouping of the set of machine learning analyses you would perform on a selected dataset. Note that when I say a dataset, that includes the multiple dataset versions belonging to it. WSO2 ML allows you to manage your machine learning projects both by dataset and by user.

        Build and Manage Analyses

        WSO2 ML has a concept called an 'Analysis', which holds a pre-processed feature set, a selected machine learning algorithm, and its calibrated set of hyper-parameters. Each analysis belongs to a project, and a project can have multiple analyses. Once you create an analysis you cannot edit it, but you can view and delete it. An analysis is created using the wizard provided by WSO2 ML.

        Run Analyses and Manage Models

        Once you have followed the wizard and generated an analysis, the final step is to pick a dataset version from the available versions of the project's dataset and run the analysis. The outcome of this process is a machine learning model. The same analysis can be run on different dataset versions to generate multiple models.

        Once a model is generated you can perform various operations on it, such as viewing the model summary, downloading the model object as a file, publishing the model into the WSO2 registry, and predicting.

        Compare Models

        Your ultimate goal is to build an accurate model that can later be used for prediction. To help you here, i.e. to let you easily compare all the models created using different analyses, we provide a model comparison view.

        For classification problems we sort the models by their accuracy values, and for numerical prediction we sort based on the mean squared error.
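        As a rough sketch of what such an ordering might look like (the class and method names below are mine, not WSO2 ML's API): classification models sort descending by accuracy, numerical prediction models ascending by mean squared error.

        ```java
        import java.util.Comparator;
        import java.util.List;

        public class ModelComparison {
            // A hypothetical model summary: just a name and one evaluation metric.
            record Model(String name, double metric) {}

            // Classification: higher accuracy is better, so sort descending.
            static List<Model> sortByAccuracy(List<Model> models) {
                return models.stream()
                        .sorted(Comparator.comparingDouble(Model::metric).reversed())
                        .toList();
            }

            // Numerical prediction: lower mean squared error is better, so sort ascending.
            static List<Model> sortByMse(List<Model> models) {
                return models.stream()
                        .sorted(Comparator.comparingDouble(Model::metric))
                        .toList();
            }

            public static void main(String[] args) {
                List<Model> cls = List.of(new Model("logistic", 0.91), new Model("tree", 0.87));
                System.out.println(sortByAccuracy(cls).get(0).name()); // prints logistic
            }
        }
        ```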

        ML REST API

        All the underlying WSO2 ML operations are exposed through the REST API; in fact, our UI client is built on top of the ML REST API [doc]. If you wish, you can write a client in any language on top of our REST API. It currently supports basic auth and session-based authentication.
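        As an example of how little a client needs, here is a hedged Java sketch that attaches an HTTP Basic auth header to a request. The resource path below is a placeholder of my own, not a documented ML endpoint; consult the ML REST API docs for the real paths.

        ```java
        import java.net.URI;
        import java.net.http.HttpRequest;
        import java.util.Base64;

        public class MlRestClient {
            // Build a Basic auth header value from a username and password.
            static String basicAuth(String user, String pass) {
                return "Basic " + Base64.getEncoder()
                        .encodeToString((user + ":" + pass).getBytes());
            }

            public static void main(String[] args) {
                // placeholder URL; check the ML REST API documentation for real resource paths
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://localhost:9443/api/datasets"))
                        .header("Authorization", basicAuth("admin", "admin"))
                        .GET()
                        .build();
                System.out.println(request.headers().firstValue("Authorization").orElse(""));
                // prints: Basic YWRtaW46YWRtaW4=
            }
        }
        ```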

        ML UI

        Our Jaggery-based UI is built using the latest UX designs, as you have probably noticed from the screenshots shown thus far in this post.

        ML-WSO2 ESB Integration

        We have written an ML-ESB mediator which can be used to make predictions, against a model generated by WSO2 ML, on data extracted from an incoming request [doc].

        ML-WSO2 CEP Integration

        In addition to the ESB mediator, we have written an ML-CEP extension which can be used to make real-time predictions against a generated model [doc].

        External Spark Cluster Support

        WSO2 ML by default ships with an embedded Spark runtime, so you can simply unzip the pack and start playing with it. But it can also be configured to connect to an external Spark cluster [doc].

        The Future

        * Deep Learning algorithm support using H2O - currently underway as a GSoC project.
        * Data pre-processing using DataWrangler - a current GSoC project.
        * Recommendation algorithm support - a current GSoC project.
        * ... and a whole lot of other new features and improvements.

        This is basically a summary of what WSO2 ML 1.0 is all about. Please follow our GitHub repository for more information. You are most welcome to try it out and report any issues in our Jira.

        Ajith VitharanaAdd "getRecentlyAddedAPIs" operation for Store API - WSO2 API Manager.

        The WSO2 API Manager doesn't expose the "getRecentlyAddedAPIs" operation through the Store API, but you can use the following workaround to expose it.

        1. Open the file <am_home>/repository/deployment/server/jaggeryapps/store/site/blocks/api/recently-added/ajax/list.jag and change it as below.

        (function () {
            response.contentType = "application/json; charset=UTF-8";
            var mod, obj, tenant, result, limit,

                    msg = require("/site/conf/ui-messages.jag"),
                    action = request.getParameter("action");
            if (action == "getRecentlyAddedAPIs") {
                tenant = request.getParameter("tenant");
                limit = request.getParameter("limit");
                mod = jagg.module("api");
                result = mod.getRecentlyAddedAPIs(limit, tenant);
                if (result.error) {
                    // response bodies reconstructed; the store app's other ajax pages follow this pattern
                    obj = {
                        error:true,
                        message:msg.error.authError(action)
                    };
                } else {
                    obj = {
                        error:false,
                        apis:result.apis
                    };
                }
                print(obj);
            } else {
                print({error:true, message:"Invalid action: " + action});
            }
        }());
        2. Now you can invoke the getRecentlyAddedAPIs operation as below.

        curl -b cookies 'http://localhost:9763/store/site/blocks/api/recently-added/ajax/list.jag?action=getRecentlyAddedAPIs&limit=10&tenant=carbon.super'

        limit - Number of recently added APIs
        tenant - Tenant domain.


        Sumedha KodithuwakkuHow to delete a random element from the XML payload with the use of Script mediator in WSO2 ESB

        In WSO2 ESB, we can use the Script Mediator to manipulate an XML payload. Here I have used JavaScript/E4X for accessing and manipulating the elements.

        Example XML payload (shown with only the name element of each food item, for brevity):

        <breakfast_menu>
            <food><name>Belgian Waffles</name></food>
            <food><name>Strawberry Belgian Waffles</name></food>
            <food><name>Berry-Berry Belgian Waffles</name></food>
        </breakfast_menu>

        Let's assume we want to remove the last food element (Berry-Berry Belgian Waffles). In this scenario, breakfast_menu is the root element and the set of children elements will be food.

        The length of the child elements (food) can be obtained as follows;

        var payload = mc.getPayloadXML();
        var length = payload.food.length();

        Then delete the last element as follows; here the index of the last element is length-1.

        delete payload.food[length-1];

        The complete Script Mediator configuration would be as follows;

        <script language="js">
            var payload = mc.getPayloadXML();
            var length = payload.food.length();
            delete payload.food[length-1];
            mc.setPayloadXML(payload);
        </script>

        The output of the script mediator would be as follows;

        <breakfast_menu>
            <food><name>Belgian Waffles</name></food>
            <food><name>Strawberry Belgian Waffles</name></food>
        </breakfast_menu>

        Likewise, we can delete any required element from the payload.

        sanjeewa malalgodaHow to use SAML2 grant type to generate access tokens in web applications (Generate access tokens programmatically using SAML2 grant type) - WSO2 API Manager

        Exchanging SAML2 bearer tokens with OAuth2 (SAML extension grant type)

        SAML 2.0 is an XML-based protocol. It uses security tokens containing assertions to pass information about an end user between a SAML authority and a SAML consumer.
        A SAML authority is an identity provider (IDP) and a SAML consumer is a service provider (SP).
        Many enterprise applications use SAML2 to engage a third-party identity provider to grant access to systems that are only authenticated against the enterprise application.
        These enterprise applications might need to consume OAuth-protected resources through APIs, after validating them against an OAuth2.0 authentication server.
        However, an enterprise application that already has a working SAML2.0-based SSO infrastructure between itself and the IDP prefers to use the existing trust relationship, even if the OAuth authorization server is entirely different from the IDP. The SAML2 Bearer Assertion Profile for OAuth2.0 leverages this existing trust relationship by presenting the SAML2.0 token to the authorization server and exchanging it for an OAuth2.0 access token.

        You can use SAML grant type for web applications to generate tokens.

        Sample curl command:
        curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=&scope=PRODUCTION" -H "Authorization: Basic SVpzSWk2SERiQjVlOFZLZFpBblVpX2ZaM2Y4YTpHbTBiSjZvV1Y4ZkM1T1FMTGxDNmpzbEFDVzhh" -H "Content-Type: application/x-www-form-urlencoded" https://serverurl/token

        How to invoke token API from web app and get token programmatically.

        To generate a user access token using a SAML assertion, you can add the following code block inside your web application.
        When you log in to your app using SSO, you will get a SAML response. You can store it in the application session and use it to get a token whenever required.

        Please refer to the following code for the access token issuer.

        import org.apache.amber.oauth2.client.OAuthClient;
        import org.apache.amber.oauth2.client.URLConnectionClient;
        import org.apache.amber.oauth2.client.request.OAuthClientRequest;
        import org.apache.amber.oauth2.common.token.OAuthToken;
        import org.apache.catalina.Session;
        import org.apache.commons.logging.Log;
        import org.apache.commons.logging.LogFactory;

        public class AccessTokenIssuer {
            private static Log log = LogFactory.getLog(AccessTokenIssuer.class);
            private Session session;
            private static OAuthClient oAuthClient;

            // An application-level enum for the grant types used below.
            public enum GrantType {
                SAML20_BEARER_ASSERTION, CLIENT_CREDENTIALS, REFRESH_TOKEN, PASSWORD
            }

            public static void init() {
                if (oAuthClient == null) {
                    oAuthClient = new OAuthClient(new URLConnectionClient());
                }
            }

            public AccessTokenIssuer(Session session) {
                this.session = session;
            }

            public String getAccessToken(String consumerKey, String consumerSecret, GrantType grantType)
                    throws Exception {
                OAuthToken oAuthToken = null;

                if (session == null) {
                    throw new Exception("Session object is null");
                }

                // You need to implement logic for this operation according to your system design
                String oAuthTokenEndPoint = "token end point url";

                if (oAuthTokenEndPoint == null) {
                    throw new Exception("OAuthTokenEndPoint is not set properly in digital_airline.xml");
                }

                String assertion = "";
                if (grantType == GrantType.SAML20_BEARER_ASSERTION) {
                    // You need to implement logic for this operation according to your system design
                    String samlResponse = "get SAML response from session";
                    // You need to implement logic for this operation according to your system design
                    assertion = "get assertion from SAML response";
                }

                // Builder chain reconstructed; verify against the Amber client API version you use.
                OAuthClientRequest accessRequest = OAuthClientRequest.
                        tokenLocation(oAuthTokenEndPoint).
                        setGrantType(getAmberGrantType(grantType)).
                        setClientId(consumerKey).
                        setClientSecret(consumerSecret).
                        setAssertion(assertion).
                        buildBodyMessage();
                oAuthToken = oAuthClient.accessToken(accessRequest).getOAuthToken();

                session.getSession().setAttribute("OAUTH_TOKEN", oAuthToken);
                session.getSession().setAttribute("LAST_ACCESSED_TIME", System.currentTimeMillis());

                return oAuthToken.getAccessToken();
            }

            private static org.apache.amber.oauth2.common.message.types.GrantType getAmberGrantType(
                    GrantType grantType) {
                if (grantType == GrantType.SAML20_BEARER_ASSERTION) {
                    return org.apache.amber.oauth2.common.message.types.GrantType.SAML20_BEARER_ASSERTION;
                } else if (grantType == GrantType.CLIENT_CREDENTIALS) {
                    return org.apache.amber.oauth2.common.message.types.GrantType.CLIENT_CREDENTIALS;
                } else if (grantType == GrantType.REFRESH_TOKEN) {
                    return org.apache.amber.oauth2.common.message.types.GrantType.REFRESH_TOKEN;
                } else {
                    return org.apache.amber.oauth2.common.message.types.GrantType.PASSWORD;
                }
            }
        }
        After you log in to the system, get the session object and initialize the access token issuer as follows.
        AccessTokenIssuer accessTokenIssuer = new AccessTokenIssuer(session);

        Keep a reference to that object for the duration of the session.
        Then, whenever you need an access token, request one as follows, passing the consumer key and consumer secret.

        tokenResponse = accessTokenIssuer.getAccessToken(key,secret, GrantType.SAML20_BEARER_ASSERTION);

        You will then get an access token, which you can use as required.

        sanjeewa malalgodaHow to change endpoint configurations and timeouts of a large number of already created APIs - WSO2 API Manager

        Sometimes in deployments we need to change endpoint configurations and other parameters of APIs after they have been created.
        For a handful of APIs we can go to the management console, open the publisher, and change them there. But if you have a large number of APIs that becomes extremely hard. In this post, let's see how we can do it for a batch of APIs.

        Please note: test this end to end before you push the change to a production deployment. Also note that some properties are stored in the registry, the database, and the synapse configurations, so we would need to change all three places. In this example we consider endpoint configurations only (which live in the registry and synapse).

        Changing the velocity template works for new APIs. But for already published APIs, you have to follow the process below unless you modify them manually.

        Write a simple application to change the synapse configuration and add the new properties (as an example, consider the timeout value).
         Use the checkin/checkout client to edit the registry files with the new timeout value.
           You can follow the steps below to use the checkin/checkout client:
         Download the Governance Registry binary and extract the zip file.
         Copy the contents of the Governance Registry into the APIM home.
         Go into the bin directory of the Governance Registry directory.
         Run the following command to check out the registry files to your local repository:
                 ./checkin-client.sh co https://localhost:9443/registry/path -u admin -p admin  (Linux)
                   checkin-client.bat co https://localhost:9443/registry/path -u admin -p admin (Windows)
        Here the path is where your registry files are located. Normally the API metadata is listed under each provider at '_system/governance/apimgt/applicationdata/provider'.

        Once you run this command, the registry files are downloaded into your Governance Registry/bin directory. You will find directories named after the users who created the APIs.
        Inside those directories, each API has a file named 'api' under '_system/governance/apimgt/applicationdata/provider/{user name}/{api name}/{api version}', and you can edit the timeout value
        using a batch operation (a shell script or any other way).
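        As one possible shape for that batch operation, here is a sketch that walks the checked-out tree and rewrites every 'api' file. The "duration" literals are my own example; inspect one of your actual 'api' files to find the exact text to match before running anything like this.

        ```java
        import java.io.IOException;
        import java.io.UncheckedIOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.stream.Stream;

        public class TimeoutRewriter {
            // Replace one timeout literal with another in every file named 'api'
            // under the checked-out registry tree.
            static void rewrite(Path root, String from, String to) throws IOException {
                try (Stream<Path> files = Files.walk(root)) {
                    files.filter(p -> Files.isRegularFile(p)
                                   && p.getFileName().toString().equals("api"))
                         .forEach(p -> {
                             try {
                                 Files.writeString(p, Files.readString(p).replace(from, to));
                             } catch (IOException e) {
                                 throw new UncheckedIOException(e);
                             }
                         });
                }
            }

            public static void main(String[] args) throws IOException {
                // example literals; confirm the real format in your own 'api' files
                rewrite(Paths.get("."), "\"duration\":30000", "\"duration\":60000");
            }
        }
        ```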

        Then check in what you have changed using the following command.
             ./checkin-client.sh ci https://localhost:9443/registry/path -u admin -p admin  (Linux)
              checkin-client.bat ci https://localhost:9443/registry/path -u admin -p admin (Windows)

        Open the APIM console and click Browse under Resources. Provide the location '/_system/governance/apimgt/applicationdata/provider'. Inside the {user name} directory
        there are directories with your API names. Open the 'api' files inside those directories and make sure the value has been updated.

        It is recommended to change both the registry and the synapse configuration. This change is not applicable to all properties available in API Manager;
        this solution is specifically designed for endpoint configurations such as timeouts.

        sanjeewa malalgodaHow to add the secondary user store domain name to the SAML response from the Shibboleth side - WSO2 Identity Server SSO with a secondary user store.

        When we configure Shibboleth as the identity provider for WSO2 Identity Server, as described in this article, the deployment looks like the one below.

        In this case Shibboleth acts as the identity provider for WSO2 IS and provides the SAML assertion to it. But the actual permission check happens on the IS side, and for that we may need the complete user name. If the user store is configured as a secondary user store, the user store domain should be part of the name. But Shibboleth does not know about the secondary user store, so on the IS side you will see username instead of DomainName/UserName. That becomes an issue if we try to validate permissions per user.

        To overcome this we can configure Shibboleth to send a domain-aware user name from its end. Let's say the domain name is LDAP-Domain; then we can set it from the Shibboleth side with the following configuration, after which it will send the user name as LDAP-Domain/userName.


        <!-- This is the NameID value we send to the WSO2 Identity Server. -->
        <resolver:AttributeDefinition xsi:type="ad:Script" id="eduPersonPrincipalNameWSO2">
            <resolver:Dependency ref="eduPersonPrincipalName" />

            <resolver:AttributeEncoder xsi:type="enc:SAML2StringNameID" nameFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" />

            <!-- Script wrapper reconstructed from a garbled listing; verify the package
                 import against your Shibboleth IdP version. -->
            <ad:Script><![CDATA[
                importPackage(Packages.edu.internet2.middleware.shibboleth.common.attribute.provider);
                eduPersonPrincipalNameWSO2 = new BasicAttribute("eduPersonPrincipalNameWSO2");
                eduPersonPrincipalNameWSO2.getValues().add("LDAP-Domain/" + eduPersonPrincipalName.getValues().get(0));
            ]]></ad:Script>
        </resolver:AttributeDefinition>


        sanjeewa malalgodaHow to write a custom throttle handler to throttle requests based on IP address - WSO2 API Manager

        Please find below the sample source code for a custom throttle handler that throttles requests based on IP address. You can change the logic here to suit your requirements.

        package org.wso2.carbon.apimgt.gateway.handlers.throttling;

        // NOTE: reconstructed from a partially garbled listing; verify class and
        // constant names against your API Manager version before use.
        import org.apache.axiom.om.OMAbstractFactory;
        import org.apache.axiom.om.OMElement;
        import org.apache.axiom.om.OMFactory;
        import org.apache.axiom.om.OMNamespace;
        import org.apache.axis2.context.ConfigurationContext;
        import org.apache.commons.logging.Log;
        import org.apache.commons.logging.LogFactory;
        import org.apache.http.HttpStatus;
        import org.apache.neethi.PolicyEngine;
        import org.apache.synapse.Mediator;
        import org.apache.synapse.MessageContext;
        import org.apache.synapse.SynapseConstants;
        import org.apache.synapse.SynapseException;
        import org.apache.synapse.config.Entry;
        import org.apache.synapse.core.axis2.Axis2MessageContext;
        import org.apache.synapse.rest.AbstractHandler;
        import org.wso2.carbon.apimgt.gateway.handlers.Utils;
        import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityUtils;
        import org.wso2.carbon.apimgt.gateway.handlers.security.AuthenticationContext;
        import org.wso2.carbon.apimgt.impl.APIConstants;
        import org.wso2.carbon.throttle.core.AccessInformation;
        import org.wso2.carbon.throttle.core.RoleBasedAccessRateController;
        import org.wso2.carbon.throttle.core.Throttle;
        import org.wso2.carbon.throttle.core.ThrottleContext;
        import org.wso2.carbon.throttle.core.ThrottleException;
        import org.wso2.carbon.throttle.core.ThrottleFactory;

        import java.util.Map;
        import java.util.TreeMap;

        public class IPBasedThrottleHandler extends AbstractHandler {

            private static final Log log = LogFactory.getLog(IPBasedThrottleHandler.class);

            /** The Throttle object - holds all runtime and configuration data */
            private volatile Throttle throttle;

            private RoleBasedAccessRateController applicationRoleBasedAccessController;

            /** The key for getting the throttling policy - key refers to a/an [registry] entry */
            private String policyKey = null;
            /** The concurrent access control group id */
            private String id;
            /** Version number of the throttle policy */
            private long version;

            public IPBasedThrottleHandler() {
                this.applicationRoleBasedAccessController = new RoleBasedAccessRateController();
            }

            public boolean handleRequest(MessageContext messageContext) {
                return doThrottle(messageContext);
            }

            public boolean handleResponse(MessageContext messageContext) {
                return doThrottle(messageContext);
            }

            private boolean doThrottle(MessageContext messageContext) {
                boolean canAccess = true;
                boolean isResponse = messageContext.isResponse();
                org.apache.axis2.context.MessageContext axis2MC =
                        ((Axis2MessageContext) messageContext).getAxis2MessageContext();
                ConfigurationContext cc = axis2MC.getConfigurationContext();
                synchronized (this) {
                    if (!isResponse) {
                        initThrottle(messageContext, cc);
                    }
                }
                // if the access is success through concurrency throttle and if this is a request message
                // then do access rate based throttling
                if (!isResponse && throttle != null) {
                    AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(messageContext);
                    String tier;
                    if (authContext != null) {
                        AccessInformation info = null;
                        try {
                            // take the client IP from the X-Forwarded-For header, falling back to REMOTE_ADDR
                            String ipBasedKey = (String) ((TreeMap) axis2MC.
                                    getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS)).
                                    get("X-Forwarded-For");
                            if (ipBasedKey == null) {
                                ipBasedKey = (String) axis2MC.getProperty("REMOTE_ADDR");
                            }
                            tier = authContext.getApplicationTier();
                            ThrottleContext apiThrottleContext = ApplicationThrottleController.
                                    getApplicationThrottleContext(messageContext, cc, tier);
                            //    if (isClusteringEnable) {
                            //      applicationThrottleContext.setConfigurationContext(cc);
                            //    }
                            info = applicationRoleBasedAccessController.canAccess(apiThrottleContext,
                                                                                  ipBasedKey, tier);
                            canAccess = info.isAccessAllowed();
                        } catch (ThrottleException e) {
                            handleException("Error while trying evaluate IPBased throttling policy", e);
                        }
                    }
                }
                if (!canAccess) {
                    handleThrottleOut(messageContext);
                    return false;
                }
                return canAccess;
            }

            private void initThrottle(MessageContext synCtx, ConfigurationContext cc) {
                if (policyKey == null) {
                    throw new SynapseException("Throttle policy unspecified for the API");
                }
                Entry entry = synCtx.getConfiguration().getEntryDefinition(policyKey);
                if (entry == null) {
                    handleException("Cannot find throttling policy using key: " + policyKey);
                }
                Object entryValue = null;
                boolean reCreate = false;
                if (entry.isDynamic()) {
                    if ((!entry.isCached()) || (entry.isExpired()) || throttle == null) {
                        entryValue = synCtx.getEntry(this.policyKey);
                        if (this.version != entry.getVersion()) {
                            reCreate = true;
                        }
                    }
                } else if (this.throttle == null) {
                    entryValue = synCtx.getEntry(this.policyKey);
                }
                if (reCreate || throttle == null) {
                    if (entryValue == null || !(entryValue instanceof OMElement)) {
                        handleException("Unable to load throttling policy using key: " + policyKey);
                    }
                    version = entry.getVersion();
                    try {
                        // Creates the throttle from the policy
                        throttle = ThrottleFactory.createMediatorThrottle(
                                PolicyEngine.getPolicy((OMElement) entryValue));
                    } catch (ThrottleException e) {
                        handleException("Error processing the throttling policy", e);
                    }
                }
            }

            public void setId(String id) {
                this.id = id;
            }

            public String getId() {
                return id;
            }

            public void setPolicyKey(String policyKey) {
                this.policyKey = policyKey;
            }

            public String gePolicyKey() {
                return policyKey;
            }

            private void handleException(String msg, Exception e) {
                log.error(msg, e);
                throw new SynapseException(msg, e);
            }

            private void handleException(String msg) {
                throw new SynapseException(msg);
            }

            private OMElement getFaultPayload() {
                OMFactory fac = OMAbstractFactory.getOMFactory();
                OMNamespace ns = fac.createOMNamespace(APIThrottleConstants.API_THROTTLE_NS,
                                                       APIThrottleConstants.API_THROTTLE_NS_PREFIX);
                OMElement payload = fac.createOMElement("fault", ns);
                OMElement errorCode = fac.createOMElement("code", ns);
                errorCode.setText("900800"); // same code as the ERROR_CODE property set below
                OMElement errorMessage = fac.createOMElement("message", ns);
                errorMessage.setText("Message Throttled Out");
                OMElement errorDetail = fac.createOMElement("description", ns);
                errorDetail.setText("You have exceeded your quota");

                payload.addChild(errorCode);
                payload.addChild(errorMessage);
                payload.addChild(errorDetail);
                return payload;
            }

            private void handleThrottleOut(MessageContext messageContext) {
                messageContext.setProperty(SynapseConstants.ERROR_CODE, 900800);
                messageContext.setProperty(SynapseConstants.ERROR_MESSAGE, "Message throttled out");

                Mediator sequence = messageContext.getSequence(APIThrottleConstants.API_THROTTLE_OUT_HANDLER);
                // Invoke the custom error handler specified by the user
                if (sequence != null && !sequence.mediate(messageContext)) {
                    // If needed user should be able to prevent the rest of the fault handling
                    // logic from getting executed
                    return;
                }
                // By default we send a 503 response back
                if (messageContext.isDoingPOX() || messageContext.isDoingGET()) {
                    Utils.setFaultPayload(messageContext, getFaultPayload());
                } else {
                    Utils.setSOAPFault(messageContext, "Server", "Message Throttled Out",
                                       "You have exceeded your quota");
                }
                org.apache.axis2.context.MessageContext axis2MC =
                        ((Axis2MessageContext) messageContext).getAxis2MessageContext();

                if (Utils.isCORSEnabled()) {
                    /* For CORS support adding required headers to the fault response */
                    Map headers = (Map) axis2MC.getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
                    headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_ORIGIN, Utils.getAllowedOrigin((String) headers.get("Origin")));
                    headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_METHODS, Utils.getAllowedMethods());
                    headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_HEADERS, Utils.getAllowedHeaders());
                    axis2MC.setProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS, headers);
                }
                Utils.sendFault(messageContext, HttpStatus.SC_SERVICE_UNAVAILABLE);
            }
        }
        As listed above, your custom handler class is "org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler", so the following will be the handler definition for your API.

        <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler">
            <property name="id" value="A"/>
            <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
        </handler>

        Then try to invoke the API and see how throttling works.

        John MathonA case of disruption gone wrong? Uber


        June 2015 – France – Taxi drivers revolt; a judge arrests 2 Uber officials for illegal operation

        March 2015 – Netherlands – Preliminary judgment that Uber must stop its UberPop service

        June 2015 – San Francisco – The California Labor Commission ruled an Uber driver should be considered a company employee, not an independent contractor

        May 2015 – Mexico – Hundreds of taxi drivers protested

        April 2015 – Chicago – An Uber driver shot a 22-year-old man who had opened fire on a group of pedestrians

        April 2015 – Brazil – Sao Paulo court backs taxi drivers, bans Uber

        April 2015 – San Francisco – An Uber driver accused of running down a bicyclist

        March 2015 – United Nations – The U.N. women’s agency backed out of a partnership with Uber after a protest by trade unions and civil society groups

        January 2015 – China – Government bans drivers of private cars from offering their services through taxi-hailing apps

        January 2015 – India – New Delhi police say the service was re-instated after the death of a woman

        December 2014 – Spain – Uber temporarily suspended


        “Disruption” gone awry

        This could be a terrific example of “disruption” gone wrong, or it could not be.

        In the traditional disruption model, a company produces a product at a lower cost or with better features that eats away at the low end of the dominant players’ market. This model creates little awareness of the disruption in play: the bigger companies usually happily give away the low-margin business initially eaten by the new entrants.

        In the case of Uber we have a different story. Uber is displacing regular taxi drivers around the world. Yet unlike the car workers in Detroit or other industries that have experienced the pain of disruption, where there is rarely this kind of outcry, here the anger is aimed squarely at the disrupter, who in many cases may be the next employer these workers will have to work for. I have met many former taxi cab drivers who are happily Uber drivers now.

        So, what is the reason for the more vociferous response to Uber’s entrance? There could be many reasons:

        Let’s review the Uber model and approach as I understand it. I don’t claim any special knowledge of Uber’s business practices beyond what I’ve learned talking to drivers and seeing the news stories everybody else has.

        Uber is quite forceful

        Uber has moved into 200+ cities and 50 countries, setting up shop and “using” locals, within a few short years. The rapidity with which Uber has been transforming this staid and seemingly heretofore permanently unchangeable business has definitely been a shock to many people.

        Uber has been quite heavy-handed in its approach to penetrating foreign and US markets. They have been aggressive in their hiring tactics and competitive strategies. Whether these are legitimate or not, they have raised considerable controversy for being unique. Lyft, a comparable service, doesn’t garner quite the same antagonism, so this could be related to Uber’s tactics and public relations.

        They suffered a public humiliation recently when a VP held an “off the record” meeting in which he explained how Uber was tracking people who were critical of it and was considering revealing personal details of the riders who criticized Uber as retribution. The VP named a specific individual whose travel records he had looked into and whom he could harm by revealing her personal information. He suggested a multi-million dollar program like this could help Uber clean up its reputation with the media and the public. I’m not joking. You can look through my tweets at @john_mathon to see how I called on the president of Uber to fire this individual and to institute new policies.

        The main problem Uber seems to have is that they run afoul of local regulations, ignore the system that exists, and try to establish that they are different and can do it their way.

        Uber seeks to exist outside the regulations

        They claim they are unregulated because they connect drivers and passengers via the internet and cell phone apps, which are not explicitly specified in the regulations of any country. This is merely an oversight, however. Most countries and cities which regulate things like this will rapidly add clauses covering the types of services Uber delivers. How to regulate them is not clear, which leads many places to want to ban the service until they work out the laws or until more of a consensus emerges on how to deal with such a service.

        Uber’s model is inherently a lower-cost method of providing workers, which means they can consistently offer a lower-cost service than local companies can. This obviously disrupts the local taxi drivers and creates demand. They purposely avoid compliance with local regulations, seeking to keep the model they originated with unchanged. They avoid training workers as many countries demand of taxi drivers, and they eschew employing locals or dealing with medallions or other local regulations, seeking always to stay outside the regulated definition of “taxi” services where possible by using their simple hands-off approach.

        Uber vs Conventional Taxis

        The traditional taxi ride

        I am going to start by saying I exclude London taxis. I have had the greatest experiences in London taxis. I have found the drivers engaging, always interesting to talk to, always knowledgeable, and the service and vehicles impeccable. I estimate that across all my trips I have taken 500 taxi rides in London. Also, I have rarely ever had a real problem getting a taxi in London. They really are an exception in my opinion.

        The rest of the world:

        In my history I have been ripped off by cab drivers more times than I care to admit. I have been taken far afield of where I wanted to go, either on purpose or accidentally, on too many occasions. I constantly find taxi drivers I have to give directions to because they have no idea how to get to my destination. I have had taxi drivers who stink or smell of drugs, and taxi cars I felt very unsafe in, that smelled or were unhygienic. I am sure many of the cabs I’ve been in were in violation of several motor vehicle laws. I’ve had trouble communicating with drivers, drivers I’ve fought with, drivers who seemed incompetent or dangerous to drive with, and drivers who were rude to me and to other drivers or people. I’ve sometimes found it impossible to find a taxi because of the load or strikes, even though I looked for an HOUR. I remember several drives where I feared for my life. I’ve been in taxis that have had accidents while I was in them. I have also felt ripped off even by normal taxi fares, sometimes paying over $100, the legitimate fare in some cases, for a simple drive from SFO to my home less than 20 miles away.

        Overall, the situation has improved over the years, but it still leads me to trepidation when getting into a taxi. I always make sure they are a real taxi; I have been hassled by too many hucksters seeking to rip me off. I now track my ride all the time with a mapping app to make sure I am going to the right place by the best route. I always insist on a metered taxi. Even with precautions the number of bad experiences is still too high. This is one reason I think many people want an alternative.

        In Sydney recently I was shocked when locals told me they hated their local taxi drivers. Apparently this is a common perception, because I went to a comedy show in London soon after and the comedian (from Sydney) made a lot of fun of Sydney taxi drivers.

        My Uber experience

        I have taken Uber in countries all over the world, in Asia and Europe as well as in half a dozen American cities. My experience is uniformly much better than cabs, except in London. Uber drivers are rated after each ride. They can be booted from the system if their rating falls by even a fraction of a point. Several disgruntled passengers early in an Uber driver’s career will doom them and their income. As a result the system works incredibly well. The Uber drivers have always been incredibly pleasant, talkative and helpful when needed. They have gone out of their way to help me.

        In a few of the rides the cars were maybe 5 or more years old. Still, compared to the 10-, 20- or 30-year-old taxis I’ve been in, they seem positively new. I’ve noticed that Uber drivers almost always soon get late-model cars, usually 2 years old or less. They have ranged from BMWs to fancy Japanese brands. They usually have a range of comfort features, including excellent air conditioning and heating, as well as being universally clean and hygienic. I really am not being paid in any way by Uber. This is my actual experience.

        When I read of people who have had bad experiences in Uber taxis I am not entirely surprised. The law of averages means that some crazy incidents will occur: if you have millions of rides and tens of thousands of drivers, you are going to run into every situation possible.

        I have a couple complaints.

        1) I frequently have found the process of finding your Uber driver problematic. Uber drivers do not get the address you are located at, even if you type it in. Apparently this is considered a security risk, so I’ve frequently found myself texting the driver to tell him where I am, which costs us a couple of minutes before I finally get in the car.

        2) I believe the surge pricing system needs to be modified. I understand all that goes into the current system, but I find it very irritating. I have a friend who uses Uber a lot more than I do. He says that surge zones can be quite small, and a driver can move into a surge zone to “up” their fees. He claims that on more than one occasion a driver cancelled his ride only for surge pricing to go into effect immediately, and when he got the next Uber he was paying 2 or 3 times what the fare would have been just 2 minutes earlier. He claims Uber doesn’t care if drivers abuse the system this way. I don’t know how common this is, but I avoid surge pricing.

        The Uber model as I understand it

        Uber recruits drivers aggressively. This has been the subject of some concern to competitors, who claim Uber actually employs people to ride in competitors’ taxis to recruit drivers they like, going only a block if they feel there is no chance of recruiting the driver.

        A driver for Uber usually receives a phone from Uber. Uber also has many rules and standards they ask drivers to adhere to. Uber does not help pay for the vehicle, the health insurance, the car insurance or anything else for the driver, although I understand they do help arrange group plans and reduced rates for some policies. Uber takes 20% of the driver’s fare. This is far less than other types of services take, so taxi drivers feel like they make more from Uber. Many taxi drivers have told me they get more rides on Uber, and in spite of the lower fares they make more money. Many of the drivers drive late-model fancy vehicles that would seem to be outside their price range. I believe they are able to deduct their vehicle on taxes, in the US at least, which would greatly reduce the cost of the vehicle.

        The main way Uber seems to enforce its regulations is the rating system, effectively the system originally created and used by eBay. The rating system has been incredibly effective for eBay, which has grown to do billions of transactions a day efficiently and with few problems. I know how many transactions eBay does because they use WSO2 software to mediate all their messages to and from mobile applications, web services and all their services. On peak days the number of transactions routinely exceeds 10 billion. This is a well-oiled machine. People are remarkably concerned about their reputation in such rating systems. You can imagine that for Uber, where your very livelihood is in jeopardy, drivers are going to want you to give them a 5-star rating every time. That explains pretty clearly why the service I’ve experienced is so good.

        Another important selling point of the Uber system is its “first mover advantage.” I believe this is very significant. One of the big advantages of Uber is that I am a known quantity on their system wherever I go. Also, they are a known quantity to me. I can go to Paris, Sydney, New York or one of more than 200 cities in the world where they have drivers and be assured I’m not going to be ripped off and will get generally the same quality of service. I don’t have to worry about local currency and the other issues I’ve mentioned. So, I may have 2 or 3 taxi app services on my phone, but I won’t be subscribing to every local country’s app-based taxi service. I will naturally want to use the ones that work in most or all the places I go. There is tremendous pressure for Uber to expand to maintain its first-mover advantage in as many markets as possible.

        Summary Comparison

        This is simple. I get rides predictably from Uber, where with traditional taxis I may wait an hour or more in some cases. This is especially a problem if you, like me, have to make meetings and need to be sure of getting a ride. I can take an Uber anywhere in the world and not worry about being ripped off. I don’t hassle with local currency, tipping rules or the whole money exchange process which typically adds a tedious and problematic end to the taxi ride. I walk out of the taxi as soon as I get to the destination, which is so liberating. I have never had an Uber driver take me to the wrong location or on a circuitous path. The drivers are friendly, the cars clean, in good functioning order and frequently as nice as any car you could be in. This applies whether I have been in Paris, many countries in Asia including China, London or other places in Europe. It applies in Florida or Nevada, Boston or New York. The Uber fare is always surprisingly lower than the comparable local fare. The only exceptions would be during surge pricing or when taking a tuk-tuk in Asian countries. There doesn’t seem to be an “Uber TukTuk” service.

        The Riots and Objections

        I’ve spoken to many people and read many articles which seem to assume that because Uber drivers are not regulated by a government, they must be criminals, loaded up on drugs, dangerous, or unsafe. The refrain is that you don’t know who you’re getting. However, I have no idea why people reach this conclusion. It makes no sense, as you have even less idea of who is driving a local taxi. The Uber system, like eBay’s, seems to put an incredible onus on drivers to behave well, far more than the assumption people seem to ascribe to local regulatory authorities. They also seem to attract more intelligent drivers in my experience. However, in spite of this unassailable reality, many people have an innate hostility to Uber and its service.

        Let’s take each of the points I made originally and consider their validity as objectively as I can.

        1) Uber is “disrupting” in foreign countries which are not used to disruption

        This clearly seems to have some truth to it. Many countries haven’t seen a Toyota come in and displace millions of workers, because in most cases they didn’t have an indigenous car manufacturing industry. Many other disruptions have happened to high-tech or large industrial companies, whose highly paid workers usually aren’t protected the way lower-paid workers are. So most people, in most countries, are not used to disruption like this. It has come as a surprise to many that Uber could offer a service and succeed in their markets. Change itself is disturbing to people not used to it.

        2) Many countries may be much less “docile” about labor rights than the US.

        Uber’s model means that they don’t employ the drivers. A driver may receive a bad rating and lose their contract tomorrow. Uber takes 20%, leaving the driver to pay for their car, health and car insurance, maintenance, etc. For most drivers this seems to result in a lot more money for them, cheaper fares for passengers, and Uber still hauling in billions in income, but the drivers have no guaranteed income. Nonetheless, this is a win-win-win if I’ve ever seen one; the downside is that drivers have none of the “protections” that many countries consider important.

        California recently ruled that a driver was really an employee. California is a particular stickler about contractors, always trying to find a way to get more tax revenue. I seriously doubt California is concerned about the drivers’ health care or unemployment insurance or whatever. However, the point is valid. If Uber employed its drivers instead of using them as contractors, they would have to change the formula drastically and possibly raise rates.

        In most cases, becoming an employer would mean Uber paying taxes and insurance costs, possibly buying and maintaining vehicles, similar to how many taxi companies work. An even more significant point is that Uber’s ability to fire a driver for a couple of low ratings might disappear. Would it unravel the Uber model? I don’t think so. I think they could still find a way to make the system work. It would mean changing their system and taking on additional liabilities and costs, with a lot of regulation to deal with and more hassle, but they could do this and still maintain the basics of their service. I think some countries, states or cities will require things like this, and Uber will eventually have to deal with variations on its model.

        3) Politicians and others see an opportunity to gain traction with voters by siding with existing taxi drivers or nationalist sentiment

        I won’t venture to accuse any individual politician, but this kind of thing must be happening.

        4) Graft, i.e people paid off to present obstacles to Uber

        Again, I have no evidence that such techniques are in play, but common sense suggests it must be happening in some places. The opposite could be happening as well. Unfortunately, in my career I have known of situations where we lost deals because we didn’t make the appropriate “contributions.” Fortunately I have worked for companies that refused to engage in such behavior, and I know we lost deals as a result. The fact is such behavior is more common than many people assume.

        5) Genuine concerns that Uber is trampling on people

        As I mentioned earlier, many people may believe that Uber in fact does trample on people. This is basically a political point, and arguing it would be a waste of time for me. The problem for Uber is that it is unlikely they can change the political situation of all the countries they want to operate in. So, they are going to have to make concessions in their business model eventually. They will presumably fight this as long as they can, but at the cost of being portrayed as the villain.

        6) The pace of change Uber is forcing on people is too fast

        Obviously Uber wants to grow as fast as it can and establish a foothold wherever it can. They are moving at a blistering pace in acquiring new markets; in May they raised $1 billion just to expand in China. Uber is reportedly already the largest call-taxi service in many Chinese cities.

        People in general can be resistant to change. For a business that has seen little impact from all the technology change of the last 50 years, the resistance is natural, but usually people don’t start blowing things up because they fear change.

        7) Uber’s model is flawed and may need to adjust, especially in some countries, to fit in with local laws

        Due to historical factors like medallions and local regulation of traditional taxi drivers, it is eminently possible that Uber has an unfair advantage. Frequently, local taxi cab drivers are employees and bear costs and taxes that Uber drivers don’t. This is typical for disruptive companies. It is possible Uber will have to face special taxes or other restrictions which level the playing field.

        A lot of people think Uber’s advantage is in its cost structure and lower fares. I don’t find that important. To me the compelling advantages of Uber are in its service, as I have described. If their fares were the same as or even higher than traditional cabs, I would pay for the convenience. So, I think they have considerable room for increased costs before it would really impact the business model. Others believe that if Uber has to change its model to employ people, or make other changes, it will kill their advantage. I think not.

        8) Uber has become a flashpoint that isn’t the real issue but a convenient scapegoat

        Frequently one issue is used to deflect from the real purpose of something. It is very possible that some are using the fear of Uber to drive other political change for their own purposes, not because of a real concern about Uber’s purported damage or risk. I find the claims of people who say they are concerned about rape by Uber drivers, or a lack of safety or regulation, disingenuous. There is no reason to believe that regular taxi cab drivers wouldn’t be just as likely, or more likely, to be rapists. An incident in San Francisco claimed an Uber driver hit a biker on purpose. Maybe the driver did, but how many incidents have there been with local cabs doing the same thing?

        Wired magazine wrote a review saying that the Uber driver knows where you live, so of course you’ll give 5 stars. Isn’t it true that if I take a taxi to my home, the driver will know where I live as well? What if I don’t tip him or her well, or he or she is a nefarious person? Anyone who did something like that would be in serious trouble. It seems more likely that one of the taxi drivers I’ve ridden with would rob my home than an Uber driver.

        The article below is typical, pointing out that Uber drivers have to pay for their expenses. It fails to mention that Uber drivers pay only 20% of the fare to Uber, so they have ample income compared to regular taxi drivers to cover these costs.

        I believe that people who make these claims are either very poorly informed or have ulterior motives. The writer of the article never mentions experiences with regular cabs. Have those always been perfect?

        Other Articles on this topic of interest:

        A look at challenges Uber has faced around the world

        Uber problems keep piling on

        Yumani RanaweeraPatch Management System - developed, maintained, enhanced using WSO2 Carbon products

        WSO2 support patching in brief

        The WSO2 support system issues patches to fix defects that are found in product setups. This patching procedure needs to be methodically handled: the fix you are applying should correctly solve the issue, it should not introduce regressions, it should be committed to the relevant public code base, and it should be committed to the support environment and validated to fit in correctly with the existing code base.

        Patch Management Tool (PMT)

        In an environment where you have a large customer base and expect many clarifications daily, there are chances of developers missing certain best practices. Therefore a process that enforces and reminds you of what needs to be done should be embedded into the system. The WSO2 Patch Management Tool (PMT) was built for this purpose.

        In the above process, we also need to maintain a database of patch information. In the WSO2 support system, applying a patch to a server involves a method which ensures that a new fix is correctly applied to the product binaries. This patch-applying procedure relies mainly on a numbering system: every patch has its own number. Since we maintain many platform versions at the same time, there are parallel numbering systems for each WSO2 Carbon platform version. All of the above data should be correctly archived and available as an index when you need to create a new patch. PMT facilitates this by being the hub for all patches. Any developer who creates a new patch obtains the patch number for his work from PMT.
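        To make the per-platform numbering concrete, here is a minimal, self-contained sketch of how such an allocator could work. This is not the actual PMT code (the class and method names are hypothetical, and the real counter state lives in the registry database rather than in memory), but the WSO2-CARBON-PATCH-&lt;version&gt;-&lt;number&gt; naming follows the public patch naming convention.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: one monotonically increasing patch-number sequence
// per Carbon platform version, so two developers working against the same
// platform version are never handed the same number.
public class PatchNumberAllocator {

    private final Map<String, AtomicInteger> sequences = new ConcurrentHashMap<>();

    // Returns the next patch id for the given platform version,
    // e.g. "WSO2-CARBON-PATCH-4.2.0-0001".
    public String nextPatchId(String platformVersion) {
        AtomicInteger seq = sequences.computeIfAbsent(platformVersion,
                v -> new AtomicInteger(0));
        return String.format("WSO2-CARBON-PATCH-%s-%04d",
                platformVersion, seq.incrementAndGet());
    }

    public static void main(String[] args) {
        PatchNumberAllocator allocator = new PatchNumberAllocator();
        System.out.println(allocator.nextPatchId("4.2.0")); // WSO2-CARBON-PATCH-4.2.0-0001
        System.out.println(allocator.nextPatchId("4.2.0")); // WSO2-CARBON-PATCH-4.2.0-0002
        System.out.println(allocator.nextPatchId("4.4.0")); // WSO2-CARBON-PATCH-4.4.0-0001
    }
}
```

        In the real system the sequence state is shared across all developers via the registry, so numbers survive restarts; the in-memory map above only illustrates the per-platform-version partitioning.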

        The WSO2 support system involves many developers working in parallel on the same or different product components, depending on the customer issues they are handling. In such a space, it is important to archive metadata such as the jar versions that are patched, commit revisions, patch location, integration test cases, integration test revisions, the developer/QA who worked on the patch, and the patch release date. This information is needed at the time of a conflict or an error. PMT provides an environment to capture it.

        So, as described above, the main purposes of PMT are managing the patching process, generating patch numbers, archiving patch metadata and providing a search facility. It was initially introduced and designed by Samisa as a pet project, for which Lasindu and Pulasthi helped with writing extensions as part of their intern projects, and Yumani with QA.

        To cater to the project requirements, WSO2 Governance Registry was chosen as the back-end server, with the WSO2 user base connected via WSO2 IS as the user store and MySQL as the registry database. Later, WSO2 User Engagement Server was integrated as the presentation layer, using the Jaggery framework to develop the presentation logic. From WSO2 G-Reg we use the RXT, lifecycle and handler concepts, the search/filtering facilities and the governance API. From WSO2 UES we use the webapp hosting capability. WSO2 IS provides the LDAP user store.

        In the following sections I will briefly describe how PMT evolved across different product versions, absorbing new features and enhancements.

        First version in wso2greg-4.5.1

        PMT was initially designed on the G-Reg 4.5.1 version. Using the RXT concept, an artifact type called patch was built to capture all metadata related to a patch. It also defines a storage path for patch metadata and a listing view which provides a quick glance at the existing patches. A few important parts of the RXT are discussed below:

        Our requirement was to capture the JIRA's basic information, the client/s to whom the patch is issued, the people involved, dates, and related documentation and repositories. So the RXT was categorized into tables: Overview, People Information, JIRA Information, Patch Information, Test Information and Dates.

        <table name="Overview">
        <table name="People Involved">
        <table name="Patch Information">
        <table name="Test Information">
        <table name="Dates">

        Each of the above <table> elements holds the attributes related to it. Most of these were captured via <field type="options"> or <field type="text">.
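        For illustration, a minimal sketch of what one such table definition might look like in the patch RXT. The field names and option values here are hypothetical, not the actual PMT configuration; the structure follows the standard G-Reg RXT syntax for text and options fields.

```xml
<table name="Overview">
    <!-- free-form text field -->
    <field type="text" required="true">
        <name>Patch Number</name>
    </field>
    <!-- drop-down field with a fixed set of options -->
    <field type="options">
        <name>Patch Status</name>
        <values>
            <value>Development</value>
            <value>ReadyForQA</value>
            <value>Testing</value>
            <value>Released</value>
        </values>
    </field>
</table>
```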

        The above RXT (patchRXT) was associated with a lifecycle to manage the patching process. The patch lifecycle involves main stages such as Development, ReadyForQA, Testing and Released. Each state includes a set of check list items, which list the tasks that a developer or QA needs to follow while in that particular lifecycle state.

        The sample code segment below shows the configuration of the 'Testing' state:

          <state id="Testing">
              <data name="checkItems">
                  <item name="Verified patch zip file format" forEvent="Promote"/>
                  <item name="Verified README" forEvent="Promote"/>
                  <item name="Verified EULA" forEvent="Promote"/>
                  <item name="Reviewed the automated tests provided for the patch"/>
                  <item name="Reviewed info given on public/support commits" forEvent="Promote"/>
                  <item name="Verified the existence of previous patches in the test environment" forEvent="Promote"/>
                  <item name="Automated tests framework run on test environment"/>
                  <item name="Checked md5 checksum for jars for hosted and tested" forEvent="Promote"/>
                  <item name="Patch was signed" forEvent="Promote"/>
                  <item name="JIRA was marked resolved" forEvent="Promote"/>
              </data>
              <data name="transitionUI">
                  <ui forEvent="Promote" href="../patch/Jira_tanstionUI_ajaxprocessor.jsp"/>
                  <ui forEvent="ReleasedNotInPublicSVN" href="../patch/Jira_tanstionUI_ajaxprocessor.jsp"/>
                  <ui forEvent="ReleasedNoTestsProvided" href="../patch/Jira_tanstionUI_ajaxprocessor.jsp"/>
                  <ui forEvent="Demote" href="../patch/Jira_tanstionUI_ajaxprocessor.jsp"/>
              </data>
              <transition event="Promote" target="Released"/>
              <transition event="ReleasedNotInPublicSVN" target="ReleasedNotInPublicSVN"/>
              <transition event="ReleasedNoTestsProvided" target="ReleasedNoTestsProvided"/>
              <transition event="Demote" target="FailedQA"/>
          </state>

        As you may have noticed, the lifecycle transitionUI gives the user an additional interface between state changes. Above, on completion of all check list items of the Testing state, the transition pauses to log the time that was spent on the tasks. This information directly updates the live support JIRA and is used for billing purposes. UI handlers were used to generate this transition UI dynamically. Later in the cycle we removed it, when Jagath and Parapran introduced a new time-logging web application.

        Given the various paths in which a patch might need to be parked depending on environment factors, we had to introduce intermediate states such as 'ReleasedNotInPublicSVN' and 'ReleasedNoTestsProvided'. The <transition event> tag was helpful here. For example, a patch can be promoted to the 'Released', 'ReleasedNotInPublicSVN' or 'ReleasedNoTestsProvided' states, or demoted to the 'FailedQA' state, using the <transition event> option.


        Performance Issues in G-Reg 4.5.1

        Compared with the low concurrency rate we had at the time, the response time was high while PMT was hosted on G-Reg 4.5.1. This was even observed on the JIRA details page. The suspect was a *social* feature which pops up worklist items in the UI; as per the design at the time, it used to run at each page refresh. There was also a delay when working with lifecycle state checklist items.

        Migrated to G-Reg 4.5.3

Considering the performance problems and some of the new enhancements to RXTs, PMT was migrated to G-Reg 4.5.3. The migration process was very smooth; there were no database schema changes, and we were able to point to the same database that was used earlier. The above issue was solved in the new version of G-Reg.

There were also RXT related enhancements: we could now have a pop-up calendar for dates, as opposed to the text box in the previous version. Filtering: in the earlier version, patch filtering was done through a custom development which we had deployed in CARBON_HOME/repository/component/lib, but in the new version this was an inbuilt feature. There were also enhancements in G-Reg's governance API that we could leverage in generating reports and patch numbers.

Moving historical data to the live environment

Prior to PMT, Google spreadsheets were used to maintain patch-related metadata. Since PMT is used as a repository to 'search' patches and related information, it was time to move the old data into PMT as well. All the previous data captured in 6 different spreadsheets was moved to PMT. This involved data analysis, since some of the field names differed from spreadsheet to spreadsheet and from the spreadsheets to PMT. After these background adjustments, Ashan from the support team moved the historical data by retrieving it from the spreadsheets using data services and writing it to PMT using the RemoteRegistry API [2 -svn location of gov-api code].

This added 900+ patches to the live system and also affected performance, as described below. It was done using a data service to get the data from the spreadsheets and a Java client to add them to PMT [attached].

        Looking for new avenues


With the patch database growing through data migrated from the archives, as well as PMT being used daily by rotational support teams, unpredictable stability issues arose. PoolExhaustedException was one of them, reproducible when the loaded database had been idle for a couple of days [1].

After looking into the details, leading G-Reg engineers Ajith and Shelan proposed some changes to the data-sources configuration which helped.

        Introduced 'removeAbandoned' and 'removeAbandonedTimeout':
        removeAbandonedTimeout="<This value should be more than the longest possible running transaction>"

Updated 'maxActive' to a value higher than the default of 80.
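Put together, the datasource changes amount to something like the following entry in the Carbon master-datasources.xml (a minimal sketch; the datasource name, credentials and concrete values are illustrative, not the actual PMT configuration):

```xml
<datasource>
    <name>WSO2_PMT_DB</name>   <!-- name and connection details are illustrative -->
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/pmtdb</url>
            <username>pmtuser</username>
            <password>********</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <!-- raised above the default of 80 -->
            <maxActive>150</maxActive>
            <!-- reclaim connections held longer than the longest expected transaction -->
            <removeAbandoned>true</removeAbandoned>
            <removeAbandonedTimeout>300</removeAbandonedTimeout>
        </configuration>
    </definition>
</datasource>
```

These are standard Tomcat JDBC pool parameters, so the same tuning applies to any Carbon product using the shared datasource configuration.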

But we had a couple more problems: we noticed that the response time for listing the patches was unreasonably high. One of the reasons isolated with Ajith's help was the default setting for versioning, which was set to true. In G-Reg, Lifecycle related data is stored as resource properties, so when <versioningProperties>true</versioningProperties> is set, resource updates massively grow the REG_RESOURCE_PROPERTY and REG_PROPERTY tables. With the patch LifeCycle (which contains 20 check list items), one patch resource going through all the LC states adds more than 1100 records to REG_RESOURCE_PROPERTY and REG_PROPERTY when versioning is set to true.

Summary of Ajith's tests (records added per operation):

Versioning on:
Adding LC: 22
One click on a check list item: 43
One promote or demote: 71

Versioning off:
Adding LC: 22
One click on a check list item: 0
One promote or demote: 24

As a solution, the following was done:
1) Created a script to delete the unwanted properties.

2) Disabled the versioning of properties, comments and ratings in the static configuration in registry.xml [2 -]
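The static configuration change amounts to flipping the versioning flags in registry.xml, roughly as below (a sketch based on G-Reg's default registry.xml layout; element names may vary slightly between versions):

```xml
<staticConfiguration>
    <!-- stop versioning properties, comments and ratings on every resource update -->
    <versioningProperties>false</versioningProperties>
    <versioningComments>false</versioningComments>
    <versioningRatings>false</versioningRatings>
</staticConfiguration>
```

Disabling these stops the REG_RESOURCE_PROPERTY and REG_PROPERTY tables from growing with every lifecycle operation.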

There was also another bug found related to artifact listing. There is a background task in G-Reg to cache all the generic artifacts, which helps reduce database calls when we need to retrieve the artifacts (get from cache instead of database). In G-Reg 4.5.3 it did not work for custom artifacts such as the patch RXT; it only worked for service, policy, wsdl and schema. This issue was patched by the G-Reg team.

        Process Validation:

With the number of users increasing daily, tightening the link with the process became very important. The existing UI at that time was the G-Reg management console, but our requirement was to validate Lifecycle events against the data in the RXT.

        Jaggery based web application and GReg upgrade

The G-Reg migration to version 4.6.0 was done by Paraparan to overcome the above issues, which were known and fixed in the new G-Reg. A new Jaggery based web application was developed by the support intern at the time, Nazmin, as per Yumani's design and with Jagath and team's valuable feedback.

The intention of the new PMT application was to provide a very user friendly view and to ensure that the developers of the patches follow all necessary steps in creating a patch. For example, a developer cannot proceed from the 'development' state if he has not updated the support JIRA, public JIRA and svn revision fields. If he selects to say automation testing is 'Not possible', he has to give reasons. If he has done tests, he has to give the test commit location, etc. Yes, we had occasional broken use cases; special thanks to Paraparan and Ashan, who fixed these cases in no time.

        Image 1: Patch Listing

        Image 2: Patch Details

This application was highly appreciated by the users, as it allowed auto generation of patch numbers, pre-populated data for customer projects, users, products and versions, etc. As promised, it duly validates all mandatory inputs by binding directly to user activities.

Today, the application has been further developed to capture various patch categories such as ported patches, critical patches and preQA patches. The G-Reg based life cycle has also been further developed to cater for pre patch creation and post patch creation tasks. It was Inshaf who implemented these enhancements. We are also working on new features such as updating customers with ETAs for patches based on three point estimation, looping in leads and rigorously following up on delays.

The application was also deployed in an HA setup by Chamara, Inshaf and Ashan for higher scalability, where the back-end G-Reg is clustered into 4 nodes fronted by Nginx, which sits between the client application and the BE. We have 100+ engineers accessing this application daily, with around 15 concurrent users adding patches to the patch queue, generating patch numbers, progressing patches through the patch life cycle and searching for meta information.

Additionally, the PMT database is routinely queried by several other client programs: the Patch Health clients, where we generate reports on patches which are not process complete; Service Pack generation, where the whole database is read to extract patches belonging to a given product version and its kernel; and reports for QA at release testing time, where they look for use cases patched for the customer. Most of these client applications are written using the governance API.

PMT is a good example of a simple use case which expanded into a full scale system. It is heavily used in WSO2 support today, and we have extended it with many new features to meet the increasing support demands. We have been able to leverage the Jaggery framework for all UI level enhancements. Governance Registry's ability to define any type of governance asset, along with its customizable life cycle management feature, pioneered the inception of this tool and helped in catering for the different data patterns that we wanted to preserve, as well as the life cycle changes that needed to be added later as the process changed. We were able to add customized UIs to the application using the handler concept in G-Reg. When the front end Jaggery web application was introduced, the integration was very seamless since the back-end could be accessed via the governance API. Increasing load demands were well supported by Carbon clustering and high availability concepts.

sanjeewa malalgoda: How to send a specific status code and message based on different authentication failures in WSO2 API Manager

In WSO2 API Manager, all authentication failures hit the auth failure handler. There you can change the message body, content and headers based on internal error codes.
For example, if we get a resource not found error while doing token validation, the error code will be 900906. In the same way, there are different error codes for different failures.

So in this sample we will generate a custom message for resource not found issues encountered while doing token validation.
For this we specifically check for error code 900906 and then route the request to a specific sequence.

Please refer to the following sequence, and change _auth_failure_handler_ to call it.


<sequence name="_auth_failure_handler_" xmlns="http://ws.apache.org/ns/synapse">
    <property name="error_message_type" value="application/xml"/>
    <filter source="get-property('ERROR_CODE')" regex="900906">
        <sequence key="sample"/>
    </filter>
    <sequence key="_build_"/>
</sequence>


<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="sample">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:message>Resource not found</am:message>
                <am:description>Wrong http method</am:description>
            </am:fault>
        </format>
        <args/>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="405" scope="axis2"/>
    <property name="messageType" value="application/xml" scope="axis2"/>
    <send/>
</sequence>

Dhananjaya jayasinghe: How to generate a custom Error Message with a Custom HTTP Status Code for unavailable Resources in WSO2 ESB

WSO2 ESB 4.8.1 does not throw any exception or error message when an API is accessed with an incorrect HTTP method; it just responds with a 202. In this blog post I explain how we can return a custom HTTP status code for this case.

In order to get a custom error message, you need to add the following sequence to the ESB; it is not there by default.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="_resource_mismatch_handler_">
    <payloadFactory media-type="xml">
        <format>
            <tp:fault xmlns:tp="http://test.com">
                <tp:type>Status report</tp:type>
                <tp:message>Method not allowed</tp:message>
                <tp:description>The requested HTTP method for resource (/$1) is not allowed.</tp:description>
            </tp:fault>
        </format>
        <args>
            <arg xmlns:ns="http://org.apache.synapse/xsd" expression="$axis2:REST_URL_POSTFIX" evaluator="xml"/>
        </args>
    </payloadFactory>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="HTTP_SC" value="405" scope="axis2"/>
    <send/>
</sequence>

The ESB documentation [1] explains that, in order to handle non-matching resources, you need to define this sequence with the name _resource_mismatch_handler_.


Dhananjaya jayasinghe: How to generate a custom Error Message with a Custom HTTP Status Code for unavailable Resources in WSO2 API Manager

We are going to explain how to generate a custom HTTP status code for a request which is addressed to a non-matching resource of an API.

        Problem :

When an API is exposed with the resource "GET", if the client invokes the API with "POST", "PUT" or any other method which is not "GET", by default API Manager returns the following.

{
    "type":"Status report",
    "message":"Runtime Error",
    "description":"No matching resource found in the API for the given request"
}

At the raw HTTP level you will see it as follows.

        HTTP/1.1 403 Forbidden
        Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
        Content-Type: application/xml; charset=UTF-8
        Date: Mon, 29 Jun 2015 14:46:29 GMT
        Server: WSO2-PassThrough-HTTP
        Transfer-Encoding: chunked
        Connection: Keep-Alive

<ams:fault xmlns:ams="http://wso2.org/apimanager/security">
    <ams:message>No matching resource found in the API for the given request</ams:message>
    <ams:description>Access failure for API: /sss, version: v1 with key: 4a33dc81be68d1b7a5b48aeffebe7e</ams:description>
</ams:fault>

        Expected Solution :

We need to replace this response with HTTP status code 405 [1] and a custom error message.

        Solution :

We need to create a sequence which builds the custom error message and the error code, and deploy it in API Manager's default sequences folder.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="converter">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:message>Resource not found</am:message>
                <am:description>Wrong http method</am:description>
            </am:fault>
        </format>
        <args/>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="405" scope="axis2"/>
    <property name="messageType" value="application/xml" scope="axis2"/>
    <send/>
</sequence>

        You can save this as converter.xml in wso2am-1.8.0/repository/deployment/server/synapse-configs/default/sequences folder.

Then we need to invoke this sequence from _auth_failure_handler_.xml, which is located in the same sequences folder. In order to do that, we need to change it as follows.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="_auth_failure_handler_">
    <property name="error_message_type" value="application/xml"/>
    <filter source="get-property('ERROR_CODE')" regex="900906">
        <sequence key="converter"/>
    </filter>
    <sequence key="_build_"/>
</sequence>

Once you have made the above changes, save them. Then you can test your scenario. If you are successful, you will see the following response.

        HTTP/1.1 405 Method Not Allowed
        Content-Type: application/xml
        Date: Mon, 29 Jun 2015 14:59:12 GMT
        Server: WSO2-PassThrough-HTTP
        Transfer-Encoding: chunked
        Connection: Keep-Alive

<am:fault xmlns:am="http://wso2.org/apimanager">
    <am:message>Resource not found</am:message>
    <am:description>Wrong http method</am:description>
</am:fault>

        Explanation : 

By default, when we invoke a non-existing resource, it sends the default 403 error code with the message "No matching resource found in the API for the given request". If you check the log of WSO2 AM, you can see that it has thrown the following exception in the backend.

        [2015-06-29 10:59:12,103] ERROR - APIAuthenticationHandler API authentication failure Access failure for API: /sss, version: v1 with key: 4a33dc81be68d1b7a5b48aeffebe7e
        at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(
        at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(
        at org.apache.axis2.engine.AxisEngine.receive(
        at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(
        at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(
        at org.apache.axis2.transport.base.threads.NativeWorkerPool$
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$

When it throws the above exception, the flow hits the _auth_failure_handler_.xml sequence. In this sequence, using the filter mediator, we filter on the error code "900906"; for that error code we invoke our custom sequence and then drop the message.

In the custom sequence, we have used the payload factory mediator to create the payload and added the required properties to make it a response. You can find further information on each of those properties in [2][3][4].

After invoking the custom sequence, it invokes the "_build_" sequence in the same folder, which invokes the message builders to build the message.

I used resources [4] when creating this blog post.


Chanaka Fernando: WSO2 ESB Error Handling Tutorial - Part I (Client side error handling)

Recently, I found a nice video on Facebook shared by Sanjiva Weerawarana (CEO @ WSO2): a narration by Matt Damon of a paragraph taken from Howard Zinn's 1970 speech.

According to it, the world is topsy turvy (upside down). The wrong people are in power, the wrong people are out of power. But there is one thing missing in that speech: the wrong people are using software, and the wrong people are not using software :).

Sorry about going off topic, but this speech really moved me. Anyway, let's start talking about the subject. WSO2 ESB is the central hub of your SOA architecture. It communicates with all kinds of heterogeneous systems, and these systems can go mad sometimes. In such scenarios, WSO2 ESB should not go mad. Yet if you haven't done proper error handling in WSO2 ESB, even when it has not gone mad, people will feel that it has by looking at lengthy error logs and exceptions. So why let people think that way? Instead, we can implement a proper error handling mechanism and make WSO2 ESB look solid at all times.

        Here is a typical message flow in your enterprise system which involves WSO2 ESB.
        In the above message flow, things can go wrong in all 3 components. 
        • Client Error
        • ESB Error
        • Server Error
In all 3 scenarios, we need a proper handling mechanism to identify error scenarios as soon as they occur. Otherwise, it will cost your organization's business; the cost can range from hundreds of dollars to millions of dollars. Let's discuss error handling at each component depicted above.

        Handling Client errors

Anything can go wrong at any time; that is a fact in this world. So it is with the clients using the services/APIs exposed through WSO2 ESB. Here are some example scenarios where clients go mad during message execution.

        • Sending wrong messages (non-allowed content)
        • Closing connections early
        • Sending requests to wrong URLs
  Let's discuss these scenarios one by one and learn about the error handling mechanisms we can use to get over them.

        Sending wrong messages

Let's say your client is sending XML messages to the ESB. Due to a mistake in the client code, the client sends the following message.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
   <soapenv:Body><ser:getQuote><ser:request><xsd:symbol>WSO2 & IBM</xsd:symbol></ser:request></ser:getQuote></soapenv:Body>
</soapenv:Envelope>

In the above message, the client is sending the '&' character within the XML message (this is only an example). Let's say we have a very simple pass-through proxy service defined in the ESB.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="https http" startOnLoad="true">
   <!-- proxy name is illustrative -->
   <target>
      <endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint>
   </target>
</proxy>

In this case, the ESB will simply pass the message through to the backend without worrying about the content. Depending on the capabilities of the back end server, it will respond with some message, and the ESB will pass the response to the client. No worries at this point.

All good with the above proxy service. Now let's add a log mediator with log level "full". Once we have this content-aware mediator in the message flow, the ESB tries to convert the incoming data stream into a canonical XML message. Here comes the exception: since this message contains invalid XML characters, the ESB will fail during the message building process. Now the proxy looks like below.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="https http" startOnLoad="true">
   <target>
      <inSequence><log level="full"/></inSequence>
      <endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint>
   </target>
</proxy>

        Once we have this content-aware mediator in place, ESB will try to build the message and it will fail with an exception similar to this.

        ERROR - NativeWorkerPool Uncaught exception ParseError at [row,col]:[8,29]
        Message: The entity name must immediately follow the '&' in the entity reference.

This is fine, since the message is wrong. What is wrong here is that the client did not get any message from the ESB about this error. The client will then time out the connection after waiting for the configured duration, and you can see the following log in the ESB console.

        [2015-06-27 17:26:08,885]  WARN - SourceHandler Connection time out after request is read: http-incoming-2

We need to handle this situation with a proper error handler. We can define a fault sequence at the proxy service level; the message will then go through this fault handler sequence, and we can send a fault message to the client when we encounter this kind of message. Let's add the fault sequence to the proxy service.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="https http" startOnLoad="true">
   <target>
      <inSequence><log level="full"/></inSequence>
      <faultSequence>
         <makefault version="soap11">
            <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:Client"/>
            <reason expression="$ctx:ERROR_MESSAGE"/>
         </makefault>
         <send/>
      </faultSequence>
      <endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint>
   </target>
</proxy>

Once we have the fault sequence in place, the client will get a fault message as the response for this message.

        Closing connections early

Another common use case is that the client closes the connection before the server responds with the message. When this happens, you can observe the following error message in the server log file.

        [2015-06-28 19:28:37,524]  WARN - SourceHandler Connection time out after request is read: http-incoming-1

This slowness can occur due to back end slowness or due to ESB server contention. In both scenarios, you need to increase the client timeout to get rid of this error. You can configure the client timeout to a value comfortably greater than the maximum response time of the server. But configuring the client timeout alone will not make this scenario work. The reason is that, even though the client has increased its timeout, the ESB will close the connection after 60 seconds (the default value). Therefore, you also need to configure the client side HTTP connection timeout in the ESB_HOME/repository/conf/passthru-http.properties file with the following parameter (file name and example value assumed from the standard ESB 4.8.1 layout; the value is in milliseconds). Add this parameter if it is not already there.

http.socket.timeout=120000
        Sending Requests to wrong URL

Sometimes, a client may send requests to non-existing URLs. For example, let's say you have an API defined in the ESB like below.

<api xmlns="http://ws.apache.org/ns/synapse" name="test" context="/test">
   <resource methods="POST GET" url-mapping="/echo">
      <inSequence>
         <log level="full"/>
         <send><endpoint><address uri="http://localhost:9000/services/SimpleStockQuoteService"/></endpoint></send>
      </inSequence>
   </resource>
</api>

        According to the definition, you need to send the request to following URL.


        But due to a mistake by the client, it sends a request to the following URL


What happens here is that the ESB responds with a 202 Accepted message to the client. That is not the correct message for the ESB to send, since the ESB has not processed the message correctly. What it should do is respond to the client with an error message, so that the client can go through the error response and identify the root cause.

        We need to define a special sequence for handling this kind of failed requests. You can define this sequence as given below.

<sequence xmlns="http://ws.apache.org/ns/synapse" name="_resource_mismatch_handler_">
   <payloadFactory media-type="xml">
      <format>
         <tp:fault xmlns:tp="http://test.com">
            <tp:type>Status report</tp:type>
            <tp:message>Not Found</tp:message>
            <tp:description>The requested resource (/$1) is not available.</tp:description>
         </tp:fault>
      </format>
      <args>
         <arg xmlns:ns="http://org.apache.synapse/xsd" xmlns:ns3="http://org.apache.synapse/xsd" expression="$axis2:REST_URL_POSTFIX" evaluator="xml"/>
      </args>
   </payloadFactory>
   <property name="RESPONSE" value="true" scope="default"/>
   <property name="NO_ENTITY_BODY" action="remove" scope="axis2"/>
   <property name="HTTP_SC" value="404" scope="axis2"/>
   <header name="To" action="remove"/>
   <send/>
</sequence>

In the above sequence, you can change the internal mediators as you wish, but the name of the sequence must stay exactly _resource_mismatch_handler_. Once you have this sequence in place, clients will get the following error message when they send requests to non-existing API resources.

<tp:fault xmlns:tp="http://test.com">
   <tp:type>Status report</tp:type>
   <tp:message>Not Found</tp:message>
   <tp:description>The requested resource (//echo-test) is not available.</tp:description>
</tp:fault>

I will discuss the remaining two scenarios in a future blog post.

        Handling back end Server errors

        Handling ESB errors

Ajith Vitharana: Add a third party library to a custom feature developed for a WSO2 product

This is a great article which explains how to write a custom feature for a WSO2 product.

This blog post explains how to add a third party library to your custom feature.

1. Create an orbit bundle from that third party library.

Eg: This is how to make the Apache POI library an OSGi bundle.


2. Build your orbit bundle (using maven).

        3. Then you need to add that dependency to student-manager/features/org.wso2.carbon.student.mgt.server.feature/pom.xml file.


        4. Add new <bundleDef> entry inside the <bundles> element in student-manager/features/org.wso2.carbon.student.mgt.server.feature/pom.xml file.

        The format of the <bundleDef> should be , <bundleDef>[groupId]:[artifactId]</bundleDef>


org.apache.poi.wso2 - groupId of the above dependency
poi - artifactId of the above dependency
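Putting steps 3 and 4 together, the relevant additions to the feature pom.xml would look roughly like the following (a sketch; the version string and surrounding plugin configuration are illustrative and depend on your project):

```xml
<!-- step 3: dependency on the orbit bundle (version illustrative) -->
<dependency>
    <groupId>org.apache.poi.wso2</groupId>
    <artifactId>poi</artifactId>
    <version>3.9.0.wso2v1</version>
</dependency>

<!-- step 4: reference the bundle inside the feature plugin's <bundles> element -->
<bundles>
    <bundleDef>org.apache.poi.wso2:poi</bundleDef>
</bundles>
```

With both entries in place, the p2 repository build picks up the orbit jar and packages it alongside your feature.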

5. Build the student-manager project again; now you should see the poi_3.9.0.wso2v1.jar file in the student-manager/repository/target/p2-repo/plugins directory.

6. Finally, when you install the student-manager feature into a WSO2 server (from that p2-repo), the third party library (poi_3.9.0.wso2v1.jar) will be installed automatically.

Denis Weerasiri: Bird's-eye view of Sri Lanka

If you are travelling to Sri Lanka, or are a nature lover who hasn't yet heard about this beautiful island, these nature documentaries are for you.

        Ocean of Giants

        Land of Lakes

        Forest of Clouds

Dhananjaya jayasinghe: How to add a thread sleep to a Proxy Service

Here I am going to provide an example of how we can create a mock service with WSO2 ESB and add a sleep to that service.

In order to do that we need to use:

1. Payload Factory mediator to create the mock response
2. Script mediator to do a thread sleep
Here is the simple mock service proxy with a thread sleep.

<proxy xmlns="http://ws.apache.org/ns/synapse" name="MockProxy" transports="https http" startOnLoad="true">
   <!-- proxy name and payload content are illustrative -->
   <target>
      <inSequence>
         <property name="===Before sleep===" value="===Before sleep==="/>
         <script language="js">java.lang.Thread.sleep(75000);</script>
         <property name="===After sleep===" value="===After sleep==="/>
         <payloadFactory media-type="xml">
            <format><Response xmlns="">mock response</Response></format>
            <args/>
         </payloadFactory>
         <header name="To" action="remove"/>
         <property name="RESPONSE" value="true" scope="default" type="STRING"/>
         <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
         <send/>
      </inSequence>
   </target>
</proxy>

        I have used the blog of miyuru [1] to create this.


Dhananjaya jayasinghe: WSO2 IS User Store as ReadOnly/ReadWrite LDAP secondary user store

In many testing scenarios, we need to connect our products to a secondary user store, which can be a ReadOnly or ReadWrite LDAP user store.

        This is a simple way to get it done with WSO2 Identity Server.

Unlike other WSO2 products, IS ships with an LDAP user store as its primary user store. So if we need to point any of the other products to an LDAP secondary user store, we can easily use WSO2 IS for that.

        Case 01: Pointing WSO2 AM to a ReadOnlyLDAP Secondary user store

        • Download, Extract, Start WSO2 IS
        • Download, Extract WSO2 AM
        • If we are running both products in the same machine, we need to change the offset of the AM
        • Open the carbon.xml file located in "wso2am-1.9.0/repository/conf" folder and change the Offset value to "1". (By default it is "0")
        • Start AM
        • Browse url https://localhost:9444/carbon/
        • Login with credentials admin/admin
        • From the left menu , click on "Configure"

        • Click on "User Store Management"
        • Then click on "Add Secondary User Store" button 
        • From the drop down at the top, select "ReadOnlyLdapUserStoreManager" as the user store manager class.
        • Then provide parameters as follow
  • Domain Name : Any Name
          • Connection Name : uid=admin,ou=system
          • Connection URL : ldap://localhost:10389
          • Connection Password : admin
          • User search base : ou=Users,dc=wso2,dc=org
          • User Object Class : (objectClass=person)
          • Username Attribute : uid
          • User search filter : (&(objectClass=person)(uid=?))
        • Then click on Add. 
        • After few seconds, it will be displayed in the user Store list 
• You can find these configurations in the user-mgt.xml file located in the "wso2am-1.9.0/repository/conf" folder. You need to pay attention to the parameter "User search base": by default it is given as "ou=system", but with that you will not be able to view the users of the secondary user store. Here I have added the correct value "ou=Users,dc=wso2,dc=org".
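For reference, the values entered above end up as properties of a UserStoreManager configuration block, roughly as sketched below (a hand-written sketch; the exact property names and the file the server writes them to can vary between Carbon versions, and the domain name is illustrative):

```xml
<UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager">
    <!-- values mirror the settings entered in the management console -->
    <Property name="ConnectionName">uid=admin,ou=system</Property>
    <Property name="ConnectionURL">ldap://localhost:10389</Property>
    <Property name="ConnectionPassword">admin</Property>
    <Property name="UserSearchBase">ou=Users,dc=wso2,dc=org</Property>
    <Property name="UserNameAttribute">uid</Property>
    <Property name="UserNameSearchFilter">(&amp;(objectClass=person)(uid=?))</Property>
</UserStoreManager>
```

Checking this block after saving is a quick way to verify that the UI stored the user search base correctly.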

        Case 02: Pointing WSO2 AM to a ReadWriteLDAP Secondary user store

        Please follow the documentation

Kavith Thiranga Lokuhewage: How to use DTO Factory in Eclipse Che

        What is a DTO?

Data transfer objects are used in Che for communication between client and server. At the code level, a DTO is just an interface annotated with @DTO (com.codenvy.dto.shared.DTO). This interface should contain getters and setters (following bean naming conventions) for each and every field that we need in this object.
         For example, following is a DTO with a single String field.

@DTO
public interface HelloUser {
    String getHelloMessage();
    void setHelloMessage(String message);
}

By convention, we need to put these DTOs in a shared package, as they will be used by both the client and server sides.

        DTO Factory 

DTO Factory is a factory available on both the client and server sides, which can be used to serialize/deserialize DTOs. It internally uses generated DTO implementations (described in the next section) to get this job done. Yet it has a properly encapsulated API, and developers can simply use a DtoFactory instance directly.

For client side: com.codenvy.ide.dto.DtoFactory
For server side: com.codenvy.dto.server.DtoFactory

        HelloUser helloUser = DtoFactory.getInstance().createDto(HelloUser.class);
The above code snippet shows how to initialize a DTO using DtoFactory. As mentioned above, the proper DtoFactory class should be used on the client or server side.
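To make the round trip concrete, here is a hypothetical, hand-written sketch of what a generated DTO implementation conceptually does for the HelloUser interface above. The class name and JSON layout are illustrative only; real implementations are produced by the DTO maven plugin described later, not written by hand:

```java
// Hypothetical sketch: Che's real DTO implementations are generated, not hand-written.
public class HelloUserSketch {
    private String helloMessage;

    public String getHelloMessage() { return helloMessage; }
    public void setHelloMessage(String message) { this.helloMessage = message; }

    // The generated implementation serializes the bean fields to JSON roughly like this.
    public String toJson() {
        return "{\"helloMessage\":\"" + helloMessage + "\"}";
    }

    public static void main(String[] args) {
        HelloUserSketch dto = new HelloUserSketch();
        dto.setHelloMessage("Hello John");
        System.out.println(dto.toJson()); // prints {"helloMessage":"Hello John"}
    }
}
```

The point is simply that DtoFactory hides this bean-to-JSON mapping behind createDto and the serialization methods, so application code never touches the generated classes directly.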

        Deserializing in client side

// important imports omitted

// invoke helloService
Unmarshallable<HelloUser> unmarshaller = unmarshallerFactory.newUnmarshaller(HelloUser.class);

helloService.sayHello(sayHello, new AsyncRequestCallback<HelloUser>(unmarshaller) {
    @Override
    protected void onSuccess(HelloUser result) {
        // the deserialized DTO is available here
    }

    @Override
    protected void onFailure(Throwable exception) {
        // handle the error
    }
});

When invoking a service that returns a DTO, the client side should register a callback created using the relevant unmarshaller factory. The onSuccess method will then be called with a deserialized DTO.

        Deserializing in server side

        public ... sayHello(SayHello sayHello) {
            ... sayHello.getHelloMessage() ...
        }

        Everest (the JAX-RS implementation in Che) automatically deserializes DTOs when they are used as parameters in REST services. It identifies a serialized DTO by the declared media type - @Consumes(MediaType.APPLICATION_JSON) - and uses the generated DTO implementations to deserialize it.

        DTO maven plugin

        As mentioned earlier, for DtoFactory to function properly, it needs generated code containing the concrete logic to serialize/deserialize DTOs. The GWT compiler should be able to access the generated code for the client side, and the generated code for the server side should go into the jar file.
        Che uses a special Maven plugin called “codenvy-dto-maven-plugin” to generate this code. The following figure illustrates a sample configuration of the plugin, with separate executions for the client and server sides.
        We have to supply the correct package structures and the file paths to which the generated files should be copied.

        dependencies - other dependencies, if the DTOs in the current project need them
        package - the package in which the DTO interfaces reside
        outputDirectory - the directory to which the generated files should be copied
        genClassName - the class name for the generated class
        You should also configure your Maven build to use these generated classes as a resource when compiling and packaging, by adding the appropriate entry to the resources in the build section.
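The sample configuration figure referenced above did not survive. Below is a hedged sketch of what a codenvy-dto-maven-plugin setup with separate client and server executions might look like; apart from the package/outputDirectory/genClassName parameters described above, the plugin coordinates, goal name, phase, and example values are assumptions to verify against your Che version.

```xml
<plugin>
    <artifactId>codenvy-dto-maven-plugin</artifactId>
    <executions>
        <!-- Client-side DTO implementations, visible to the GWT compiler -->
        <execution>
            <id>generate-client-dto</id>
            <phase>process-sources</phase>
            <goals>
                <goal>generate</goal>  <!-- assumed goal name -->
            </goals>
            <configuration>
                <package>com.example.shared.dto</package>
                <outputDirectory>${project.build.directory}/generated-sources/dto-client</outputDirectory>
                <genClassName>com.example.client.DtoClientImpls</genClassName>
            </configuration>
        </execution>
        <!-- Server-side DTO implementations, packaged into the jar -->
        <execution>
            <id>generate-server-dto</id>
            <phase>process-sources</phase>
            <goals>
                <goal>generate</goal>
            </goals>
            <configuration>
                <package>com.example.shared.dto</package>
                <outputDirectory>${project.build.directory}/generated-sources/dto-server</outputDirectory>
                <genClassName>com.example.server.DtoServerImpls</genClassName>
            </configuration>
        </execution>
    </executions>
</plugin>
```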


        Kavith Thiranga LokuhewageGWT MVP Implementation in Eclipse Che

        MVP Pattern

        Model View Presenter (aka MVP) is a design pattern that attempts to decouple the logic of a component from its presentation. This is similar to the popular MVC (model view controller) design pattern, but has some fundamentally different goals. The benefits of MVP include more testable code, more reusable code, and a decoupled development environment.
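Before looking at Che's concrete classes, the wiring MVP prescribes can be sketched framework-free. All names below are illustrative, not Che's actual interfaces (those live in com.codenvy.ide.api.mvp); the point is only that the presenter talks to the view through an interface, and the view reports events through a delegate.

```java
// Framework-free MVP sketch: view interface, nested delegate, fake view, presenter.

interface GreetingView {
    void setGreeting(String text);                    // called by the presenter
    void setDelegate(GreetingView.ActionDelegate d);  // wires events back

    interface ActionDelegate {                        // events the view delegates
        void onNameEntered(String name);
    }
}

// Test double standing in for a real widget-based view implementation.
class FakeGreetingView implements GreetingView {
    String shown;
    GreetingView.ActionDelegate delegate;
    public void setGreeting(String text) { shown = text; }
    public void setDelegate(GreetingView.ActionDelegate d) { delegate = d; }
    void typeName(String name) { delegate.onNameEntered(name); } // simulates a UI event
}

class GreetingPresenter implements GreetingView.ActionDelegate {
    private final GreetingView view;
    GreetingPresenter(GreetingView view) {
        this.view = view;
        view.setDelegate(this);               // register as the view's delegate
    }
    public void onNameEntered(String name) {
        view.setGreeting("Hello, " + name);   // business logic lives in the presenter
    }
}

public class MvpSketch {
    public static void main(String[] args) {
        FakeGreetingView view = new FakeGreetingView();
        new GreetingPresenter(view);
        view.typeName("Che");                 // UI event -> presenter -> view
        System.out.println(view.shown);
    }
}
```

Because the presenter only ever sees the GreetingView interface, it can be unit-tested with FakeGreetingView and no GWT runtime, which is exactly the testability benefit mentioned above.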

        MVP Implementation in Che

        Note: Code examples used in this document are from a sample project wizard page for WSO2 DSS.

        There are four main java components used to implement a Che component that follows MVP.

            1. Interface for View functionality
            2. Interface for Event delegation
            3. Implementation of View
            4. Presenter


        To reduce the number of files created for each MVP component, No. 1 and No. 2 are placed in a single Java file. To be more precise, the event delegation interface is defined as a sub-interface within the view interface.

        The view interface should define the methods that the presenter uses to communicate with the view implementation. The event delegation interface should define the methods that the presenter implements, so that the view can delegate events to the presenter through them.

        Following code snippet demonstrates these two interfaces that we created for DSS project wizard page.

        public interface DSSConfigurationView extends View<DSSConfigurationView.ActionDelegate> {
            String getGroupId();
            void setGroupId(String groupId);
            String getArtifactId();
            void setArtifactId(String artifactId);
            String getVersion();
            void setVersion(String version);

            interface ActionDelegate {
                void onGroupIdChanged();
                void onArtifactIdChanged();
                void onVersionChanged();
            }
        }

        View and Event Handler interfaces

        The view interface should extend the com.codenvy.ide.api.mvp.View interface, which defines only a single method - void setDelegate(T var1).

        ... interface DSSConfigurationView extends View<DSSConfigurationView.ActionDelegate> ...

        Using generics, we inform this super interface about our event delegation interface.

        View Implementation

        The view implementation can often extend an abstract widget such as Composite. It may also use UiBinder to implement the UI if necessary. Any approach and any GWT widget may be used; the only requirement is that it implements the view interface (created in the previous step) and the IsWidget interface (or extends a subclass of IsWidget).

        public class DSSConfigurationViewImpl extends ... implements DSSConfigurationView {

            /** Other code */

            // Maintain a reference to the presenter
            private ActionDelegate delegate;

            // Provide a setter for the presenter
            @Override
            public void setDelegate(ActionDelegate delegate) {
                this.delegate = delegate;
            }

            /** Other code */

            // Implement the methods defined in the view interface
            @Override
            public String getGroupId() {
                return groupId.getText();
            }

            @Override
            public void setGroupId(String groupId) { ... }

            /** Other code */

            // Notify the presenter of UI events using the delegation methods
            public void onGroupIdChanged(KeyUpEvent event) {
                delegate.onGroupIdChanged();
            }

            public void onArtifactIdChanged(KeyUpEvent event) {
                delegate.onArtifactIdChanged();
            }

            /** Other code */
        }

        View implementation 

        As shown in the code snippet above (see full code), the main things to do in the view implementation can be summarised as follows.

            1. Extend any widget from GWT and implement user interface by following any approach
            2. Implement view interface (created in previous step)
            3. Manage a reference to action delegate (presenter - see next section for more info)
            4. Upon any UI event, inform the presenter using the delegation methods so that the presenter can execute business logic accordingly


        Presenter

        The presenter can extend any of the available abstract presenters, such as AbstractWizardPage, AbstractEditorPresenter, and BasePresenter - anything that implements com.codenvy.ide.api.mvp.Presenter. It should also implement the action delegation interface so that the delegation methods are called upon UI events.

        public class DSSConfigurationPresenter extends ... implements DSSConfigurationView.ActionDelegate {

            // Maintain a reference to the view
            private final DSSConfigurationView view;

            /** Other code */

            public DSSConfigurationPresenter(DSSConfigurationView view, ...) {
                this.view = view;
                // Set this presenter as the action delegate for the view
                view.setDelegate(this);
            }

            /** Other code */

            // Init the view and set it in the container
            public void go(AcceptsOneWidget container) {
                container.setWidget(view);
            }

            // Execute the necessary logic upon UI events
            public void onGroupIdChanged() { ... }

            public void onArtifactIdChanged() { ... }
        }


        Depending on the presenter being extended, there may be various abstract methods that need to be implemented. For example, if you extend AbstractEditorPresenter, you need to implement initializeEditor(), isDirty(), doSave(), etc.; if it is AbstractWizardPage, you need to implement isCompleted(), storeOptions(), removeOptions(), etc.

        Yet, as shown in the code snippet above (see full code), the following are the main things you need to do in the presenter.

            1. Extend an abstract presenter as needed and implement its abstract methods/override behaviour as needed
            2. Implement the action delegation interface
            3. Maintain a reference to the view
            4. Set the presenter as the action delegate of the view using the setDelegate method
            5. Init the view and set it in the parent container (go method)
            6. Use the methods defined in the view interface to communicate with the view

        The go method is called by the Che UI framework when this particular component needs to be shown in the IDE. It is called with a reference to the parent container.

        sanjeewa malalgodaWSO2 API Manager CORS support and how it works with API gateway - APIM 1.8.0

        According to Wikipedia, cross-origin resource sharing (CORS) is a mechanism that allows restricted resources (e.g. fonts, JavaScript, etc.) on a web page to be requested from a domain outside the domain from which the resource originated. "Cross-domain" AJAX requests are forbidden by default because of their ability to perform advanced requests (POST, PUT, DELETE and other types of HTTP requests, along with specifying custom HTTP headers) that introduce many security issues, as described under cross-site scripting.

        In WSO2 API Manager, cross-domain resource sharing happens between the API Manager and the client application.
        See the following sample CORS-specific headers:
        < Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
        < Access-Control-Allow-Origin: localhost
        < Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
        The 'Access-Control-Allow-Origin' header in the response is set by the API gateway after validating the 'Origin' header from the request
        (CORS-related requests should carry an 'Origin' header to identify the requesting domain).
        Please refer to the following config element in the api-manager.xml file.

            <!--Configuration to enable/disable sending CORS headers from the Gateway-->
            <!--The value of the Access-Control-Allow-Origin header. Default values are
                API Store addresses, which is needed for swagger to function.-->

            <!--Configure Access-Control-Allow-Methods-->
            <!--Configure Access-Control-Allow-Headers-->
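The element bodies between those comments were lost above. As a hedged sketch, the CORSConfiguration block in api-manager.xml for APIM 1.8 typically looks like the following; the values are examples (chosen to match the response headers shown below), and the element names should be verified against your distribution.

```xml
<CORSConfiguration>
    <!--Configuration to enable/disable sending CORS headers from the Gateway-->
    <Enabled>true</Enabled>
    <!--The value of the Access-Control-Allow-Origin header. Default values are
        API Store addresses, which is needed for swagger to function.-->
    <Access-Control-Allow-Origin>https://localhost:9443,http://localhost:9763</Access-Control-Allow-Origin>
    <!--Configure Access-Control-Allow-Methods-->
    <Access-Control-Allow-Methods>GET,PUT,POST,DELETE,OPTIONS</Access-Control-Allow-Methods>
    <!--Configure Access-Control-Allow-Headers-->
    <Access-Control-Allow-Headers>authorization,Access-Control-Allow-Origin,Content-Type</Access-Control-Allow-Headers>
</CORSConfiguration>
```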

        We set the CORS related headers in the response from the APIAuthenticationHandler before we send the response back to the client application.

        In the API gateway, we first check the 'Origin' header value from the request (the one sent by the client) against the list defined in api-manager.xml.
        If the host is in the list, we set it in the Access-Control-Allow-Origin header of the response.
        Otherwise we set it to null, and since it is null, the header is removed from the response (access not allowed).
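The Origin check described above can be sketched as follows. This is illustrative only, not the actual APIAuthenticationHandler code; the class and method names are hypothetical.

```java
import java.util.Arrays;
import java.util.List;

public class OriginCheckSketch {

    // Allowed origins, as they would be read from api-manager.xml.
    static final List<String> ALLOWED = Arrays.asList("localhost", "https://localhost:9443");

    // Returns the value for Access-Control-Allow-Origin, or null if the
    // origin is not allowed (in which case the header is removed).
    static String allowOriginHeader(String requestOrigin) {
        return ALLOWED.contains(requestOrigin) ? requestOrigin : null;
    }

    public static void main(String[] args) {
        System.out.println(allowOriginHeader("localhost"));    // allowed: echoed back
        System.out.println(allowOriginHeader("localhostXX"));  // not allowed: null
    }
}
```

This mirrors the two curl examples below: a matching Origin is echoed back in the header, a non-matching one yields null and the header is dropped.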

        See the following sample curl commands and responses to see how the Origin header changes the response.

        curl -k -v -H "Authorization: Bearer 99c85b7da8691f547bd46d159f1d581" -H "Origin: localhost"
        < HTTP/1.1 200 OK
        < ETag: "b1-4fdc9b19d2b93"
        < Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
        < Vary: Accept-Encoding
        < Access-Control-Allow-Origin: localhost
        < Last-Modified: Wed, 09 Jul 2014 21:50:16 GMT
        < Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
        < Content-Type: text/html
        < Accept-Ranges: bytes
        < Date: Wed, 24 Jun 2015 14:17:16 GMT
        * Server WSO2-PassThrough-HTTP is not blacklisted
        < Server: WSO2-PassThrough-HTTP
        < Transfer-Encoding: chunked

         curl -k -v -H "Authorization: Bearer 99c85b7da8691f547bd46d159f1d581" -H "Origin: localhostXX"
        < HTTP/1.1 200 OK
        < ETag: "b1-4fdc9b19d2b93"
        < Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
        < Vary: Accept-Encoding
        < Last-Modified: Wed, 09 Jul 2014 21:50:16 GMT
        < Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
        < Content-Type: text/html
        < Accept-Ranges: bytes
        < Date: Wed, 24 Jun 2015 14:17:53 GMT
        * Server WSO2-PassThrough-HTTP is not blacklisted
        < Server: WSO2-PassThrough-HTTP
        < Transfer-Encoding: chunked

        As you can see, the Access-Control-Allow-Origin header is missing in the second response, because we sent an Origin that was not defined in the CORS configuration in the api-manager.xml file.

        John MathonThe Indisputable list of 10 most Innovative Influential Tech Companies

        Innovation is the lifeblood of truly transformative business success but what is it and who has it?


        Let’s define innovation a little.  If we are looking for companies filing the largest number of patents, that is a relatively meaningless measure.  Some companies encourage and facilitate patents and become patent machines.  Some companies are open source and opposed to patenting things.  Many companies don’t see any advantage in patenting things.  So, I don’t think it is a good measure of innovation or influence.


        A lot of companies have some initial “innovation” spike where they generate ideas when they first get started and shortly thereafter as they build market.  They may or may not continue to be an influential or innovative company after that first spark.    A good example of that is Uber.  After the initial idea of connecting people by cell phone to get Taxis they have made a number of very interesting innovations including innovations in billing, innovations in new services such as UberBlack and UberSUV and I’ve even heard of UberIceCream delivery.  However, even though Uber looks to be a promising entry for influential and innovative I think it is still too early to tell.  I think we have to exclude companies less than 5 years old living off a single innovation that probably a single creator came up with and the natural evolution of that idea.    I don’t think we can call a company innovative or influential if it really has only had one instance of creativity.

        One big question is: do the innovations have to be commercial successes to be considered innovative or influential?  Apple creating the smart cell phone is certainly hugely successful, and the fact that it generated the highest profitability of any company in history means it counts as innovative and influential under anybody’s definition. Netflix created Chaos Monkey and other open source tools that have had significant impact on the way many companies manage their IT infrastructure and deliver their cloud services.  These things haven’t resulted in an iota of income for Netflix, and yet they have had a big impact on many people, companies and industries. I would say that Netflix’s innovation in delivering its service, and the fact that it has made much of that innovation open source, means that it has impacted and influenced.  One could say these innovations have allowed them to succeed as a business, delivering their services better than anyone else.

        100 most influential people

        One book I read tried to rank the 100 most important influential people in history.  It did this by adding up the citations and the length of articles in books.  This seems like a possible answer to how to objectively identify the most influential or innovative companies.  But the book admits the system breaks down for more recent innovations, so it won’t work here.

        I want to distinguish between companies which were once innovative and those that no longer seem to be producing innovations for some reason.  This is a list of companies which have done something for us lately, not the has-beens.

        As a result of thinking about this I believe that deciding what companies are the most innovative and influential is subjective and probably can’t be objectively defined.    So, let’s start by listing those companies who are indisputably incredibly innovative and influential.   I will call this list the indisputables.

        To summarize my list excludes very young companies that have only a single main innovation with numerous related innovations.  I exclude companies that were innovative and have not innovated significantly in the last 5 years.  I exclude companies outside of tech simply because this is my area of expertise.   I am not using a rigorous methodology because I think it’s not possible at this time.

        This is my “indisputable list.”  It might be interesting to add a more disputable list of companies or maybe recently innovative new companies.

        The Indisputable list of most Innovative Influential Tech Companies (Alphabetical Listing)


        1.  Adobe

        Adobe has innovated in numerous fields and numerous ways over the years.  They have spearheaded the technology for creative professionals in a number of fields and they have made their impact on the web with Flash.  They have had amazing sticking power for some of their ideas, like the PDF format.  Most recently Adobe has been able to engineer a transition from a license-oriented company to a SaaS company with a rising stock price, something thought nearly impossible.  You have to admit that Adobe’s success is improbable.  The industries it caters to are notably fickle, resistant to long-term value creation.  Yet Adobe has time after time produced lasting innovative change.


        2. Amazon

        I don’t believe anybody imagined Amazon would invent “the cloud.”  Jeff Bezos deserves a little credit for the durability and strength of innovation in his company.  Not many book retailers transform the world.  Let’s say NONE other than Amazon.  The cloud is a $100 billion business in 2014.  Amazon has a good stake in the business it is creating and leads, but it is only one of many companies taking advantage of the ideas it has pioneered.  Some of the recent innovations Amazon has spearheaded beside “the cloud” include the Echo, which was released recently, and its soon-to-be-released drone delivery service.  I don’t think anyone seriously thinking of innovative companies can exclude Amazon.  Amazon invented the Kindle, the Amazon Prime service and numerous other retail innovations that I think make it easily one of the most innovative companies in the world.


        3. Apple

        I remember purchasing an iPod in the very early days of iPods.  When it broke I decided to try a competitor’s product.  After all, in the consumer electronics industry there is hardly ever any company with >5% market share and the products are all roughly competitive.  I returned the competing product within days when I learned how far off the competition was from Apple.  I am constantly astounded that Apple maintains a 70+% market share in markets where 5% is typical.  Competitors simply don’t “get it” even when the thing is sitting right in front of them.  How stupid are the competitors of Apple?  Some of the amazing things Apple has done recently include the App Store, which generated 600,000 apps in 2 years.  Nobody imagined this.  Apple single-handedly has changed what is considered a user interface to applications.  iTunes revolutionized music delivery.  The iPad, which I never thought would be successful, or recently the iWatch, which I refuse to comment on for fear of being wrong about Apple again.  Apple Pay, Apple Health, and soon Apple Car will ship in Chrysler and other vehicles.  Apple has so many things in the works that it is single-handedly transforming the world of consumer electronics and the Internet of Things, which is estimated at $10 trillion.  It is even moving into health care.  How can you seriously exclude Apple from a list of the most innovative influential companies EVER?


        4. Cisco

        Cisco has been an incredible innovator over the years, making the internet possible.  The evolution of communications technologies is “behind the scenes” for most people, but a bewildering evolution of technologies over the years has proven Cisco’s continuous innovation credentials.  Over the last 10 years it hasn’t been as clear that Cisco has been innovating as much or as fast.  Companies such as Qualcomm and LG in Europe have eaten into its market in mobile.  Cisco has been spearheading SDN, a technology critical to making the cloud cheaper and easier to manage.


        5. Google

        This is another hands-down, no-questions influential company.  When Google started, people asked how the company would ever make money.  Nobody had any idea and it became a joke.  It is no joke today.  The company routinely innovates new services and new products at a torrid pace.  It is simply beyond question one of the most innovative companies ever.  Google recently has been experimenting with self-driving cars, which nobody thought was even remotely possible a few years ago.  Delivery from the first manufacturers is planned for 2020. Google consistently innovates in the software industry; with its open source projects it drives progress in all aspects of software.  Its influence cannot be overestimated.  Some of the innovation I worry about: its near-monopoly position in many businesses such as personal cloud services, search and advertising means it has more data and knowledge of every living person on this earth than any government or other company.  Recent innovations in the Internet of Things are very interesting and unproven.  They have been doing groundbreaking work on artificial intelligence with deep learning and using quantum computers.  Android is the only serious competitor to Apple’s iOS for cell phones.  Their ability to maintain competitiveness with Apple points to the strength of their innovation.  I personally think many software aspects of their phones are better than Apple’s.


        6. IBM

        I put IBM on the list because this is a company I never thought would be relevant today.  When Microsoft and the PC virtually destroyed the mainframe, it seemed a virtual certainty the company would die or become a has-been.  IBM continues to embrace and innovate in technology and services.  It has had to rebuild itself from the ground up.  It has recently done things such as Watson in artificial intelligence and is working on quantum computers.  It has significant efforts in open source projects, cloud and big data.  IBM is unbelievable in its ability to keep reinventing itself and remaining relevant.  While I see that it is struggling to remain on this list, I am impressed by its history of innovation as well.  I am convinced this is a company we have not heard the last from, and it is constantly underestimated.


        7. Intel

        This is another no-brainer.  Intel has been behind Moore’s law for decades.  Innovation at Intel at a breakneck pace was de rigueur.  Innovation on schedule.  150% a year, year after year, for decades.  Intel today is still the biggest CPU maker, but it has not been as innovative in taking its hardware business into mobile.  It has significant legs in IoT but has missed the initial hobbyist and most cost-effective IoT hardware spaces.  I put Intel on notice that their indisputable record of innovation over time is tarnishing badly.  They need some big wins in IoT and in quantum computers and/or some other areas nobody imagines.  Possibly cars, batteries or something new.


        8. Netflix

        Netflix started as a DVD rental company.  Many may not remember that.  Its transformation into the leading media distribution company hasn’t been easy.  There have been numerous changes.  Along the way Netflix broke its business in two, then reintegrated it.  It decided to build on the cloud initially and has spearheaded making the cloud work, along the way using open source and contributing to the development of cloud technology that many companies now leverage.  Netflix started creating its own content, a revolutionary idea for what is primarily a technology company.


        9. Salesforce

        When Marc Benioff started Salesforce I spoke at many conferences where Marc would speak.  I saw the brilliance he was trying to achieve but they had significant obstacles.    Nobody knew if you could take a SaaS version of CRM and make it succeed.   Along the way Salesforce has innovated its way into a powerhouse that drives business around the world.  It has legitimized SaaS for businesses.    It has laid ruin to its competitors and created new paradigms for how businesses leverage their sales information.   The development of the platform was significant innovation in delivery of applications and integration.   Salesforce continues to innovate in social aspects of sales.


        10. Tesla

        I admit that Tesla is on the margin of my definition of innovative companies since it is young and basically has one product, the Model S electric car.  However, Tesla has innovated at an incredible pace and is innovating well outside the traditional car manufacturer.  I believe Tesla has created disruptive potential for the car service industry, the IoT car, the self-driving car, the user interface of a car, the safety of a car, the fuel industry and fuel distribution.  Innovating and disrupting each of these areas is not necessary to the basic business of building an electric car.  Tesla sees that its success depends on solving the holistic problem for the car owner.  This is similar to Steve Jobs’ idea that to compel loyalty and grow an innovation you have to give the consumer the ability to use the innovation, service it, really live with it and grow with it.  I have written a blog about all this innovation Tesla has created that is disrupting multiple aspects of the car business.  See my blog  :tesla-update-how-is-the-first-iot-smart-car-connected-car-faring 


        Just barely missed my list:

        11. Samsung

        12. TIBCO

        13. SpaceX

        14. Oracle


        I want to make clear that I do not consider this list comprehensive, and I am absolutely happy to receive additional candidates for it.  My conditions for companies to be on this list are that they have a history of world-changing innovation more than once, and that they have continued to innovate in the last 5 years.

        Afkham AzeezAWS Clustering Mode for WSO2 Products

        WSO2 Clustering is based on Hazelcast. When WSO2 products are deployed in clustered mode on Amazon EC2, it is recommended to use the AWS clustering mode. As a best practice, add all nodes in a single cluster to the same AWS security group.

        To enable AWS clustering mode, you simply have to edit the clustering section in the CARBON_HOME/repository/conf/axis2/axis2.xml file as follows:

        Step 1: Enable clustering

        <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

        Step 2: Change membershipScheme to aws

        <parameter name="membershipScheme">aws</parameter>

        Step 3: Set localMemberPort to 5701

        Any value between 5701 & 5800 is acceptable.
        <parameter name="localMemberPort">5701</parameter>

        Step 4: Define AWS specific parameters

        Here you need to define the AWS access key, secret key & security group. The region, tagKey & tagValue are optional, and the region defaults to us-east-1.

        <parameter name="accessKey">xxxxxxxxxx</parameter>
        <parameter name="secretKey">yyyyyyyyyy</parameter>
        <parameter name="securityGroup">a_group_name</parameter>
        <parameter name="region">us-east-1</parameter>
        <parameter name="tagKey">a_tag_key</parameter>
        <parameter name="tagValue">a_tag_value</parameter>

        Provide the AWS credentials & the security group you created as values of the above configuration items.

        Step 5: Start the server

        If everything went well, you should not see any errors when the server starts up, and also see the following log message:

        [2015-06-23 09:26:41,674]  INFO - HazelcastClusteringAgent Using aws based membership management scheme

        and when new members join the cluster, you should see messages such as the following:
        [2015-06-23 09:27:08,044]  INFO - AWSBasedMembershipScheme Member joined [5327e2f9-8260-4612-9083-5e5c5d8ad567]: /

        and when members leave the cluster, you should see messages such as the following:
        [2015-06-23 09:28:34,364]  INFO - AWSBasedMembershipScheme Member left [b2a30083-1cf1-46e1-87d3-19c472bb2007]: /

        The complete clustering section in the axis2.xml file is given below:
        <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
            <parameter name="AvoidInitiation">true</parameter>
            <parameter name="membershipScheme">aws</parameter>
            <parameter name="domain">wso2.carbon.domain</parameter>

            <parameter name="localMemberPort">5701</parameter>
            <parameter name="accessKey">xxxxxxxxxxxx</parameter>
            <parameter name="secretKey">yyyyyyyyyyyy</parameter>
            <parameter name="securityGroup">a_group_name</parameter>
            <parameter name="region">us-east-1</parameter>
            <parameter name="tagKey">a_tag_key</parameter>
            <parameter name="tagValue">a_tag_value</parameter>

            <parameter name="properties">
                <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
                <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
                <property name="subDomain" value="worker"/>
            </parameter>
        </clustering>

        sanjeewa malalgodaEnable debug logs and check token expire time in WSO2 API Manager

        To do that, you can enable debug logs for the following class.

        Then it will print the following log:
        log.debug("Checking Access token: " + accessToken + " for validity." + "((currentTime - timestampSkew) > (issuedTime + validityPeriod)) : " + "((" + currentTime + "-" + timestampSkew + ")" + " > (" + issuedTime + " + " + validityPeriod + "))");

        Whenever a call fails, we can check for this log around that time to get a clear idea of the validity period calculation.
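The condition printed by that debug line can be sketched as a small standalone check. The variable names mirror the log message; the values in main are illustrative, not taken from a real token.

```java
public class TokenExpirySketch {

    // Mirrors the expiry condition printed in the debug log:
    // a token is treated as expired when
    // (currentTime - timestampSkew) > (issuedTime + validityPeriod).
    static boolean isExpired(long currentTime, long timestampSkew,
                             long issuedTime, long validityPeriod) {
        return (currentTime - timestampSkew) > (issuedTime + validityPeriod);
    }

    public static void main(String[] args) {
        long issued = 1_000_000L;       // ms, illustrative values
        long validity = 3_600_000L;     // 1 hour
        long skew = 300_000L;           // 5 minutes of allowed clock skew

        // Exactly at issued + validity: still valid (skew gives extra slack)
        System.out.println(isExpired(issued + validity, skew, issued, validity));
        // Past issued + validity + skew: expired
        System.out.println(isExpired(issued + validity + skew + 1, skew, issued, validity));
    }
}
```

Note that the timestamp skew is subtracted from the current time, so a token effectively remains valid for validityPeriod plus the configured skew.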

        To enable debug logs, add the line below to the file that resides in /repository/conf/.

        Then restart the server. You need to enable the debug log on the Identity Server side if you use IS as the Key Manager.

        Then you can check how the token validity period behaves with each API call you make.

        Pulasthi SupunWSO2 Governance Registry - Lifecycle Management Part 2 - Transition Validators

        This is the second post of "WSO2 Governance Registry - Lifecycle Management" post series. In the first post - Part 1 - Check Items we gave a small introduction to lifecycle management in WSO2 Governance Registry and looked at how check items can be used and did a small sample on that. 

        In this post we will look at Transition Validators, as mentioned in the previous post. As mentioned in Part 1, transition validations can be used within check items, and they can also be used separately (all validators are called only during a state transition; checking a check item will not call the validator). We will take a look at the same config, this time with two transition validation elements.

        <aspect name="SimpleLifeCycle" class="org.wso2.carbon.governance.registry.extensions.aspects.DefaultLifeCycle">
            <configuration type="literal">
                <scxml xmlns="">
                    <state id="Development">
                        <data name="checkItems">
                            <item name="Code Completed" forEvent="Promote">
                                <permission roles="wso2.eng,admin"/>
                                <validation forEvent="" class="">
                                    <parameter name="" value=""/>
                                </validation>
                            </item>
                            <item name="WSDL, Schema Created" forEvent=""/>
                            <item name="QoS Created" forEvent=""/>
                        </data>
                        <data name="transitionValidation">
                            <validation forEvent="" class="">
                                <parameter name="" value=""/>
                            </validation>
                        </data>
                        <transition event="Promote" target="Tested"/>
                    </state>
                    <state id="Tested">
                        <data name="checkItems">
                            <item name="Effective Inspection Completed" forEvent=""/>
                            <item name="Test Cases Passed" forEvent=""/>
                            <item name="Smoke Test Passed" forEvent=""/>
                        </data>
                        <transition event="Promote" target="Production"/>
                        <transition event="Demote" target="Development"/>
                    </state>
                    <state id="Production">
                        <transition event="Demote" target="Tested"/>
                    </state>
                </scxml>
            </configuration>
        </aspect>

        The first transition validation is within a check item (this part was commented out in the previous post), and the second one is a separate element; both are supported.

        Writing Validators

        A validator is a Java class that implements the "CustomValidations" interface. Several validators are already implemented, and it is also possible to write your own custom validator and add it. We will be looking at one of the validators that is shipped with the product; a custom validator would need to be written similarly. Please refer to the Adding an Extension documentation to see how a new extension can be added to the Governance Registry through the GUI.

        The following is a validator that is shipped with the WSO2 Governance Registry.

        import org.apache.commons.logging.Log;
        import org.apache.commons.logging.LogFactory;
        import org.wso2.carbon.governance.api.common.dataobjects.GovernanceArtifact;
        import org.wso2.carbon.governance.api.exception.GovernanceException;
        import org.wso2.carbon.governance.api.util.GovernanceUtils;
        import org.wso2.carbon.governance.registry.extensions.interfaces.CustomValidations;
        import org.wso2.carbon.registry.core.RegistryConstants;
        import org.wso2.carbon.registry.core.exceptions.RegistryException;
        import org.wso2.carbon.registry.core.jdbc.handlers.RequestContext;
        import org.wso2.carbon.registry.core.session.UserRegistry;

        import java.util.Map;

        public class AttributeExistenceValidator implements CustomValidations {

            private static final Log log = LogFactory.getLog(AttributeExistenceValidator.class);
            private String[] attributes = new String[0];

            public void init(Map parameterMap) {
                if (parameterMap != null) {
                    String temp = (String) parameterMap.get("attributes");
                    if (temp != null) {
                        attributes = temp.split(",");
                    }
                }
            }

            public boolean validate(RequestContext context) {
                if (attributes.length == 0) {
                    return true;
                }
                String resourcePath = context.getResourcePath().getPath();
                int index = resourcePath.indexOf(RegistryConstants.GOVERNANCE_REGISTRY_BASE_PATH);
                if (index < 0) {
                    log.warn("Unable to use Validator For Resource Path: " + resourcePath);
                    return false;
                }
                index += RegistryConstants.GOVERNANCE_REGISTRY_BASE_PATH.length();
                if (resourcePath.length() <= index) {
                    log.warn("Unable to use Validator For Resource Path: " + resourcePath);
                    return false;
                }
                resourcePath = resourcePath.substring(index);
                try {
                    UserRegistry registry = ((UserRegistry) context.getSystemRegistry())
                            .getChrootedRegistry(RegistryConstants.GOVERNANCE_REGISTRY_BASE_PATH);
                    GovernanceArtifact governanceArtifact =
                            GovernanceUtils.retrieveGovernanceArtifactByPath(registry, resourcePath);
                    for (String attribute : attributes) {
                        if (!validateAttribute(governanceArtifact, attribute)) {
                            return false;
                        }
                    }
                } catch (RegistryException e) {
                    log.error("Unable to obtain registry instance", e);
                }
                return true;
            }

            protected boolean validateAttribute(GovernanceArtifact governanceArtifact, String attribute)
                    throws GovernanceException {
                return (governanceArtifact.getAttribute(attribute) != null);
            }
        }

        The "init" method is where the parameters defined under the validator tag are initialized. The "validate" method is where your validation logic goes; this is the method that is called to do the validation. What this validator does is check whether the attribute names given as a parameter actually exist on the given artifact. If an attribute does not exist, the validation will fail.

        Configuring the Validator

        <validation forEvent="Promote" class="org.wso2.carbon.governance.registry.extensions.validators.AttributeExistenceValidator">
            <parameter name="attributes" value="overview_version,overview_description"/>
        </validation>

        The fully qualified class name needs to be provided as the class name. The "forEvent" attribute specifies the action on which the validation needs to be triggered; here it is set to Promote. For a complete list of available validators, please refer to the Supported Standard Validators documentation. Now you can add the validator configuration we commented out in the first post and check out the functionality of validators.
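        Stripped of the registry plumbing, the core check this validator performs is simple. The following self-contained sketch (the class and method names here are illustrative, not part of the WSO2 API) mirrors that logic: every configured attribute name must be present on the artifact, otherwise validation fails.

```java
import java.util.Map;

public class AttributeCheck {

    // Mirrors the validator's behaviour: an empty parameter list passes,
    // otherwise every comma-separated attribute name must exist.
    public static boolean validate(String attributesParam, Map<String, String> artifactAttributes) {
        if (attributesParam == null || attributesParam.isEmpty()) {
            return true; // nothing configured, nothing to check
        }
        for (String attribute : attributesParam.split(",")) {
            if (artifactAttributes.get(attribute) == null) {
                return false; // a required attribute is missing
            }
        }
        return true;
    }
}
```

        With the configuration above, for example, an artifact that has no overview_description attribute would fail the Promote validation.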

        Please leave a comment if you need any more clarification. The next post of this series will cover transition permissions.

        Pulasthi SupunWSO2 Governance Registry - Lifecycle Management Part 1 - Check Items

        Lifecycle Management (LCM) plays a major role in SOA governance. The default LCM supported by the WSO2 Governance Registry allows users to promote and demote the lifecycle states of a given resource. Furthermore, it can be configured to use checklists as well; check out the documentation here.

        The lifecycle configuration template allows advanced users to extend its functionality through six data elements, which are listed below:
        • check items
        • transition validations
        • transition permissions
        • transition executions
        • transition UI
        • transition scripts
        Below is the full template of the lifecycle configuration. In this article series we will take a look at each item and see how they can be used to customize lifecycle management in WSO2 Governance Registry. In this article we will look at check items.

        Check items

        Check items allow you to define a list, ideally a checklist, that can be used to control changes in lifecycle states and make sure specific requirements are met before the lifecycle moves to the next state. It is also possible to:
        • Define permissions for each check item
        • Define custom validations for each check item
        To check this out we will create a sample lifecycle with a new set of check items. First we have to create a new lifecycle; the steps to create one can be found here - Adding Lifecycles. There will be a default lifecycle configuration when you create one using those steps. Since it is a complex configuration, we will replace it with the following configuration.

        <aspect name="SimpleLifeCycle" class="org.wso2.carbon.governance.registry.extensions.aspects.DefaultLifeCycle">
            <configuration type="literal">
                <lifecycle>
                    <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initialstate="Development">
                        <state id="Development">
                            <datamodel>
                                <data name="checkItems">
                                    <item name="Code Completed" forEvent="Promote">
                                        <permission roles="wso2.eng,admin"/>
                                        <!--validation forEvent="" class="">
                                            <parameter name="" value=""/>
                                        </validation-->
                                    </item>
                                    <item name="WSDL, Schema Created" forEvent=""/>
                                    <item name="QoS Created" forEvent=""/>
                                </data>
                            </datamodel>
                            <transition event="Promote" target="Tested"/>
                        </state>
                        <state id="Tested">
                            <datamodel>
                                <data name="checkItems">
                                    <item name="Effective Inspection Completed" forEvent=""/>
                                    <item name="Test Cases Passed" forEvent=""/>
                                    <item name="Smoke Test Passed" forEvent=""/>
                                </data>
                            </datamodel>
                            <transition event="Promote" target="Production"/>
                            <transition event="Demote" target="Development"/>
                        </state>
                        <state id="Production">
                            <transition event="Demote" target="Tested"/>
                        </state>
                    </scxml>
                </lifecycle>
            </configuration>
        </aspect>

        As you can see, several check items are listed under the "Development" and "Tested" states. The two main attributes of a check list item are name and forEvent.

        name - The name of the check item; this is the text that will be displayed for the check item.
        forEvent - The event that is associated with this check item. For example, if forEvent is set to "Promote", this check item must be checked in order to proceed with the promote operation for that state.

        Custom permissions

        As you can see in the "Development" state there is a sub element as follows
        <permission roles="wso2.eng,admin"/>

        In this element it is possible to define the set of roles that are allowed to check this check item. In this sample, only engineers and admins are allowed to check this item.

        Custom validations
        <validation forEvent="" class="">
            <parameter name="" value=""/>
        </validation>

        As seen in the commented-out section under the "Code Completed" check item, it is also possible to define custom validations. However, the validations will only be called during a state transition. We will look into custom validations under "transition validations" in the next post.

        Now you can save the newly created lifecycle configuration, use it in an artifact such as an "api" or "service", and see its functionality.

        We will look at Transition Validations and how to use them in the next post of this series.

        Afkham AzeezHow AWS Clustering Mode in WSO2 Products Works

        In a previous blog post, I explained how to configure WSO2 product clusters to work on Amazon Web Services infrastructure. In this post I will explain how it works.

         WSO2 Clustering is based on Hazelcast.

        All nodes having the same set of cluster configuration parameters will belong to the same cluster. What Hazelcast does is call the AWS APIs and get the set of nodes that satisfy the specified parameters (region, securityGroup, tagKey, tagValue).

        When the Carbon server starts up, it creates a Hazelcast cluster. At that point, it calls the EC2 APIs and gets the list of potential members in the cluster. To call the EC2 APIs, it needs the AWS credentials; this is the only time these credentials are used. The AWS APIs are only used on startup, to learn about other potential members in the cluster. It then tries to connect to port 5701 of those potential members, and if 5701 is unavailable, it does a port scan up to 5800. If one of those ports is available, it will do a Hazelcast handshake to make sure those are indeed Hazelcast nodes, and will add them to the cluster if they are.
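        As a rough, plain-Java sketch of that discovery step (illustrative only: this is not Hazelcast's actual code, and the class and method names here are made up), member discovery boils down to a TCP connect scan over the candidate port range, followed by a protocol handshake on each open port:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class PortScanDiscovery {

    // Try a plain TCP connect to every port in [from, to]; return the ports
    // that accepted the connection. A real implementation would then perform
    // a handshake to confirm each peer is actually a Hazelcast member.
    public static List<Integer> scanOpenPorts(String host, int from, int to, int timeoutMs) {
        List<Integer> open = new ArrayList<>();
        for (int port = from; port <= to; port++) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), timeoutMs);
                open.add(port); // connection accepted - candidate member
            } catch (IOException ignored) {
                // closed port or unreachable host - not a member
            }
        }
        return open;
    }
}
```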

        Subsequently, the connections established between members are point-to-point TCP connections. Member failures are detected through a TCP ping. So once member discovery is done, the rest of the interactions in the cluster are the same as when the multicast and WKA (Well-Known Address) modes are used.

        With that facility, you don't have to provide any member IP addresses or hostnames, which may be impossible on an IaaS such as EC2.

        sanjeewa malalgodaHow to enable AWS based clustering mode in WSO2 Carbon products (WSO2 API Manager cluster with AWS clustering)

        To try AWS-based clustering, change the membership scheme to aws and then provide the following parameters in the clustering section of the axis2.xml file. Before you try this on API Manager 1.8.0, please download this jar [JarFile] and add it as a patch.

        1. accessKey       
        <parameter name="accessKey">TestKey</parameter>

        2. secretKey
         <parameter name="secretKey">testkey</parameter>

        3. securityGroup       
        <parameter name="securityGroup">AWS_Cluster</parameter>

        4. connTimeout (optional)
        5. hostHeader (optional)
        6. region (optional)
        7. tagKey (optional)
        8. tagValue (optional)

        See the following sample configuration. Edit the clustering section in the CARBON_HOME/repository/conf/axis2/axis2.xml file as follows.

        <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
                <parameter name="AvoidInitiation">true</parameter>
                <parameter name="membershipScheme">aws</parameter>
                <parameter name="domain"></parameter>
                <parameter name="localMemberPort">5701</parameter>
                <parameter name="accessKey">test</parameter>
                <parameter name="secretKey">test</parameter>
                <parameter name="securityGroup">AWS_Cluster</parameter>
        </clustering>

        By default, Hazelcast uses port 5701. It is recommended to create a Hazelcast-specific security group. Then, an inbound rule for port 5701 from sg-hazelcast needs to be added to this security group.
        1. Open the Amazon EC2 console.
        2. Click Security Groups in the left menu.
        3. Click Create Security Group, enter a name (e.g. sg-hazelcast) and a description for the security group, and click Yes, Create.
        4. On the Security Groups page, select the security group sg-hazelcast in the right pane.
        5. You will see a field below the security group list with the tabs Details and Inbound. Select Inbound.
        6. Select Custom TCP rule in the Create a new rule field.
        7. Type 5701 into the Port range field and sg-hazelcast into Source.

        Then, when the cluster initializes, all nodes in the same security group will be added as WKA members.
        Once you are done with the configurations, restart the servers.

        You will then see the following messages in the carbon logs.
        [2015-06-23 10:02:47,730]  INFO - HazelcastClusteringAgent Cluster domain:
        [2015-06-23 10:02:47,731]  INFO - HazelcastClusteringAgent Using aws based membership management scheme
        [2015-06-23 10:02:57,604]  INFO - HazelcastClusteringAgent Hazelcast initialized in 9870ms
        [2015-06-23 10:02:57,611]  INFO - HazelcastClusteringAgent Local member: [5e6bd517-512a-45a5-b702-ebf304cdb8c4] - Host:, Remote Host:null, Port: 5701, HTTP:8280, HTTPS:8243, Domain:, Sub-domain:worker, Active:true
        [2015-06-23 10:02:58,323]  INFO - HazelcastClusteringAgent Cluster initialization completed
        Then spawn the next instance. When the next server's startup has completed, you will see the following message on the current node.
        [2015-06-23 10:06:21,344]  INFO - AWSBasedMembershipScheme Member joined [417843d3-7456-4368-ad4b-5bad7cf21b09]: /
        Then terminate the second instance. You will then see the following message.
        [2015-06-23 10:07:39,148]  INFO - AWSBasedMembershipScheme Member left [417843d3-7456-4368-ad4b-5bad7cf21b09]: /
        This means you have done the configuration properly.

        John MathonServices, Micro-services, Devices, Apps, APIs what’s the difference?

        What is the difference between a Device and a Service?


        The internet of things is going to have a big impact on technology and business.   Let’s look at some ways we should change how we think of services and devices.

        We are facing some interesting paradigm changes as our Platform 3 software platform evolves. Platform 3 is the combination of new technologies related to mobile, the cloud and social that are redefining how we make software and deliver it.


        How should we think about managing devices versus services? In some ways they are very similar. We want to manage them similarly, in the sense that it would be ideal to think of physical devices as simply conduits of information to and from the physical world, without having to worry about their physical aspects too much. We do this with a multi-tiered architecture, separating the physical layer from the abstract layers we use in programming.

        A service, micro-service and a device all have the following features in common:

        Common Features:

        1. Have an interface for communicating, managing, and getting data to and from them: a clearly defined set of APIs
        2. There can be many instances
        3. Need authentication to control access, and entitlement to control the aspects available to each entity requesting access
        4. The interaction can be request-reply or publish-subscribe
        5. The location of the service or device may be important
        6. Provide a minimal set of functions around a single purpose that can be described succinctly
        7. Have a cost of operating, a maximum throughput, and operational characteristics and configuration details per instance
        8. May be external or owned by an external party
        9. Usually have a specific URI to refer to a particular entity
        10. Have access to a limited set of data defined in advance
        11. Can depend on, or be depended on by, many other services or devices
        12. Can be part of larger groups of entities in multiple non-hierarchical relationships
        13. Orchestration of higher-level capabilities involves interacting with multiple services or devices
        14. Need to be monitored for health, managed for security and failure, have configuration changed, with all the corresponding management best practices this implies
        15. Both have a social aspect, where people may want to know all about, or comment on, the specific service or device or the class of services and devices
        16. Can be infected with malware or viruses, or compromised in similar ways
        17. Can have side effects or be purely functional
        18. May have data streams associated with their activity

        Some differences between Services and Devices:

        1. Services are multi-tenant, with data isolation per tenant. Devices usually have only one tenant.
        2. Devices have a physical existence and can be compromised physically, stolen, or tampered with.
        3. Devices possibly have a physical interface for humans. Services do not.
        4. Devices may have data embedded in the device, whereas services are usually designed to be stateless.
        5. Devices may be mobile, with changing location.
        6. Device connectivity can at times be marginal, low-bandwidth or nonexistent.
        7. A device may have compromised functionality but still work; services are usually either working or not working.
        8. Service failover can be as simple as spinning up new virtual instances. Physical failure usually involves physical replacement. A service failure may point to a physical device failure but is usually not dependent on any particular physical device, i.e. the service can be replicated on similar abstract hardware.
        9. A physical device may produce erroneous results or be out of calibration.
        10. Services can be scaled dynamically and instantly, whereas devices need a manual component to be scalable.
        11. Devices have a physically unique identification such as a MAC address or NFC ID, whereas services are usually fungible and identified uniquely by a URI.

        Publish and Socialize to Facilitate Reuse

        Enterprise Store or Catalog

        It is apparent that devices and services have a large number of common characteristics. Especially important are the social aspects and interdependence, which make it desirable to share the same interface for looking at services, micro-services, devices and APIs.

        Apps are more like devices and depend on services.   Apps, Services and Devices share a number of characteristics as well.

        To make all these things more reusable it is desirable to put them in a store or repository where people can comment and share useful information with prospective users. Thus the concept of the Enterprise Store makes complete sense, where the store can hold any asset that might be shared.

        Each asset, including APIs, Mobile Apps, Services and Micro-services, can have social characteristics, health characteristics, instances and owners; belong to groups; and need to be permissioned and allocated. Further, you will undoubtedly want to track usage, and monitor and maintain the asset through upgrades and its lifecycle. You will also want to revoke permissions, limit usage or wipe the device.

        Apps are usually built up from APIs and Devices.  Orchestration of multiple devices and services together makes having a common interface a good idea as well.   Using this interface you can see the dependencies and easily combine APIs, Devices and Apps to build new functionality.


        Device management vs Service Management

        There are some differences, outlined above, between services and devices.

        Additional security capabilities are needed with physical devices, similar to the kinds of things that cell phones and EMM can do. EMM systems have the ability to ensure encryption of data at rest, sandbox the data on the device, and delete it with or without the owner's permission or physical possession of the device. Geofencing is an important capability for devices, since some devices may be considered stolen when outside a defined area.

        It’s also important to be able to tell if a device is being tampered with or has been compromised and set up appropriate measures to replace or recover the device.

        Devices inherently collect data that is user-specific or location-specific, which has implications for privacy.

        The fact that devices can sometimes have marginal or zero connectivity means the management layer must be able to take advantage of opportunities when a device is connected, as well as set reminders and queue activities for when a device does become available.
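        As a toy sketch of that queue-and-flush idea (all names here are hypothetical, not part of any WSO2 API), management commands issued while a device is offline are held back and delivered on the next connect:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DeviceCommandQueue {

    private final Queue<String> pending = new ArrayDeque<>();
    private boolean connected = false;

    // Queue a command; it is delivered immediately only if the device is online.
    public synchronized List<String> send(String command) {
        pending.add(command);
        return connected ? flush() : List.of();
    }

    // Called when the device reconnects: deliver everything queued while offline.
    public synchronized List<String> onConnect() {
        connected = true;
        return flush();
    }

    private List<String> flush() {
        List<String> delivered = new ArrayList<>(pending);
        pending.clear();
        return delivered;
    }
}
```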

        Since the inventory of devices is more problematic and devices can't be "dynamically created on demand", a link to the acquisition systems and automatic configuration and security management services is desirable. Monitoring their health and being responsive to impending failure is important, whereas a service will fail for completely different reasons.

        There are other issues with devices versus service management that can be encapsulated in an abstraction layer.


        The Connected Device Management Framework Layer

        For many reasons an IoT or IIoT architecture with many devices presents problems for management that need to be abstracted to be effectively managed.

        1. The communication layers differ depending on the device. Today there are well over a dozen different protocols and physical-layer standards. This number will hopefully decline as the market converges, but the differing requirements for communication are real and will require different solutions, so there will never be a single "IoT" protocol or communication paradigm. Some devices have persistent, always-on communication, but many devices cannot afford that level of connectivity. Some have marginal connectivity, to the point (as with NFC devices) where not much more than presence is communicated. Some devices have mesh capability to help propagate messages from other devices and some don't. As a result of differing power and physical limitations there is a need for multiple protocols and different connectivity models, and that will never go away.

        2.  Some devices have APIs that can be directly communicated with, some are passive devices only reporting information and can take no action, some are only action and don’t report or measure anything.  Some talk to a server that is their proxy.  Some devices have SDKs.  Some have GPS capability and some can only report their proximity to other devices.

        3.  Due to the variety of manufacturers some devices will only work in a relatively proprietary environment and some are much more standards compliant and open.

        Due to all these variations, which seem impossible to bridge in the short term, it is my belief that the only way to support a variety of devices is through an abstraction layer with support for multiple device types, protocols and connectivity models.

        WSO2 has created such a device abstraction layer, called the "Connected Device Management Framework (CDMF)", which allows a variety of standard and proprietary protocols and management interfaces to be added. The LWM2M standard for device management, an outgrowth of the OMA mobility standards, can be supported, and other standards or even proprietary management can be added into this open framework. I believe other vendors should adopt such a CDMF layer.

        The CDMF layer includes management capabilities, device definition, communication and orchestration, data management, and understanding the relationships of devices and services.

        The general management capabilities of all management platforms (things like security, device catalog, upgrading and device health) can be abstracted and delegated to specific handlers for different devices. This is important because there is no hope that the vast array of manufacturers will agree on a common standard soon. Even if such an agreement could be reached, it would still leave billions of legacy devices that may need to be worked with. So, a CDMF is the way to go.


        Tiered Data Architectures

        Most industrial applications of IoT devices will incorporate the notion of functional and regional grouping of devices. Many IoT devices will put out voluminous amounts of data; storing all that data in the cloud would represent a 100-fold increase in cloud storage needs. It is not really practical to store all data from all devices for all time, or to do so in a single place, as the data is usually better used locally.

        As an example, imagine all the video feeds from security cameras across a company. Unless there is an incident, it is usually not necessary to keep full-resolution copies from all those devices for all time. So, when an incident is discovered you might tag more detailed information to be retained, or to do research you may want to pull certain information from one region or all regions; but in general, after an expiration period you would maintain more and more abstracted versions of the original data.

        A management architecture for services or devices should support the notion of regional data stores, regional services and devices as well as being able to catalog devices by tags that represent functional capabilities such as all security devices, all mobile phones, all telepresence devices, all devices in Sales or Finance.  It should be possible to tier data along various functional, regional or other criteria as fits the application.


        Multi-tiered services / device architecture

        There is a need in service and device management to support multi-tiered architectures. The best practice of multi-tiering is that it promotes agility by enabling abstraction of a service or device: the service or device can be replaced at will, or improved at a different pace than the services that depend on it.

        So, you can swap the physical device for a competing device, or move to a new API to the device or new functionality, without having to change the code of all the applications that depend on that device. Similarly, if you change the applications that use devices, the devices themselves shouldn't have to change to support the applications.

        Multi-tiered architectures arose primarily from the need to isolate database design from application design. Database designers could organize databases for maximum performance without impacting application design. New fields could be added to support new features, and existing applications that didn't need those features wouldn't have to be changed. More importantly, they wouldn't crash or fail unexpectedly when you made changes, because they were insulated from them.

        In a similar way, devices should be abstracted into service APIs or proxies that represent idealized devices or views of a device, insulating you from changes to devices and similarly allowing you to change applications without having to change devices.

        Other Stories you may find interesting:

        Merging Microservice Architecture with SOA Practices

        Do Good Microservices Architectures Spell the Death of the Enterprise Service Bus?

        Management of Enterprises with Cloud, Mobile Devices, Personal Cloud, SaaS, PaaS, IaaS, IoT and APIs

        Chanaka FernandoExtending WSO2 ESB with a Custom Transport Implementation - Part II

        This blog post is a continuation of my previous blog post, where I described the concepts of the WSO2 ESB transport mechanism. Since we have covered the basics, let's start writing some real code. I will be using the ISO8583 standard as the subject of this custom implementation, and I will be grabbing some content from this blog post as my reference for the ISO8583 Java implementation (the business logic). Thanks, Manoj Fernando, for writing such an informative post.

        The idea of the custom transport implementation is to provide a mechanism to write your business logic so that it can plug in to the WSO2 ESB runtime. I am not going to say much more about ISO8583 or its internal implementation; I will be using jPOS, an already-implemented Java library, for this purpose. It has the functionality to cover the basic use cases of ISO8583 implementations.

        Sample use case

        Let’s take the scenario of a financial application needing to make a credit transaction by sending an XML message that needs to be converted to an ISO8583 byte stream before being passed on to the wire through a TCP channel.


        First, we need to define our ISO8583 field definition. This might be a bit confusing to some: if we are dealing with a specification, why do we need a field definition? This is because the ISO8583 specification does not hard-bind any data elements and/or field ordering. It is entirely up to the application designer to define which field types/IDs need to be in place for their specific transactional requirements.

        At a glance, the field definition file looks like the following.

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!DOCTYPE isopackager SYSTEM "genericpackager.dtd">
        <isopackager>
          <isofield id="0" length="4" class="org.jpos.iso.IFA_NUMERIC" name="Message Type Indicator"/>
          <isofield id="2" length="19" class="org.jpos.iso.IFA_LLNUM" name="Primary Account number"/>
          <isofield id="3" length="6" class="org.jpos.iso.IFA_NUMERIC" name="Processing Code"/>
          <!-- remaining field definitions omitted; see jposdef.xml in the codebase -->
        </isopackager>

        Please refer to [1] & [2] for a complete reference on ISO8583. For now, let me just say that each field should have an ID, a length and a type specified in its definition. I have only listed a snippet of the XML config here; you can find the full definition, jposdef.xml, inside the codebase.
        I have created a simple Maven project to implement this transport. Make sure that you have included the jPOS dependencies in pom.xml as follows.
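        The dependency snippet itself appears to have been lost from the original post; the following is a plausible fragment for pom.xml. The org.jpos:jpos coordinates are real, but the version shown is an assumption: use the jPOS release you actually build against.

```xml
<dependency>
    <groupId>org.jpos</groupId>
    <artifactId>jpos</artifactId>
    <!-- version is an assumption; pick your actual jPOS release -->
    <version>1.9.4</version>
</dependency>
```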

        To implement the transport sender, you need to subclass the AbstractTransportSender and implement its sendMessage method as follows.

        public class ISO8583TransportSender extends AbstractTransportSender {

            public void sendMessage(MessageContext msgCtx, String targetEPR,
                                    OutTransportInfo outTransportInfo) throws AxisFault {

                try {
                    URI isoURL = new URI(targetEPR);
                    // load the field definition (jposdef.xml) shipped with the transport
                    ISOPackager packager = new GenericPackager(this.getClass()
                            .getResourceAsStream("jposdef.xml"));

                    ASCIIChannel chl = new ASCIIChannel(isoURL.getHost(),
                            isoURL.getPort(), packager);

                    writeMessageOut(msgCtx, chl);

                } catch (Exception e) {
                    throw new AxisFault(
                            "An exception occurred in sending the ISO message");
                }
            }

            /**
             * Writing the message to the output channel after applying the correct message formatter
             * @param msgContext
             * @param chl
             * @throws org.apache.axis2.AxisFault
             * @throws IOException
             */
            private void writeMessageOut(MessageContext msgContext,
                                         ASCIIChannel chl) throws AxisFault, IOException {
                ISO8583MessageFormatter messageFormatter = (ISO8583MessageFormatter) BaseUtils.getMessageFormatter(msgContext);
                OMOutputFormat format = BaseUtils.getOMOutputFormat(msgContext);
                messageFormatter.writeTo(msgContext, format, null, true);
            }
        }
        Within the TransportSender, we extract the URL, create the relevant entities for the message formatter, and pass control to the MessageFormatter. Within the MessageFormatter, we can send the actual message to the back-end server.

        public class ISO8583MessageFormatter implements MessageFormatter {

            private ASCIIChannel asciiChannel;

            public byte[] getBytes(MessageContext messageContext, OMOutputFormat omOutputFormat) throws AxisFault {
                return new byte[0];
            }

            public void writeTo(MessageContext messageContext, OMOutputFormat omOutputFormat, OutputStream outputStream, boolean b) throws AxisFault {
                ISOMsg isoMsg = toISO8583(messageContext);
                ASCIIChannel chl = this.asciiChannel;
                try {
                    // typical jPOS channel interaction: connect, send the message, disconnect
                    chl.connect();
                    chl.send(isoMsg);
                    chl.disconnect();
                } catch (Exception ex) {
                    throw new AxisFault(
                            "An exception occurred in sending the ISO message");
                }
            }

            public String getContentType(MessageContext messageContext, OMOutputFormat omOutputFormat, String s) {
                return null;
            }

            public URL getTargetAddress(MessageContext messageContext, OMOutputFormat omOutputFormat, URL url) throws AxisFault {
                return null;
            }

            public String formatSOAPAction(MessageContext messageContext, OMOutputFormat omOutputFormat, String s) {
                return null;
            }

            public ISOMsg toISO8583(MessageContext messageContext) throws AxisFault {
                SOAPEnvelope soapEnvelope = messageContext.getEnvelope();
                OMElement isoElements = soapEnvelope.getBody().getFirstElement();

                ISOMsg isoMsg = new ISOMsg();

                Iterator<OMElement> fieldItr = isoElements.getFirstChildWithName(
                        new QName(ISO8583Constant.TAG_DATA)).getChildrenWithLocalName(

                String mtiVal = isoElements
                        .getFirstChildWithName(new QName(ISO8583Constant.TAG_CONFIG))
                        .getFirstChildWithName(new QName(ISO8583Constant.TAG_MTI))

                try {

                    while (fieldItr.hasNext()) {

                        OMElement isoElement = (OMElement);

                        String isoValue = isoElement.getText();

                        int isoTypeID = Integer.parseInt(isoElement.getAttribute(
                                new QName("id")).getAttributeValue());

                        isoMsg.set(isoTypeID, isoValue);


                    return isoMsg;

                } catch (ISOException ex) {
                    throw new AxisFault("Error parsing the ISO8583 payload");
                } catch (Exception e) {

                    throw new AxisFault("Error processing stream");


            public ASCIIChannel getAsciiChannel() {
                return asciiChannel;

            public void setAsciiChannel(ASCIIChannel asciiChannel) {
                this.asciiChannel = asciiChannel;


        Here, within the formatter, we transform the XML message into an ISO8583 binary message and send it to the back-end server.

        This is only one way of dividing your message-sending logic between a message sender and a message formatter. You can design your implementation according to your requirements. Sometimes you may not need a specific formatter at all and can do the formatting in the sender itself, but I have delegated part of the message handling to the message formatter for demonstration purposes.

        Likewise, you can write a message receiver and message builder for receiving messages via the ISO8583 protocol. I will leave that as an exercise for the reader.

        Once we have the message sender and formatter implemented, we need to register them in the axis2.xml file. Let's go to the axis2.xml file and add the following two entries there.

                <messageFormatter contentType="application/iso8583"
                                  class="org.wso2.transport.iso8583.ISO8583MessageFormatter"/>

        <transportSender name="iso8583" class="org.wso2.transport.iso8583.ISO8583TransportSender"/>

        Once you create the jar file from your custom transport implementation code, place it in the ESB_HOME/repository/components/lib directory.

        If you are done with the above steps, you can start the ESB server.

        Let's create a sample API to interact with this custom transport implementation. Here is the API definition.

        <api xmlns="http://ws.apache.org/ns/synapse" name="iso8583" context="/iso8583">
           <resource methods="POST GET">
              <inSequence>
                 <log level="full"/>
                 <property name="OUT_ONLY" value="true"/>
                 <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
                 <property name="messageType" value="application/iso8583" scope="axis2"/>
                 <send>
                    <endpoint name="isoserver">
                       <address uri="iso8583://localhost:5000"/>
                    </endpoint>
                 </send>
              </inSequence>
           </resource>
        </api>

        In the above configuration, I have specified the messageType as application/iso8583 so that the correct message formatter is engaged within the mediation flow.

        Now we need to create a sample TestServer to act as the ISO8583 back-end server. We can create a mock server using the jPOS library itself. Here is the code for the TestServer.

        public class TestServer implements ISORequestListener {
            static final String hostname = "localhost";
            static final int portNumber = 5000;

            public static void main(String[] args) throws ISOException {

                ISOPackager packager = new GenericPackager("jposdef.xml");
                ServerChannel channel = new ASCIIChannel(hostname, portNumber, packager);
                ISOServer server = new ISOServer(portNumber, channel, null);

                server.addISORequestListener(new TestServer());

                System.out.println("ISO8583 server started...");
                new Thread(server).start();
            }

            public boolean process(ISOSource isoSrc, ISOMsg isoMsg) {
                try {
                    System.out.println("ISO8583 incoming message on host ["
                            + ((BaseChannel) isoSrc).getSocket().getInetAddress()
                            .getHostAddress() + "]");

                    if (isoMsg.getMTI().equalsIgnoreCase("1800")) {
                        receiveMessage(isoSrc, isoMsg);
                        logISOMsg(isoMsg);
                    }
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
                return true;
            }

            private void receiveMessage(ISOSource isoSrc, ISOMsg isoMsg)
                    throws ISOException, IOException {
                System.out.println("ISO8583 Message received...");
                ISOMsg reply = (ISOMsg) isoMsg.clone();
                reply.set(39, "00"); // response code 00 = approved
                isoSrc.send(reply);
            }

            private static void logISOMsg(ISOMsg msg) {
                System.out.println("----ISO MESSAGE-----");
                try {
                    System.out.println("  MTI : " + msg.getMTI());
                    for (int i = 1; i <= msg.getMaxField(); i++) {
                        if (msg.hasField(i)) {
                            System.out.println("    Field-" + i + " : "
                                    + msg.getString(i));
                        }
                    }
                } catch (ISOException e) {
                    e.printStackTrace();
                } finally {
                    System.out.println("--------------------");
                }
            }
        }



        You can run the above program to mimic the ISO8583 server, and then we can send a message from a client such as the Advanced REST Client plugin in the Chrome browser. Our message payload should look like the following.

                   <iso8583message>
                      <config>
                         <mti>1800</mti>
                      </config>
                      <data>
                         <field id="3">110</field>
                         <field id="5">4200.00</field>
                         <field id="48">Simple Credit Transaction</field>
                         <field id="6">645.23</field>
                         <field id="88">66377125</field>
                      </data>
                   </iso8583message>

        When we send this message from the client, the ESB will accept it, execute the message sender we have written, select the message formatter and send the message to the mock back-end server. You will see the following log on the TestServer side if you have done everything right.

        ISO8583 server started...
        ISO8583 incoming message on host []
        ISO8583 Message received...
        ----ISO MESSAGE-----
          MTI : 1800
            Field-3 : 000110
            Field-5 : 000004200.00
            Field-6 : 000000645.23
            Field-48 : Simple Credit Transaction
            Field-88 : 0000000066377125

        Once you have configured the ESB, you will get exceptions unless you copy the following jar files to the lib directory alongside the custom transport jar file.

        • jpos-1.9.0.jar
        • jdom-1.1.3.jar
        • commons-cli-1.3.1.jar

        Now we have written our message sender and message formatter implementations. Likewise, you can implement the message receiver and message builder as well. I have created an archive with all the artifacts developed for this blog post and uploaded them to GitHub. You can download the projects and relevant jar files from the following location.

        Chanaka FernandoExtending WSO2 ESB with a Custom Transport Implementation - Part I

        WSO2 ESB is considered one of the best and highest-performing open source integration solutions available in the market. One of its most striking features is its extensibility to meet your custom requirements. This means a LOT if you have dealt with proprietary solutions provided by the big players (you name it). In this blog post, I will be discussing one of the not so frequently used but THE BEST extension points in WSO2 ESB: implementing a custom transport.

        The fact that WSO2 ESB is an extensible solution does not mean that it is lacking OOTB features. In fact, it provides one of the most complete feature sets of any open source integration solution in the market. But as you know, it is not a silver bullet (in fact, we can't make silver bullets). Therefore, you may encounter scenarios where you need to write a custom transport implementation to connect with one of your systems.

        I will be using the ISO8583 messaging standard for this custom transport implementation. It is used heavily in the financial transactions domain for credit card transaction processing. One of the reasons for selecting it is that Manoj Fernando has already written code for this transport in his blog post. Since the focus of this blog post is describing how to write a custom transport implementation, I will not re-write what Manoj has written.

        Enough talking. Let's do some real work. The WSO2 ESB mediation engine can be depicted in the following diagram.

        • As depicted in the above diagram, requests/responses coming from clients/servers (inbound) hit the ESB through the transport layer, which selects the proper transport receiver implementation by looking at the request URI (e.g. HTTP, TCP, JMS, etc.).
        • The transport hands the message over to the appropriate message builder according to the content type specified in the message (if any).
        • The message is then handed over to the Axis engine (In Flow), which performs QoS-related operations.
        • After that, the message is handed over to the mediation engine to execute the mediation logic configured with mediators.
        • The message then goes through the Axis engine again (Out Flow) for any QoS-related operations.
        • A message formatter is selected according to the content type provided in the message.
        • Finally, the message is passed to the relevant transport sender implementation to send it from the ESB to the client/server (outbound).

        Alright.. Alright.. Now we know what happens to a message coming into WSO2 ESB and what happens when a message goes out of it. I have highlighted four terms in the previous section. Those four terms are:

        • Transport Receiver
        • Message Builder
        • Message Formatter
        • Transport Sender
        These are the classes we need to implement for our custom transport. WSO2 ESB provides interfaces and base classes for these implementations so that you can focus only on the business logic rather than the internals of WSO2 ESB. We will use the following interfaces and base classes to write our custom implementation.

        • org.apache.axis2.transport.base.AbstractTransportListener
        • org.apache.axis2.builder.Builder
        • org.apache.axis2.transport.MessageFormatter
        • org.apache.axis2.transport.base.AbstractTransportSender
        Now we have covered the ground for our custom transport implementation. Let's do some coding. I will move this discussion to my next blog post, since this one is getting too long.

        Hiranya JayathilakaExpose Any Shell Command or Script as a Web API

        I implemented a tool that can expose any shell command or script as a simple web API. All you have to specify is the binary (command/script) that needs to be exposed, and optionally a port number for the HTTP server. Source code of the tool in its entirety is shown below. In addition to exposing simple web APIs, this code also shows how to use Golang's built-in logging package, slice to varargs conversion and a couple of other neat tricks.
        // This tool exposes any binary (shell command/script) as an HTTP service.
        // A remote client can trigger the execution of the command by sending
        // a simple HTTP request. The output of the command execution is sent
        // back to the client in plain text format.
        package main

        import (
            "flag"
            "fmt"
            "io/ioutil"
            "log"
            "net/http"
            "os"
            "os/exec"
            "strings"
        )

        func main() {
            binary := flag.String("b", "", "Path to the executable binary")
            port := flag.Int("p", 8080, "HTTP port to listen on")
            flag.Parse()

            if *binary == "" {
                fmt.Println("Path to binary not specified.")
                return
            }

            l := log.New(os.Stdout, "", log.Ldate|log.Ltime)
            http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                var argString string
                if r.Body != nil {
                    data, err := ioutil.ReadAll(r.Body)
                    if err != nil {
                        http.Error(w, err.Error(), http.StatusInternalServerError)
                        return
                    }
                    argString = string(data)
                }

                fields := strings.Fields(*binary)
                args := append(fields[1:], strings.Fields(argString)...)
                l.Printf("Command: [%s %s]", fields[0], strings.Join(args, " "))

                output, err := exec.Command(fields[0], args...).Output()
                if err != nil {
                    http.Error(w, err.Error(), http.StatusInternalServerError)
                    return
                }
                w.Header().Set("Content-Type", "text/plain")
                w.Write(output)
            })

            l.Printf("Listening on port %d...", *port)
            l.Printf("Exposed binary: %s", *binary)
            http.ListenAndServe(fmt.Sprintf(":%d", *port), nil)
        }
        Clients invoke the web API by sending HTTP GET and POST requests. Clients can also send additional flags and arguments to be passed to the command/script wrapped by the web API. The result of the command/script execution is sent back to the client as a plain text payload.
        As an example, assume you need to expose the "date" command as a web API. You can simply run the tool as follows:
        ./bash2http -b date
        Now, the clients can invoke the API by sending an HTTP request to http://host:8080. The tool will run the "date" command on the server, and send the resulting text back to the client. Similarly, to expose the "ls" command with the "-l" flag (i.e. long output format), we can execute the tool as follows:
        ./bash2http -b "ls -l"
        Users sending an HTTP request to http://host:8080 will now get a file listing (in the long output format of course), of the current directory of the server. Alternatively users can POST additional flags and a file path to the web API, to get a more specific output. For instance:
        curl -v -X POST -d "-h /usr/local" http://host:8080
        This will return a file listing of /usr/local directory of the server with human-readable file size information.
        You can also use this tool to expose custom shell scripts and other command-line programs. For example, if you have a Python script which you wish to expose as a web API, all you have to do is:
        ./bash2http -b "python"

        Lali DevamanthriMicroservices with API Gateway

        Let’s imagine that you are developing a native mobile client for a shopping application. You would have to implement a product details page, which displays the following information:

        • Items in the shopping cart
        • Order history
        • Customer reviews
        • Low inventory warning
        • Shipping options
        • Various recommendations (other products bought by customers who bought this product)
        • Alternative purchasing options


        In a monolithic application architecture, this data would be retrieved by making a single REST call (GET <productId>) to the application. A load balancer routes the request to one of N identical application instances. The application then queries various database tables and returns the response to the client.

        But if you use a microservice architecture, the data to be displayed must be retrieved from multiple microservices. Here are some of the microservices we would need.

        • Shopping Cart Service – items in the shopping cart
        • Order Service – order history
        • Catalog Service – basic product information, such as it’s name, image, and price
        • Review Service – customer reviews
        • Inventory Service – low inventory warning
        • Shipping Service – shipping options, deadlines, and costs drawn separately from the shipping provider’s API
        • Recommendation Service(s) – suggested items

        All of the above microservices would have a public endpoint (https://<serviceName>), and the client would have to make many requests to retrieve all the necessary data. If the app needs to make hundreds of requests to render one page, it would be inefficient. Also, if the existing microservices respond with different data types, the app has to handle that too.

        For these reasons, it is wise to use an API Gateway that encapsulates the internal microservices and provides an API tailored to each client. The API Gateway is responsible for request mediation and for composing a single response.
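        The composition step can be sketched in plain Java with no gateway framework at all: fan out to the back-end services in parallel and merge the partial results into one response. The service methods below are stubs standing in for real microservice calls; all names and payloads are illustrative only.

```java
import java.util.concurrent.CompletableFuture;

// A dependency-free sketch of an API Gateway's composition step: call several
// (here stubbed) microservices in parallel and merge the partial results into
// a single response for the product-details page.
public class ProductDetailsGateway {
    // Stand-ins for remote microservice calls (illustrative values only)
    static CompletableFuture<String> catalogService(String id)   { return CompletableFuture.supplyAsync(() -> "name=Widget"); }
    static CompletableFuture<String> reviewService(String id)    { return CompletableFuture.supplyAsync(() -> "reviews=42"); }
    static CompletableFuture<String> inventoryService(String id) { return CompletableFuture.supplyAsync(() -> "stock=low"); }

    public static void main(String[] args) {
        String productId = "123";
        CompletableFuture<String> catalog = catalogService(productId);
        CompletableFuture<String> reviews = reviewService(productId);
        CompletableFuture<String> stock   = inventoryService(productId);
        // The client makes one round trip to the gateway instead of three.
        String response = CompletableFuture.allOf(catalog, reviews, stock)
                .thenApply(v -> catalog.join() + ";" + reviews.join() + ";" + stock.join())
                .join();
        System.out.println(response); // name=Widget;reviews=42;stock=low
    }
}
```

        The three stub calls run concurrently, so the page latency is bounded by the slowest service rather than the sum of all of them.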
        A great example of an API Gateway is the Netflix API Gateway. The Netflix streaming service is available on hundreds of different kinds of devices including televisions, set-top boxes, smartphones, gaming systems, tablets, etc. Initially, Netflix attempted to provide a one-size-fits-all API for their streaming service. However, they discovered that it didn’t work well because of the diverse range of devices and their unique needs.

        Chanaka FernandoMonitoring Garbage Collection of WSO2 ESB

        Garbage collection is one of the most important features of the Java programming language, and it helped make Java the automatic choice for developing enterprise applications. WSO2 ESB is written entirely in Java, and garbage collection is closely tied to the performance of a Java program; the ESB needs to provide maximum performance to the users who rely on it for their enterprise integrations. In this blog post, I will discuss different tools which we can use to monitor the GC performance of WSO2 ESB.

        1) Monitoring GC activity using jstat command

        We can use the jstat command-line tool, which comes with the JDK, to monitor the GC activity of a Java program. Let's start the WSO2 ESB server by executing the wso2server.sh file located in the ESB_HOME/bin directory.

        sh wso2server.sh start

        Then we need to find the process ID of this Java process using the following command.

        ps -ef | grep wso2esb | grep java

        501 13352 13345   0  7:25PM ttys000    1:18.41

        We can then execute the jstat command with that process ID.

        jstat -gc 13352 1000

        In the above command, the last argument is the interval at which the statistics are printed. The above command prints statistics every 1 second.

         S0C       S1C      S0U   S1U      EC          EU            OC            OU        PC          PU     YGC   YGCT   FGC  FGCT    GCT   
        49664.0 50688.0  0.0    0.0   246272.0 135276.4  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
        49664.0 50688.0  0.0    0.0   246272.0 135276.7  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
        49664.0 50688.0  0.0    0.0   246272.0 135281.1  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
        49664.0 50688.0  0.0    0.0   246272.0 135281.1  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
        49664.0 50688.0  0.0    0.0   246272.0 135281.1  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
        49664.0 50688.0  0.0    0.0   246272.0 135281.2  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818
        49664.0 50688.0  0.0    0.0   246272.0 135285.7  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818

        The above output provides detailed information on the GC activity of the Java program.

        • S0C and S1C: These columns show the current size of the Survivor0 and Survivor1 areas in KB.
        • S0U and S1U: These columns show the current usage of the Survivor0 and Survivor1 areas in KB. Notice that one of the survivor areas is empty at any given time.
        • EC and EU: These columns show the current size and usage of the Eden space in KB. Note that EU keeps increasing, and as soon as it crosses EC, a minor GC is triggered and EU drops.
        • OC and OU: These columns show the current size and current usage of the old generation in KB.
        • PC and PU: These columns show the current size and current usage of the Perm Gen in KB.
        • YGC and YGCT: The YGC column displays the number of GC events that have occurred in the young generation. The YGCT column displays the accumulated GC time for the young generation. Notice that both increase in the same row where the EU value drops because of a minor GC.
        • FGC and FGCT: The FGC column displays the number of Full GC events that have occurred. The FGCT column displays the accumulated time for Full GC operations. Notice that Full GC time is high compared to the young-generation GC timings.
        • GCT: This column displays the total accumulated time for GC operations. Notice that it is the sum of the YGCT and FGCT column values.
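        The GCT relationship in the last bullet can be checked directly against the sample output. This is a small, self-contained sketch (plain JDK, no WSO2 dependencies; the class name is mine) that parses one jstat -gc data row and verifies that GCT = YGCT + FGCT:

```java
// Parses one data row of `jstat -gc` output (copied from the sample above) and
// checks that total GC time (GCT) equals young (YGCT) plus full (FGCT) GC time.
// Column order follows the header: S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT.
public class JstatRowCheck {
    public static void main(String[] args) {
        String row = "49664.0 50688.0  0.0    0.0   246272.0 135276.4  175104.0   91437.3   114688.0 61223.4     24    0.954   1      0.864    1.818";
        String[] cols = row.trim().split("\\s+");
        double ygct = Double.parseDouble(cols[11]); // accumulated young-GC time (s)
        double fgct = Double.parseDouble(cols[13]); // accumulated full-GC time (s)
        double gct  = Double.parseDouble(cols[14]); // total accumulated GC time (s)
        System.out.printf("YGCT=%.3f FGCT=%.3f GCT=%.3f%n", ygct, fgct, gct);
        System.out.println("GCT equals YGCT + FGCT: " + (Math.abs(gct - (ygct + fgct)) < 0.001));
        // prints "GCT equals YGCT + FGCT: true" for the sample row (0.954 + 0.864 = 1.818)
    }
}
```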
        2) Monitoring GC activity using JVisualVM and Visual GC

        If you need to monitor the GC activity in a graphical manner, you can use the jvisualvm tool, which comes with the JDK, together with the Visual GC plugin.

        Just run the jvisualvm command in the terminal to launch the Java VisualVM application.

        Once launched, you need to install the Visual GC plugin via the Tools -> Plugins -> Available Plugins (tab) option.

        After installing Visual GC, just open the application (by double-clicking) from the left-hand column and head over to the Visual GC section.

        As depicted in the above diagram, you can visually monitor the GC activities of the WSO2 ESB using the jvisualvm tool.

        3) Monitoring GC activity using GC log file

        In most production use cases, we don't want to interact with the running process through external programs. Instead, we would like GC logging to be an integral part of the program itself. We can enable GC logging for the JVM so that it logs all GC activities into a separate log file, which monitoring tools can interpret without interacting with the application directly.

        You can enable GC logging to an external log file by adding the following flags to the startup script of the WSO2 ESB (wso2server.sh):

            -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
            -Xloggc:/Users/chanaka-mac/Documents/wso2/wso2esb-4.8.1/repository/logs/gc.log \

        When you start the server with these flags included in the startup script, you can observe that the gc.log file is populated with the relevant GC activity, as depicted below.

        Chanakas-MacBook-Air:logs chanaka-mac$ tail -f gc.log 
        2015-06-20T20:04:22.222-0530: 20.347: [GC [PSYoungGen: 253355K->31555K(285184K)] 354302K->132534K(460288K), 0.0179690 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] 
        2015-06-20T20:04:23.031-0530: 21.156: [GC [PSYoungGen: 262979K->33422K(288256K)] 363958K->134431K(463360K), 0.0183790 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
        2015-06-20T20:04:23.797-0530: 21.922: [GC [PSYoungGen: 264846K->35384K(292352K)] 365855K->136426K(467456K), 0.0192760 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] 
        2015-06-20T20:04:24.468-0530: 22.593: [GC [PSYoungGen: 271928K->33834K(292864K)] 372970K->134994K(467968K), 0.0195170 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] 
        2015-06-20T20:04:25.162-0530: 23.287: [GC [PSYoungGen: 270378K->29641K(288768K)] 371538K->130840K(463872K), 0.0186680 secs] [Times: user=0.03 sys=0.00, real=0.02 secs] 
        2015-06-20T20:04:26.547-0530: 24.672: [GC [PSYoungGen: 267721K->2845K(290816K)] 368920K->104320K(465920K), 0.0069150 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
        2015-06-20T20:04:29.429-0530: 27.554: [GC [PSYoungGen: 240925K->9509K(294400K)] 342400K->111496K(469504K), 0.0123910 secs] [Times: user=0.04 sys=0.00, real=0.02 secs] 
        2015-06-20T20:04:32.290-0530: 30.415: [GC [PSYoungGen: 248613K->28794K(268288K)] 350600K->134734K(443392K), 0.0373390 secs] [Times: user=0.13 sys=0.01, real=0.03 secs] 
        2015-06-20T20:04:37.493-0530: 35.618: [GC [PSYoungGen: 249742K->21673K(287744K)] 355682K->152515K(462848K), 0.0903050 secs] [Times: user=0.16 sys=0.02, real=0.09 secs] 
        2015-06-20T20:04:37.584-0530: 35.709: [Full GC [PSYoungGen: 21673K->0K(287744K)] [ParOldGen: 130841K->80598K(175104K)] 152515K->80598K(462848K) [PSPermGen: 57507K->57484K(115200K)], 0.8345630 secs] [Times: user=1.68 sys=0.14, real=0.83 secs] 

        From the above log, we can extract information related to GC activities going on within the WSO2 ESB server.

        1. 2015-06-20T20:04:22.222-0530: – Time when the GC event started.
        2. 20.347 – Time when the GC event started, relative to the JVM startup time. Measured in seconds.
        3. GC – Flag to distinguish between Minor & Full GC. This time it is indicating that this was a Minor GC.
        4. PSYoungGen – Collection type. 
        5. 253355K->31555K – Usage of Young generation before and after collection.
        6. (285184K) – Total size of the Young generation.
        7. 354302K->132534K – Total used heap before and after collection.
        8. (460288K) – Total available heap.
        9. 0.0179690 secs – Duration of the GC event in seconds.
        10. [Times: user=0.02 sys=0.00, real=0.02 secs] – Duration of the GC event, measured in different categories:
          • user – Total CPU time that was consumed by Garbage Collector threads during this collection
          • sys – Time spent in OS calls or waiting for system event
          • real – Clock time for which your application was stopped. As Serial Garbage Collector always uses just a single thread, real time is thus equal to the sum of user and system times.
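        The field breakdown above can be automated with a regular expression. This is a minimal sketch (plain JDK; the class name and pattern are mine, written only for the PSYoungGen minor-GC line format shown above) that pulls those fields out of the first sample line:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the fields described above from one minor-GC line of the sample gc.log.
public class GcLogParse {
    public static void main(String[] args) {
        String line = "2015-06-20T20:04:22.222-0530: 20.347: [GC [PSYoungGen: 253355K->31555K(285184K)] 354302K->132534K(460288K), 0.0179690 secs] [Times: user=0.02 sys=0.00, real=0.02 secs] ";
        Pattern p = Pattern.compile(
                "^(\\S+): ([\\d.]+): \\[GC \\[(\\w+): (\\d+)K->(\\d+)K\\((\\d+)K\\)\\] (\\d+)K->(\\d+)K\\((\\d+)K\\), ([\\d.]+) secs\\]");
        Matcher m = p.matcher(line);
        if (m.find()) {
            System.out.println("Started at        : " + m.group(1) + " (uptime " + m.group(2) + " s)");
            System.out.println("Collection type   : " + m.group(3));
            System.out.println("Young before/after: " + m.group(4) + "K -> " + m.group(5) + "K of " + m.group(6) + "K");
            System.out.println("Heap before/after : " + m.group(7) + "K -> " + m.group(8) + "K of " + m.group(9) + "K");
            System.out.println("Pause             : " + m.group(10) + " secs");
        }
    }
}
```

        Feeding the whole gc.log through such a parser gives you pause times and heap deltas you can graph or alert on without attaching any tool to the running ESB.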

        I hope this blog post has provided you with a comprehensive guide on monitoring GC activity in WSO2 ESB for performance tuning. I will discuss the GC performance of different GC algorithms with WSO2 ESB in a future blog post.

        Happy GC Monitoring !!!

        Yumani hazelcast.max.no.heartbeat.seconds property

        Hazelcast uses heart beat monitoring as a fault detection mechanism, where the nodes send heart beats to other nodes. If a heart beat message is not received within a given amount of time, then Hazelcast assumes the node is dead. This is configured via the hazelcast.max.no.heartbeat.seconds property. By default, this is set to 600 seconds.

        The optimum value for this property would depend on your system.

        Steps on how to configure the hazelcast.max.no.heartbeat.seconds property:
        1. Create a property file called hazelcast.properties, and add the property to it.
        2. Place this file in the CARBON_HOME/repository/conf/ folder of all your Carbon nodes.
        3. Restart the servers.
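        Assuming the standard Hazelcast property name for the heartbeat timeout (hazelcast.max.no.heartbeat.seconds) and the file name from step 1, the property file might look like this:

```properties
# CARBON_HOME/repository/conf/hazelcast.properties
# Mark a node dead if no heart beat is received within 600 seconds (the default).
hazelcast.max.no.heartbeat.seconds=600
```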

        sanjeewa malalgodaHow to import, export APIs with WSO2 API Manager 1.9.0

        In API Manager 1.9.0 we have introduced an API import/export capability. With this, we can export an API from one deployment and import it into another.
        This feature retrieves all the required meta information and registry resources for the requested API and generates a zipped archive,
        which we can then upload to another API Manager server.

        To try this, first you need to get the web application source code from this git repo.

        Then build it and generate the web application.
        After that, you have to deploy it in API Manager. For that you may use the web application UI: log in to the management console as an admin user and go to this link.

        Home     > Manage     > Applications     > Add     > Web Applications

        Then add the web application.
        The zipped archive of an API will consist of the following structure:

        |_ Meta Information
           |_ api.json
        |_ Documents
           |_ docs.json
        |_ Image
           |_ icon.
        |_ WSDL
           |_ -.wsdl
        |_ Sequences
           |_ In Sequence
           |_ Out Sequence
           |_ Fault Sequence
        API import accepts the exported zipped archive and creates the API in the environment into which it is imported.

        This feature has been implemented as a RESTful API.

        Please use the following curl command to export an API.
        Here you need to provide basic auth headers for the admin user,
        and pass the following parameters.

        Name of the API > name=test-sanjeewa
        Version of the API > version=1.0.0
        Provider of the API > provider=admin
        curl -H "Authorization:Basic YWRtaW46YWRtaW4=" -X GET "https://localhost:9443/api-import-export/export-api?name=test-sanjeewa&version=1.0.0&provider=admin" -k >
        Now you will see downloaded zip file in current directory.

        Then you need to import the downloaded zip file into the other deployment.
        See the following sample command.

        Here, file is the archive file downloaded above,
        and the service call should go to the server into which we need to import this API. Here I'm running my second server with port offset one, so the URL would be "https://localhost:9444"
        curl -H "Authorization:Basic YWRtaW46YWRtaW4=" -F file=@"/home/sanjeewa/work/" -k -X POST "https://localhost:9444/api-import-export/import-api"
        Now go to the API Publisher and change the API life cycle to Published (by default, imported APIs will be in the Created state).

        Then go to the API Store, subscribe, and use it :-)
        Thanks Thilini and Chamin for getting this done.

        Hasitha AravindaSetting Up Mutual SSL in WSO2 ESB and Testing Using SOAP UI

        This blog post is an updated version of Asela's blog.

        Exchanging Certificates with Client and Server. 

        The first step is to create the client key store and the client trust store. Here I am using the Java keytool, which can be found in the JDK bin directory.

        1) Create the client (let's call it wso2client) key store (wso2clientkeystore.jks)

        keytool -genkey -keyalg RSA -keystore wso2clientkeystore.jks  -alias wso2client -dname "CN=wso2client" -validity 3650 -keysize 2048

        Provide a store password and a key password.

        2) Export the client certificate (wso2client.cert)

        keytool -export -keyalg RSA -keystore wso2clientkeystore.jks -alias wso2client  -file wso2client.cert

        3) Create Client Trust Store (wso2clientTrustStore.jks)

        keytool -import -file wso2client.cert -alias wso2client -keystore wso2clientTrustStore.jks

        Provide the trust store password.

        4) Export the ESB server certificate

        keytool -export -keyalg RSA -keystore <ESB_HOME>/repository/resources/security/wso2carbon.jks -alias wso2carbon -file wso2carbon.cert

        Provide wso2carbon store password "wso2carbon"

        5) Import the client certificate wso2client.cert into the WSO2 ESB client-truststore.jks

        keytool -import -file wso2client.cert -alias wso2client -keystore <ESB_HOME>/repository/resources/security/client-truststore.jks

        Provide wso2carbon store password "wso2carbon"

        6) Import the ESB server certificate wso2carbon.cert into the client truststore

        keytool -import -file wso2carbon.cert -alias wso2carbon -keystore wso2clientTrustStore.jks

        Configure WSO2 ESB Server 

        1) Edit the https transportReceiver in axis2.xml, which is located in the <ESB_HOME>/repository/conf/axis2/ folder, and set the SSLVerifyClient parameter to require as follows.
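The original post shows this configuration as an image; below is a minimal sketch of the axis2.xml fragment. The SSLVerifyClient parameter is the only change; keep the listener class and the other parameters (port, keystore, truststore) exactly as your axis2.xml already declares them.

```xml
<transportReceiver name="https"
                   class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
    <!-- existing parameters (port, keystore, truststore, ...) stay unchanged -->

    <!-- require every client to present a trusted certificate during the SSL handshake -->
    <parameter name="SSLVerifyClient">require</parameter>
</transportReceiver>
```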

        2) Restart ESB Server.

        Note: This will enable mutual SSL for proxy services on the https transport in the ESB.

        Create Test Proxy

        Create a test proxy with the following content.

        Testing Test Proxy Using SOAP UI

        1) Open SOAP UI and create a SOAP UI project using Test Proxy WSDL. ( https://localhost:9443/services/Test?wsdl )

        2) Try to Invoke Test Proxy with default configuration.

        As shown below, it will fail. This is because SOAP UI does not have the wso2client keystore and truststore.

        3) Let's add the keystore and truststore to the project. Open the test project's Properties -> WS-Security Configuration -> Key Store -> Add Key Store, as shown in the following picture, and select wso2clientkeystore.jks.

        4) Enter the store password for wso2clientkeystore.jks.

        5) Similarly, add the client truststore to SOAP UI (an optional step for this tutorial).

        6) Set the SSL Keystore property to wso2clientkeystore.jks.

        7) Invoke Request 1 again with the SSL configuration.

        Now you will be able to invoke the Test proxy service with mutual SSL enabled.
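If you prefer to test from code instead of SOAP UI, the same handshake can be driven from a plain Java client. The sketch below is an illustration only (the class name, file paths, and passwords are my own placeholders matching the steps above): it builds an SSLContext that presents the client certificate from wso2clientkeystore.jks and trusts the server certificate via wso2clientTrustStore.jks.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class MutualSSLContextBuilder {

    // Builds an SSLContext that presents the client certificate from the
    // keystore and trusts the server certificate from the truststore.
    public static SSLContext build(String keyStorePath, char[] keyStorePass,
                                   String trustStorePath, char[] trustStorePass)
            throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(keyStorePath)) {
            keyStore.load(in, keyStorePass);
        }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keyStorePass);

        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            trustStore.load(in, trustStorePass);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext context = SSLContext.getInstance("TLS");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context;
    }
}
```

An HttpsURLConnection to https://localhost:9443/services/Test can then be configured with context.getSocketFactory() via setSSLSocketFactory().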

        In the next blog, I will discuss how to enable mutual SSL for only one proxy.

        Hasitha AravindaSetting Up Mutual SSL in WSO2 ESB - Enable only for selected proxy services

        This Blog post is an updated version of Asela's Blog 

        I am using the same environment described in my previous blog for this tutorial.

        Configure WSO2 ESB Server 

        1) Edit the https transportReceiver in axis2.xml, which is located in the <ESB_HOME>/repository/conf/axis2/ folder, and set the SSLVerifyClient parameter to optional as follows.
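As in the previous post, the configuration itself appears as an image in the original; the change is the same SSLVerifyClient parameter in the https transportReceiver, this time set to optional. A sketch (keep the listener class and other parameters your axis2.xml already has):

```xml
<transportReceiver name="https"
                   class="org.apache.synapse.transport.passthru.PassThroughHttpSSLListener">
    <!-- mutual SSL is attempted, but clients without a certificate are still accepted -->
    <parameter name="SSLVerifyClient">optional</parameter>
</transportReceiver>
```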
        2) Restart ESB Server.

        Note: This will make Mutual SSL optional for proxy services exposed on https transport.

        Now you will be able to invoke the Test Proxy without the SSL KeyStore property in SOAP UI. To verify this, remove the value of the SSL KeyStore property and invoke Request 1 again.

        Enable Mutual SSL for Test Proxy

        1) Create an ESB XML local entry called MutualSSLPolicy.xml with the following content.

        2) Add the following parameters to the Test Proxy.

        (Add these parameters to any proxy service for which you want to enable mutual authentication.)

        3) The final Test proxy will look like this:

        Testing With SOAP UI 

        1) Try Request 1 without the SSL KeyStore parameter. The request fails with a SOAP fault.

        2) Now try with the SSL KeyStore parameter. This time you will be able to invoke the Test proxy service.

        John MathonMost Enterprises are incompetent to manage the infrastructure they have

        Virtual Enterprises


        This may sound insulting, but the fact is that almost all companies are not really competent to buy, manage, and handle the lifecycle of technology for the enterprise. In most cases they are incompetent at the security aspects of physical infrastructure. If you have already invested in some physical infrastructure, it may not be cost-effective to eliminate it, but you should seriously consider the real need for each addition to the physical infrastructure you manage.

        Why almost all companies should NOT be managing the technology they use:

        1) Do not know which technologies in many cases are the most cost effective, easily managed, best tools to use

        I have been in companies that simply buy the most expensive, fastest servers (financial industry) as if they were buying sports cars. Which router do you buy? What is the best choice of hardware memory, CPU speed, and number of CPUs for your organization? Assuming you buy the hardware for a specific application, what happens when that application changes and different hardware is needed? Do you need SDN? Would it be helpful? Most companies make these decisions because of a relationship with a specific vendor or some individual who has become infatuated with a particular technology. Few have the discipline to make the choice taking into account the full cost of the options.

        2) Do not know how to manage the lifecycle costs and usually have not considered the full cost of technology maintenance in their calculations.

        What happens when that shiny box you bought is 3 years old and there are dozens of newer boxes 5 times more powerful?  How do you handle situations where you have high maintenance costs for a technology?  Do you keep maintaining it or move to something with less maintenance costs?   What happens if you have to keep this technology 10 or 20 years in the company? How do you handle integrating it with new technology, keeping it alive?

        3) Do not know how to share the resources or may not even have the ability to share the technology effectively.

        You buy or have bought expensive new technology. Do you know how to share this technology within your organization to get the most cost effectiveness? In many cases it may simply not be possible to share the technology across your organization sufficiently to gain maximum cost savings. Have you fully considered the costs of wasted servers and hardware sitting idle or underused much of the day, in energy and environmental terms as well as the financial burden?

        4) Do not know how to maintain the technology they purchase or decide when it is a good idea to sunset a technology and move on.   They will continue using antiquated technology well beyond its intended lifetime.

        Most companies, if they use a technology successfully, will keep that technology around virtually forever. Many older companies are still running IBM mainframe software for critical business functions. In many cases this costs them billions per year. While it may be justifiable to keep that technology alive, no sane CIO should repeat this by investing in technology today that will still be costing billions years and years from now, considering that there is NO NEED to do so. With the cloud you can minimize those kinds of long-term dependencies. You may find it hard to unravel old decisions that were justified and continue to be worthwhile, but most companies can choose shared resources that place the maintenance burden on a separate organization and are shared among many companies.

        Most companies don’t manage their hardware or software maintenance well, leading to more downtime, unexpected outages, and more cost than necessary.

        5) They do not know how to manage the security of the technology they have and frequently have attacks and losses due to badness, i.e. poor practices, poor maintenance, lack of knowledge or employee training.

        Security in today’s age of government spying and hacking from all over the world is a tough sophisticated job.   Most companies experience dozens, even more than 100 security incidents a year.   The average company patches critical software with security patches 30-60 days after the vulnerability was discovered.  This 30-60 day window means essentially that most companies are effectively completely vulnerable to sophisticated attackers.   It is expensive and hard to train employees on best security practices, to monitor and track every possible avenue of loss.

        6) Not competent to interview and assess who to hire to do 1-5.

        Even if, after reading all this, you decide your business requires you to purchase technology and manage it yourself, are you competent to hire the right people to do the jobs above? Do you even know the right questions to ask? How can you be sure they are doing the best job and that best practices are being employed? How do you manage such assets? What if you lose them? I have worked at companies whose job was to provide security technology, and at banks with high standards for security and a need to maintain technology competence, yet they frequently fell below the bar. Hiring and retaining the talent needed to do these tasks at a high level is nontrivial.


        Most companies are not in the business of technology, and it is a waste of their time and energy to manage technology. Most companies are not technology companies; although more and more need to consider technology a key part of their business, that doesn’t mean they need to manage the technology they use.

        In this age of rapid technology evolution and the connected business, almost every business deals with technology as an important part of its operations, but that doesn’t mean it needs to own and manage everything. You should very carefully consider which things are worth the significant investment of owning or managing yourself.

        Chanaka FernandoGarbage Collection and Application Performance

        Automatic Garbage Collection is one of the finest features of the Java programming language. You can find more information about Garbage Collection concepts from the below link.

        Even though GC is a cool feature of the JVM, it comes at a cost: your application stops working ("Stop the World") when GC happens at the JVM level. This means that GC events affect the performance of your Java application, so you should have a proper understanding of the impact of GC on your application.

        There are two general ways to reduce garbage-collection pause time and the impact it has on application performance:

        • The garbage collection itself can leverage the existence of multiple CPUs and be executed in parallel. Although the application threads remain fully suspended during this time, the garbage collection can be done in a fraction of the time, effectively reducing the suspension time.
        • The second approach is to leave the application running and execute garbage collection concurrently with the application execution.

        These two logical solutions have led to the development of serial, parallel, and concurrent garbage-collection strategies, which represent the foundation of all existing Java garbage-collection implementations.

        The serial collector suspends the application threads and executes the mark-and-sweep algorithm in a single thread. It is the simplest and oldest form of garbage collection in Java and is still the default in the Oracle HotSpot JVM.

        The parallel collector uses multiple threads to do its work. It can therefore decrease the GC pause time by leveraging multiple CPUs. It is often the best choice for throughput applications.

        The concurrent collector does the majority of its work concurrently with the application execution. It has to suspend the application for only very short amounts of time. This is a big benefit for response-time-sensitive applications, but it is not without drawbacks.

        Concurrent Mark and Sweep algorithm

        Concurrent garbage-collection strategies complicate the relatively simple mark-and-sweep algorithm a bit. The mark phase is usually sub-divided into some variant of the following:

        • In the initial marking, the GC root objects are marked as alive. During this phase, all threads of the application are suspended.

        • During concurrent marking, the marked root objects are traversed and all reachable objects are marked. This phase is fully concurrent with application execution, so all application threads are active and can even allocate new objects. For this reason there might be another phase that marks objects that have been allocated during the concurrent marking. This is sometimes referred to as pre-cleaning and is still done concurrent to the application execution.

        • In the final marking, all threads are suspended and all remaining newly allocated objects are marked as alive. This is indicated in Figure 2.6 by the re-mark label.

        The concurrent mark works mostly, but not completely, without pausing the application. The tradeoff is a more complex algorithm and an additional phase that is not necessary in a normal stop-the-world GC: the final marking.

        The Oracle JRockit JVM improves this algorithm with the help of a keep area, which, if you're interested, is described in detail in the JRockit documentation. New objects are kept separately and not considered garbage during the first GC. This eliminates the need for a final marking or re-mark.

        In the sweep phase of the CMS, all memory areas not occupied by marked objects are found and added to the free list. In other words, the objects are swept by the GC. This phase can run at least partially concurrent to the application. For instance, JRockit divides the heap into two areas of equal size and sweeps one then the other. During this phase, no threads are stopped, but allocations take place only in the area that is not actively being swept. 

        There is only one way to make garbage collection faster: ensure that as few objects as possible are reachable during the garbage collection. The fewer objects that are alive, the less there is to be marked. This is the rationale behind the generational heap.

        Young Generation vs Old Generation

        In a typical application most objects are very short-lived. On the other hand, some objects last for a very long time and even until the application is terminated. When using generational garbage collection, the heap area is divided into two areas—a young generation and an old generation—that are garbage-collected via separate strategies.

        Objects are usually created in the young area. Once an object has survived a couple of GC cycles, it is tenured to the old generation. After the application has completed its initial startup phase (most applications allocate caches, pools, and other permanent objects during startup), most allocated objects will not survive their first or second GC cycle. The number of live objects that need to be considered in each cycle should be stable and relatively small.

        Allocations in the old generation should be infrequent, and in an ideal world would not happen at all after the initial startup phase. If the old generation is not growing and therefore not running out of space, it requires no garbage-collection at all. There will be unreachable objects in the old generation, but as long as the memory is not needed, there is no reason to reclaim it.

        To make this generational approach work, the young generation must be big enough to ensure that all temporary objects die there. Since the number of temporary objects in most applications depends on the current application load, the optimal young-generation size is load-related. Therefore, sizing the young generation, known as generation sizing, is key to achieving good performance at peak load.

        Unfortunately, it is often not possible to reach an optimal state where all objects die in the young generation, so the old generation will often require a concurrent garbage collector. Concurrent garbage collection together with a minimally growing old generation ensures that the unavoidable stop-the-world events will at least be very short and predictable.
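The generational behavior described above can be observed from inside a running application through the standard GarbageCollectorMXBean API. The sketch below (a rough illustration, not a benchmark; the class name is mine) allocates a burst of short-lived objects and then prints the per-collector statistics; on a typical JVM the young-generation collector's count climbs while the old-generation collector barely moves.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GenerationalGcDemo {
    public static void main(String[] args) {
        long checksum = 0;
        // Allocate many short-lived objects; almost all of them die in the
        // young generation and are reclaimed by cheap Minor GCs.
        for (int i = 0; i < 2_000_000; i++) {
            byte[] temp = new byte[256];   // garbage as soon as the iteration ends
            checksum += temp.length;
        }
        System.out.println("allocated bytes (roughly): " + checksum);
        // One MXBean per collector; the names identify young vs old generation
        // (e.g. "PS Scavenge" vs "PS MarkSweep" for the parallel collectors).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}
```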

        Chanaka FernandoUnderstanding Java Garbage Collection for beginners

        Java is one of the most heavily used languages in enterprise application development. One of the key features of the Java language is its capability to clear out memory automatically. This gives application developers more freedom to think about business logic rather than worrying about the memory management of the application, and may be the foremost reason for choosing Java for complex business application development.

        Java uses a technique called Automatic Garbage Collection (GC) to clear any unused memory from your application. In this blog post, I will discuss the Java memory model and how GC works within the Java Virtual Machine (JVM). In fact, every Java application runs on top of its own JVM.

        Java Memory model

        Each thread running in the Java Virtual Machine has its own thread stack. The thread stack contains information about which methods the thread has called to reach the current point of execution. I will refer to this as the "call stack". As the thread executes its code, the call stack changes. The thread stack also contains all local variables for each method being executed (all methods on the call stack). A thread can only access its own thread stack; local variables created by a thread are invisible to all threads other than the one that created them. Even if two threads are executing the exact same code, each will still create the local variables of that code in its own thread stack. Thus, each thread has its own version of each local variable.

        All local variables of primitive types ( boolean, byte, short, char, int, long, float, double) are fully stored on the thread stack and are thus not visible to other threads. One thread may pass a copy of a primitive variable to another thread, but it cannot share the primitive local variable itself. The heap contains all objects created in your Java application, regardless of what thread created the object. This includes the object versions of the primitive types (e.g. Byte, Integer, Long etc.). It does not matter if an object was created and assigned to a local variable, or created as a member variable of another object, the object is still stored on the heap. 

        A local variable may be of a primitive type, in which case it is totally kept on the thread stack.

        A local variable may also be a reference to an object. In that case the reference (the local variable) is stored on the thread stack, but the object itself is stored on the heap.

        An object may contain methods and these methods may contain local variables. These local variables are also stored on the thread stack, even if the object the method belongs to is stored on the heap.

        An object's member variables are stored on the heap along with the object itself. That is true both when the member variable is of a primitive type, and if it is a reference to an object.

        Static class variables are also stored on the heap along with the class definition.

        Objects on the heap can be accessed by all threads that have a reference to the object. When a thread has access to an object, it can also get access to that object's member variables. If two threads call a method on the same object at the same time, they will both have access to the object's member variables, but each thread will have its own copy of the local variables. 
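A small sketch can make this concrete (class and variable names are mine, for illustration): two threads run the same code, each with its own copy of the local variable on its own stack, while both update the same shared object on the heap.

```java
public class StackVsHeapDemo {
    // One object on the heap, shared by reference between both threads.
    static final int[] shared = new int[1];

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            int local = 0;                 // lives on each thread's own stack
            for (int i = 0; i < 1000; i++) {
                local++;                   // private to this thread
                synchronized (shared) {
                    shared[0]++;           // heap object, visible to both threads
                }
            }
            System.out.println(Thread.currentThread().getName() + " local=" + local);
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Each thread counted to 1000 in its own local variable,
        // but the shared heap value reflects both threads: 2000.
        System.out.println("shared=" + shared[0]);
    }
}
```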

        Java Heap Memory

        As discussed in the previous section, the Java heap memory is responsible for storing all objects created during the runtime of a program, together with their member variables and the static variables with their class definitions. This is the area of memory that needs to be carefully controlled. The sections below describe the memory model of the JVM heap.

        Young Generation

        Most newly created objects are allocated in the Eden memory space.

        When the Eden space fills up with objects, a Minor GC is performed and all surviving objects are moved to one of the survivor spaces.

        Minor GC also checks the objects in the survivor spaces and moves them to the other survivor space, so at any given time one of the survivor spaces is always empty.

        Objects that survive many cycles of GC are moved to the Old Generation memory space. Usually this is done by setting a threshold on the age of young-generation objects before they become eligible for promotion to the Old Generation.

        Old Generation

        Old Generation memory contains objects that are long-lived and have survived many rounds of Minor GC. Usually, garbage collection is performed in Old Generation memory when it is full. Old Generation garbage collection is called Major GC and usually takes longer.

        Stop the World Event

        All the Garbage Collections are “Stop the World” events because all application threads are stopped until the operation completes.

        Since Young generation keeps short-lived objects, Minor GC is very fast and the application doesn’t get affected by this.

        However, Major GC takes longer because it checks all live objects. Major GC should be minimized because it will make your application unresponsive for the duration of the garbage collection. So if you have a responsive application and a lot of Major Garbage Collections are happening, you will notice timeout errors.

        The duration of a garbage collection depends on the strategy used. That is why it is necessary to monitor and tune the garbage collector to avoid timeouts in highly responsive applications.
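One simple way to start monitoring this, before reaching for an external tool, is the JVM's own GarbageCollectorMXBean, which reports the cumulative time each collector has spent in collections. A minimal sketch (the class and method names are mine):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcPauseMonitor {
    // Returns the total time (ms) the JVM has spent in GC since startup,
    // summed across all collectors. Sampling this periodically and taking
    // deltas gives a rough "time lost to GC" metric for the application.
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if the collector does not report time
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }
}
```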

        Permanent Generation

        Permanent Generation or “Perm Gen” contains the application metadata required by the JVM to describe the classes and methods used in the application. Note that Perm Gen is not part of Java Heap memory.

        Perm Gen is populated by JVM at runtime based on the classes used by the application. Perm Gen also contains Java SE library classes and methods. Perm Gen objects are garbage collected in a full garbage collection.

        Garbage Collection

        As mentioned at the beginning of this post, Garbage Collection is one of the prime features of the Java programming language. Many people think garbage collection collects and discards dead objects. In reality, Java garbage collection does the opposite! Live objects are tracked and everything else is designated garbage. As you'll see, this fundamental misunderstanding can lead to many performance problems.

        Let's start with the heap, which is the area of memory used for dynamic allocation. In most configurations the operating system allocates the heap in advance to be managed by the JVM while the program is running. This has a couple of important ramifications:
        • Object creation is faster because global synchronization with the operating system is not needed for every single object. An allocation simply claims some portion of a memory array and moves the offset pointer forward. The next allocation starts at this offset and claims the next portion of the array.
        • When an object is no longer used, the garbage collector reclaims the underlying memory and reuses it for future object allocation. This means there is no explicit deletion and no memory is given back to the operating system.     

        Garbage Collection Roots

        Every object tree must have one or more root objects. As long as the application can reach those roots, the whole tree is reachable. But when are those root objects considered reachable? Special objects called garbage-collection roots (GC roots; see below figure) are always reachable and so is any object that has a garbage-collection root at its own root.

        There are four kinds of GC roots in Java:

        Local variables are kept alive by the stack of a thread. They are not real object references and thus are not visible, but for all intents and purposes, local variables are GC roots.

        Active Java threads are always considered live objects and are therefore GC roots. This is especially important for thread local variables.

        Static variables are referenced by their classes. This fact makes them de facto GC roots. Classes themselves can be garbage-collected, which would remove all referenced static variables. This is of special importance when we use application servers, OSGi containers or class loaders in general. 

        JNI references are Java objects that native code has created as part of a JNI call. Objects created this way are treated specially because the JVM does not know whether they are still referenced by the native code. Such objects represent a very special form of GC root, which we will examine in more detail in the Problem Patterns section below.

        Marking and Sweeping Away Garbage

        To determine which objects are no longer in use, the JVM intermittently runs what is very aptly called a mark-and-sweep algorithm. As you might intuit, it's a straightforward, two-step process:

        • The algorithm traverses all object references, starting with the GC roots, and marks every object found as alive.
        • All of the heap memory that is not occupied by marked objects is reclaimed. It is simply marked as free, essentially swept free of unused objects.

        Garbage collection is intended to remove the cause for classic memory leaks: unreachable-but-not-deleted objects in memory. However, this works only for memory leaks in the original sense. It's possible to have unused objects that are still reachable by an application because the developer simply forgot to dereference them. Such objects cannot be garbage-collected. Even worse, such a logical memory leak cannot be detected by any software. Even the best analysis software can only highlight suspicious objects. 
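A classic shape for such a logical leak is a collection that is itself reachable from a GC root, so everything ever added to it stays reachable too. The sketch below (names are mine, for illustration) shows a static cache that is never evicted; the buffers survive every mark phase even though the application no longer uses them.

```java
import java.util.ArrayList;
import java.util.List;

public class LogicalLeakDemo {
    // The static list is referenced by its class (a GC root via the class loader),
    // so every element added here remains strongly reachable forever.
    static final List<byte[]> cache = new ArrayList<>();

    static void handleRequest(int requestId) {
        byte[] buffer = new byte[1024];
        cache.add(buffer);   // "cached" but never evicted or dereferenced
        // ... use buffer ...
        // The developer forgot cache.remove(buffer) / cache.clear(), so the
        // buffer survives every mark phase despite being logically unused.
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            handleRequest(i);
        }
        // 10,000 buffers are still reachable through the static root and
        // can never be garbage-collected.
        System.out.println("retained buffers: " + cache.size());
    }
}
```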

        Garbage Collection Algorithms and Performance

        As I have already mentioned, a GC event is a "Stop the World" operation: the application stops executing during this time period. Therefore, it is very important to choose a proper GC algorithm for your performance-critical enterprise application. There are five GC algorithms available in the Oracle JVM.

        Serial GC (-XX:+UseSerialGC): Serial GC uses the simple mark-sweep-compact approach for young- and old-generation garbage collection, i.e. Minor and Major GC. Serial GC is useful on client machines, such as simple standalone applications, and machines with fewer CPUs. It is good for small applications with a low memory footprint.
        Parallel GC (-XX:+UseParallelGC): Parallel GC is the same as Serial GC except that it spawns N threads for young-generation garbage collection, where N is the number of CPU cores in the system. We can control the number of threads using the -XX:ParallelGCThreads=n JVM option. The Parallel Garbage Collector is also called the throughput collector because it uses multiple CPUs to speed up GC performance. Parallel GC uses a single thread for old-generation garbage collection.
        Parallel Old GC (-XX:+UseParallelOldGC): This is the same as Parallel GC except that it uses multiple threads for both young-generation and old-generation garbage collection.
        Concurrent Mark Sweep (CMS) Collector (-XX:+UseConcMarkSweepGC): The CMS collector is also referred to as the concurrent low-pause collector. It does the garbage collection for the old generation and tries to minimize the pauses due to garbage collection by doing most of the garbage-collection work concurrently with the application threads. On the young generation, the CMS collector uses the same algorithm as the parallel collector. This garbage collector is suitable for responsive applications where we can't afford long pause times. We can limit the number of threads in the CMS collector using the -XX:ParallelCMSThreads=n JVM option.
        G1 Garbage Collector (-XX:+UseG1GC): The Garbage First or G1 garbage collector is available from Java 7, and its long-term goal is to replace the CMS collector. The G1 collector is a parallel, concurrent, and incrementally compacting low-pause garbage collector. The Garbage First collector doesn't work like the other collectors: there is no separate young- and old-generation space. Instead, it divides the heap into multiple equal-sized regions. When a garbage collection is invoked, it first collects the regions with the least live data, hence "Garbage First". You can find more details in the Garbage-First Collector Oracle documentation.

        That is enough for this lengthy blog post. I am planning to write several blog posts on GC tuning in the future.

        Happy GC !!!


        Madhuka UdanthaGenerate a AngularJS application with grunt and bower

        1. Install grunt, bower, yo, etc., if you are missing any.
        npm install -g grunt-cli bower yo generator-karma generator-angular

        Yeoman is used to generate the scaffolding of your app.
        Grunt is a powerful, feature-rich task runner for JavaScript.

        2. Install the AngularJS generator:
        npm install -g generator-angular

        3. Generate a new AngularJS application.
        yo angular

        The generator will ask you a couple of questions. Answer them as you need.

        4. Install packages/libs  
        bower install angular-bootstrap --save
        bower install angular-google-chart --save

        5. Start the server.
        npm start
        grunt server

        Lasantha FernandoIntegrating WSO2 Products with New Relic

        Recently, I had a chance to play around with New Relic [1] and integrate WSO2 products with it. It turns out that integrating New Relic with WSO2 servers is quite straightforward.
        1. Download the newrelic agent as instructed in [2] or in the NewRelic APM Getting Started guide.
        2. Unzip to <CARBON_HOME> directory. The agent should be installed under <CARBON_HOME>/newrelic
        3. Make sure the license key and application name are correct in <CARBON_HOME>/newrelic/newrelic.yml.
        4. Add the following line to the java command at the end of the startup script in <CARBON_HOME>/bin/:

         -javaagent:$CARBON_HOME/newrelic/newrelic.jar \

        5. The last part of the script should then look as follows.

           while [ "$status" = "$START_EXIT_STATUS" ]
          do
          $JAVACMD \
          -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
          -Xms256m -Xmx1024m -XX:MaxPermSize=256m \
          -XX:+HeapDumpOnOutOfMemoryError \
          -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
          $JAVA_OPTS \
          -javaagent:$CARBON_HOME/newrelic/newrelic.jar \
          -classpath "$CARBON_CLASSPATH" \
          -Djava.endorsed.dirs="$JAVA_ENDORSED_DIRS" \
"$CARBON_HOME/tmp" \
          -Dcatalina.base="$CARBON_HOME/lib/tomcat" \
          -Dwso2.server.standalone=true \
          -Dcarbon.registry.root=/ \
          -Djava.command="$JAVACMD" \
          -Dcarbon.home="$CARBON_HOME" \
          -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager \
          -Djava.util.logging.config.file="$CARBON_HOME/repository/conf/etc/" \
          -Dcarbon.config.dir.path="$CARBON_HOME/repository/conf" \
          -Dcomponents.repo="$CARBON_HOME/repository/components/plugins" \
          -Dcom.atomikos.icatch.file="$CARBON_HOME/lib/" \
          -Dcom.atomikos.icatch.hide_init_file_path=true \
          -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true \
          -Dcom.sun.jndi.ldap.connect.pool.authentication=simple \
          -Dcom.sun.jndi.ldap.connect.pool.timeout=3000 \
          -Dorg.terracotta.quartz.skipUpdateCheck=true \
          -Dfile.encoding=UTF8 \
          -DapplyPatches \
          org.wso2.carbon.bootstrap.Bootstrap $*

        6. Start the WSO2 server. If the agent was picked up successfully, you should see additional log entries similar to the following at startup.

        May 14, 2015 14:17:54 +0530 [1866 1] com.newrelic INFO: New Relic Agent: Loading configuration file "/home/lasantha/Staging/wso2am-1.8.0/newrelic/./newrelic.yml"
        May 14, 2015 14:17:55 +0530 [1866 1] com.newrelic INFO: New Relic Agent: Writing to log file: /home/lasantha/Staging/wso2am-1.8.0/newrelic/logs/newrelic_agent.log

        7. After the server has started up properly, use the server for some time as you normally would, e.g. send requests to it, browse the management console, etc. You should then be able to see statistics in your New Relic dashboard when you view APM stats under the application name you configured.
        New Relic dashboard showing stats about the JVM

        Dashboard for viewing application request stats
        Now that you've set up your WSO2 product with stats being published to New Relic, it's time to learn about those little caveats and gotchas! New Relic uses JMX to collect statistics from any Java server. However, by default it listens to a pre-defined set of MBeans. This gives you stats about requests to the management console of a WSO2 server. WSO2 products, though, use a different set of ports and publish stats for HTTP/HTTPS NIO transport traffic to a different set of MBeans. These MBeans are not captured by default in the New Relic dashboard.

        But to capture these stats, all you need to do is register an additional set of metrics in a New Relic custom dashboard. To do that, you need to create a custom JMX instrumentation file when configuring New Relic. The link in [3] provides more details about custom JMX instrumentation on New Relic, and you can find some JMX examples in [4].
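As a rough sketch of what such an instrumentation file might look like: the file location, extension name and object_name pattern below are assumptions for illustration only; take the real MBean names for the WSO2 NIO transport from [5] and the exact YAML schema from the New Relic docs in [3].

```yaml
# Hypothetical file: <server-home>/newrelic/extensions/wso2-nio.yml
name: WSO2NIOTransport
version: 1.0
enabled: true
jmx:
  # Placeholder pattern - substitute the actual NIO transport MBeans from [5]
  - object_name: "org.apache.synapse:Type=PassThroughConnections,Name=*"
    metrics:
      - attributes: ActiveConnections, OpenedConnections
```

Once the agent picks this file up, the attributes appear as custom metrics that you can chart on a custom dashboard.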

        And for the final link: a list of the JMX MBeans available for monitoring the HTTP/HTTPS Synapse NIO transport is given in [5].

        Madhuka Udantha: Git simple Feature Branch Workflow

        In my previous post, I wrote about git workflows. Now I am going to try out the simple 'Feature Branch Workflow'.
        1. I pull down the latest changes from master
        git checkout master
        git pull origin master

        2. I make a branch for my changes
        git checkout -b new-feature

        3. Now I am working on the feature

        4. I keep my feature branch fresh and up to date with the latest changes in master, using 'rebase'.
        Every once in a while during development, I update the feature branch with the latest changes in master.

        git fetch origin
        git rebase origin/master

        In the case where other devs are also working on the same shared remote feature branch, I also rebase the changes coming from it:

        git rebase origin/new-feature

        Resolving conflicts during the rebase allows me to always have clean merges at the end of feature development.

        5. When I am ready, I commit my changes
        git add -p
        git commit -m "my changes"

        6. Rebasing keeps my code working, merging easy, and my history clean.
        git fetch origin
        git rebase origin/new-feature
        git rebase origin/master

        The two points below are optional.
        6.1 Push my branch for discussion (pull request)
        git push origin new-feature

        6.2 Feel free to rebase within my feature branch; my team can handle it!
        git rebase -i origin/master

        A few situations can come up during the development phase.
        Suppose another new feature is needed, and it requires some commits from my branch 'new-feature'. That new feature needs its own branch; a few commits need to be pushed to it and then cleaned out of my branch.

        7.1 Create an 'x-new-feature' branch on top of 'new-feature'
        git checkout -b x-new-feature new-feature

        7.2 Clean up commits
        # revert a specific commit (supply the commit to revert)
        git revert --no-commit <commit>
        # or reset a few commits back from the current HEAD
        git reset --hard HEAD~2

        7.3 Update the remote
        # clean the remote new-feature branch
        git push origin HEAD --force

        John Mathon: Management of Enterprises with Cloud, Mobile Devices, Personal Cloud, SaaS, PaaS, IaaS, IoT and APIs

        The traditional tools for enterprise asset management, app management and performance management are challenged by the cloud. Existing security tools are inadequate and are challenged by the new aspects of enterprise virtualization and new technology. These new aspects are:

        1. Personal Cloud
        2. SaaS Applications
        3. PaaS
        4. IaaS services
        5. Mobile Devices
        6. IoT Devices
        7. APIs and Cloud Services, Web Services
        8. Mobile Apps

        These technologies turn the traditional enterprise four-walls security paradigm into “Swiss cheese.”   In many cases traditional enterprise management tools are incapable of dealing with these new capabilities at all.

        As a result, enterprises have taken on new applications to help manage some of these technologies in a one-off approach.  Mobile device management (MDM) software is one such tool.   For most of the other technologies, organizations rely on best-practices training for employees, or depend on the vendors of those technologies to provide sufficient management information.   Frequently these management consoles and information are not integrated.

        In some cases it may be possible to extend traditional enterprise performance management and asset management to cover some of the new technologies, but most companies simply depend on employees to follow best practices, or ignore the shadow-IT problem and hope for the best.  Some are vigilant in trying to discourage shadow IT, which probably reduces the productivity of employees and the enterprise itself, or turns employees seeking productivity into rogue employees.

        The virtual enterprise needs a new set of management tools that are designed to manage devices, applications and services in a cloud world and provide security and management of the “holey” enterprise.

        This is the “New Enterprise,” and in fact many enterprises today may already be completely virtual.   For them, traditional enterprise asset management tools focused on hardware are useless: they have no traditional hardware to manage.

        What we need is a new “Asset Management, performance management, operations management” capability that includes all these technologies above as a unified set of tools in our new virtual world.

        Since I don’t know of any tool that combines all these features, I am going to dream for a bit about what such a tool would entail and what its requirements would be.

        First, the tool would have to understand all eight of the technologies listed above.     A lot of these products share common characteristics that make centralized administration, monitoring and usage tracking sensible.   All of the assets mentioned have a set of URLs, logins, keys, security tokens or certificates, and since they are all cloud-based they all have APIs, except possibly mobile apps.

        All of these virtual services are multi-tenant and/or user specific.   Most of them can have many instances in an enterprise, owned by different groups in the company or different individuals.   They all need their usage tracked, and when they are compromised, or when an employee departs, they need to be cleaned or repurposed.

        One can imagine an asset store that lets you easily add any asset of the above types.   Ideally, the tool would automatically discover services when possible, or interface with APIs periodically to update the list of known devices, virtual services and applications being used.

        There may be a cost to such tools, and those costs should be tracked.   When new employees come onboard you may need to allocate some of these services and devices; similarly, when they leave, this has to be backed out.   Ideally you should be able to organize the assets by numerous tags, such as location, group and type of asset.   You should be able to aggregate costs, usage, incidents, instances or any other metric that makes sense.

        Many assets of this type are related to each other.   For instance a number of personal cloud services may be linked to an individual.   Devices, apps may also be linked to an individual.  Devices may be linked to an office or part of an office.   For physical devices it would be good to be able to locate the devices on a map.   For virtual services it would be good to have summaries of the riskiness of the data they contain, what kinds of threats have taken place or down time incidents.  For mobile apps it would be good to be able to see the dependency on APIs, so that if an API is experiencing a problem we can assume the app dependent on it will experience a problem.

        I would think a good feature would be to track the version of the firmware or app for each service or instance being used.  It should be possible to force upgrade of devices and applications if needed.

        One of the major benefits of such an overarching management application would be to help account for all the holes in the organization where information can go, to provide a way to isolate and govern that information separate from the employees personal services.   Possibly to track the content or purge it when needed.

        The system would also be useful for helping manage large numbers of IoT devices, their dependencies on each other and other services.  It would be integrated with device management so that upgrades could be systematically applied and vulnerabilities understood.

        It should support the social aspects of these assets helping employees find assets and understand how to use them.

        I believe this kind of asset management platform is essential for the new virtual enterprise.   I have been saying for a while we need a way to operate with the cloud and the inevitable swiss cheese this makes of Enterprise security.

        WSO2 has an Enterprise Store, an App Manager, a Device Manager and an API Manager, and of course Identity Management.   All of these can be used to provide some of the functionality I describe.     In particular, the Enterprise Store can host any kind of asset I described above and provide a social front end for users to find and learn about the assets or make requests.   I see these types of tools as critical to enterprise adoption of cloud and IoT in the future.

        Other articles like this that you may find interesting:

        Put it in the Store – The new paradigm of enterprise social asset sharing and reuse: Just put it in the store.

        The Enterprise Store – App, API, Mobile

        Here are some user stories for such an application:

        User Story
        Regular Employee: see and search, in a user-friendly way, the available external APIs and internal APIs I may use, as well as mobile apps, web apps, SaaS services or other assets
        Regular Employee: see and search, in a user-friendly way, the relationships of assets to each other and to groupings or other individuals
        Regular Employee: see all the virtual services and devices I use (or am registered for) and their health and status
        Regular Employee: see the usage and cost of the services I use
        Regular Employee: see other people’s comments, ratings, user docs and other information about any asset in the system
        Regular Employee: register services I use in the cloud, such as Google Docs, Dropbox, etc., that may hold corporate information, along with the credentials for the service
        Regular Employee: register the IoT and mobile devices I use
        Regular Employee: request an existing service, app or API for my use
        Regular Employee: report that some service is compromised, in need of repair or will no longer be used
        Regular Employee: log a message with helpful advice, a complaint, a video, a bug report or any content that would usefully be associated with an asset or group of assets
        Regular Employee: see the status of all my comments, tickets or other pending requests
        Regular Employee: be notified via email or SMS of incidents related to the assets I use
        Regular Employee: make a ticket request for a new asset type to be included in the store
        Operations: do everything a regular employee can do, for all assets or the assets I am responsible for
        Operations: see more detailed health and status for all assets I am responsible for
        Operations: act on behalf of a regular employee or set of regular employees to request, register or do any of the regular employee activities, with my acting on their behalf logged as well
        Operations: go into the administrative API and perform tasks related to any asset, including security, performance and upgrading
        Operations: see the big data generated by an asset and run queries against the logs and big data
        Operations: be notified if any asset has a change of status or has something logged against it that may be of interest to me
        Operations: revoke or create instances of any service, and set limits on the usage of services, devices or any asset
        Operations: configure new services or devices: allocate the number of instances, security constraints and policies, fault-tolerance policies, scaling policies, and approval policies for requests for the services or devices
        Operations: move an asset to a different lifecycle stage, such as from development to test to staging to production
        Operations: configure the lifecycle of services or devices
        Operations: create, modify or cancel an incident, notifying everyone involved with an asset about the availability, usage criteria and information about an issue
        Operations: set up SLAs for any service or device
        Developer: clone or create a new development environment for a service or device
        Developer: set up continuous integration, test and deployment scripts
        Developer: request that a service, or a version of a service, advance in its lifecycle
        Developer: see all versions of the service or device I am working on and information related to its health or operation
        Developer: close a ticket related to services or devices I am responsible for
        Developer: examine in any depth the logs or other data associated with any service or device
        Developer: create or assign relationships between services and devices, and create new groups or tags associated with devices or services that link them or show a dependence
        Developer: create dashboards or analytical tools, themselves services, based on the information and big data associated with services or devices
        Developer: see more detailed health and status for all assets I am responsible for
        Management: have configurable dashboards of operating metrics, costs, usage, incidents or other information useful for management
        Management: research the history of the management data related to all assets
        Management: see statistics and dashboards with respect to a single instance, a class of instances, the group responsible, the person responsible or any other tags associated with devices and services
        Management: establish rules and policies for security
        Management: configure new services or devices: allocate the number of instances, security constraints and policies, fault-tolerance policies, scaling policies, and approval policies for requests for the services or devices
        Overall: the system must support numerous common personal cloud services, and should enable automatic logon and the scanning of content and activity to ensure compliance, the creation and deletion of accounts, and the transfer or copying of data
        Overall: the system must support numerous common SaaS applications and tie into their administrative and performance APIs to augment the information available in the dashboards
        Overall: the system must support numerous common internal-use-only APIs, external APIs we provide or that are provided by others, different tiers of usage, entitlement limitations, and other policies around those APIs such as cost
        Overall: the system must support numerous common IaaS vendors, monitor usage, and link to management APIs to be able to manage the IaaS infrastructure
        Overall: the system must support common PaaS platforms, enable monitoring of virtual containers and instances, and tie those to assets in the store
        Overall: the system must support numerous common mobile devices and allow MDM of those devices
        Overall: the system must support numerous common IoT devices and allow MDM of those devices
        Overall: the system must support numerous common apps that users can download or that come pre-configured for them
        Overall: the system should support any amount or type of content being placed on the wall of an asset, group, tag or class
        Overall: the system should support security protocols such as OAuth2 and OpenID, or other protocols, to minimize the need for users to specify passwords or security themselves. Where a service or device doesn’t support that, the system should be able to hold the critical security information and invoke it to perform operations on behalf of the user
        Overall: the system should support an unlimited number of instances of devices or services, even hundreds of thousands, and enable efficient management of large numbers of devices and services
        Overall: the system should support monitoring performance, be able to perform health checks automatically, create geofencing for devices, and apply policy-based management for deviations from the norm
        Overall: the system should support new user profiles with combinations of permissions and asset types not envisioned at this time

        Sanjeewa Malalgoda: Planning large scale API Management deployment with clustering - WSO2 API Manager

        When we do capacity planning we need to consider several factors. Here I will take a basic use case as the scenario and explain production recommendations.

        With the default configuration we can expect the following TPS per gateway node:
        Single gateway = 1000 TPS
        Single gateway with a 30% buffer added = 1300 TPS

        Normally the following are mandatory for an HA setup:

        WSO2 API Manager : Gateway - 1 active, 1 passive
        WSO2 API Manager : Authentication - 1 active, 1 passive
        WSO2 API Manager : Publisher - 1 active, 1 passive
        WSO2 API Manager : Store - 1 active, 1 passive

        You can compute the exact instance count from your expected peak TPS.
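One way to turn the numbers above into an instance count is sketched below. This is only an illustration: the function name is made up, and the 30% figure is read here as headroom added on top of the expected peak traffic.

```python
import math

def gateway_node_count(expected_peak_tps, tps_per_node=1000, buffer=0.30):
    """Estimate gateway nodes needed for the expected peak load,
    provisioning extra capacity as a safety buffer."""
    required_capacity = expected_peak_tps * (1 + buffer)
    return math.ceil(required_capacity / tps_per_node)

# For an expected peak of 3000 TPS: ceil(3900 / 1000) = 4 gateway nodes.
print(gateway_node_count(3000))
```

For an HA setup you would still keep at least one passive node alongside the active nodes, as listed above.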

        Hardware Recommendation
        Physical :
        3 GHz dual-core Xeon/Opteron (or later), 4 GB RAM (minimum: 2 GB for the JVM and 2 GB for the OS), 10 GB free disk space (minimum); size the disk based on the expected storage requirements (calculated by considering file uploads and backup policies). (E.g. if 3 Carbon instances run on one machine, it requires 4 CPU cores, 8 GB RAM and 30 GB free space.)
        Virtual Machine :
        2 compute units minimum (each unit having a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM, 10 GB free disk space. One compute unit for the OS and one for the JVM. (E.g. 3 Carbon instances require a VM with 4 compute units, 8 GB RAM and 30 GB free space.)
        EC2 : one c3.large instance to run one Carbon instance. (E.g. for 3 Carbon instances, one EC2 extra-large instance.) Note: based on the I/O performance of the c3.large instance, it is recommended to run multiple instances on a larger instance (c3.xlarge or c3.2xlarge).

        When we set up clusters we normally have a gateway cluster, a store-publisher cluster and a key manager cluster, each separate.
        Let me explain why we need this.
        In API Manager, all store and publisher nodes need to be in the same cluster as they need to do cluster communication related to registry artifacts.
        When you create an API from the publisher, it should immediately appear on the store node. For this, the registry cache should be shared between store and publisher.
        To do that replication we need to have them in a single cluster.

        In the same way, we need to have all gateway nodes in a single cluster as they need to share throttle counts and other runtime-specific data.

        Having a few (10-15) gateway nodes in a single cluster will not cause any issues.
        The only thing to keep in mind is that as the node count within a cluster increases, cluster communication may take a small amount of additional time.

        So in production deployments we normally do not cluster all nodes together.
        Instead we cluster gateways, key managers, and stores/publishers separately.

        Isuru Perera: Java Garbage Collection

        In this blog post, I'm briefly introducing important concepts in Java Garbage Collection (GC) and how to do GC Logging. There are many resources available online for Java GC and I'm linking some of those in this post.

        Why is Garbage Collection important?

        When we develop and run Java applications, we know that Java automatically allocates memory for our applications. Java also automatically deallocates memory when certain objects are no longer used. As Java Developers, we don't have to worry about memory allocations/deallocations as Java takes care of the task to manage memory for us.

        This memory management is a part of "Automatic Garbage Collection", which is an important feature in Java.  It is important to know how Garbage Collection manages memory in our programs.

        See Java Garbage Collection Basics, which is a great "Oracle by Example (OBE)" tutorial to understand the basics in Java GC.

        See also Java Garbage Collection Distilled, What is Garbage Collection?, Java Garbage Collection Introduction, JVM Performance Optimization, Part 3: Garbage Collection, and the whitepaper on Memory Management in the Java HotSpot™ Virtual Machine.

        Java GC is also an important component when tuning performance of the JVM.

        Marking and Sweeping Away Garbage

        GC works by first marking all used objects in the heap and then deleting unused objects. This is called a mark-and-sweep algorithm.

        GC also compacts the memory after deleting unreferenced objects to make new memory allocations much easier and faster.

        The JVM references GC roots, which refer to the application objects in a tree structure. There are several kinds of GC roots in Java:

        1. Local Variables
        2. Active Java Threads
        3. Static variables
        4. JNI references
        When the application can reach these GC roots, the whole tree is reachable, and GC can determine which objects are live.
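The marking step described above can be sketched as a simple graph traversal from the GC roots. This toy Python model only illustrates the idea of reachability; real collectors work on raw heap objects and pointer maps, not on a class like this.

```python
class Obj:
    """A toy heap object holding references to other objects."""
    def __init__(self, *references):
        self.references = list(references)

def mark(gc_roots):
    """Mark phase: everything reachable from a GC root is live;
    anything left unmarked would be reclaimed by the sweep phase."""
    marked = set()
    stack = list(gc_roots)
    while stack:
        obj = stack.pop()
        if id(obj) in marked:
            continue
        marked.add(id(obj))
        stack.extend(obj.references)   # follow fields to other objects
    return marked

leaf = Obj()
root = Obj(leaf)      # reachable via a root, so it stays live
orphan = Obj()        # unreachable: the sweep would reclaim it
live = mark([root])
```

The sweep then frees every object whose id is not in the marked set, and compaction slides the survivors together.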

        Java Heap Structure

        The Java heap is divided into generations based on object lifetime. This allows the GC to perform faster, as it can mark and compact objects within a particular generation. Usually in Java applications there are many short-lived objects, and fewer objects remain in the heap for a long time.

        Following is the general structure of the Java Heap. (This is mostly dependent on the type of collector).

        Java Memory

        There are three Heap parts.
        1. Young Generation
        2. Old Generation
        3. Permanent Generation

        We can define the heap sizes with JVM arguments. See Java Non-Standard Options.

        Following are some common arguments.
        • -Xms - Initial heap size
        • -Xmx - Maximum heap size
        • -Xmn - Young Generation size
        • -XX:PermSize - Initial Permanent Generation size
        • -XX:MaxPermSize - Maximum Permanent Generation size

        Young Generation

        The Young Generation usually has Eden and Survivor spaces.

        All new objects are allocated in the Eden space. When it fills up, a minor GC happens. Surviving objects are first moved to the survivor spaces. When objects survive several minor GCs (the tenuring threshold), they are eventually moved to the old generation based on their age.
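The aging and promotion behaviour can be sketched with a toy model. The threshold value and data structures here are purely illustrative; HotSpot's copying collector works quite differently under the hood, and real JVMs tune the threshold dynamically.

```python
TENURING_THRESHOLD = 3  # illustrative; see -XX:MaxTenuringThreshold

def minor_gc(eden, survivor_ages, old_gen, live):
    """Toy minor GC: live Eden objects move to a survivor space;
    survivors that age past the threshold are tenured to the old gen."""
    for obj in eden:
        if obj in live:
            survivor_ages[obj] = 0       # freshly copied into a survivor space
    eden.clear()
    for obj in list(survivor_ages):
        if obj not in live:
            del survivor_ages[obj]       # died young, reclaimed cheaply
            continue
        survivor_ages[obj] += 1          # survived one more minor GC
        if survivor_ages[obj] >= TENURING_THRESHOLD:
            old_gen.add(obj)             # promoted to the old generation
            del survivor_ages[obj]

eden, ages, old_gen = {"a", "b"}, {}, set()
live = {"a"}                             # "b" is already unreachable
minor_gc(eden, ages, old_gen, live)      # "a" survives, "b" is reclaimed
minor_gc(set(), ages, old_gen, live)
minor_gc(set(), ages, old_gen, live)     # "a" has now survived 3 minor GCs
```

After the third minor GC, "a" is promoted into the old generation, where only a major GC will examine it again.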

        Old Generation

        This stores long-surviving objects. When it fills up, a major GC (full GC) happens. A major GC takes longer as it has to check all live objects.

        Permanent Generation

        This holds metadata required by the JVM. Classes and methods are stored here. This space is included in a full GC.

        In Java 8, PermGen has been removed (replaced by Metaspace).

        "Stop the World"

        When certain GCs happen, all application threads are stopped until the GC operation completes. These kinds of GC events are called "Stop the World" events/pauses.

        When tuning GC, one of the main targets is to reduce the duration of the "Stop the World" pauses.

        Java Garbage Collectors

        The following are some garbage collectors available in Java 7; different scenarios call for different ones. See Java Garbage Collectors in the OBE tutorial, Types of Java Garbage Collectors, Garbage Collection in Java (1) - Heap Overview and Understanding Java Garbage Collection.

        1. The Serial GC
        2. The Parallel Scavenge (PS) Collector
        3. The Concurrent Mark Sweep (CMS) Collector
        4. The Garbage First (G1) Collector
        See Java HotSpot VM Options for specific flags to enable above collectors.

        My test runs revealed that the Parallel GC and the Parallel Old GC flags activate the Parallel Scavenge Collector (Java 1.7.0_80).

        Following are some of my observations when using different collectors with Java 7 (I got the Young & Old Garbage Collectors from the GC configuration tab after opening a Java Flight Recording in Java Mission Control).

        Garbage Collectors
        Name                  | JVM Flag                | Young Collector  | Old Collector
        Serial GC             | -XX:+UseSerialGC        | DefNew           | SerialOld
        Parallel Old          | -XX:+UseParallelOldGC   | ParallelScavenge | ParallelOld
        Parallel New          | -XX:+UseParNewGC        | ParNew           | SerialOld
        Concurrent Mark Sweep | -XX:+UseConcMarkSweepGC | ParNew           | ConcurrentMarkSweep
        Garbage First         | -XX:+UseG1GC            | G1New            | G1Old

        JVM GC Tuning Guides

        See the official HotSpot Garbage Collection Tuning Guide.


        A GC can be triggered by calling System.gc() from a Java program. However, a call to System.gc() does not guarantee that the system will run a GC.

        Using this method is not recommended; we should let the JVM run GC whenever needed.

        The finalize() method

        An object's finalize() method is called during GC. We can override the finalize method to clean up resources. Note, however, that there is no guarantee when (or whether) finalize() will run, so it should not be relied on for releasing critical resources.

        GC Logging

        There are JVM flags to log details for each GC. See Useful JVM Flags – Part 8 (GC Logging)

        See Understanding Garbage Collection Logs.

        The following are some important ones; the last two flags log the application times.

        GC Logging Flags
        -XX:+PrintGC                             Print messages at garbage collection
        -XX:+PrintGCDetails                      Print more details at garbage collection
        -XX:+PrintGCTimeStamps                   Print timestamps at garbage collection
        -XX:+PrintGCApplicationStoppedTime       Print the application GC stopped time
        -XX:+PrintGCApplicationConcurrentTime    Print the application GC concurrent time

        Note: "-verbose:gc" is the same as "-XX:+PrintGC".

        The "-Xloggc:" flag can be used to output all GC logging to a file instead of standard output (console).

        Following flags can be used with "-Xloggc" for log rotation.

        GC Log File Flags to be used with -Xloggc
        -XX:+UseGCLogFileRotation    Enable GC log rotation
        -XX:NumberOfGCLogFiles=n     Set the number of files to use when rotating logs; must be >= 1. E.g. -XX:NumberOfGCLogFiles=100
        -XX:GCLogFileSize=size       The size of the log file at which point the log will be rotated; must be >= 8K. E.g. -XX:GCLogFileSize=8K
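Putting the logging flags together, a full invocation might look something like the following. The heap sizes, log path, file counts and the main class are placeholders for illustration; the flags themselves are the ones described above.

```
java -Xms256m -Xmx1024m \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -XX:+PrintGCApplicationStoppedTime \
     -Xloggc:/path/to/logs/gc.log \
     -XX:+UseGCLogFileRotation \
     -XX:NumberOfGCLogFiles=10 \
     -XX:GCLogFileSize=10M \
     MyApplication
```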

        Note: Evan Jones has found that JVM statistics cause garbage collection pauses.

        Viewing GC Logs

        The GCViewer is a great tool for viewing GC logs created with the above-mentioned flags.



        This blog post briefly introduced Java Garbage Collection, the Java heap structure, the different types of garbage collectors and how to do GC logging. I strongly recommend going through the links above and reading them.