WSO2 Venus

Amila Maharachchi: WSO2 App Cloud vs. Heroku - Part 2

When I started writing about my experience using WSO2 App Cloud and Heroku, I found it very interesting. One reason was that I was learning how Heroku works. Since I know how WSO2 App Cloud works, and with my knowledge of cloud technologies, I could better understand how Heroku does things.

This is the second part of my comparison of WSO2 App Cloud vs. Heroku. I recommend reading WSO2 App Cloud vs. Heroku - Part 1 before this one. In this post, I will write about the experience based on the following four topics.
  • Using resources within the app (databases etc.)
  • Lifecycle management of the app
  • Collaborative development
  • Monitoring logs

Using resources within the app

When you are developing an app, you need to provision some resources for its runtime. For example, a database is a must-have resource for an app these days. Not only that, your app might want to consume APIs or connect to some kind of gateway.

If I consider the support provided by Heroku for such resources, it is very limited out of the box. When we create an app, Heroku provides us with a Postgres database. We need to add the connection information to our app by changing the app code.

But WSO2 App Cloud's experience goes far beyond that. After creating the app, you can create a database for it. When you create the database, you can also create a user and attach a permission template to that user, i.e. we can decide which database permissions we are going to allow for that user. Then it provides the feature of creating a datasource from the database you just created. With the power of WSO2's product stack, this datasource is available in the app's runtime. The following is the Java code you need to write in order to access the datasource.
private DataSource getDataSource() {
    DataSource dataSource = null;
    Hashtable<String, String> env = new Hashtable<String, String>();
    try {
        InitialContext context = new InitialContext(env);
        dataSource = (DataSource) context.lookup("jdbc/customer_ds");
    } catch (NamingException e) {
        e.printStackTrace();
    }
    return dataSource;
}

See for a tutorial on how to do this.

The greatest advantage of this feature can be experienced when you promote your app to different stages in its lifecycle. I'll explain it under the next topic.

Not only datasources: WSO2 App Cloud also allows you to create name-value pairs called properties that can be used by your app. These properties are stored in a store called the "registry". They are also available in the app's runtime thanks to the power of WSO2's product stack, and can be accessed in a way similar to how you access datasources. The advantage of having such properties will also be explained under the next topic.

See for a tutorial on how to use name-value pairs (properties) within your app.

Then come APIs. If your app wants to invoke an API, its keys can be stored in the "registry". At the moment, this feature works only with the WSO2 API Cloud. If you have an API in the WSO2 API Cloud, you can subscribe to it, and the WSO2 App Cloud will get and store the API keys in its registry. You can synchronise the keys whenever needed, too. These keys can be retrieved from the registry and used in the app code; you don't need to hard-code them or maintain them separately in a config file.

Lifecycle management of apps 

Application lifecycle management (ALM) means allowing an app to be created from scratch and taken through different stages such as development, testing and production.

When you create an app in Heroku, you have the same environment for the development, testing and production phases of the app. Heroku has a feature which keeps track of all the deployments of your app and allows you to revert to an older deployed version. But you can have only one release of your app running at a given moment; these are actually the deployments you push from time to time. In summary, it does not have ALM support.

But WSO2 App Cloud has a complete story when it comes to application lifecycle management. After you create an app, you can create branches of the repository and promote them to different stages. The provided stages are development, testing and production. You can do your dev work, create a branch and promote it to the testing stage. Then a QA person from your team can test your app, which is deployed in the testing stage. In the meantime, you can continue the dev work in another branch or on the trunk as you wish, and keep deploying to the development environment. When you have the green light from your QA person, you can promote the app to the production stage. Note that these three stages are three different runtimes. With the power of the resource management mentioned in the above section, you can use the same code while you change the resources in each environment, i.e. you can have a separate database in each stage, but you don't need to change your code. After keeping your app in production for some time, you can retire it as well, if you want. That's the complete ALM support in WSO2 App Cloud.

Collaborative development

When you are working on an app (when you do some serious work), you will have the requirement of getting others involved in it. Heroku allows you to invite others to your app, so they can also be involved in the development and deployment work. But the drawback of this feature is that if you create another app, you have to invite the same person separately for that app as well. I explained the reason for this in my previous post.

WSO2 App Cloud also supports collaborative development, with more extensive and powerful features than Heroku. The following are its features.
  • You invite members to your organisation, not to the app. So, when you have multiple apps, you don't need to invite them again and again. This does not mean that, once invited, they can collaborate on all your apps: you have the power to add the invited members only to the apps where you need them.
  • When you invite members to the organisation, you can assign roles to them. The available roles are App Owner, Developer, QA and DevOps. What they can do in your app is decided by these roles.
See this video to get an understanding of the WSO2 App Cloud's collaboration experience.

Monitoring logs

When it comes to monitoring the logs of your application, Heroku seems to do a slightly better job than WSO2 App Cloud. Since Heroku runs your apps in isolated containers called "dynos", it is capable of providing all the logs related to the app and the dyno. If you have multiple dynos, you can view the logs from all of them aggregated, or view the logs of a single dyno. But these logs are not persisted. If you want to persist them, you have to buy Heroku add-ons which are capable of it.

WSO2 App Cloud, on the other hand, shows only application logs to the user. Those logs are divided into current logs and previous logs. You can download the previous logs for your app. Importantly, these are persisted by default.

In this post, I compared the experience of WSO2 App Cloud and Heroku based on using resources within the app, lifecycle management support, collaborative development and monitoring logs.

I will most probably write a part 3 of this series, further comparing the two clouds. I haven't decided yet on the topics I'll compare them on. But stay tuned...

Chris Haddad: Connected Education Reference Architecture

With Connected Education Reference Architecture, plug in your mind, connect with your dreams, and make a difference. Our education system must prepare citizens to thrive in a global economy, inspire constructive thinkers, instill personal responsibility, and empower passionate leaders.  As we prepare next generation citizens who will drive economic growth, scientific advancement, social awareness, and personal satisfaction, we need to ask:

  • Are students learning the right skills?
  • Are students learning fast enough?
  • Is collaboration between private industry and education effective?
  • Are we successfully promoting research project intellectual property?


In many educational communities, students, parents, and mentors answer these questions with a resoundingly negative response.  As education continues a technology driven transition towards new school models, we need to establish a roadmap plan that engages students, overcomes structural challenges, and builds connections into the educational experience.

Plug in your mind

Connect with your dreams

Everyone can make a difference


Are students engaged?

Today’s students are the millennial generation; individuals who are immersed in an interactive, digital world.  They require interactivity and fun to remain engaged. Dry, paper textbooks are a pale shadow of their online, out of school, educational experience. Students tune in when educational material is not only digitized, but also interactive, collaborative, and compelling. The gamification of education will entice students to plug in their mind, connect with information repositories, and gain the knowledge required to pursue their dreams.

[Image: gamification in education]

According to CommerceLab:

Gamification uses video game design techniques in non-gaming experiences with the overall goal of improving user engagement and driving user behaviour. It can be used in a variety of ways to enhance user engagement on multiple platforms and is being integrated into business and marketing campaigns to captivate users that have grown weary of more traditional marketing tactics and messaging.

Building Connections into Education

Because the education industry is just beginning a transition towards digital engagement, we must work together to take fundamental steps that will build connections into education environments and communities. For example,

  • Digitize lesson plans, exercises, quizzes, and tests
  • Incorporate gamification design principles into educational programs and content
  • Provide access to engaging digital resources
  • Establish opportunities to collaborate with peers and experts
  • Deliver powerful tools to solve real problems as an integral part of learning experiences


New Interaction Models – New School

We must re-imagine the learning experience and incorporate new interaction models into a new school environment that interacts across online digital resources.  Where the old school environment included chalkboards, pen, paper, and overhead projectors, the new school environment includes promethean boards, PixelSense, tablets, broadband internet access, and Internet resources (e.g. Google, Wikipedia).

Promethean Board


Education Industry Challenges

While computers, Internet technology, and digital content have been incubated within the education community since their invention, fundamental challenges inhibit democratizing access to new school environments.  Common challenges include:

  • An unfunded edge cost
  • Incomplete content digitization (exams, quizzes, lesson plans)
  • Limited broadband access
  • A focus on money rather than innovation
  • Global sharing and collaboration impediments

Call To Action

To best prepare next generation citizens who will drive economic growth, scientific advancement, social awareness, and personal satisfaction, we need to transform education into an encouraging and engaging learning environment.  Gamification, collaboration, and immersive digital experiences will encourage students to plug in their minds, gain the intellectual tools required to connect with their dreams, and make a difference.


Gamifying lesson plans, classwork exercises, homework assignments, and projects will inject fun and motivation into the educational experience. Implement player-centered design focusing on:

  • Knowing your player (the student)
  • Identifying your mission
  • Understanding human motivation
  • Applying game mechanics
  • Managing, monitoring, and measuring the program



Create a collaborative environment that includes business mentors, teachers, students, and family members. IT professionals should execute a plan to:

  • Build an engaging environment
  • Enable collaboration across boundaries
  • Deeply embed activity streams
  • Unlock data, knowledge, and intellectual property

Immersive Digital Experience

An industry roadmap should include digitization and unlocking access. For example,

In South Korea, all schools are connected to the internet with high-speed connections, all teachers are trained in digital learning, and printed textbooks will be phased out by 2016.

Creating an immersive digital experience requires building connected education software and hardware.  The business opportunity to build online learning courses, digital delivery platforms, and online educational environments is huge.

A robust market in educational software can unlock the full educational potential of these investments and create American jobs and export opportunities in a global education marketplace of over $1 trillion. Third-party validators can help schools find educational software (including apps) that provide content aligned with college- and career-ready standards being adopted and implemented by States across America.


Denis Weerasiri: When flight tickets get cheaper in advance - v2

In When flight tickets get cheaper in advance - v1, I wrote a Google Apps Script with Google Spreadsheets to track when flight tickets get cheaper in advance. In that script, I tracked weekly the minimum return-ticket price from Sydney to Colombo for a fixed departure date (7th of April 2014). In this post, I improved the script and automatically calculated the minimum return-ticket price from Sydney to Colombo daily for the last 8 months, without restricting it to a fixed departure date. The data says that, for a given date, ticket prices reach their minimum about 80 days in advance on average (given that you have no preferred departure date).
Here you can find the spreadsheet which contains the up-to-date data set, and the following charts are extracted from it.

Denis Weerasiri: When flight tickets get cheaper in advance - v1

I wrote a small Google Apps Script with Google Spreadsheets to track when flight tickets get cheaper in advance. I tracked weekly the minimum return-ticket price from Sydney to Colombo for a fixed departure date (7th of April 2014). The answer was that ticket prices reach their minimum about five months in advance, on average. Due to the disappearance of Malaysia Airlines flight MH370, I noticed a sudden fall in Malaysia Airlines ticket prices after the 8th of March 2014. So I added two charts: the first one doesn't include the deviations related to MH370; the second chart includes them.

Asanka Dissanayake: Forceful JSON Convertor with WSO2 ESB

Hi all, I am writing this blog after a long time. I am trying to address a use case I came across while working on a customer issue. The use case is as follows.

Forceful JSON Convertor

As shown in the above figure, there is a client that sends a message of any type to the ESB, which forwards it to the back end. The back end then responds with a message of a type other than JSON. The user wants to access the incoming payload from the back end in the Script Mediator, using

var payload=mc.getPayloadJSON();

This works perfectly when the back end sends messages of type application/json. If the back end sends a message of any other type, the above line of code cannot be used to access the JSON payload inside the script mediator.

I wrote the following custom mediator [1] to convert the payload into JSON and set it as a property in the message context, so it can be accessed in the script mediator and you can do whatever you want with the payload :).

Instructions are as follows.

How to Use

  1. Download Jar [2]  and copy it to CARBON_HOME/repository/components/lib.
  2. Change the Builder and Formatter for application/json in CARBON_HOME/repository/conf/axis2/axis2.xml to

WSO2 ESB 4.7.0



WSO2 ESB 4.8.1



Then in the out sequence use the following synapse config


<class name="org.wso2.carbon.esb.forceful.json.ForcefulJsonConvertor"/>
<script language="js">
    var json = mc.getProperty("FORCE_BUILT_JSON");
</script>
<property name="messageType" value="application/json" scope="axis2" type="STRING"/>



This can be used in the in sequence as well, but only if the incoming message type is other than application/json, because if the incoming message type is application/json, getPayloadJSON() can be used to get the JSON payload in the script mediator. Hope this helps.



Chanaka Fernando: How to configure timeouts in WSO2 ESB to get rid of client timeout errors

WSO2 ESB has some configuration parameters which control the timeout of a particular request going out of the ESB. In a typical scenario, your client sends a request to the ESB, and the ESB then sends a request to another endpoint to serve it.


The reason for clients getting timeouts is that the ESB timeout is larger than the client's timeout. This can be solved either by increasing the timeout on the client side or by decreasing the timeout on the ESB side. In either case, you can control the timeout in the ESB using the properties below.

1) Global timeout defined in the (ESB_HOME\repository\conf\) file. This decides the maximum time a callback waits in the ESB for a response to a particular request. If the ESB does not get any response from the back end within this time, it drops the message and clears out the callback. This is a global parameter which affects all the endpoints configured in the ESB.
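As a sketch, in stock WSO2 ESB distributions this global timeout is (as far as I know) the synapse.global_timeout_interval property in synapse.properties, in milliseconds; treat the file name and default value as assumptions for your version:

```properties
# Maximum time (ms) a callback waits in the ESB for a back-end response.
synapse.global_timeout_interval=120000
```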


2) Socket timeout defined in the (ESB_HOME\repository\conf\) file. This parameter decides the time a particular HTTP request waits for a response. If the ESB does not receive any response from the back end during this period, the HTTP connection times out, which eventually throws a timeout error on the ESB side, and the fault handlers are hit.
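Again as a sketch, in Pass-Through transport based ESB versions this is typically the http.socket.timeout property in passthru-http.properties (nhttp.properties for the NHTTP transport); the file and default are assumptions for your distribution:

```properties
# Time (ms) an open HTTP connection waits for a response before timing out.
http.socket.timeout=60000
```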


3) You can define timeouts in the endpoint configuration such that they affect only that particular endpoint. This is a better option if you need to configure timeouts per endpoint for different back-end services. You can also define the action taken upon timeout. The example configuration below sets the endpoint to time out in 30 seconds and then execute the fault handler.
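A sketch of such an endpoint configuration (the endpoint name and back-end URI are placeholders):

```xml
<endpoint name="TimeoutEP">
    <address uri="http://localhost:9000/services/BackendService">
        <timeout>
            <!-- duration is in milliseconds: 30 seconds -->
            <duration>30000</duration>
            <!-- on timeout, execute the fault handler of the sequence -->
            <responseAction>fault</responseAction>
        </timeout>
    </address>
</endpoint>
```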


Senaka Fernando: State of Development vs. State of Availability

Runtime Governance is a broad area of focus and involves many different kinds of people and processes. The complexity of runtime governance is perhaps the main contributor to why most projects are either fully or partially unsuccessful in meeting their objectives. If you consider the people involved, there are a variety of roles including project managers, DevOps, and also C*-executives who are interested in the outcomes of runtime governance. In terms of processes yet again there are many, such as continuous integration, system monitoring, and analytics for understanding overall performance, generated value, and ROI.

While there are several aspects that require attention to get runtime governance right, one of the most important aspects is having a proper lifecycle management strategy. This is also perhaps the most misunderstood area in terms of runtime governance. The whole idea of a design/development lifecycle is to keep track of a project’s progression from Concept to Production. But once in production, such a lifecycle is not really going to help. However, more user-oriented systems such as API Stores and Gateways also require a concept of a lifecycle to manage a running system. This is not focusing on the development of a project but on its availability - for it to be used or accessed by an end-user. This is what gives rise to two separate kinds of state that you need to keep track in a system, namely, the State of Development and the State of Availability.

The State of Development is all about keeping track of whether a project is ready to go into a production or "running live" setting. This involves lining up the development and continuous integration processes: whether the project is built properly, whether best practices have been followed and whether proper testing has been done. The lifecycle itself might be fully automated, semi-automated or even manual. The level of automation does no harm in terms of answering these readiness questions; however, automation can reduce a significant proportion of human error and produce more robust outputs within strict timelines. The only downside of automation is that it leaves little room for manual override and limits the agility of the project, creating a scenario where the "system drives the human" rather than the "human drives the system".

The State of Availability is all about understanding whether your project is ready to be accessed by the outside world (or the world beyond the project team). Now, the interesting fact is that most projects become accessible well before they go into the Production state, and you'll often find conflicting situations with most of the all-in-one linear and continuous lifecycles that attempt to merge the concepts of development and availability together. This creates situations where process and tooling don't fit, which leads development teams to explore their own sets of workarounds to make things happen. However, in a well-designed lifecycle management system, making things available and keeping track of development should both be possible at the same time. But these concepts are not fully orthogonal, and the teams themselves should be able to decide how the two connect to each other.

Therefore, to solve the problem of two kinds of state, the lifecycle management of your project should be designed such that it takes both of these things into consideration. Both of these kinds of state will have multiple stages of progression and they will require concepts of checklists, validations, approvals and permissions for the model to be meaningfully governed. Therefore, from the tool’s point of view, there should exist the ability to support multiple parallel lifecycles at the same time, which can be separately tracked and monitored. Such a Governance Framework will be able to support both Continuous Integration Systems and Enterprise Asset Registries at the same time.

Danushka Fernando: WSO2 App Factory - How to create a new application type and how application type deployment works

From WSO2 App Factory 2.1.0 onwards, Application Types can be added as archives. These archives, which should be named with the extension ".apptype", should contain a file named apptype.xml, which is the configuration for the new Application Type. The configuration below is a sample apptype.xml for Java web applications.

Sample configuration

<DisplayName>Java Web Application</DisplayName>
<Description>Web Application Archive file</Description>
-DarchetypeArtifactId=webapp-archetype -DarchetypeVersion=2.0.1
-Dversion=default-SNAPSHOT -DinteractiveMode=false

XML elements of the sample code

Property Name


Fixed Values

Mandatory / Optional


The type of the application archive.



Application Type Class name for application type.



Display name of application type. This is the name that you will be shown as "Application Type", when you are selecting the application type at the time of Application creation.



Extension of application archive.



Detailed description of application type.


If this property is false, the artifact is considered non-buildable and does not go through the process of building with the build server.




Build job configuration template name for the Jenkins application build.


Maven Archetype Request for build. Change archive type based on application type.


Server deployment path.


Define whether the application type is enabled or not.

enabled, disabled



Programming language used to write the artifact's code.


The application can be uploaded to App Factory as an archive file.


The value you define here will be added to the URL pattern of the application when it is in the development stage.


The value you define here will be added to the URL pattern of the application when it is in the testing stage.


Defines the URL pattern of the artifact that is deployed. For example, if you define the URL as https://appserver{stage}, then the {stage} part will dynamically change in each lifecycle stage according to the values you define in the stage parameter elements.



The place in which the new application type will appear in the application type drop-down list in the App Factory console.


Whether to allow domain mapping for this application type or not


**Note: If you have changed the stages in appfactory/apptype to be different from the defaults, for example if you removed the Development and Testing stages and added a new stage named PreProduction, then you can remove the DevelopmentStageParam and TestingStageParam properties and add a PreProductionStageParam property without changing anything else. And if you want to add something to the launch URL for the Production stage too, you can simply add a property named ProductionStageParam.

You can create an apptype.xml with similar content, create a zip file with the ".apptype" extension, and then copy the archive to the


Then an underlying Axis2 Deployer [1], which listens to the above-mentioned location, will extract the archive, read the apptype.xml and fill an in-memory data structure in the Application Type Manager class [2]. This class is a singleton and can be accessed as follows.


There is an Application Type Bean Map [3] which contains all the configurations provided in apptype.xml as properties. You can access a property with the name "foo" in application type "bar" with the following code.


A class must be provided for the name mentioned in ProcessorClassName in the configuration, by copying a jar to the server *. This should be an implementation of the interface ApplicationTypeProcessor [4]. There are a few events that can be customized according to the application type, so you can write a new implementation which matches your application type.





Hiranya Jayathilaka: Running Python from Python

It has been pointed out to me that I don't blog as often as I used to. So here's a first step towards rectifying that.
In this post, I'm going to briefly describe the support that Python provides for processing, well, "Python". If you're using Python for simple scripting and automation tasks, you might often have to load, parse and execute other Python files from your code. While you can always "import" some Python code as a module and execute it, in many situations it is impossible to determine precisely at development time which Python files your code needs to import. Also, some Python scripts are written as simple executable files, which are not ideal for inclusion via import. To deal with such cases, Python provides several built-in features that allow referring to and executing other Python files.
One of the easiest ways to execute an external Python file is by using the built-in execfile function. This function takes the path to another Python file as the only mandatory argument. Optionally, we can also provide a global and a local namespace. If provided, the external code will be executed within those namespace contexts. This is a great way to exert some control over how certain names mentioned in the external code will be resolved (more on this later).
Another way to include some external code in your script is by using the built-in __import__ function. This is the same function that gets called when we use the usual "import" keyword to include a module. But unlike the keyword, the __import__ function gives you a lot more control over certain matters like namespaces.
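As a quick illustration (a hypothetical snippet, not from the original post), __import__ lets you resolve a module whose name is only known at runtime, which the plain "import" keyword cannot do with a variable:

```python
# The module name lives in a variable; __import__ resolves it at runtime.
module_name = 'json'
mod = __import__(module_name)
print(mod.dumps({'a': 1}))
```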
Another way to run some external Python code from your Python script is to first read the external file's contents into memory (as a string), and then use the exec keyword on it. The exec keyword can be used as a function call or as a statement.
code_string = load_file_content('/path/to/')
Similar to the execfile function, you have the option of passing custom global and local namespaces. Here's some code I've written for a project that uses the exec keyword:
globals_map = globals().copy()
globals_map['app'] = app
globals_map['assert_app_dependency'] = assert_app_dependency
globals_map['assert_not_app_dependency'] = assert_not_app_dependency
globals_map['assert_app_dependency_in_range'] = assert_app_dependency_in_range
globals_map['assert_true'] = assert_true
globals_map['assert_false'] = assert_false
globals_map['compare_versions'] = compare_versions
try:
    exec(self.source_code, globals_map, {})
except Exception as ex:
    utils.log('[{0}] Unexpected policy exception: {1}'.format(self.name, ex))
Here I first create a clone of the current global namespace, and pass it as an argument to the exec function. The clone is discarded at the end of the execution. This makes sure that the code in the external file does not pollute my existing global namespace. I also add some of my own variables and functions (e.g assert_true, assert_false etc.) into the global namespace clone, which allows the external code to refer to them as built-in constructs. In other words, the external script can be written in a slightly extended version of Python.
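The pattern above can be reduced to a minimal, self-contained sketch (a hypothetical snippet, not from the project; using the Python 3 function form of exec):

```python
# Run a string of code under a custom global namespace: names injected
# into the dict act like built-ins for the external code, and the real
# globals of the calling script are never touched.
code_string = "result = factor * 2"
sandbox = {'factor': 21}  # injected name the external code may use
exec(code_string, sandbox)
print(sandbox['result'])
```

Discarding the sandbox dict afterwards gives exactly the isolation described above.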
There are other neat little tricks you can do using constructs like exec and execfile. Go through the official documentation for more details.

Amila Maharachchi: WSO2 App Cloud vs Heroku - Part 1

It's been a year since we launched WSO2 App Cloud. It was launched as a preview in October 2013. Since then, we at WSO2 have been working on making it a leader among public cloud offerings. Needless to say, we compare our stuff with competitors to see where we are and how we can improve. I always wanted to write up my findings from the comparison between WSO2 App Cloud and Heroku. Heroku is a cloud platform as a service supporting several programming languages. It was acquired by Salesforce.com in 2010 [source: Wikipedia].

This comparison will be done in several steps, and this is the first part. In this step, I concentrated on the following aspects of the two clouds.
  1. Onboarding process
  2. Creating your first application
  3. Launching your first application
  4. Editing and re-launching the application
The rest of this post is my experience with the above four topics in WSO2 App Cloud and Heroku. I have tried to be unbiased while writing it down.

Onboarding process

Both WSO2 App Cloud and Heroku provide a pretty similar onboarding experience. Both allow you to sign up by providing your email address. Then you get an email with a link to click and confirm your account. The next step is also the same for both, with one additional parameter requested by WSO2 App Cloud: the organization name.

Heroku Sign-up
WSO2 App Cloud Sign-up

WSO2 App Cloud has an organization concept which allows a set of developers, QA engineers and DevOps people to work collaboratively. When you create an account in WSO2 App Cloud, it creates an organisation for you; in other words, this is a tenant. You can then invite other members to this organisation and assign members to different applications developed under this organisation entity. I'll explain this structure in a future post (to keep you reading through this :))

Heroku does not have the organisation concept. It only has the application entity. You sign up and create an application; then you are the owner of that app. You can invite others to collaborate on your app. But if you create another app, you need to invite them again for collaboration.

Creating your first application

After you sign up and sign in successfully, let's have a look at the first app creation experience. Both offerings support multiple application types. If I name them against each cloud:
  • WSO2 App Cloud
    • Java web applications
    • Jaggery web applications (Jaggery is a Javascript framework introduced by WSO2 itself)
    • JAX-WS services
    • JAX-RS services
    • WSO2 Data Services
    • It also allows you to upload existing Java web apps, JAX-WS, JAX-RS and Jaggery apps
  • Heroku
    • Ruby
    • PHP
    • Node.js
    • Python
    • Java
    • Clojure
    • Scala 
According to the supported application types, Heroku leads at the moment. Let's consider the app creation process.

In WSO2 App Cloud, it's a two-click process. After you log in, you click the "Add New Application" button, fill in the information and click the "Create Application" button. This creates the skeleton of the app, a Git repository for it, and build space for the app. Then it builds and deploys the app for you. All you have to do is wait a couple of minutes, go to the app's overview page and click the "Open" button to launch your app. This has the following advantages.
  1. You don't need to know anything about the folder structure of the app type you are creating. WSO2 App Cloud does it for you.
  2. Within a couple of minutes, you have an up-and-running application. This is the same for new users and for users who are already familiar with the WSO2 App Cloud.
See this video on WSO2 App Cloud's app creation experience.

My view on Heroku's app creation process is that it is somewhat difficult and takes more time for a new or less technical user, for the following reasons.
  1. First, we need to download and install the Heroku Toolbelt (its CLI tool). During this process, we need to either upload our SSH public key or create and upload a new one.
  2. Then, we need to git clone the skeleton app. I was trying a Java application. The cloned application source only has one Java class, which extends the HttpServlet class. But if I wanted to create a web app, I needed to know the folder structure to be created, etc. (I see this as a major usability drawback.)
  3. After we have pushed the code back to Heroku, there is another step: we need to make sure at least one instance of the application is running. We have to specify the number of instances; then they are started and the app is deployed to them.
Although the above takes time and some technical knowledge, once you are used to the Heroku Toolbelt, it will allow you to do your work quickly.

Launching your first application

Now I have created my first app in both clouds. Let's see how we can launch it.

When you create the app in WSO2 App Cloud, it is automatically deployed after it goes through the build process. Within seconds of creation, it presents you the URL to launch your app. A user with very little knowledge can easily get an app created and running.

After creating the app in Heroku, you need to push the code back and then define the scale options, i.e. there is a command similar to "heroku ps:scale web=1". This means Heroku will deploy the app to one instance. You can also go to the Heroku UI and open the application from there. In that case too, it will deploy the app to one instance and present it to you (so setting the scaling option is not a must).

Editing and re-launching the application

OK. Now I have created and launched my first application. I would like to edit it and add the code I want to see in my app.

WSO2 App Cloud is a clear winner in this. It provides you a cloud IDE. You can just click the "Edit Code" button and your app code will open in the browser. Not only can you edit the code in the browser, you can also build and run it before pushing the code to the App Cloud. Very cool, isn't it?
The second option is to edit the code using WSO2 Developer Studio. To do this, you need to have a working installation of WSO2 Developer Studio.
The third option is to clone the source code and edit it using your favourite IDE.
WSO2 App Cloud IDE
See this video on WSO2 App Cloud's cloud IDE.

Heroku does not provide an IDE, so you need to edit the code with your local IDE. Recently I saw that Heroku allows you to deploy from Dropbox, i.e. you can drop your code into Dropbox and it will get deployed from there. I don't see a real advantage in this, and I have to say I didn't try the feature because I am not a fan of Dropbox.

Both clouds have an auto-deploy feature, i.e. if you edit your code and push it to the Git repository, the changes get deployed automatically. When you launch the app, you will see the changes.

I have covered the comparison of WSO2 App Cloud vs. Heroku on the above four topics. There are more areas to compare. I'll be covering the following topics in my next post.
  • Using resources within the app (databases etc.)
  • Collaborative development
  • Lifecycle management of the app
  • Monitoring logs
Your feedback is welcome. Stay tuned :) 

Amila MaharachchiLet's get started with WSO2 App Cloud

We in the WSO2 Cloud team are working on improving the experience we provide to WSO2 Cloud users. As part of this effort, we try to provide clear instructions on using the various features available. Since we offer two cloud services to users, I'll be talking about the WSO2 App Cloud in this blog post.

As the first step, we published a set of tutorials with step-by-step instructions on how to do things in the WSO2 App Cloud. This included

  • Creating an application from scratch
  • Uploading an existing application
  • Editing your app with the Cloud IDE
  • Creating and using databases
  • Invoking APIs from your app code etc.
You can find those tutorials at

As the next step, we started working on a series of screencasts which show you how to use different features in the WSO2 App Cloud. These screencasts go hand in hand with the above-mentioned tutorials. We have published them on YouTube and also linked them from the tutorials, so you can use both to make your life easier. At the moment we have released four screencasts and we are in the process of releasing more. I'll list them here for your reference.

  • Create and deploy your first Java application to WSO2 App Cloud
  • Edit your app using the Cloud IDE

  • Edit your app using your favourite IDE

  • Upload your WAR file to WSO2 App Cloud

When you start using WSO2 App Cloud, go through the tutorials and these screencasts. If you face any problems, feel free to contact the WSO2 Cloud team. We would like to hear your feedback and improve the experience we provide.

Sohani Weerasinghe

Accessing an API without using an access token

You can do this by changing the authentication level to 'None' as shown in the attached image. By default it is set to 'Application and Application User'; you can change the authentication level before you Save and Publish the created API.

Sohani Weerasinghe

Invoking an API using WSO2 API Manager

You can create an API to invoke the echo service in WSO2 ESB as follows:

1. Start WSO2 ESB
2. Then log in to the WSO2 AM publisher
3. Create an API stating the production endpoint as http://sohani-ThinkPad-T530:8280/services/echo
4. Then log in to the WSO2 AM store, subscribe to the API and generate the token
5. Get the production URL from the store to send the request
6. You can refer to the AM documentation for token generation
7. Now you can use the REST client tool in AM or a curl command to send the request


Sohani Weerasinghe

Writing a data service to execute a stored procedure in WSO2 DSS

This blog post describes how to execute a stored procedure in WSO2 DSS. You can follow the steps below.

1. Create a sample database


2. Create a table using below command

CREATE TABLE company(name VARCHAR(10), id VARCHAR(10), price DOUBLE, location VARCHAR(10));

3. Create the stored procedure as follows

CREATE PROCEDURE InsertData(compName VARCHAR(10), compId VARCHAR(10), compPrice DOUBLE, compLocation VARCHAR(10)) INSERT INTO company VALUES(compName,compId,compPrice,compLocation) ;

Now you can configure the DSS as follows

1. Download the MySQL connector JAR and copy it to <DSS_HOME>/repository/components/lib

2. Create a data service as follows 

<data enableBatchRequests="true" name="SampleDataService">
  <description>Sample Data Service</description>
  <config id="Demo">
     <property name="driverClassName">com.mysql.jdbc.Driver</property>
     <property name="url">jdbc:mysql://localhost:3306/ESB_SAMPLE</property>
     <property name="username">root</property>
     <property name="password">root</property>
  </config>
  <query id="insertData" useConfig="Demo">
     <sql>Call ESB_SAMPLE.InsertData(?,?,?,?)</sql>
     <param name="name" ordinal="1" sqlType="STRING"/>
     <param name="id" ordinal="2" sqlType="STRING"/>
     <param name="price" ordinal="3" sqlType="DOUBLE"/>
     <param name="location" ordinal="4" sqlType="STRING"/>
  </query>
  <operation name="insertData" returnRequestStatus="false">
     <call-query href="insertData">
        <with-param name="name" query-param="name"/>
        <with-param name="id" query-param="id"/>
        <with-param name="price" query-param="price"/>
        <with-param name="location" query-param="location"/>
     </call-query>
  </operation>
</data>

3. Save the data service file in <DSS_HOME>/repository/deployment/server/dataservices.
4. Use the TryIt tool to access the data service.

Sohani Weerasinghe

Setting up message tracing for WSO2 ESB

Message tracing is used to trace, track and visualize a message's body during its transmission. We can use the Activity Dashboard of WSO2 BAM to trace messages going through WSO2 ESB.

First of all, you need to install the message tracer feature into WSO2 ESB by following the steps below.

1. Login to WSO2 ESB and select Features

2. In the Repository Management tab click Add Repository
3. Insert a name and the repository URL.
4. Go to the Available Features tab and select the added repository. Under the Filter by feature name field, enter BAM Message Tracer Handler Aggregate, select the Show only the latest versions checkbox and click Find Features.
5. The BAM Message Tracer Handler Aggregate feature appears. Select it and click Install.
6. Click Finish to complete the installation.

Now in order to configure message tracing follow below steps

1. Go to the Configure menu of the ESB Management Console, click Message Tracing and then click Message Tracing Configuration.
2. Select all the check boxes and enter the Receiver URL (in tcp://[IP address of localhost]:[thrift port] format), Username and Password used by BAM.
3. Click Update.

Please note that if you change the port offset, you should change the receiver port accordingly: if the offset is 0 the port is 7611, and if the offset is 1 it should be 7612.
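To make the offset rule concrete, here is a minimal sketch of the arithmetic (the class and method names are my own, not part of any WSO2 API):

```java
public class ThriftPort {
    // Default Thrift receiver port of WSO2 BAM when the port offset is 0
    static final int BASE_THRIFT_PORT = 7611;

    // The receiver port is simply the base port shifted by the configured offset
    static int receiverPort(int portOffset) {
        return BASE_THRIFT_PORT + portOffset;
    }

    public static void main(String[] args) {
        System.out.println(receiverPort(0)); // 7611
        System.out.println(receiverPort(1)); // 7612
    }
}
```

The same rule applies to the other Carbon ports (e.g. the Cassandra port mentioned below).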

Then you can publish data to BAM

1. Invoke the service

2. Login to WSO2 BAM
3. Go to the Tools menu, click Cassandra Explorer and then click Explore Cluster.
4. Enter the default values localhost:9160, admin and admin respectively for Connection URL, Username and Password, and click Connect.

If the port offset is 1, then this should be localhost:9161.

In Keyspaces, check for the published contents in the BAM_MESSAGE_TRACE column family, which is located in the EVENT_KS keyspace.

Malintha AdikariOpen multiple VMware instances

There is a shortcut for this task in Linux-based distributions.

 1. Press ALT + F2

 2. Then type "vmplayer" in the run dialog that pops up

 3. Hit ENTER key

 And you are done.

John MathonMaybe Thanksgiving has me in the spirit of thinking of giving society another shot at fraudless voting

Here is a presentation I posted on the fraudless voting system made possible by the blockchain. It is much easier to read, with more facts and detail, and more consumable by the public.

Sriskandarajah SuhothayanAdding Siddhi extensions when using Siddhi as a Java library

You can write Siddhi extensions and run them in WSO2 CEP as given in the documentation.

But if you are using Siddhi as a Java library, you can add them to the SiddhiManager as follows:

        List extensionClasses = new ArrayList();
        extensionClasses.add(CustomSiddhiExtension.class); // your extension class (hypothetical name)
        SiddhiConfiguration siddhiConfiguration = new SiddhiConfiguration();
        siddhiConfiguration.setSiddhiExtensions(extensionClasses);
        SiddhiManager siddhiManager = new SiddhiManager(siddhiConfiguration);

Kasun GunathilakeShow an image in a web page using binary data

Here I have shown how JavaScript can be used to show an image using binary data (the binary data may come from a database).
JavaScript function

function showBinaryImage(binaryImageData) {
    // Trim leading/trailing whitespace from the base64 data and build a data URI
    var html = "<img width=\"40%\" height=\"40%\" src=\"data:image/png;base64," + binaryImageData.replace(/^\s+|\s+$/g, '') + "\"/>";
    document.getElementById('content_div').innerHTML = html;
}
Here 'content_div' is the id of an HTML div element inside the page.
<div id="content_div"></div>
Now the binary image should show in your web page.
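The same data URI can also be assembled on the server side before it reaches the page. A minimal Java sketch, using java.util.Base64 (Java 8+); the class and method names are illustrative, not from any library:

```java
import java.util.Base64;

public class DataUriExample {
    // Build a data URI for an image from raw bytes (e.g. read from a database BLOB)
    static String toDataUri(byte[] imageBytes) {
        return "data:image/png;base64," + Base64.getEncoder().encodeToString(imageBytes);
    }

    public static void main(String[] args) {
        byte[] bytes = {(byte) 0x89, 'P', 'N', 'G'};  // first four bytes of a PNG header
        System.out.println(toDataUri(bytes));         // data:image/png;base64,iVBORw==
    }
}
```

The resulting string can be sent to the browser and assigned directly to an img element's src attribute, exactly as the JavaScript above does.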

Kasun GunathilakeStream processing on WWW data

This is about my final year project at the University of Moratuwa. We named this project Glutter; the name 'Glutter' is the result of combining the words 'Clutter' and 'Gutter'. As the name implies, Glutter operates as a gutter connected to a clutter of information. In other words, it enables users to gather information from various sources and then set up rules for how that content should be filtered and modified to fit their requirements.

This project is similar to Yahoo Pipes, because Yahoo Pipes also works by enabling users to gather information from different sources and then set up rules on how that content should be modified (filtering, renaming, truncating, translating, etc.). But the main limitation of Yahoo Pipes is that it is not aware of temporal aspects and causality of events on the web, which hinders its usefulness drastically. As a solution to that, we introduce Glutter. The approach in Glutter is to use Complex Event Processing on web events, enabling temporal querying and an awareness of causality in its operators. It supports more input and output means, such as Twitter, email, feeds, web services, CSV data and XMPP chats, making it more connected to the real-time web.

The following video shows the concept of the project.

When a typical internet user steps onto the internet, information keeps flowing at them in real time and it is difficult to keep track of it all. The internet is also a huge mess of data and, to make things worse, it is a dynamically changing mess. Glutter can act as an intermediary between the clutter of information and the web user, so that by using Glutter the user gets only the information they are interested in, in real time.

The main objective of Glutter is to empower users by allowing them to decide what, when and how to view and get notified of information on the web, without the developers designing and deciding it (and without writing a single line of code). The client is provided a sophisticated workbench to create workflows according to their preference, in order to decide what to view and when the information should be delivered.

For creating workflows we have provided a user interface (the Workbench), which allows users to create workflows and run them against web data sources. The Workbench consists of a toolbox containing drag-and-drop components which can be used to construct workflows through a graphical user interface. The main components can be categorized as:

  • Connectors - Connectors are used to establish the connection to various data sources. Currently Glutter consists of 5 connectors which support RSS/ Atom Feeds, EMail (pop/imap), Twitter, CSV, and Pull based Querying of Webservices.
  • Operators - Operators allow the user to perform many different operations on the stream. 
  • Sink - Sinks are the end components of a workflow. Users can get the output of the workflow in many different forms, such as an email, a tweet or an XMPP message, or it can be viewed in the viewer section.
All components are listed below

Below I have shown a sample workflow.

In this scenario, the user is interested in particular news, but the news items are in different languages, and the user also wants to know which country each news item belongs to. They can use Glutter to do this task, as shown in the above figure. The user gets feeds in Dutch and Spanish, and another in English. These news feeds can be fetched using three feed connectors; the translate operator then translates the feeds into English. Now that all the different-language feeds are converted to English, the union operator combines them into one feed. Since the user wants to view the news based on geo-location, the semantic operator can be used: it analyses the text, semantically extracts the geo-locations within it, and adds these geo-location details to every news feed item. The results are then sent to the data sink, so they can be viewed using the viewer, which supports a map view.

Below I have shown the map viewer.

This is only one example to show the power of Glutter. Below I have given other use cases.

  • Another possible use case is redirection of data channels: fetch interesting tweets to email or chat, or feed your blog to Twitter (you could add more filtering operations in between to control what gets tweeted from your blog).
  • Email auto-replier.
  • Send notification emails when the price of an eBay item drops below some level.
  • Stock market chat notifications when prices increase or decrease (using the pattern recognition operator).
  • When an interesting feed item arrives, do a Google search (using the web service operator) based on its content and get additional links for that news.
  • There are many more; it depends on the creativity of the user :) (since we have provided lots of connectors, operators and sinks).
Some of the resulting viewers are shown below.

Line Chart

Area Chart

List View

Kasun GunathilakeUseful SVN commands

svn add
svn add - Add files and directories to your working copy and schedule them for addition to the repository. They will be uploaded and added to the repository on your next commit.
svn add --non-recursive testDir - You can add a directory without adding its contents 
svn add * --force - If you want to add every unversioned object in your working copy

svn update 
svn up | svn update - Update your working copy.
svn up -r8000 - Update your working copy to an older revision (revision 8000)

svn checkout
svn co | svn checkout -  Check out a working copy from a repository.
svn co

svn delete
svn del | svn delete - Delete an item from a working copy or the repository.
svn del testDir

svn mkdir
svn mkdir - Create a new directory under version control.
svn mkdir testDir

svn commit
svn ci | svn commit - Send changes from your working copy to the repository.
svn ci -m "Added new functionality." 

svn move 
svn move | svn mv — Move a file or directory
$svn mv parentDir/test.txt parentDir/subDir
A         parentDir/subDir/test.txt
D         parentDir/test.txt

svn cleanup
svn cleanup - Recursively clean up the working copy, removing locks and resuming unfinished operations. If you ever get a “working copy locked” error, run this command to remove stale locks and get your working copy into a usable state again.

svn blame 
svn blame — Show author and revision information in-line for the specified files or URLs. (This works only for files.)
As an example 

svn blame

 15686    saminda <project xmlns="">
 15686    saminda     <parent>
 15686    saminda         <groupId>org.wso2.carbon</groupId>
 15686    saminda         <artifactId>carbon-parent</artifactId>
 72444    ruwan         <version>3.1.0-SNAPSHOT</version>
 15686    saminda     </parent>
 15686    ruwan     <modelVersion>4.0.0</modelVersion>
 15686    saminda     <artifactId>samples</artifactId>
 15686    saminda     <packaging>pom</packaging>
 15686    saminda     <name>samples</name>
 15686    saminda     <description>samples</description>
 15686    saminda     <url></url>
 15686    saminda     <modules>
 15687    saminda         <module>org.wso2.carbon.sample</module>
 15686    saminda     </modules>
 15686    saminda </project>

svn diff
svn diff - Display the differences between two paths.
svn diff trunkDir - Compare repository and your working copy.
svn diff -r 3900 trunkDir - Compare your working copy's modifications against an older revision
svn diff - Compare revision 3000 to revision 3010 using “@” syntax

svn cat
svn cat -  Output the contents of the specified files or URLs.
svn cat - You can view readme.txt in your repository without checking it out

svn log
svn log - Display commit log messages.
svn log - sample log is shown below.

r121 | pradeeban | 2010-10-17 01:13:37 +0530 (Sun, 17 Oct 2010) | 2 lines
Applying the patch for ARCHITECTURE-25 provided by kasunw. BPS loan approval demo was added with necessary documentation with minor change to Account Service client, in this patch. 
r119 | lahiru | 2010-10-16 23:09:01 +0530 (Sat, 16 Oct 2010) | 2 lines
applying patch from kasun for
r118 | lahiru | 2010-10-16 23:07:56 +0530 (Sat, 16 Oct 2010) | 2 lines
applying patch from kasun.
r117 | lahiru | 2010-10-16 22:42:04 +0530 (Sat, 16 Oct 2010) | 2 lines
adding patch given by kasun.
r116 | kasunw | 2010-10-15 00:44:58 +0530 (Fri, 15 Oct 2010) | 2 lines
changes to platform demo, using carbon studio, and minor feature adding like automating using ant.

svn revert
svn revert - Undo all local edits.
svn revert 
svn revert testDir --recursive - If you want to revert a whole directory of files, use the --recursive flag.

svn resolved
svn resolved - Once you've resolved the conflict, run svn resolved to let your working copy know you've taken care of everything

svn list
svn list - List directory entries in the repository.
svn list

Kasun GunathilakeIntelliJ IDEA shortcut keys

IntelliJ IDEA is a Java IDE by JetBrains. It is a powerful IDE for Java development, and it has lots of shortcut keys that make developers' lives easier. In the image below I have shown almost all the IDEA shortcut keys.

Kasun GunathilakeHow to find files in Ubuntu using Terminal

Case insensitive searches can be achieved by using the -iname switch

find /home -iname '*.mpg' -o -iname '*.avi'

Let's search for .avi files bigger than 700MB. This can be done with:

find /home/ -name '*.avi' -a -size +700M

Now, let's find the same subset of files that were modified less than 15 days ago

find /home/ -name '*.avi' -a -size +700M -mtime -15

Kasun GunathilakeGenerating project structure using Maven

To generate a project structure using Maven, we use Maven's archetype mechanism. In Maven, an archetype is a template of a project which is combined with some user input to produce a working Maven project tailored to the user's requirements. (This helps you generate the desired project structure depending on the application you are trying to build.)

Below I have shown how you can use Maven to generate a project structure for a simple Java application.

mvn archetype:create -DgroupId=org.wso2.carbon -DartifactId=sample -DarchetypeArtifactId=maven-archetype-quickstart

This will generate a project structure as follows

|-- sample
|   |-- pom.xml
|   `-- src
|       |-- main
|       |   `-- java
|       |       `-- org
|       |           `-- wso2
|       |               `-- carbon
|       |                   `--
|       `-- test
|           `-- java
|               `-- org
|                   `-- wso2
|                       `-- carbon
|                           `--

This will create the source with the package name org.wso2.carbon, as well as a default pom.xml and a class for unit testing.

Kasun GunathilakeWSDL to UDDI Mapping


Universal Description, Discovery and Integration (UDDI) is a platform-independent, Extensible Markup Language (XML) based registry that provides a mechanism for describing and discovering Web service providers, Web services and the technical interfaces which may be used to access those services. Several UDDI implementations are available (Apache jUDDI [1], OpenUDDI Server, etc.). The Web Services Description Language (WSDL) is an XML-based language for describing the interface, protocol bindings and deployment details of network services. The objective of this blog is to show the relationship between WSDL and UDDI and to describe a mechanism for mapping WSDL service descriptions to the UDDI data structures. The information in this blog adheres to the procedures outlined in the OASIS UDDI Technical Note [2] and is consistent with the UDDI Version 3.0.2 Specification [3].


How to invoke a service using UDDI registry

UDDI is designed to be interrogated by SOAP messages and to provide access to WSDL documents describing service binding information required to interact with the web services listed in the registry.

The steps involved in providing and consuming a service are:

  1. A service provider (business) describes its service using WSDL. This definition is published to a UDDI registry.
  2. A service consumer looks up the service in the UDDI registry and receives the service binding information that can be used to determine how to communicate with that service.
  3. The client then uses the binding information to invoke the service.

How to map a WSDL document in UDDI

This mapping describes a methodology to map WSDL 1.1 documents to UDDI version 3. Before going into the details of the mapping, it is important to understand the UDDI data structures. Here I have briefly described them.

UDDI data structures

  • businessEntity - A businessEntity structure is used to represent the business or service provider within UDDI.
  • businessService - A businessService structure is used to represent a web service. A businessEntity can have several businessServices.
  • bindingTemplate - A bindingTemplate contains the technical information associated with a particular service. A businessService can have several bindingTemplates.
  • Technical Model (tModel) - A tModel is a generic container of information where designers can write any technical information associated with using the Web service.

WSDL portType to tModel Mapping

The information the UDDI tModel represents about a WSDL portType is its entity type, local name, namespace, and the location of the WSDL document that defines the portType. Each WSDL portType maps to a UDDI tModel having the same name as the local name of the portType in the WSDL. The overviewURL provides the location of the WSDL document. In addition, the tModel contains a category bag with keyedReferences for the type categorization as “portType” and the namespace of the portType (if the wsdl:portType has a targetNamespace).

Following is the structure of UDDI portType tModel

<tModel tModelKey="uuid:e8cf1163-8234-4b35-865f-94a7322e40c3">
    <name>[WSDL portType local name]</name>
    <overviewDoc>
        <overviewURL useType="wsdlInterface">[WSDL location URL]</overviewURL>
    </overviewDoc>
    <categoryBag>
        <keyedReference tModelKey="[namespace category system tModel key]"
            keyName="portType namespace"
            keyValue="[WSDL namespace]"/>
        <keyedReference tModelKey="[WSDL entity type category system tModel key]"
            keyName="WSDL type"
            keyValue="portType"/>
    </categoryBag>
</tModel>

WSDL binding to tModel Mapping

The information the UDDI tModel represents about a WSDL binding is its entity type, local name, namespace, the location of the WSDL document that defines the binding, the portType that it implements, the protocol, and optionally the transport information. Each WSDL binding maps to a UDDI tModel having the same name as the local name of the binding in the WSDL. The overviewURL provides the location of the WSDL document. In addition, the tModel contains a category bag with the following keyedReferences

  • namespace of the binding (If the wsdl:binding has a targetNamespace).
  • type categorization as “binding”
  • binding characterized as type "wsdlSpec".
  • portType reference for wsdl:portType to which the wsdl:binding relates.
  • protocol categorization
  • transport categorization

Following is the structure of the UDDI binding tModel

<tModel tModelKey="uuid:49662926-f4a5-4ba5-b8d0-32ab388dadda">
    <name>[WSDL binding local name]</name>
    <overviewDoc>
        <overviewURL useType="wsdlInterface">[WSDL location URL]</overviewURL>
    </overviewDoc>
    <categoryBag>
        <keyedReference tModelKey="[namespace category system tModel key]"
            keyName="binding namespace"
            keyValue="[WSDL namespace]"/>
        <keyedReference tModelKey="[WSDL entity type category system tModel key]"
            keyName="WSDL type"
            keyValue="binding"/>
        <keyedReference tModelKey="[uddi-org:types category system tModel key]"
            keyName="uddi-org:types"
            keyValue="wsdlSpec"/>
        <keyedReference tModelKey="[portType reference category system tModel key]"
            keyName="portType reference"
            keyValue="[tModel key of the PortType]"/>
        <keyedReference tModelKey="[protocol category system tModel key]"
            keyName="[Protocol supported by the binding]"
            keyValue="[tModel key of the Protocol tModel]"/>
        <keyedReference tModelKey="[transport category system tModel key]"
            keyName="[Transport supported by the binding]"
            keyValue="[tModel key of the Transport tModel]"/>
    </categoryBag>
</tModel>

WSDL port to UDDI bindingTemplate Mapping

A WSDL port maps to a bindingTemplate. The information the UDDI bindingTemplate represents about a WSDL port is the binding that it implements, the portType that it implements, the local name of the port and the access point of the service. The bindingTemplate has a tModelInstanceDetails element which contains the following tModelInstanceInfo elements.

  • A tModelInstanceInfo with the tModelKey of the tModel corresponding to the binding that the port implements. The instanceParms represents the wsdl:port local name.
  • A tModelInstanceInfo with the tModelKey of the tModel corresponding to the portType that the port implements.
  • The accessPoint is set from the location attribute on the extension element that is associated with the port element.

WSDL service to UDDI businessService Mapping

A WSDL service maps to a businessService. The information the UDDI businessService represents about a service is its entity type, local name, namespace, and the list of ports that it supports. The name of the businessService is the local name of the service in the WSDL. In addition, the businessService contains a category bag with the following keyedReferences
  • namespace of the service
  • local name of the service
  • type categorization as “service”

Following is the structure of the UDDI businessService and bindingTemplate

<businessService serviceKey="[service key]"
        businessKey="[businessKey of the businessEntity to which this service belongs]">
    <name>[Service local name]</name>
    <bindingTemplates>
        <!--WSDL port maps to a bindingTemplate-->
        <!--1 or more repetitions-->
        <bindingTemplate bindingKey="[binding key]">
            <accessPoint useType="endpoint">[EndPoint URL]</accessPoint>
            <tModelInstanceDetails>
                <!-- tModelInstanceInfo indicating the binding-->
                <tModelInstanceInfo tModelKey="[tModel key of the binding tModel]">
                    <description xml:lang="en">
                        The wsdl:binding that this wsdl:port implements. The instanceParms specifies the port local name.
                    </description>
                    <instanceDetails>
                        <instanceParms>[WSDL port local name]</instanceParms>
                    </instanceDetails>
                </tModelInstanceInfo>
                <!--tModelInstanceInfo indicating portType -->
                <tModelInstanceInfo tModelKey="[tModel key of the portType tModel]">
                    <description xml:lang="en">
                        The wsdl:portType that this wsdl:port implements
                    </description>
                </tModelInstanceInfo>
            </tModelInstanceDetails>
        </bindingTemplate>
    </bindingTemplates>
    <categoryBag>
        <keyedReference tModelKey="[namespace category system tModel key]"
            keyName="service namespace"
            keyValue="[Service namespace Value]"/>
        <keyedReference tModelKey="[local name category system tModel key]"
            keyName="service local name"
            keyValue="[Service local name value]"/>
        <keyedReference tModelKey="[WSDL entity type category system tModel key]"
            keyName="WSDL type"
            keyValue="service"/>
    </categoryBag>
</businessService>

To summarize:
  • A WSDL portType element is mapped to a UDDI tModel.
  • A WSDL binding element is mapped to a UDDI tModel.
  • A WSDL port element is mapped to a UDDI bindingTemplate, which has information about the WSDL binding and the WSDL portType implemented by the port.
  • Finally, a WSDL service element is mapped to a UDDI businessService.

This blog post has shown how to map a WSDL document to the UDDI registry using the approach described in the OASIS UDDI Technical Note [2]; it also adheres to the UDDI Version 3.0.2 Specification [3].


Kasun GunathilakeFew ways to improve the performance of your Java code

In the following post, I show several ways to improve the performance of your Java code.

Appending string values in a loop

  String s = "";
  for (int i = 0; i < field.length; i++) {
    s = s + field[i];
  }

  StringBuffer buf = new StringBuffer();
  for (int i = 0; i < field.length; i++) {
    buf.append(field[i]);
  }
  String s = buf.toString();

When concatenating strings in a loop, in each iteration the String is converted to a StringBuffer/StringBuilder, appended to, and converted back to a String. This additional cost can be avoided by directly using the second approach.

Creating instances of Integer, Long, Short, Character, and Byte

Using new Integer(int) is guaranteed to always result in a new object whereas Integer.valueOf(int) allows caching of values to be done by the compiler, class library, or JVM. Using of cached values avoids object allocation and the code will be faster.

Therefore Integer.valueOf(int) is better than using new Integer(int).
The same applies to the other wrapper types (Long, Short, Character and Byte).
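A tiny check (my own example, not from the original post) makes the caching visible; == compares object identity here, and 100 lies inside the default cache range of -128..127:

```java
public class ValueOfDemo {
    public static void main(String[] args) {
        // Integer.valueOf serves small values (-128..127) from a cache,
        // so the same instance comes back each time
        Integer cachedA = Integer.valueOf(100);
        Integer cachedB = Integer.valueOf(100);
        System.out.println(cachedA == cachedB);   // prints true

        // new Integer(...) is guaranteed to allocate a fresh object on every call
        Integer freshA = new Integer(100);
        Integer freshB = new Integer(100);
        System.out.println(freshA == freshB);     // prints false
    }
}
```

Note that == is used deliberately to show object identity; for value comparisons you should still use equals().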

Accessing values in a Map.

for (Object key : map.keySet()) {
    Object value = map.get(key);        // extra lookup per key
}

for (Map.Entry entry : map.entrySet()) {
    Object value = entry.getValue();    // no extra lookup
}

It is more efficient to iterate over the entrySet of the map than over the keySet, because it avoids the Map.get(key) lookup for each key.
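The two iteration styles can be sketched as follows (my own example, not from the original post); both compute the same result, the entrySet version just skips the per-key lookup:

```java
import java.util.HashMap;
import java.util.Map;

public class MapIteration {
    // keySet walk: one extra map.get(key) lookup per iteration
    static int sumViaKeySet(Map<String, Integer> map) {
        int total = 0;
        for (String key : map.keySet()) {
            total += map.get(key);
        }
        return total;
    }

    // entrySet walk: key and value come from the entry, no extra lookup
    static int sumViaEntrySet(Map<String, Integer> map) {
        int total = 0;
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            total += entry.getValue();
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> stock = new HashMap<>();
        stock.put("bread", 33);
        stock.put("tea", 42);
        System.out.println(sumViaKeySet(stock) == sumViaEntrySet(stock));   // prints true
    }
}
```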

Kasun GunathilakeReinstall grub2 after installing windows 7 (XP/Vista)

You may need to reinstall grub when Windows is installed after Ubuntu (or if you reinstalled Windows on a dual-boot Windows/Ubuntu system). I used the following steps to reinstall grub in order to boot into my Ubuntu 10.04 after reinstalling Windows 7 on my machine.

1. Boot the Ubuntu LiveCD

2. Open a terminal and type
sudo fdisk -l

This lists the partition tables; in my case the root (/) partition is on /dev/sda9.

3. Mount that partition using the following command.

sudo mount /dev/sda9 /mnt

4. Run the grub-install command as below. This will reinstall grub 2.
sudo grub-install --root-directory=/mnt /dev/sda
Here "sda" is the hard disk on which your Linux distribution is installed.

5. Reboot

6. Finally, refresh the grub 2 menu using the following command

sudo update-grub

Now everything should be fine :)

Kasun GunathilakeHow to re-index all resources in GREG 4.0.0

If you need to re-index all resources in GREG 4.0.0, here is how you can do it.
This can be easily done by editing registry.xml, located in the CARBON_HOME/repository/conf folder. In registry.xml, go to the indexingConfiguration section and change the resource name of lastAccessTimeLocation to some other value.
Default value:


New Value (after changing the resource name):


Now restart the server; GREG will then re-index all resources from scratch.

Kasun GunathilakeHow to close, open ports in linux

If you want to close port 8080, this is one way of doing it if you are on Ubuntu.

type following command:
> netstat -lpn

This will list all listening ports

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      13098/java
...

Use grep to filter out the 8080 port.

You can use the following command:
> netstat -lpn | grep 8080

You'll get output something like this

tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      13098/java

Here my process id is 13098, and it is the process that is using port 8080.

Kill the process using the following command:
> sudo kill 13098

Now port 8080 is free.
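As a side note, you can verify that the port is really free again by trying to bind to it. This small Java sketch is my own illustration, not part of the original post:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if we can bind to the port, i.e. nothing is listening on it
    static boolean isFree(int port) {
        try (ServerSocket socket = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isFree(8080) ? "8080 is free" : "8080 is in use");
    }
}
```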

Kasun GunathilakeWSO2Con 2011

"WSO2Con is one fantastic week of tutorials, tech talk and networking events. Whether you are a developer, architect, IT manager or technology enthusiast, learn how global enterprises, SaaS providers and innovative startups are using WSO2 platforms to build distributed web apps, java services, bpel flows, Software-as-a-Service (SaaS) and more"

This will be held on September 12-16 in Sri Lanka. You must join this event if you are interested in SOA and cloud computing. Beyond that, many other topics (NoSQL, the Carbon platform, OSGi, security, etc.) will be covered in the conference and tutorial sessions.

You can also find more details about the WSO2 Business Activity Monitor product, which is being developed with a new architecture that can handle large volumes of data and provides a powerful framework for customizing and monitoring key performance indicators.

The speaker panel consists of experienced speakers from more than 10 countries, including speakers from Google, IBM, etc.

The complete agenda for WSO2Con 2011 can be found here.

You can get a rough idea of the topics that are going to be covered from the following image.

Kasun GunathilakeFixing ADB databinding issue when web service method returning OMElement

When I tried to call a web service which returns an OMElement, I faced the issue below (my Axis2 version is 1.6.1). The following are the steps I took to fix it.

This is the part of the stack trace.

org.apache.axis2.AxisFault: org.apache.axis2.databinding.ADBException: Any type  element type has not been given
    at org.apache.axis2.AxisFault.makeFault(
    at org.wso2.carbon.bam.presentation.stub.QueryServiceStub.fromOM(
    at org.wso2.carbon.bam.presentation.stub.QueryServiceStub.queryColumnFamily(
    at org.wso2.carbon.bam.clustermonitor.ui.ClusterAdminClient.getClusterStatistics(

If you check the schema of the response element in your generated WSDL (by Axis2), it should be similar to this:

<xs:element name="queryColumnFamilyResponse">
    <xs:complexType>
        <xs:sequence>
            <xs:element minOccurs="0" name="return" nillable="true" type="xs:anyType" />
        </xs:sequence>
    </xs:complexType>
</xs:element>

In order to fix the ADB databinding issue you need to change the above schema as follows and regenerate the stub code.

<xs:element name="queryColumnFamilyResponse">
    <xs:complexType>
        <xs:sequence>
            <xs:any processContents="skip"/>
        </xs:sequence>
    </xs:complexType>
</xs:element>

Then ADB will generate code that represents the content of the original message as an OMElement, and this will fix the problem.

Kasun GunathilakeHow to remote debug Apache Cassandra standalone server

In order to debug the Cassandra server from your favorite IDE, you need to add the following to the configuration file located in the apache-cassandra-1.1.0/conf directory.

JVM_OPTS="$JVM_OPTS -Xnoagent"
JVM_OPTS="$JVM_OPTS -Djava.compiler=NONE"
JVM_OPTS="$JVM_OPTS -Xrunjdwp:transport=dt_socket,server=y,address=5005,suspend=n"

After adding this, once you start the server you can see the following line printed in cassandra console

"Listening for transport dt_socket at address: 5005" 

This is the port that you specified in JVM_OPTS. You can change it to some other value if you want.

Now configure your IDE to run in debug mode.

Now you can debug the Apache Cassandra server from your favorite IDE :)

Kasun GunathilakeJDBC Storage Handler for Hive

I was able to complete the implementation of the Hive JDBC storage handler with basic functionality, so I thought I'd write a blog post describing its usage with some sample queries. Currently it supports writing into any database and reading from the major databases (MySQL, MSSQL, Oracle, H2, PostgreSQL). This feature comes with the WSO2 BAM 2.0.0 release.

Setting up BAM to use the Hive jdbc-handler

Please add your JDBC driver to the $BAM_HOME/repository/components/lib directory before starting the server.

Web UI for executing Hive queries.

BAM2 comes with a web UI for executing Hive queries. There is also an option to schedule the script.

User interface for writing Hive Queries

User interface for scheduling hive script

Sample on writing analyzed data into JDBC 

Here I am going to demonstrate writing analyzed data into JDBC storage. In this simple example, we'll fetch records from a file, analyze them using Hive, and finally store the analyzed data in a MySQL database.

Records - These are the records that we are going to analyze.

bread   12      12/01/2012
sugar   20      12/01/2012
milk    5       12/01/2012
tea     33      12/01/2012
soap    10      12/01/2012
tea     9       13/01/2012
bread   21      13/01/2012
sugar   9       13/01/2012
milk    14      13/01/2012
soap    8       13/01/2012
biscuit 10      14/01/2012

Hive Queries

//drop tables if already exist
drop table productTable;
drop table summarizedTable;
//create the meta table that holds the raw records (tab-separated: product, noOfItems, date)
CREATE TABLE productTable (product STRING, noOfItems INT, saleDate STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
//Load the file with above records
load data local inpath '/opt/sample/data/productInfo.txt' into table productTable;
//create the summary table backed by the JDBC storage handler
CREATE EXTERNAL TABLE IF NOT EXISTS summarizedTable (product STRING, itemsSold INT)
    STORED BY ''
    TBLPROPERTIES (
                'mapred.jdbc.driver.class' = 'com.mysql.jdbc.Driver',
                'mapred.jdbc.url' = 'jdbc:mysql://localhost/test',
                'mapred.jdbc.username' = 'username',
                'mapred.jdbc.password' = 'password',
                'hive.jdbc.update.on.duplicate' = 'true',
                'hive.jdbc.table.create.query' = 'CREATE TABLE productSummary (product VARCHAR(50) NOT NULL PRIMARY KEY, itemsSold INT NOT NULL)');
insert overwrite table summarizedTable SELECT product, sum(noOfItems) FROM productTable GROUP BY product;

View the result in mysql.

mysql> select * from productSummary;
+---------+-----------+
| product | itemsSold |
+---------+-----------+
| biscuit |        10 |
| bread   |        33 |
| milk    |        19 |
| soap    |        18 |
| sugar   |        29 |
| tea     |        42 |
+---------+-----------+
6 rows in set (0.00 sec)

Detailed description of the TBLPROPERTIES used by the storage handler.

  • mapred.jdbc.driver.class (required) - The class name for the JDBC driver to use. This should be available on Hive's classpath.
  • mapred.jdbc.url (required) - The connection URL for the database.
  • mapred.jdbc.username (optional) - The database username, if it's required.
  • mapred.jdbc.password (optional) - The database password, if it's required.
  • hive.jdbc.table.create.query (optional) - If the table already exists in the database, you don't need this. Otherwise you should provide the SQL query for creating the table in the database.
  • table name property (optional) - The name of the table in the database. It does not have to be the same as the name of the table in Hive. If you have specified the SQL query for creating the table, the handler will pick the table name from that query. Otherwise you need to specify this if your meta table name is different from the table name in the database.
  • hive.jdbc.primary.key.fields (required if you have any primary keys in the database table)
  • hive.jdbc.update.on.duplicate (optional) - Expected values are either "true" or "false". If "true", the storage handler will update the records with duplicate keys. Otherwise it will insert all data. The update can be optimized further: the default implementation uses an insert or update statement after the select statement, so there are two database round trips, but this can be reduced to one by using a DB-specific upsert statement. An example query for a MySQL database is 'INSERT INTO productSummary (product, itemsSold) values (?,?) ON DUPLICATE KEY UPDATE itemsSold=?'
  • hive.jdbc.upsert.query.values.order (optional) - If you are using an upsert query, then this is mandatory. A sample value for the above query would be 'product,itemsSold,itemsSold' (the values order for each question mark).
  • hive.jdbc.input.columns.mapping (optional) - This is mandatory if the field names in your meta table and database table are different. Provide the field names of the database table in the same order as the field names in the meta table, as ','-separated values, e.g. productNames,noOfItemsSold. These will map to the meta table's product,itemsSold field names.
  • input table name property (optional) - Used when reading from a database table. This is needed if the meta table name and the database table name are different.

Sample on reading from JDBC.

Now I am going to read the previously saved records from MySQL using the Hive jdbc-handler.

Hive queries

//drop table if already exists
drop table savedRecords;
//create the meta table that maps to the database table
CREATE EXTERNAL TABLE IF NOT EXISTS savedRecords (product STRING, itemsSold INT)
    STORED BY ''
    TBLPROPERTIES (
                    'mapred.jdbc.driver.class' = 'com.mysql.jdbc.Driver',
                    'mapred.jdbc.url' = 'jdbc:mysql://localhost/test',
                    'mapred.jdbc.username' = 'username',
                    'mapred.jdbc.password' = 'password',
                    '' = 'productSummary');
SELECT product,itemsSold FROM savedRecords ORDER BY itemsSold;

This will give all the records in the productSummary table.

Kasun GunathilakeWSO2 Business Activity Monitor 2.0.0 released ....!!!!

We spent almost a year on releasing WSO2 BAM 2.0.0, completely re-writing it twice from BAM 1.x.x to BAM 2.0.0 according to the new architecture, suggestions and improvements. We finally released it today; below is the release note for BAM 2.0.0 :)

WSO2 Business Activity Monitor 2.0.0 released!

The WSO2 Business Activity Monitor (WSO2 BAM) is an enterprise-ready, fully open source, complete solution for aggregating, analyzing and presenting information about business activities. Aggregation refers to the collection of data, analysis refers to the manipulation of data in order to extract information, and presentation refers to representing this data visually or in other ways such as alerts. The WSO2 BAM architecture reflects this natural flow in its design.
Since all WSO2 products are based on the component-based WSO2 Carbon platform, WSO2 BAM is lean, lightweight and consists of only the required components for efficient functioning. It does not contain unnecessary bulk, unlike many over-bloated, proprietary solutions. WSO2 BAM comprises only the required modules to give the best performance, scalability and customizability, allowing businesses to achieve time-effective results for their solutions without sacrificing performance or the ability to scale.
The product is available for download at:

  • Key Features

    Collect & Store any Type of Business Events

    • Events are named, versioned and typed by event source
    • Event structure consists of (name, value) tuples of business data, metadata and correlation data
  • High Performance Data Capture Framework

    • High performance, low latency API for receiving large volumes of business events over various transports including Apache Thrift, REST, HTTP and Web services
    • Scalable event storage into Apache Cassandra using column families per event type
    • Non-blocking, multi-threaded, low impact Java Agent SDK for publishing events from any Java based system
    • Use of Thrift, HTTP and Web services allows event publishing from any language or platform
    • Horizontally scalable with load balancing and highly available deployment
  • Pre-Built Data Agents for all WSO2 Products

  • Scalable Data Analysis Powered by Apache Hadoop

    • SQL-like flexibility for writing analysis algorithms via Apache Hive
    • Extensibility via analysis algorithms implemented in Java
    • Schedulable analysis tasks
    • Results from analysis can be stored flexibly, including in Apache Cassandra, a relational database or a file system
  • Powerful Dashboards and Reports

    • Tools for creating customized dashboards with zero code
    • Ability to write arbitrary dashboards powered by Google Gadgets and JaggeryJS
  • Installable Toolboxes

    • Installable artifacts to cover complete use cases
    • One click install to deploy all artifacts for a use case

Issues Fixed in This Release

All fixed issues have been recorded at -

Known Issues in This Release

All known issues have been recorded at -

Engaging with Community

Mailing Lists

Join our mailing list and correspond with the developers directly.

Reporting Issues

WSO2 encourages you to report issues, enhancements and feature requests for WSO2 BAM. Use the issue tracker for reporting issues.

Discussion Forums

We encourage you to use stackoverflow (with the wso2 tag) to engage with developers as well as other users.


WSO2 Inc. offers a variety of professional Training Programs, including training on general Web services as well as WSO2 Business Activity Monitor and a number of other products. For additional support information please refer to


We are committed to ensuring that your enterprise middleware deployment is completely supported from evaluation to production. Our unique approach ensures that all support leverages our open development methodology and is provided by the very same engineers who build the technology.
For additional support information please refer to
For more information on WSO2 BAM, and other products from WSO2, visit the WSO2 website.

We welcome your feedback and would love to hear your thoughts on this release of WSO2 BAM.
The WSO2 BAM Development Team

Kasun GunathilakeA Fix for Huawei E220 connection issue with ubuntu 12.04

After installing Ubuntu 12.04, I faced an issue when connecting to the internet with my Huawei E220 dongle. I did some Google searching and found a bug report relating to this [1]. After going through the issue I found a workaround which fixes it.

This is the workaround.

You should execute following command as root.

echo -e "AT+CNMI=2,1,0,2,0\r\nAT\r\n" > /dev/ttyUSB1 

Now try to connect your dongle again. It works for me until the dongle is removed from the USB port. Thanks Nikos for your workaround :)

Kasun GunathilakeConfiguring Hive metastore to remote database - WSO2 BAM2

Hive Metastore

The Hive metastore is the central repository which is used to store Hive metadata. We use an embedded H2 database as the default Hive metastore; therefore only one Hive session can access the metastore at a time.

Using a remote MySQL database as the Hive metastore

You can configure the Hive metastore to use a MySQL database as follows.

Edit hive-site.xml, located in the WSO2_BAM2_HOME/repository/conf/advanced/ directory:

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/hive_metastore</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>username</value>
    <description>username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>password</value>
    <description>password to use against metastore database</description>
</property>

Put the MySQL driver into WSO2_BAM2_HOME/repository/components/lib.

You have now successfully configured the Hive metastore to use the MySQL database. Now restart the BAM server.

Kasun GunathilakeArticle on Monitor Your Key Performance Indicators using WSO2 BAM.

I've written an article for the WSO2 library explaining how to monitor your key performance indicators using WSO2 BAM.

WSO2 BAM is an enterprise-ready, fully open source, complete solution for aggregating, analyzing and presenting information about business activities. It also supports big data analytics and storage via Apache Hadoop, Hive and Cassandra.

This article focuses on KPI monitoring via WSO2 BAM, and is organized around the following topics.

  • Introduction
  • BAM architecture
  • Use case
    • KPIs for this use case
  • Collecting information for the use case
    • BAM data-agent (Java API)
    • Non Java Data-agent
    • REST API
  • Viewing collected information using Cassandra explorer
  • Data Analysis
    • Writing a hive script for analyzing captured data
  • Visualizing the KPIs.

Kasun GunathilakeShell script edited on windows - Issue when executing on linux

I faced the above issue when trying to execute a script after editing it on Windows. The two problems are due to the BOM character and the carriage return (\r) characters present in the file.

  • BOM (byte order mark) character -  This is a Unicode character used to signal the order of bytes in a text file or stream.
  • Carriage return (\r) -  Editors used on Windows need both '\r' and '\n' together ('\r\n') to mark a new line, but Unix understands only '\n'.

These characters are used on Windows, but Unix shells don't understand them, so you might face issues when running a bash script edited on Windows. To fix this you need to remove those characters. This is how you can do it.

BOM character issue
You might see the following issue coming from the first line of the script.

": No such file or directory1: #!/bin/bash" - If you get this kind of error from the first line of your script, cross-check the script; if there is no visible issue, you can run the
following command (script.sh stands for your script):

$ head -n 1 script.sh | LC_ALL=C od -tc
0000000 357 273 277   #   !   /   b   i   n   /   b   a   s   h  \r  \n

If you can see the "357 273 277" sequence in the output, that is the BOM character, and you need to remove it.

* Open the script using vim
* Type ":set nobomb" in command mode and press enter - this will remove the BOM character from your file.
* Save the file and close - :wq

Carriage return issue

Carriage returns present in the script might throw these errors:

"$'\r': command not found"
"syntax error near unexpected token `$'do\r''"

To fix this you need to remove the \r characters from your script. Use any unix way to replace \r character with empty string.

* String replace using sed command

$ sed -i 's/\r//g' script.sh

* String replace using perl

$ perl -pi -e 's/\r//g' script.sh
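For completeness, the same cleanup can be sketched in Java (my own illustration, not from the post): strip a leading BOM (U+FEFF) and all carriage returns from the script's contents:

```java
public class DosToUnix {
    // Removes a leading Unicode BOM and all '\r' characters, mirroring the
    // ":set nobomb" and sed 's/\r//g' steps above
    static String clean(String text) {
        if (!text.isEmpty() && text.charAt(0) == '\uFEFF') {
            text = text.substring(1);      // drop the BOM
        }
        return text.replace("\r", "");     // drop carriage returns
    }

    public static void main(String[] args) {
        String windowsScript = "\uFEFF#!/bin/bash\r\necho hello\r\n";
        System.out.print(clean(windowsScript));   // the script with Unix line endings
    }
}
```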

Now the script is ready to run in unix :)

Kasun GunathilakeUbuntu - Gnu parallel - It's awesome

GNU parallel is a shell package for executing jobs in parallel using one or more nodes. If you have used xargs in shell scripting then you will find it easy to learn GNU parallel,
because GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find that GNU parallel can replace most of them and make them run faster by running several jobs in parallel.

To install the package

sudo apt-get install parallel

Here is an example of how to use GNU parallel.

If you have a directory with large log files and you need to compute the number of lines in each file and find the largest one, you can do it efficiently with GNU Parallel, which utilizes all the CPU cores in the server very efficiently.

In this case the heaviest operation is calculating the number of lines of each file; instead of doing this sequentially, we can do it in parallel using GNU Parallel.

Sequential way

ls | xargs wc -l | sort -n -r | head -n 1

Parallel way

ls | parallel wc -l | sort -n -r | head -n 1

This is only one example, like this you can optimize your operations using GNU parallel. :)
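As a rough analogy (my own sketch, not from the post), the same count-lines-and-pick-the-largest pipeline looks like this with Java parallel streams:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

public class LargestLog {
    // Line count of one file: the per-file work that GNU parallel spreads across cores
    static long lineCount(Path p) {
        try (Stream<String> lines = Files.lines(p)) {
            return lines.count();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Counts the lines of every file in the directory in parallel and
    // returns the name of the file with the most lines
    static String largest(Path dir) throws IOException {
        try (Stream<Path> files = Files.list(dir)) {
            return files.parallel()
                        .max(Comparator.comparingLong(LargestLog::lineCount))
                        .map(p -> p.getFileName().toString())
                        .orElseThrow(IllegalStateException::new);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("logs");   // stand-in for the log directory
        Files.write(dir.resolve("big.log"), List.of("a", "b", "c"));
        Files.write(dir.resolve("small.log"), List.of("a"));
        System.out.println(largest(dir));               // prints big.log
    }
}
```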

Saliya Ekanayakeඉඩ දෙන්න මා හදට

Dedunu DhananjayaEnterprise Messaging

Messaging was never made this easy. This is the first O'Reilly video tutorial I followed, and I should mention that I am really happy with it. Before I watched this tutorial I had no idea about JMS, but in it Mark Richards explains messaging concepts as well as JMS implementations.

Most developer tutorials don't properly explain the essential administration details, but the author covers concepts, implementations and administration tips so smoothly you hardly notice. The examples are really simple and easy to understand. If you want to start learning enterprise messaging, this is the right tutorial to follow. You should have an understanding of Java language basics; other than that, there is nothing else you need to know.

This course was also highly addictive. I couldn't stop myself from finishing the video tutorial; although I had other work to do, I couldn't do anything else. I love this tutorial. It's simple and easy to understand.

Mark compares JMS 1.0 and JMS 2.0 in a very effective manner, with examples. That makes all the features really easy to understand. Unlike with a book, I found watching this video really helpful for learning JMS.

I can recommend this tutorial unconditionally to anyone who is willing to learn JMS.

You can buy this tutorial from here :

John MathonPrivacy! This is egregious and illustrative example of way over the line behavior.


I have written about privacy before. It is a topic I feel very strongly about. Here is my other blog entry, in which I specify what I think should be legislated to help bring some sanity to the privacy debate.

A Case Study

This morning I read an article in buzzfeed:



The article reports that in a “closed” meeting an Uber executive suggested (possibly jokingly, although the article makes that seem unlikely) that Uber could create a slush fund of $1M to pay operatives who would dig up information about reporters who report negatively on Uber, and leak that information to the press to hurt those reporters. He said in particular that he had specific personal information on a reporter at BuzzFeed who is critical of Uber, which he could release and which would be damaging to her.

This is a nightmare scenario. We are doing commerce in all kinds of places in the cloud, with all kinds of new businesses. Many of these businesses have information that we would not like broadcast. Even a HINT that a business takes our privacy less than 100% seriously is a KILLER for that business, in my opinion.

There are many reasons our privacy is under attack. The government is constantly seeking more and more ability to gather information about all of us. Corporations want to use private information to offer us services and make money off us, hackers are trying to break into corporations to find personal information, and IT personnel may in many cases not follow best practices, allowing information to be obtained. Some may say that privacy is a lost cause. However, there is still a line that hasn't been crossed.

WHAT WE DON'T EXPECT, and never will tolerate in any way, is the idea that a corporation would leverage personal information to attack its critics, or create slush funds to do so! In my opinion Uber needs to quash the idea that any of its executives ever think anything like this! The idea that Uber would allow customer information to be disclosed is a deal killer, in my opinion.

Uber said it takes customer information absolutely seriously, that it has rules and policies, and that violations of those policies are punished. Great, but it isn't believable. Why? Emil Michael is reported to have said that he had specific information on this reporter. How did he get this information if it is guarded at Uber? Uber's response that it protects our privacy is unbelievable precisely because Mr Michael obviously got hold of such information. So how seriously are we supposed to take their protestations?

Mr Michael (and maybe others at Uber) are apparently upset at this reporter because she criticized Uber for striking a deal with a French escort service. Uber rightly commented that escorts taking Uber were less likely to be attacked than in regular taxis. The reporter seemed to feel Uber was not taking women's rights seriously by signing a deal with an escort agency. Whatever you believe about escorts, her criticism doesn't seem leveled at the escorts or their patrons; somehow Uber, by providing rides, is evil. I fail to see the logic of this attack. There is no way for me to logically conclude that Uber is being unsympathetic to women in any way by transporting them. So this attack is weak and pathetic. Nobody would really take the reporter's criticism of Uber seriously, so Mr Michael's spiteful attitude is more damning than the reporter's article. Mr Michael is wrong and stupid on numerous points:

1) His response is way out of proportion to the attack on Uber

2) His idea of a slushfund to gather information is wrong and evil

3) His idea of disclosing the information is wrong and evil

4) Gathering information on this reporter is wrong and evil

This can't be a “joke”, because Mr Michael claims to have specific information with which to attack the reporter. His slush-fund and “plumbers” ideas were not funny notions he came up with on the spur of the moment; he researched the reporter himself. So it is impossible to see how this could be a joke or the simple meanderings of his mind. He threatened to make public specific private information about the reporter.

In all respects Mr Michael has made egregious errors that border on incompetence. He said the meeting was considered off the record; even so, there is no way to interpret his statements in an ethical way. They are simply wrongheaded, and especially wrongheaded in an eCommerce company that holds private information on its patrons. This is about the worst thing that can be said about Uber, if true.

I am very hopeful that the executives who run Uber are not spiteful, deceptive executives interested in playing with the personal information of its patrons for their own advantage. I have no reason to believe that any other executive at Uber believes anything like what this executive apparently said. However, if Mr Michael continues to be an employee of Uber, then the only conclusion is that they are somewhat accepting of such conduct, which makes me wonder. So I shall wait and see what happens to Mr Michael.

The General Problem with Privacy

I believe the privacy issue is complicated. There is no “solution” that is going to solve it easily. Even improvements are going to be hard to achieve, and the trend right now is definitely in the opposite direction: our privacy is less and less protected.

The problem:

Data is TOO dispersed

The nature of the current system is that every entity can gather information about you. They must disclose policies, but those policies are way too complicated, and too many organizations have access for enforcement to be reasonable. Virtually every company I talk to is gathering vast amounts of information about us. Frequently they can easily ask us to disclose information other organizations know about us, and we unwittingly become pawns in distributing vastly more information to an entity than it needs to know. The organizations have limited responsibility to ensure the accuracy of any information they get, or its source, or to disclose whether they have information at all. They have no way of redacting information.

Let's say someone is incorrectly attributed with an attribute. It could be a transaction, a charge, a statement or anything that is in error, i.e. they did not do the thing. Some people may interpret the thing as positive and others as negative. As the person goes around the internet doing business, authorizing different organizations to get this or that information from this or that organization, the information spreads. Sometimes the organization that has the erroneous information may sell it to third parties. Pretty soon hundreds of copies of this erroneous information have been distributed. Now this unfortunate person goes to look for employment, or credit, or to buy a house, and people discover this information. The person is denied the job or the credit, or simply finds people won't do a deal with them, and they may not even know why.

For financial information we have some rules. If someone denies you credit, they have to disclose where they got the negative information. You can typically seek to correct the information if it is in a credit report. The credit reporting agencies are required to disclose to you that they have information about you, and to give you free reports of what they know. However, we don't have such guarantees about information in the cloud.

The organizations in the cloud that have erroneous information about you (possibly through no fault of your own) have no obligation to disclose to you that they have such information, to give you any means to correct or redact it, or to let you add your own comments to counter it. So you may not know you are being denied things. You may not know what is known about you. And if you do find out, you have no way to correct the information.

There are two approaches to solving this problem: 1) require that organizations remove personal information about you after, say, 3 years as a matter of course; 2) put in place procedures similar to those for financial information, which require that organizations holding personal information disclose it to you and allow you to challenge it.

Either of these methods costs money. The first method is easier, but some people may want information about themselves retained longer than 3 years. I also advocate that the 3-year period be shorter for younger people, as they are more likely to make stupid errors. Information for people under 18, for instance, should routinely be removed after 1 year. In order to do this, an organization may need to know your age, which is more personal information that it otherwise may not need to know. I realize no method is going to be foolproof or costless. However, measures such as these would at least get the ball rolling in the right direction, rather than the very wrong direction things are going today.

The data has value to consumers, corporations and the government

Certainly a big part of this problem is that personal information is valuable to everybody, including consumers themselves, for doing things they consider desirable. Foreign governments are hacking into cloud companies for numerous purposes and represent 25% of all hacking according to some measures. The value of information gives people an incentive to hack, steal and disclose it. In the case study I referred to above, the executive at Uber thought the personal dirt they could dig up would help them hurt their opponents. This is a reason for the proliferation of information, but I don't know how to decrease its value other than for all of us to get thicker skin and to be aware of the ways people can misuse information. If you see personal information about someone that seems negative, consider that nobody is perfect. Everyone is unique. Don't ascribe negative things to people based on third parties if you don't need to, and if you need to know whether something is true, make sure to give the person a chance to respond. I know I am generally speaking to a wall on this point, because people do seem to like dirt, and they believe dirt too readily. I am repelled by Mr. Michael's alleged comments and I hope others are too. If there really is an explanation for Mr. Michael's behavior, I am very willing to entertain any explanation he has on these pages, in his words, and to edit if I am in error about what he is said to have said.

The systems are imperfect

Our authentication systems are imperfect. Our systems are hackable. Our processes are imperfect. That this is the case is irrefutable, and it is an excuse for why privacy breaches happen, but better and better systems, processes and best practices are evolving. We now see two-factor authentication at most cloud sites. This is a huge improvement. Private information can be encrypted so that even if the data itself is obtained it cannot be interpreted. What is not permissible is failing to strive to learn and get better.

People are flawed

People make mistakes. No news there. Mr. Michael, for instance, has the wrong attitude about privacy and its importance. He seems to have misjudged the acceptable ways to respond to attack. In every way, the systems are created and run by people, who themselves can be bribed or act in bad faith. There are ways to at least put in incentives for people to do better. Breaking people's privacy should be a fireable offense at any company which holds private information on individuals. Simply disclosing private information should be a crime, whether or not it is negative information.


The Uber case points out another angle I had hoped would not be breached by a corporation, which is willful abuse of private information. We know there are all kinds of accidental disclosures, human errors and systemic errors; willful abuse, however, should not be tolerated.


Dimuthu De Lanerolle

Useful Git commands

Q: How can I merge a specific pull request into my local git repo?

A: You can easily merge a desired pull request using the following command (78 here is the pull request number). If you are doing this merge for the first time, clone a fresh checkout of the master branch to your local machine and run the command from the console.
git pull <remote-repo-url> +refs/pull/78/head

Q: How do I get the remote repo location my local git repo points to?

A: The command below shows the remote repo location(s) your local repo is pointing to.

git remote -v

Q: Can I point my local repo at a different remote repo URL?

A: Yes. You can point to another repo URL as below.

git remote set-url origin <new-repo-url>


Q: I need the build to continue no matter whether I get build failures. Can I do that with a Maven build?

A: Yes. Try building like this ("-fn" stands for "fail never").

mvn clean install -fn

sanjeewa malalgodaHow to use a custom authentication header and pass it as the auth header to the back-end server (in addition to the bearer token)

In this article we describe how to use a custom authentication header and pass it as the auth header to the back-end server.

You can add a mediation extension [1] and have a custom global sequence in the API gateway which will assign the value of your custom authentication header to the Authorization header.

<sequence name="WSO2AM--Ext--In" xmlns="">
    <property name="Authentication" expression="get-property('transport', 'Authentication')"/>
    <property name="Authorization" expression="get-property('Authentication')" scope="transport" type="STRING"/>
    <property name="Authentication" scope="transport" action="remove"/>
</sequence>

In order to add the custom mediation, visit '/repository/deployment/server/synapse-configs/default/sequences' and create an xml file (Ex: global_ext.xml) to contain your mediation extension.
Then include above synapse configuration in that xml. (I have attached the custom global sequence xml here).

When you invoke your REST API via a REST client, configure that client to send a custom header (Ex: Authentication) for your basic authentication credentials, and configure the 'Authorization' header to contain the bearer token for the API.

So, what happens is something like this:
Client (headers: Authorization, Authentication) -> Gateway (drop: Authorization, convert: Authentication -> Authorization) -> Backend
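As a rough illustration of the client side, the two header values can be prepared like this (the token and credentials below are placeholders, not values from the article):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CustomAuthHeaders {
    public static void main(String[] args) {
        String accessToken = "my-oauth-access-token"; // placeholder bearer token
        String user = "admin";                        // placeholder basic-auth user
        String pass = "admin";                        // placeholder basic-auth password

        // Standard OAuth bearer header; the gateway validates and drops this one
        String authorization = "Bearer " + accessToken;

        // Custom header carrying basic-auth credentials; the sequence above
        // renames it to Authorization before sending to the back end
        String authentication = "Basic " + Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));

        System.out.println("Authorization: " + authorization);
        System.out.println("Authentication: " + authentication);
    }
}
```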


Niranjan KarunanandhamUser / Role based access to API using WSO2 API Manager 1.6.0

In WSO2 API Manager (APIM) 1.7, you can create "scopes" and add multiple user roles to a created scope. A scope can be mapped to an API resource (GET, POST, DELETE, PUT, OPTIONS). When an API resource is invoked, APIM will allow that request to go through only if the request access token is linked to a user account with a user role as defined in the scope of the invoked API resource. This blog explains how this works in APIM 1.7.0.

In APIM 1.6, an access token is generated uniquely for the composite key 'client-id' and 'user-id'. Therefore role-based access to an API is not as straightforward as in APIM 1.7.0. It can be achieved by using a XACML entitlement server. Special thanks go to Rushmin Fernando, who suggested this. WSO2 APIM can delegate policy decision making to a XACML entitlement server, which evaluates each request against pre-defined XACML policies to allow or deny service invocations. By default the XACML entitlement server is not shipped with the API Manager distribution; however, it can be installed via the feature manager available in the API Manager. Given below are the steps to install and use XACML policies to allow or deny services:

1. Install the XACML components to WSO2 API Manager
Log into the admin console and select the "Configure" tab from the tabs available on the left-hand side. Then select the "Features" option and add a repository. Give any name and the location as "". Once the repository is added, click on the "Available Features" tab, select the new repository, un-tick the "Group features by category" option and click on the "Find Features" button. Then select the two XACML features (i.e., XACML Mediation version 4.2.2 and XACML version 4.2.0) and click on Install. Once the features are installed, restart WSO2 API Manager.

2. Create an API (API name "sample", with a resource named "retrieve" and method GET) via the API Publisher and change its status to published

3. Log in again to the admin console, select the "Main" tab from the tabs available on the left-hand side and select "Source View". This displays the synapse configuration of all the APIs available in the API Manager. Search for the newly created API ("sample") and add the following configuration below the <inSequence> element:

<property name="xacml_use_rest" value="true" scope="axis2" type="STRING"/>
<property name="username" expression="$ctx:END_USER_NAME" scope="axis2" type="STRING"/>

Then add the entitlement service configuration "<entitlementService remoteServiceUrl="https://localhost:9443/services" remoteServiceUserName="admin" remoteServicePassword="admin" client="basicAuth">", followed by what it should do if the request is allowed or denied, and click on the "Update" button.

4. Add the entitlement policies to the Entitlement Server.
XACML policies need to be added to WSO2 API Manager so that API invocations are evaluated against them. In the WSO2 API Manager's admin console, select "Policy Administration" and click on the "Add New Entitlement Policy" icon. From the list of policy creation methods, select "Basic Policy Editor". In the "Create XACML Policy" page, enter a policy name. For "Resource Names", select "equal" and enter the API resource name ("/retrieve"). For "User's", select "Role" / "User" from the first drop-down, "equal" from the second drop-down and enter the role name (the role / user which should have access to this API). For "Action Name", select "equal", enter the API resource method ("GET") and click on the "Finish" button.

In the "Policy Administration" page, click on "Publish to My PDP" icon for the policy that you created. Once the policy is published, click on the "PDP Policy View" and enable the policy in the PDP.

Now the API (sample) can only be accessed by the user / users (who are assigned to the role) given in the policy.
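For reference, the Basic Policy Editor generates the XACML for you, and the exact policy it emits will differ, but the result is roughly along these lines. This is only an illustrative sketch: the role value ("manager"), the role attribute id and the combining algorithm are assumptions, not taken from the article.

```xml
<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="sample_retrieve_policy" Version="1.0"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit">
  <Target/>
  <Rule RuleId="permit_role_get_retrieve" Effect="Permit">
    <Target>
      <AnyOf>
        <AllOf>
          <!-- match the API resource name -->
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">/retrieve</AttributeValue>
            <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
          </Match>
          <!-- match the HTTP verb -->
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">GET</AttributeValue>
            <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action"
                AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
          </Match>
          <!-- match the user's role (attribute id assumed) -->
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">manager</AttributeValue>
            <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                AttributeId="http://wso2.org/claims/role"
                DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
          </Match>
        </AllOf>
      </AnyOf>
    </Target>
  </Rule>
</Policy>
```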

Nandika JayawardanaWS-BPEL 2.0 Beginner's Guide Book Review

I had the opportunity to read the WS-BPEL 2.0 Beginner's Guide from Packt Publishing. The authors of this book have done a very good job of explaining the concepts in a simple and concise manner.

It is a very descriptive and practical guide for beginners in BPEL. Writing an executable BPEL process is a very different task compared to writing code in a general-purpose programming language. The reason is that you need background knowledge of a lot of technologies in order to properly understand and implement a BPEL process. The minimum set of those technologies includes SOAP / HTTP web services, WSDL, XML, XML Schema and XPath.

Hence, WS-BPEL 2.0 Beginner's Guide takes an ideal approach for a beginner. It starts by introducing the basic concepts and goes straight into a practical example. It chooses Oracle SOA Suite as the target technology stack and JDeveloper as its development environment for BPEL, and provides step-by-step screenshots on how to implement a process. Next it explains each and every step taken in implementing the sample process and how to deploy and test the process. I find this approach very useful, simply because, when learning a complex technology like BPEL, the best approach is to start with simple exercises to get a feel for the technology and then dive into the more complex topics step by step.

This pattern is followed in all the chapters. Each new chapter introduces a concept from BPEL, goes on to a practical example explaining the details, and finally tests the process. Hence, when you finish reading the book, not only will you understand the concepts of BPEL, but you will also have mastered the BPEL development tool. As BPEL is developed mostly using graphical tools, mastering the development environment is an essential skill for becoming a skilled BPEL developer.

The book explains the concepts in words as well as with diagrams. It covers all the concepts from the BPEL specification, including topics such as synchronous processes, asynchronous processes, message correlation, fault handling, compensation handling, etc.

In addition to BPEL concepts, the book covers the WS-Human Tasks space as well. The human tasks tooling capabilities of JDeveloper, as well as the concepts, are explained in a concise manner. Many practical process implementations in the industry involve BPEL as well as human tasks. Hence, for a beginner, this book is an ideal guide to mastering BPEL-based workflow technologies. This book can also be useful for an experienced BPEL developer migrating from another tool to JDeveloper.

Finally, I would recommend this book to anyone who is new to BPEL and is looking for a practical guide to learning BPEL-related workflow technologies.

Dedunu DhananjayaHow to swap two integer variables without a third variable? - Java

Well, we can swap two integer variables using a third variable; that is not a problem. Let's say we want to swap two variables without using a third variable. For that you can use simple mathematical operations such as addition and subtraction. Multiplication and division can introduce an error into the data (for example, when swapping 1 and 3, integer division loses precision). Using subtraction and addition, you can swap two variables without affecting their values.
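A minimal sketch of the addition/subtraction swap (note that even if a + b overflows, Java's wrap-around integer arithmetic still produces the correct swapped values, because the subtraction undoes the overflow):

```java
public class SwapWithoutTemp {
    public static void main(String[] args) {
        int a = 1;
        int b = 3;

        a = a + b; // a now holds the sum (4)
        b = a - b; // b becomes the original a (1)
        a = a - b; // a becomes the original b (3)

        System.out.println("a = " + a + ", b = " + b); // prints "a = 3, b = 1"
    }
}
```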

Dedunu DhananjayaPrinciples of Big Data

I was really interested in reading this book, but it took me a long time to read it. The concepts explained in this book are really important. To a person who wants to start learning about Big Data concepts, I will recommend this book for sure.

If you have prior knowledge of BI tools and techniques, this book will help you learn quicker than any other book on the market. This book won't get outdated like a technology book, because the concepts are valid for anything you are going to do.
I am working with Big Data researchers as part of my job, so I found this book very helpful for learning Big Data concepts, and that made my life easier.

I like this book and I will recommend it to anyone who wants to learn about Big Data concepts, no matter whether they are students, novices or intermediate readers.

You can purchase a copy of this book from:

Dedunu Dhananjayagitignore file for Java

I work with both Java projects and Git. Sometimes opening a project in a different IDE creates extra files which we really don't want. Adding a .gitignore file to the root folder of your Git repository makes Git ignore the files and patterns you have mentioned. I found there are three main Java IDEs currently:
  • Eclipse
  • IntelliJ IDEA
  • NetBeans
I am very sorry if your favourite IDE is not listed here. I went through Bitbucket and GitHub (link to gitignore templates) and then created my own .gitignore file. I hope this helps you.
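For reference, a minimal .gitignore along those lines might look like this (the patterns are drawn from the commonly published templates for these three IDEs; adjust to your own projects):

```gitignore
# Compiled class files and build output
*.class
target/
build/

# Eclipse
.classpath
.project
.settings/

# IntelliJ IDEA
.idea/
*.iml

# NetBeans
nbproject/private/
nbbuild/
dist/
```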

Dinusha SenanayakaHow to enable login to WSO2 API Manager Store using Facebook credentials

The WSO2 Identity Server 5.0.0 release provides several default federated authenticators, such as Google, Facebook and Yahoo. It is also possible to write a custom authenticator, in addition to the default authenticators provided.

In this post we are going to demonstrate how to configure WSO2 API Manager with WSO2 Identity Server so that users coming to the API Store can use their Facebook accounts to log in to the API Store.

Step 1 : Configure SSO between API Store and API Publisher

First you need to configure SSO between the Publisher and Store as mentioned in this document.

Step 2: You need to have an App ID and App Secret key pair generated for an application registered on the Facebook developers site. This can be done by logging in to the Facebook developer site and creating a new app.

Step 3: Log in to the Identity Server and register an IdP with the Facebook authenticator

This can be done by navigating to Main -> Identity Providers -> Add. This will prompt the following window. In the "Federated Authenticators" section, expand "Facebook Configuration" and provide the details.

The App ID and App Secret generated in step two map to the Client Id and Client Secret values asked for in the form.

Step 4: Go to the two service providers created in step 1 and associate the above created IdP with them.

This configuration is available under "Local & Outbound Authentication Configuration" section of the SP.

Step 5: If you now try to access the Store URL (i.e., https://localhost:9443/store), it should redirect to the Facebook login page.

Step 6: In order for Store users to be able to use their Facebook account as a login, they need to follow this step and associate their Facebook account with their user account in the API Store.

The Identity Server provides a dashboard which gives users multiple features for maintaining their user accounts. Associating a social login with their account is one option provided in this dashboard.

This dashboard can be accessed at the following URL,

e.g., https://localhost:9444/dashboard

Note: If you are running the Identity Server with a port offset, you need to make the changes mentioned here in order to get the dashboard working.

Log in to the dashboard with the API Store user account. It will give you a dashboard like the following.

Click on the "View details" button provided in the "Social Login" gadget. In the prompted window, there is an option to "Associate Social Login". Click on this and give your Facebook account id as follows.

Once the account is registered, it will be listed as follows.

That's all we have to configure. This user should now be able to log in to the API Store using his Facebook account.

Note: This post explained how users who already have an account in the API Store can associate their Facebook account to authenticate to the API Store. If someone needs to enable API Store login for all Facebook accounts, without an existing user account in the API Store, that should be done through a custom authenticator added to the Identity Server, i.e., provision the user using the Just-In-Time (JIT) provisioning functionality provided in the IdP and, via the custom authenticator, assign the "subscriber" role to the provisioned user.

sanjeewa malalgodaHow to enable mutual SSL connection between WSO2 API Manager gateway and key manager

In WSO2 API Manager, the gateway makes service calls to the key manager to validate tokens.
For this it uses the key validation client. Let's add the following code and build a jar file.

import org.apache.axiom.soap.SOAPHeaderBlock;
import org.apache.axis2.AxisFault;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.axis2.context.MessageContext;
import org.apache.axis2.context.ServiceContext;
import org.apache.axis2.transport.http.HTTPConstants;
import org.apache.commons.httpclient.Header;
import org.wso2.carbon.apimgt.api.model.URITemplate;
import org.wso2.carbon.apimgt.gateway.internal.ServiceReferenceHolder;
import org.wso2.carbon.apimgt.impl.APIConstants;
import org.wso2.carbon.apimgt.impl.APIManagerConfiguration;
import org.wso2.carbon.apimgt.impl.dto.APIKeyValidationInfoDTO;
import org.wso2.carbon.apimgt.keymgt.stub.validator.APIKeyValidationServiceAPIManagementException;
import org.wso2.carbon.apimgt.keymgt.stub.validator.APIKeyValidationServiceStub;
import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityConstants;
import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException;
import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMNamespace;
import org.wso2.carbon.utils.CarbonUtils;
import javax.xml.namespace.QName;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Map;

public class APIKeyValidatorClient {

private static final int TIMEOUT_IN_MILLIS = 15 * 60 * 1000;
private APIKeyValidationServiceStub clientStub;
private String username;
private String password;
private String cookie;

public APIKeyValidatorClient() throws APISecurityException {
    APIManagerConfiguration config = ServiceReferenceHolder.getInstance().getAPIManagerConfiguration();
    String serviceURL = config.getFirstProperty(APIConstants.API_KEY_MANAGER_URL);
    // username = config.getFirstProperty(APIConstants.API_KEY_MANAGER_USERNAME);
    // password = config.getFirstProperty(APIConstants.API_KEY_MANAGER_PASSWORD);
    /* if (serviceURL == null || username == null || password == null) {
        throw new APISecurityException(APISecurityConstants.API_AUTH_GENERAL_ERROR,
                "Required connection details for the key management server not provided");
    } */
    try {
        ConfigurationContext ctx = ConfigurationContextFactory.createConfigurationContextFromFileSystem(null, null);
        clientStub = new APIKeyValidationServiceStub(ctx, serviceURL + "APIKeyValidationService");
        ServiceClient client = clientStub._getServiceClient();
        Options options = client.getOptions();
        options.setProperty(HTTPConstants.SO_TIMEOUT, TIMEOUT_IN_MILLIS);
        // Attach the mutual SSL UserName header to calls made through this client
        setMutualAuthHeader(client, username);
    } catch (AxisFault axisFault) {
        throw new APISecurityException(APISecurityConstants.API_AUTH_GENERAL_ERROR,
                "Error while initializing the API key validation stub", axisFault);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
public APIKeyValidationInfoDTO getAPIKeyData(String context, String apiVersion, String apiKey,
                                             String requiredAuthenticationLevel, String clientDomain,
                                             String matchingResource, String httpVerb) throws APISecurityException {
    // CarbonUtils.setBasicAccessSecurityHeaders(username, password,
    // true, clientStub._getServiceClient());
    if (cookie != null) {
        clientStub._getServiceClient().getOptions().setProperty(HTTPConstants.COOKIE_STRING, cookie);
    }
    try {
        List headerList = (List) clientStub._getServiceClient().getOptions()
                .getProperty(HTTPConstants.HTTP_HEADERS);
        if (headerList == null) {
            headerList = new ArrayList();
        }
        // The property name was truncated in the original post; TRANSPORT_HEADERS is the usual source
        Map headers = (Map) MessageContext.getCurrentMessageContext()
                .getProperty(MessageContext.TRANSPORT_HEADERS);
        if (headers != null && headers.get("activityID") != null) {
            headerList.add(new Header("activityID", (String) headers.get("activityID")));
            clientStub._getServiceClient().getOptions().setProperty(HTTPConstants.HTTP_HEADERS, headerList);
        }
        org.wso2.carbon.apimgt.impl.dto.xsd.APIKeyValidationInfoDTO dto =
                clientStub.validateKey(context, apiVersion, apiKey, requiredAuthenticationLevel, clientDomain,
                        matchingResource, httpVerb);
        ServiceContext serviceContext =
                clientStub._getServiceClient().getLastOperationContext().getServiceContext();
        cookie = (String) serviceContext.getProperty(HTTPConstants.COOKIE_STRING);
        return toDTO(dto);
    } catch (APIKeyValidationServiceAPIManagementException ex) {
        throw new APISecurityException(APISecurityConstants.API_AUTH_FORBIDDEN,
                "Resource forbidden", ex);
    } catch (Exception e) {
        throw new APISecurityException(APISecurityConstants.API_AUTH_GENERAL_ERROR,
                "Error while accessing backend services for API key validation", e);
    }
}
private APIKeyValidationInfoDTO toDTO(
        org.wso2.carbon.apimgt.impl.dto.xsd.APIKeyValidationInfoDTO generatedDto) {
    APIKeyValidationInfoDTO dto = new APIKeyValidationInfoDTO();
    dto.setScopes(generatedDto.getScopes() == null ? null : new HashSet(Arrays.asList(generatedDto.getScopes())));
    // copy any other required fields from generatedDto here
    return dto;
}
public ArrayList getAllURITemplates(String context, String apiVersion) throws APISecurityException {
    // CarbonUtils.setBasicAccessSecurityHeaders(username, password,
    // true, clientStub._getServiceClient());
    if (cookie != null) {
        clientStub._getServiceClient().getOptions().setProperty(HTTPConstants.COOKIE_STRING, cookie);
    }
    try {
        org.wso2.carbon.apimgt.api.model.xsd.URITemplate[] dto =
                clientStub.getAllURITemplates(context, apiVersion);
        ServiceContext serviceContext =
                clientStub._getServiceClient().getLastOperationContext().getServiceContext();
        cookie = (String) serviceContext.getProperty(HTTPConstants.COOKIE_STRING);
        ArrayList templates = new ArrayList();
        for (org.wso2.carbon.apimgt.api.model.xsd.URITemplate aDto : dto) {
            URITemplate temp = toTemplates(aDto);
            templates.add(temp);
        }
        return templates;
    } catch (Exception e) {
        throw new APISecurityException(APISecurityConstants.API_AUTH_GENERAL_ERROR,
                "Error while accessing backend services for API key validation", e);
    }
}
private URITemplate toTemplates(org.wso2.carbon.apimgt.api.model.xsd.URITemplate dto) {
    URITemplate template = new URITemplate();
    // copy the required fields from dto here
    return template;
}
private static void setMutualAuthHeader(ServiceClient serviceClient, String username) throws Exception {
    // The namespace URI was omitted in the original post; it is left as given there
    OMNamespace omNamespace = OMAbstractFactory.getOMFactory().createOMNamespace("", "m");
    SOAPHeaderBlock mutualsslHeader =
            OMAbstractFactory.getSOAP12Factory().createSOAPHeaderBlock("UserName", omNamespace);
    mutualsslHeader.setText(username);
    mutualsslHeader.setMustUnderstand(false);
    serviceClient.addHeader(mutualsslHeader);
}
}

Then, once you build the jar, copy it to the API Manager:
 cp target/org.wso2.carbon.apimgt.gateway-1.2.2.jar /home/sanjeewa/work/170deployment/newdep/test/wso2am-1.7.0-1/repository/components/plugins/org.wso2.carbon.apimgt.gateway_1.2.2.jar

Both servers should share the same trust store, or you need to export the key manager's certificate into the gateway's trust store.

Then enable mutual SSL in the WSO2 server by setting the standard JSSE keystore and truststore system properties in the server startup script, for example:

-Djavax.net.ssl.keyStore="$CARBON_HOME/repository/resources/security/wso2carbon.jks" \
-Djavax.net.ssl.keyStorePassword="wso2carbon" \
-Djavax.net.ssl.trustStore="$CARBON_HOME/repository/resources/security/client-truststore.jks" \
-Djavax.net.ssl.trustStorePassword="wso2carbon" \

This custom authenticator makes the Identity Server compatible with mutual SSL, so you need to download it too. You can find the source code of the authenticator here.

Build the above-mentioned code and copy the custom authenticator, which handles the mutual authentication on the server back end, to the /repository/components/dropins/ folder. This should be copied to the key manager node.

Then restart both servers, pointing the gateway to the key manager as its key management service.

Invoke APIs, and you are now connected to the key manager from the gateway using mutual SSL. To verify this, you can put wrong credentials on the gateway side and restart the server.

Dimuthu De Lanerolle

Troubleshooting ESB Maven dependency issues

1. If you are getting the below error when running tests:


Exception in thread "HTTP Listener I/O dispatcher-2" java.lang.NoSuchMethodError: org.apache.http.params.HttpProtocolParams.getMalformedInputAction(Lorg/apache/http/params/HttpParams;)Ljava/nio/charset/CodingErrorAction;


This is due to a missing dependency: the httpcore artifact that contains HttpProtocolParams.getMalformedInputAction(..). Search for the Maven dependency that contains this method; you will easily find that the missing dependency is the right httpcore version. Then add the below dependency to your pom.xml.


2. If you want to upgrade your ActiveMQ server version from a lower version (e.g. 5.2.0) to a higher version (e.g. 5.9.1), you need to add these dependencies to your root pom.xml file.

                <!-- any library that uses commons-logging will be directed to slf4j -->
                <!-- any library that uses slf4j will be directed to java.util.logging -->

Important code shortcuts when writing ESB tests

1. If you need to retrieve the back-end service URL without the service name, you can do it as below.

RestApiAdminClient restApiAdminClient = new RestApiAdminClient(contextUrls.getBackEndUrl(), getSessionCookie());

 2. Replacing and adding a new Proxy service:

private void addProxy() throws Exception {
        String proxy = " <proxy xmlns=\"\" name=\"StockQuoteProxyPreserveHeaderScenario1\">\n" +
                       "        <target>\n" +
                       "            <inSequence>\n" +
                       "                <property name=\"preserveProcessedHeaders\" value=\"true\"/>\n" +
                       "                <send>\n" +
                       "                    <endpoint>\n" +
                       "                        <address\n" +
                       "                                uri=\"https://localhost:8243/services/UTSecureStockQuoteProxy\"/>\n" +
                       "                    </endpoint>\n" +
                       "                </send>\n" +
                       "            </inSequence>\n" +
                       "            <outSequence>\n" +
                       "                <send/>\n" +
                       "            </outSequence>\n" +
                       "        </target>\n" +
                       "    </proxy>";
        proxy = proxy.replace("https://localhost:8243/services/UTSecureStockQuoteProxy"
                , getProxyServiceURLHttps("UTSecureStockQuoteProxy"));
        // deploy the updated proxy configuration here
    }

3. If you need to send a request and do not expect a response:

axisServiceClient.fireAndForget(putRequest, getProxyServiceURLHttp("StockQuoteProxy"), "getQuote");

4. For JMS-related scenarios only:

        OMElement synapse = esbUtils.loadResource("/artifacts/ESB/mediatorconfig/property/ConcurrentConsumers.xml");

5. To update the axis2.xml file during a test run:

private ServerConfigurationManager serverManager = new ServerConfigurationManager(context);

        serverManager.applyConfiguration(new File(TestConfigurationProvider.getResourceLocation() + File.separator + "artifacts" + File.separator + "ESB"
                                                  + File.separator + "jms" + File.separator + "transport"
                                                  + File.separator + "axis2config" + File.separator
                                                  + "activemq" + File.separator + "axis2.xml"));

6. Sending API requests:

   HttpResponse response = HttpRequestUtil.sendGetRequest(getApiInvocationURL("stockquote") + "/view/IBM", null);

7. Getting the status code of a response:

 int responseStatus = 0;

        String strXMLFilename = FrameworkPathUtil.getSystemResourceLocation() + "artifacts"
                                + File.separator + "ESB" + File.separator + "mediatorconfig" +
                                File.separator + "property" + File.separator + "GetQuoteRequest.xml";

        File input = new File(strXMLFilename);
        PostMethod post = new PostMethod(getProxyServiceURLHttp("Axis2ProxyService"));
        RequestEntity entity = new FileRequestEntity(input, "text/xml");
        post.setRequestEntity(entity);
        post.setRequestHeader("SOAPAction", "getQuote");

        HttpClient httpclient = new HttpClient();

        try {
            responseStatus = httpclient.executeMethod(post);
        } finally {
            post.releaseConnection();
        }

        assertEquals(responseStatus, 200, "Response status should be 200");

Isuru PereraJava Performance Monitoring Libraries

There is a proposal to build performance probes into the WSO2 platform. For that I started looking into some performance monitoring libraries.

The following libraries were mentioned in the WSO2 architecture thread.
While looking into these libraries, I also found out about the following.

Here is a quick comparison of each project. The comparison criteria are based on the requirements in the above proposal.

                 Metrics             Parfait             JAMon           Java Simon        Perf4J
License          Apache License 2.0  Apache License 2.0  JAMon License   New BSD License   Apache License 2.0
Source           GitHub              Google Code         Sourceforge     GitHub            GitHub
Latest Version   3.
Last Published   Sep 4, 2014         Jun 01, 2011        Aug 20, 2014    Oct 29, 2014      Oct 16, 2011
Java Version     -                   Java 6              -               Java 7            -
JMX Support      Yes                 Yes                 No              Yes               Yes

* Not confirmed

Let's look at each library in brief.


Metrics

Metrics provides various measuring instruments:
  • Meters - Measuring rate of events over time
  • Gauges - Instantaneous measurement of a value
  • Counters - Measurement for counting
  • Histograms - Statistical distribution of values
  • Timers - Measures the duration of a code block and the rate of invocation.
  • Health Checks - Centralizing the health checks of services

Metrics has modules for common libraries like Jetty, Logback, Log4j, Apache HttpClient,
Ehcache, JDBI, Jersey.

Metrics provides multiple reporting options: mainly JMX, Servlets (HTTP), Console, CSV and SLF4J. It also supports Ganglia and Graphite reporting.

Metrics' Getting Started page shows you how to use the Metrics APIs.


Parfait

Parfait provides mechanisms for collecting counter and timing metrics. Data can be exposed via various mechanisms, including JMX and the open-source cross-platform Performance Co-Pilot.

Parfait also has a number of modules, which enable collecting metrics from common data sources.


JAMon

JAMon provides various ways to monitor applications without code changes. It has built-in support for HTTP monitoring, Spring, JDBC, Log4j, EJB etc. JAMon can be used in Servlets, JSPs, EJBs and Java Beans in various Java EE application servers.

JAMon doesn't seem to support JMX.

Java Simon

Java Simon has monitors called Simons, which can be used in the code to count something or to measure the time taken.

It's interesting that Java Simon was started by people who used JAMon earlier. They were not satisfied with JAMon in terms of simplicity and monitor structure. Some people also consider Java Simon the replacement for JAMon.

Java Simon also measures time in nanoseconds. Simons are organized in a hierarchy.

Simons can be disabled easily. 

Java Simon has a web console in addition to exposing data via JMX. There are many examples for Java Simon usage and Getting Started wiki is a good place to see how we can use the APIs.

There is a comparison of Java Simon with JAMon, which shows the performance overhead of each library.


Perf4J mainly makes use of logging frameworks and has support for the popular ones.

Perf4J's support is limited to timing metrics; I didn't find support for counters.


In this blog post, I just wanted to give an idea about Java performance monitoring libraries. Each library has its pros & cons, so depending on the project requirements, it's better to evaluate all the libraries and select a suitable one for your project.

There is a lot of information on each project's web pages. Going through those pages will help you understand more about the features provided by each library.

Niranjan KarunanandhamUseful Java keytool commands for checking

JKS is the default keystore type in the Sun / Oracle Java security provider. To view the list of certificates in a Java keystore:
keytool -list -keystore <jks_file>

To view details of the certificates in Java keystore:
keytool -list -v -keystore <jks_file>

To export a certificate from Java keystore:
keytool -export -alias <alias_name> -file <output_certificate_name> -keystore <jks_file>

To view the details of a certificate:
keytool -printcert -file <certificate_name>

To delete a certificate from Java keystore:
keytool -delete -alias <alias_name> -keystore <jks_file>
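The same checks can also be done programmatically with the JDK's `` class. Below is a small sketch that lists the aliases in a keystore; for illustration it loads an empty in-memory JKS keystore, but for a real file you would pass a `FileInputStream` and the store password to `load`.

```java
import;
import java.util.Collections;

// Programmatic equivalent of "keytool -list": open a JKS keystore
// and print the alias of each entry it contains.
public class KeystoreList {
    public static int countEntries(KeyStore ks) throws Exception {
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " (certificate entry: " + ks.isCertificateEntry(alias) + ")");
        }
        return ks.size();
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        // empty keystore for illustration; use new FileInputStream(jksFile) and
        // the store password for a real keystore file
        ks.load(null, null);
        System.out.println("entries: " + countEntries(ks));
    }
}
```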

Niranjan KarunanandhamGithub: How to move to another branch with uncommitted changes in the current one?

When working in a repository, there will be a need for you to move between branches. In such a case, you would want to preserve the uncommitted changes and work on them later. This can be achieved using "git commit" or "git stash".

Git Reset

As you know, git commit will commit your changes to the current branch in your local repository. In such a case, care needs to be taken when doing a "git push", since this will push the commits in all branches. In order to avoid this, when doing a "git push", it is better to mention the branch that you want to push. For example: in your repository, you have a "master" and a "dev" branch and you want to push only the commits in the "dev" branch to the remote repository. This can be achieved using the command:
git push origin dev

Later there might be a need to discard the changes that you had committed previously. In such a case, you can remove the last commit from your local repository by using "git reset". Say you want to remove the last commit; then this is the command to do it:
git reset --hard HEAD~1

Say you want to roll back to a particular commit; then, using git log, find the commit id of the commit you want to roll back to and do this:
git reset --hard <sha1-commit-id>

In case you have already pushed it to the remote repository, then you need to force push to get rid of it:
git push origin HEAD --force

Git reset can also be used to remove all uncommitted changes by:
git reset --hard HEAD

For more information on reset, check out Git Reset.

Git Stash

This is something that I use a lot. Instead of committing your changes, you can stash your work and move to another branch. You can later come back to the branch and apply the stashed work and continue forward. To stash uncommitted changes, the command is:

git stash

You can also give a name to the stash, which makes it easy to find a particular stash later:
git stash save "my_stash"

If you have done multiple stashes, then you can view the stash list using the command:
git stash list

To apply the stashed changes, you need to get the stash name from "git stash list". For example, say the list has three stashes, stash@{0}, stash@{1} and stash@{2}, and you want to apply the second one. Then you need to do this:
git stash apply stash@{1}

If you no longer need the stash, you can delete it from the list by:
git stash drop stash@{1}

If you do not specify the stash name then it will apply the most recent stash.

For more information on stash, check out Git Stash.


Shelan PereraHandy Mac OSX shortcuts to work with Terminal

Clear the line up to the beginning: Ctrl+U

Wipe the current line: Ctrl+A, then Ctrl+K

Cancel the current command/line: Ctrl+C

Recall the deleted text: Ctrl+Y

Go to the beginning of the line: Ctrl+A

Go to the end of the line: Ctrl+E

Delete from the cursor to the end of the line (for example, if you are in the middle of a command): Ctrl+K

Delete characters to the left, until the beginning of the word: Ctrl+W

Clear the screen: Ctrl+L

Toggle between the start of the line and the current cursor position: Ctrl+XX


Lali DevamanthriShould we use ESB in any case of online integration?

When you are developing integration architecture principles in your company, one of the best current approaches is to use an ESB as the target approach for online integration and services implementation. But do we need to use an ESB in every case of online integration? Is it correct to fix it as a principle? Or are there cases where point-to-point integration is preferred?

The answer depends on the vision and expected complexity of future services. A good ESB or other integration middleware can help define the integration architecture well. But on the other side, it is a new, quite expensive component in the infrastructure, with an indirect return on investment that becomes visible only after a longer time period.

You can focus on finding additional benefits. For instance, if you have some specialized applications in your company, you can choose an ESB with adapters for them, which enables rapid creation of a first level of services representing the functionalities within them. Integration middleware can also bring performance benefits via support for high-performance messaging (EMS, MQ and so on), which can be used as an internal communication channel. Of course, if you are considering future use of a repository during runtime (dynamic routing and changing of endpoints), implementing features like these will be easier with an ESB / integration middleware.

If you want to have a service-oriented architecture with all its benefits, then your first thought would be to use the ESB whenever you expose business services or APIs. However, as in most IT situations, there are no absolutes. While rarely right in a good, well-designed SOA, there may be occasions where point-to-point interaction without the ESB as an intermediary is the best answer. For example, if there is only one application using a service, and the expectation is that this will not ever change, or at least not for a long time, then that interaction is probably best point-to-point.

In a good SOA, one does not implement services with an ESB, but only exposes services. By that I mean that the ESB presents a well-defined interface to presumably many consumers of that service. The ESB deals with issues such as security, message translation, and other things that fall under the category of syntactic mismatches between the consumers of the service and the provider(s) of the actual implementation of the service.

Pushpalanka JayawardhanaSigning SOAP Messages - Generation of Enveloped XML Signatures

Digital signing is a widely used mechanism to make digital contents authentic. By producing a digital signature for some content, we enable another party to validate that content. This validation provides a guarantee that the content has not been altered after we signed it. With this sample I am going to share how to generate a signature for a SOAP envelope. But of course this is valid for any other content signing as well.

Here, I will sign
  • The SOAP envelope itself
  • An attachment 
  • Place the signature inside SOAP header 
With the placement of the signature inside the SOAP header, which is itself covered by the signature, this becomes a demonstration of an enveloped signature.

I am using the Apache Santuario library for signing. Following is the code segment I used. I have shared the complete sample here to be downloaded.

public static void main(String unused[]) throws Exception {

        String keystoreType = "JKS";
        String keystoreFile = "src/main/resources/PushpalankaKeystore.jks";
        String keystorePass = "pushpalanka";
        String privateKeyAlias = "pushpalanka";
        String privateKeyPass = "pushpalanka";
        String certificateAlias = "pushpalanka";
        File signatureFile = new File("src/main/resources/signature.xml");
        Element element = null;
        String BaseURI = signatureFile.toURI().toURL().toString();
        //SOAP envelope to be signed
        File attachmentFile = new File("src/main/resources/sample.xml");

        //get the private key used to sign, from the keystore
        KeyStore ks = KeyStore.getInstance(keystoreType);
        FileInputStream fis = new FileInputStream(keystoreFile);
        ks.load(fis, keystorePass.toCharArray());
        PrivateKey privateKey =

                (PrivateKey) ks.getKey(privateKeyAlias, privateKeyPass.toCharArray());
        //create basic structure of signature
        DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
        dbFactory.setNamespaceAware(true); // required for XML Signature processing
        DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
        Document doc = dBuilder.parse(attachmentFile);
        XMLSignature sig =
                new XMLSignature(doc, BaseURI, XMLSignature.ALGO_ID_SIGNATURE_RSA_SHA1);

        //optional, but better
        element = doc.getDocumentElement();

            Transforms transforms = new Transforms(doc);
            //Enveloped transform, since the signature lives inside the signed document
            transforms.addTransform(Transforms.TRANSFORM_ENVELOPED_SIGNATURE);
            //Sign the content of SOAP Envelope
            sig.addDocument("", transforms, Constants.ALGO_ID_DIGEST_SHA1);

            //Adding the attachment to be signed
            sig.addDocument("../resources/attachment.xml", transforms, Constants.ALGO_ID_DIGEST_SHA1);


        //Signing procedure
            X509Certificate cert =
                    (X509Certificate) ks.getCertificate(certificateAlias);
            sig.addKeyInfo(cert);
            sig.addKeyInfo(cert.getPublicKey());
            //append the signature element into the document and sign with the private key
            element.appendChild(sig.getElement());
            sig.sign(privateKey);

        //write signature to file
        FileOutputStream f = new FileOutputStream(signatureFile);
        XMLUtils.outputDOMc14nWithComments(doc, f);
        f.close();
}

At first, it reads in the private key which is to be used in signing. To create a key pair of your own, this post will be helpful. Then it creates the signature and adds the SOAP message and the attachment as the documents to be signed. Finally it performs the signing and writes the signed document to a file.

The signed SOAP message looks as follows.

<soap:Envelope xmlns:dsig="" xmlns:pj=""
        <pj:MessageHeader pj:version="1.0" soap:mustUnderstand="1">
                <pj:PartyId pj:type="ABCDE">FUN</pj:PartyId>
                <pj:PartyId pj:type="ABCDE">PARTY</pj:PartyId>
            <pj:ConversationId>FUN PARTY FUN 59c64t0087fg3kfs000003n9</pj:ConversationId>
                <pj:MessageId>FUN 59c64t0087fg3kfs000003n9</pj:MessageId>
        <pj:Via pj:id="59c64t0087fg3ki6000003na" pj:syncReply="False" pj:version="1.0"
                soap:actor="" soap:mustUnderstand="1">
        <ds:Signature xmlns:ds="">
                <ds:SignatureMethod Algorithm=""></ds:SignatureMethod>
                <ds:Reference URI="">
                    <ds:DigestMethod Algorithm=""></ds:DigestMethod>
                <ds:Reference URI="../resources/attachment.xml">
                        <ds:Transform Algorithm=""></ds:Transform>
                    <ds:DigestMethod Algorithm=""></ds:DigestMethod>
            <ds:SignatureValue>d0hBQLIvZ4fwUZlrsDLDZojvwK2DVaznrvSoA/JTjnS7XZ5oMplN9  THX4xzZap3+WhXwI2xMr3GKO................x7u+PQz1UepcbKY3BsO8jB3dxWN6r+F4qTyWa+xwOFxqLj546WX35f8zT4GLdiJI5oiYeo1YPLFFqTrwg==
   <ds:X509Certificate>                MIIDjTCCAnWgAwIBAgIEeotzFjANBgkqhkiG9w0BAQsFADB3MQswCQYDVQQGEwJMSzEQMA4GA1UE...............qXfD/eY+XeIDyMQocRqTpcJIm8OneZ8vbMNQrxsRInxq+DsG+C92b
        <pr:GetPriceResponse xmlns:pr="">

In a next post we will see how to verify this signature, so that we can guarantee signed documents are not changed (in other words, that the integrity of the content is preserved).


Dimuthu De Lanerolle

A Simple HTTP client to retrieve response status code for Wso2 ESB

 int responseStatus = 0;
        String strSoapAction = "getQuote";
        // Get file to be posted
        String strXMLFilename = FrameworkPathUtil.getSystemResourceLocation() + "artifacts" + File.separator +
                                "ESB" + File.separator + "mediatorconfig/property/MyRequest.xml";

        File input = new File(strXMLFilename);

        PostMethod post = new PostMethod(getProxyServiceURLHttp("Axis2ProxyService"));
        // Request content will be retrieved directly
        // from the input stream
        RequestEntity entity = new FileRequestEntity(input, "text/xml");
         post.setRequestHeader("SOAPAction", strSoapAction);
        HttpClient httpclient = new HttpClient();

        try {
            responseStatus = httpclient.executeMethod(post);
        } finally {
            post.releaseConnection();
        }
MyRequest.xml file

<soapenv:Envelope xmlns:soapenv="" xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">

Sajith RavindraPerforming a simple load test on WSO2 ESB proxy using Apache Jmeter

In this post I'm explaining how you could perform a simple load test on a WSO2 ESB proxy or any web service using Apache JMeter, which is a free and very easy to use tool. Also, please note that this post describes only how to set up a very basic test case, and its intended audience is people who have no experience in JMeter (I'm not an expert either).

In this example I will describe how you can create a test plan where you do a simple load test by sending the same message over and over again to an ESB proxy or a web service.

Bare minimum required to perform a test

After launching JMeter, the following screen will appear. On the left pane you will see an icon named “Test Plan”.
  • Right click on “Test plan” → Select “Add” → Select “Threads (Users)” → Select “Thread Group”. Here you can configure the number of threads that you are going to use for the test. The number of threads is analogous to the number of simulated users. You can also specify the Loop Count, i.e. the number of requests sent by each thread. This is where you decide on the load that you are going to use for the test.
    • Therefore, if you set “Number of Threads (users)” = 5 and “Loop Count” = 100, the test will send 5 * 100 = 500 requests to the web service.
  • Now we have to set the SOAP request that's going to be used in the test. For this example we will be sending the same hard coded request over and over again. To add the request right click on the “Thread Group” → Select “Add” → Select “Sampler” → Select “SOAP/XML-RPC Request”

The above two are the basic steps you need to carry out to perform a test using JMeter for a SOAP service. Now if you click "Start", JMeter will send the specified number of requests to the proxy or the web service. In addition to the above, below I have described a couple of JMeter components that may also be useful when building a real-world test plan.

Setting HTTP headers

Let's say the proxy service you are trying to test is secured, so that you need to send the “Authorization” header. To fulfil this requirement you have to use the “HTTP Header Manager”. To add an HTTP Header Manager, right click on the “SOAP/XML-RPC Request” → select “Config Element” → select “HTTP Header Manager”. In this component you can set the headers as required.

Viewing and testing responses

You can add listeners to see the responses from the service. To add a Listener, right click on the “SOAP/XML-RPC Request” → Select “Listener” and from there you can select various types of listeners. Following is a screenshot of the “View Results Tree” Listener, which is a very primitive but still very useful listener.

From this listener you can view the request and response payloads for each message along with the status of the message.

In order to validate the response coming from the service, you can use an “Assertion”. In an assertion you can set various validations on the response. Following is a “Response Assertion” where I am testing whether the “Text” elements in the response “Contains” the text “IBM”.

If a certain response does not meet the condition in the Assertion, in the “Listener” the response status will be marked as "error" with a red exclamation mark. To add an “Assertion”, right click on the “SOAP/XML-RPC Request” → Select “Assertion” and from there select the type of assertion you want.

Niranjan KarunanandhamSearching in Github

Usually when I have to search for something in a repository, I used to clone it and then search it locally. This is time consuming, especially when I don't have the repository on my local machine. Then I found out that GitHub also provides a way to search without having to clone the repository. Not only that, it provides advanced searching, such as searching within an organization's / user's repositories, or by filename and file extension.

For example: say you want to search for the word "hazelcast" within an organization's (say wso2) repositories and only within files with the extension "xml". Then you need to search for:
hazelcast user:wso2 extension:xml

Currently GitHub only provides searching within the default branch of the repositories. In most cases, this is the "master" branch.

For more advanced search options, check out GitHub Search.

Niranjan KarunanandhamWhether to support Rooted device in WSO2 EMM?

EMM stands for Enterprise Mobility Management, i.e., a set of tools and policies used to manage the mobile devices within an organization. This can be classified into three parts, namely:

Mobile Device Management (MDM):
This is used by the administration to deploy, monitor, secure and manage mobile devices such as smartphones, tablets and laptops within an organization. The main purpose of MDM is to protect the organization's network.

Mobile Application Management (MAM)
MAM is used for provisioning and controlling access to internally developed and public applications to personal devices and company-owned smartphones, tablets and laptops.

Mobile Information Management (MIM)
MIM is to ensure that the sensitive data on the devices is encrypted and can be accessed only by certain applications.

Rooted (jailbroken) devices give the user full system-level privileges, including access to the file system. Since the device has root access permission, if someone gets hold of the device then he / she can bypass the passlock and access the phone.

WSO2 EMM allows organizations to enroll both BYOD (Bring Your Own Device) and COPE (Company Owned, Personally Enabled) devices. This allows the employees to store organization data (if the organization permits) on the devices. This can be both sensitive and non-sensitive data, and it should be stored securely on the device so that it cannot be accessed by applications other than the organization's applications.

The way a device is rooted / jailbroken is by exploiting a security flaw in the OS and installing an application to get elevated permissions. By exploiting the security flaw, the device becomes more vulnerable. One of the main concerns with rooted / jailbroken devices is that the OS-level protection is lost. By default, a mobile OS has inbuilt security which protects the data on the device. I have taken the two most popular mobile OSes and explained what the security risk is when the device is rooted / jailbroken. Once it is rooted / jailbroken, other applications gain system-level permission.

  • iOS
In iOS, data protection is implemented at the software level and works with the hardware and firmware encryption to provide better security [1]. In simple terms, when data protection is enabled, the data gets encrypted using a complex key hierarchy. Therefore when a device is locked, the data is all encrypted, and it gets decrypted when the mobile is unlocked. This is lost when the device is jailbroken: the user can bypass the lock screen and access the phone.
  • Android
As explained above, when a device is rooted, it provides system-level privileges to applications. Most end-users do not know about permissions and, when installing an app, do not bother to check what permissions they are granting to the app. This allows the app to gain access to user data (credit card details, bank details, etc.) and send it to someone else.
Rooted devices lead to data leaks, hardware failures and so on. According to the Android Security Overview [2], encrypting data with a device key-store or with a key-store at the server side does not protect it on a rooted device, since at some point the data needs to be provided to the application, where it is then accessible to the root user. Also, the user will have access to the file system, thereby accessing the data inside the Container [3].

Apart from the security concern, the phone also loses its warranty when it is rooted / jailbroken. So if there are any hardware failures after the phone is rooted / jailbroken, then the manufacturer will not cover the damages.

[3] -

Niranjan KarunanandhamConnect iOS (6 & 7) to a Hotspot without credentials

If you try to connect an iOS (6 or 7) device to a hotspot, a web page will pop up asking the user to enter his / her credentials (captive portal technique).

If you exit this page by clicking the "Cancel" button (top right corner of the page), the device will disconnect from the Wifi network.

There is a workaround so that you can connect your iOS device to the hotspot without the device disconnecting from the network. This will only allow the device to be connected to the network; it cannot access the internet.

1. In the Wifi Settings page, select the blue arrow next to the name of the Wifi network that you want to connect to.

2. Under "HTTP Proxy", select "Manual". Then in the "Server" address, enter a letter (say "n") and in "Port", enter a number (say "1"). This will prevent the router / access page from popping up, by diverting all website / HTTP data to a non-existent address.

3. Now navigate back to the Wifi Settings Page by clicking on the "Wifi" button on the top left corner. Then select the Wifi network name to connect to it.

The device is now connected to the Wifi network even though it does not have internet.

P.S.: You will not be able to access any web page within the network, since the device is diverting all websites to a non-existent address (HTTP proxy settings). If you want to access web pages within the network (if the hotspot allows access to certain web pages without credentials), then under "HTTP Proxy", select "Off" after the device is connected to the network.

Shelan PereraHow to Remote Debug Standalone Hadoop

When you run your map reduce applications you may have hiccups here and there and may need to have a look inside. If you need to remote debug rather than going through logs and figuring out what went wrong, the following is the procedure.

I am using IntelliJ IDEA as the IDE, but for other IDEs the process is similar.

1) In IntelliJ IDEA go to Run > Edit Configuration and then click on "+". Then add "Remote" for remote debugging.

2) You will get the following window after clicking on Remote. You can change the port you are using for remote debugging in this panel.

3) Open your Hadoop root folder and navigate to etc/ in your editor. At the bottom of the file add the following line. (Make sure to use the port you gave in the IDE configuration as the address.)

export HADOOP_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"

Now you can start Hadoop in standalone mode, and it will wait until you attach your IDE to the debug process before resuming.

Lali Devamanthri‘nogotofail’ Network Traffic Security Testing Tool

Google introduced a new security tool for testing for common SSL certificate verification issues, HTTPS and TLS/SSL library vulnerabilities and misconfigurations, SSL and STARTTLS stripping issues, cleartext traffic issues, and more. The tool will help developers to detect bugs and security glitches in network traffic security that may leave passwords and other sensitive information open to snooping.
The open source tool, dubbed Nogotofail, has been launched by the technology giant in the wake of a number of vulnerabilities discovered in implementations of transport layer security, from the most critical Heartbleed bug in OpenSSL, to Apple's gotofail bug, to the recent POODLE bug in SSL version 3.
The company has made the Nogotofail tool available on GitHub, so that anyone can test their applications, contribute new features to the project, provide support for more platforms, and help improve the security of the internet. Written by Android engineers Chad Brubaker, Alex Klyubin and Geremy Condra, it works on devices running Android, iOS, Linux, Windows, Chrome OS, OS X, and “in fact any device you use to connect to the Internet.” The tool can be deployed on a router, a Linux machine, or a VPN server.

Hasitha Aravinda[WSO2 ESB] How to Convert XML to JSON Array

The following API demonstrates this functionality.

Try the above API with a REST client using the following sample requests.

1) Multiple stocks: XML request / JSON response
2) Single stock: XML request / JSON response (as an array)

This is with the following message formatter and builder

Ashansa PereraRun a jar file in command line

This is a very simple and short post on running a jar file in command line.
The simplest command that you can try is
java -jar [jarFileName].jar

But you may need to have the MANIFEST file in the jar to run it simply with the above command.
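What `java -jar` actually reads is the Main-Class attribute in the jar's META-INF/MANIFEST.MF. The sketch below parses a manifest with the JDK's `java.util.jar.Manifest` class and extracts that attribute; the manifest is built in memory for illustration, and `com.example.Main` is a placeholder class name.

```java
import;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Parse a MANIFEST.MF and read the Main-Class attribute that
// "java -jar" uses to locate the entry point.
public class ManifestCheck {
    public static String mainClassOf(Manifest manifest) {
        return manifest.getMainAttributes().getValue(Attributes.Name.MAIN_CLASS);
    }

    public static void main(String[] args) throws Exception {
        // In-memory manifest; for a real jar use: new JarFile(path).getManifest()
        String text = "Manifest-Version: 1.0\r\nMain-Class: com.example.Main\r\n\r\n";
        Manifest manifest = new Manifest(new ByteArrayInputStream(text.getBytes("UTF-8")));
        System.out.println(mainClassOf(manifest)); // prints com.example.Main
    }
}
```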

So how to bundle MANIFEST.MF to your jar?

You can add the following maven plugin to your pom.xml and get it done. Remember to add the main class name of your app here.
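For reference, a typical maven-jar-plugin configuration that writes the Main-Class entry into MANIFEST.MF looks like the following sketch (`com.example.Main` is a placeholder for your own main class):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <mainClass>com.example.Main</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
```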

How to run the jars with external dependencies?

If you need to use the simple command java -jar [jarFileName].jar to run your application which has external dependencies, you can bundle the dependencies you need into the executable jar itself. Again, use the below maven plugin to get it done.
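One common way to bundle dependencies into a single executable jar is the maven-assembly-plugin with its jar-with-dependencies descriptor; a sketch (again, `com.example.Main` is a placeholder):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <mainClass>com.example.Main</mainClass>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
    </execution>
  </executions>
</plugin>
```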

The highlighted part is doing this for you, and the part above it bundles the MANIFEST.MF file into that jar file too.

Ashansa PereraGet started with WSO2 Stratos Live and deploy your first axis2 service.

WSO2 Stratos Live is a complete open source PaaS and Cloud Middleware Platform.
To get a space in Stratos what you need is only an internet connection :)
By following the steps mentioned below, you can have your own service up and running in Stratos Live.

  1. Register to Stratos Live
    Visit WSO2 Stratos site and click on 'Get Started Now For FREE' button to create an account and a domain for your tenant. You have to fill the registration form with the relevant information.

    The admin username and the domain name are used as the login username.
    The e-mail address you enter should be a valid one. A verification e-mail is sent to that address, and you have to verify your email address by following the instructions in the mail.

  2. After the verification you can log in to the Stratos manager page using the username and password given when registering.

    Now you have logged in to your domain in Stratos Live.

  3. The services provided in Stratos for your tenant can be seen there. Click on Application Server to deploy an axis2 service in Application Server.

  4. In the 'Main' tab of the left menu you can see the service types that you can add. Click on Axis2 Service under Web Services → Add

  5. Upload your service archive by choosing the .aar file.
    ( You can read on how to create a simple axis2 service from this article )

  6. Now you have deployed your axis2 service in Stratos Live. You can access it via 'List' under Web Services in the left menu.

  7. Click on 'Try this service' and you can try the deployed service.

Ashansa PereraConvert XML String to OMElement and extract values

A simple tip which saves a lot of your time...
You can simply convert XML string to an OMElement as below

      OMElement resultElement = AXIOMUtil.stringToOM(xmlString);

Extracting values from XML

Sample XML code

<catalog>
    <book>
        <author>Gambardella, Matthew</author>
        <title>XML Developer's Guide</title>
        <genre>Computer</genre>
        <price>44.95</price>
        <publish_date>2000-10-01</publish_date>
        <description>An in-depth look at creating applicationswith XML.</description>
    </book>
    <book>
        <author>Ralls, Kim</author>
        <title>Midnight Rain</title>
        <genre>Fantasy</genre>
        <price>5.95</price>
        <publish_date>2000-12-16</publish_date>
        <description>A former architect battles corporate zombies,an evil sorceress.</description>
    </book>
    <book>
        <author>Corets, Eva</author>
        <title>Maeve Ascendant</title>
        <genre>Fantasy</genre>
        <price>5.95</price>
        <publish_date>2000-11-17</publish_date>
        <description>After the collapse of a nanotechnologysociety in England.</description>
    </book>
    <book>
        <author>Corets, Eva</author>
        <title>Oberon's Legacy</title>
        <genre>Fantasy</genre>
        <price>5.95</price>
        <publish_date>2001-03-10</publish_date>
        <description>In post-apocalypse England, the mysteriousagent known only as Oberon.</description>
    </book>
</catalog>

Java code to retrieve values

OMElement resultElement = AXIOMUtil.stringToOM(xmlString);

// iterate over the child elements (getChildElements skips whitespace text nodes)
Iterator i = resultElement.getChildElements();
while (i.hasNext()) {
    OMElement book = (OMElement);
    Iterator properties = book.getChildElements();
    System.out.println("====== book =======");
    while (properties.hasNext()) {
        OMElement property = (OMElement);
        String localName = property.getLocalName();
        String value = property.getText();
        System.out.println(localName + ": " + value);
    }
}


====== book =======
author: Gambardella, Matthew
title: XML Developer's Guide
genre: Computer
price: 44.95
publish_date: 2000-10-01
description: An in-depth look at creating applicationswith XML.
====== book =======
author: Ralls, Kim
title: Midnight Rain
genre: Fantasy
price: 5.95
publish_date: 2000-12-16
description: A former architect battles corporate zombies,an evil sorceress.
====== book =======
author: Corets, Eva
title: Maeve Ascendant
genre: Fantasy
price: 5.95
publish_date: 2000-11-17
description: After the collapse of a nanotechnologysociety in England.
====== book =======
author: Corets, Eva
title: Oberon's Legacy
genre: Fantasy
price: 5.95
publish_date: 2001-03-10
description: In post-apocalypse England, the mysteriousagent known only as Oberon.

Ashansa PereraGet started with WSO2 App Factory

AppFactory is an elastic and self-service enterprise DevOps platform to manage applications from cradle to grave. This is a 100% free and open source solution developed by WSO2 which covers the whole lifecycle of an application. All the required resources for the application are created for you in one go. Just use it once, and you will see the difference :)

Getting started is very simple. It is all online running on cloud. But I will go through step by step, so you do not miss anything.

Create an account

Click the 'Register' link at the App Factory live URL.
You need to fill in the following form to get registered.

Do not worry about the phone number field, just give some number there if you do not like to put the actual number.

If your registration is successful you will be asked to check your email.

Change the default password

Using the URL sent to the email address you gave at registration time, log in to the system and change the password.
(Remember, that log-in URL can be used only once, and you need to change the default password in that log-in.)

The default password will then be changed to the one you gave.

Log in to AF

Use this link and log into the system using your new password. You will see the following page just after the log in.

But wait a few seconds; we are creating a default application for you :)

When you navigate to the application that has just been created, you will see the set of features and functionalities that are bound to your application.

I will discuss how to manage applications with App Factory in an upcoming post.

Ashansa PereraWrite a simple JDBC PIP attribute finder module for WSO2 Identity Server

With this post I am going to discuss how you can implement a simple JDBC PIP attribute finder module for WSO2 IS. I am using the latest released IS version (WSO2 IS version 4.5.0), which you can download from here.

To have your own customized PIP module, the main task is to implement an attribute finder. It is not that hard, since we already have the modeling interfaces. You can simply extend AbstractPIPAttributeFinder (an abstract class) or implement PIPAttributeFinder (an interface) to create your attribute finder.

I will provide a step by step guide on how to:
  • Create your own attribute finder
  • Register your PIP module in WSO2 IS
  • Test your attribute finder

Create your own attribute finder

I am going to create a JDBC attribute finder where the required attributes are stored in a database. I am going to use MySQL for this sample.

Below is sample code for our attribute finder, which I will refer to as JDBCAttributeFinder.

package org.wso2.identity.samples.entitlement.pip.jdbc;

import org.apache.commons.dbcp.BasicDataSource;
import org.wso2.carbon.identity.entitlement.pip.AbstractPIPAttributeFinder;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

/**
 * This is a sample implementation of PIPAttributeFinder for the WSO2 Entitlement Engine.
 * Here we call out to an external user store to find a given attribute, assuming the
 * user store resides in a MySQL database.
 */
public class JDBCAttributeFinder extends AbstractPIPAttributeFinder {

    /**
     * DBCP connection pool used to create connections to the database
     */
    private BasicDataSource dataSource;

    /**
     * Set of attribute ids supported by this PIP attribute finder
     */
    private Set<String> supportedAttributes = new HashSet<String>();

    /**
     * Initializes the attribute finder module: creates a connection to the JDBC database
     * and retrieves the attribute names from the following sample table.
     *
     * +----+----------------+-----------------+---------+
     * | ID | ATTRIBUTE_NAME | ATTRIBUTE_VALUE | USER_ID |
     * +----+----------------+-----------------+---------+
     * | 1  | EmailOfUser    |                 | 1       |
     * | 2  | EmailOfUser    |                 | 2       |
     * | 3  | EmailOfUser    |                 | 3       |
     * | 4  | CountryOfUser  | SL              | 1       |
     * | 5  | CountryOfUser  | USA             | 2       |
     * | 6  | CountryOfUser  | UK              | 3       |
     * | 7  | AgeOfUser      | 23              | 1       |
     * | 8  | AgeOfUser      | 19              | 2       |
     * | 9  | AgeOfUser      | 31              | 3       |
     * +----+----------------+-----------------+---------+
     *
     * @throws Exception when initialization fails
     */
    public void init(Properties properties) throws Exception {
        // JDBC connection parameters
        String dbUrl = properties.getProperty("databaseUrl");
        String driver = properties.getProperty("driverName");
        String userName = properties.getProperty("userName");
        String password = properties.getProperty("password");
        // SQL statement to retrieve all attributes from the database
        // (reconstructed: column 2 of the sample table is ATTRIBUTE_NAME)
        String sqlStmt = "select * from UM_USER_ATTRIBUTE";

        Connection connection = null;
        PreparedStatement prepStmt = null;
        ResultSet resultSet = null;

        dataSource = new BasicDataSource();
        dataSource.setUrl(dbUrl);
        dataSource.setDriverClassName(driver);
        dataSource.setUsername(userName);
        dataSource.setPassword(password);

        try {
            connection = dataSource.getConnection();
            if (connection != null) {
                prepStmt = connection.prepareStatement(sqlStmt);
                resultSet = prepStmt.executeQuery();
                while (resultSet.next()) {
                    String name = resultSet.getString(2);
                    supportedAttributes.add(name);
                }
            }
        } catch (SQLException e) {
            throw new Exception("Error while initializing JDBC attribute finder", e);
        } finally {
            if (resultSet != null) { resultSet.close(); }
            if (prepStmt != null) { prepStmt.close(); }
            if (connection != null) { connection.close(); }
        }
    }

    /**
     * Returns the name of the module
     *
     * @return a String that represents the module name
     */
    public String getModuleName() {
        return "JDBCPIPAttributeFinder";
    }

    /**
     * Returns the set of attribute ids that were retrieved at initialization
     *
     * @return Set of Strings
     */
    public Set<String> getSupportedAttributes() {
        return supportedAttributes;
    }

    /**
     * This is the simplified version of the getAttributeValues() method. Anyone who extends
     * AbstractPIPAttributeFinder can implement this method and make use of the default
     * implementation of getAttributeValues() within the AbstractPIPAttributeFinder class.
     *
     * @param subject Name of the subject the returned attributes should apply to.
     * @param resource The name of the resource the subject is trying to access.
     * @param action The name of the action the subject is trying to execute on the resource.
     * @param environment The name of the environment in which the subject is trying to access the resource.
     * @param attributeId The unique id of the required attribute.
     * @param issuer The attribute issuer.
     * @return a Set of Strings that represent the attribute values.
     * @throws Exception if the lookup fails
     */
    public Set<String> getAttributeValues(String subject, String resource, String action,
            String environment, String attributeId, String issuer) throws Exception {

        // Parameterized query to avoid SQL injection
        String sqlStmt = "select ATTRIBUTE_VALUE from UM_USER_ATTRIBUTE where ATTRIBUTE_NAME = ? "
                + "and USER_ID = (select USER_ID from UM_USER where USER_NAME = ?)";

        Set<String> values = new HashSet<String>();
        PreparedStatement prepStmt = null;
        ResultSet resultSet = null;
        Connection connection = null;

        try {
            connection = dataSource.getConnection();
            if (connection != null) {
                prepStmt = connection.prepareStatement(sqlStmt);
                prepStmt.setString(1, attributeId);
                prepStmt.setString(2, subject);
                resultSet = prepStmt.executeQuery();
                while (resultSet.next()) {
                    values.add(resultSet.getString(1));
                }
            }
        } catch (SQLException e) {
            throw new Exception("Error while retrieving attribute values", e);
        } finally {
            if (resultSet != null) { resultSet.close(); }
            if (prepStmt != null) { prepStmt.close(); }
            if (connection != null) { connection.close(); }
        }
        return values;
    }
}

You need to build your PIP module using this class.
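To see the attribute-finder contract in action without a database, the same subject/attributeId lookup logic can be sketched with an in-memory map. This is a hypothetical stand-in (class and method names are mine, no WSO2 or JDBC dependencies), not part of the IS API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical in-memory stand-in for the JDBC-backed finder above:
// same subject/attributeId -> values contract, but backed by a map instead of MySQL.
public class InMemoryAttributeFinder {

    // subject -> (attributeId -> values)
    private final Map<String, Map<String, Set<String>>> store = new HashMap<>();

    // Register an attribute value for a subject (plays the role of a table row)
    public void addAttribute(String subject, String attributeId, String value) {
        store.computeIfAbsent(subject, s -> new HashMap<>())
             .computeIfAbsent(attributeId, a -> new HashSet<>())
             .add(value);
    }

    // Mirrors the getAttributeValues(subject, ..., attributeId, ...) lookup above
    public Set<String> getAttributeValues(String subject, String attributeId) {
        return store.getOrDefault(subject, new HashMap<>())
                    .getOrDefault(attributeId, new HashSet<>());
    }

    public static void main(String[] args) {
        InMemoryAttributeFinder finder = new InMemoryAttributeFinder();
        finder.addAttribute("bob", "AgeOfUser", "23");
        finder.addAttribute("bob", "CountryOfUser", "SL");
        System.out.println(finder.getAttributeValues("bob", "AgeOfUser")); // prints [23]
    }
}
```

The real finder differs only in where the values come from: the PDP asks for (subject, attributeId) and gets back a set of string values.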

Register your PIP module in WSO2 IS

Here are the actions you should follow to get your module running.
  • Build your module and copy the jar to CARBON_HOME/repository/components/lib
  • Copy the JDBC driver to CARBON_HOME/repository/components/lib ( here the mysql-connector )
  • Register your attribute finder by adding it to CARBON_HOME/repository/conf/security/ as follows ( make sure that you change the dbUserName and userPassword )


I am attaching the DB script that can be used to generate the data required for this sample, the sample module, and the mysql-connector jar to make your job easier.
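The attached script itself did not survive here, but a minimal schema matching the sample can be inferred from the finder's queries. Table and column names come from the code; the types, sizes, and user names are my assumptions, and the EmailOfUser values were lost in formatting:

```sql
-- Sketch of the sample schema; UM_USER / UM_USER_ATTRIBUTE names come from
-- the finder's SQL, everything else (types, user names) is assumed.
CREATE TABLE UM_USER (
    USER_ID   INT PRIMARY KEY,
    USER_NAME VARCHAR(255) NOT NULL
);

CREATE TABLE UM_USER_ATTRIBUTE (
    ID              INT PRIMARY KEY,
    ATTRIBUTE_NAME  VARCHAR(255) NOT NULL,
    ATTRIBUTE_VALUE VARCHAR(255),
    USER_ID         INT REFERENCES UM_USER (USER_ID)
);

-- Assumed user names; the sample ages/countries come from the table above
INSERT INTO UM_USER VALUES (1, 'bob'), (2, 'alice'), (3, 'charlie');
INSERT INTO UM_USER_ATTRIBUTE VALUES
    (1, 'EmailOfUser',   '',    1),  -- email values lost in the original post
    (2, 'EmailOfUser',   '',    2),
    (3, 'EmailOfUser',   '',    3),
    (4, 'CountryOfUser', 'SL',  1),
    (5, 'CountryOfUser', 'USA', 2),
    (6, 'CountryOfUser', 'UK',  3),
    (7, 'AgeOfUser',     '23',  1),
    (8, 'AgeOfUser',     '19',  2),
    (9, 'AgeOfUser',     '31',  3);
```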

Test your attribute finder

Given below is a sample policy which can be used to test your new JDBC attribute finder.
You can find the uploaded policy here.





This policy says that only users whose age is between 18 and 30 can access the resource “foo” and perform the action “bar”.

-  Copy the above sample policy to an XML file, start WSO2 IS and upload the policy file through the Policy Administrator.
         Follow Policy Administration > Add New Entitlement Policy > Import Existing Policy

-  Enable the policy through Policy View

-  Publish the policy through Policy Administrator.

-  Click on Tryit and send a request. ( given below is a sample request which you can download from here)




Now you will see that your newly created attribute finder has come into play :)

There are some configuration differences if you are trying a 3.x.x version; this would help you identify those changes if you are using an earlier version.

Ashansa Perera: Configure SAML2 Single Sign-On on WSO2 servers with WSO2 Identity Server

By following this post you will be able to find out how to configure WSO2 servers to have SAML2 SSO with WSO2 Identity Server (IS) as the identity provider. It is really simple to configure SAML2 SSO for carbon servers.
I will refer to the server on which you need SSO configured as the 'carbon server'; just by following the two steps below you can configure SSO on your carbon server with WSO2 IS.

1. Configure your carbon server to enable SSO

All the configuration required for SSO on your carbon server is in CARBON_HOME/repository/conf/security/authenticators.xml

  • Enable SSOAuthenticator in authenticators.xml

( 1 ) Set disabled="false"

( 2 ) This should be unique to your carbon server. You will need this value when configuring IS too.

  • Start your carbon server with an offset ( offset can be configured in carbon.xml)

2. Register a service provider in IS side
  • Start IS in default port ( 9443 ) and log in 
  • Follow Main > Manage > SAML SSO > Register New Service Provider
  • Add the unique identifier ( 2 ) as the Issuer
  • Provide Assertion Consumer URL with your carbon server info as https://[host name]:[port]/acs
  • Tick on Enable Response Signing and Enable Assertion Signing
  • Click on "Register"

Now you are done. You can simply try to log into your carbon server with SSO.
To verify
    - Try to access https://[host name]:[port]/carbon
    - This will direct you to the authentication endpoint of IdentityProviderSSOServiceURL specified in authenticators.xml
      ( here https://localhost:9443/authenticationendpoint )
    - Give the credentials and hit Sign in
    - You will be logged in to your carbon server

Ashansa Perera: A simple overview of the main roles in App Factory and how they are involved in the application process

With the multi-tenanted App Factory there are some changes to the user model. I am going to give you an idea of the default roles and the main actions those roles are responsible for in the application space in App Factory.

Organization Admin
  • Creates a space for the organization in App Factory.
  • Can add organization level users and assign them roles
    Default roles would be Developer, DevOps, QA, Application Owner, CXO

Application Owner
  • Only the application owners can create applications.
  • After creating an application, he can assign people ( who have already been added to the organization by the organization admin ) to his application. Those people become members of the application and play the relevant roles ( developer, QA... ) assigned to them for the created applications.

Developer
  • Will see all the applications of which he is a member.
  • Can do git clone, push, trigger builds, etc. ( the work related to developing the application )

QA
  • Will see the applications of which he is a member.
  • Can perform testing tasks. ( testing the deployed artifacts, reporting bugs... )


CXO
  • Can view dashboards.

Ashansa Perera: Hadoop - pseudo distributed mode setup

You can simply convert your standalone Hadoop setup to pseudo-distributed mode with the following changes.

  • In HADOOP_HOME/etc/hadoop/core-site.xml, add


  • In HADOOP_HOME/etc/hadoop/hdfs-site.xml, add


  • Make sure that you can connect to localhost with ssh.
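The two property snippets for the bullets above were lost in formatting. The standard pseudo-distributed values, per the Hadoop 2.x single-node setup guide, are:

```xml
<!-- HADOOP_HOME/etc/hadoop/core-site.xml -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

<!-- HADOOP_HOME/etc/hadoop/hdfs-site.xml -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
```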

Start and test your hadoop setup

  • First navigate to HADOOP_HOME
  • Format the hadoop file system
        bin/hdfs namenode -format
  • Start the name node and data node
        sbin/start-dfs.sh
  • Now you should be able to browse the Hadoop web interface through http://localhost:50070 (the default NameNode web UI port in Hadoop 2.x),
        and your hadoop file system under
        Utilities > Browse the file system

  • Add /user/[username] to the hadoop file system
        hdfs dfs -mkdir /user
        hdfs dfs -mkdir /user/[username]
        You will be able to see these directories when you browse the file system now, and you can list the files with
        hdfs dfs -ls ( ie: hdfs dfs -ls / )

  • Copy the input file to the hadoop file system
        hdfs dfs -put [local file] [dfs path]
        ie: hdfs dfs -put myinput input
        and the file will be copied to /user/[username]/input

  • Run the application with
      hadoop jar [local path to jar file] [path to main class] [input path in dfs]  [output location in dfs]
        ie: hadoop jar myapp.jar input output

Result file: part-r-00000 should be saved in the output directory of dfs ( /user/[username]/output )

Ashansa Perera: Specify download directory at the time of downloading - Safari

When I download something, I like to save it to a relevant directory on my machine. When I moved to Mac and Safari, one of the most inconvenient things I experienced was that Safari downloads everything to the Downloads directory and I cannot choose the destination at download time. (Even though I can change the default path from Downloads to some other directory, it still downloads everything there, which was not what I was looking for.)

After much trouble going through different sources, I finally found something reasonable. Since I saw that there are many more people out there with the same problem, I thought it would be useful to share this simple tip (though the tip is simple, it was very useful to me, and I had to spend a considerable amount of time to find it).

So, if you need to specify the download directory at download time, instead of just clicking the link:

Right-click on the link and choose ‘Download Linked File As…’

This will do the trick.
Hope this would make your life easier with Safari ☺

John Mathon: Tech Explosion, the greatest in history. Things that cost millions of dollars are costing thousands. Things that took years are taking months.

Yes, I’m an optimist, so you might put a grain of salt next to my unmitigated optimism about how things will go but I think we are seeing the greatest time in technology history unfolding and it is breathtaking and fun to participate and think about.   I believe it will have an impact on our economy and create jobs too but let’s look at it from the ground.  What is happening that drives my optimism?

I delineate below what I have seen in trends and technology I know is in the pipe.   I include some references at the bottom if you want support for the basic ideas here.

Software:  Tectonic change but hidden

Virtuous Circle

The technology changes brought about by the virtuous circle I talk about are revolutionary in terms of productivity and cost reduction.  Things that cost millions of dollars are costing thousands now.  Things that took years are taking months.   We are seeing a paradigm shift before our eyes that is coming about faster than I have ever seen one before and it is having a bigger impact on business than anyone seems to acknowledge.  Virtually every business is up for disruption in this new economy.

When I say tectonic but hidden I mean the average person doesn’t know bigdata, APIs and DevOps from an apple or orange.  They have no idea why technology is changing so fast but I believe everyone sees the change.

The shift to platform 3.0 which is about Mobile, Social, Cloud, IoT is synergistically driven meaning that these technologies support each other so that they each accelerate the impact of the other parts.   Underlying these major forces are the raw technologies of the APIs in the cloud, Open Source and DevOps/PaaS technology which enables these technologies, automates and makes their adoption possible faster.   I believe this is a new thing.  It is so different from what we did during the distributed computing world I call it Platform 3.0 or the connected platform.

The “App” Economy

Exponential Value from Connectedness

A big part of this rapid change is due to what I call the “network effect”: as you add more services and more devices to the cloud, they build value in ways we didn’t anticipate, driving value and participation much higher than originally thought.   Users are spending 84% of their time on their smartphones in apps.   Apps are becoming the dominant way people interact with other people, software and services in the new economy.   Apps have become the proxy for all our interactions with business, consumption and social life as well.  It is an App economy.

Business is seeing the change in ways that are transforming how it interacts with customers and partners, in a fundamentally more intimate way than ever before.  This is possible because of the ubiquitous services and technology available through the Cloud and Mobile, which allow persistent connectivity and relationships with customers and partners in more ways, and, more important, dynamically and with agility.   The new technology allows companies to react and offer new ways to combine and deliver value faster than ever before.    Boeing, for instance, is working with WSO2 to create a way for the 1400 airlines in the world to interact with Boeing individually and manage their Boeing airplanes better.   This is part of the connectedness of platform 3.0 called an Ecosystem PaaS.   Boeing is enabling airlines to become smarter by connecting to it.  How?

Smarter Apps because of Network Effect


Everything is becoming connected.  I call it the network effect.    When I get in my car it tells me my schedule.  It guesses where I am going, checks the route and if there is a problem suggests an alternate route.  It guesses based on past traffic patterns and knowing if an accident occurs how long it will take to clear up and if I will be there.  How many times have you gotten on the road and realized you should have checked if there was an accident or there was an accident but now it’s clearing up?  Technology is becoming smarter faster by utilizing combinations of services, i.e. the Network Effect.   When I look up things in the cloud I get better answers and smarter answers, with contact information included, prices, locations, reviews.   If I map to someplace it tells me how to get there and how long by car, mass transit, taxi, walking, airplane, whatever.   I can then book those things on the spot or click and have the destination sent to my car so I can go right away.   My car is smarter, knows how to drive itself sometimes, knows where I am going and responds to traffic in real time to reroute me.  My car knows when to raise itself higher automatically and to lower itself based on history.   It knows my schedule and tells me when I need to leave or helps me tell others I am on the way, where I am and the schedule.   I can listen to my books anyplace and find new books easily while on the way and listen to music and talk from anywhere in the world in my car.  My music knows what I like and finds new music for me automatically that I like.

One of the new segments of the market for iPaaS is called API aggregation and consists of companies that combine APIs from different companies to create an easier-to-use combined set of APIs, or just a place to find APIs more easily.  Just more evidence that the services explosion of APIs and Platform 3.0 is really happening.

Higher value is created by leveraging multiple services and bigdata to anticipate obvious things and do them for me if I just say yes. The new iWatch sounds like it will have many of the features Google Now provides: using knowledge of your schedule and locations, it tries to anticipate your next move and provide what you need when you need it.   Now that’s useful stuff.

It affects everything

All of this is brought about because the ability to do software and improve software and deliver software is an order of magnitude faster than even a few years ago.   My car updates itself as well as my phone and other devices every few weeks.   Soon everything in my home will be getting updates from the cloud and getting smarter and better AFTER I bought them.   10 years ago this would have sounded like a crazy idea that was 40 or 50 years away from reality.   It is happening and you can’t run a business without understanding where this is going and how your business is affected by it.   You have to be smart and fast and react like a cougar.   The technology of BigData, APIs, Cloud, PaaS, Open Source, Mobile, Social are all critical to absorb and this is what I call platform 3.0.

Hardware: Smaller, Faster, Lighter, Cheaper

I divide hardware into computer hardware and IoT hardware.   Computer hardware  is a combination of the 3 fundamental types of computing resources (Networking, Compute and Storage).   I include IoT as part of the hardware because much of the physical world we consume will be connected in the future through connected computers built into them.



There are new technologies for the home and cell on the horizon that promise more and faster wireless connectivity, including WiMAX and the evolution of LTE to XLTE; we will see up to 1Gb data rates to the home and on cellular.  SDN promises to dramatically improve the cost to administer networks in corporations and data centers.   Longer term, I believe there are no limits to providing terabit or higher connectivity in the home and cell if required.  Even more impressive, these data rates are being provided more ubiquitously, at lower and lower power consumption, over longer ranges.  BLE (Bluetooth low energy) can run 300ft and operate at a hundredth the power of wifi or cell, or less.  These power savings are critical to IoT.   For something cool check out this kickstarter project that gives you 1km wireless at home.


(Image: a 250MB disk drive becomes a 250GB chip in the hand.)

We are in a constant battle between the vast amounts of data we are accumulating and the growth in storage densities to store it.   This has to be one of the most remarkable things in my mind that has happened that I didn’t realize would be possible.  Engineers in the 70s thought that the theoretical limit of density for a semiconductor chip was 64,000 bits.  We are at >640,000,000,000 bits for a chip area much smaller than they were talking about, so those engineers were off by a factor of about ten million in what we would achieve in 40 years, and nobody thinks we are close to a theoretical limit now.  I talk to company after company whose plans include keeping data on every American (or more), so the needs between bigdata and the raw storage for fundamental data are exploding, but the ability to store it keeps expanding at an exponential rate.    If this dynamic of the technology keeping up with demand stops we’re in trouble, but it seems fine for the foreseeable future.  I am always stunned to think that a 256GB chip has a trillion transistors in a space that is two-dimensional and smaller than my thumb.  It’s just hard to believe sometimes this is even possible, and it’s so cheap it is at consumer cost!


Compute is exciting because we have dramatically reduced the cost of the compute portion.  The overall speed of computing hasn’t undergone radical change in a while, but what used to be a $700 processor is now a $7 processor that can be put into a cell phone.   We now buy computers with 16 or more CPUs, each running at 2Ghz or more. More important is the huge power reduction.  The $1 processors now run on an infinitesimal amount of power compared to their predecessors.   We are now building compute capability that can run for a year intermittently on small thimble batteries.   This is making the IoT revolution possible.

IoT (Internet of Things)


The IOT explosion is possible because of the changes to the hardware above.  The ability to have compute, storage and decent networking for extremely low cost (under a dollar in some cases), using so little power it can last for a year of intermittent use is creating another micro-computer disruption.   This time on a scale hundreds of times smaller and cheaper than the previous 1980s disruption.   It is another step to the nanotech science fiction writers have been talking about for decades.   The race in my mind is between man-made nano devices and biological engineering of nano-devices.   Right now I would say the biological side is winning.  We are doing things on a nano scale in biology already whereas electronic nano devices are still imagination for the most part. I have read articles on working nano batteries.

This ability to embed compute power in virtually everything from clothing to every device is now viable and happening at a hyper rate.   There will be a data explosion as data from these devices is delivered to the cloud and then the “network effect” as people find value in the combined knowledge and interactivity we get from all these devices added to the cloud.

Much of industry is already “instrumented.”  Many buildings already have automated systems for HVAC, etc. This will all be revamped as much lower cost IoT technology becomes mainstream, improving and lowering the cost of all such technology across the board.  Robots, 3d printers, and all devices in our homes and workplaces will be connected and use common protocols like publish/subscribe.   We have no idea where the network effects of these things will lead us, but I am sure they will lead to lower costs, mass proliferation and a smarter world.

This is another step in the consumerization of IT, which is: higher-cost IT technology becoming available to consumers at consumer prices, so that IT ends up purchasing many of the same products you and I do for our homes, and we end up with higher quality products in our homes and businesses.

Basic Science (Physics, Chemistry, Batteries, Materials):



Source: C.-X. Zu & H. Li Energy Environ. Sci. 4, 2614–2624 (2011)/Avicenne

We have been talking about better batteries for some time to little avail, but I believe we are going to see some big improvements here in the next 5 years.   Various modifications of Li-ion technology, involving new cathodes and new charging strategies, promise possibly slightly higher power density, but more along the lines of reduced charging time and improved lifetimes as well as lighter weight.  It has been shown practical, by carefully controlling the charging of Li-ion batteries, to extend their lifetimes by a factor of 10.  It is also very likely that a number of different paths could lead to charge times that are an order of magnitude faster.   We are also seeing safer Li-ion batteries and lower cost.  Other technologies may leapfrog Li-ion, but with just the improvements expected and engineered today we will see significant gains in the next few years.   The emergence of the electric car, specifically Tesla, could really power these battery improvements, because Tesla plans a battery factory which will double world production of Li-ion batteries in one factory.    I have great confidence that Elon will be able to leverage these new advancements and make long-life, fast-charging, lower-cost, lighter-weight batteries a reality.  I reference some articles on battery technology below to give you the specifics of what I think are promising technologies.   Some are absolutely in the pipeline today and some are more speculative.


(Image: Bi2212 unit cell.) Strongly correlated oxide – high-temperature superconductor

Our ability to engineer materials is rapidly improving.  Part of it is a better understanding of fundamental science, and some is a better ability to manipulate things at smaller and smaller scale.  Just as with computer science, as we gain skill at producing some new materials at scale we learn how to produce other materials too.  New screen technologies, nanostructured ceramics, designer atoms, nanocrystals, and single-atom-layered sheets of platinum or carbon structures give us materials that can rapidly change the progress we make in IoT, quantum computing, batteries, genetic manipulation tools, screens, sensors and all kinds of tests.  One company, for instance, turns the blood tests you get from your doctor, which cost hundreds of dollars and take vials of blood, into a couple of dollars and a pinprick of blood.   Microscopes can now see individual atoms and the orbitals of the electrons around them.   This type of power will enable the quantum revolution that I believe is coming in materials and computing.   D-Wave is producing a quantum computer today that has to be supercooled, but new materials science has enabled us to create high temperature quantum effects in materials such as correlated oxides.

Quantum Chemistry and Quantum Biology

Physics has been focused for a decade or more on relatively esoteric subjects that have little bearing on the real world.  However, recently the advances in quantum computing seem to be driving a new focus in physics on practical quantum technology.  Physicists have finally started to address a thorny issue that they tabled for decades.  The “measurement problem” is the problem that physicists have had since the 1930s, when quantum mechanics was first elucidated by luminaries such as Bohr and Schroedinger.  I want to make clear that quantum computers are not the only reason to think that quantum mechanics and its study and use have implications for our near-term future.  We have discovered quantum tricks that are useful for understanding how to do some really cool stuff in organic and inorganic semiconductors, which is enabling us to get beyond some of the limitations we face with a classical understanding, and we are discovering that quantum mechanics is at play in biological systems.

(Image:) Actual picture of the fuzzy (foam) electron orbital around a hydrogen atom, taken with a quantum microscope

The world appears to be mostly in a fuzzy state sometimes called quantum foam or fog where things are just probabilities but don’t actually appear to be anywhere.  In this fuzzy state the world evolves along multiple paths simultaneously but when we look, the foam disappears instantly and we see real particles with real locations and velocities that have chosen the least energy (and therefore the most likely state).    We know that if we make some noise near the cohered (fuzzy) state it will collapse.  We don’t know if decoherence is physical “action” that happens or is some measurement artifact.  Surprisingly, physics tabled this thorny issue 90 years ago and we are just now trying to get to the bottom of it.  There have been lots of fun speculative ideas like multi-worlds theories and Schroedinger’s cat paradoxes.   The most recent theory I know of that is intriguing (not suggesting it is the answer, just intriguing) is called quantum darwinism in which space itself has memory and evolves.

This quantum fuzziness allows nature to perform “magic,” like quantum tunneling, in which it transports electrons through complex paths at near zero energy loss.  Evolution has used this trick to make eyes that can see single photons of light, and to allow plants to leverage single photons of energy from the sun to build themselves.  Birds can detect incredibly small magnetic field variations.  Dogs’ noses can identify individual molecules and follow traces of them in the air.   If we can leverage such quantum tricks in our chemistry and technology, we can not only build fast quantum computers but also build amazing new sensors, new ways to leverage the sun for energy, and ways to do things on a scale, large or small, we can’t imagine.   It’s time we learn how to utilize what nature has given us.

Yes, certainly leveraging this for current technology in the next few years is highly speculative but the combination of the new microscopes, the new understanding of quantum systems, emergence of quantum computers will undoubtedly (and is) leading to some immediate surprising new materials and capabilities.   Simply understanding and being able to see these activities in nature can enable us to understand how proteins in our bodies work, how the configuration of the proteins binds to different things or accomplishes tasks that have been mysteries.

When people first started looking inside the human body at all the organs before we had microscopes we postulated all kinds of theories about what they did and how they worked, most of it complete garbage.  The ability to see what’s happening at finer and finer levels is producing gargantuan leaps in our understanding and abilities. Being able to understand those things may lead to being able to engineer new materials ourselves that leverage our understanding how chemistry really works, quantum chemistry.

(Image:) The amplituhedron helps physicists calculate quantum results dramatically faster than before

All chemistry is fundamentally quantum chemistry in the sense that at the individual molecule level the electrons and bonds are in quantum states of fuzziness.  We have ignored that and tried to operate at a higher level for a long time, producing rules and noticing behaviors at a higher level but we have missed the action of how everything actually works and therefore lack an understanding.   Another big advance on the horizon is the amplituhedron which gives us the ability to do quantum calculations much faster than before.  The combination of these discoveries and new tools will herald a new era in biology, chemistry, materials science.


(Image:) Process of translation of DNA to a protein machine

Amazingly, it was just 13 years ago that the human genome project produced its first “result.”   Since then we have gained exponentially better skill in mapping genes faster and faster, manipulating genes and splicing, as well as understanding our DNA.   We originally thought that everything was in the genes and most of our DNA was accidental junk, left over possibly from many mistakes.   It was thought that if we mapped all the genes we would have “done it.”  We have since learned that the “junk” DNA, which is 98% of the DNA code, is not junk but actually contains a second code: the “control programming” for the genes.

Genes are encodings of how to produce a protein machine. The reason many creatures have similar numbers of genes is that most creatures need thousands of machines to build and operate a big multi-cellular body, and many of these machines are common to all multi-cellular creatures. A human body doesn't need many more genes than a fly. These protein "machines" run around fixing, transporting, making, destroying: basically doing stuff. Our genes are different from a fly's, but many are the same or very close. A fly still needs machines to transport materials around the cell and body, and it still needs to regulate and make lots of chemicals. So, get over it, we're not that special, at least as far as our basic genes go. :)

The question then becomes: who is telling the cell which machines to make, which machines to turn on and off, when, and where to go? The "junk" DNA and other things seem to be the "programming" of the DNA, and we are learning that some of the code may not even be in the DNA, so this is a lot more complicated than we originally thought. Sometimes a disease occurs because you have a defective machine; sometimes because you have a defective control mechanism. Our growing understanding of how DNA works and how the body does what it does is obviously going to lead to significant advances. What? When? It's happening right now, in terms of understanding diseases and coming up with new drugs and new mechanisms to target the defects or to reproduce desired behaviors.


One discovery I can point to is a "miracle drug" soon to be released from Genentech, which a friend of mine is producing now. Genentech is able to create the antibodies, the protein machines our bodies produce to kill invaders, for the common cold: not a single cold but all influenza viruses. This is a miracle for many reasons. The limitations of the vaccines we've produced for the common cold are that they only cover the 3 strains we think will be this season's worst, and that a vaccine only works as well as our body can produce the antibodies to kill the virus. If we have a weak immune system, as most older people do, a vaccine can be useless. (The mortality rate for Ebola is 94% for people over 45 and 50% for 25-year-olds.) What Genentech has done is create the antibodies themselves, not an antigen, and because this antibody binds to a common feature of every cold virus it is able to kill all flu viruses, including bird flu for instance. Not only that, but because it is the antibody itself, it does not depend on the body's ability to turn antigens into antibodies; it can kill the virus even if your immune system is completely dormant. I think you will agree this is a pretty spectacular advance on multiple levels.

The application of bigdata to genetics, combining trial results, drug data and genetic data, is just becoming real. IoT may provide additional data to fuel the analysis and produce better results and more discoveries. It is clear to me that we are on the verge of a big step function in our ability to work in the body because of our new tools and new knowledge.

Revolutions in health are coming, although the costs of these new technologies are quickly outstripping our ability to pay for them. We have the ability to operate at the genetic level: splicing and inserting things into DNA is possible. Dramatic improvements in our genetic knowledge are spurring new techniques to detect disease and problems, and to provide a better response tailored to each patient.

Bodymedia armband detects heat flux, skin capacitance, moisture and motion

One of the largest and fastest growing costs in medicine is pharmaceuticals, and one of the biggest costs there is testing drugs in clinical trials. A clinical trial can cost tens of millions of dollars, partly because the cost increases linearly with the number of people in the trial. IoT devices and bigdata could herald a new era of lower-cost trials. Being able to track patients and monitor them constantly could easily lead to massive results. What if you could detect that someone was pre-stroke or pre-heart-attack by a couple of hours or minutes? If someone could self-administer an aspirin or a drug, or even simply pause and stop what they are doing, it could cause a significant drop in heart attacks and improve the death rate impressively. Google thinks so; it is investing in early-detection devices. If more patients could be released from hospital with monitors built in, it could lead to much lower costs, a much better lifestyle and better results. Let us not be timid: the implications of this stuff could be huge, because medical costs are 20% of the US economy, and a 2% cut is equivalent to the revenues of the entire cloud industry for 2014.

Space Flight

Russian Proton explodes | Virgin Galactic explosion | Orbital Sciences Antares explodes

An unmanned Antares rocket explodes seconds after liftoff from a commercial launch pad at Wallops Island, in a still image from video.

Elon Musk’s SpaceX is the only company that didn’t lose a rocket in the last couple of months, as several high-flying rockets crashed and exploded. Below is Elon’s SpaceX Falcon 9, now with 10 consecutive successful launches.

SpaceX Falcon 9 (F9 FLT-001)

More impressive, over the last several launches SpaceX has been testing the radical reusable landing capability it expects to leverage to lower the cost of reaching space.

SpaceX Falcon reusable landing

Currently NASA pays SpaceX, other companies and the Russians between $60 million and $120 million for each launch. Elon has said that 90% of the cost of each launch is the rocket itself, which is lost. If he is successful (who would bet against Elon now?), the cost of travel to space could drop by a huge amount, enabling a radical leap in our ability to explore space.

Sure, this isn’t going to transform our economy or our current way of life soon, but for people like me who dream that someday humans may go beyond our little ball and explore, it is incredibly exciting. In the 60s the space program fueled advances in lots of technologies and produced economic benefits. A path forward in space depends entirely on lowering the cost of lifting material out of our gravity well. Elon looks to be finally able to achieve the dream the shuttle had: reducing the cost to the point that we can think of space as a viable place to do business, explore and eventually establish colonies.


Okay, I went from very practical, happening-today technology with immediate business impact to technology that is more speculative and probably farther out. The reason I include the more speculative things is that there are real advances on the cusp of happening, or that have already happened, with the potential to turn those previously dreamy ideas into reality. It is exciting. We don't know where future advances will happen; they always surprise us. But the advances I'm talking about here are the kind that truly disrupt and create vast opportunity. Whether some of those opportunities materialize is questionable, but we have reason to believe they may well be impactful sooner rather than later.

Hope this was entertaining, if not inspiring. I believe I have proved this is easily the most exciting time for technology in the history of mankind.

Related Materials:

WSO2 App Factory

The Virtuous Circle is key to understanding how the world is changing – Mobile, Social, Cloud, Open Source, APIs, DevOps

Is 10,000,000,000,000,000 (= 10^16 ) API calls / month a big deal?


Manipulating complex molecules by hand

Artificial Intelligence, The Brain as Quantum Computer – Talk about Disruptive

Enterprise Application Platform 3.0 is a Social, Mobile, API Centric, Bigdata, Open Source, Cloud Native Multi-tenant Internet of Things IOT Platform


Batteries that last longer

A Super-Strong and Lightweight New Material

Google is developing cancer and heart attack detector


Dimuthu De Lanerolle

TAF Notes

1. automation.xml - without tenants

            <tenant domain="carbon.super" key="superTenant">
                    <user key="superAdmin"/>
                    <user key="user1"/>
                    <user key="user2"/>
            </tenant>

Sivajothy VanjikumaranHow to get the tables list in ms sql server that has the data

This query shows the row counts and data-size details for the tables in an MS SQL Server database.

SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    t.Name

Sivajothy VanjikumaranRetrieving source IP detail in WSO2 ESB and APIM

These properties will retrieve the respective source IP into the properties.

Remote host 

   <property name="client-host" expression="get-property('axis2', 'REMOTE_HOST')" />

Remote address 

   <property name="client-address" expression="get-property('axis2', 'REMOTE_ADDR')" />


X-Forwarded-For header (useful when requests come through a load balancer or proxy)

   <property name="xforward-header" expression="$trp:X-Forwarded-For"/>
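Note that when a request passes through several proxies, X-Forwarded-For carries a comma-separated chain of addresses, and the left-most entry is the original client. The Synapse property above returns the raw header value; splitting it is up to your mediation logic. A minimal sketch of that split, in Python purely for illustration (the function name and sample addresses are my own):

```python
def client_ip(x_forwarded_for: str) -> str:
    """Return the originating client IP from an X-Forwarded-For value.

    The header holds a comma-separated chain: client, proxy1, proxy2, ...
    The left-most entry is the original client address.
    """
    return x_forwarded_for.split(",")[0].strip()

print(client_ip(","))  # left-most entry is the client
```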

Aruna Sujith Karunarathna[WSO2] Sample Web Application to Demonstrate Insertion, Retrieval and Deletion of a resource to Registry

Here is a sample web application to test insertion, retrieval and deletion of a resource in the Registry. Here is the sample servlet code. Github Link package org.wso2.carbon.test; import; import; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet;

John MathonA simple guide to Blockchain and a compelling use case: Voting Election Democracy Worldwide Zero-Fraud


(Mandelbrot Set signifying the alarming simplicity and yet powerful possibilities of some simple math)

The Blockchain is useful for financial, legal and other applications in general, not just for Bitcoin. It is a new enterprise-grade technology that disrupts a lot of existing trust and transaction systems.

Properties of the Blockchain:

1) You have a unique code which nobody can fabricate from anything you do. They can’t “be you” without stealing it from you.

2) Other people have a way of determining that you and only you have the code, so assuming you haven’t lost it, other people know you are who you say you are.

3) When you do a transaction (buy something, send someone money, vote, sign your will, sign a contract) the information is recorded on the blockchain in an indelible manner and copied hundreds or thousands of times. The blockchain is publicly viewable by all, proving that you have done the thing you did, and it is very, very hard for anyone to erase or change your transaction no matter what access they have to any system or data. This includes governments, large corporations, banks, rich people and despots.

This is the basis of the blockchain and its usefulness, for instance, for currency transactions. In a sense the blockchain is more secure than a bank or government could ever be. There is no known way to use your unique code without stealing it from you. There is no way to change (or delete or add) a transaction on the blockchain; it is an inviolable and permanent record of transactions.
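These properties can be sketched with a toy hash chain. This is not Bitcoin's actual implementation, just a minimal Python illustration of why an indelible, widely copied record is tamper-evident: each block's hash covers the previous block's hash, so editing any early transaction invalidates every later link.

```python
import hashlib
import json

def block_hash(transaction: dict, prev_hash: str) -> str:
    """Hash a transaction together with the previous block's hash."""
    payload = json.dumps(transaction, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(transactions):
    """Link each transaction to its predecessor by hash."""
    chain, prev = [], "0" * 64  # genesis hash
    for tx in transactions:
        h = block_hash(tx, prev)
        chain.append({"tx": tx, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Re-derive every hash; an edited block breaks all links after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["tx"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

ledger = build_chain([{"from": "alice", "to": "bob", "amount": 5},
                      {"from": "bob", "to": "carol", "amount": 2}])
assert verify(ledger)
ledger[0]["tx"]["amount"] = 500   # tamper with an early transaction
assert not verify(ledger)         # every copy of the chain detects it
```

A real blockchain additionally replicates the chain across many nodes and extends it with proof-of-work, so a forger would have to redo that work on most copies at once.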

Use Case:   Fraud-less Voting


An example of how to use the blockchain for another application is “voting.” In a voting system we want to ensure:

1) each person can vote once and only once

2) each person is a “valid” voter

3) we are 100% sure that the ballot we have is exactly the ballot the voter cast, without necessarily knowing the identity of the voter

4) the ballot box cannot be stuffed with fake IDs

5) the votes of each person are confidential

This is a pretty straightforward usage of the blockchain.


Each voter, when they register, engages in a process where they generate a public/private key pair, of which the government keeps the public key for the registered voter. It is desirable that the government simply keep the public key but not associate any personal information with it identifying the person. The government can have a separate database with lots of personal information on who the voters are, but the public key should not be associated with a particular voter. The number of registered voters should match the number of public keys; other than that, the public keys need not be associated with individuals. The list of valid voter public keys should be made public and downloadable.

Getting the ballot and filling the ballot

Any person can request a ballot. Ballots are publicly available anywhere on the web and could be in any form, including a basic text file. Once downloaded, the voter can fill in the ballot with any software of their choosing, which will “mark” the ballot. It is desirable that the source code of valid ballot-filling programs be published on the internet as open source. It is also important that the binary hash of the executable ballot program be compared against the hash of the official ballot program. There should be identification programs which make sure that the hash codes of the ballot-filling program and of the ballot itself are checked by a third program which validates that you have official material. These programs should run on web sites operated by independent international agencies that keep a database of the official ballot hashes and ballot-filling applications. An invalid voter could still put a vote on the blockchain, but it wouldn't be counted unless it had a valid public key associated with a valid voter; alternately, the voting software could check the list of valid voters' public keys before putting the ballot on the blockchain.

Submitting your ballot

You submit your ballot to the blockchain through a ballot submission service. The ballot-filling application will also give you a hash code of the completed encrypted ballot. Once the ballot is submitted, the submission service will return the hash code of the received ballot. In case there is any concern that an incorrect (or fudged) ballot was somehow submitted, the ballot-filling program can also verify that the blockchain shows your ballot, with the correct hash code, has been added to the blockchain.
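The receipt check described above amounts to comparing digests. A tiny Python sketch (the ballot bytes here are placeholders, and I am assuming an agreed digest algorithm such as SHA-256):

```python
import hashlib

def ballot_hash(encrypted_ballot: bytes) -> str:
    """Digest the voter keeps as a receipt at submission time."""
    return hashlib.sha256(encrypted_ballot).hexdigest()

submitted = b"<encrypted ballot bytes>"   # placeholder content
receipt = ballot_hash(submitted)          # returned by the submission service

# Later, the voter re-hashes the ballot recorded on the public chain
# and confirms it matches the receipt.
on_chain = ballot_hash(submitted)
assert receipt == on_chain
```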

Counting the Ballots

Since we have the public key for your ballot, we can decrypt your ballot and every other ballot submitted, and therefore sum the votes; almost anybody could do this, since they would all be on the blockchain and visible. We cannot determine who filled out which ballot because there is no association between public keys and individuals. All we know is that the public key belongs to a valid voter, as established by the registration process.

Other forms of fraud that are impossible with this system

Nobody can vote your ballot because they don’t have your private key. Public/private keys are unique, so they can’t be mass manufactured without a disconnect from the number on the rolls, and since real people will have registered, there will be a set of public keys which go with those voters (not individuals, but voters in general). Using alternate keys won’t work because only the correct public/private pairs will work on the blockchain (voting roll); the ballots won’t decrypt. It is impossible to “lose” votes, since all votes will be on the blockchain, copied potentially hundreds of times worldwide.

Advantages of such a system

This system has massive advantages which could transform democracy throughout the world.

1) Impossible to stuff the ballot box

2) impossible to modify a ballot

3) the ballots are all visible publicly so that any third party can count the votes anywhere in the world

4) no ballot is identifiable to any individual

5) Ballots cannot be lost

6) The voter is certain at the time of voting that their unmodifiable ballot, exactly as they filled it out, is on the blockchain and counted

7) The system is open source entirely

8) International and national organizations can verify the integrity of the system and the entire process from start to finish


Here are some articles suggesting the same:


Hasitha Aravinda[Active-MQ] PortOffset

1) Changing transport ports. Edit /conf/activemq.xml
and change the port numbers in the following configuration.

2) Changing the web console port. Edit /conf/jetty.xml
and change the port in the following configuration.
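The referenced snippets look roughly like the following in ActiveMQ 5.x (the exact layout varies by version, so treat these as examples). In activemq.xml the broker ports live on the transport connectors:

```xml
<transportConnectors>
    <!-- change 61616 (and any other connector ports) to apply an offset -->
    <transportConnector name="openwire" uri="tcp://"/>
</transportConnectors>
```

and in recent 5.x versions the web console port is set in jetty.xml (8161 by default):

```xml
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <property name="host" value=""/>
    <property name="port" value="8161"/>
</bean>
```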

Hasitha Aravinda[Active-MQ] Setting up AMQ with MySQL database.

1) Create a MySQL database.

2) Download activeMQ and extract it.
3) Edit /conf/activemq.xml
4) Add the following configuration.
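A typical jdbcPersistenceAdapter setup, following the ActiveMQ documentation (the database name "activemq" and the credentials are examples; point the URL at the database created in step 1), is:

```xml
<!-- inside the existing <broker> element -->
<persistenceAdapter>
    <jdbcPersistenceAdapter dataDirectory="${}/data" dataSource="#mysql-ds"/>
</persistenceAdapter>

<!-- as a top-level bean, outside <broker> -->
<bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost/activemq?relaxAutoCommit=true"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
</bean>
```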

5) Copy mysql jdbc driver (mysql-connector-java-5.1.25-bin.jar) in the directory "activemq_home/lib/optional"

6) Start ActiveMQ server using $ ./activemq start ( To stop the server use ./activemq stop )

7) Log in to activeMQ management console using http://localhost:8161/ with admin:admin credentials.

8) Create a queue and send a message to the queue with persistence enabled.

9) You can see the message in the database.

sanjeewa malalgodaHow to change logged in user password in WSO2 Carbon based products

If you need to call the UserAdmin service to perform this operation, you can use the following information.
Service URL


<soapenv:Envelope xmlns:soapenv=""
Another possible solution is to change the password from the management console user interface. Visit the following URL and go to the change password window:
Home > Configure > Users and Roles > Change Password

John MathonRDBs (Relational Databases) are too hard. NoSQL is the future for most data.

Exponential Value from Connectedness

Conventional vs NoSQL

Conventional RDBs require a lot of maintenance and significant operating overhead, but the biggest problem is that specifying a static schema for data is incredibly time-consuming, error-prone and frequently changing; it is just such a 2000s, pre-agile way of operating. Today people want to collect data and decide later what fields to put in, what fields are interesting to look at and what the format will be. People want to explore data and add data in an agile way, then build business rules and intelligence quickly. RDBs are too hard.

Problems with RDBs

1)  Too hard to add new data sources

2) Complex Schema Specifications take too long

3) Changing Schemas is hard and potentially impactful on applications

4) RDBs scale to a limited size, which in today's world is simply too small for much of the data we want to look at

5) RDBs, and the tables in them, each require lots of maintenance, care and feeding to stay healthy and fast

Why NoSQL is a good solution for a lot of data

It is easy to stream data to a Cassandra, HBase or MongoDB database. Frequently it is just a matter of configuring a tool that feeds the data into the database. Tools like the open source WSO2 BAM make it trivial to add data.

Once you’ve got the data in, you can use numerous tools to visualize it and to find patterns of interest, metrics, or other sequences that look like candidates to automate or that could make your system smarter. This is an iterative process; it is not easy to anticipate in advance all the ways you might use the data or what will be interesting later. Feedback from users, customers or partners may produce new insights or new value as time goes on.

Once you’ve found some interesting statistic, event or combination of events, you want to automate some behavior. With an RDB you may already have a program you can modify to add the new functionality; that is a big, risky change. You may write another program. You may decide later to remove the association or to expand it, requiring more programmatic changes. With NoSQL you can drive specific automations quickly and easily from new ideas. For instance, you discover a correlation between sales or interest in your product and certain news stories, or you notice that when people look at an article they tend to go buy a certain product. You can use any stream of data to correlate, add an event, and set up a business process to implement the new idea quickly. Using tools like BAM and a Business Process Server with an event-driven architecture, you can practically implement such things the same day you think of them.

With a SQL RDB, doing this programmatically takes weeks or months. More important, it is very unlikely you will have the data in an RDB in the first place, because the cost of keeping data in an RDB is so high that nobody would stream news data, the detail of everything people look at, or every event in your network into one. So some of the things you can easily discover in the NoSQL world are probably impossible.

This easy adaptability to new ideas, new requirements to implement something is characteristic of Agile.

What you need:

To make a very flexible BigData architecture that allows you to build new automation quickly you need a set of open source components in addition to the bigdata database.    There are many open source alternatives to the following:

1) Cassandra, HBase or MongoDB

2) Hadoop

3) Hive

4) Pentaho

5) WSO2 BAM (includes adapters for files, capability to configure metrics and new data sources)

6) WSO2 CEP for real time event processing

7) WSO2 Business Process Server (to build processes around the events and correlations, metrics)

The other unspoken advantage of NoSQL databases is that they are all open source, proven at billions of records per day, scale easily to arbitrary size, and are free as far as software license fees go.

Where SQL still makes sense

When you look at the cost of commercial enterprise databases (Oracle 10+ can cost millions and millions of dollars per year), you have to have an awfully good reason to put something in so expensive a storage vehicle. Transactional semantics would seem to be the key advantage of RDBs, although it is easy to configure tables in Cassandra to keep multiple copies and guarantee whatever level of reliability you want. Complex joins are a good reason; the comparable approach in NoSQL is to do map/reduce and put the result set into an RDB. A huge advantage of NoSQL over RDBs is that NoSQL runs these queries in parallel over massive data sets that would be impossible with an RDB, but the results usually are not immediately available. Depending on how fast you need the result, an RDB might be a better choice. If you use NoSQL with Hadoop map/reduce to do the joins, then open source RDBs are a good solution for storing the result set.

A lot of effort has gone into building data warehouses for data analysis over the last 20 years. Many of these could be built with NoSQL databases, depending on the data processing to be done on the results, more scalably and at vastly less cost.

Other things that may be interesting to read on this topic:

Oracle Database Maintenance





Deependra AriyadewaHow to setup VFS transport in WSO2 ESB with Samba

Environment: WSO2 ESB 4.8.1, Samba 4.1.11, Ubuntu 14.10

Install Samba:

apt-get update
apt-get install samba

Configure two Samba shares in /etc/samba/smb.conf (the share names must match the ones used in the proxy below):

[SambaShareIn]
  path = /tmp/samba/in
  available = yes
  valid users = deep
  read only = no
  browseable = yes
  public = yes
  writable = yes
  guest ok = no

[SambaShareOut]
  path = /tmp/samba/out
  available = yes
  valid users = deep
  read only = no
  browseable = yes
  public = yes
  writable = yes
  guest ok = no

Set a password for user deep:

smbpasswd -a deep

Enable the VFS transport (transport sender and listener) in the ESB's repository/conf/axis2.xml:


<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

<transportSender name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportSender"/>

Now you can create a VFS-enabled ESB proxy, modeled on the standard WSO2 VFS proxy sample (the proxy name here is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="" name="SambaVFSProxy" transports="vfs">
   <target>
      <inSequence>
         <property name="OUT_ONLY" value="true"/>
         <property name="transport.vfs.ReplyFileName"
                   expression="fn:concat(fn:substring-after(get-property('MessageID'), 'urn:uuid:'), '.xml')"
                   scope="transport"/>
         <send>
            <endpoint>
               <address uri="http://localhost:9000/services/SimpleStockQuoteService" format="soap12"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send>
            <endpoint>
               <address uri="vfs:smb://deep:deep@localhost/SambaShareOut/reply.xml"/>
            </endpoint>
         </send>
      </outSequence>
   </target>
   <parameter name="transport.PollInterval">5</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.FileURI">vfs:smb://deep:deep@localhost/SambaShareIn</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">vfs:smb://deep:deep@localhost/SambaShareOut</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">vfs:smb://deep:deep@localhost/SambaShareOut</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
   <parameter name="transport.vfs.ContentType">text/xml</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
</proxy>

Now you can copy a SOAP message (test.xml) to the location “smb://deep:deep@localhost/SambaShareIn”. The ESB will poll for new files with the extension “.xml” and send them to the given service. The response will be copied to the location “smb://deep:deep@localhost/SambaShareOut”.


<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="">
   <soapenv:Body>
      <m0:getQuote xmlns:m0="http://services.samples">
         <m0:request>
            <m0:symbol>IBM</m0:symbol>
         </m0:request>
      </m0:getQuote>
   </soapenv:Body>
</soapenv:Envelope>

Aruna Sujith KarunarathnaCreate a WSO2 Worker-Manager Cluster in Just 2 Minutes !

I've been working on an application (WSO2 Cluster Wizard) which creates a worker-manager separated cluster for a given WSO2 product. The objective of this application is to reduce the time developers and testers spend creating clusters on their local machines. Puppet scripts can automate the process, but AFAIK no one uses Puppet to create clusters in their local setups. It is a simple GUI.

sanjeewa malalgodaEnable web service key validation and session affinity - WSO2 API Manager deployment in AWS

In a clustered environment, the following issue can happen if we haven't enabled session affinity:
TID: [0] [AM] [2015-11-01 23:31:42,819] WARN {org.wso2.carbon.apimgt.keymgt.service.thrift.APIKeyValidationServiceImpl} - Invalid session id for thrift authenticator. {org.wso2.carbon.apimgt.keymgt.service.thrift.APIKeyValidationServiceImpl}
TID: [0] [AM] [2015-11-01 23:31:42,820] WARN {} - Login failed.. Authenticating again.. {}
TID: [0] [AM] [2015-11-01 23:31:42,821] ERROR {} - API authentication failure {}

When the API gateway receives an API call, we make a token validation call to the key validation service running on the key manager server. First we authenticate with the key management service, and then we make a secure call to validate the token. This key management service runs on all API Manager nodes. The problem occurs when the client authenticates with the service running on one server but the actual validation call goes to another server. To resolve this issue, please follow the instructions below.

Set the following properties in the API Manager configuration file and restart the servers. With this, we use a web service call between the gateway and the key manager to validate tokens. Most load balancers cannot route Thrift messages in a session-aware way, so we use a web service call instead.

You also need to configure the key manager server URL properly in the configuration (see the following configuration).
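The relevant block in api-manager.xml is the APIKeyValidator section; a sketch for API Manager 1.x (host names and credentials are placeholders for your environment) looks like:

```xml
<APIKeyValidator>
    <!-- Point this at the key manager node (or its load balancer) -->
    <ServerURL></ServerURL>
    <Username>admin</Username>
    <Password>admin</Password>
    <!-- Use the web service client instead of Thrift -->
    <KeyValidatorClientType>WSClient</KeyValidatorClientType>
    <EnableThriftServer>false</EnableThriftServer>
</APIKeyValidator>
```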

To enable session affinity: enable session affinity and application-generated session cookies at the load balancer level, and set the cookie name to JSESSIONID. The load balancer will then route requests in a session-aware manner.

sanjeewa malalgodaAPI Manager distributed deployment best practices - API gateway Deployment in DMZ and MZ

Normally, when we deploy API Manager across different zones, we need to follow security best practices. In this post we will briefly describe an API Manager gateway deployment in the DMZ and MZ.

How API Gateway and key manager communication happens
In API Manager we have an authentication handler to authenticate all incoming requests. From the authentication handler we initiate the token validation flow. There is an extension point for the authentication handler, so if you need a custom flow you can write a new handler and use it. On the key manager side we have exposed a web service and a Thrift service for this (the key validation service). We have two options when calling the key manager from the gateway.
01. Web service call.
In this case we make a web service call from the gateway to the key manager over HTTPS, using transport-layer security. The gateway authenticates with the key manager using the username/password configured in the api-manager.xml file.
02. Thrift call.
In this case we make a Thrift service call from the gateway to the key manager. This call also uses transport-layer security (for login), and the gateway authenticates with the key manager (Thrift server) using the username/password configured in the api-manager.xml file.

How to secure the deployment (API gateway) in the DMZ from common attacks
We have configuration files, runtime artifacts and the required jar files in the repository directory.
We can use Secure Vault to protect the repository/conf directory. Then an attacker will not get sensitive data like usernames, passwords and important URLs, and we will be able to recover configurations even if the attacker gains file system access.
If we keep only ports 8280 and 8243 open, we don't need to expose management URLs to the outside world, so external users cannot use the management services to perform server-side operations.
If we have the API gateway separated into workers and a manager, only the workers reside in the DMZ. Even if an attacker destroys the Synapse configuration on the worker nodes, the manager node and the SVN repository still hold the original configurations, so we can easily recover from this type of situation.
We can also enable the Java security manager to avoid some common forms of attack (jar file modifications etc.).

Please see the attached image to identify the calls between the MZ and DMZ.

There are a few calls between the MZ and DMZ:
Key validation calls from the gateway to the key manager (over HTTPS).
Token generation requests from the gateway to the key manager (over HTTPS).
Artifact synchronization calls to the SVN server (over HTTPS).
Calls from the gateway to back-end services hosted in the MZ (over HTTPS or HTTP).
Cluster communication between the worker and manager nodes (TCP-level cluster messages).

Shelan PereraHow to correct "ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)" on Mac OS X

I tried to install MySQL using Homebrew. Everything was successful, but I could not connect to the server. Obviously the server had not started.

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)


The following command rescued me:

mysql.server start

(Use mysql.server stop to stop the server.) I found it in this Stack Overflow answer.

Chamila WijayarathnaMy GSoC Experience with Apache Thrift

This year (2014), I did my second GSoC with Apache Thrift. Apache Thrift is an open source cross-language Remote Procedure Call (RPC) framework with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and other languages.
In my GSoC, I worked on "THRIFT-847: Test Framework Harmonization across all languages". The Thrift cross-language test framework tests communication from clients created in various languages to servers created in various languages. There are also various communication protocols and transports over which an RPC communication in Thrift can happen. To be complete, the cross-language test suite should cover all client language / server language / protocol / transport combinations. For languages which support SSL, communication should be tested both with SSL and unsecured. The proposal I created lists the languages, protocols and transports Thrift supported at the time I wrote it; by the time you read this post there may well be more, since the Thrift community is always busy adding new features. If you have downloaded and installed Apache Thrift from source, you can run the cross-language test suite by running 'make cross'. It will run the suite with output like the following.

chamila@chamila-Dell-System-Vostro-3450:~/GSoC/thrift/test$ ./

Apache Thrift - integration test suite

Wednesday, 19 March 2014 20:59:11 +0530


client-server: protocol: transport: result:

java-java binary buffered-ip success

java-java binary buffered-ip-ssl success

java-java binary framed-ip success

java-java binary framed-ip-ssl success

java-java binary fastframed-ip success

java-java binary fastframed-ip-ssl success

Before I joined this project, it had most of the tests needed for Java and C++, and some of the tests for Node.js and C#. The main objective of my project was to add the missing tests to the cross-language test suite. Also:
• Port the cross-language test suite to non-Linux systems
• Improve test and functionality reporting towards the features list for the web site
• Fix documentation
• Improve quality in general
• Help to make it perfect

After my proposal was accepted, I began by adding missing tests to the cross test suite. After 3 months of work, I was able to extend the test suite to support all tests for Java, C++, Node.js, Python, Ruby and Haskell. Related details and patches are available at . Also  lists the tests available before the project, what I added and what is still missing.
I also improved the cross-language test suite to record results to an HTML file, so the results can be inspected more comfortably. Related work - 
Running the cross-language test suite using a shell script has a few limitations:

  • redundant code in the script
  • limitations of the scripting language
  • not supported on non-Linux systems

So as a part of my project, I worked on moving the cross-language test suite from a shell script to a Python script. Related work is available at .

Other than that major work, I also helped by solving the following minor issues.

From this project, I was able to learn many technologies I hadn't worked much with before, such as Node.js, Python, Haskell, Vagrant and Travis CI. So it was a great 4 months of work for me, with a lot of knowledge and experience gained.
My mentor for this project was Roger Meier. He gave me a lot of help throughout the project. Not only him, but everyone from the Thrift community, including Randy Abernethy, Jake Farrell, Henrique Mandonca and Jens Geyer. Also my friends, including Maduranga Siriwardena, Geeth Tharanga and Dimuthu Upeksha, gave me a lot of help throughout the project. Thank you all for helping to make this project a successful one.

Project URL -

Deependra AriyadewaHow to set a Tomcat Filter in WSO2 Servers

Create your filter JAR and update the Carbon Tomcat web.xml to pick up the new filter.


<!-- Filter implementation -->
<!-- Filter mapping -->
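As a sketch of those two entries (the filter class name and URL pattern below are hypothetical placeholders; adjust them to your own filter), the web.xml additions would look like:

```xml
<!-- Filter implementation: registers the filter class under a name -->
<filter>
    <filter-name>MyCustomFilter</filter-name>
    <filter-class>com.example.filter.MyCustomFilter</filter-class>
</filter>

<!-- Filter mapping: applies the named filter to matching request paths -->
<filter-mapping>
    <filter-name>MyCustomFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```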

Hasitha AravindaPackt offers its library subscription with an additional $150 worth of free content

Packt provides full online access to over 2000 books and videos to give users the knowledge they need, when they need it. From innovative new solutions and effective learning services to cutting edge guides on emerging technologies, Packt’s extensive library has got it covered.

For a limited time only, Packt is offering 5 free eBook or Video downloads in the first month of a new annual subscription – up to $150 worth of extra content. That’s in addition to one free download a month for the rest of the year.

This special PacktLib Plus deal expires on 4th November, 2014.

Check out the deal here

Shelan PereraCassandra GUI 2.0 - Making things a little bit easier

Update (2014 October 31st)

You can download WSO2 Storage Server 1.1.0, which ships the Cassandra Explorer and many interesting tools to manage storage. Please follow the links below for documentation on using the Cassandra Explorer.

Download the product

Extract the binary and run the product. (See "Starting the server" after extracting )

Documentation about Explorer


The Cassandra GUI has evolved from its first version; the new version includes bug fixes and enhanced features.

New features:

  • Complete pagination for the row view of the explorer
  • Search rows by name (filtered on the fly as you type)
  • Filtering of non-displayable data, labelled with warnings

Bug fixes:

  • Remote connection problem
  • Connect to a remote Cassandra server without restarting the server

Start the Server

Extract the downloaded product; let's refer to the extracted folder as CARBON_HOME.

Go to CARBON_HOME/bin and run sh (Linux) or

Log in to the admin console using https://localhost:9443/
Default username and password: admin, admin

Following screen shots include a quick flow on how it works.

1) Click "Connect to Cluster" on the right-hand side panel. Give the connection URL and credentials (if there are any) to connect.

eg: URL = localhost:9160,  or

2) After a successful connection you will be directed to the keyspace listing page, which includes keyspaces and clickable column family names. Click on a column family to explore its data.

3) Row view page.
After clicking the column family, you land on the row view page. It includes the rows of your column family and a slice of column data as a summary.

You can search, paginate or change the number of items to filter your data. Click "view more" to explore a single row.

4) The column family view page lists all the columns in a single row. You can filter the data by column name, value or timestamp. Full numbered pagination is available.

Dimuthu De Lanerolle

How to shift between fault sequence and custom fault sequence


<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="">
   <proxy name="Faultproxy"
          transports="https http"
            <makefault version="soap11">
               <code xmlns:soap11Env=""
               <reason value="500"/>
   <proxy name="Axis2ProxyService"
          transports="https http"
         <inSequence onError="fault">
            <property name="FORCE_ERROR_ON_SOAP_FAULT"
               <endpoint key="Axis2EP"/>
            <log level="full"/>
            <log level="custom">
               <property name="faultSequence" value="** Its Inline faultSequence ****"/>
            <payloadFactory media-type="xml">
                  <sequence xmlns="">$1</sequence>
                  <arg value="Its Inline faultSequence "/>
   <endpoint name="Axis2EP">
      <address uri="http://localhost:8280/services/Faultproxy"/>
   <sequence name="fault">
      <log level="full">
         <property name="MESSAGE" value="Executing default &#34;fault&#34; sequence"/>
         <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
         <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
      <log level="custom">
         <property name="fault" value="================Its fault Sequence================"/>
      <payloadFactory media-type="xml">
            <sequence xmlns="">$1</sequence>
            <arg value="Its fault Sequence "/>
   <sequence name="main">


Tips To Remember

1. To generate a response inside the proxy service for a GET request, you need to add this property mediator (which removes the NO_ENTITY_BODY property) after removing the To header.

<property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>

2. Making a SOAP-fault-generating proxy

<proxy name="Faultproxy"
       transports="https http">
   ...
   <makefault version="soap11">
      <!-- namespace is the standard SOAP 1.1 envelope; the fault code value
           was elided in the original and is assumed here -->
      <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/"
            value="soap11Env:Server"/>
      <reason value="500"/>
   </makefault>
   ...
</proxy>

3. Script mediator to log the handling thread's name for each WSO2 ESB request

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="">
   <localEntry key="stockquoteScript"
   <sequence name="fault">
      <log level="full">
         <property name="MESSAGE" value="Executing default &#34;fault&#34; sequence"/>
         <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/>
         <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/>
   <sequence name="main">
         <script language="js" key="stockquoteScript" function="transformRequest"/>
               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
         <script language="js" key="stockquoteScript" function="transformResponse"/>
         <log level="custom">
            <property name="Fooo" expression="get-property('threadName')"/>

Add a script as a local entry (eg: URL repository/samples/resources/script/stockquoteTransform.js)

* Your stockquoteTransform.js should look like this.

function transformRequest(mc) {
    var symbol = mc.getPayloadXML()..*::Code.toString();
    // Rebuild the payload as a getQuote request (body reconstructed from the
    // standard WSO2 ESB stockquoteTransform.js sample).
    mc.setPayloadXML(
            <m:getQuote xmlns:m="http://services.samples">
                <m:request>
                    <m:symbol>{symbol}</m:symbol>
                </m:request>
            </m:getQuote>);
}

function transformResponse(mc) {
    // Record the name of the thread handling this message.
    mc.setProperty("threadName", java.lang.Thread.currentThread().getName());
}
Now open SoapUI and send your SOAP request to http://localhost:8280/ (the main sequence).

You should now see the thread name for each request displayed in the console. Alternatively, you can open the wso2carbon.log file, which resides at [CARBON_HOME]/repository/logs, to view the thread names generated to handle each request.

Dedunu DhananjayaMass insertion on Redis

You may want to insert a lot of data into Redis. It is easier to insert a lot of data into Redis using Linux commands. Let's say we have comma-separated values in a file.


With the following command you can load all the data into Redis. But you should start the Redis server first.

You can change data.csv according to your file.
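As a minimal sketch (assuming data.csv holds one key,value pair per line, and redis-cli points at the running server):

```shell
# Convert each "key,value" line into a Redis SET command and pipe
# the whole stream to redis-cli in mass-insertion mode.
awk -F, '{ print "SET", $1, $2 }' data.csv | redis-cli --pipe
```

For values containing spaces or special characters, the Redis mass insertion guide recommends generating the raw Redis protocol instead of plain inline commands.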

sanjeewa malalgodaHow to modify JWT to retrieve subscriber details instead of end user - WSO2 API Manager

In WSO2 API Manager, a JWT is generated per API call. To generate it, we use the access token coming with the request. From this token we retrieve the token owner (the person who generated the token).
Once we have the token owner, we retrieve the claims associated with that user. In this case we need to get the application owner's details (the person who created the application and subscribed to the API). We have an extension point to implement a claim retriever.

We can find more information in this [1] document. If you need to generate custom claims based on your requirements, you need to implement a claim retriever class and configure the following [2] parameter in the api-manager.xml configuration file.
Inside our implementation we need to retrieve the application owner and then retrieve his claims. For this, we first need to get the SUBSCRIBER_ID from the AM_APPLICATION table using the application id (we already have it in the JWT). Then we need to retrieve the USER_ID from the AM_SUBSCRIBER table using the previously retrieved SUBSCRIBER_ID. From that USER_ID we will be able to retrieve the claims of the application owner in the same way we do it for the end user. Hope this will help you.
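As a sketch, the two table lookups described above could be combined into a single query (table and column names as referenced in the text; the join condition is an assumption about the schema):

```sql
-- APPLICATION_ID comes from the JWT; the result is the application
-- owner whose claims should be retrieved.
SELECT S.USER_ID
FROM AM_APPLICATION A
JOIN AM_SUBSCRIBER S ON S.SUBSCRIBER_ID = A.SUBSCRIBER_ID
WHERE A.APPLICATION_ID = ?;
```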



Ishara Premadasa[WSO2 ESB] Comparing two XMLs and find matching elements using WSO2 ESB 4.8.0

Imagine we are getting a list of XML data from an external service, and it is required to match this response against another XML payload and find the matching XML elements inside ESB 4.8.0. For example, a user sends the below request into the ESB.

Inside the ESB we call an Account DB service, which retrieves all the Account elements in the database and returns them. Assume the BE response with account details is as follows.

<a123:Accounts xmlns:a123="...">


Once this response comes into the ESB, it is required to compare the two XMLs and retrieve the Account elements whose account numbers match those in the incoming request. We could do this with an Iterate mediator inside the ESB too; however, the way explained here is rather simpler to use, especially for scenarios where it is not possible to use the Iterate mediator.

1. When the incoming request arrives, we can use XPath to build a comma-separated string of all account numbers in the request, and assign it to a property using the Property mediator.

<property name="accountsList" expression="string-join(//Accounts/acctNum, ',')" scope="default" type="STRING"/>
This XPath expression uses the string-join() function from XPath 2.0. Therefore it is required to enable the below property in the ESB/repository/conf/ file in order to support XPath 2.0.

# Uncomment following to support fallback XPATH 2.0 support with DOM and Saxon
2. Once this property is set, we can call the BE and get the account details response. The response payload will then be passed to the given XSLT stylesheet using the XSLT mediator in the ESB.
Also, as a comparison needs to be done here, I am passing the previous 'accountsList' property as a parameter into the XSLT stylesheet as well.

<xslt key="MatchingAccounts">
    <property name="AccountsList" expression="$ctx:accountsList"/>
</xslt>

3. The stylesheet is added as a local-entry in the ESB with the name 'MatchingAccounts'. If needed you can add this as a Registry resource too.

<localEntry xmlns="" key="MatchingAccounts">
<xsl:stylesheet xmlns:xsl="" xmlns:a123=" xmlns:fn="" version="2.0">
        <xsl:param name="AccountsList"/>
        <xsl:output method="xml" indent="yes"/>
        <xsl:template match="/">
            <Result xmlns="">
                <xsl:for-each select="//a123:Account">
                    <xsl:if test="matches($AccountsList,
                        <xsl:copy-of select="."/>

4. The stylesheet processes each a123:Account/a123:AcctNum element and checks whether the 'accountsList' string contains that account number. If there is a match, that <Account> node is copied into the results.

5. For the above two XMLs, the final payload that comes out of the XSLT mediator will be like this, with one matching Account node.
<Result xmlns="">

Ishara Premadasa[WSO2 ESB] Sending Form Data through WSO2 ESB with x-www-form-urlencoded content type

This post is about how to post form data to a REST service from WSO2 ESB 4.8.1.
Imagine that we have the following key-value pairs to be passed to a REST service which accepts x-www-form-urlencoded data.


Now, when we are going to send this data to the ESB, we need to set it as key-value pairs and build the payload with a PayloadFactory mediator in the following format.

<property name="name" value="ishara" scope="default" type="STRING"/>
<property name="company" value="wso2" scope="default" type="STRING"/>
<property name="country" value="srilanka" scope="default" type="STRING"/>

<payloadFactory media-type="xml">
                    <soapenv:Envelope xmlns:soapenv="">
                    <arg evaluator="xml" expression="$ctx:name"/>
                    <arg evaluator="xml" expression="$ctx:company"/>
                    <arg evaluator="xml" expression="$ctx:country"/>


Then set the messageType property to 'application/x-www-form-urlencoded'. This is how the ESB identifies these key-value pairs as form data, and it will do the transformation. It is also required to disable chunking.
<property name="messageType" value="application/x-www-form-urlencoded" scope="axis2" type="STRING"/>
<property name="DISABLE_CHUNKING" value="true" scope="axis2" type="STRING"/>
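With the properties above in place, the request that leaves the ESB should look roughly like this on the wire (the endpoint path is illustrative):

```
POST /services/EmployeeDataService HTTP/1.1
Content-Type: application/x-www-form-urlencoded

name=ishara&company=wso2&country=srilanka
```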

Now we are all set to call the REST endpoint with this message data as below. You can use either a Send or a Call mediator.

   <endpoint key="conf:endpoints/EmployeeDataServiceEndpoint.xml"/>


Shazni NazeerCreating Roles and Assigning Permissions in WSO2 IS

In this article I will explain the ways to create roles and assign permissions in WSO2 Identity Server. I'll walk you through creating and assigning roles in the Management Console UI, and then outline how to do the same programmatically, using an API and using a Java client.

Let's start the WSO2 Identity Server (IS) first. To download and run the WSO2 IS, look here.

Once started, log in to the management console. First let's look at how we can create a role named 'TestRole' and assign some permissions in the permission tree to it. This involves only a very few steps. Navigate to Configure -> Users and Roles -> Roles -> Add New Role. Enter the role name as 'TestRole' and click Next. The permission tree will be shown, and you need to select the relevant permissions in the tree for this role. If you need to assign certain existing users to this role, click Next and select those users; otherwise click Finish. That's it. You have successfully created a new role named 'TestRole' and assigned permissions to it. You can see the existing roles by navigating to Home -> Configure -> Users and Roles -> Roles. From here you can view the permissions, delete the role, assign users and also rename the role.

Next we shall see how to create roles and assign any permission in the permission tree to those roles using a programmatic way, without using the UI. You can do that by calling an admin service as an API or using a Java client. I'll outline both the methods here.

There is a web service API called RemoteUserStoreManagerService that can be used to manage users and roles. This is an admin service in the WSO2 Carbon platform. Admin services in WSO2 products are hidden by default. To see the WSDL of this web service you need to unhide the admin service WSDLs. To do that, first open up CARBON_HOME/repository/conf/carbon.xml and look for the following line:

<HideAdminServiceWSDLs>true</HideAdminServiceWSDLs>

Change it to 'false' and restart the server.

After the server is successfully started, you can access the wsdl of the RemoteUserStoreManagerService by navigating to https://localhost:9443/services/RemoteUserStoreManagerService?wsdl (Replace 'localhost' part as applicable).

Following are the two methods I mentioned.

1. You can create a SOAP UI project with this WSDL and use the addRole method to add the role. A sample SOAP call is given below to add a role named 'ValidRole' and assign the permission '/permission/admin/login/EmailLogin', where 'EmailLogin' is a new permission I created under '/permission/admin/login/'.
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://service.ws.um.carbon.wso2.org"
                  xmlns:xsd="http://dao.service.ws.um.carbon.wso2.org/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <!-- body reconstructed from the description above; namespaces follow
           the RemoteUserStoreManagerService WSDL conventions -->
      <ser:addRole>
         <ser:roleName>ValidRole</ser:roleName>
         <ser:permissions>
            <xsd:action>ui.execute</xsd:action>
            <xsd:resourceId>/permission/admin/login/EmailLogin</xsd:resourceId>
         </ser:permissions>
      </ser:addRole>
   </soapenv:Body>
</soapenv:Envelope>
2. You can write a Java client instead and invoke the methods of the RemoteUserStoreManagerService. A sample Java program to achieve this is shown below; it is fairly self-explanatory. Note: you need to add the plugins directory of an IS product to the classpath of the program to build and run it.
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.axis2.transport.http.HTTPConstants;
import org.apache.axis2.transport.http.HttpTransportProperties;
import org.wso2.carbon.CarbonConstants;
import org.wso2.carbon.um.ws.api.stub.PermissionDTO;
import org.wso2.carbon.um.ws.api.stub.RemoteUserStoreManagerServiceStub;

public class ISClient {

    private static final String CARBON_HOME = "/home/shazni/Downloads/WSO2/wso2is-5.0.0";
    private static final String SEVER_URL = "https://localhost:9443/services/";
    private static final String USER_NAME = "admin";
    private static final String PASSWORD = "admin";
    private static final String ROLE_NAME = "permissionRole";

    public static void main(String[] args) {

        // Trust the server certificate using the keystore shipped with the product
        String trustStore = CARBON_HOME + "/repository/resources/security/wso2carbon.jks";
        System.setProperty("javax.net.ssl.trustStore", trustStore);
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");

        try {
            ConfigurationContext configContext =
                    ConfigurationContextFactory.createConfigurationContextFromFileSystem(null, null);
            String serviceEndPoint = SEVER_URL + "RemoteUserStoreManagerService";

            RemoteUserStoreManagerServiceStub adminStub =
                    new RemoteUserStoreManagerServiceStub(configContext, serviceEndPoint);
            ServiceClient client = adminStub._getServiceClient();
            Options option = client.getOptions();

            option.setProperty(HTTPConstants.COOKIE_STRING, null);

            // HTTP basic authentication with the admin credentials
            HttpTransportProperties.Authenticator auth = new HttpTransportProperties.Authenticator();
            auth.setUsername(USER_NAME);
            auth.setPassword(PASSWORD);
            auth.setPreemptiveAuthentication(true);
            option.setProperty(HTTPConstants.AUTHENTICATE, auth);

            boolean authenticated = adminStub.authenticate(USER_NAME, PASSWORD);
            if (authenticated) {
                System.out.println("User is authenticated successfully");
            } else {
                System.err.println("User authentication failed");
                return;
            }

            // Permission values follow the SOAP example above
            PermissionDTO permissionDTO = new PermissionDTO();
            permissionDTO.setResourceId("/permission/admin/login/EmailLogin");
            permissionDTO.setAction(CarbonConstants.UI_PERMISSION_ACTION);

            PermissionDTO[] permissionDTOs = new PermissionDTO[1];
            permissionDTOs[0] = permissionDTO;

            adminStub.addRole(ROLE_NAME, null, permissionDTOs);
            System.out.println("Role is created successfully");
        } catch (Exception e) {
            System.err.println("Role creation failed: " + e.getMessage());
        }
    }
}
Well that's it. Hope this article has been informative to you.

Lali DevamanthriDenial of Service Flaw in libxml2

Debian Linux Security Advisory 3057-1 – Sogeti found a denial of service flaw in libxml2, a library providing support to read, modify and write XML and HTML files. A remote attacker could provide a specially crafted XML file that, when processed by an application using libxml2, would lead to excessive CPU consumption (denial of service) based on excessive entity substitutions, even if entity substitution was disabled.

Entity Substitution

Entities in principle are similar to simple C macros. An entity defines an abbreviation for a given string that you can reuse many times throughout the content of your document. Entities are especially useful when a given string may occur frequently within a document, or to confine the change needed to a document to a restricted area in the internal subset of the document (at the beginning). Example:

1 <?xml version="1.0"?>
2 <!DOCTYPE EXAMPLE SYSTEM "example.dtd" [
3 <!ENTITY xml "Extensible Markup Language">
4 ]>
5 <EXAMPLE>
6    &xml;
7 </EXAMPLE>

Line 3 declares the xml entity. Line 6 uses the xml entity, by prefixing its name with '&' and following it with ';' without any spaces. There are 5 predefined entities in libxml2 allowing you to escape characters with predefined meaning in some parts of the XML document content: &lt; for the character '<', &gt; for the character '>', &apos; for the single-quote character, &quot; for the double-quote character, and &amp; for the character '&'.

One of the problems related to entities is that you may want the parser to substitute an entity's content so that you can see the replacement text in your application. Or you may prefer to keep entity references as such in the content, to be able to save the document back without losing this usually precious information (if the user went through the pain of explicitly defining entities, he may have a rather negative attitude if you blindly substitute them at save time). The xmlSubstituteEntitiesDefault() function allows you to check and change the behaviour; the default is to not substitute entities.

Here is the DOM tree built by libxml2 for the previous document in the default case:

/gnome/src/gnome-xml -> ./xmllint --debug test/ent1
       content=Extensible Markup Language

And here is the result when substituting entities:

/gnome/src/gnome-xml -> ./tester --debug --noent test/ent1
     content=     Extensible Markup Language

Dimuthu De Lanerolle

Java Tips .....

To get directory names inside a particular directory ....

private String[] getDirectoryNames(String path) {

        File fileName = new File(path);
        String[] directoryNamesArr = fileName.list(new FilenameFilter() {
            public boolean accept(File current, String name) {
                return new File(current, name).isDirectory();
            }
        });
        // assumes a 'log' field (e.g. commons-logging) is available in the class
        log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
        return directoryNamesArr;
}
To retrieve links on a web page ......

private List<String> getLinks(String url) throws ParserException {
        Parser htmlParser = new Parser(url);
        List<String> links = new LinkedList<String>();

        NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
        for (int x = 0; x < tagNodeList.size(); x++) {
            LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
            String linkName = loopLinks.getLink();
            links.add(linkName);
        }
        return links;
}

To search recursively for all files in a directory by file extension ......

private File[] getFilesWithSpecificExtensions(String filePath) {

        // extension list - do not specify the "." prefix
        List<File> files = (List<File>) FileUtils.listFiles(new File(filePath),
                new String[]{"txt"}, true);

        File[] extensionFiles = new File[files.size()];

        Iterator<File> itFileList = files.iterator();
        int count = 0;

        while (itFileList.hasNext()) {
            extensionFiles[count++] = itFileList.next();
        }
        return extensionFiles;
}

Reading files in a zip

    public static void main(String[] args) throws IOException {
        final ZipFile file = new ZipFile("Your zip file path goes here");
        try {
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements()) {
                final ZipEntry entry = entries.nextElement();
                System.out.println("Entry " + entry.getName());
                readInputStream(file.getInputStream(entry));
            }
        } finally {
            file.close();
        }
    }

    private static int readInputStream(final InputStream is) throws IOException {
        final byte[] buf = new byte[8192];
        int read = 0;
        int cntRead;
        while ((cntRead = is.read(buf, 0, buf.length)) >= 0) {
            read += cntRead;
        }
        return read;
    }

Converting an Object holding a long[] to Long[]

 long[] primitiveArr = (long[]) oo;   // 'oo' is the Object known to hold a long[]
        Long[] myLongArray = new Long[primitiveArr.length];
        int i = 0;

        for (long temp : primitiveArr) {
            myLongArray[i++] = temp;
        }

Prabath SiriwardenaSecuring the Insecure

The 33-year-old Craig Spencer returned to the USA on 17th October from Africa, after treating Ebola patients. Just a few days later, he tested positive for Ebola. Everyone was concerned - especially the people around him - and the New Yorkers. The mayor of New York came in front of the media and gave an assurance to the citizens: that they have the world's top medical staff as well as the most advanced medical equipment to treat Ebola, and that they have been preparing for this for many months. That, for sure, might have calmed down most of the people.

Let me take another example.

When my little daughter was three months old, she used to go to anyone. Now she is eleven months old and knows who her mother is. Whenever she finds any difficulty she keeps crying till she gets to her mother. She only feels secure in her mother's arms.

When we type a password on a computer screen, we are very worried that it will be seen by our neighbors. But we never worry about our prestigious business emails being seen by the NSA. Why? Either it's totally out of our control, or we believe the NSA will only use them to tighten national security and for nothing else.

What I am trying to say with all these examples is that insecurity is a perception. It's a perception triggered by undesirable behaviors. An undesirable behavior is a reflection of how much a situation deviates from correctness.

It's all about perception, and about building that perception. There are no 100% secure systems on earth. Most of the cryptographic algorithms developed in the 80s and 90s are now broken due to advancements in computer processing power.


In the computer world, most developers and operators are concerned with correctness: achieving the desired behavior. You deposit $1000 in your account and you expect the savings to grow by exactly 1000. You send a document to a printer and you expect the output to be exactly as you saw it on the computer screen.

The security is concerned about preventing undesirable behaviors.


There are three security properties whose violation can lead to undesirable behaviors: confidentiality, integrity and availability.

Confidentiality means protecting data from unintended recipients, both at rest and in transit. You achieve confidentiality by protecting transport channels and storage with encryption.

Integrity is a guarantee of data’s correctness and trustworthiness and the ability to detect any unauthorized modifications. It ensures that data is protected from unauthorized or unintentional alteration, modification, or deletion. The way to achieve integrity is twofold: preventive measures and detective measures. Both measures have to take care of data in transit as well as data at rest.

Making a system available for legitimate users to access all the time is the ultimate goal of any system design. Security isn’t the only aspect to look into, but it plays a major role in keeping the system up and running. The goal of the security design should be to make the system highly available by protecting it from illegal access attempts. Doing so is extremely challenging. Attacks, especially on public endpoints, can vary from an attacker planting malware in the system to a highly organized distributed denial of service (DDoS) attack.


In March 2011, the RSA corporation was breached. Attackers were able to steal sensitive tokens related to RSA SecurID devices. These tokens were then used to break into companies that used SecurID.

In October 2013, the Adobe corporation was breached. Both source code and customer records were stolen - including passwords.

Just a month after the Adobe attack, in November 2013, Target was attacked and 40 million credit card and debit card records were stolen.

How were all these attacks possible? Many breaches begin by exploiting a vulnerability in the system in question. A vulnerability is a defect that an attacker can exploit, through a set of carefully crafted interactions, to effect an undesired behavior. In general, a defect is a problem in either the design or the implementation of the system, such that it fails to meet its desired requirements.

To be precise, a flaw is a defect in the design, and a bug is a defect in the implementation. A vulnerability is a defect that affects the security-relevant behavior of a system, rather than just its correctness.

If you take the RSA 2011 breach, it was based on a vulnerability in the Adobe Flash player. A carefully crafted Flash program, when run by a vulnerable Flash player, allowed the attacker to execute arbitrary code on the running machine - which was in fact due to a bug in the code.

To ensure security, we must eliminate bugs and design flaws, and make them harder to exploit.

The Weakest Link

In 2010, it was discovered that since 2006, a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France. The most interesting thing was the way they did it. They found out the weakest link in the system and attacked it. To transfer money directly into the store’s cash coffers, cashiers slid tubes filled with money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and then connect a vacuum cleaner to capture the money. They didn’t have to deal with the coffer shield.

The take-away there is, a proper security design should include all the communication links in the system. Your system is no stronger than its weakest link.

The Defense in Depth

A layered approach is preferred for any system being tightened for security. This is also known as defense in depth. Most international airports, which are at a high risk of terrorist attacks, follow a layered approach in their security design. On November 1, 2013, a man dressed in black walked into the Los Angeles International Airport, pulled a semi-automatic rifle out of his bag, and shot his way through a security checkpoint, killing a TSA screener and wounding at least two other officers. This was the first layer of defense. In case someone got through it, there has to be another to prevent the gunman from entering a flight and taking control. If there had been a security layer before the TSA, maybe just to scan everyone who entered the airport, it would have detected the weapon and probably saved the life of the TSA officer. The number of layers and the strength of each layer depend on which assets you want to protect and the threat level associated with them. Why would someone hire a security officer and also use a burglar alarm system to secure an empty garage?

Insider Attacks

Insider attacks are less powerful and less complicated, but highly effective. From the confidential US diplomatic cables leaked by WikiLeaks to Edward Snowden’s disclosure about the National Security Agency’s secret operations, are all insider attacks. Both Snowden and Bradley Manning were insiders who had legitimate access to the information they disclosed. Most organizations spend the majority of their security budget to protect their systems from external intruders; but approximately 60% to 80% of network misuse incidents originate from inside the network, according to the Computer Security Institute (CSI) in San Francisco.

Insider attacks are identified as a growing threat in the military. To address this concern, the US Defense Advanced Research Projects Agency (DARPA) launched a project called Cyber Insider Threat (CINDER) in 2010. The objective of this project was to develop new ways to identify and mitigate insider threats as soon as possible.

Security by Obscurity

Kerckhoffs' principle emphasizes that a system should be secured by its design, not because the design is unknown to an adversary. Microsoft's NTLM design was kept secret for some time, but when Samba engineers reverse-engineered it (to support interoperability between Unix and Windows), they discovered security vulnerabilities caused by the protocol design itself. In a proper security design, it's highly recommended not to use any custom-developed algorithms or protocols. Standards are like design patterns: they've been discussed, designed, and tested in an open forum. Every time you have to deviate from a standard, you should think twice, or more.

Software Security

Software security is a branch of computer security that focuses on the secure design and implementation of software, using the best available languages, tools, and methods. Its object of study is the code itself.

In other words, it focuses on avoiding software vulnerabilities, flaws, and bugs. While software security overlaps with and complements other areas of computer security, it is distinguished by its focus on a secure system's code. This makes it a white-box approach, whereas most popular approaches to security treat software as a black box and tend to ignore its internals.

Why is software security's focus on the code important?

The short answer is that software defects are often the root cause of security problems, and software security aims to address these defects directly. Other forms of security tend to ignore the software and build up defenses around it. Like the walls of a castle, these defenses are important and work up to a point. But when software defects remain, clever attackers often find a way to bypass those walls.

Operating System Security

Let's consider a few standard methods for security enforcement and see how their black box nature presents limitations that software security techniques can address.

When computer security was growing up as a field in the early 1970s, the operating system was the focus. To the operating system, the code of a running program is not what is important. Instead, the OS cares about what the program does, that is, its actions as it executes. These actions, called system calls, include reading or writing files, sending network packets and running new programs. The operating system enforces security policies that limit the scope of system calls. For example, the OS can ensure that Alice's programs cannot access Bob's files. Or that untrusted user programs cannot set up trusted services on standard network ports.

The operating system's security is critically important, but it is not always sufficient. In particular, some of the security-relevant actions of a program are too fine-grained to be mediated as system calls, and so the software itself needs to be involved. For example, a database management system (DBMS) is a server that manages data whose security policy is specific to the application using that data. For an online store, a database may contain security-sensitive account information for customers and vendors alongside other records, such as product descriptions, which are not security sensitive at all. It is up to the DBMS, not the OS, to implement the security policies that control access to this data.
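To make the DBMS example concrete, here is a tiny, hypothetical sketch (invented record fields, not any real DBMS API) of an application-level policy the OS cannot express: the OS only sees reads of the database's storage files, so separating sensitive account rows from public product rows has to happen in the software itself.

```java
import java.util.List;
import java.util.stream.Collectors;

public class RowFilter {
    public static class Row {
        final String owner;
        final boolean sensitive;
        public Row(String owner, boolean sensitive) {
            this.owner = owner;
            this.sensitive = sensitive;
        }
    }

    // Application-level policy: public rows are visible to everyone,
    // sensitive rows only to their owner. The OS cannot distinguish these
    // rows, since they all live in the same database files.
    public static List<Row> visibleTo(String user, List<Row> all) {
        return all.stream()
                  .filter(r -> !r.sensitive || r.owner.equals(user))
                  .collect(Collectors.toList());
    }
}
```

The point is not the filtering itself but where it lives: only code that understands the data's meaning can enforce this policy.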

Operating systems are also unable to enforce certain kinds of security policies. An operating system typically acts as an execution monitor, deciding whether to allow or disallow a program action based on the current execution context and the program's prior actions. However, some kinds of policies, such as information flow policies, simply cannot be enforced precisely without considering potential future actions, or even non-actions. Software-level mechanisms can be brought to bear in these cases, perhaps in cooperation with the OS.

Firewalls and IDS

Another popular sort of security enforcement mechanism is a network monitor such as a firewall or an intrusion detection system (IDS). A firewall generally works by blocking connections and packets from entering the network. For example, a firewall may block all attempts to connect to network servers except those listening on designated ports, such as TCP port 80, the standard port for web servers. Firewalls are particularly useful when there is software running on the local network that is only intended to be used by local users.

An intrusion detection system provides more fine-grained control by examining the contents of network packets, looking for suspicious patterns. For example, to exploit a vulnerable server, an attacker may send a carefully crafted input to that server as a network packet; an IDS can look for such packets and filter them out to prevent the attack from taking place.

Firewalls and IDSs are good at reducing the avenues for attack and preventing known attack vectors, but both can be worked around. For example, most firewalls will allow traffic on port 80 because they assume it is benign web traffic, but there is no guarantee that port 80 only carries web traffic, even if that's usually the case. In fact, developers invented SOAP, originally the Simple Object Access Protocol (no longer an acronym since SOAP 1.2), partly because firewalls block ports other than port 80: SOAP permits general-purpose message exchanges, but encodes them using the web protocol so they pass through on port 80.

Now, IDS patterns are more fine-grained and better able to look at the details of what's going on than firewalls, but IDSs can be fooled as well by inconsequential differences in attack patterns. Attempts to fill those gaps with more sophisticated filters can slow down traffic, and attackers can exploit such slowdowns by sending lots of problematic traffic, creating a denial of service, that is, a loss of availability. Finally, consider anti-virus scanners. These are tools that examine the contents of files, emails, and other traffic on a host machine, looking for signs of attack. They are quite similar to IDSs, but they operate on files and so have less stringent performance requirements. But they too can often be bypassed by making small changes to attack vectors.


Heartbleed is the name given to a bug in version 1.0.1 of OpenSSL's implementation of the Transport Layer Security protocol (TLS). The bug can be exploited to make a server running the buggy OpenSSL return portions of its memory; it is an example of a buffer overflow, more precisely a buffer over-read. Let's look at how black-box security mechanisms fare against Heartbleed.

Operating system enforcement and anti-virus scanners can do little to help. For the former, an exploit that steals data does so using the privileges normally granted to a TLS-enabled server, so the OS sees nothing wrong. For the latter, the exploit occurs while the TLS server is executing, leaving no obvious traces in the file system. Basic packet filters used by IDSs can look for signs of exploit packets; the FBI issued signatures for the Snort IDS soon after Heartbleed was announced. These signatures should work against basic exploits, but attackers may be able to vary the packet format, for example by chunking, to bypass them. In any case, the ramifications of a successful attack are not easily determined, because any exfiltrated data goes back over the encrypted channel. Compared to these, software security methods aim to go straight to the source of the problem by preventing, or more completely mitigating, the defect in the software itself.
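To see what going straight to the source means, here is a minimal sketch in Java (OpenSSL itself is C, and the names here are invented) of the missing check at the heart of Heartbleed: the vulnerable code trusted the payload length claimed by the peer instead of checking it against the bytes actually received.

```java
import java.util.Arrays;

public class HeartbeatCheck {
    // Echo back 'claimedLength' bytes of the payload, as a heartbeat does.
    // The Heartbleed defect skipped a bounds check like this one and read
    // past the end of the buffer; the fix is exactly this comparison of the
    // claimed length against the real payload size.
    public static byte[] heartbeatResponse(byte[] payload, int claimedLength) {
        if (claimedLength < 0 || claimedLength > payload.length) {
            throw new IllegalArgumentException("claimed length exceeds actual payload");
        }
        return Arrays.copyOf(payload, claimedLength);
    }
}
```

A software security approach finds or prevents this defect in the code; the black-box mechanisms above can only try to recognize its exploitation from the outside.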

Threat Modeling

Threat modeling is a methodical, systematic approach to identifying possible security threats and vulnerabilities in a system deployment. First you need to identify all the assets in the system. Assets are the resources you have to protect from intruders. These can be user records/credentials stored in an LDAP, data in a database, files in a file system, CPU power, memory, network bandwidth, and so on. Identifying assets also means identifying all their interfaces and the interaction patterns with other system components. For example, the data stored in a database can be exposed in multiple ways. Database administrators have physical access to the database servers. Application developers have JDBC-level access, and end users have access to an API. Once you identify all the assets in the system to be protected and all the related interaction patterns, you need to list all possible threats and associated attacks. Threats can be identified by observing interactions, based on the CIA triad.

From the application server to the database is a JDBC connection. A third party can eavesdrop on that connection to read or modify the data flowing through it. That’s a threat. How does the application server keep the JDBC connection username and password? If they’re kept in a configuration file, anyone having access to the application server’s file system can find them and then access the database over JDBC. That’s another threat. The JDBC connection is protected with a username and password, which can potentially be broken by carrying out a brute-force attack. Another threat.
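Two of these threats have well-known mitigations worth sketching: keep the JDBC credentials out of files on the application server, and encrypt the JDBC connection. The sketch below assumes MySQL Connector/J property names (useSSL, verifyServerCertificate); verify the exact keys against your driver's documentation.

```java
import java.util.Properties;

public class DbConfig {
    // Build JDBC connection properties: credentials are passed in (e.g. read
    // from environment variables or a secrets store, never a plain config
    // file), and TLS is requested so the link cannot be eavesdropped.
    public static Properties connectionProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        props.setProperty("useSSL", "true");                  // encrypt the channel
        props.setProperty("verifyServerCertificate", "true"); // resist man-in-the-middle
        return props;
    }
}
```

The application would then call something like DriverManager.getConnection(jdbcUrl, connectionProps(System.getenv("DB_USER"), System.getenv("DB_PASS"))), so that anyone reading the file system finds no credentials.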

Administrators have direct access to the database servers. How do they access the servers? If access is open for SSH via username/password, then a brute-force attack is a likely threat. If it's based on SSH keys, where are those keys stored? Are they stored on the administrators' personal machines or uploaded to a key server? Losing SSH keys to an intruder is another threat. How about the ports? Have you opened any ports on the database servers where an intruder could telnet in and gain control, or attack an open port to exhaust system resources? Can the physical machine running the database be accessed from outside the corporate network, or is it only available over VPN?

All these questions lead you to identify possible threats against the database server. End users have access to the data via the API. This is a public API, exposed through the corporate firewall. A brute-force attack is always a threat if the API is secured with HTTP Basic/Digest authentication; having broken the authentication layer, anyone could get free access to the data. Another possible threat is someone accessing the confidential data that flows through the transport channels, which can be done by executing a man-in-the-middle attack. DoS is also a possible threat: an attacker can send carefully crafted, malicious, extremely large payloads to exhaust server resources. STRIDE is a popular technique for identifying the threats associated with a system in a methodical manner. STRIDE stands for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.

Lali DevamanthriBedrock Linux & Poettering’s New Suggestion!

Bedrock Linux is a Linux distribution created with the aim of making most of the (often seemingly mutually-exclusive) benefits of various other Linux distributions available simultaneously and transparently.

If one would like a rock-solid stable base (for example, from Debian or a RHEL clone) yet still have easy access to cutting-edge packages (from, say, Arch Linux), automate compiling packages with Gentoo’s portage, and ensure that software aimed only for the ever popular Ubuntu will run smoothly – all at the same time, in the same distribution – Bedrock Linux will provide a means to achieve this.

Lennart Poettering has caused a big stir in the Linux world with his systemd approach to configuration. Now he has suggested a new way of building distros and getting code into users' hands, and it's all based on the btrfs file system.

The idea is to use the filesystem versioning feature of btrfs to distribute everything from individual packages to entire operating systems (useful for server and embedded systems): two files with the same name can exist in the same directory, just under different versions of the filesystem. It would then be possible to mix and match, even at run time. If I understand it right, you could even install two operating systems and natively run executables that rely on either of them, without rebooting.

It sounds a lot like Bedrock Linux …!!

But fundamental differences that you can notice are:

  • Bedrock Linux lets you use software straight from an upstream distro. If there’s some distro that provides something, you can use it now, while this proposal requires people make special packages for it. If people have to make special packages, I’m not sure I see the benefit of this over something like Nix… Bedrock Linux was largely created specifically because things like Nix don’t have enough packages.
  • Bedrock Linux intends to be very flexible in terms of what it imposes on the end-user. This proposal has a hard requirement on a btrfs feature. If btrfs isn’t performant in an area you care about, or isn’t stable enough yet, this isn’t a viable solution for you. Moreover, it is a bit worrying to tie things that tightly to a specific technology, as it may make them harder to replace down the road. What if someone comes up with some fancy new filesystem that is better in other ways but doesn’t have this feature?
  • Bedrock Linux groups things together by shared libraries so that if there’s a security issue, you only need to update a handful of files managed by presumably trusted upstream distros. With this, security updates fall back to the individual package maintainer; it feels like a rather large step backwards in terms of security from how Linux traditionally works.
  • Bedrock Linux “fixes” the problem of letting users use software that wasn’t aimed at their specific distro only for users of Bedrock Linux. What Bedrock Linux is doing doesn’t really help people on other distros. This proposal could “fix” it in general if developers target it.
  • Bedrock Linux’s ability to run software from other distros means the software it runs wasn’t actually intended to run in this configuration, and an update could, in theory, break it. There is a lot of effort and hopefully smart design to avoid this, but it is still at least in theory possible. Software aimed at this proposal will know its situation and be less likely to run into this problem.

While on the surface this seems a lot like Bedrock Linux, the proposal is functionally closer to Nix: it has a similar requirement for special packages, and it similarly “fixes” the problem from the point of view of the packager.

Chanika GeeganageCustomized login pages in WSO2 IS in OAuth2 flow

Customizing the login page of the server is available for the SAML2, OAuth, and OpenID flows. In this blog post I'm going to explain how to customize the login page for the OAuth2 authentication flow. If you want to do this for SAML2, the steps are explained in the WSO2 IS docs under customizing login pages. Here, I'm using WSO2 IS 5.0.0, which is the latest release.

1. Check out the source code of the authenticationendpoint web app from the SVN location
2. Modify the existing servlet class located at src/main/java/org/wso2/carbon/identity/application/authentication/endpoint/oauth2 as indicated below.

In the doGet method, change

String applicationName = request.getParameter("application");

to

String applicationName = request.getParameter("relyingParty");

With this modification, the endpoint identifies the application name from the value of "relyingParty" in the request.

3. Build the source and replace the existing <IS_HOME>/repository/deployment/server/webapps/authenticationendpoint.war file with the new .war file. Also delete the existing expanded authenticationendpoint folder at the same location. (Take a backup of the existing authenticationendpoint folder if needed.)

4. Start the server

5. Add init parameters to the "OAuth2Login" servlet in the web.xml file located in the expanded web app, as below.

The param-name should be the client key received at the application registration.

The param-value is the location of the customized page.
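As a concrete illustration, a typical init-param entry would look like the following; the client key and page name here are hypothetical placeholders, not values from this post:

```xml
<servlet>
    <servlet-name>OAuth2Login</servlet-name>
    <!-- ... existing servlet configuration ... -->
    <init-param>
        <!-- param-name: the client key received at application registration (hypothetical value) -->
        <param-name>8ZEtXs5zLEXLrrJsDsDc32Ga</param-name>
        <!-- param-value: the customized login page -->
        <param-value>custom_login.jsp</param-value>
    </init-param>
</servlet>
```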

6. Place the customized login page at the same level as 'login.jsp'. Also, if there are css files and images then put them inside the respective folders in the authenticationendpoint.

7. Restart the server and you will be able to see the new login page when you login to the web app.

Chamila WijayarathnaImproving Performance of El Gamal Signature Verification using Batch Screening

Batch screening is a method which can be used to improve the performance of El Gamal Digital Signature verification.

The following are the steps we follow when signing and verifying messages using El Gamal.

Fig 1 - Signature generation and verification in the standard El Gamal signature scheme

The batch screening idea is this: when verifying a batch of messages signed by a single signer, instead of computing each power of g separately, we compute the powers once for all messages and verify all messages at once. That is, rather than generating g^H(m1), g^H(m2), g^H(m3), ..., g^H(mn) and verifying the messages separately, we generate g^(H(m1) + H(m2) + ... + H(mn)) and check the verifiability of all messages at the same time.
However, fraudulent messages can be crafted so that they fail the standard verification of individual messages but pass the screening test by cancelling out each other's errors. To reduce this risk, batch screening divides the set of signatures into two random subsets and evaluates them separately.
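In symbols, using the same notation as the code (modulus q, public key Y = g^x mod q, signature (r_i, s_i) on message m_i), the per-message check and its batched form are:

```latex
g^{H(m_i)} \equiv Y^{r_i}\, r_i^{s_i} \pmod{q}
\qquad\Longrightarrow\qquad
g^{\sum_i H(m_i)} \equiv \prod_i Y^{r_i}\, r_i^{s_i} \pmod{q}
```

The batched congruence follows by multiplying the individual ones, which is exactly why a set of bad signatures with mutually cancelling errors could still pass; splitting into two random subsets makes such cancellation unlikely to survive both checks.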
Following is the Java program I created to compare the performance of the two schemes when verifying 10 signatures.

//Key Generation
    Random rand = new SecureRandom();
    BigInteger one = BigInteger.ONE;
    BigInteger secretKey = new BigInteger("12345678901234567890");
    BigInteger q = BigInteger.probablePrime(64, rand);   // select q, a probable prime
    BigInteger g = new BigInteger("3");                  // generator g
    BigInteger Y = g.modPow(secretKey, q);               // Y = g^x mod q
    String[] strings = {"61684516161651686","51684656846518651826","5684865181548351685","3548646813521831535","1168531211316513168","34846515486165184615","345684484548354355","18486854546358","35485684358355","464684646516845465"};
    BigInteger[] rs = new BigInteger[strings.length];
    BigInteger[] signs = new BigInteger[strings.length];
    BigInteger m, k;

    //Signature generation: r = g^k mod q, s = (H(m) - x*r) * k^(-1) mod (q-1)
    for (int i = 0; i < strings.length; i++) {
        m = new BigInteger(strings[i]);
        k = BigInteger.probablePrime(64, rand);
        rs[i] = g.modPow(k, q);
        signs[i] = (new BigInteger(m.hashCode() + "").subtract(secretKey.multiply(rs[i])))
                .multiply(k.modInverse(q.subtract(one)))
                .mod(q.subtract(one));
    }

    //Normal verification: g^H(m) == Y^r * r^s (mod q), checked per message
    BigInteger rhs, lhs;
    long start = System.nanoTime();
    for (int i = 0; i < strings.length; i++) {
        rhs = g.modPow(new BigInteger(new BigInteger(strings[i]).hashCode() + ""), q);
        lhs = Y.modPow(rs[i], q).multiply(rs[i].modPow(signs[i], q)).mod(q);
        System.out.println("Sign " + i + " : " + rhs.equals(lhs));
    }
    long end = System.nanoTime();
    System.out.println("Standard Scheme - " + (end - start) + " ns");

    //Batch verification: split into two random subsets and check
    //g^(sum of H(m)) == product of Y^r * r^s (mod q) for each subset
    BigInteger sumhm = BigInteger.ZERO, lhsmultiplication = one;
    Random rn = new Random();
    int randomNum = rn.nextInt(strings.length);
    start = System.nanoTime();
    for (int i = 0; i < randomNum; i++) {   // random subset 1
        sumhm = sumhm.add(new BigInteger(new BigInteger(strings[i]).hashCode() + ""));
        lhsmultiplication = lhsmultiplication.multiply(
                Y.modPow(rs[i], q).multiply(rs[i].modPow(signs[i], q)).mod(q));
    }
    BigInteger rhs1 = g.modPow(sumhm, q);
    BigInteger lhs1 = lhsmultiplication.mod(q);
    sumhm = BigInteger.ZERO;
    lhsmultiplication = one;
    for (int i = randomNum; i < strings.length; i++) {   // random subset 2
        sumhm = sumhm.add(new BigInteger(new BigInteger(strings[i]).hashCode() + ""));
        lhsmultiplication = lhsmultiplication.multiply(
                Y.modPow(rs[i], q).multiply(rs[i].modPow(signs[i], q)).mod(q));
    }
    BigInteger rhs2 = g.modPow(sumhm, q);
    BigInteger lhs2 = lhsmultiplication.mod(q);
    System.out.println("Batch Screen " + (rhs1.equals(lhs1) && rhs2.equals(lhs2)));
    end = System.nanoTime();
    System.out.println("Batch Screening - " + (end - start) + " ns");

I observed the following results for the above performance test, verifying 10 strings.

Standard Scheme - time in ns
Batch Screening - time in ns

These values changed slightly when running the program at different times with different loads on the computer, but the ratio between the two values seems to be constant. So from the above values, we can conclude that batch screening gives better performance for signature verification than the standard El Gamal signature scheme.

Chamila WijayarathnaCreating a Web Service using Apache Axis2

In one of my previous blogs, I wrote about how two programs on a network can communicate with each other over a socket connection. Today I'm going to write about communicating using web services, which is a higher-level way of communicating than a socket connection.
I'm going to write about how to create a simple web service using Apache Axis2 and call it using an Axis2 client. To do this, you should first have Axis2 installed; you can follow the installation guide for that.
First, let's see how we can write an Axis2 web service. Here I'm going to discuss two methods for creating one. They are:
  1. Using POJO
  2. Using ADB

Building Web Service using POJO

To explain how to build a web service using a POJO, I'll use the example at AXIS2_HOME/samples/quickstart. When building the service, it should have the following folder structure.

- resources
  - services.xml
- src
  - samples
    - quickstart
      - service
        - pojo

So here, we can identify two important files: services.xml and StockQuoteService.java.
The Java class defines the functionality of the operations available in this service. Following is the Java class from the quickstart sample.

public class StockQuoteService {
    private HashMap map = new HashMap();

    public double getPrice(String symbol) {
        Double price = (Double) map.get(symbol);
        if (price != null) {
            return price.doubleValue();
        }
        return 42.00;
    }

    public void update(String symbol, double price) {
        map.put(symbol, new Double(price));
    }
}

This class defines two operations: one takes an input and returns an output, and the other takes an input and returns nothing. The message exchange patterns used for them are defined in services.xml.

<service name="StockQuoteService" scope="application" targetNamespace="http://quickstart.samples/">
    <description>
        Stock Quote Service
    </description>
    <messageReceivers>
        <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-only"
                         class="org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver"/>
        <messageReceiver mep="http://www.w3.org/2004/08/wsdl/in-out"
                         class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
    </messageReceivers>
    <schema schemaNamespace="http://quickstart.samples/xsd"/>
    <parameter name="ServiceClass">samples.quickstart.service.pojo.StockQuoteService</parameter>
</service>

Once these files are ready, we can build the project and generate the .aar file. If you are using Eclipse, a jar file can be created easily and then renamed to the .aar extension. It should have the following directory structure.

- quickstart/build/classes
  - services.xml
  - samples
    - quickstart
      - service
        - pojo
          - StockQuoteService.class

Then this .aar file should be copied to the webapps/axis2/WEB-INF/services directory of the servlet engine. By going to http://localhost:8080/axis2/services/listServices, we can check whether the service has deployed properly; if so, it appears in the list of services.
This web service can then be used by sending HTTP requests to it.
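For example, Axis2 exposes POJO service operations in a REST-like style as HTTP GETs of the form http://host:port/axis2/services/ServiceName/operation?param=value. Here is a minimal Java client sketch; the host, port, and symbol value are assumptions for a local deployment.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class StockQuoteRestClient {
    // Build the REST-style invocation URL for an Axis2 POJO service operation.
    public static String buildUrl(String base, String operation, String param, String value) {
        return base + "/" + operation + "?" + param + "=" + value;
    }

    public static void main(String[] args) throws Exception {
        String url = buildUrl("http://localhost:8080/axis2/services/StockQuoteService",
                              "getPrice", "symbol", "IBM");
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // XML response wrapping the price
            }
        }
    }
}
```

Running main against a deployed StockQuoteService should print an XML getPriceResponse element containing the price (42.0 unless update has been called for that symbol).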

Building Web Services using Axis2 Data Binding Framework (ADB)

Using the wsdl2java functionality in ADB, we can create an Axis2 service from the .wsdl file of that service. Let's try to create a simple service using the sample code given in the Axis2 samples.
In the samples/quickstartadb/resources/META-INF folder, we can find the StockQuoteService.wsdl file. In this WSDL file we specify the operations the service should have, their input and output parameters, and so on. We can use a services.xml file similar to the one we used for the POJO service.
Then we can generate the Java implementation of the defined operations by running

$AXIS2_HOME/bin/wsdl2java -uri resources/META-INF/StockQuoteService.wsdl -p samples.quickstart.service.adb -d adb -s -ss -sd -ssi -o build/service

from the quickstartadb folder. This generates the build/service folder, which contains the Java files. We can then run

ant jar.server

within that folder, which generates the .aar file; then we can deploy the service.

You can create an Axis2 client using ADB with the same WSDL and test the service we created.

There are some more methods to create web services, such as using AXIOM, XMLBeans, or JiBX. One of my friends has written a blog with a more detailed explanation of how to create a service using AXIOM. It's available at

For more details - 

Adam FirestoneIf the Vulnerability of Our National Critical Infrastructure to Cyber-attack Keeps You Up at Night. . .You're Not Alone.

The vulnerability of our national critical infrastructure to cyber-attack is a serious matter that demands attention from industry and elected leadership.  However, if any meaningful change is going to take place, it must be demanded and supported by all stakeholders.  Please join me in Washington, DC on Tuesday, October 28th, 2014 to discuss both the vulnerabilities faced by the electrical grid and to explore – with your assistance and involvement – the way ahead to a safer, cyber-resilient national critical infrastructure.  For more information please see:

Quoted in the Columbia, South Carolina newspaper The State on October 24, 2014, American University history professor Alan Lichtman characterized the national response to the current Ebola outbreak:

“When caught unprepared in a crisis, Americans have a tendency to see things in apocalyptic terms. . . It may not be a uniquely American trait, but it’s one that appears we’re particularly conditioned to and bound to repeat.”
“We are a people of plenty. We’re the richest nation on Earth. . . We have unparalleled prosperity, yet we have this anxiety that it may not last. This external force over which we don’t seem to have any control can cut at the heart of American contentment and prosperity.”
Regardless of how extreme the American reaction to the Ebola outbreak is, or whether it’s warranted, it’s impossible to argue that local, state and federal governments are taking measures to deal with a real and present threat.  These measures, however, are inherently reactive, coming into force after the danger materialized on American shores.

In the case of a dangerous communicable disease, a reactive approach may be sufficient; time will tell.   By the time the public or private sector is able to react to a successful cyber-attack on our national critical infrastructure, it will be too late.  The damage will already be done and the effects will be catastrophic, wide-spread, and long-lasting.  Imagine tens of millions without heat, light, fuel or purified water during winter.  Imagine an inability to transport or distribute food and other necessities to and within large urban areas for months at a time.

Feeling uneasy?  Concerned?  A little worried around the edges?

If you are, you’re not alone. There’s growing awareness of the perfect storm of vulnerabilities inherent in the American national critical infrastructure. It results from the combination of a thoroughly interconnected society, a long-standing emphasis on safety and reliability (often to the detriment of security) within industrial control systems (ICS), and a commercial software development model that routinely incorporates (and touts!) post-deployment security and vulnerability patching.

Fortunately, as we become more aware of our vulnerabilities, we are also becoming motivated to discover and implement solutions that address them.  These range from policy initiatives designed to degrade, reduce and eventually remove domains and service providers from which attacks and malware emanate to the development and implementation of new technologies, systems and networks that both render conventional attacks less effective and create resilient systems that can continue to operate in spite of an attack.

Securing the resources necessary to implement these solutions will require broad, grass-roots awareness of and enfranchisement in both the vulnerability and the path to a solution.

To help raise this awareness, Kaspersky Government Security Solutions, Inc. (KGSS), in cooperation with its sponsors and partners, is hosting the 2nd annual Kaspersky Government Cybersecurity Forum in Washington, DC on Tuesday, October 28th, 2014.  The event, which will be held at the Ronald Reagan Building and International Trade Center, is open to all at no cost.  Additionally, attendees who hold PMP, CSEP, ASEP and/or CISSP certifications may use conference participation to claim required continuing education credits toward those certifications.
For more information, please see:

Thanks, and I hope to see you there!

Dinusha SenanayakaExposing a SOAP backend service as a REST API using WSO2 API Manager

This post explains how we can publish an existing SOAP service as a REST API using WSO2 API Manager.

We will be using a sample data service called "OrderSvc" as the SOAP service, which can be deployed on WSO2 Data Services Server. But this could be any SOAP service.

1. Service Description of ‘OrderSvc’ SOAP Backend Service

This “orderSvc” service exposes a WSDL with three operations (“submitOrder”, “cancelOrder”, “getOrderStatus”).

The submitOrder operation takes ProductCode and Quantity as parameters.
Sample request :
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">


The cancelOrder operation takes an OrderId as a parameter, does an immediate cancellation, and returns a confirmation code.
Sample request:
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

The getOrderStatus operation takes the orderId as a parameter and returns the order status as the response.
Sample request :
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

We need to expose this "OrderSvc" SOAP service as a REST API using API Manager. Once exposed, the “submitOrder”, “cancelOrder”, and “getOrderStatus” operations should map to REST resources as below, taking the user parameters as query or path parameters.

“/submitOrder” (POST) => request does not contain order id or date; response is the full order payload.

“/cancelOrder/{id}” (GET) => does an immediate cancellation and returns a confirmation code.

“/orderStatus/{id}” (GET) => response is the order header (i.e., payload excluding line items).
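The REST-to-SOAP translation itself is typically done by a custom in-sequence (the one referenced in the configuration section below). As a hedged sketch only: such a sequence usually uses a Synapse payloadFactory mediator to build the SOAP body from the incoming parameters. The namespace, element names, and SOAPAction here are illustrative guesses, not the contents of the actual orderSvc_supporting_sequence.xml.

```xml
<sequence xmlns="http://ws.apache.org/ns/synapse" name="orderSvc_supporting_sequence">
    <!-- Build the SOAP body for submitOrder from REST query parameters
         (element names and namespace are hypothetical) -->
    <payloadFactory media-type="xml">
        <format>
            <dat:submitOrder xmlns:dat="http://ws.wso2.org/dataservice">
                <dat:productCode>$1</dat:productCode>
                <dat:quantity>$2</dat:quantity>
            </dat:submitOrder>
        </format>
        <args>
            <arg evaluator="xml" expression="$url:productCode"/>
            <arg evaluator="xml" expression="$url:quantity"/>
        </args>
    </payloadFactory>
    <!-- SOAP 1.1 backends usually expect a SOAPAction transport header -->
    <property name="SOAPAction" value="urn:submitOrder" scope="transport"/>
</sequence>
```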

Deploying the Data-Service :

1. Log in to MySQL and create a database called “demo_service_db”. (The database name can be anything; update the data-service (.dbs file) accordingly.)

mysql> create database demo_service_db;
mysql> use demo_service_db;

2. Execute the DB script given here against the database created above. This will create two tables, ‘CustomerOrder’ and ‘OrderStatus’, and one stored procedure, ‘submitOrder’. It will also insert some sample data into the two tables.

3. Include the MySQL JDBC driver in the DSS_HOME/repository/components/lib directory.

4. Download the data-service file given here. Before deploying this .dbs file, we need to modify the data source section defined in it: in the downloaded orderSvc.dbs file, set the correct JDBC URL (it needs to point to the database created in step 1), and change the MySQL username/password if they differ from the ones defined here.

<config id="ds1">
     <property name="driverClassName">com.mysql.jdbc.Driver</property>
     <property name="url">jdbc:mysql://localhost:3306/demo_service_db</property>
     <property name="username">root</property>
     <property name="password">root</property>
</config>

5. Deploy the orderSvc.dbs file in the Data Services Server by copying it into the "wso2dss-3.2.1/repository/deployment/server/dataservices" directory, then start the server.

6. Before exposing the service through API Manager, check whether all three operations work as expected using the try-it tool or SOAP UI.

submitOrder sample request:
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

Response:
<soapenv:Envelope xmlns:soapenv="">
     <submitOrderResponse xmlns="">

cancelOrder sample request:
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

Response:
<soapenv:Envelope xmlns:soapenv="">
     <axis2ns1:REQUEST_STATUS xmlns:axis2ns1="">SUCCESSFUL</axis2ns1:REQUEST_STATUS>

orderStatus sample request:
<soapenv:Envelope xmlns:soapenv="" xmlns:dat="">

Response:
<soapenv:Envelope xmlns:soapenv="">
     <OrderStatus xmlns="">


2. Configuring API Manager

1. Download the custom sequence given here and save it to the APIM registry location "/_system/governance/apimgt/customsequences/in/". This can be done by logging in to the API Manager Carbon management console.

In the left menu, expand Resources -> Browse -> go to "/_system/governance/apimgt/customsequences/in" -> click "Add Resource" -> browse the file system and upload the "orderSvc_supporting_sequence.xml" sequence that you downloaded above. Then click "Add". This step saves the downloaded sequence into the registry.

2. Create the orderSvc API by wrapping the orderSvc SOAP service.

Log in to the API Publisher and create an API with the following info.

Name: orderSvc
Context: ordersvc
Version: v1

Resource definition1
URL Pattern: submitOrder
Method: POST

Resource definition2
URL Pattern: cancelOrder/{id}
Method: GET

Resource definition3
URL Pattern: orderStatus/{id}
Method: GET

Endpoint Type: select the Address endpoint type. Then go to "Advanced Options" and select the message format "SOAP 1.1".
Production Endpoint: https://localhost:9446/services/orderSvc/ (give the OrderSvc service endpoint)

Tier Availability: Unlimited

Sequences: tick the Sequences checkbox and select the previously saved custom sequence under "In Flow".

Publish the API to the gateway.

We are done with the API creation.

Functionality of the custom sequence "orderSvc_supporting_sequence.xml"

The OrderSvc backend service expects a SOAP request, while the user invokes the API by sending parameters in the request URL (i.e., cancelOrder/{id}, orderStatus/{id}).

This custom sequence takes care of building the SOAP payload required for the cancelOrder and orderStatus operations by looking at the incoming request URI.

Using a switch mediator, it reads the request path:

<switch xmlns:soapenv="" xmlns:ns3="http://org.apache.synapse/xsd" source="get-property('REST_SUB_REQUEST_PATH')">

It then checks the value of the request path against a regular expression and constructs the payload for either cancelOrder or orderStatus, according to the matched resource (the namespace URIs and operation payload elements are elided in this excerpt):

<case regex="/cancelOrder.*">
    <payloadFactory media-type="xml">
        <format>
            <soapenv:Envelope xmlns:soapenv="">
                <soapenv:Body xmlns:dat=""/>
            </soapenv:Envelope>
        </format>
        <args><arg evaluator="xml" expression="get-property('')"/></args>
    </payloadFactory>
    <header name="Action" scope="default" value="urn:cancelOrder"/>
</case>

<case regex="/orderStatus.*">
    <payloadFactory media-type="xml">
        <format>
            <soapenv:Envelope xmlns:soapenv="">
                <soapenv:Body xmlns:dat=""/>
            </soapenv:Envelope>
        </format>
        <args><arg evaluator="xml" expression="get-property('')"/></args>
    </payloadFactory>
    <header name="Action" scope="default" value="urn:orderStatus"/>
</case>
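Outside Synapse, the mapping the sequence performs is easy to state. The sketch below mirrors the switch/case logic in Python; the SOAP element names and the namespace constant are illustrative assumptions, since the real ones come from the orderSvc WSDL:

```python
import re

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(sub_request_path):
    """Map a REST sub-request path to a (SOAPAction, SOAP 1.1 body) pair,
    the way the custom sequence's switch/case does."""
    for operation in ("cancelOrder", "orderStatus"):
        match = re.match(r"/%s/(\d+)" % operation, sub_request_path)
        if match:
            order_id = match.group(1)
            body = (
                '<soapenv:Envelope xmlns:soapenv="%s"><soapenv:Body>'
                "<%s><orderId>%s</orderId></%s>"
                "</soapenv:Body></soapenv:Envelope>"
                % (SOAP_NS, operation, order_id, operation)
            )
            return "urn:%s" % operation, body
    # submitOrder POSTs a full payload, so it needs no mapping here.
    return None

action, payload = build_soap_request("/orderStatus/3")
```
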

Test OrderSvc API published in API Manager

Log in to the API Store, subscribe to the OrderSvc API, and generate an access token. Invoke the orderStatus resource as given below. This will call the OrderSvc SOAP service and give you the response.

curl -v -H "Authorization: Bearer _smfAGO3U6mhzFLro4bXVEl71Gga" http://localhost:8280/ordersvc/v1/orderStatus/3

Prabath SiriwardenaA Brief History of OpenID Connect

OpenID, which followed in the footsteps of SAML in 2005, revolutionized web authentication. Brad Fitzpatrick, the founder of LiveJournal, initiated it. The basic principle behind both OpenID and SAML is the same: both can be used to facilitate web single sign-on and cross-domain identity federation. OpenID is more community friendly, user centric, and decentralized. Yahoo added OpenID support in mid-January 2008, MySpace announced its support for OpenID in late July of that same year, and Google joined the party in late October. By December 2009, there were more than 1 billion OpenID-enabled accounts. It was a huge success for web single sign-on.

OpenID and OAuth 1.0 address two different concerns. OpenID is about authentication, while OAuth 1.0 is about delegated authorization. As both of these standards were gaining popularity in their respective domains, there was interest in combining them, so that one could authenticate a user and also get a token to access resources on the user's behalf in a single step. The Google Step 2 project was the first serious effort in this direction. It introduced an OpenID extension for OAuth, which basically carries OAuth-related parameters in the OpenID request/response itself. The same people who initiated the Google Step 2 project later brought it into the OpenID Foundation.

The Google Step 2 OpenID extension for OAuth specification is available at:

OpenID has gone through three generations to date. OpenID 1.0/1.1/2.0 is the first generation, while the OpenID extension for OAuth is the second. OpenID Connect is the third generation of OpenID.

Yahoo, Google, and many other OpenID providers will discontinue their support for OpenID 2.0 by mid-2015 and migrate to OpenID Connect.

Unlike the OpenID extension for OAuth, OpenID Connect was built on top of OAuth: it simply introduces an identity layer on top of OAuth 2.0. This identity layer is abstracted into an ID token. An OAuth 2.0 authorization server that supports OpenID Connect can return an ID token along with the access token.
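To make the ID token idea concrete, here is a minimal sketch of its payload. The issuer, subject, and audience values are invented for illustration; the claim names (iss, sub, aud, exp, iat) are the required ones in OpenID Connect, and a real ID token is a signed JWT, not just a bare base64url-encoded payload:

```python
import base64
import json

# Illustrative claims only; real values come from the authorization server.
claims = {
    "iss": "https://idp.example.com",  # who issued the token
    "sub": "user-1234",                # the authenticated end user
    "aud": "client-app-1",             # the OAuth 2.0 client it targets
    "exp": 1700000000,                 # expiry (epoch seconds)
    "iat": 1699996400,                 # issued-at (epoch seconds)
}

def b64url_encode(raw):
    # JWTs use unpadded, URL-safe base64.
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(text):
    # Restore padding before decoding.
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

payload = b64url_encode(json.dumps(claims).encode())
decoded = json.loads(b64url_decode(payload))
```

The relying party decodes the payload (after verifying the signature) to learn who authenticated, which is exactly the identity layer OAuth 2.0 alone lacks.
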

OpenID Connect vs. OAuth 2.0:

OpenID Connect was ratified as a standard by its membership on February 26, 2014. It provides a lightweight framework for identity interactions in a RESTful manner. It was developed under the OpenID Foundation and has its roots in OpenID, but OAuth 2.0 influenced it tremendously.

The announcement by the OpenID Foundation regarding the launch of the OpenID Connect standard is available at:

More details and the applications of the OpenID Connect are covered in my book Advanced API Security.

John MathonAPI Centric, APIs taking over, Component as a Service. Reusable components and services.


(The amplituhedron allows us to compute quantum probabilities orders of magnitude faster than ever before.)

APIs and Component as a Service – The dream achieved

Today, roughly 10,000,000,000,000,000 API calls per month are made in the cloud. This is a tremendous change in the way software is built. We have partly achieved the dream of reusable software, in the form of APIs and open source. To me this is exciting stuff! When you look at the type of things available in APIs and open source projects today, it is mind-boggling how much is available, and we see it in the applications we get today: amazing cross-functionality from little effort.

For many years architects have promoted the practice of building component software. The principal motivation has been the holy grail of software: the desire initially to gain efficiency, and eventually to let almost anybody build software easily by referring to higher and higher level abstractions. Unfortunately, complexity and the lack of social aspects limited componentization's success, as with SOA. Componentization by itself was not enough to achieve more and more reuse.

Today, with the move to APIs as services in the cloud, we tend to think of components as the pieces of code behind an API, or as multiple components stitched together to form a service with an API as its interface. Frequently components are open source software projects, or open source projects are components in a reusable service that has an API. This API may be public or private, and it may be in the cloud or hosted in an on-premise facility, but the point is that the elements of reusability are the API and open source components.

These new components have different requirements than CBSD (component-based software development). I call this new component idea Component as a Service. APIs are Components as a Service, but any open source project or internally developed software can also be a CaaS component. Sure, there are technical subtleties to this, but the overall movement is clearly toward this model of componentization. Today, we are offering components in the cloud as a service, and it is becoming wildly successful. When designing any component, whether entirely for internal use or explicitly for external use, you should strive to make it meet the criteria I describe below.

Abbreviated history of reuse

Sometimes reusable pieces have been incorporated into languages as underlying built-in capability, syntactic flourishes, annotations, or libraries. Early languages such as APL and SNOBOL contained lots of predefined functionality that greatly improved the speed of writing programs that used that specific componentry.

Over time it was considered more important to have standard languages, and libraries of functions became the place we put reusable code fragments.

This evolved into the notion of frameworks. More recently, the possibility of having different languages that share an underlying common Java virtual machine has allowed the creation of new special-purpose languages without the deficits that specialized programming languages had in the past. We see a surge of new programming languages such as Scala, Ruby, Groovy, PHP, and others.

Over the last 15 years the growth of the open source movement has had an enormous impact on reusability, improving productivity and innovation in software engineering. There is no standard way to package open source technology. Yet the ability to leverage open source depends greatly on the ability to incorporate it into your projects, so open source projects have a lot of motivation to foster and simplify reuse.

All these things are leading to the effort to package technology in reusable forms: to make software easier to build, more rapid to use, and to proliferate its use by other software.

In this blog I am attempting to give you a framework for understanding the requirements for building components that can be reused in services easily and be what I call cloud-native Components as a Service.

A CaaS (Component as a service) Definition


The definition of component software has changed and is to some extent a nebulous concept. Component-based software engineering started in the days of object-oriented programming and has evolved over time, reflecting a growing understanding of how to reuse software. The classic definition:

1. “A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be deployed independently and is subject to composition by third parties.”

A more practical definition I found adds that a component should also be:

  • 2. fully documented
  • 3. thoroughly tested
    • robust, with comprehensive input-validity checking
    • able to pass back appropriate error messages or return codes
  • 4. designed with an awareness that it will be put to unforeseen uses

Technically, a component needs to meet certain qualifications to be cloud-native or usable as a Component as a Service (CaaS), including all of the above criteria (1-4).

Documentation and usage requirements

5. Dependencies must be clearly articulated, and the versions of those dependencies the component works with must be understood.

6. A component API should reflect the way it will be used, to make using the API as simple as possible. If necessary, multiple APIs should be available for multiple usage patterns or classes of users.

7. All of the component's API(s) should be documented and published in a social store where all users or potential users of the component can see them and all comments about them. Private APIs should be visible only to private users, but should be documented and published as well.

8. APIs for a component should be changed as frequently as needed, sometimes very frequently. Users should be made aware of all such changes suitably in advance, and prior APIs should be supported for a period appropriate to the usage.

9. Usage of all of a component's API(s) should be trackable through observation of the calls themselves, as well as through log files or an event stream from the component.

10. Any internal data needed by a component should be accessible through the component and its interfaces, public or private.

Systemic requirements

11. The component must isolate data from different subscribers in a well-defined way.

12. A component should be able to have multiple instances operating simultaneously.

13. A component's fault-tolerance capability should be well specified.

14. A component should be able to be packaged in a standard container like Docker.

15. A component should conform to standards where applicable.

16. A component should conform to standards for security, or integrate with security systems in a well-defined way.

17. Test versions of components should be made available if needed, so users can test them within their own functionality.

18. SLAs for components in production as a service should be published or available.

19. A component should not depend on any specific network address, file system address, or other physical-world dependency; it should virtualize all such real-world dependencies and allow them to be injected at invocation or instantiation time.
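Requirement 19 is essentially constructor injection of environment details. A minimal Python sketch (the class, URLs, and paths are invented for the example; the point is only that nothing physical is hard-coded):

```python
class OrderComponent:
    """A toy component: its real-world dependencies (database endpoint,
    log location) arrive as constructor arguments, never as constants."""

    def __init__(self, db_url, log_dir):
        self.db_url = db_url
        self.log_dir = log_dir

    def describe(self):
        return "orders backed by %s, logging to %s" % (self.db_url, self.log_dir)

# The deployer injects concrete addresses at instantiation time, so the
# same component code can be instantiated anywhere:
prod = OrderComponent("jdbc:mysql://db1:3306/orders", "/var/log/orders")
test = OrderComponent("jdbc:mysql://localhost:3306/orders_test", "/tmp")
```
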


 Containers, Components and PaaS, the Cloud

Today, software is deployed instantly or near-instantly using PaaS or DevOps software after it has passed testing. Usually DevOps packages software in a "container." The value of containers is that they simplify, in some cases drastically, the installation of the separate pieces and interdependencies of a service into a single "unit" of composition that is easy to manage. Containers usually have instance-specific dependencies "injected" into them at instantiation time, so instances can be created on the fly anywhere without concern for physical constraints.

Containers also help isolate the errors a specific service can introduce into a system, limiting the damage to whatever is inside the container and leaving the containers around it, even on the same machine, unaffected.

A component may be encapsulated inside multiple containers, for instance OSGi and Docker such that each container provides some protection or functionality.

API Management

Components should be designed to be managed by an API management system. The value of this is clearly evident:

1. Each component's API is available for use, comment, and improvement.

2. Each component can be tracked during usage, if desired, to see how it is being used and who is using it, as well as for error resolution.

3. If desired, API management can be used to govern and provide additional security for the component.

4. API management can facilitate version management and many of the component quality-of-service requirements described above.






We are entering a new world of software development where software is composed largely of components, in the form of open source or APIs, that depend on other components. The result is blisteringly fast development of powerful new applications, services, and enhanced functionality at lower cost and faster time to market. If we all follow the rules for building Components as a Service early on, we can make the transition from the old world of slow development, ever-present and rarely fixed bugs, and living on the same software for years, to this new world of rapid change, low cost, and fast time to market. It requires thinking of building or refactoring (giving a new face to old software and services) existing software in the enterprise as powerful Component-as-a-Service APIs.

WSO2 is the only vendor I know of that provides a complete platform 3.0 stack: SOA, cloud, big data, mobile, social, API-centric, componentized CaaS software. WSO2 has built its entire open source software stack on OSGi and a multi-tenant, component-based model that can be taken in bite-size pieces and combined or used alone to build this new model of software development. It really behooves you to look at WSO2, at minimum in terms of the requirements for building highly reusable, cloud-native component software, even if you don't use it yourself. Building truly componentized software that is open source and cloud native is a brilliant approach to software development.


Other Articles to read on this topic:







Prabath SiriwardenaWSO2 Identity Server / Thinktecture - Identity Broker Interop

Today is the third and final day of the interop event happening right now in Virginia Beach, USA. Today we were able to successfully interop test a selected set of identity broker patterns with the Thinktecture identity provider.

In the first scenario, a .NET web application deployed in IIS talks to Thinktecture via WS-Federation. Thinktecture acts as the broker and asks the user to pick the identity provider, then redirects the user to WSO2 IS via WS-Federation.

In the second scenario, WSO2 IS is acting as the broker. Salesforce, which acts as the service provider, talks to WSO2 IS via SAML 2.0. WSO2 IS asks the user to pick the identity provider and then redirects the user to Thinktecture via WS-Federation. On the return path, WSO2 IS converts the WS-Federation response into a SAML 2.0 response and sends it back to Salesforce.

Nuwan BandaraWSO2Con’14 San Francisco

My Sessions: 28th Oct 2.00 p.m. – 2.30 p.m. PDT – Run Your Own Mobile App Store; 28th Oct 4.00 p.m. – 4.30 p.m. PDT – Governance for a Connected Ecosystem; 29th Oct 9.00 a.m. – 9.30 a.m. PDT – Connected-Health Reference Architecture. The agenda for WSO2Con US can be found at … Online registration is still open. So register now and get technology insights for…

Madhuka UdanthaPredictive modeling

Predictive modeling is the process by which a model is created or chosen to best predict the probability of an outcome. Most often, the event one wants to predict is in the future or otherwise unknown. The model is chosen on the basis of detection theory. Models can use one or more classifiers.


There are three classes of predictive models:

  • Parametric
  • Non-parametric
  • Semi-parametric models

Parametric models make “specific assumptions with regard to one or more of the population parameters that characterize the underlying distributions”. Commonly used predictive modeling techniques include:

  • GMDH (group method of data handling)
  • Naive Bayes
  • k-nearest neighbor algorithm
  • Majority classifier
  • Support vector machines (SVM)
  • Random forests
  • Boosted trees
  • CART (classification and regression trees)
  • MARS (multivariate adaptive regression splines)
  • Neural networks
  • Ordinary least squares
  • Generalized linear models (GLM)

Non-parametric models

  • Generalized additive models
  • Robust regression

Semi-parametric models

  • Semiparametric regression
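The practical difference between the parametric and non-parametric classes can be shown with a toy 1-D example: a parametric classifier compresses each class into a few distribution parameters (here, a Gaussian mean and variance), while a non-parametric one keeps the training data itself. The data points below are made up for illustration:

```python
import math
from collections import Counter

# Toy training set: (feature value, class label)
train = [(1.0, "A"), (1.2, "A"), (0.8, "A"), (3.0, "B"), (3.3, "B"), (2.9, "B")]

def parametric_predict(x):
    """Parametric: assume each class is Gaussian; keep only mean/variance."""
    params = {}
    for label in {l for _, l in train}:
        xs = [v for v, l in train if l == label]
        mu = sum(xs) / len(xs)
        var = sum((v - mu) ** 2 for v in xs) / len(xs)
        params[label] = (mu, var)

    def density(x, mu, var):
        return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

    # Pick the class whose fitted Gaussian gives x the highest density.
    return max(params, key=lambda label: density(x, *params[label]))

def knn_predict(x, k=3):
    """Non-parametric: no distribution assumed; vote among nearest points."""
    nearest = sorted(train, key=lambda point: abs(point[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

On well-separated data like this the two approaches agree; they diverge when the Gaussian assumption of the parametric model is violated.
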



Prabath SiriwardenaAMAZON Still Uses OpenID!

Few have noticed that Amazon still uses (at the time of this writing) OpenID for user authentication. Check it out yourself: go to amazon.com and click the Sign In button. Then observe the browser address bar. You will see something similar to the following, which is an OpenID authentication request:
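The actual Amazon URL is omitted above, but the general shape of an OpenID 2.0 authentication request can be illustrated with made-up hostnames; only the `openid.*` parameter names are real, taken from the OpenID Authentication 2.0 spec:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical endpoint and return_to URL; the openid.* parameter names
# are the standard ones you would see in the browser address bar.
auth_request = (
    "https://idp.example.com/openid?"
    "openid.ns=http://specs.openid.net/auth/2.0"
    "&openid.mode=checkid_setup"
    "&openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select"
    "&openid.return_to=https://www.example.com/signin"
)

# Pull the OpenID parameters out of the query string.
params = {k: v[0] for k, v in parse_qs(urlparse(auth_request).query).items()}
```

Spotting `openid.mode=checkid_setup` in an address bar is the telltale sign that a site is delegating authentication to an OpenID provider.
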

Prabath SiriwardenaWSO2 Identity Server / Microsoft ADFS - Identity Broker Interop

We are in the middle of an interop event happening right now in Virginia Beach, USA. Today and yesterday we were able to successfully interop test a selected set of Identity Broker patterns with Microsoft ADFS 2.0/3.0.

In the first scenario, a .NET web application deployed in IIS talks to ADFS via WS-Federation. ADFS is acting as the broker and asks the user to pick the identity provider. Then ADFS will redirect the user to WSO2 IS via WS-Federation.

In the second scenario, a .NET web application deployed in IIS talks to ADFS via SAML 2.0. ADFS is acting as the broker and it asks the user to pick the Identity Provider. Then ADFS will redirect the user to the WSO2 IS via SAML 2.0.

In the third scenario, WSO2 IS is acting as the broker. Salesforce, which acts as the service provider, talks to WSO2 IS via SAML 2.0. WSO2 IS asks the user to pick the identity provider and then redirects the user to ADFS via WS-Federation. On the return path, WSO2 IS converts the WS-Federation response into a SAML 2.0 response and sends it back to Salesforce.