WSO2 Venus

Pavithra MadurangiEmail based login for tenants - For WSO2 Carbon based Products.

This simple blog post explains how to configure WSO2 Carbon based servers to support email authentication for tenants.

e.g : If the tenant domain is, say, "abc.com" and the email user name of the tenant user is "user@gmail.com" (both values are hypothetical), then "user@gmail.com@abc.com" should be able to log in to the management console of WSO2 products.

1) To support email authentication, enable the following property in user-mgt.xml (CARBON_HOME/repository/conf)

2) Change the following two properties in the primary user store manager

3) Remove the following property

After this configuration, tenants will be able to log in with the email attribute (email@tenantDomain)
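The property snippets for steps 1-3 did not survive here. As a rough sketch (property names follow WSO2's generally documented email-login setup, not this post's lost snippets, so treat them as assumptions):

```xml
<!-- Step 1 (sketch): enable email user names; in recent Carbon versions this flag lives in carbon.xml -->
<EnableEmailUserName>true</EnableEmailUserName>

<!-- Step 2 (sketch): in the primary user store manager, resolve user names via the mail attribute -->
<Property name="UserNameAttribute">mail</Property>
<Property name="UserNameSearchFilter">(&amp;(objectClass=person)(mail=?))</Property>

<!-- Step 3 (sketch): remove (or comment out) the UserDNPattern property -->
<!-- <Property name="UserDNPattern">uid={0},ou=Users,dc=wso2,dc=org</Property> -->
```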

e.g :

References :

Kavith Thiranga LokuhewageHow to use DTO Factory in Eclipse Che


Data transfer objects (DTOs) are used in Che for communication between client and server. At the code level, a DTO is just an interface annotated with @DTO (com.codenvy.dto.shared.DTO). This interface should contain getters and setters (following bean naming conventions) for each field that we need in this object.

For example, following is a DTO with a single String field.

@DTO
public interface HelloUser {

    String getHelloMessage();

    void setHelloMessage(String message);
}

By convention, we put these DTOs into a shared package, as they will be used by both the client and server side.

DTO Factory

DTO Factory is a factory available on both client and server sides, which can be used to serialize/deserialize DTOs. It internally uses generated DTO implementations (described in the next section) to get this job done. Yet it has a properly encapsulated API, and developers can simply use a DtoFactory instance directly.

For client side   : com.codenvy.ide.dto.DtoFactory  
For server side : com.codenvy.dto.server.DtoFactory

HelloUser helloUser = DtoFactory.getInstance().createDto(HelloUser.class);
Initializing a DTO

The code snippet above shows how to initialize a DTO using DtoFactory. As mentioned above, the appropriate DtoFactory class should be used on the client or server side.

Deserializing in client side

Unmarshallable<HelloUser> unmarshaller =
        dtoUnmarshallerFactory.newUnmarshaller(HelloUser.class); // factory instance name is illustrative

helloService.sayHello(sayHello, new AsyncRequestCallback<HelloUser>(unmarshaller) {
    protected void onSuccess(HelloUser result) {
        // use the deserialized DTO
    }
    protected void onFailure(Throwable exception) {
        // handle the error
    }
});
Deserializing in client side

When invoking a service that returns a DTO, the client side should register a callback created using the relevant unmarshaller factory. Then the onSuccess method will be called with a deserialized DTO.

Deserializing in server side

    public ... sayHello(SayHello sayHello) {
        ... sayHello.getHelloMessage() ...
    }
Deserializing in server side     
Everest (the JAX-RS implementation of Che) automatically deserializes DTOs when they are used as parameters in REST services. It identifies a serialized DTO by the marked type - @Consumes(MediaType.APPLICATION_JSON) - and uses the generated DTO implementations to deserialize it.

DTO maven plugin

As mentioned earlier, for DtoFactory to function properly, it needs some generated code that contains the concrete logic to serialize/deserialize DTOs. The GWT compiler should be able to access the generated code for the client side, and the generated code for the server side should go into the jar file.

Che uses a special Maven plugin called “codenvy-dto-maven-plugin” to generate this code. The following figure illustrates a sample configuration of this plugin. It contains separate executions for the client and server sides. We have to input the correct package structures and the file paths to which the generated files should be copied.

    Other dependencies if DTOs from current project need them.

package - the package in which the DTO interfaces reside
outputDirectory - the directory to which generated files should be copied
genClassName - the class name for the generated class
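The configuration figure itself was lost. Purely as a hypothetical sketch (the artifactId and the three parameters come from the text above; the goal name, execution id and paths are placeholders):

```xml
<plugin>
  <artifactId>codenvy-dto-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>generate-client-dto</id> <!-- placeholder id -->
      <goals>
        <goal>generate</goal>      <!-- placeholder goal name -->
      </goals>
      <configuration>
        <package>com.example.myext.shared</package> <!-- illustrative package of the DTO interfaces -->
        <outputDirectory>${project.build.directory}/generated-sources/dto</outputDirectory>
        <genClassName>com.example.myext.client.DtoClientImpls</genClassName>
      </configuration>
    </execution>
    <!-- a second, similar execution generates the server-side implementations -->
  </executions>
</plugin>
```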

You should also configure your Maven build to use these generated classes as a resource when compiling and packaging. Just add the following line to the resources in the build section.
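The exact line was lost here; assuming the generated sources land in the directory given to the plugin, the resource entry would be along these lines:

```xml
<resources>
  <resource>
    <directory>${project.build.directory}/generated-sources/dto</directory>
  </resource>
</resources>
```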


Kavith Thiranga LokuhewageGWT MVP Implementation in Eclipse Che

MVP Pattern

Model View Presenter (aka MVP) is a design pattern that attempts to decouple the logic of a component from its presentation. This is similar to the popular MVC (model view controller) design pattern, but has some fundamentally different goals. The benefits of MVP include more testable code, more reusable code, and a decoupled development environment.

MVP Implementation in Che

Note : Code examples used in this document are from a sample project wizard page for WSO2 DSS.

There are four main Java components used to implement a Che component that follows MVP.

  1. Interface for View functionality
  2. Interface for Event delegation
  3. Implementation of View
  4. Presenter


To reduce the number of files created for each MVP component, No. 1 and No. 2 are placed in a single Java file. To be more precise, the event delegation interface is defined as a sub-interface within the view interface.

The view interface should define methods that the presenter uses to communicate with the view implementation. The event delegation interface should define methods that the presenter implements, so that the view can delegate events to the presenter through them.

Following code snippet demonstrates these two interfaces that we created for DSS project wizard page.

public interface DSSConfigurationView extends View<DSSConfigurationView.ActionDelegate> {
    String getGroupId();
    void setGroupId(String groupId);
    String getArtifactId();
    void setArtifactId(String artifactId);
    String getVersion();
    void setVersion(String version);

    interface ActionDelegate {
        void onGroupIdChanged();
        void onArtifactIdChanged();
        void onVersionChanged();
    }
}
View and Event Handler interfaces

The view interface should extend the com.codenvy.ide.api.mvp.View interface, which defines only a single method - void setDelegate(T delegate).

…interface DSSConfigurationView extends View<DSSConfigurationView.ActionDelegate>..

Using generics, we inform this super interface about our event delegation interface.

View Implementation

The view implementation can extend any abstract widget such as Composite. It may also use UiBinder to implement the UI if necessary. It is possible to implement the view following any approach and using any GWT widget. The only requirement is that it implements the view interface (created in the previous step) and the IsWidget interface (or extends any subclass of IsWidget).

public class DSSConfigurationViewImpl extends ... implements DSSConfigurationView {

    // Maintain a reference to presenter
    private ActionDelegate delegate;

    // Provide a setter for presenter
    public void setDelegate(ActionDelegate delegate) {
        this.delegate = delegate;
    }

    // Implement methods defined in view interface
    public String getGroupId() {
        return groupId.getText();
    }

    public void setGroupId(String groupId) {
        this.groupId.setText(groupId);
    }

    // Notify presenter on UI events using delegation methods
    public void onGroupIdChanged(KeyUpEvent event) {
        delegate.onGroupIdChanged();
    }

    public void onArtifactIdChanged(KeyUpEvent event) {
        delegate.onArtifactIdChanged();
    }
}

View implementation

As shown in the above code snippet (see full code), the main things to do in a view implementation can be summarised as below.
  1. Extend any widget from GWT and implement the user interface following any approach
  2. Implement the view interface (created in the previous step)
  3. Keep a reference to the action delegate (the presenter - see the next section for more info)
  4. Upon any UI event, inform the presenter using the delegation methods so that the presenter can execute business logic accordingly


The presenter can extend any of the available abstract presenters, such as AbstractWizardPage, AbstractEditorPresenter and BasePresenter: anything that implements com.codenvy.ide.api.mvp.Presenter. It should also implement the action delegation interface so that upon any UI event, the delegation methods are called on it.

public class DSSConfigurationPresenter extends ... implements DSSConfigurationView.ActionDelegate {

    // Maintain a reference to view
    private final DSSConfigurationView view;

    public DSSConfigurationPresenter(DSSConfigurationView view, ...) {
        this.view = view;
        // Set this as action delegate for view
        view.setDelegate(this);
    }

    // Init view and set view in container
    public void go(AcceptsOneWidget container) {
        container.setWidget(view);
    }

    // Execute necessary logic upon UI events
    public void onGroupIdChanged() {
        // ...
    }

    public void onArtifactIdChanged() {
        // ...
    }
}

Depending on the presenter you extend, there may be various abstract methods that need to be implemented. For example, if you extend AbstractEditorPresenter, you need to implement methods such as initializeEditor(), isDirty() and doSave(). If it is AbstractWizardPage, you need to implement isCompleted(), storeOptions(), removeOptions(), etc.

Yet, as shown in the above code snippet (see full code), the following are the main things you need to do in a presenter.
  1. Extend any abstract presenter as needed and implement its abstract methods/override behaviour as needed
  2. Implement the action delegation interface
  3. Maintain a reference to the view
  4. Set this as the action delegate of the view using the setDelegate method
  5. Init the view and set it in the parent container (go method)
  6. Use methods defined in the view interface to communicate with the view

The go method is called by the Che UI framework when this particular component needs to be shown in the IDE. It is called with a reference to the parent container.

Dedunu DhananjayaOpenJDK is not bundled with Ubuntu by default

This is not a technical blog post. It was about a bet. One of my ex-colleagues told me that OpenJDK is installed on Ubuntu by default. So I installed a fresh virtual machine and showed him that it isn't. Then I earned pancakes. We went to The Mel's Tea Cafe.

Dedunu DhananjayaThat Cafe on That Day

This was a treat from Jessi (President of BVT). Actually we earned it by helping with her course work. According to her this was the best place, and we were excited. We planned to go there at 5 pm. I was the guy who went there first, and the time was around 4 pm. Then I was waiting till someone came. Aliza came next. And we were waiting for our honorable president.

I liked the atmosphere. It was a little bit hard to find That Cafe. You can see Jessi's favorite drink, Ocean Sea Fossil. BVT didn't want to leave the place. And we also decided on the next BVT tour. Wait for the next BVT tour. ;)

Dedunu DhananjayaSimply Strawberries on 14th Jan

We went to have strawberry waffles, and all of us wanted them with chocolate sauce. My friend Jessi always wants to take photographs of food. Jessi, Aliza and I went there. So I got this photo because of her. The waffle was awesome. I also love the setting there.

This is the beginning of the "Bon Viveur" team. And we decided to go out and try different foods and places more often. Oh my god, I forgot to mention the shop. It is Simply Strawberries. We had a walk to the place and it was fun!!!

Dedunu DhananjayaSunday or Someday on 27th Dec

Three of us wanted to go somewhere, and then we tried to pick a date, but we couldn't. Finally we just agreed to go out on Sunday. Then we went to Lavinia Breeze and had fun. We were acting like kids: screaming, laughing. We don't mind what others think. That's us!!!

Then we went to Majestic City Cinema to watch The Hobbit. And we laughed like idiots when we were supposed to be serious. ;) Then finally we went to Elite Indian Restaurant.

Good Best friends!!! :D

Dedunu DhananjayaThe Sizzle on 17th Dec

Recently I started visiting places with my friends to enjoy myself. So last month, I went to The Sizzle with one of my best friends. The receptionist asked "table for two?" and I nodded. He brought us to a table for two, which looked a little bit embarrassing. But the food was good. And this was the second time I visited "The Sizzle".

And this Sizzle visit will be remarkable. ;)

Ajith VitharanaRead the content stored in registry- WSO2 ESB

1. Let's say we have stored an XML file (order-id.xml) in the registry.


2. I'm going to use a mock service (Mockproxy) to read the content and send it back as a response (using the respond mediator - ESB 4.8.x).

<proxy xmlns=""
<property name="order"
<payloadFactory media-type="xml">
<arg evaluator="xml" expression="$ctx:order//id"/>
<arg evaluator="xml" expression="$ctx:order//symbol"/>
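The snippet above lost most of its attributes during extraction. A minimal sketch of such a mock proxy (the proxy name, registry path and response element names are illustrative; get-property('registry', ...) is the standard Synapse way to read a registry resource):

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="Mockproxy" transports="http,https">
   <target>
      <inSequence>
         <!-- load the registry resource into a property of type OM -->
         <property name="order" expression="get-property('registry', 'conf:/order-id.xml')"
                   scope="default" type="OM"/>
         <payloadFactory media-type="xml">
            <format>
               <order><id>$1</id><symbol>$2</symbol></order>
            </format>
            <args>
               <arg evaluator="xml" expression="$ctx:order//id"/>
               <arg evaluator="xml" expression="$ctx:order//symbol"/>
            </args>
         </payloadFactory>
         <respond/>
      </inSequence>
   </target>
</proxy>
```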

3. Mockproxy test.

Request :

<soapenv:Envelope xmlns:soapenv="">


<soapenv:Envelope xmlns:soapenv="">
<Response xmlns="">

sanjeewa malalgodaWSO2 API Manager - Troubleshoot common deployment issues

Here are some useful tips for when you work on deployment-related issues.

-Dsetup doesn't work for usage tables: usage table definitions are not in the setup scripts.
In a WSO2 API Manager and BAM deployment we need the user manager database, the registry database and the API Manager database. We have setup scripts for those databases under the dbscripts folder of the product distribution. There is no need to create any tables in the stats database (manually or using a setup script), as the API Manager toolbox (deployed in BAM) will create them when the hive queries are executed. The -Dsetup option does not apply to the hive scripts inside the toolbox deployed in BAM.

Understand connectivity between components in distributed API manager deployment.
This is important when you work on issues related to a distributed API Manager deployment. The following steps explain the connectivity between the components; it is useful to list them here.

1. Changed the admin password
2. Tried to log in to publisher and got the insufficient privilege error
3. Then changed the admin password in authManager element in api-manager.xml
4. Restarted and I was able to login to API publisher. Then I created an API and tried to publish. Got a permission error again.
5. Then, I changed password under API Gateway element in api-manager.xml
6. Restarted and published the API. Then, tried to invoke an API using an existing key. Got the key validation error.
7. Then, I changed the admin password in KeyManager element in api-manager.xml and all issues got resolved.
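For reference, the three credentials touched in steps 3, 5 and 7 all live in api-manager.xml. Schematically (element names vary slightly across API Manager versions, and the ServerURL values are illustrative):

```xml
<AuthManager>
    <ServerURL>https://localhost:9443/services/</ServerURL>
    <Username>admin</Username>
    <Password>newpassword</Password>
</AuthManager>

<APIGateway>
    <Environments>
        <Environment type="hybrid">
            <ServerURL>https://localhost:9443/services/</ServerURL>
            <Username>admin</Username>
            <Password>newpassword</Password>
        </Environment>
    </Environments>
</APIGateway>

<APIKeyManager>
    <ServerURL>https://localhost:9443/services/</ServerURL>
    <Username>admin</Username>
    <Password>newpassword</Password>
</APIKeyManager>
```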

Thrift key validation does not work when a load balancer fronts the key manager.
The reason is that most load balancers are not capable of routing traffic in a session-aware manner. So in such cases it is always recommended to use the WS key validation client.
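Switching from Thrift to WS key validation is a one-line change in api-manager.xml (in 1.x releases the switch looks like this):

```xml
<KeyValidatorClientType>WSClient</KeyValidatorClientType>
```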

Usage data related issues.
When you work on usage-data-related issues, first check the data source configurations in BAM and API Manager. Then check the created tables in the usage database. Most reported issues are due to configuration problems. The same applies to the billing sample as well.

sanjeewa malalgodaWSO2 API Manager - How to customize API Manager using extension points

In this article I will discuss the common extension points available in API Manager and how we can use them.

Dedunu DhananjayaHow to run Alfresco Share or Repository AMP projects

In the previous post, I explained how to generate an Alfresco AMP project using Maven. When you have an AMP project, you can run it by deploying it to an existing Alfresco Repository or Share. But if you are a developer, you will not find that an effective way to run Alfresco modules. The other way is to run the AMP project using the Maven plug-in.

In this post, I'm not going to talk about the first method. As I said earlier, we can run an Alfresco instance using Maven. To do that, from your terminal move to the Alfresco AMP project folder and run the command below.

mvn clean package -Pamp-to-war

It may take a while. If you are running this command for the first time, it will download the Alfresco binary to the local Maven repository. If you run an instance again, your changes will still be available in that Alfresco instance.

If you want to discard all the previous data, use below command.

mvn clean package -Ppurge -Pamp-to-war

The above command will discard all the changes and data. It will start a fresh instance.

Enjoy Alfresco Development!!!

Pavithra MadurangiConfiguring Active Directory (Windows 2012 R2) to be used as a user store of WSO2 Carbon based products

The purpose of this blog post is not to explain the steps to configure AD as the primary user store; that information is covered in the WSO2 documentation. My intention is to give some guidance on how to configure an AD LDS instance to work over SSL and how to export/import certificates to the trust store of WSO2 servers.

To achieve this, we need to

  1. Install AD on Windows 2012 R2
  2. Install AD LDS role in Server 2012 R2
  3. Create an AD LDS instance
  4. Install Active Directory Certificate Service in the Domain Controller (Since we need to get AD LDS instance work over SSL)
  5. Export certificate used by Domain Controller.
  6. Import the certificate to client-truststore.jks in WSO2 servers.

Also, this information is already covered in the following two great blog posts by Suresh. So my post will be an updated version of them, filling some gaps and linking some missing bits and pieces.

Let's get started. 

1. Assume you have only installed Windows 2012 R2 and now you need to install AD too. The following article clearly explains all the steps required.

Note : As mentioned in the article itself, it is written assuming that there's no existing Active Directory forest. If you need to configure the server to act as the Domain Controller for an existing forest, then the following article will be useful.

2) Now you've installed Active Directory Domain Service and the next step is to install AD LDS role. 

- Start -> Open Server Manager -> Dashboard and Add roles and features

- In the popup wizard, under Installation type, select the Role-based or feature-based option and click the Next button.

- In the Server Selection, select current server which is selected by default. Then click Next.

- Select the AD LDS (Active Directory Lightweight Directory Services) check box in Server Roles and click Next.

- Next you'll be taken through the wizard, which will include AD LDS related information. Review that information and click Next.

- Now you'll be prompted to select optional features. Review them, select the optional features you need (if any) and click Next.

- Review installation details and click Install.

- After a successful AD LDS installation you'll get a confirmation message.

3. Now let's create an AD LDS instance. 

- Start -> Open Administrative Tools. Click Active Directory Lightweight Directory Services Setup Wizard.

- You'll be directed to the Welcome page of the Active Directory Lightweight Directory Services Setup Wizard. Click Next.

- Then you'll be taken to the Setup Options page. From this step onwards, the configuration is the same as mentioned in

4. As explained in the above blog, if you pick the Administrative account for the service account selection, then you won't have to specifically create certificates and assign them to the AD LDS instance. Instead, the default certificates used by the Domain Controller can be accessed by the AD LDS instance.

To achieve this, let's install a certificate authority on the Windows 2012 server (if it's not already installed). Again, I'm not going to explain it in detail because the following article covers all the required information.

5. Now let's export the certificate used by Domain controller

- Go to MMC (Start -> Administrative tools -> run -> MMC)
- File -> Add or Remove Snap-ins
- Select certificates snap-in and click add.

-Select computer account radio button and click Next.
- Select Local computer and click Finish.
- In MMC, click on Certificates (Local Computer) -> Personal -> Certificates.
- There you'll find a bunch of certificates.
- Locate root CA certificate, right click on it -> All Tasks and select Export.

Note : The intended purpose of this certificate is "All" (not purely server authentication). It's possible to create a certificate just for server authentication and use it for LDAPS authentication. [1] and [2] explain how this can be achieved.

For the moment I'm using the default certificate for LDAPS authentication.

- In the Export wizard, select Do not export private key option and click Next.
- Select DER encoded binary X.509 (.cer) format and provide a location to store the certificate.

6. Import the certificate to trust store in WSO2 Server.

Use the following command to import the certificate into the client-truststore.jks found inside CARBON_HOME/repository/resources/security.

keytool -import -alias adcacert -file /cert_home/cert_name.cer -keystore CARBON_HOME/repository/resources/security/client-truststore.jks -storepass wso2carbon

After this, configuring user-mgt.xml and tenant-mgt.xml is the same as explained in the WSO2 documentation.


Madhuka UdanthaPython with CSV

CSV (Comma Separated Values) is the most common format for exporting and importing data. In Python, the csv module implements classes to read and write tabular data in CSV format without knowing the precise details of the CSV format used by Excel.

Here we read a CSV file and filter the data in it, then create a new CSV file and move the filtered data into it.


import csv

data = []

# reading csv file
with open('D:/Research/python/data/class.csv', 'rb') as f:
    reader = csv.reader(f)
    # checking file is open fine
    print f.closed
    count = 0
    for row in reader:
        print row
        # catching first element (the header row)
        if count == 0:
            data += [row]
        # collecting over 99 marks only
        else:
            if int(row[1]) > 99:
                data += [row]
        count += 1

# writing to csv file
with open('D:/Research/python/data/some.csv', 'wb') as f1:
    writer = csv.writer(f1)
    for row in data:
        print row
    writer.writerows(data)


Output

%run D:/Research/python/
['name', 'marks']
['dilan', '100']
['jone', '98']
['james', '100']
['jack', '92']
['dilan', '100']
['james', '100']
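The same filter reads a little more cleanly with csv.DictReader, which handles the header row for us. A self-contained sketch (Python 3 syntax; in-memory buffers stand in for the files, and the sample data matches the post's class.csv):

```python
import csv
import io

# Sample data matching the post's class.csv
src = io.StringIO("name,marks\ndilan,100\njone,98\njames,100\njack,92\n")

# Read rows as dicts keyed by the header and keep only marks above 99
rows = [r for r in csv.DictReader(src) if int(r['marks']) > 99]

# Write the filtered rows to a new CSV (here an in-memory buffer)
dst = io.StringIO()
writer = csv.DictWriter(dst, fieldnames=['name', 'marks'])
writer.writeheader()
writer.writerows(rows)
print(dst.getvalue())
```

With a real file, replace the StringIO buffers with open(...) calls as in the original script.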



Malintha AdikariHow to send string content as the response from WSO2 ESB Proxy/API

We can send XML or JSON content as the response from our WSO2 ESB proxy/REST API. But there may be a scenario where we want to send string content (which is not in XML format) as the response of our service. The following synapse code snippet shows how to do it.

As an example, suppose you have to send the following string content to your client service:


Note :
  • The above is not in XML format, so we cannot generate it directly through the payload factory mediator.
  • We have to send < and > symbols inside the response, but WSO2 ESB doesn't allow you to keep those characters inside your synapse configuration.
1. First you have to encode the above expected response. You can use this tool to encode your XML. We get the following XML after encoding in our example:


Note : If you want to encode dynamic payload content, you can use the script mediator or a class mediator for that task.

2. Now we can attach the required string content to our payload as follows:

            <payloadFactory media-type="xml">
               <format>
                  <ms11:text xmlns:ms11="">$1</ms11:text>
               </format>
               <args>
                  <arg value="&lt;result1&gt;malintha&lt;/result1&gt;+&lt;result2&gt;adikari&lt;/result2&gt;"/>
               </args>
            </payloadFactory>
            <property name="messageType" value="text/plain" scope="axis2"/>

Here we are using the payload factory mediator to create our payload. You can see that our media-type is still XML. We then load our string content as the argument value, and finally we change the message type to "text/plain", so this returns the string content as the response.

sanjeewa malalgodaWSO2 API Manager visibility, subscription availability and relation between them

When we create APIs we need to be aware of API visibility and subscriptions. Normally API visibility is directly coupled with subscription availability (simply because you cannot subscribe to something you don't see in the store). See the following diagram for more information about the relationship between them.

Visibility - we can control how other users can see our APIs

Subscription availability - how other users can subscribe to APIs created by us

Chintana WilamunaReliable messaging pattern

Reliable messaging involves sending a message successfully from one system to another over unreliable protocols. Although TCP/IP gives you reliability at a lower level, reliable messaging provides delivery guarantees at a higher level. If the recipient is unavailable, messages will be retransmitted over a defined period until they are successfully delivered.


The traditional SOA method of handling reliable messaging is through a framework/library that implements the WS-ReliableMessaging specification. The pattern is illustrated here. A framework like Apache Sandesha provides reliable delivery guarantees according to the specification. From the reliable messaging specification (PDF), the message exchange sequence is like this,

At each step there will be an XML message going back and forth over the wire. This creates a lot of additional overhead, and as a result performance suffers.

Alternative approach using JMS

Looking at the communication overhead and complexity involved in creating/maintaining WS-ReliableMessaging-capable clients, an alternative approach using JMS is very popular. The simplicity of JMS and its easy maintainability are key factors in JMS's success as a de facto solution for reliable message delivery. You put messages into a queue and process messages from a queue.

Messaging with WSO2 platform

WSO2 ESB has the concepts of message stores and message processors. Message stores do what you expect: they store messages. The default message store implementation is in-memory. You also have the option of pointing the message store to a queue in an external broker, and there are standard extension points to extend the functionality and write a custom message store implementation.

Message processors are responsible for reading messages from a message store and sending them to another system. You can configure parameters on the message processor for a flexible polling interval for new messages, the retry interval, delivery attempts and so on.
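As a sketch, a store-and-forward pair in synapse configuration (the store, processor and endpoint names are illustrative; the processor class and parameters are from Synapse's scheduled message forwarding processor):

```xml
<messageStore name="OrderStore"/>

<messageProcessor name="OrderForwarder"
        class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
        messageStore="OrderStore"
        targetEndpoint="OrderEndpoint">
    <!-- poll every second, give up after four delivery attempts -->
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>
```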

Advantages of using WSO2 for reliable messaging

  1. ESB follows a configuration-driven approach. Easy to configure; you don't need to write code for a large set of integration patterns
  2. Protocol conversion comes naturally and doesn't require any extra work - accepting an HTTP request and sending it to a JMS queue requires you to specify only the JMS endpoint
  3. Can take advantage of a large set of enterprise integration patterns
  4. Simple config and deployment for simple scenarios. Complex scenarios are possible just by extending/integrating external brokers
  5. Production deployment flexibility (single node, multiple nodes, on-prem, cloud, hybrid deployments)

Hasitha Aravinda[Oracle] How to get row counts of all tables at once

SQL Query : 

-- reconstructed: the select/extractvalue wrapper around the surviving lines was lost in extraction
select table_name,
       to_number(extractvalue(
           xmltype(dbms_xmlgen.getxml('select count(*) c from '||table_name)),
           '/ROWSET/ROW/C')) as row_count
  from user_tables;

Lahiru SandaruwanRun WSO2 products with high available Master-Master Mysql cluster

This blog post explains a simple setup of a WSO2 product with a Master-Master MySQL cluster. Normal configurations for databases can be found here. In this small note, we configure the Carbon product to use two MySQL master nodes, which ensures high availability of the setup.

This is an extremely easy guide to set up a MySQL cluster. Assume the hostnames of the MySQL master nodes are "mysql-master-1" and "mysql-master-2". Use the following sample datasource in master-datasources.xml of the Carbon product. You can use the same format for any of the databases used by WSO2 Carbon products, as explained in the above-mentioned guide.

<?xml version="1.0" encoding="UTF-8"?>
   <description>The datasource used by user manager</description>
   <definition type="RDBMS">
         <validationQuery>SELECT 1</validationQuery>
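The element that matters most here, the JDBC URL, was lost above. With MySQL Connector/J you can list both masters in a single failover URL (database name and credentials are illustrative):

```xml
<url>jdbc:mysql://mysql-master-1:3306,mysql-master-2:3306/WSO2_CARBON_DB?autoReconnect=true</url>
<username>wso2user</username>
<password>wso2pass</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
```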


Ajith VitharanaHow WS-Trust STS works in WSO2 Identity Server.

WS-Trust STS (Security Token Service) provides a facility for secure communication between a web service client and server.

Benefits of WS-Trust STS

1. Identity delegation.
2. Service consumers do not need to worry about token-specific implementation details.
3. Secure communication across web services.

Work flow.

1. The service client provides credentials to the STS and requests a security token (RST - Request Security Token).

2. The STS validates the client credentials and replies with a security token (SAML) to the client (RSTR - Request Security Token Response).

3. The client invokes the web service along with the token.

4. The web service validates the token against the STS.

5. The STS sends the decision to the web service.

6. If the token is valid, the web service allows access to the protected resource(s).

Use Case

Invoke a secured web service (hosted in WSO2 Application Server) using a security token issued by WSO2 Identity Server.

1. Download the latest versions of WSO2 AS (5.2.1) and WSO2 Identity Server (5.0.0).
2. In AS, change the portOffset value in carbon.xml to 1 (default 0).
3. Start both servers.
4. The "HelloService" sample web service is already deployed in AS.

5. Once you click on the "HelloService" name, you should see the service endpoints.

6. In this use case we are going to use the "wso2carbon-sts" service of the Identity Server for issuing and validating tokens. Therefore the Identity Server acts as the "Identity Provider". So we need to configure the "Resident Identity Provider" first.

7. Go to Home ---> Identity -----> Identity Provider -----> List, then click on the "Resident Identity Provider" link.

8. Add a name for the resident Identity provider. (Eg: "WSO2IdentityProvider")

9. Expand the "WS-Trust / WS-Federation (Passive) Configuration". Now you should see the "wso2carbon-sts" endpoint.

10. Click on the "Apply Security Policy" link and enable security. Then select the security scenario that needs to be applied to the wso2carbon-sts service (e.g. select UsernameToken). Once you select the security scenario, the relevant policy will be applied automatically to the "wso2carbon-sts" service.

10. Select the user group(s) allowed to access the "wso2carbon-sts" service for requesting tokens.

11. Click on the "wso2carbon-sts" service link; now you should see the WSDL including the applied policy.


12. To add a service provider for the web service client, enter a name (e.g. HelloServiceProvider) for the new service provider and update.

13. Edit the "HelloServiceProvider" and configure the web service.

14. Apply the security for the "HelloService" deployed in AS.

15. Select the  "Non-Repudiation" as the security scenario.

   The image below is captured from the Identity Server product.

16. Now  "HelloService" WSDL should have the applied policy.

17. Download the sts-client project from the following git repository location.
(This is the same sample that is included in the WSO2 Identity Server project, with a few changes made for this use case.)

git :

18. The README of the sts-client project describes how to execute the client.

(The underlined values should be changed according to your environment.)

19. The key store of the web service client should contain the public certificates of the STS and AS. Therefore it uses the wso2carbon.jks which is already in use in ESB and AS.

20. You can enable the SOAP tracer to capture the requests and replies of each server.

Dedunu DhananjayaHow to generate Alfresco 5 AMP project

Recently I have been working as an Alfresco developer. When you are developing Alfresco modules, you need a proper project with the correct directory structure. Since Alfresco uses Maven, you can generate an Alfresco 5 AMP project using an archetype.

First you need Java and Maven installed on your Linux/Mac/Windows computer. Then run the command below to start the project.

mvn archetype:generate -DarchetypeCatalog= -Dfilter=org.alfresco:

Then you will get the text below.

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO] >>> maven-archetype-plugin:2.2:generate (default-cli) > generate-sources @ standalone-pom >>>
[INFO] <<< maven-archetype-plugin:2.2:generate (default-cli) < generate-sources @ standalone-pom <<<
[INFO] --- maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
[INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven.archetypes:maven-archetype-quickstart:1.0)
Choose archetype:
1: -> org.alfresco.maven.archetype:alfresco-allinone-archetype (Sample multi-module project for All-in-One development on the Alfresco plaftorm. Includes modules for: Repository WAR overlay, Repository AMP, Share WAR overlay, Solr configuration, and embedded Tomcat runner)
2: -> org.alfresco.maven.archetype:alfresco-amp-archetype (Sample project with full support for lifecycle and rapid development of Repository AMPs (Alfresco Module Packages))
3: -> org.alfresco.maven.archetype:share-amp-archetype (Share project with full support for lifecycle and rapid development of AMPs (Alfresco Module Packages))

Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): :

Now you have 3 options to select.
  1. All-in-One (This includes the Repository module, Share module, Solr configuration and Tomcat runner. A one-stop solution for Alfresco development. I don't recommend that beginners start with it.)
  2. Alfresco Repository Module (This will generate an AMP for the Alfresco repository.)
  3. Alfresco Share Module (This will generate an AMP for Alfresco Share.)
Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): : 2
Choose org.alfresco.maven.archetype:alfresco-amp-archetype version: 
1: 2.0.0-beta-1
2: 2.0.0-beta-2
3: 2.0.0-beta-3
4: 2.0.0-beta-4
5: 2.0.0
Choose a number: 5: 

In this example I used the Alfresco Repository Module. Then it prompts for the SDK version; by pressing enter you get the latest (default) SDK version. Then Maven prompts for a groupId and artifactId. Please provide suitable IDs for them.

Define value for property 'groupId': : org.dedunu
Define value for property 'artifactId': : training
[INFO] Using property: version = 1.0-SNAPSHOT
[INFO] Using property: package = (not used)
[INFO] Using property: alfresco_target_groupId = org.alfresco
[INFO] Using property: alfresco_target_version = 5.0.c
Confirm properties configuration:
groupId: org.dedunu
artifactId: training
version: 1.0-SNAPSHOT
package: (not used)
alfresco_target_groupId: org.alfresco
alfresco_target_version: 5.0.c
 Y: : 

Then Maven prompts again for your target Alfresco version. At the moment the latest Alfresco version is 5.0.c. If you hit enter it will continue with the latest version; otherwise you can customize the target Alfresco version. Then it will generate a Maven project for Alfresco.

[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: alfresco-amp-archetype:2.0.0
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: org.dedunu
[INFO] Parameter: artifactId, Value: training
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: (not used)
[INFO] Parameter: packageInPathFormat, Value: (not used)
[INFO] Parameter: package, Value: (not used)
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: groupId, Value: org.dedunu
[INFO] Parameter: alfresco_target_version, Value: 5.0.c
[INFO] Parameter: artifactId, Value: training
[INFO] Parameter: alfresco_target_groupId, Value: org.alfresco
[INFO] project created from Archetype in dir: /Users/dedunu/Documents/workspace/training
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 08:33 min
[INFO] Finished at: 2015-01-19T23:58:38+05:30
[INFO] Final Memory: 14M/155M

[INFO] ------------------------------------------------------------------------
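The interactive prompts above can also be skipped. If you already know the archetype coordinates and project IDs (the values below are the ones chosen in this walkthrough, so adjust groupId/artifactId to your project), Maven's batch mode can generate the same project in one shot; any extra property the archetype asks for (such as the target Alfresco version) can also be passed with a -D flag.

```shell
# -B runs archetype:generate non-interactively with the given coordinates
mvn archetype:generate -B \
  -DarchetypeGroupId=org.alfresco.maven.archetype \
  -DarchetypeArtifactId=alfresco-amp-archetype \
  -DarchetypeVersion=2.0.0 \
  -DgroupId=org.dedunu \
  -DartifactId=training
```

This requires Maven and network access to the archetype repository, just like the interactive run.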

sanjeewa malalgodaHow to run WSO2 API Manager 1.8.0 with Java Security Manager enabled

In Java, the Security Manager is available for applications to enforce various security policies. The Security Manager helps prevent untrusted code from performing malicious actions on the system.

In this post we will see how to run WSO2 API Manager 1.8.0 with the Security Manager enabled.

To sign the jars, we need a key. We can use the keytool command to generate a key.

sanjeewa@sanjeewa-ThinkPad-T530:~/work/wso2am-1.8.0-1$ keytool -genkey -alias signFiles -keyalg RSA -keystore signkeystore.jks -validity 3650 -dname "CN=Sanjeewa,OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK"
Enter keystore password: 
Re-enter new password:
Enter key password for
(RETURN if same as keystore password):
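To confirm the signing key was created, the keystore can be inspected with keytool's standard -list option (this verification step is not in the original post; the keystore and alias names match the ones generated above, and keytool will prompt for the keystore password):

```shell
# Lists the signFiles entry and its certificate fingerprint
keytool -list -keystore signkeystore.jks -alias signFiles
```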
Scripts to sign the jars are listed below. Create the following 2 scripts and grant them the required permissions.

First script (walks a directory tree and signs every jar in it):

    if [[ ! -d $1 ]]; then
       echo "Please specify a target directory"
       exit 1
    fi
    for jarfile in `find . -type f -iname \*.jar`
    do
      ./ $jarfile
    done

Second script (signs and verifies a single jar):

    set -e
    # keystore_file, keystore_storepass, keystore_keypass and keystore_keyalias
    # must be set to match the signing keystore created above.
    jarfile=$1
    signjar="$JAVA_HOME/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -keystore $keystore_file -storepass $keystore_storepass -keypass $keystore_keypass"
    verifyjar="$JAVA_HOME/bin/jarsigner -keystore $keystore_file -verify"
    echo "Signing $jarfile"
    $signjar $jarfile $keystore_keyalias
    echo "Verifying $jarfile"
    $verifyjar $jarfile
    # Check whether the verification is successful.
    if [ $? -eq 1 ]
    then
       echo "Verification failed for $jarfile"
       exit 1
    fi
Then sign all the jars using the scripts created above:
    ./ ./repository/ > log

Add the following JVM options to the server startup script:

    -Djava.security.policy=$CARBON_HOME/repository/conf/sec.policy \
    -Drestricted.packages=sun.,,com.sun.xml.internal.bind.,com.sun.imageio.,org.wso2.carbon. \
    ,, \

Exporting signFiles public key certificate and importing it to wso2carbon.jks

We need to import the signFiles public key certificate into wso2carbon.jks, as the security policy file will refer to the signFiles signer certificate from wso2carbon.jks (as specified by its first line).

    $ keytool -export -keystore signkeystore.jks -alias signFiles -file sign-cert.cer
    sanjeewa@sanjeewa-ThinkPad-T530:~/work/wso2am-1.8.0-1$ keytool -import -alias signFiles -file sign-cert.cer -keystore repository/resources/security/wso2carbon.jks
    Enter keystore password: 
    Owner: CN=Sanjeewa, OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK
    Issuer: CN=Sanjeewa, OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK
    Serial number: 5486f3b0
    Valid from: Tue Dec 09 18:35:52 IST 2014 until: Fri Dec 06 18:35:52 IST 2024
    Certificate fingerprints:
    MD5:  54:13:FD:06:6F:C9:A6:BC:EE:DF:73:A9:88:CC:02:EC
    SHA1: AE:37:2A:9E:66:86:12:68:28:88:12:A0:85:50:B1:D1:21:BD:49:52
    Signature algorithm name: SHA1withRSA
    Version: 3
    Trust this certificate? [no]:  yes
    Certificate was added to keystore

Then add the following sec.policy file:
    keystore "file:${user.dir}/repository/resources/security/wso2carbon.jks", "JKS";

    // ========= Carbon Server Permissions ===================================
    grant {
       // Allow socket connections for any host
       permission "*:1-65535", "connect,resolve";
       // Allow to read all properties. Use in to restrict properties
       permission java.util.PropertyPermission "*", "read";
       permission java.lang.RuntimePermission "getClassLoader";
       // CarbonContext APIs require this permission
       permission "control";
       // Required by any component reading XMLs. For example: org.wso2.carbon.databridge.agent.thrift:4.2.1.
       permission java.lang.RuntimePermission "";
       // Required by org.wso2.carbon.ndatasource.core:4.2.0. This is only necessary after adding above permission.
       permission java.lang.RuntimePermission "";
     permission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/localhost/publisher/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/localhost/publisher/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/store/localhost/store/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/store/localhost/store/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/store/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/store/site/conf/locales/jaggery/locale_default.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/store/site/conf/site.json", "read,write";
permission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/site/conf/locales/jaggery/locale_en.json", "read,write";
      permission "${carbon.home}/repository/deployment/server/jaggeryapps/publisher/site/conf/locales/jaggery/locale_default.json", "read,write";
       permission "findMBeanServer,createMBeanServer";
      permission "-#-[-]", "queryNames";
      permission "*[java.lang:type=Memory]", "queryNames";
      permission "*[java.lang:type=Memory]", "getMBeanInfo";
      permission "*[java.lang:type=Memory]", "getAttribute";
      permission "*[java.lang:type=MemoryPool,name=*]", "queryNames";
      permission "*[java.lang:type=MemoryPool,name=*]", "getMBeanInfo";
      permission "*[java.lang:type=MemoryPool,name=*]", "getAttribute";
      permission "*[java.lang:type=GarbageCollector,name=*]", "queryNames";
      permission "*[java.lang:type=GarbageCollector,name=*]", "getMBeanInfo";
      permission "*[java.lang:type=GarbageCollector,name=*]", "getAttribute";
      permission "*[java.lang:type=ClassLoading]", "queryNames";
      permission "*[java.lang:type=ClassLoading]", "getMBeanInfo";
      permission "*[java.lang:type=ClassLoading]", "getAttribute";
      permission "*[java.lang:type=Runtime]", "queryNames";
      permission "*[java.lang:type=Runtime]", "getMBeanInfo";
      permission "*[java.lang:type=Runtime]", "getAttribute";
      permission "*[java.lang:type=Threading]", "queryNames";
      permission "*[java.lang:type=Threading]", "getMBeanInfo";
      permission "*[java.lang:type=Threading]", "getAttribute";
      permission "*[java.lang:type=OperatingSystem]", "queryNames";
      permission "*[java.lang:type=OperatingSystem]", "getMBeanInfo";
      permission "*[java.lang:type=OperatingSystem]", "getAttribute";
      permission "org.wso2.carbon.caching.impl.CacheMXBeanImpl#-[org.wso2.carbon:type=Cache,*]", "registerMBean";
      permission "org.apache.axis2.transport.base.TransportView#-[org.apache.synapse:Type=Transport,*]", "registerMBean";
      permission "org.apache.axis2.transport.base.TransportView#-[org.apache.axis2:Type=Transport,*]", "registerMBean";
      permission "org.apache.axis2.transport.base.TransportView#-[org.apache.synapse:Type=Transport,*]", "registerMBean";
      permission java.lang.RuntimePermission "modifyThreadGroup";
      permission "${carbon.home}/repository/database", "read";
      permission "${carbon.home}/repository/database/-", "read";
      permission "${carbon.home}/repository/database/-", "write";
      permission "${carbon.home}/repository/database/-", "delete";
    };

    // Trust all super tenant deployed artifacts
    grant codeBase "file:${carbon.home}/repository/deployment/server/-" {
    };

    grant codeBase "file:${carbon.home}/lib/tomcat/work/Catalina/localhost/-" {
     permission "/META-INF", "read";
     permission "/META-INF/-", "read";
     permission "-", "read";
     permission org.osgi.framework.AdminPermission "*", "resolve,resource";
     permission java.lang.RuntimePermission "*", "";
    };

    // ========= Platform signed code permissions ===========================
    grant signedBy "signFiles" {
    };

    // ========= Granting permissions to webapps ============================
    grant codeBase "file:${carbon.home}/repository/deployment/server/webapps/-" {
       // Required by webapps. For example JSF apps.
       permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
       // Required by webapps. For example JSF apps require this to initialize com.sun.faces.config.ConfigureListener
       permission java.lang.RuntimePermission "setContextClassLoader";
       // Required by webapps to make HttpsURLConnection etc.
       permission java.lang.RuntimePermission "modifyThreadGroup";
       // Required by webapps. For example JSF apps need to invoke annotated methods like @PreDestroy
       permission java.lang.RuntimePermission "accessDeclaredMembers";
       // Required by webapps. For example JSF apps
       permission java.lang.RuntimePermission "";
       // Required by webapps. For example JSF EL
       permission java.lang.RuntimePermission "getClassLoader";
       // Required by CXF app. Needed when invoking services
       permission javax.xml.bind.JAXBPermission "setDatatypeConverter";
       // File reads required by JSF (Sun Mojarra & MyFaces require these)
       // MyFaces has a fix  
       permission "/META-INF", "read";
       permission "/META-INF/-", "read";
       // OSGi permissions are requied to resolve bundles. Required by JSF
       permission org.osgi.framework.AdminPermission "*", "resolve,resource";
    };


Finally, start the server with the Security Manager enabled.

Chintana WilamunaWhat is microservices architecture?

Microservice architecture to me is a term that doesn't convey anything new. Martin Fowler's article about microservices compares it with a monolithic application. Breaking a monolithic application into a set of services comes with several benefits. That's true regardless of the term being used, microservices or SOA. Quoting from the article,

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies

The article also goes on to explain a set of characteristics of microservices based architecture. These characteristics are general to any architecture that's based on services, regardless of the implementation technology used. Associating ESBs with SOA and labeling it heavyweight is not right. It's a flimsy argument against the widely available, high quality open source tools we have today. We can certainly do SOA without an ESB. A lot of the bad rap for SOA stems from trying to use expensive and bloated tools to do the wrong job. With the open source middleware tools available these days we can implement distributed services with ease. A lot of these tools don't require developers to lock down on a particular messaging format. It's about understanding the customers' problems and using the right technology and toolset to solve them with minimum effort and time. So microservices doesn't give you anything new. You need to understand the problems you have and decide what's the best possible path to take. After that, a proof of concept can be done in weeks. Once you can prove your architecture, progressing towards broader goals becomes easy.

Challenges and possible solutions

I really like this article on about microservices. This again is not specific to “microservices”. Those challenges are there in every software system that’s based on services. Let’s see whether we can address some of the challenges.

Significant Operations Overhead

Any moderately large distributed deployment with 10+ servers (with HA) will have significant operational overhead. There are multiple ways to reduce this complexity, which increases when there are multiple environments for development, test and production.

  1. Deployment automation - Tools like Puppet give a lot of power to automate the deployment of environments as well as the artifacts that should be deployed. All the components of the deployment, such as load balancers, message brokers and other services, can be successfully automated through Puppet
  2. Monitoring running instances - Although there are open source tools like Nagios/Ganglia/ELK, they're still a bit behind tools like Amazon CloudWatch IMO.
  3. Real-time alerting - Again, there are multiple open source products that you can configure to get alerts based on disk/CPU/memory utilization
  4. Rolling into production - Open source tools like WSO2 AppFactory play a significant role here, giving developers the ability to roll out to different environments easily

We can utilize some of the above tools to reduce operational overhead.

Substantial DevOps Skills Required

Standing up the environment in a private data center or on top of an IaaS vendor does require some DevOps skill. You can get around this by using cloud hosted platforms such as WSO2 Public Cloud. Then again, there can be organizational policies around data locality.

Implicit Interfaces

This is one of the major challenges in any RESTful architecture. If you're using SOAP then the service interface is available through WSDL. Now WADL is becoming popular for providing a service definition for HTTP based services. Even though standards are emerging, there are still challenging aspects when it comes to securing and versioning these services.

Interface changes of services can be solved with a mediation layer when the need arises. This can be an alternative to introducing contracts for services, where differences in interfaces are masked at the mediation layer.

Duplication Of Effort

Sanjiva’s article about API Management explains the importance of having an API Management strategy for removing duplication of effort as well as for service reuse. In order to reuse, there should be an efficient way to find what's available. Having a catalog of APIs helps to find what can be reused and reduces duplication of effort.

Distributed System Complexity

Although distributed systems add complexity, the flexibility you get from having a set of independently operating services is significant. The granularity of service decomposition should be decided according to the requirements of the application and the performance characteristics expected.

Asynchronicity Is Difficult!

This, IMO, is a more difficult problem to solve than the other points the author has mentioned. Maintaining complex distributed transaction semantics across service boundaries is hard.

Testability Challenges

While individual service testing is important, integration testing of the entire interaction from end to end is more important in a distributed environment. So more emphasis should be given to service integrations that involve several services being called to get a single result.

Chamila WijayarathnaCreating a RPC Communication using Apache Thrift

Apache Thrift is an open source cross language Remote Procedure Call (RPC) framework with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.
In one of my previous blogs, I wrote about my contributions to Apache Thrift during Google Summer of Code 2014. In this blog, I'm going to write about how to create a simple RPC communication using Apache Thrift.
To do this, you need to have Apache Thrift installed on your computer.
Then the first step is writing a .thrift file which defines the service you are implementing. In this file you need to define the methods, structs and exceptions you are going to use. The following thrift code defines the very simple service I'm going to implement here.

    namespace java hello

    service mytest1 {
        i32 add(1:i32 num1, 2:i32 num2)
    }

This defines a service with a single method, add, which takes two integers as input and returns their sum.
The Thrift documentation describes more about defining services using thrift. There are many data types defined in thrift that can be used when defining the service.
Let's save this in a file named mytest1.thrift. Then by running "thrift -r --gen java mytest1.thrift" we can generate the bean files required to create the service. This will create the gen-java/hello/ file, which contains the classes and methods needed for creating the server and client. You can use any language that is supported by thrift here.

Then the next step is creating the server. First we need to create a Handler class which contains the method bodies of the methods we defined in the .thrift definition. So in our case, the handler class will look like the following.

import org.apache.thrift.TException;

public class Handler implements mytest1.Iface {

    public int add(int num1, int num2) throws TException {
        System.out.println("Entered handler");
        return num1 + num2;
    }
}

Then we need to create the server class, which starts the server and keeps listening for incoming requests. It should look like the following.

import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TServer.Args;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TServerTransport;

public class Processor {

  public static Handler handler;
  public static mytest1.Processor<mytest1.Iface> processor;

  public static void main(String[] args) {
    try {
      handler = new Handler();
      processor = new mytest1.Processor<mytest1.Iface>(handler);

      Runnable simple = new Runnable() {
        public void run() {
          simple(processor);
        }
      };
      new Thread(simple).start();
    } catch (Exception x) {
      x.printStackTrace();
    }
  }

  public static void simple(mytest1.Processor<mytest1.Iface> processor) {
    try {
      TServerTransport serverTransport = new TServerSocket(9090);
      TServer server = new TSimpleServer(new Args(serverTransport).processor(processor));

      System.out.println("Starting the simple server...");
      server.serve();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

The hierarchy of the project I created in Eclipse looks like the following.

You will have to add some thrift dependency jar files to this as well. The simple server I created is available at

Then the next task is to create the client. It's easier than creating the server. Following is a client class written in Java.

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class Client {

  public static void main(String[] args) {
    try {
      TTransport transport;
      transport = new TSocket("localhost", 9090);;

      TProtocol protocol = new TBinaryProtocol(transport);
      mytest1.Client client = new mytest1.Client(protocol);

      perform(client);

      transport.close();
    } catch (TException x) {
      x.printStackTrace();
    }
  }

  private static void perform(mytest1.Client client) throws TException {
    int sum = client.add(24, 31);
    System.out.println("24+31=" + sum);
  }
}

After starting the server, if you run this client, you will see that the service on the server runs and returns the desired outcome. From the same .thrift definition, we can generate files in any language and create servers and clients using them.

Chintana WilamunaMoving the blog

I’ve just moved the blog to a new location. I might move the archives a little later. Moved a couple of posts to adjust the look and feel of the blog. Also hoping that this will be a bit more photo friendly.

Lakmali BaminiwattaTroubleshooting Swagger issues in WSO2 API Manager

WSO2 API Manager provides interactive API documentation through the integration of Swagger ( Swagger-based interactive documentation allows you to try out APIs from the documentation itself, and it is available as the "API Console" in the API Store.

There are certain requirements that need to be satisfied in order for the Swagger Try-it functionality to work. The first requirement is to enable CORS in the API Manager Store. This documentation describes how that should be done.

But many users face issues getting the Swagger Try-it feature to work. So this blog post describes common issues users face with Swagger and how to troubleshoot them.


Issue -1

API Console keeps on loading the response forever, as below.

Cause -1

API resource not supporting OPTIONS HTTP verb. 


Solution

Add the OPTIONS HTTP verb for the API resources as below. Then save the API and try again.

Cause -2 

Backend endpoint not supporting OPTIONS HTTP verb. 

Note: You can verify this by directly invoking the backend with the OPTIONS verb. If the backend does not support the OPTIONS verb, "403 HTTP method not allowed" will be returned.


Solution

If you have control over the backend service/API, enable the OPTIONS HTTP verb.

Note: You can verify this by directly invoking the backend with the OPTIONS verb. If the backend supports the OPTIONS verb, a 200/202/204 success response should be returned.
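A quick way to do this check from the command line (the URL below is a placeholder; substitute your actual backend endpoint):

```shell
# -i prints the response status line and headers; -X OPTIONS sends the OPTIONS verb
curl -i -X OPTIONS http://localhost:8080/myservice/resource
```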

If you can't enable the OPTIONS HTTP verb in the backend, then what you can do is modify the synapse sequence of the API so that it returns a response without sending the request to the backend when it is an OPTIONS request.
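As a rough sketch of such a short-circuit (this is not from the original post; the mediator names are from standard Synapse, and you would adapt it to your API's in sequence):

```xml
<!-- If the incoming verb is OPTIONS, respond immediately instead of calling the backend -->
<filter source="get-property('axis2', 'HTTP_METHOD')" regex="OPTIONS">
    <then>
        <property name="NO_ENTITY_BODY" value="true" scope="axis2" type="BOOLEAN"/>
        <respond/>
    </then>
    <else>
        <!-- normal mediation continues to the backend -->
    </else>
</filter>
```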


Issue -2

API Console completes the request execution, but no response is returned.


Cause

The Authentication Type enabled for the OPTIONS HTTP verb is not 'None'.

If this is the cause, the below error message will be shown in the wso2carbon.log.

 Required OAuth credentials not provided
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(
at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(
at org.apache.axis2.engine.AxisEngine.receive(
at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(
at org.apache.axis2.transport.base.threads.NativeWorkerPool$
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
at java.util.concurrent.ThreadPoolExecutor$


Solution

Make the Auth Type 'None' for the OPTIONS HTTP verb as below. Then Save & Publish the API.

Note: If you are seeing Issue -2 even though the Authentication Type is shown as none for OPTIONS, then please re-select the authentication type as 'None' as above. Then Save & Publish the API. There is a UI bug in API Manager 1.7.0 where, although you have set an authentication type other than 'None' for the OPTIONS verb, the UI shows it as none.


Cause

The API Store domain/port from which you are currently trying the Swagger API Console is not included in the CORS Access-Control-Allow-Origin configuration. For example, in the below CORS configuration, only localhost domain addresses are allowed for the API Store, but the API Console is accessed using an IP address.


Solution

Include the domain/port in the CORS Access-Control-Allow-Origin configuration. For the above example, we have to include the IP address as below. Then restart the server and try the API Console again.
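For reference, an illustrative CORS fragment could look like the following (element names may differ slightly between API Manager versions, and the IP address here is only an example origin to add):

```xml
<CORSConfiguration>
    <Enabled>true</Enabled>
    <!-- Comma-separated list of origins allowed to reach the gateway -->
    <Access-Control-Allow-Origin>https://localhost:9443,</Access-Control-Allow-Origin>
</CORSConfiguration>
```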


Issue -3

API Console keeps on loading the response forever, as below, when the API Store is accessed over HTTPS, while HTTP works properly.


Cause

The browser blocks the request because the API Gateway is accessed over HTTP from an HTTPS page.


Solution

Go to the API Publisher and edit the "Swagger API Definition" of the API, changing the basePath to the HTTPS gateway address as below. Then save the "Swagger API Definition" and try again.


If you are still getting the issue even after applying the above, the cause can be that the security certificate issued by the server is not trusted by your browser.


Solution

Access the HTTPS gateway endpoint directly from your browser and accept the security certificate. Then try again.

Amila MaharachchiWhy you should try WSO2 App Cloud - Part 1

I wrote a blog post a couple of months ago on how to get started with WSO2 App Cloud; it can be found at Lets get started with WSO2 App Cloud. My intention in writing this post is to explain why you should try it. I'll be listing the attractive features of WSO2 App Cloud which make it stand out from the other app cloud offerings available.

  1. Onboarding process
  2. Creating your first application
  3. Launching your first application
  4. Editing and re-launching the application

Onboarding process

WSO2 App Cloud allows you to sign up by providing your email address. Then you get an email with a link to click and confirm your account. Once you click it, as the next step, you are asked to provide the name of an organisation which you want to create.

WSO2 App Cloud Sign-Up page

WSO2 App Cloud has an organisation concept which allows a set of developers, QA engineers and DevOps people to work collaboratively. When you create an account in WSO2 App Cloud, it creates an organisation for you. In other words, this is a tenant. Then you can invite other members to this organisation. You can assign members to different applications developed under this organisation entity. I'll explain this structure in a future post (to keep you reading through this :))

Some of the other app clouds available today do not have the organisation concept. They only have the application entity. You sign up and create an application. Then you are the owner of that app. You can invite others to collaborate on your app. But if you create another app, you need to invite them again for collaboration.

Creating your first application

After you sign up and sign in successfully, let's have a look at the first app creation experience. WSO2 App Cloud supports creating multiple application types. They are,
  • Java web applications
  • Jaggery web applications (Jaggery is a Javascript framework introduced by WSO2 itself)
  • JAX-WS services
  • JAX-RS services
  • WSO2 Data Services
  • It also allows you to upload existing Java web apps, JAX-WS, JAX-RS and Jaggery apps
WSO2 App Cloud is planning to support other app types such as PHP and Node.js in the future.

In WSO2 App Cloud, creating an application is a two-click process. After you log in, you click the "Add New Application" button, fill in the information and click the "Create Application" button. This will create the skeleton of the app, a git repository for it, and build space for the app. Then it will build the app and deploy it for you. All you have to do is wait a couple of minutes, go to the app's overview page and click the "Open" button to launch your app. This has the following advantages.
  1. You don't need to know anything about the folder structure of the app type you are creating. WSO2 App Cloud does it for you.
  2. Within a couple of minutes, you have an up and running application. This is the same for new users and for users who are familiar with WSO2 App Cloud.
See this video on WSO2 App Cloud's app creation experience.

Other offerings have different approaches to creating applications. Some of them need users to download SDKs. Users also need knowledge of the structure of the app they want to create. After a user creates and uploads/pushes an app, they have to start up instances to run it. Some offerings provide a default instance, but if you want high availability for your app, you have to start additional instances. WSO2 App Cloud takes care of this by default, so the user does not need to worry about it.

Launching your first application

Let's see how a user can launch the application they have created.

When you create an app in WSO2 App Cloud, it is automatically deployed after it goes through the build process. Within seconds of creation, it presents you with the URL to launch your app. A user with very little knowledge can get an app created and running easily.

Editing and re-launching the application

It's obvious that a user wants to edit the application he/she creates by adding their own code. WSO2 App Cloud dominates the app cloud offerings when it comes to development support. It provides you with a cloud IDE. You can just click the "Edit Code" button and your app code will be opened in the browser. Not only can you edit the code in the browser, you can also build and run it before pushing the code to the App Cloud. Very cool, isn't it?

The second option is to edit the code using WSO2 Developer Studio. To do this, you need a working installation of WSO2 Developer Studio.

The third option is to clone the source code and edit it using your favourite IDE.
WSO2 App Cloud IDE
See this video on WSO2 App Cloud's cloud IDE.

A special thing to note is that there is no other app cloud offering which provides a cloud IDE for developers.

I'll be writing the second part of this post to discuss some more attractive features of WSO2 App Cloud. I am expecting to cover
  • Using resources within the app (databases etc.)
  • Collaborative development
  • Lifecycle management of the app
  • Monitoring logs
Your feedback is welcome. Stay tuned :) 

Chandana NapagodaConfigurable Governance Artifacts - CRUD Operations

Please refer to my previous post, which explains Configurable Governance Artifacts in WSO2 Governance Registry.

Once you have added an RXT, it will generate an HTML-based List and Add view. It will also be deployed as an Axis2-based service with CRUD operations. For example, when you upload contact.rxt, a Contact service will be exposed with CRUD operations. Using the provided CRUD operations, external client applications (PHP, .NET, etc.) can perform registry operations.

Below are the CRUD operations provided for the Contact RXT which we created in my previous post (RXT Source).
  • addContact - create an artifact of type Contact.
  • getContact - retrieve an artifact of type Contact.
  • updateContact - update an artifact of type Contact.
  • deleteContact - delete an artifact of type Contact.
  • getContactArtifactIDs - get the artifact IDs of all artifacts of type Contact.
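
For illustration only, a SOAP request to the generated addContact operation might look like the sketch below. Both the namespace and the payload wrapper here are hypothetical; the real contract is generated from the RXT, so take the exact element names from the service's WSDL.

```xml
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://example.org/contact-service"> <!-- hypothetical namespace; use the one from the WSDL -->
   <soapenv:Body>
      <ser:addContact>
         <!-- artifact content as defined by contact.rxt -->
         <ser:info><![CDATA[<metadata xmlns="http://www.wso2.org/governance/metadata">...</metadata>]]></ser:info>
      </ser:addContact>
   </soapenv:Body>
</soapenv:Envelope>
```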

To retrieve the WSDL of the above service, we need to disable "HideAdminServiceWSDLs" in the "carbon.xml" file. After that, you need to restart the WSO2 Governance Registry server. Then the contract (WSDL) will be exposed like this:
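
The flag is a single element in carbon.xml; setting it to false stops the server from hiding admin service WSDLs, including the WSDL of the generated Contact service:

```xml
<!-- <CARBON_HOME>/repository/conf/carbon.xml -->
<HideAdminServiceWSDLs>false</HideAdminServiceWSDLs>
```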


Please refer to the Service Client Example for more details.

Chandana NapagodaWSO2 API Manager - Changing the default token expiration time

In WSO2 API Manager, the expiration period of all access tokens is set to 60 minutes (3600 seconds) by default. However, you can modify the default expiration period in the identity.xml file located in the <APIM_HOME>/repository/conf/ directory.

In the identity.xml file you can see separate configurations to modify the default expiration of user tokens and application access tokens.


If you are planning to modify the validity period of application access tokens, you have to modify the default value of the <ApplicationAccessTokenDefaultValidityPeriod> element in the identity.xml file. Changing the value of <ApplicationAccessTokenDefaultValidityPeriod> will not affect existing applications which have already generated application tokens; when you regenerate the application token, it will pick the token validity time from the UI. Therefore, for applications which have already generated tokens, the token validity period needs to be changed from the UI as well. However, when you create a new application or generate the token for the first time, it will pick the token validity period from the identity.xml file.


If you are planning to modify the validity period of user tokens, you need to update the value of the <UserAccessTokenDefaultValidityPeriod> element in the identity.xml file. The user token validity period gets applied when a user generates or refreshes a token.
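
Putting the two elements together, the relevant fragment of identity.xml looks roughly like this. The 3600-second value is the 60-minute default mentioned above; the enclosing element structure may differ slightly between releases.

```xml
<!-- <APIM_HOME>/repository/conf/identity.xml -->
<OAuth>
    ...
    <!-- Default validity period for application access tokens, in seconds -->
    <ApplicationAccessTokenDefaultValidityPeriod>3600</ApplicationAccessTokenDefaultValidityPeriod>
    <!-- Default validity period for user access tokens, in seconds -->
    <UserAccessTokenDefaultValidityPeriod>3600</UserAccessTokenDefaultValidityPeriod>
    ...
</OAuth>
```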

Chintana WilamunaSelecting a Strategy for New IT

There are many advantages you can gain from moving to new IT that connects business data with customers, partners and the general public. “New IT” here refers to pretty much any process/mechanism you adopt to move away from siloed applications towards a more integrated experience that makes day-to-day business more efficient. From a business and project management standpoint this can mean gaining more visibility into ongoing projects and the business impact they make. From an engineering standpoint this can mean knowing the versions of each project in production, projects that will be going to production soon, critical bugs affecting a release schedule and so forth. This blog tries to explore a couple of existing problems and how API management helps to move forward with adopting “new IT” for a connected business.

Existing processes

Everyone has some sort of process/methodology that works for them. No two places or teams are alike. When systems and certain ways of doing things are already in place, it's hard to introduce change. First and foremost, resistance seems to come from developers who are accustomed to doing certain things in certain ways. When introducing something new, it's always helpful to position it as filling a gap in the current process and making incremental improvements.

Possible problems

Let’s see what some of the usual problems are and how API management can solve them.

Standard method for writing services

Having a rigid standard for introducing new services might not be optimal — rigid in the sense of being tied to a particular architectural style, like everything being REST or everything being a SOAP service. There are services and scenarios where a SOAP service makes sense, and there are other situations where an HTTP service accepting a JSON message would be much simpler and more efficient. Having one true standard which forces non-optimal creation of services will lead to unhappy developers/consumers, and the services will end up not being reused at all.

Challenges in service versioning/consistent interfaces

When a new version of a service is to be released, there has to be a way of versioning the service. If you can disrupt all the clients and force them to upgrade, then that's easy (in case of an API change). Otherwise there has to be a way of deprecating an existing service and moving consumers to the new one.

Finding existing services and encouraging service reuse

If there are services available, there should be a way to discover them. Not having a place to discover existing services is going to make service reuse a nightmare. This is not about the holy grail of service reuse most SOA literature talks about. Think about how a developer in another team, or a new hire, discovers services.


Metering and monitoring

When you have a service, you need to find out who's using it, what the usage patterns are, whether certain clients should be throttled to give priority to other clients, and so on. If you don't have any metering, it's near impossible to determine what's going on, or what problems (if any) service consumers face. It also helps to have historic data for doing projections that result in resource expansions accordingly.


Here I would like to present a solution and how it addresses some of the problems listed earlier: make APIs the interfaces for services. There is a difference between a service and an API. The API will be the consistent interface to everything outside (users within the organization/partners/suppliers/customers etc…) — anything or anyone who will consume the service.

Having an API façade is the first step. You can expose a SOAP-based service without any problem. Exposing a REST service comes naturally. Exposing a service is no longer bound by any imposed standard; a service can be written by analysing the best approach for the problem at hand.

Versioning is enabled at the service definition level: the API layer. The following picture shows how to create a new version of an existing API.


The picture above shows the current API version at the top, and you can give a version when you're creating the new one. Once you create the API, at the publishing stage you can choose whether to deprecate the existing API or to have them coexist so that the new version is taken as a separate API, as the following picture shows.


Another cool thing is that you can choose whether to revoke existing security tokens, forcing all clients to re-subscribe.

Next up is having a place for developers to find out about existing services. API Manager has a web application named API Store that lists published APIs. Here's an example of a store developed on top of API Manager. Monitoring is equally simple and powerful.

This is how API Manager helps to make life easy as well as helping to make the right technical choices: allowing developers and other stakeholders to choose the right service type and message formats, encouraging service implementations to be diverse, but still having a consistent interface where monitoring/security/versioning etc… can be applied with ease.

Chintana WilamunaImplementing an SOA Reference Architecture

A reference architecture tries to capture best practices and common knowledge for a particular subject area or business vertical. Although you can define a reference architecture at any level, for the purpose of this blog we'll be talking about an SOA reference architecture. When tasked with implementing an SOA architecture, blindly following a reference architecture might not give optimal results. If business requirements are not considered, it may not be the right fit for the issues at hand.

When an SOA architecture is going to be implemented, close attention should be given to business requirements and any other technological constraints: having to work with a specific ERP system, having to work with a legacy datastore which would otherwise be too expensive to replace with a whole new system, and so on. Based on these facts a solution architecture should be developed that's aligned with the business requirements.

Looking at the solution architecture, a toolset should be evaluated that maximizes ROI and accommodates possible future enhancements that might come in the next 1 - 2 years. Evaluating an existing architecture every 1 - 2 years and making small refinements saves the time and effort of doing big bang replacements and modifications of critical components. While selecting a toolset, having a complete unified platform helps to build the bits you need right away and still have room for additions later. If you're looking for a complete platform, you probably want to consider the WSO2 middleware platform, which provides a complete open source solution.

I’m biased towards open source solutions, and having a platform that makes a connected business possible is mighty cool.

Isuru PereraMonitoring WSO2 products with logstash JMX input plugin

These days, I got the chance to play with ELK (Elasticsearch, Logstash & Kibana). These tools are a great way to analyze & visualize logs.

You can easily analyze all wso2carbon.log files with ELK. However, we also needed to use ELK for monitoring WSO2 products, and this post explains the essential steps to use the logstash JMX input plugin to monitor WSO2 servers.

Installing Logstash JMX input plugin

Logstash has many inputs, and the JMX input plugin is available under "contrib".

We can use the "plugin install contrib" command to install extra plugins.

cd /opt/logstash/bin
sudo ./plugin install contrib

Note: If you use logstash 1.4.0 and encounter issues loading jmx4r, please refer to the Troubleshooting section below.

Logstash JMX input configuration

When using the JMX input plugin, we can use a configuration similar to the following. We are keeping the logstash configs in "/etc/logstash/conf.d/logstash.conf".

input {
  jmx {
    path => "/etc/logstash/jmx"
    polling_frequency => 30
    type => "jmx"
    nb_thread => 4
  }
}

output {
  elasticsearch { host => localhost }
}

Note that the path points to a directory. We keep the JMX configuration in "/etc/logstash/jmx/jmx.conf".

{
  //The WSO2 server hostname
  "host" : "localhost",
  //jmx listening port
  "port" : 9999,
  //username to connect to jmx
  "username" : "jmx_user",
  //password to connect to jmx
  "password": "jmx_user_pw",
  "alias" : "jmx.dssworker1.elasticsearch",
  //List of JMX metrics to retrieve
  "queries" : [ {
    "object_name" : "java.lang:type=Memory",
    "attributes" : [ "HeapMemoryUsage", "NonHeapMemoryUsage" ],
    "object_alias" : "Memory"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=Code Cache",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolCodeCache"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Perm Gen",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolPermGen"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Old Gen",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolOldGen"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Eden Space",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolEdenSpace"
  }, {
    "object_name" : "java.lang:type=MemoryPool,name=*Survivor Space",
    "attributes" : [ "Name", "PeakUsage", "Usage", "Type" ],
    "object_alias" : "MemoryPoolSurvivorSpace"
  }, {
    "object_name" : "java.lang:type=GarbageCollector,name=*MarkSweep",
    "attributes" : [ "Name", "CollectionCount", "CollectionTime" ],
    "object_alias" : "GarbageCollectorMarkSweep"
  }, {
    "object_name" : "java.lang:type=GarbageCollector,name=ParNew",
    "attributes" : [ "Name", "CollectionCount", "CollectionTime" ],
    "object_alias" : "GarbageCollectorParNew"
  }, {
    "object_name" : "java.lang:type=ClassLoading",
    "attributes" : [ "LoadedClassCount", "TotalLoadedClassCount", "UnloadedClassCount" ],
    "object_alias" : "ClassLoading"
  }, {
    "object_name" : "java.lang:type=Runtime",
    "attributes" : [ "Uptime", "StartTime" ],
    "object_alias" : "Runtime"
  }, {
    "object_name" : "java.lang:type=Threading",
    "attributes" : [ "ThreadCount", "TotalStartedThreadCount", "DaemonThreadCount", "PeakThreadCount" ],
    "object_alias" : "Threading"
  }, {
    "object_name" : "java.lang:type=OperatingSystem",
    "attributes" : [ "OpenFileDescriptorCount", "FreePhysicalMemorySize", "CommittedVirtualMemorySize", "FreeSwapSpaceSize", "ProcessCpuLoad", "ProcessCpuTime", "SystemCpuLoad", "TotalPhysicalMemorySize", "TotalSwapSpaceSize", "SystemLoadAverage" ],
    "object_alias" : "OperatingSystem"
  } ]
}

This is all we need to configure logstash to get JMX details from WSO2 servers. Note that we have given a directory as the path for the JMX configuration. This means that all the configs inside "/etc/logstash/jmx" will be loaded, so we need to make sure that there are no other files in that directory.

I'm querying only the required attributes for now. It is possible to add as many queries as you need.
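
The object_name values above are standard JMX ObjectName patterns; that is why entries such as name=*Old Gen work across JVMs whose memory pools are named differently (e.g. "PS Old Gen" vs "CMS Old Gen"). A small plain-Java check — purely illustrative, not part of logstash or WSO2:

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class JmxQueryCheck {

    // Returns true when the given object_name is a wildcard pattern that the
    // JMX agent will match against all registered MBeans.
    public static boolean isJmxPattern(String name) {
        try {
            return new ObjectName(name).isPattern();
        } catch (MalformedObjectNameException e) {
            throw new IllegalArgumentException("Invalid JMX object name: " + name, e);
        }
    }

    public static void main(String[] args) {
        // Exact name from the config above: resolves to a single MBean.
        System.out.println(isJmxPattern("java.lang:type=Memory"));
        // Wildcard value pattern: matches any pool whose name ends in "Old Gen".
        System.out.println(isJmxPattern("java.lang:type=MemoryPool,name=*Old Gen"));
    }
}
```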

Securing JMX access of WSO2 servers

WSO2 servers start the JMX service by default, and you should be able to see the JMX Service URL in wso2carbon.log.

For example:
TID: [-1234] [] [DSS] [2014-05-31 01:09:11,103]  INFO {org.wso2.carbon.core.init.JMXServerManager} -  JMX Service URL  : service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi

You can see the JMX configuration in <CARBON_HOME>/repository/conf/etc/jmx.xml and the JMX ports in <CARBON_HOME>/repository/conf/carbon.xml.

<!-- The JMX Ports -->
<JMX>
    <!--The port RMI registry is exposed-->
    <RMIRegistryPort>9999</RMIRegistryPort>
    <!--The port RMI server should be exposed-->
    <RMIServerPort>11111</RMIServerPort>
</JMX>

You may change the ports from this configuration.

It is recommended to create a role with only the "Server Admin" permission and assign it to the "jmx_user". Then the "jmx_user" will have the required privileges to monitor WSO2 servers.

Also, if we enable the Java Security Manager, we need to add the following permissions. Usually the WSO2 servers are configured to use the security policy file at <CARBON_HOME>/repository/conf/sec.policy when the Security Manager is enabled.

grant {
  // JMX monitoring requires following permissions. Check Logstash JMX input configurations
  permission javax.management.MBeanPermission "-#-[-]", "queryNames";
  permission javax.management.MBeanPermission "*[java.lang:type=Memory]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=MemoryPool,name=*]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=GarbageCollector,name=*]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=ClassLoading]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=Runtime]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=Threading]", "queryNames,getMBeanInfo,getAttribute";
  permission javax.management.MBeanPermission "*[java.lang:type=OperatingSystem]", "queryNames,getMBeanInfo,getAttribute";
};

That's it. You should be able to push JMX stats via logstash now.


Troubleshooting

First of all, you can check whether the configurations are correct by running the following command.

logstash --configtest

This should report that the configuration is OK.

However, I encountered the following issue in logstash 1.4.0 when running the logstash command.

LoadError: no such file to load -- jmx4r
require at org/jruby/
require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at file:/opt/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /opt/logstash/lib/logstash/JRUBY-6970.rb:27
require at /opt/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
thread_jmx at /opt/logstash/bin/lib//logstash/inputs/jmx.rb:132
run at /opt/logstash/bin/lib//logstash/inputs/jmx.rb:251

To work around this issue, we need to extract the contrib plugins into the logstash installation directory instead of using the plugin installation. I got help from the #logstash IRC channel to figure this out. Thanks terroNZ!

I did the following steps:

wget  --no-check-certificate -O logstash-contrib-1.4.0.tar.gz
tar -xvf logstash-contrib-1.4.0.tar.gz
sudo rsync -rv --ignore-existing logstash-contrib-1.4.0/* /opt/logstash/

Please note that the JMX input plugin works fine in logstash 1.4.1 after installing the contrib plugin, so the above steps are not required there.

The next issue can occur when connecting to the WSO2 server. Check <CARBON_HOME>/repository/logs/audit.log and see whether the user can connect successfully. If not, you should check the user's permissions.

Another possible issue is the failure of JMX queries. You can run logstash with the "--debug" option and check the debug logs.

I noticed the following:
{:timestamp=>"2014-05-30T00:09:29.373000+0000", :message=>"Find all objects name java.lang:type=Memory", :level=>:debug, :file=>"logstash/inputs/jmx.rb", :line=>"165"}
{:timestamp=>"2014-05-30T00:09:29.392000+0000", :message=>"No jmx object found for java.lang:type=Memory", :level=>:warn, :file=>"logstash/inputs/jmx.rb", :line=>"221"}
{:timestamp=>"2014-05-30T00:09:29.393000+0000", :message=>"Find all objects name java.lang:type=Runtime", :level=>:debug, :file=>"logstash/inputs/jmx.rb", :line=>"165"}
{:timestamp=>"2014-05-30T00:09:29.396000+0000", :message=>"No jmx object found for java.lang:type=Runtime", :level=>:warn, :file=>"logstash/inputs/jmx.rb", :line=>"221"}

This issue occurred because we had enabled the Java Security Manager; after adding the permissions mentioned above, the logstash JMX input plugin worked fine.

Next is to create dashboards in Kibana using this data. Hopefully I will be able to write a blog post on that as well.

Prabath SiriwardenaWSO2 Identity Server 5.0.0 - Service Pack 1 is now publicly available

You can now download the WSO2 Identity Server 5.0.0 - Service Pack 1 from

This Service Pack (SP) contains all the fixes issued for WSO2 Identity Server 5.0.0 up to now.

Installing the service pack into a fresh WSO2 Identity Server 5.0.0 release, prior to the first start up. 

1. Download WSO2 Identity Server 5.0.0 ( and extract it to a desired location.

2. Download, extract it and copy the WSO2-IS-5.0.0-SP01 directory to the same location, where you have wso2is-5.0.0.

3. The directory structure, from the parent directory, will look like the following.


4. In the command line, from the 'WSO2-IS-5.0.0-SP01' directory run the following command.

   On Microsoft Windows: \> install_sp.bat
   On Linux/Unix: $ sh

5. Start the Identity Server.

   Linux/Unix : $ sh -Dsetup
   Windows : \> wso2server.bat -Dsetup

 6. Open the file wso2is-5.0.0/repository/logs/patches.log and look for the following line. If you find it, the service pack has been applied successfully.

INFO {org.wso2.carbon.server.util.PatchUtils} - Applying - patch1016

Installing the service pack into a WSO2 Identity Server 5.0.0 release already in production

If you have an Identity Server instance already running in your production environment and want to update it with the service pack, please refer to the README file which comes with the service pack itself.

Lakmali BaminiwattaHow to invoke APIs in SOAP style in Swagger

WSO2 API Manager has integrated Swagger to allow API consumers to explore APIs through an interactive console known as the 'API Console'.

This Swagger-based API Console supports invoking APIs in REST style out of the box. This post shows how we can invoke APIs in SOAP style in the API Console of WSO2 API Manager 1.7.0. For that, we need a few extra configurations:

1. Send SOAPAction and Content-Type header in the request
2. Enable sending SOAPAction header in the CORS configuration

First, create an API for a SOAP service. In this example I am using the HelloService sample SOAP service of WSO2 Application Server. This HelloService has an operation named greet which accepts a payload as below.
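
A greet request payload is along these lines; the element and namespace shown here are assumptions based on the stock HelloService sample, so verify them against the service's WSDL:

```xml
<p:greet xmlns:p="http://www.wso2.org/types">
   <name>John</name>
</p:greet>
```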


1. Create API

Figure-1 : Design API 

Figure-2 : Implement API by giving SOAP endpoint

Figure-3 :Save and Publish API

2. Update Swagger API Definition

Now we have to edit the default Swagger content and add the SOAPAction and Content-Type headers. For that, go to the 'Docs' tab and click 'Edit Content' for the API definition.

Figure-4: Edit Swagger API definition

Since we have to send a payload in the request, locate the POST HTTP method in the content. Then add the content below into the 'parameters' section of the POST HTTP method.

                            "name": "SOAPAction",
                            "description": "OAuth2 Authorization Header",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                            "name": "Content-Type",
                            "description": "OAuth2 Authorization Header",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"

Then the complete POST HTTP method definition would look like the one below.

Figure-5: Edited Swagger API definition

After making the above changes, Save & Close.

Now if you go to the API Store, click on the created API, and then go to the API Console, you should see that the SOAPAction and Content-Type fields have been added to the Swagger UI.

Figure-6: API Console with new header parameters

3. Add New Headers to CORS Configuration

Although we have added the required headers, they will be sent in the request only if they are set as allowed headers in the CORS configuration.

For that, open the APIM_HOME/repository/conf/api-manager.xml file and locate CORSConfiguration. Then add SOAPAction into the available list of Access-Control-Allow-Headers as below (Content-Type is added by default, so we only have to add SOAPAction).
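
The edited block would look roughly like the following sketch; the exact default header list varies between releases, and the only change needed here is appending SOAPAction:

```xml
<CORSConfiguration>
    <Enabled>true</Enabled>
    <Access-Control-Allow-Origin>*</Access-Control-Allow-Origin>
    <Access-Control-Allow-Headers>authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction</Access-Control-Allow-Headers>
</CORSConfiguration>
```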

Figure-7: CORS configuration with the SOAPAction header added

After adding the headers, restart API Manager and invoke the API through the API Console.
When invoking the API, set the SOAPAction according to your SOAP service. Also set the Content-Type header to 'text/xml'.

Figure-8: API Console Invoke API

If you face any issues with the Swagger invocation, please go through this.

Lali DevamanthriProductivity in SOA Development team

I have frequently noticed that the SOA development team is always on the critical path in all projects. Is there a problem of capacity, skills, life cycle, design or maturity? If you have your development team on the critical path for everything, that's a PROCESS choice and a POLICY choice. If you don't think that choice is working for you, you need to examine alternatives. SOA, if implemented in a couple of different ways, actually eliminates the need for bottle-necked development. SOA will not give you any answer about people management; get inspiration from Agile methodology (Alistair Cockburn, for instance) in order to drive the output value from a team.

If you are a team lead experiencing the same kind of atmosphere …

#1 SOA strategy

In our post-cold-war world, the only one who decides is still MONEY. Technology, not yet.
– SOA is not an intellectual thing.
– SOA MUST make your enterprise more competitive on the market.
– SOA must maximize the ROI of your enterprise.
– At your level, SOA must help with productivity (after removing some impediments).

Reusing services means factoring team skills and the costs spent on research/education related to a technology within the organization.

#2 Ownership

Are you really not the owner of the team? As an architect, you have to give good advice to the team so that they are more productive. For sure, you're challenged / monitored by your management on those aspects.

If the architect of your house makes unreadable plans for the bricklayers and carpenters, the result will be a disaster, even if the architect of your house followed an SOA initiative.

The IT architect is associated with team / people management.
What is the added value of giving advice if your team is not hearing/following you?

#3 Organization

Your problem is not technical. This is an organizational issue.

Indeed, you have to take on those problems first: proper life cycle, artifact definition.
This is your job, but it has nothing to do with SOA.
When those are clear, make a plan and discuss it with the team.
And then, for those issues related to ownership confusion / team leading, you have to prepare to expose the situation to your management. Be very careful in your communication; choose your words, expose the facts, not problems, otherwise you'll be designated as the source of the problem (!). Do not tell management about SOA or anything technical. Just tell the facts. And then, if (and only if) you are asked, propose different solutions.

Manula Chathurika ThantriwatteHow to work with Apache Stratos 4.1.0 with Kubernetes

Below are simple steps to configure a Kubernetes cluster and use it with Apache Stratos.

Set up the Kubernetes host cluster by cloning and setting up the virtual machines.
    Log in to the Kubernetes master and pull the following Docker image:
    • cd [vagrant-kubernetes-setup-folder]
    • vagrant ssh master
    • sudo systemctl restart controller-manager
    • docker pull stratos/php:4.1.0-alpha

    Verify the Kubernetes cluster status; once the following commands are run, there should be at least one minion listed:
    • cd [vagrant-kubernetes-setup-folder]
    • vagrant ssh master
    • kubecfg list minions

    Start the Stratos instance and tail the log:
    • cd [stratos-home-folder]
    • sh bin/ start
    • tail -f repository/logs/wso2carbon.log

    Set the Message Broker and Complex Event Processor IP addresses to the Stratos host IP address in the Kubernetes cluster:
    • cd [stratos-samples-folder]
    • vim single-cartridge/kubernetes/artifacts/kubernetes-cluster.json


    Once the server is started, run one of the Kubernetes samples available in the Stratos samples checked out above:
    • cd [stratos-samples-folder]
    • cd single-cartridge/kubernetes
    • ./

    Monitor the Stratos log and wait until the application active log is printed:
    • INFO {org.apache.stratos.autoscaler.applications.topic.ApplicationsEventPublisher} -  Publishing Application Active event for [application]: single-cartridge-app [instance]:single-cartridge-app-1

    Ajith VitharanaAdding a custom password policy enforcer to WSO2 Identity Server.

    Let's say a user password should meet the following requirements:

    * password should have at least one lower case
    * password should have at least one upper case
    * password should have at least one digit
    * password should have at least one special character (!@#$%&*).
    * password should have 6-8 characters.

     You can write a new custom password enforcer by extending the AbstractPasswordPolicyEnforcer class.
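
Before wiring an enforcer into the Identity Server, it helps to see the core check it has to perform. The sketch below implements just the five rules above in plain Java; the class and method names are illustrative, not part of the WSO2 API (a real enforcer would run a check like this inside its enforce method and report an error message on failure):

```java
import java.util.regex.Pattern;

public class PasswordPolicyCheck {

    // One pattern per rule from the post: lower case, upper case,
    // digit, and special character (!@#$%&*). Length is checked separately.
    private static final Pattern LOWER   = Pattern.compile(".*[a-z].*");
    private static final Pattern UPPER   = Pattern.compile(".*[A-Z].*");
    private static final Pattern DIGIT   = Pattern.compile(".*[0-9].*");
    private static final Pattern SPECIAL = Pattern.compile(".*[!@#$%&*].*");

    public static boolean isValid(String password) {
        return password != null
                && password.length() >= 6 && password.length() <= 8
                && LOWER.matcher(password).matches()
                && UPPER.matcher(password).matches()
                && DIGIT.matcher(password).matches()
                && SPECIAL.matcher(password).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("1Acws@d"));      // meets the policy
        System.out.println(isValid("1Acws@dgggg"));  // fails: length is 11
    }
}
```

These two sample passwords are the same ones used in the test step at the end of this post.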

    1. You can download the Java project from the following git repository location [i]


    2. Build the project (Follow the README.txt).

    3. Copy the jar file into the <IS5.0.0_HOME>/repository/components/lib directory.

    4. Open the file (<IS5.0.0_HOME>/repository/conf/security/

    5. Enable the  identity listener.


    6. Disable the default Password.policy.extensions configurations.


    7. Add new configuration for custom policy enforcer.


    8. Restart the server.

    9. Test.

    i) user : ajith  password : 1Acws@d  (this password meets the above policy).

    ii) user : ajith1 password : 1Acws@dgggg (this password doesn't meet the above policy because its length is 11).

    Ajith VitharanaSOAP web service as REST API using WSO2 API Manager -1


    Download the latest versions of WSO2 ESB, API Manager, Application Server and WSO2 Developer Studio from the web site.

    1. Deploy sample web service.

     1.1 Download this sample web service archive [i] (SimpleStockQuoteService.aar) and deploy it on WSO2 Application Server.

    [i] :

    1.2 Open the carbon.xml file and change the port offset value to 1, then start the server. (The carbon.xml file is under the wso2as-5.2.1/repository/conf directory.)

    1.3 Log in to the administrator console (admin/admin) and deploy the SimpleStockQuoteService.aar file using the aar file deploying wizard. (Please check the below image.)

    https://[host or IP]:9444/carbon/

     1.4 After a few seconds, refresh the service "List" page; now you should see the "SimpleStockQuoteService" service in the services list. (Please see the below image.)

    1.5 Click on the "SimpleStockQuoteService" name; now you should see the WSDL locations (1) and endpoints (2) of that service along with some other features.

    1.6 Create a SOAP UI project and invoke some operation. (As an example, I'm going to invoke the getQuote operation.)

    1.7 Now I'm going to expose this operation (getQuote) using WSO2 API Manager.

    POST : http://<Host or IP>:<port>/stock/1.0.0/getquote/

    request payload

    "getQuote": {
    "request": { "symbol": "IBM" }

    1.8 Expose the SimpleStockQuoteService service as a proxy service using WSO2 ESB, because when the above operation is called in a RESTful manner, we need to create the SOAP payload expected by the back end web service. That conversion can be easily achieved using WSO2 ESB.

    2.0 Create ESB configuration project

    WSO2 Developer Studio (DevS) provides a rich graphical editor to create a message mediation flow without writing the XML configuration by hand.

    2.1 File --> New project --> Other, then select "ESB Config Project" and create a new ESB config project called 'ESBConfigProject'.

    [You can find my project in the following git repository location [i]. (Download and import it to DevS.)]


    3.0 Message mediation flow.

    (1) The API will forward the JSON request to the "StockQuoteProxy".

    JSON request:

    "getQuote": {
    "request": { "symbol": "IBM" }
    Complete proxy configuration:
    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns="" name="StockQuoteProxy" transports="http https" startOnLoad="true" trace="disable">
    <property name="messageType" value="text/xml" scope="axis2" type="STRING" description=""/>
    <source clone="true" xpath="$body/jsonObject/getQuote"/>
    <target type="body"/>
    <xslt key="in_transform"/>
    <address uri="http://localhost:9764/services/SimpleStockQuoteService/" format="soap12"/>
    <property name="messageType" value="application/json" scope="axis2" type="STRING"/>

    (2) Once you define the property mediator, the message will be converted to XML.

    <soapenv:Envelope xmlns:soapenv="">

    (3) The Enrich mediator will remove the extra <jsonObject> tag.

    <soapenv:Envelope xmlns:soapenv="">

    The XSLT configuration (in_transform) has been added as a local entry.
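
As a sketch, the in_transform local entry could be an XSLT along these lines, assuming the payload shape shown in step (1); treat the template as an assumption and adapt it to your actual message:

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Rewrap the plain getQuote payload in the http://services.samples namespace -->
  <xsl:template match="/getQuote">
    <m0:getQuote xmlns:m0="http://services.samples">
      <m0:request>
        <m0:symbol><xsl:value-of select="request/symbol"/></m0:symbol>
      </m0:request>
    </m0:getQuote>
  </xsl:template>
</xsl:stylesheet>
```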

    (4) The XSLT mediator will add the required namespace to the message.

    <soapenv:Envelope xmlns:soapenv="">
    <m0:getQuote xmlns:m0="http://services.samples">

    (5) The Send mediator will send the above message to the address endpoint.

    (6) ESB will get the following response at the out sequence.

    <?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="">
    <ns:getQuoteResponse xmlns:ns="http://services.samples">
    <ns:return xmlns:ax23="http://services.samples/xsd" xmlns:xsi="" xsi:type="ax23:GetQuoteResponse">
    <ax23:lastTradeTimestamp>Sun Jan 11 21:24:45 EST 2015</ax23:lastTradeTimestamp>
    <ax23:name>IBM Company</ax23:name>

    (7) The property mediator will set the content type to "application/json"; the outgoing message from the ESB to the API Manager will then be converted to JSON.

    (8) The API Manager gets the JSON message and sends it back to the original client that called the REST endpoint.

    {
      "getQuoteResponse": {
        "return": {
          "@type": "ax23:GetQuoteResponse",
          "change": -2.7998649191202554,
          "earnings": -8.327004353136367,
          "high": 64.81427412887071,
          "last": 62.87761070119457,
          "lastTradeTimestamp": "Sun Jan 11 21:56:23 EST 2015",
          "low": 65.364164349526,
          "marketCap": 56492843.72473255,
          "name": "IBM Company",
          "open": 65.48021093533967,
          "peRatio": -18.06979273115794,
          "percentageChange": 4.597394502400419,
          "prevClose": -60.90112383565025,
          "symbol": "IBM",
          "volume": 15407
        }
      }
    }
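For illustration, the XML-to-JSON direction of steps (6) to (8) can be sketched with Python's standard library. This is not WSO2 code; the payload below is a trimmed copy of the sample response, used only to show the shape of the conversion.

```python
import json
import xml.etree.ElementTree as ET

# Trimmed copy of the getQuoteResponse payload from the out sequence.
soap_response = """
<ns:getQuoteResponse xmlns:ns="http://services.samples">
  <ns:return xmlns:ax23="http://services.samples/xsd">
    <ax23:name>IBM Company</ax23:name>
    <ax23:symbol>IBM</ax23:symbol>
    <ax23:volume>15407</ax23:volume>
  </ns:return>
</ns:getQuoteResponse>
"""

root = ET.fromstring(soap_response)
ret = root.find("{http://services.samples}return")
# Strip the namespace from each child tag and collect text values.
fields = {child.tag.split("}")[1]: child.text for child in ret}
print(json.dumps({"getQuoteResponse": {"return": fields}}, indent=2))
```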

    3.1 Create a "Composite Application Project" using DevS (File --> New --> Composite Application Project). The name of that project is "ESBCARApp".

    3.2 Right click on the ESBCARApp, then "Export Composite Application Project".

    3.3 Select the file location to export the deployable archive file. Then add the "ESBConfigProject" as a dependency in the next window.

    4.0 Deploy the CAR file in ESB.

    Extract (unzip) the WSO2 ESB distribution and change the port offset value to 2 in the carbon.xml file, then start the server.


    4.1 Log in to the WSO2 ESB management console and deploy the Carbon Application Archive(CAR) file.

    4.2 Now you should see the following info logs in the WSO2 ESB startup console (or in the wso2carbon.log file).

    4.3 After a few seconds you should see the deployed "StockQuoteProxy" in the services list.

    4.4 Click on the proxy name, then you should see the proxy endpoint. That is the endpoint you should invoke inside your API.

    5.0 Create and Publish API

    Log in to the publisher web portal to create a new API.

    https://<Host or IP>:9443/publisher

    5.1 Fill the required fields with the following values and click on the "Add New Resource" button.

    Name    :  StockQuoteAPI
    Context :  stock
    Version :  1.0.0

    URL Pattern : getquote
    Check the resource type POST

     5.2 Go to the "Implement" wizard and add the endpoint related details.

    Endpoint Type : select the "Address Endpoint" from drop down.

    Production URL: https://localhost:8245/services/StockQuoteProxy

    5.3 Go to the "Manage" wizard and select the required "Transport" and "Tier Availability". Finally, click on the "Save & Publish" button to publish this API to the Store.

    5.4 Go to the API Store; you should see the deployed "StockQuoteAPI".
    https://<Host or IP>:9443/store/

    5.5 Log in to the Store and go to "My Applications", then add a new application called "StockQuoteApplication".

    5.6 Go to the "My Subscriptions" page, then generate a new application token.

    5.7 Go to the API page and select "StockQuoteAPI", then subscribe to that API with the "StockQuoteApplication".

    5.8 Invoke the API providing the REST endpoint, application token, and the JSON payload (click on the API name to get the endpoint URL).

    5.9 You can use SOAP UI to invoke that API.

    Hiranya JayathilakaCreating Eucalyptus Machine Images from a Running VM

    I often use the Eucalyptus private cloud platform for my research, and very often I need to start Linux VMs in Eucalyptus and install a whole stack of software on them. This involves a lot of repetitive work, so in order to save time I prefer creating machine images (EMIs) from fully configured VMs. This post outlines the steps one should follow to create an EMI from a VM running in Eucalyptus (tested on Ubuntu Lucid and Precise VMs).

    Step 1: SSH into the VM running in Eucalyptus, if you haven't already.

    Step 2: Run euca-bundle-vol command to create an image file (snapshot) from the VM's root file system.
    euca-bundle-vol -p root -d /mnt -s 10240
    Here "-p" is the name you wish to give to the image file. "-s" is the size of the image in megabytes. In the above example, this is set to 10GB, which also happens to be the largest acceptable value for "-s" argument. "-d" is the directory in which the image file should be placed. Make sure this directory has enough free space to accommodate the image size specified in "-s". 
    This command may take several minutes to execute. For a 10GB image, it may take around 3 to 8 minutes. When completed, check the contents of the directory specified in argument "-d". You will see an XML manifest file and a number of image part files in there.

    Step 3: Upload the image file to the Eucalyptus cloud using the euca-upload-bundle command.
    euca-upload-bundle -b my-test-image -m /mnt/root.manifest.xml
    Here "-b" is the name of the bucket (in Walrus key-value store) to which the image file should be uploaded. You don't have to create the bucket beforehand. This command will create the bucket if it doesn't already exist. "-m" should point to the XML manifest file generated in the previous step.
    This command requires certain environment variables to be exported (primarily access keys and certificate paths). The easiest way to do that is to copy your eucarc file and the associated keys into the VM and source the eucarc file into the environment.
    This command also may take several minutes to complete. At the end, it will output a string of the form "bucket-name/manifest-file-name".

    Step 4: Register the newly uploaded image file with Eucalyptus.
    euca-register my-test-image/root.manifest.xml
    The only parameter required here is the "bucket-name/manifest-file-name" string returned from the previous step. I've noticed that in some cases, running this command from the VM in Eucalyptus doesn't work (you will get an error saying 404 not found). In that case you can simply run the command from somewhere else -- somewhere outside the Eucalyptus cloud. If all goes well, the command will return with an EMI ID. At this point you can launch instances of your image using the euca-run-instances command.
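Assuming the euca2ools commands above are on the PATH, the three steps can be chained in a script. The sketch below only assembles the command strings (the prefix, size, directory, and bucket mirror the examples above); it deliberately stops short of executing anything.

```python
# Illustrative helper that assembles the three EMI-creation commands.
# Nothing here is a Eucalyptus API; it is plain command-string assembly.
def emi_commands(prefix, size_mb, workdir, bucket):
    bundle = "euca-bundle-vol -p %s -d %s -s %d" % (prefix, workdir, size_mb)
    upload = "euca-upload-bundle -b %s -m %s/%s.manifest.xml" % (
        bucket, workdir, prefix)
    register = "euca-register %s/%s.manifest.xml" % (bucket, prefix)
    return [bundle, upload, register]

for cmd in emi_commands("root", 10240, "/mnt", "my-test-image"):
    print(cmd)
```

To actually run the steps, each string could be passed to subprocess.run, keeping in mind that euca-register may need to run from outside the cloud as noted above.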

    Dinusha SenanayakaHow to enable login to WSO2 API Manager Store using Facebook credentials

    The WSO2 Identity Server 5.0.0 release provides several default federated authenticators such as Google, Facebook and Yahoo. It is also possible to write a custom authenticator, in addition to the default authenticators provided.

    In this post we are going to demonstrate how we can configure WSO2 API Manager with WSO2 Identity Server, so that users who come to the API Store can use their Facebook accounts to log in to the API Store.

    Step 1 : Configure SSO between API Store and API Publisher

    First you need to configure SSO between publisher and store as mentioned in this document.

    Step 2 : You need to have an App ID and App Secret key pair generated for an application registered on the Facebook developers site. This can be done by logging in to the Facebook developer site and creating a new app.

    Step 3 : Log in to the Identity Server and register an IdP with the Facebook authenticator.

    This can be done by navigating to Main -> Identity Providers -> Add. This will prompt the following window. In the "Federated Authenticators" section expand the "Facebook Configuration" and provide the details.

    The App ID and App Secret generated in step two map to the Client Id and Client Secret values asked for in the form.

    Step 4 : Go to the two service providers created in step-1 and associate the above created IdP to it.

    This configuration is available under "Local & Outbound Authentication Configuration" section of the SP.

    Step 5 : If you try to access store url (i.e: https://localhost:9443/store) , it should redirect to the facebook login page.

    Step 6: In order for Store users to be able to use their Facebook accounts to log in, they need to follow this step and associate their Facebook account with their user account in the API Store.

    Identity Server provides a dashboard which gives users multiple features for maintaining their user accounts. Associating a social login with their account is one option provided in this dashboard.

    This dashboard can be accessed at the following URL.

    eg: https://localhost:9444/dashboard

    Note: If you are running Identity Server with a port offset, you need to make the changes mentioned here in order to get this dashboard working.

    Log in to the dashboard with the API Store user account. It will give you a dashboard like the following.

    Click on the "View details" button provided in "Social Login" gadget. In the prompt window, there is a option to "Associate Social Login".  Click on this and give your Facebook account id as follows.

    Once the account is registered, it will be listed as follows.

    That's all we have to configure. This user should now be able to log in to the API Store using his Facebook account.

    Note: This post explained how users who already have a user account in the API Store can associate their Facebook account to authenticate to the API Store. If someone needs to enable API Store login for all Facebook accounts without an existing user account in the API Store, that should be done through a custom authenticator added to the Identity Server, i.e. provision the user using the JIT (Just In Time) provisioning functionality provided in the IdP and, using the custom authenticator, associate the "subscriber" role with the provisioned user.

    Ajith VitharanaAdding namespace and prefix to only root tag using XSLT

    1. Let's assume that we have the following XML content.


    2. Add the namespace and prefix only for the root tag.

    <ns0:ByExchangeRateQuery xmlns:ns0="">

    3. Define the XSLT file.

    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
        <xsl:output indent="yes"/>
        <xsl:strip-space elements="*"/>
        <!--match all the nodes and attributes-->
        <xsl:template match="node()|@*">
            <xsl:copy>
                <xsl:apply-templates select="node()|@*"/>
            </xsl:copy>
        </xsl:template>
        <!--Select the element that the namespace and prefix need to be applied to-->
        <xsl:template match="ByExchangeRateQuery">
            <!--Define the namespace with prefix ns0-->
            <xsl:element name="ns0:{name()}" namespace="">
                <!--apply to the children of the selected node-->
                <xsl:apply-templates select="node()|@*"/>
            </xsl:element>
        </xsl:template>
    </xsl:stylesheet>

    Kalpa WelivitigodaWSO2 Carbon Kernel 4.3.0 Released

    I am a bit late to announce, but here it is...

    Hi Folks,

    The WSO2 Carbon team is pleased to announce the release of Carbon Kernel 4.3.0.

    What is WSO2 Carbon

    WSO2 Carbon redefines middleware by providing an integrated and componentized middleware platform that adapts to the specific needs of any enterprise IT project - on premise or in the cloud. 100% open source and standards-based, WSO2 Carbon enables developers to rapidly orchestrate business processes, compose applications and develop services using WSO2 Developer Studio and a broad range of business and technical services that integrate with legacy, packaged and SaaS applications.

    WSO2 Carbon kernel, the lean, modular, OSGi-based platform, is the base of the WSO2 Carbon platform. It is a composable server architecture which inherits modularity and dynamism from the OSGi framework. WSO2 Carbon kernel can be considered a framework for server development. All the WSO2 products are composed as a collection of reusable components running on this kernel. These products/components inherit all the core services provided by the Carbon kernel, such as registry/repository, user management, transports, caching, clustering, logging, and deployment-related features.

    You can download the released distribution from the product home page :

    How to Contribute 

    What's New In This Release

    • Simplified logging story with pluggable log provider support.
    • Upgraded versions of Hazelcast, Log4j, BouncyCastle.
    • Improved Composite application support.

    Key Features

    • Composable Server Architecture - Provides a modular, light-weight, OSGi-based server development framework.
    • Carbon Application(CApp) deployment support.
    • Multi-Profile Support for Carbon Platform - This enables a single product to run on multiple modes/profiles.
    • Carbon + Tomcat JNDI Context - Provide ability to access both carbon level and tomcat level JNDI resources to applications using a single JNDI context.
    • Distributed Caching and Clustering functionality - Carbon kernel provides a distributed cache and clustering implementation which is based on Hazelcast- a group communication framework
    • Pluggable Transports Framework - This is based on Axis2 transports module.
    • Registry/Repository API- Provide core registry/repository API for component developers.
    • User Management API  - Provides a basic user management API for component developers.
    • Logging - Carbon kernel supports both Java logging as well as Log4j. Logs from both these sources will be aggregated to a single output
    • Pluggable artifact deployer framework - Kernel can be extended to deploy any kind of artifacts such as Web services, Web apps, Business processes, Proxy services, User stores etc.
    • Deployment Synchronization - Provides synchronization of deployed artifacts across a product cluster.
    • Ghost Deployment - Provides a lazy loading mechanism for deployed artifacts
    • Multi-tenancy support - The roots of the multi-tenancy in Carbon platform lies in the Carbon kernel. This feature includes tenant level isolation as well as lazy loading of tenants.

    Fixed Issues

    Known Issues

    Contact Us

    WSO2 Carbon developers can be contacted via the mailing lists:

    Reporting Issues
    You can use the Carbon JIRA issue tracker to report issues, enhancements and feature requests for WSO2 Carbon.

    Thank you for your interest in WSO2 Carbon Kernel.

    --The WSO2 Carbon Team--

    Amila MaharachchiLets get started with WSO2 API Cloud

    A few weeks ago, I wrote a blog post on "Getting started with WSO2 App Cloud". The intention of writing it was to line up some of the screencasts we have done and published to help you use the WSO2 App Cloud.

    A couple of weeks after we started publishing App Cloud videos, we started publishing API Cloud videos too. The intention of this blog post is to introduce those screencasts.

    WSO2 API Cloud provides you the API management service in the cloud. You can create and publish APIs very easily using it, and it is powered by the well-established WSO2 API Manager. If you already have an account in WSO2 Cloud, you can log in to WSO2 Cloud and navigate to the API Cloud.

    The first thing you would do when you go to the API Cloud is create and publish an API. The following screencast will help you with how to do it.

    After publishing your API, it appears in your API Store. If you advertise your API Store to others, they can subscribe to the APIs and invoke them. The next screencast shows how to do that.
    You may have a very promising API created and published, now available in your API Store. But, for it to be successful, you need to spread the word. To do that, WSO2 API Cloud has some useful social and feedback channels. You can allow the users to rate your API, comment on it, share it using social media and also start interactive discussions via the forums. The following tutorial showcases those capabilities.
    For a developer to use your API easily, its documentation is important. Therefore, WSO2 API Cloud allows you to add documentation for your API. There are several options, such as inline documentation, uploading already available docs and pointing to docs available online. This video showcases the documentation features.
    We will keep adding more screencasts/tutorials to our API Cloud playlist. Stay tuned and enjoy experiencing the power of WSO2 API Cloud.

    Chathurika Erandi De SilvaUpdating the Secure Vault after changing the default keystore

    In this post, I will be explaining a simple technique that WSO2 Carbon provides as a feature.

    Before following this post, please take time to read on Enabling Secure Vault for Carbon Server  

    Let's consider the following scenario

    Tom has secure vault applied to his WSO2 Carbon Server. Then he decided to change the default keystore. Now he needs to update the Secure Vault with the new keystore.

    How can he achieve this?

    Answer: WSO2 Carbon provides the -Dchange option for the ciphertool to achieve this.

    E.g. after the keystore is changed, if we need to update the secure vault, we need to run the ciphertool with the -Dchange option:

    sh ciphertool.sh -Dchange

    As shown in Figure 1 below, when the above command is run, it provides us the facility to re-encrypt the passwords that the secure vault had encrypted before.

    Figure 1: Running Ciphertool with -Dchange option

    Saliya EkanayakeRunning C# MPI.NET Applications with Mono and OpenMPI

    I wrote an earlier post on the same subject, but just realized it's not detailed enough even for me to retry; hence the reason for this post.
    I've tested this in FutureGrid with Infiniband to run our C#-based pairwise clustering program on real data up to 32 nodes (I didn't find any restriction to go above this many nodes - it was just the maximum I could reserve at that time).
    What you'll need
    • Mono 3.4.0
    • MPI.NET source code revision 338.
        svn co -r 338
    • OpenMPI 1.4.3. Note this is a retired version of OpenMPI; we are using it only because it is the best that I could get MPI.NET to compile against. If in the future the MPI.NET team provides support for a newer version of OpenMPI, you may be able to use it as well.
    • Automake 1.9. Newer versions may work, but I encountered some errors in the past, which made me stick with version 1.9.
    How to install
    1. I suggest installing everything to a user directory, which will avoid requiring super user privileges. Let's create a directory called build_mono inside the home directory.
       mkdir ~/build_mono
      The following lines added to your ~/.bashrc will help you follow the rest of the document, e.g.:
       export BUILD_MONO=~/build_mono
       export PATH=$BUILD_MONO/bin:$PATH
       export LD_LIBRARY_PATH=$BUILD_MONO/lib:$LD_LIBRARY_PATH
       export ac_cv_path_ILASM=$BUILD_MONO/bin/ilasm

      Once these lines are added do,
       source ~/.bashrc
    2. Build automake by first going to the directory that contains automake-1.9.tar.gz and doing,
       tar -xzf automake-1.9.tar.gz
      cd automake-1.9
      ./configure --prefix=$BUILD_MONO
      make install
      You can verify the installation by typing which automake, which should point to automake inside $BUILD_MONO/bin
    3. Build OpenMPI. Again, change directory to where you downloaded openmpi-1.4.3.tar.gz and do,
       tar -xzf openmpi-1.4.3.tar.gz
      cd openmpi-1.4.3
      ./configure --prefix=$BUILD_MONO
      make install
      Optionally if Infiniband is available you can point to the verbs.h (usually this is in /usr/include/infiniband/) by specifying the folder /usr in the above configure command as,
       ./configure --prefix=$BUILD_MONO --with-openib=/usr
      If building OpenMPI is successful, you'll see the following output for the mpirun --version command,
       mpirun (Open MPI) 1.4.3

      Report bugs to
      Also, to make sure the Infiniband module is built correctly (if specified) you can do,
       ompi_info|grep openib
      which, should output the following.
       MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.3)
    4. Build Mono. Go to directory containing mono-3.4.0.tar.bz2 and do,
       tar -xjf mono-3.4.0.tar.bz2
      cd mono-3.4.0
      The Mono 3.4.0 release is missing a file, which you'll need to add by pasting the following content into a file called ./mcs/tools/xbuild/targets/Microsoft.Portable.Common.targets
       <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="..\Microsoft.Portable.Core.props" />
      <Import Project="..\Microsoft.Portable.Core.targets" />
      </Project>
      You can continue with the build by following,
       ./configure --prefix=$BUILD_MONO
      make install
      There are several configuration parameters that you can play with, and I suggest going through them in ./configure --help. One parameter, in particular, that I'd like to test with is --with-tls=pthread
    5. Build MPI.NET. If you were wondering why we had that ac_cv_path_ILASM variable in ~/.bashrc, then this is where it'll be used. MPI.NET by default tries to find the Intermediate Language Assembler (ILASM) at /usr/bin/ilasm2, which does not exist because 1. we built Mono into $BUILD_MONO and not /usr, and 2. newer versions of Mono call this ilasm, not ilasm2. Therefore, after digging through the configure file I found that we can specify the path to the ILASM by exporting the above environment variable.
      Alright, back to building MPI.NET. First copy the downloaded patch file to the subversion checkout of MPI.NET. Then change directory there and do,
       patch MPI/ <
      This will say some hunks failed to apply, but that should be fine. It only means that those are already fixed in the checkout. Once patching is completed continue with the following.
      ./configure --prefix=$BUILD_MONO
      make install
      At this point you should be able to find MPI.dll and MPI.dll.config inside MPI directory, which you can use to bind against your C# MPI application.
    How to run
    • Here's a sample MPI program written in C# using MPI.NET.
        using System;
      using MPI;

      namespace MPINETinMono
      {
          class Program
          {
              static void Main(string[] args)
              {
                  using (new MPI.Environment(ref args))
                  {
                      Console.Write("Rank {0} of {1} running on {2}\n",
                                    Communicator.world.Rank,
                                    Communicator.world.Size,
                                    MPI.Environment.ProcessorName);
                  }
              }
          }
      }
    • There are two ways that you can compile this program.
      1. Use Visual Studio referring to MPI.dll built on Windows
      2. Use mcs from Linux referring to MPI.dll built on Linux
        mcs Program.cs -reference:$MPI.NET_DIR/tools/mpi_net/MPI/MPI.dll
        where $MPI.NET_DIR refers to the subversion checkout directory of MPI.NET
        Either way you should be able to get Program.exe in the end.
    • Once you have the executable you can use mono with mpirun to run this in Linux. For example you can do the following within the directory of the executable,
        mpirun -np 4 mono ./Program.exe
      which will produce,
        Rank 0 of 4 running on i81
      Rank 2 of 4 running on i81
      Rank 1 of 4 running on i81
      Rank 3 of 4 running on i81
      where i81 is one of the compute nodes in FutureGrid cluster.
      You may also use other advanced options with mpirun to determine process mapping and binding. Note: the syntax for such control is different in the latest versions of OpenMPI. Therefore, it's a good idea to look at the different options from mpirun --help. For example, you may be interested in specifying the following options,

      mpirun --display-map --mca btl ^tcp --hostfile $hostfile --bind-to-core --bysocket --npernode $ppn --cpus-per-proc $cpp -np $(($nodes*$ppn)) ...
      where --display-map will print how processes are bound to processing units, and --mca btl ^tcp forces TCP off.
    That's all you'll need to run C# based MPI.NET applications in Linux with Mono and OpenMPI. Hope this helps!

    Hiranya JayathilakaDeveloping Web Services with Go

    Golang facilitates implementing powerful web applications and services using a very small amount of code. It can be used to implement both HTML-rendering webapps as well as XML/JSON-rendering web APIs. In this post, I'm going to demonstrate how easy it is to implement a simple JSON-based web service using Go. We are going to implement a simple addition service that takes two integers as the input and returns their sum as the output.

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type addReq struct {
        Arg1, Arg2 int
    }

    type addResp struct {
        Sum int
    }

    func addHandler(w http.ResponseWriter, r *http.Request) {
        decoder := json.NewDecoder(r.Body)
        var req addReq
        if err := decoder.Decode(&req); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        jsonString, err := json.Marshal(addResp{Sum: req.Arg1 + req.Arg2})
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.Header().Set("Content-Type", "application/json")
        w.Write(jsonString)
    }

    func main() {
        http.HandleFunc("/add", addHandler)
        http.ListenAndServe(":8080", nil)
    }
    Let's review the code from top to bottom. First we need to import the JSON and HTTP packages into our code. The JSON package provides the functions for parsing and marshaling JSON messages. The HTTP package enables processing HTTP requests. Then we define two data types (addReq and addResp) to represent the incoming JSON request and the outgoing JSON response. Note how addReq contains two integers (Arg1, Arg2) for the two input values, and addResp contains only one integer (Sum) for holding the total.
    Next we define what is called an HTTP handler function, which implements the logic of our web service. This function simply parses the incoming request and populates an instance of the addReq struct. Then it creates an instance of the addResp struct and serializes it into JSON. The resulting JSON string is then written out using the http.ResponseWriter object.
    Finally, we have a main function that ties everything together and starts executing the web service. This main function simply registers our HTTP handler with the "/add" URL context and starts an HTTP server on port 8080. This means any requests sent to the "/add" URL will be dispatched to the addHandler function for processing.
    That's all there is to it. You may compile and run the program to try it out. Use curl as follows to send a test request.

    curl -v -X POST -d '{"Arg1":5, "Arg2":4}' http://localhost:8080/add
    You will get a JSON response back with the total.
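If you want to exercise the same JSON contract without the Go toolchain, a throwaway stand-in for the service can be written with Python's standard library. Everything below (port 8081, class names) is arbitrary; it simply mimics the request/response shape of the Go handler above.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# A minimal stand-in for the Go /add service, for exercising the wire format.
class AddHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        body = json.dumps({"Sum": req["Arg1"] + req["Arg2"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("localhost", 8081), AddHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

request = Request("http://localhost:8081/add",
                  data=json.dumps({"Arg1": 5, "Arg2": 4}).encode(),
                  headers={"Content-Type": "application/json"})
response = json.loads(urlopen(request).read())
print(response)  # {'Sum': 9}
server.shutdown()
```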

    Chandana NapagodaWSO2 G-Reg Modify Subject of the Email Notification

    WSO2 Governance Registry generates notifications for events triggered by various operations performed on resources and collections stored in the repository. Notifications can be consumed in a variety of formats, including e-mail. The sample "E-mail Notification Customization" shows how we can modify the content of emails; it describes how to edit the email body and restrict email notifications to some email addresses.

    Here I am going to extend that sample to modify Subject of the email.

    private void editSubject(MessageContext msgContext, String newSubject) {
        ((Map) msgContext.getOptions().getProperty(MessageContext.TRANSPORT_HEADERS))
                .put(MailConstants.MAIL_HEADER_SUBJECT, newSubject);
    }

    You will have to import the org.apache.axis2.transport.mail.MailConstants class and other related Java util classes as well.

    When you are building the sample code, please follow the instructions available in the Governance Registry Sample Documentation.

    Umesha GunasingheUse Case scenarios with WSO2 Identity Server 5.0.0 - Part 2

    Hi All,

    Today let's talk about database connectivity with WSO2 Identity Server. As you know, WSO2 Identity Server can be deployed over any LDAP, AD or JDBC user store. In fact, you can write a custom user store manager and connect to any legacy database.

    WSO2 IS has the concept of a primary database and secondary databases. If you are to change the primary database, you will have to change the configuration files and restart the server. But if you are going to add secondary databases, you can do this through the IS management console. This is some background information on the product.

    Now, lets talk about a common use case scenario.

    Say you need to connect the IS server to many databases. Clearly you can do this by connecting all the databases as secondary databases. Then, if a user is trying to get authenticated, the user will be authenticated by checking against all the connected databases.

    Solution 1
    If your user bases are located in different geographical locations, say, for example, you have three offices located in three countries and you need to connect the Identity Server to the three user databases located in these countries, what you can do is connect these databases as secondary databases via VPN connections.

    Solution 2
    Another solution would be to have three Identity Servers, one in each of these countries, and one central Identity Server to which users from the other three servers are provisioned and against which the user will be authenticated.

    Please check the following resource links for implementations of these scenarios :-


    Cheers! Last post for the year 2014... have a wonderful 2015 ahead... see you in the next year ;)

    Malintha AdikariHow to grant access permission for user from external machine to MySQL database

    I faced this problem while doing a deployment. I created a database on the MySQL server which is running on one machine. Then I wanted to grant access permission to this database from another machine. It is a two-step process to achieve this.

    Suppose you want to grant access permission for one machine while your DB server is running on another machine.

    1. Create a user for the remote machine with preferred username and password

    mysql> CREATE USER `abcuser`@`` IDENTIFIED BY 'abcpassword'; 

     Here "abcuser" is the username

               "abcpassword" is the password for that user

    2. Then grant permission for that user to your database 

    GRANT ALL PRIVILEGES ON registry.* TO 'abcuser'@''; 

    Here "registry" is the DB name

    That's it!
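If the same grant has to be repeated for several client machines, the two statements can be generated programmatically. The helper below is purely illustrative string assembly (the host here is a placeholder); the output is what you would paste into the mysql prompt.

```python
# Illustrative helper that builds the two MySQL statements shown above.
# The host, credentials, and database name are placeholders.
def grant_statements(user, password, host, db):
    return [
        "CREATE USER '%s'@'%s' IDENTIFIED BY '%s';" % (user, host, password),
        "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s';" % (db, user, host),
    ]

stmts = grant_statements("abcuser", "abcpassword", "192.168.1.10", "registry")
for s in stmts:
    print(s)
```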


    Sohani Weerasinghe

    Get a JSON response with WSO2 DSS

    WSO2 Data Services Server supports both XML and JSON outputs, and in order to receive the message as JSON you need to change the following configurations.
    You can achieve this by enabling the 'content negotiation' property in the axis2.xml and axis2_client.xml files. You should also send the requests to the server by adding "Accept:application/json" to the request header; as a result, DSS will return the response in JSON format. Please follow the steps below.

    1. In axis2.xml file at <DSS_HOME>/repository/conf/axis2, include the following property 

    <parameter name="httpContentNegotiation">true</parameter>

    2. In axis2_client.xml file at <DSS_HOME>/repository/conf/axis2 enable the following property 

    <parameter name="httpContentNegotiation">true</parameter>

    3. When sending the request please make sure to add the "Accept:Application/json" to the request header 

    Note that if you are using tenants, the above parameter needs to be set in 'tenant-axis2.xml' as well.
    Now you can use the ResourcesSample, a sample available in your DSS distribution, to test the result. Send a request to the server using curl, adding "Accept:application/json" to the request header as shown below.

    curl -X GET -H "Accept:application/json" http://localhost:9763/services/samples/ResourcesSample.HTTPEndpoint/product/S10_1678
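The same request can be made from code instead of curl; for example, with Python's urllib. Only the request object is constructed here (no server needs to be running), which is enough to show where the Accept header goes; the URL is the DSS sample endpoint from the step above.

```python
from urllib.request import Request, urlopen

# Build the GET request with the Accept header that triggers JSON output.
url = ("http://localhost:9763/services/samples/"
       "ResourcesSample.HTTPEndpoint/product/S10_1678")
req = Request(url, headers={"Accept": "application/json"})
print(req.get_header("Accept"))  # application/json

# With a running DSS, the response body would then be JSON:
# print(urlopen(req).read().decode())
```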

    Sohani Weerasinghe

    XML to JSON conversion using a proxy in WSO2 ESB

    This post describes how to convert an XML payload into JSON. Basically, this can be achieved by using the property "messageType" in the synapse configuration.

    The property is as follows.

    <property name="messageType" value="application/json" scope="axis2"/>

    I have created a proxy called TestProxy with the SimpleStockQuote endpoint as follows.

     <?xml version="1.0" encoding="UTF-8"?>
     <proxy xmlns="http://ws.apache.org/ns/synapse"
            name="TestProxy"
            transports="https,http"
            startOnLoad="true"
            trace="disable">
        <target>
           <inSequence>
              <property name="messageType" value="application/json" scope="axis2"/>
              <log level="full"/>
              <send>
                 <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService/"/>
                 </endpoint>
              </send>
           </inSequence>
           <outSequence>
              <send/>
           </outSequence>
        </target>
     </proxy>

    In order to invoke this proxy, you can follow the steps below.

    1. Build the SimpleStockQuoteService at <ESB_HOME>/samples/axis2Server/src/SimpleStockQuoteService by running the "ant" command

    2. Then start the sample backend server (which serves the response) provided with WSO2 ESB by navigating to <ESB_HOME>/samples/axis2Server and entering the command "sh axis2server.sh"

    3. Send the below SOAP request to the proxy service using SOAP UI.

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:m0="http://services.samples" xmlns:xsd="http://services.samples/xsd">  
       <soapenv:Header/>  
       <soapenv:Body>  
          <m0:getQuote xmlns:m0="http://services.samples" id="12345">  
             <m0:request>  
                <m0:symbol>IBM</m0:symbol>  
             </m0:request>  
          </m0:getQuote>  
       </soapenv:Body>  
    </soapenv:Envelope>  

    Response should be as follows:

     HTTP/1.1 200 OK  

     Host: sohani-ThinkPad-T530:8280  

     SOAPAction: "urn:getQuote"  

     Accept-Encoding: gzip,deflate  

     Content-Type: application/json  

     Date: Fri, 29 Dec 2014 09:00:00 GMT  

     Server: WSO2-PassThrough-HTTP  

     Transfer-Encoding: chunked  

     Connection: Keep-Alive  


    Lakmali BaminiwattaCustomizing workflows in WSO2 API Manager

    In WSO2 API Manager, Workflow extensions allow you to attach a custom workflow to various operations in the API Manager for

    • User Signup
    • Application Creation
    • Application Registration
    • Subscription

    By default, the API Manager workflows have Simple Workflow Executor engaged in them. The Simple Workflow Executor carries out an operation without any intervention by a workflow admin. For example, when the user creates an application, the Simple Workflow Executor allows the application to be created without the need for an admin to approve the creation process.

    In order to enforce intervention by a workflow admin, you can engage the WS Workflow Executor. It invokes an external Web service when executing a workflow and the process completes based on the output of the Web service. For example, when the user creates an application, the request goes into an intermediary state where it remains until authorized by a workflow admin.

    You can try out the default workflow extensions provided by WSO2 API Manager to engage business processes with API management operations as described here

    There are two extension points exposed with WSO2 API Manager to customize workflows.

    Customizing the Workflow Executor
    • When you need to change the workflow logic
    • When you need to change the Data Formats

    Customizing the Business Process
    • When you are happy with the Data Formats and need to change only the business flow
    This blog post provides a sample of how to customize workflow executors and change the workflow logic.

    First, let's look at the WorkflowExecutor class, which each WS workflow executor extends.

    /**
     * This is the class that should be extended by each workflow executor implementation.
     */
    public abstract class WorkflowExecutor {
        ...
    }

    /**
     * The Application Registration Web Service Executor.
     */
    public class ApplicationRegistrationWSWorkflowExecutor extends WorkflowExecutor {

        // Logic to execute the workflow
        public void execute(WorkflowDTO workflowDTO) { }

        // Logic to complete the workflow
        public void complete(WorkflowDTO workflowDTO) { }

        // Returns the workflow type - ex: WF_TYPE_AM_USER_SIGNUP
        public String getWorkflowType() { }

        // Used to get workflow details
        public List getWorkflowDetails(String workflowStatus) { }
    }

    As the example scenario, let's consider the Application Registration workflow of WSO2 API Manager.
    After an application is created, you can subscribe to available APIs, but you get the consumer key/secret and access tokens only after registering the application. There are two types of registrations that can be done for an application: production and sandbox. You can change the default application registration workflow in situations such as the following:

    • To issue only sandbox keys when creating production keys is deferred until testing is complete.
    • To restrict untrusted applications from creating production keys. You allow only the creation of sandbox keys.
    • To make API subscribers go through an approval process before creating any type of access token.
    Find step-by-step instructions on how to configure the Application Registration workflow here

    Sending an email to Administrator upon Application Registration

    As an extension of this Application Registration workflow, we are going to customize the workflow executor to send an email to the Administrator once the workflow is triggered.

    1. First, write a new executor extending ApplicationRegistrationWSWorkflowExecutor

    public class AppRegistrationEmailSender extends 
    ApplicationRegistrationWSWorkflowExecutor {

    2. Add private String attributes and public getters and setters for email properties (adminEmail, emailAddress, emailPassword)

    private String adminEmail;
    private String emailAddress;
    private String emailPassword;

    public String getAdminEmail() {
        return adminEmail;
    }

    public void setAdminEmail(String adminEmail) {
        this.adminEmail = adminEmail;
    }

    public String getEmailAddress() {
        return emailAddress;
    }

    public void setEmailAddress(String emailAddress) {
        this.emailAddress = emailAddress;
    }

    public String getEmailPassword() {
        return emailPassword;
    }

    public void setEmailPassword(String emailPassword) {
        this.emailPassword = emailPassword;
    }

    3. Override execute(WorkflowDTO workflowDTO) method and implement email sending logic. Finally invoke super.execute(workflowDTO).

    public void execute(WorkflowDTO workflowDTO) throws WorkflowException {

        ApplicationRegistrationWorkflowDTO appDTO = (ApplicationRegistrationWorkflowDTO) workflowDTO;

        String emailSubject = appDTO.getKeyType() + " Application Registration";

        String emailText = "Application " + appDTO.getApplication().getName() + " is registered for " +
                appDTO.getKeyType() + " key by user " + appDTO.getUserName();

        try {
            EmailSender.sendEmail(emailAddress, emailPassword, adminEmail, emailSubject, emailText);
        } catch (MessagingException e) {
            // An email failure should not block the registration workflow
            e.printStackTrace();
        }

        super.execute(workflowDTO);
    }



    Find the complete source code of the custom workflow executor below.
    package org.wso2.sample.workflow;

    import javax.mail.MessagingException;

    import org.wso2.carbon.apimgt.impl.dto.ApplicationRegistrationWorkflowDTO;
    import org.wso2.carbon.apimgt.impl.dto.WorkflowDTO;
    import org.wso2.carbon.apimgt.impl.workflow.ApplicationRegistrationWSWorkflowExecutor;
    import org.wso2.carbon.apimgt.impl.workflow.WorkflowException;

    public class AppRegistrationEmailSender extends ApplicationRegistrationWSWorkflowExecutor {

        private String adminEmail;
        private String emailAddress;
        private String emailPassword;

        public void execute(WorkflowDTO workflowDTO) throws WorkflowException {

            ApplicationRegistrationWorkflowDTO appDTO = (ApplicationRegistrationWorkflowDTO) workflowDTO;

            String emailSubject = appDTO.getKeyType() + " Application Registration";

            String emailText = "Application " + appDTO.getApplication().getName() + " is registered for " +
                    appDTO.getKeyType() + " key by user " + appDTO.getUserName();

            try {
                EmailSender.sendEmail(emailAddress, emailPassword, adminEmail, emailSubject, emailText);
            } catch (MessagingException e) {
                // An email failure should not block the registration workflow
                e.printStackTrace();
            }

            super.execute(workflowDTO);
        }

        public String getAdminEmail() {
            return adminEmail;
        }

        public void setAdminEmail(String adminEmail) {
            this.adminEmail = adminEmail;
        }

        public String getEmailAddress() {
            return emailAddress;
        }

        public void setEmailAddress(String emailAddress) {
            this.emailAddress = emailAddress;
        }

        public String getEmailPassword() {
            return emailPassword;
        }

        public void setEmailPassword(String emailPassword) {
            this.emailPassword = emailPassword;
        }
    }


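    The EmailSender helper referenced above is not part of API Manager; you would write it yourself. A minimal sketch using the JavaMail API follows. The SMTP host, port and STARTTLS settings are assumptions (a Gmail-style setup), not values from the original post; adjust them for your mail server.

    ```java
    package org.wso2.sample.workflow;

    import java.util.Properties;

    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class EmailSender {

        /**
         * Sends a plain-text email. The SMTP settings below are assumptions
         * for a Gmail-style server; change them for your own mail server.
         */
        public static void sendEmail(String from, String password, String to,
                                     String subject, String text) throws MessagingException {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.gmail.com");   // assumed SMTP host
            props.put("mail.smtp.port", "587");              // assumed STARTTLS port
            props.put("mail.smtp.auth", "true");
            props.put("mail.smtp.starttls.enable", "true");

            Session session = Session.getInstance(props);
            Message message = new MimeMessage(session);
            message.setFrom(new InternetAddress(from));
            message.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
            message.setSubject(subject);
            message.setText(text);

            // Authenticate with the sender account and deliver the message
            Transport.send(message, from, password);
        }
    }
    ```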
    Now modify the existing ProductionApplicationRegistration as below.

    You can do the same modification to SandboxApplicationRegistration workflow as below.
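    These modifications go into the workflow-extensions.xml registry resource, where the executor attribute points at the custom class and each Property maps to a setter on it via bean conventions. A sketch of what it might look like (the email values are placeholders, not real credentials):

    ```xml
    <WorkFlowExtensions>
        <!-- Email values below are placeholders; set your own -->
        <ProductionApplicationRegistration executor="org.wso2.sample.workflow.AppRegistrationEmailSender">
            <Property name="adminEmail">admin@example.com</Property>
            <Property name="emailAddress">sender@example.com</Property>
            <Property name="emailPassword">sender-password</Property>
        </ProductionApplicationRegistration>
        <SandboxApplicationRegistration executor="org.wso2.sample.workflow.AppRegistrationEmailSender">
            <Property name="adminEmail">admin@example.com</Property>
            <Property name="emailAddress">sender@example.com</Property>
            <Property name="emailPassword">sender-password</Property>
        </SandboxApplicationRegistration>
    </WorkFlowExtensions>
    ```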

    With this change, application registration will be triggered through AppRegistrationEmailSender, which sends an email to the adminEmail address and then invokes the default ApplicationRegistrationWSWorkflowExecutor. 

    Nadeeshaan GunasingheMySQL Performance Testing with MySQLSLAP and Apache Bench

    When we create a database, it is always difficult to determine how the database system will perform under heavy data sets. In real-world use, there are situations in which more than one user tries to access your application, web site, etc. concurrently. In such situations the performance degradation can have a great effect on your database. Therefore it is always necessary to put your database under a stress test before you put it to work in production.
    In this article I try to give a brief explanation of how to use some effective tools to test your database system under heavy loads.


    MySQLSLAP

    This tool comes with your MySQL installation and can be found at PATH_TO_MYSQL/mysql/bin. Inside this directory you can find a script called mysqlslap.exe (on Windows). After locating the relevant script, navigate to that location; you can then invoke mysqlslap as mysqlslap [options].

    The most widely used set of options to test a certain query with mysqlslap is as follows. Under the --query option we give the location of the SQL file containing your query. Under --create-schema you give the name of your database. Under --concurrency you give the number of clients that try to run the query concurrently, and under --iterations the number of times the query runs. If you wish to include more than one query in the SQL file, you need a query delimiter to separate one query from the other, specified with --delimiter.
    In the configuration used in this example, the query runs 5 times per concurrent client; with 50 concurrent clients, the same query runs 250 times in total.
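    Such a run can be written as a single command; the following sketch matches those numbers (the database name, credentials and SQL file path are placeholders):

    ```shell
    # 50 concurrent clients, each running the queries in query.sql 5 times
    # (250 runs in total). Credentials, schema name and path are placeholders.
    mysqlslap --user=root --password=secret \
      --create-schema=testdb \
      --query=/path/to/query.sql \
      --delimiter=";" \
      --concurrency=50 \
      --iterations=5
    ```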
    After the benchmarking you can see the results and analyze them depending on your requirements. Besides the options mentioned above, there are others which you can find in the official mysqlslap documentation.

    Apache Bench

    With mysqlslap you specify the set of queries (or an individual query) to be stress tested. Apache Bench, in contrast, makes it easy to test your web application as a whole when a certain page loads: you do not need to list all the queries, which lets you measure the performance of your application as it executes its set of queries under load.
    After installing Apache Bench on your system, navigate to the location where you installed it.
    Now you can simply issue the command ab -n 500 -c 100 <Enter URL here> to test the page at the given URL. The -n option sets the number of requests to perform and -c the number of concurrent requests.
    You can find the other available set of options in Apache Bench Options.

    Sanjiva WeerawaranaNorth Korea, The Interview and Movie Ethics

    It's been quite a while since I blogged .. I'm going to try to write a bit more consistently from now on (try being the key!). I thought I'd start with a light topic!

    So I watched the now-infamous The Interview two nights ago. I'm no movie critic, but I thought it was a cheap, crass, stupid movie with no depth whatsoever. More of a dumbass slapstick movie than anything else.

    Again, I'm no movie critic so I don't recommend you listen to me; watch it and make up your own mind :-). I have made up mine!

    HOWEVER, I do think the Internet literati's reaction to this movie is grossly wrong, unfair and arrogant.

    Has there ever been any other Hollywood movie where the SITTING president of a country is made to look like a jackass and assassinated in the most stupid way? I can't think of any movies like that. In fact, I don't think Bollywood or any other movie system has produced such a movie.

    When Hollywood movies have US presidents in them they're always made out to be the hero (e.g. White House Down) and they pretty much never die. If they do die, then they die a hero (e.g. 2012) in true patriotic form.

    I don't recall seeing a single movie where David Cameron or Angela Merkel or Narendra Modi or any other sitting head of government was made to look like a fool and gets killed as the main point of the movie (or in any other fashion).

    I believe the US Secret Service takes ANY threats against the US president very seriously. According to Wikipedia, a threat against the US president is a class D felony (presumably a bad thing). I've heard of students who sent anonymous (joking) email threats being tracked down and getting a nice visit.

    So, suppose Sony Pictures decided to make a movie which shows President Obama being a jackass and then being killed? How far would that go before the US Secret Service shuts it down?

    In my view, the fact that this movie was conceived, funded and made just goes to show how little respect the US system has for people who are not lined up the US way. It's fine for the US government, and even the US people, to have no respect for some country, its president or whatever, but I have to agree with North Korea when they say that this movie is a violation of the UN charter:

    With no rhetoric can the U.S. justify the screening and distribution of the movie. This is because "The Interview" is an illegal, dishonest and reactionary movie quite contrary to the UN Charter, which regards respect for sovereignty, non-interference in internal affairs and protection of human rights as a legal keynote, and international laws.

      Would all the Internet literati who hailed the release of the movie act the same way if Bollywood produced a movie mocking Obama and killing him off? If not, why the double standard?

      It's disappointing that thinking people also get caught up in the rhetoric and ignore basic decency. Just to be clear: I'm not saying North Korea is a great place. I have no idea what things are really like there. What I do know is that I don't trust the managed news rhetoric that is delivered as fact by CNN, Fox, BBC, Al Jazeera or anyone else any more, about any topic. This is after observing how Sri Lanka was represented on several of these channels during the war, and after being here to observe some side of it myself. After Iraq (where are those WMDs now?) you'd think that smart people wouldn't just believe any old crap that's put out .. I distinctly remember watching the news conference (broadcast on BBC) immediately after Colin Powell made his speech with pictures to the UN Security Council, where the then Iraqi Foreign Minister (can't remember his name - fun looking dude) went through each picture and gave an entirely different explanation. We now know who was telling the truth. I try hard not to get caught up in any of the rhetoric as a result now.

      There's an entirely different topic of whether the North Koreans attacked Sony Pictures' network and whether the US government hackers shut down their Internet. It seems that the general trend (as of today) is that it wasn't the North Koreans, despite what the FBI said:

      So I'm with the North Koreans on this one: This movie should not have been conceived, funded and produced. I don't condone the hackers' approach for trying to stop it; instead Sony Pictures should've had more ethics and not done it at all. So, IMO: Shame on you Sony Pictures Entertainment!

      Chandana NapagodaWSO2 Governance Registry - Configurable Governance Artifacts

      Configurable Governance Artifacts is one of the many well-defined extension points supported by the WSO2 Governance Registry; it is also known as Registry Extension Types (RXT). It allows you to define your own metadata models in addition to the default metadata model shipped with the product, supporting the modeling of any type of asset according to user requirements.

      When a Configurable Governance Artifact is deployed in WSO2 Governance Registry, it creates a web service which supports CRUD (Create, Retrieve, Update, Delete) operations, so client applications can consume the artifact through external web services.

      Below are the main elements in RXT configuration.

      • artifactType element
      • artifactKey element
      • storagePath element
      • nameAttribute element
      • namespaceAttribute element
      • menu element
      • ui element
      • relationships element
      Using the above basic model, you can create/modify RXTs based on your requirements. Let's go through a sample RXT file and understand the purpose of each element one by one. As an example, let's consider a scenario where we need to store user contact information. There we need to capture and store information such as First Name, Last Name, Birthday, Address, Organization Department, Email Address, Mobile Number, etc.

      Here is one RXT representation which we can create to capture and store the above-mentioned information: RXT Source
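      Since the linked RXT source may not render here, the following is a minimal sketch of such a contact RXT. The media type, field names and storage path are illustrative, not the exact original:

      ```xml
      <artifactType type="application/vnd.wso2-contact+xml" shortName="contacts"
                    singularLabel="Contact" pluralLabel="Contacts"
                    hasNamespace="false" iconSet="10">
          <!-- Where instances are stored; @{name} is resolved via nameAttribute -->
          <storagePath>/contacts/@{name}</storagePath>
          <nameAttribute>overview_name</nameAttribute>
          <ui>
              <!-- Columns shown on the artifact list page -->
              <list>
                  <column name="Name">
                      <data type="path" value="overview_name" href="@{storagePath}"/>
                  </column>
              </list>
          </ui>
          <content>
              <!-- The data model; the add/edit view is generated from this -->
              <table name="Overview">
                  <field type="text" required="true">
                      <name>Name</name>
                  </field>
                  <field type="text">
                      <name>Email Address</name>
                  </field>
                  <field type="text">
                      <name>Mobile Number</name>
                  </field>
              </table>
          </content>
      </artifactType>
      ```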

      Now let’s go through the main XML elements in the above sample.

      artifactType element

      The root element of the RXT configuration is artifactType and it has few attributes which need to be defined.

      • type - Defines the mediatype of the artifact. 
      • shortName - Short name for the artifact
      • singularLabel - Singular label of the artifact. This name is shown in the UI as the link to add new artifacts.
      • pluralLabel - Plural label of the artifact. This plural label is used when listing artifacts.
      • hasNamespace - Defines whether the artifact has a namespace (boolean)
      • iconSet - Icon set number used for the artifact icons (from 1 to 30)
      storagePath element

      This element is used to define the storage path of the artifact. Users can customize the storage path based on the information available. They can use fixed attributes such as @{name} and @{namespace}, and other attributes such as @{overview_version}. The name and namespace attributes above need to be mapped using nameAttribute and namespaceAttribute.

      nameAttribute element

      This is the identifier for the name attribute used in the storage path.

      namespaceAttribute element

      This is the identifier for the namespace attribute used in the storage path.

      lifecycle element

      This element is used to define the default lifecycle of the given artifact. When an artifact is created, this lifecycle is automatically assigned to the resource.

      ui element

      This element is used to define the list view of the artifact; the list view is automatically generated from it.

      relationships element
      Using the relationships element, we can define associations between this artifact and others.

      content element

      This is the data model of the new artifact; from the information available in the content element, the artifact add view is automatically generated.

      Lakmali BaminiwattaMulti Tenant API Management with WSO2 API Manager - Part 2

      In the previous post we discussed what multi-tenancy is, multi-tenancy in API development and multi-tenancy in the API Store (consumption). In this post we will discuss how subscriptions can be managed among multiple tenants, how APIs can be published into different tenant domains, and multi-tenancy in the API Gateway, the Key Manager and API statistics. 

      Manage subscriptions among multiple tenants

      In the previous post we discussed how different tenants can develop and consume APIs in isolated views of the API Publisher and API Store. This section describes how API creators can control who can subscribe to an API. In the Add API page, under Subscriptions, you can select the Subscription Category.

      There are 3 subscription categories.

      1. Available to Current Tenant Only

      The API can be subscribed to only by users in the current tenant domain (the tenant domain of the API creator).

      2. Available to All the Tenants

      The API can be subscribed to by users of all the tenants in the deployment.

      3. Available to Specific Tenants

      The API can be subscribed to by the specified tenants and by the current tenant.

      Example: The API developer of UserProfileAPI sets its subscription category to 'Available to Specific Tenants' and specifies the allowed subscriber tenants as below.


      Figure 1 : Subscription availability to specific tenants

      Now a subscriber from an allowed tenant domain can log in to the publisher tenant's API Store and will be able to subscribe to UserProfileAPI.

      Although API subscription can be allowed to different tenant domains, this approach has a drawback: API subscribers need to leave their own tenant store and browse the publisher's store to discover UserProfileAPI. How can we make UserProfileAPI visible in their own Store? Let's see in the next section.

      Publishing APIs to multiple tenant stores

      WSO2 API Manager allows API developers to publish APIs to external stores. By default, when a tenant user publishes an API, it gets published in that tenant's own API Store. With this 'publishing to external stores' feature, each tenant can configure the set of external stores to which they wish to publish APIs. API developers can then push APIs to those configured tenant stores, exposing the APIs to a wider community.

      However, when subscribing to such APIs, users will be redirected to original publisher's store.


      Figure 2 : Publish to multiple tenant stores

      We can configure external stores as below.

      1. Log in to the API Manager management console (https://<host>:9443/carbon) as admin and select the Browse menu under Resources.

      2. The Registry opens. Go to the /_system/governance/apimgt/externalstores/external-api-stores.xml resource.


      3. Click the Edit as Text link and change the <ExternalAPIStore> element of each external API store to which you need to publish APIs. 

      Example: The HR department configures external stores for the Sales and Engineering departments as below, so that UserProfileAPI can be pushed into their API Stores.  

      Figure 3 : External store configuration


      Figure 4 : External API Stores in API Publisher

      As shown in Figure 4, API publishers can push UserProfileAPI into the Engineering Store and Sales Store from the 'External API Stores' tab.

      Example: The HR tenant publishes UserProfileAPI into the Engineering Store and Sales Store. When a subscriber from one of those stores clicks on UserProfileAPI, there is a link to access the original publisher's Store.


      Figure 5 : UserProfileAPI appearing in Store

      Figure 6 : Link to Publisher Store ( store)

      Multi-Tenancy in API Gateway

      Above we discussed the multi-tenant features supported in the API Store and API Publisher. There we saw how to achieve isolation in API development and consumption, how API subscriptions can be managed among tenants, and how APIs can be published to different tenant domains. In this section, let's look at how multi-tenancy is achieved at the API Gateway and Key Manager level.

      In WSO2 API Manager, the API gateway is a simple API proxy that intercepts API requests and applies policies such as throttling and security checks. These API proxies are deployed in WSO2 API Manager as Synapse REST resources. In a multi-tenant deployment, APIs are deployed in a tenant-isolated manner, with isolated deployment spaces for each tenant. APIs are also exposed with tenant-domain-based URL patterns, as below.

      Example: We created UserProfileAPI in one tenant domain and the ArticleFeeds API in another. In the API Gateway these APIs are deployed in separate spaces, and each is exposed under a tenant-domain-based URL of the form http://<gateway-host>/t/<tenant-domain>/<api-context>. When application developers consume these APIs from different domains, they will see these tenant-based API endpoint URLs.

      Figure 7 : Multi-tenancy in API Gateway level
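      As an illustration of that /t/ URL pattern (the gateway host, tenant domain, API context and token below are placeholders, not values from the post):

      ```shell
      # Invoke an API deployed under a tenant domain via the /t/ URL pattern.
      # Host, tenant domain, context, version and token are illustrative.
      curl -H "Authorization: Bearer <access-token>" \
        "http://gateway.example.com:8280/t/hr.example.com/userprofile/1.0/profiles"
      ```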

      Multi-Tenancy in API Key Manager

      The API Key Manager component handles all security and key-related operations. When the API Gateway receives API calls, it contacts the API Key Manager service to verify the validity of tokens and perform security checks. All tokens used for validation are based on the OAuth 2.0 protocol. API subscribers first have to create an application, then subscribe to APIs and generate tokens against that application.
      In a multi-tenant deployment, consumer applications are tenant isolated. At API subscription and key generation, keys (consumer key/secret) are issued against these consumer applications. Then the tenant users who consume those applications can generate user tokens. Further, when storing keys, tenant IDs are used to achieve tenant separation. This is how multi-tenancy is achieved in the API Key Manager.

      Multi-Tenancy in Statistics

      We can set up WSO2 Business Activity Monitor to collect and analyze runtime statistics from the API Manager. To publish data from the API Manager to BAM, the Thrift protocol is used.
      Here, a usage data publisher is created per tenant.

      Information processed in BAM is stored in a database, from which the API Publisher and Store retrieve information before displaying it in the corresponding UI screens.
      Statistics views in the API Store and API Publisher are tenant isolated, since the Store and Publisher apps are tenant isolated. 

      Figure 8 : Multi-tenancy in API Statistics


      This post discussed how organizations can collaborate and monetize their APIs across multiple entities such as departments, partners or simply separate development groups using the multi-tenancy features in WSO2 API Manager. Basically, API developers of multiple entities can have isolated views in the API Publisher and manage their APIs, and API consumers corresponding to multiple entities can explore and consume APIs from tenant-isolated API Stores. Moreover, this article described how API subscriptions can be controlled among tenants and how APIs can be published into multiple API Stores. Finally, how multi-tenancy is achieved in the API Gateway, Key Manager and statistics was discussed. 

      Lakmali BaminiwattaMulti Tenant API Management with WSO2 API Manager - Part 1


      WSO2 API Manager provides a complete solution for API Management. With Multi-tenancy in WSO2 API Manager, organizations can collaborate and monetize their APIs across multiple entities such as departments, partners or simply between separate development groups. 

      Why Multi-Tenancy

      The goal of multi-tenancy is maximizing resource sharing among multiple tenants while providing an isolated view for each tenant.

      One of the main benefits of multi-tenancy is the ability to use a single deployment for multiple tenants, which lowers cost and simplifies administration. Further, this is ideal for multi-departmental and multi-partner types of business settings.

      Multi-Tenancy in API Development

      WSO2 API Manager provides a simplified Web interface called WSO2 API Publisher for API Design, Implementation and Management. It is a structured GUI designed for API creators to design, implement, manage, document, scale and version APIs, while also facilitating more API management-related tasks such as life-cycle management, monetization, analyzing statistics, quality & usage and promoting & encouraging potential consumers & partners to adopt the API in their solutions.

      While providing all these capabilities, WSO2 API Publisher is a tenant-isolated application, meaning developers from different tenant domains can develop and manage APIs while having isolated views for each tenant. Let's look into more details by using an example scenario. 

      Example: There is a multi-departmental organization in which 3 departments, namely HR, Sales and Engineering, need to expose their core functionality/services as APIs for internal and external consumption. They are using WSO2 API Manager as the API management solution, so we can consider those 3 departments as 3 tenants in WSO2 API Manager. Each department can then develop and manage its APIs independently.

      First we need to create 3 tenants in WSO2 API Manager. Please refer to this doc for tenant creation and listing steps. 

      Let’s assume that following tenant domain names are used for each department.

      Department Name | Tenant Domain

      Figure 1 : Tenant Isolated API Publishing for each department

      Once you create the tenants, you can log in to the API Publisher using tenant credentials and design, implement, manage & publish APIs. Find the User Guide on API development here.

      Tenant users of each domain will have isolated views in the API Publisher as below, where each tenant has its own view. 


      Figure 2 : API Publisher view

      Figure 3 : API Publisher view

      Multi-Tenancy in API Store

      API Manager provides a structured Web interface called the WSO2 API Store to host published APIs. API consumers and partners can self-register to it on-demand to find, explore and subscribe to APIs, evaluate available resources and collaboration channels. The API store is where the interaction between potential API consumers and API providers happen. Its simplified UI reduces time and effort taken when evaluating enterprise-grade, secure, protected, authenticated API resources.

      When there are multiple tenants in the API Manager deployment, there is a tenant isolated view of API store for each tenant domain. In other words there will be a separate store for each tenant.

      Public Store and Tenant Stores 

      When the API Store URL (ex: http://localhost:9443/store) is accessed in a multi-tenant deployment, we can see the 'Public Store', which is a store of stores.
      The Public Store links to the Stores of all the tenants. Anonymous users can browse each of these Stores, and all the public APIs of each Store will be visible. If one needs to subscribe to APIs, then he should log in to one of the Stores.


      Figure 4 : Public Store linking all the tenant stores

      Each of the Stores representing a tenant domain is known as a 'Tenant Store'. It is the tenant-isolated API Store of each domain. ex: http://localhost:9443/store?

      Subscribers from each tenant domain can consume APIs in their tenant store. Let’s look into more details by using an example scenario.

      Example: You are a subscriber in a given tenant domain. You can first access the Public Store and then visit your tenant's Store. Then you can log in to the store with your credentials and consume APIs. You can also go back to the Public Store and access other Stores as well. But if you want to consume an API in another tenant's store, the API developers should allow that. This will be discussed further in the "Manage subscriptions among multiple tenants" section in the next post. 

      Figure 5 : Tenant Store

      So as described above, different tenants can develop and consume APIs in isolated views of API Publisher and API Store. Next post will describe how API creators can control who can subscribe to APIs. 

      Hasitha Aravinda[WSO2 ESB] [4.8.1] How to Convert XML to JSON Array

      The following API demonstrates this functionality.

      Try the above API with a REST client using the following sample requests.

      1) Multiple stocks
      XML request: JSON response: 

      2) Single stock
      XML request: JSON response (as an array): 

      This is with the following message formatter and builder in axis2.xml:
              <messageFormatter contentType="application/json" class="org.apache.synapse.commons.json.JsonStreamFormatter"/>

              <messageBuilder contentType="application/json" class="org.apache.synapse.commons.json.JsonStreamBuilder"/>
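The API definition referenced above did not survive extraction. As a minimal sketch (the API name, context and backend URL are placeholders, not from the original post), an API that forwards an XML request and returns the backend response serialized as JSON could look like this:

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteAPI" context="/stockquote">
   <resource methods="POST">
      <inSequence>
         <!-- forward the XML request to the backend service as-is -->
         <send>
            <endpoint>
               <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <!-- switch the message type so the JsonStreamFormatter serializes
              the XML response body as JSON on the way out -->
         <property name="messageType" value="application/json" scope="axis2"/>
         <send/>
      </outSequence>
   </resource>
</api>
```

With the message builder/formatter pair above enabled in axis2.xml, a single repeating XML element in the response is emitted as a JSON array.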

      Waruna Lakshitha JayaweeraJMS performance tuning with WSO2 ESB


      WSO2 ESB can be configured as both a producer and a consumer for a JMS broker [1]. For example, the ESB can listen to a JMS queue (e.g. Apache ActiveMQ), consume messages and send them to a back-end service. It can also act as a JMS producer which sends messages to a JMS queue. In this post I discuss JMS performance tuning with WSO2 ESB. The performance of a JMS service can suffer because messages are served by a single-threaded JMS listener. There are a few ways to address that; the following are the steps to tune performance.


      First, you need to follow [2] for optimal ESB and OS level performance.
      Memory settings depend on your system capacity. For example, if you have more than 4GB of RAM you can increase the parameters like this:
      -Xms4096m -Xmx4096m -XX:MaxPermSize=2048m 

      JMS listener performance 

      Step 1 

      We can increase JMS listener performance through concurrent consumers. Please note that concurrent consumers are only applicable to JMS queues, not to JMS topics.

      Add the following configuration to the Queue Connection Factory properties in JMSListener configuration in axis2.xml. 

      <parameter name="transport.jms.ConcurrentConsumers" locked="false">50</parameter> 
      <parameter name="transport.jms.MaxConcurrentConsumers" locked="false">50</parameter> 

      Step 2

      Another way of improving performance is caching. To enable caching, add the following parameter to the Queue Connection Factory properties.

      <parameter name="transport.jms.CacheLevel">consumer</parameter> 

      Possible values for the cache level are none, auto, connection, session and consumer. "consumer" is the highest level and provides maximum performance. After adding concurrent consumers and the cache level, your complete configuration will look like the one given below.

         <transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
            <parameter name="myTopicConnectionFactory" locked="false">
               <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
               <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
               <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
               <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
            </parameter>

            <parameter name="myQueueConnectionFactory" locked="false">
               <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
               <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
               <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
               <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
               <parameter name="transport.jms.ConcurrentConsumers" locked="false">50</parameter>
               <parameter name="transport.jms.MaxConcurrentConsumers" locked="false">50</parameter>
               <parameter name="transport.jms.CacheLevel">consumer</parameter>
            </parameter>

            <parameter name="default" locked="false">
               <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
               <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
               <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
               <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
               <parameter name="transport.jms.ConcurrentConsumers" locked="false">50</parameter>
               <parameter name="transport.jms.MaxConcurrentConsumers" locked="false">50</parameter>
               <parameter name="transport.jms.CacheLevel">consumer</parameter>
            </parameter>
         </transportReceiver>

      I have enabled security in ActiveMQ, so my axis2.xml contains the following additional parameters. You can remove them if you haven't enabled security in ActiveMQ.

              <parameter name="transport.jms.UserName">system</parameter> 
           <parameter name="transport.jms.Password">manager</parameter> 
           <parameter name="">JMSUSERID</parameter> 
           <parameter name="">manager</parameter> 

      JMS Sender performance

      Additionally, to further optimize the performance of the JMSSender, you can add:

      <parameter name="transport.jms.CacheLevel">producer</parameter> 

      Make sure to add the JMS destination, which is a mandatory parameter when using the "producer" cache level.
      <parameter name="transport.jms.Destination" locked="false">dynamicQueues/SimpleStockQuoteService</parameter> 

      Then your new configuration for the JMSSender will look like the one given below:

         <transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender">
            <parameter name="myQueueConnectionFactory" locked="false">
               <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
               <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
               <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
               <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
               <parameter name="transport.jms.Destination" locked="false">dynamicQueues/SimpleStockQuoteService</parameter>
               <parameter name="transport.jms.DestinationType" locked="false">queue</parameter>
               <parameter name="transport.jms.ReplyDestination" locked="false">dynamicQueues/SimpleStockQuoteServiceReply</parameter>
               <parameter name="transport.jms.CacheLevel" locked="false">producer</parameter>
            </parameter>
         </transportSender>

      You need to specify the connection factory in your endpoint as follows.

               <address uri="jms:/?transport.jms.ConnectionFactory=myQueueConnectionFactory"/> 

      Possible values for the cache level are none, auto, connection, session and producer; the maximum cache level is “producer”.
      In case you have multiple queues, setting up a default connection factory won't work, because all the JMS endpoints will then share the same destination queue. So what you need to do is define a connection factory per JMS endpoint in axis2.xml, and use that connection factory definition in each JMS endpoint as follows.

               <address uri="jms:/?transport.jms.ConnectionFactory=myQueueConnectionFactory1"/> 
               <address uri="jms:/?transport.jms.ConnectionFactory=myQueueConnectionFactory2"/> 
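The per-endpoint connection factories referenced above would be defined in the JMSSender configuration of axis2.xml. As a sketch (the factory names myQueueConnectionFactory1/2 and queue names QueueOne/QueueTwo are illustrative, not from the original post), each factory gets its own destination:

```xml
<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender">
   <!-- one connection factory per JMS endpoint, each bound to its own queue -->
   <parameter name="myQueueConnectionFactory1" locked="false">
      <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
      <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
      <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
      <parameter name="transport.jms.Destination" locked="false">dynamicQueues/QueueOne</parameter>
      <parameter name="transport.jms.CacheLevel" locked="false">producer</parameter>
   </parameter>
   <parameter name="myQueueConnectionFactory2" locked="false">
      <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
      <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
      <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
      <parameter name="transport.jms.Destination" locked="false">dynamicQueues/QueueTwo</parameter>
      <parameter name="transport.jms.CacheLevel" locked="false">producer</parameter>
   </parameter>
</transportSender>
```

Each endpoint then picks its own factory via `transport.jms.ConnectionFactory`, so the destinations no longer collide.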


      Harshan LiyanageIs iOS Enterprise Ready?

      The rapid growth of mobile technologies has allowed enterprise computing to achieve greater personalization, real-time data availability and data delivery. Enterprises of all kinds are recognizing the importance of using mobile technologies for growing revenue, improving service, and capturing customers’ attention. There are several key reasons behind this.
      • Mobility empowers people to make decisions at the moment they need to
      • Improved data availability & delivery offers real-time information, which leads to faster decision making
      • Mobility allows employees to engage with customers very quickly, which increases customer satisfaction because there are no processing or communication delays
      • Increased employee interaction

      Organizations have to consider the following factors when implementing enterprise mobility strategies:
      • Remote Device Management & App Management
      • Access to the corporate data & apps
      • Security of corporate data
      When we speak about the iOS platform, it has always been an enterprise-friendly platform, while providing the freedom & flexibility for employees to use their iOS devices as they want. According to Craig Federighi (Apple’s Senior Vice President of Software Engineering), 98% of Fortune 500 companies have already started using iOS devices.
      Further, the release of iOS 8 introduced a truly impressive list of capabilities to simplify and enhance device management, app development, security and enterprise content publishing. So let's go through the iOS features designed to cater to enterprise mobility requirements.

      Device Management

      Apple provides a comprehensive device management framework with many features built into iOS. Enterprises can easily support both BYOD and COPE scenarios with the support of the iOS MDM framework.

      Support for BYOD


      In the BYOD scenario, employees have the freedom to enroll their personal devices with the corporate MDM server by installing a configuration profile, and they can opt out of the enrollment at any time by uninstalling the profile. iOS provides the ability to use corporate data & personal data in native apps, which preserves a great user experience. Further, in the BYOD scenario iOS ensures that the device owner’s privacy is untouched while ensuring maximum protection of the corporate data. Given below is a list of things that an MDM can and cannot see on an iOS device in the BYOD scenario.
      Visible to MDM:
      • Device name
      • Phone number
      • Serial number
      • Model name and number
      • Capacity and space available
      • iOS version number
      • Installed apps
      Hidden from MDM:
      • Personal mail, calendars, and contacts
      • SMS or iMessages
      • Safari browser history
      • FaceTime or phone call logs
      • Personal reminders and notes
      • Frequency of all use
      • Device location

      Support for COPE

      iOS provides extensive support for managing corporate-owned iOS devices. Features like the Device Enrollment Program (DEP), streamlined enrollment setup, lockable MDM settings, device supervision and always-on VPN ensure that all COPE devices are configured to the organization’s specific requirements.

      Additional features for managing COPE devices:

      • Device Enrollment Program (DEP)
      • Streamlined setup
      • Supervised controls
      • Always-on VPN
      • Global proxy
      • Advanced content filtering
      • Device queries — list installed apps, etc
      • Activation Lock Bypass Code
      • Full remote wipe
      • Locked MDM settings

      iOS MDM Feature List

      Given below is the list of MDM features supported by the iOS platform.

      1. Device Lock: Enterprise admins can lock or unlock any registered iOS device remotely, which prohibits employees from using their mobile devices on unnecessary occasions.
      2. Clear passcode: This feature allows the admin to clear the unlock password if the user has added a passcode to unlock the device.
      3. WiFi: Enterprise admins can configure the WiFi settings of any registered iOS device remotely, which ensures that every iOS device in the organization is properly configured with the internal WiFi settings.
      4. Camera: This feature allows enterprise admins to enable/disable the camera remotely. This is helpful in restricted environments, like the military, to limit unnecessary exposure to the public.
      5. Configure passcode policy: Enterprise admins can add a policy to the user’s passcode, such as maximum failed attempts, minimum length, days to expiration, minimum complex characters, and passcode history. This ensures that the device is secured from unauthorized access, which protects both corporate & personal data on the device.
      6. Email configuration: Enterprise admins can configure email settings on the device, like account credentials, server address & other required settings. This ensures that all iOS devices in the enterprise have access to corporate email.
      7. Selective wipe of corporate accounts, apps, and documents: This feature wipes the enterprise container of the device. It removes all enterprise-related data, apps & MDM profiles. This ensures that corporate data & apps are protected when the device is stolen or an employee leaves the organization.
      8. Install / uninstall apps: Enterprise admins can install/uninstall apps to/from employee devices, so that every employee has all, or a set of, enterprise or 3rd-party apps according to the enterprise’s requirements. This selective app management separates enterprise apps from personal apps, which enables enterprises to take actions on the device and secure corporate data and apps without completely wiping the device. Further, this ensures that the native experience of the BYOD user is unharmed.

      Additional MDM features provided with iOS 8:

      9. Set Device Name: The Device Name API is useful for remotely setting the device name of the user, such as “Paul’s iPad.” Enterprise admins can simplify device management by setting the device name as part of a bulk registration workflow.
      10. Administrators can prevent users from adding their own restrictions in the Settings menu on the device.
      11. Administrators can also disable users’ ability to locally erase, reset or wipe devices. This feature is useful in COPE scenarios & when managed devices have been stolen.
      12. Device Enrollment Program (DEP): With DEP users can’t remove MDM, and this ensures that all COPE devices are enrolled in the MDM. Further, this feature ensures that users are prohibited from removing content or device-level settings, which helps make shared devices easier to manage.
      13. New queries: New MDM queries provide more information to administrators by letting them see the last time a device was backed up to iCloud, so they know whether it’s safe to perform certain tasks like an enterprise wipe or a full wipe.
      14. Manage Safari downloads, books and PDFs
      15. Per-app VPN
      Enterprise App Development
      The enterprise app market is growing faster than ever. To aid that, Apple has provided an impressive set of developer features in iOS 8, which includes 4,000 new APIs. This clearly shows that the latest iOS is designed to enable the development community to create more powerful apps for end users as well as for enterprises. The most promising feature in iOS 8 is “App Extensions”, which allows developers to build end-user productivity enhancement apps.

      App Extensions

      Extensions were introduced in iOS 8. An extension is a separate piece of software that runs independently when used in another app, which greatly improves the workflow of data for enterprise users. These extensions still face many restrictions, in order to maintain long battery life and to keep apps secure. All iOS extensions downloaded from the Apple App Store must be part of a bigger or “container” app: each extension must live within a “containing app”, and Apple mandates that the containing app must offer some functionality to the user. So no one can write only extensions and distribute them through the App Store. There are six different kinds of extensions. Here we will discuss only the five extensions that will be useful to enterprise users.


      1. Storage and Document Provider APIs: This extension allows apps to let users open and edit documents using other apps and share documents between apps without creating unnecessary copies. For this purpose, Storage Provider extensions provide a document storage location that can be accessed by other apps, such as iCloud, Evernote or any other third-party app. Files managed by the storage provider can be opened by apps which use a document picker view controller. Enterprises can create enterprise apps which use a DocumentPickerViewController, which will simplify the document workflow on the device and increase accessibility.
      2. Custom Keyboard: This feature allows users to download and install third-party custom keyboards on their device. These third-party keyboards could include specialized features, such as a self-learning dictionary that saves unique words as they are typed into it, which could benefit professions with highly technical terms. So if necessary, enterprises can come up with their own keyboards, which will increase the productivity of their employees.
      3. Sharing Extensions: Sharing extensions allow any app downloaded from the App Store to connect to Share Sheets so users can share content such as comments, photos, videos, audio, and links right from the app. Any popular app can use sharing. As an example, a user looking up information in Safari or in email can easily share the link through a popular app like Salesforce One or through an enterprise app.
      4. Custom Action Extensions: This feature allows any app to add a custom action extension to extend its functionality to other apps. For example, a user opens a file in App A and edits it, but needs a watermarking feature from App B. In this case, App A is used to make changes to the file and App B is used to add a company watermark and convert it into an image or a PDF file. Although App B adds the capability to modify the content in App A, the file is not transferred between the two apps; it is as if part of App B is running inside of App A. Enterprises can use extensions to enhance the capability of their enterprise apps & extend/share features without creating new apps.
      5. Notification Center: Notification Center provides the capability to respond to notifications either from the lock screen or from the top of the screen. A user can respond to messages without closing the app he or she is using when a notification drops down from the top of the screen, and can take action, such as accepting a calendar invitation, without unlocking the phone, even if the notification is received while the phone is on the lock screen. This feature can also be used by enterprise apps to transmit important reminders or actions without making users close apps.
      Enterprise Content Management
      Managed Enterprise Content (PDFs, EBooks)
      The new features in iOS allow enterprises to manage content such as ebooks, PDFs and other files easily, by silently pushing and removing eBooks and PDF documents from the iBooks app using MDM tools. This capability offers a huge advantage for enterprises with a mobile strategy, because they can instantly deliver mobile content such as quotations, marketing content, sales brochures, technical guides, or business books without requiring any action from the user.
      Further, iOS provides tools to quickly find, create, archive, and share content in enterprises through features like the following.

      iCloud Document Controls
      iCloud document controls allow enterprise admins to restrict iCloud use for enterprise-managed apps, while still allowing it for unmanaged or personal apps to help prevent personal data loss. This capability can be managed by the MDM.

      Storage and Document Provider APIs
      These new APIs support the content lifecycle by enabling enterprise users to open and edit documents using more apps and share documents between apps to increase their productivity.

      Managed Safari Downloads
      Enterprise admins can specify that files downloaded from enterprise domains (or specific domains) using Safari may only be opened with specified managed apps. For example, a PDF downloaded from may be opened with an enterprise-developed PDF viewer and not with iBooks or another 3rd-party app. This positions Safari as an enterprise browser and improves the containerization of the Safari browser.


      Mobile devices are rapidly becoming the primary computing platform for enterprise users. With the release of iOS 8, Apple has introduced an incredible number of features and capabilities for mobile users, including new life-critical apps, developer APIs, and content publishing capabilities. While many of these features offer significant enterprise benefits, they also introduce new privacy, security, and device management considerations that can only be managed with a comprehensive mobile strategy based on clear guidelines and enabled through an EMM provider. Furthermore, the addition of around 4,000 new developer APIs, PIM (Personal Information Management) apps and device management features, apps, and content shows that, with iOS 8, Apple means business.



      Chathurika MahaarachchiBest approach to evaluate software security

      Security testing is one of the most essential aspects of software testing. Software that cannot protect the sensitive data inside it, and cannot sustain that data as per the requirements, is of no use. Penetration testers use different kinds of approaches to evaluate the security of web applications: some testers rely totally on automated tools, and some use manual testing methods.

      Automated testing is where one piece of software is used to test another piece of software, in compiled or source code form. Manual testing, on the other hand, is the process where a person performs the tests directly by hand.

      There are a number of advantages and disadvantages to both approaches. A manual test is done by a real, thinking person who can devise different sets of test cases suited to the application under test, so the quality of the tests tends to be better. However, this approach is not well suited to large-scale applications, and manual tests often provide inconsistent results that are difficult to verify.
      Automated tests are consistent and suit larger applications well. Their results are easy to reproduce and verify. The disadvantage of automated testing is that the rate of false positives may be high, so the outcome of a test may not be particularly useful.
      The best approach is often to combine automated and manual tests. Automated tests are very useful at the initial stages, where the requirement is to cover as much area as possible. The results from the tests are analyzed, and manual investigation is performed in the areas that seem critical. The process can be repeated until a satisfactory level of coverage is reached.


      sanjeewa malalgodaHow to get custom error messages for authentication failures in WSO2 API Manager 1.8.0

      Here in this post I will discuss how we can generate custom error messages for authentication failures. If you need to retrieve the message in application/vnd.error+json format, you need to add the following property to the _auth_failure_handler_.xml sequence file.

       <property name="error_message_type" value="application/vnd.error+json"/> 
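For context, a minimal _auth_failure_handler_.xml sequence carrying this property could look like the sketch below (the sequence name matches the file name; any mediators already present in your sequence should be kept as they are):

```xml
<sequence name="_auth_failure_handler_" xmlns="http://ws.apache.org/ns/synapse">
   <!-- emit authentication failures in application/vnd.error+json format -->
   <property name="error_message_type" value="application/vnd.error+json"/>
   <!-- ...existing mediators of the sequence remain unchanged... -->
</sequence>
```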

      We also need message builders/formatters defined in the axis2.xml file to map this message type. If you plan to use the JSON formatter, please use the following configuration (assuming you create the message according to the given template).
      <messageFormatter contentType="application/vnd.error+json" 
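The formatter entry above is truncated in the original post. A complete pair of axis2.xml entries for this content type, using the JSON stream formatter/builder classes shipped with the product (the class names are my assumption, not copied from the original), might look like:

```xml
<messageFormatter contentType="application/vnd.error+json"
                  class="org.apache.synapse.commons.json.JsonStreamFormatter"/>
<messageBuilder contentType="application/vnd.error+json"
                class="org.apache.synapse.commons.json.JsonStreamBuilder"/>
```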

      Umesha GunasingheProduct releases, and relevant information - WSO2

      This is just a note for anyone who is looking for this information - not a fancy blog post :)

      Once a WSO2 product is released, the release-related information is recorded in the release matrix [1] :-


      You can refer to the relevant release dates, the released chunk, the relevant P2 repo link (for feature installations), the compatible Carbon version, and the platform.

      If you click on a P2 repo link, it will redirect you to the relevant P2 repo information and link. We use this for feature installations for WSO2 products. For example, when you need to install WSO2 Identity Server Key Manager features into WSO2 API-M, you can install those features into API-M using the relevant P2 repo link.

      If you want to refer to the relevant source code for a particular release, you can check the chunk in which the product was released.

      Normally, in the WSO2 SVN, there are the following categories.

      1) trunk - normal development
      2) branch - development while getting ready for a release
      3) tag - once released, the product is available under the tag

      If you want to look for the source code of a particular release, check under the relevant release: find the chunk in which the product was released, then check for the relevant feature source code under components...

      For example:

      API-M 1.8.0 can be found under [1], and you can check the relevant source code for the API-M Store at [2].



      Aruna Sujith KarunarathnaAdding a custom proxy path for WSO2 Carbon 4.3.0 Based Products

      The objective of this article is to give a comprehensive guide on custom proxy paths: why we need a custom proxy path and how to enable one for WSO2 products. This feature was introduced in the Carbon 4.3.0 release. Custom proxy paths A custom proxy path is used when mapping a proxy URL pattern to a back-end URL pattern. For example, let's consider Proxy entry url path :
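As a rough sketch of where this is configured (the hostnames and context path below are placeholder values; the reverse-proxy side of the setup depends on your environment), the proxy context path is set in CARBON_HOME/repository/conf/carbon.xml:

```xml
<!-- carbon.xml: expose the server behind a custom proxy path,
     e.g. https://apps.example.com/esb -> https://localhost:9443/ -->
<HostName>apps.example.com</HostName>
<MgtHostName>apps.example.com</MgtHostName>
<ProxyContextPath>esb</ProxyContextPath>
```

A reverse proxy (such as nginx) in front of the server then maps requests arriving at /esb to the Carbon server's ports.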

      Aruna Sujith KarunarathnaWSO2 Carbon 4.3.0 Released..!!!

      Hi Folks, the WSO2 Carbon team is pleased to announce the release of Carbon Kernel 4.3.0. What is WSO2 Carbon WSO2 Carbon redefines middleware by providing an integrated and componentized middleware platform that adapts to the specific needs of any enterprise IT project - on premise or in the cloud. 100% open source and standards-based, WSO2 Carbon enables developers to rapidly orchestrate

      Niranjan KarunanandhamWSO2 Carbon Kernel 4.3.0 is Released!!

      Hi Folks,

      The WSO2 Carbon team is pleased to announce the release of Carbon Kernel 4.3.0.

      What is WSO2 Carbon

      WSO2 Carbon redefines middleware by providing an integrated and componentized middleware platform that adapts to the specific needs of any enterprise IT project - on premise or in the cloud. 100% open source and standards-based, WSO2 Carbon enables developers to rapidly orchestrate business processes, compose applications and develop services using WSO2 Developer Studio and a broad range of business and technical services that integrate with legacy, packaged and SaaS applications.

      WSO2 Carbon kernel, the lean, modular, OSGi-based platform, is the base of the WSO2 Carbon platform. It is a composable server architecture which inherits modularity and dynamism from the OSGi framework. WSO2 Carbon kernel can be considered a framework for server development. All WSO2 products are composed as a collection of reusable components running on this kernel. These products/components inherit all the core services provided by the Carbon kernel, such as registry/repository, user management, transports, caching, clustering, logging and deployment-related features.

      You can download the released distribution from the product home page :

      How to Contribute 

      What's New In This Release
      • Simplified logging story with pluggable log provider support.
      • Upgraded versions of Hazelcast, Log4j, BouncyCastle.
      • Improved Composite application support.

      Key Features
      • Composable Server Architecture - Provides a modular, light-weight, OSGi-based server development framework.
      • Carbon Application(CApp) deployment support.
      • Multi-Profile Support for Carbon Platform - This enable a single product to run on multiple modes/profiles.
      • Carbon + Tomcat JNDI Context - Provide ability to access both carbon level and tomcat level JNDI resources to applications using a single JNDI context.
      • Distributed Caching and Clustering functionality - Carbon kernel provides a distributed cache and clustering implementation which is based on Hazelcast- a group communication framework
      • Pluggable Transports Framework - This is based on Axis2 transports module.
      • Registry/Repository API- Provide core registry/repository API for component developers.
      • User Management API  - Provides a basic user management API for component developers.
      • Logging - Carbon kernel supports both Java logging as well as Log4j. Logs from both these sources will be aggregated to a single output
      • Pluggable artifact deployer framework - Kernel can be extended to deploy any kind of artifacts such as Web services, Web apps, Business processes, Proxy services, User stores etc.
      • Deployment Synchronization - Provides synchronization of deployed artifacts across a product cluster.
      • Ghost Deployment - Provides a lazy loading mechanism for deployed artifacts
      • Multi-tenancy support - The roots of the multi-tenancy in Carbon platform lies in the Carbon kernel. This feature includes tenant level isolation as well as lazy loading of tenants.

      Fixed Issues

      Known Issues

      Contact Us

      WSO2 Carbon developers can be contacted via the mailing lists:

      Reporting Issues
      You can use the Carbon JIRA issue tracker to report issues, enhancements and feature requests for WSO2 Carbon.

      Thank you for your interest in WSO2 Carbon Kernel.

      --The WSO2 Carbon Team--

      sanjeewa malalgodaHow to use two layer throttling in WSO2 API Manager

      Create new tier definitions

      Here in this post I will discuss how we can use two throttling policies at the same time for a single API. When we have complex use cases we might need to apply different policies simultaneously.
      The table below shows how the throttling policies are defined.

      Free - 300 per month, 5 per 3 min
      Silver - 2000 per month, 1 per 5 sec
      Gold - Unlimited

      As we need to engage two throttling layers, we will add two throttling tier definitions and engage them to the API.
      In order to do that, edit the API definition synapse configuration file,

      Ex: AM_HOME/repository/deployment/server/synapse-configs/default/api/admin--animal_v1.0.0.xml, with the following content

      <api xmlns="" name="admin--animal" context="/animal" version="1.0.0" version-type="url">
         <handler class=""/>
         <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
            <property name="id" value="B"/>
            <property name="policyKey" value="gov:/apimgt/applicationdata/throttling-l2.xml"/>
         </handler>
         <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
            <property name="id" value="A"/>
            <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
         </handler>
      </api>

      Two layer Throttling
      Here we will use two-layer throttling to achieve two policies for each role (free, silver, gold). We will engage them to the API with different keys, so both of them will execute sequentially at runtime. In this case you need to replace the tiers.xml file at the gov:/apimgt/applicationdata/tiers.xml path of the governance registry.

      1) Copy throttling-l1.xml (create the file with the following contents) to GOV_REG/apimgt/applicationdata/tiers.xml
      2) Copy throttling-l2.xml (create the file with the following contents) to GOV_REG/apimgt/applicationdata/throttling-l2.xml

      (NOTE : GOV_REG is the governance registry root in Carbon console )

      Throttling configurations - Policy 01(throttling-l1.xml)

      <wsp:Policy xmlns:wsp=""
                 <throttle:ID throttle:type="ROLE">Gold</throttle:ID>
                 <throttle:ID throttle:type="ROLE">Silver</throttle:ID>
                 <throttle:ID throttle:type="ROLE">free</throttle:ID>
                 <throttle:ID throttle:type="ROLE">Unauthenticated</throttle:ID>

      Policy 02(throttling-l2.xml)

      <wsp:Policy xmlns:wsp=""
                 <throttle:ID throttle:type="ROLE">Gold</throttle:ID>
                 <throttle:ID throttle:type="ROLE">Silver</throttle:ID>
                 <throttle:ID throttle:type="ROLE">Free</throttle:ID>
                 <throttle:ID throttle:type="ROLE">Unauthenticated</throttle:ID>
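For reference, a single ROLE-based tier inside these policy files generally follows the WSO2 throttle policy shape below. This is a hedged sketch only: the MaximumCount and UnitTime values are illustrative assumptions, not values from the original post, and the namespace URIs should be checked against your WSO2 AM version.

```xml
<!-- Sketch of one ROLE-based tier in a throttle policy; the limits shown are illustrative -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:throttle="http://www.wso2.org/products/wso2commons/throttle">
    <throttle:MediatorThrottleAssertion>
        <wsp:Policy>
            <throttle:ID throttle:type="ROLE">Silver</throttle:ID>
            <wsp:Policy>
                <throttle:Control>
                    <wsp:Policy>
                        <!-- allow 5 requests per 60000 ms (1 minute) -->
                        <throttle:MaximumCount>5</throttle:MaximumCount>
                        <throttle:UnitTime>60000</throttle:UnitTime>
                    </wsp:Policy>
                </throttle:Control>
            </wsp:Policy>
        </wsp:Policy>
    </throttle:MediatorThrottleAssertion>
</wsp:Policy>
```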

      Lakmal WarusawithanaUpcoming Composite Application Support in Apache Stratos 4.1.0

      Since we are almost at the feature freeze (alpha) of the 4.1.0 release, I thought of writing a post summarizing composite application support in Apache Stratos. Simply put, it allows deploying an application that requires several different service runtimes, together with their relationships and dependencies. Further, each service runtime in the application can scale independently or jointly with its dependent services.

      In this blog post, I will try to explain in detail the functionality and benefits that you can get from composite application support.

      First of all, I would like to give very big thanks to the CISCO individuals who provided real-world use cases, feedback and support for carrying out all the testing, and also to all the Apache Stratos committers who did a really awesome job, with tremendous effort, to make this a reality.

      With this release, cartridge subscription is going to be obsolete, and application deployment is going to replace it with more functionality. Users can define an application, with its required service runtimes, deployment and autoscaling policies, artifact repositories and all dependencies, in simple structured JSON files which Stratos can understand, and Stratos will provision all the required runtimes in the defined manner.

      This functionality introduces a few new terms to Stratos: group, dependency, startup order, group scaling, dependent scaling, termination behaviours (terminate none, terminate dependents, terminate all), metadata service, etc. I will discuss in detail what exactly they are meant to do.


      In Stratos, service runtimes are created by the runtimes of the cartridges. For example, if we want to have a PHP runtime, we can use the PHP cartridge to get the runtime to deploy our PHP application.


      A group is metadata that we can define by grouping several cartridges together. It also supports nested groups, as well as interdependencies between the group members. See diagram-01 for a sample group.

      (Diagram-01: Group in an Application)

      Startup order

      Within a group or in an application, we can define the order that needs to be maintained between two or more depending cartridges or groups. In diagram-01, group G1 has two members, C1 and G2, with startup order C1,G2, meaning G2 has to wait until C1 is created first and comes into active mode. In group G2, the startup order C3,C2 means C2 has to wait until C3 is available.

      Termination behavior

      There are three termination behaviours supported in the Stratos 4.1.0 release:
      • terminate all
      • terminate dependents
      • terminate none

      With the termination behaviour, we can define the action to be taken if any dependent runtime fails in an application. In diagram-01, G2 has the terminate-all behaviour: on either a C2 or C3 failure (failures are identified as not satisfying the required minimum member count), all members are terminated and re-created in the defined startup order. G1 has the terminate-dependents behaviour: if C1 fails, G2 needs to be terminated, because G2 depends on C1 in the startup order. But if G2 fails, there is no need to terminate C1, because C1 is not dependent on G2.

      Dependent scaling

      When dependent scaling is defined among members (cartridges or groups) and any member scales (up or down), all other dependent members also scale, maintaining the defined ratio.

      (Diagram-02: Dependent scaling)
      In diagram-02, group G2, C2 and C3 are defined in the dependent scaling list. We will take four autoscaling iterations and try to understand how scaling up and down takes place with dependent scaling. Note that C2 and C3 have two different autoscaling policies: one is based on CPU utilization and the other on memory consumption. Also, C2 has a defined minimum of 2 instances in the cluster and a maximum of 6, while C3's minimum is 1 and maximum is 3.

      Iteration 1 : Both C2 and C3 are below the threshold values, hence they maintain the defined minimum instance counts: C2=2 and C3=1.

      Iteration 2 : C3's predicted memory consumption is 85%, which exceeds the threshold. Please see the equation below for calculating the number of C3 instances required to handle this predicted value.

      Required instance count = (predicted value / threshold) × current instance count, rounded up

      Required C3 instances = 85/80 × 1 = 1.06, which means we need 2 C3 instances to handle the predicted load (because we need to create one full instance, the minimum unit when it comes to instances).

      Since C2 depends on C3 with a ratio (C2 minimum instances : C3 minimum instances) of 2:1, and C3's new instance count is 2, Stratos will create 2 new C2 instances, making a total of 4 C2 instances, to keep the ratio we defined. Also note that C2 was under its threshold value, but priority goes to the dependent-scaling decision.

      Iteration 3 : In this case, C2's predicted CPU consumption is 75%, and if we apply the equation above, the required instance count is 75/60 × 4 = 5. Since C3 depends on C2 with a 2:1 ratio, C3 will increase its instance count to 3.

      Iteration 4 : In this scenario, C3 is under its threshold but C2 is above. The required instance count calculation for C2 is 36/60 × 5 = 3, which means we need to scale C2 down to 3 instances. With the dependent ratio, C3 should be at 2 instances, and C3 is below its autoscaling threshold, hence scaling down takes place.
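The arithmetic used across these four iterations can be sketched in a few lines of Java. This is only an illustration of the calculation described above, not the actual Stratos autoscaler code; the class and method names are mine.

```java
// Simplified sketch of the dependent-scaling arithmetic described above.
// Not the actual Stratos autoscaler; names and structure are illustrative.
public class DependentScalingSketch {

    // required instances = ceil(predicted value / threshold * current instances)
    public static int requiredInstances(double predicted, double threshold, int current) {
        return (int) Math.ceil(predicted / threshold * current);
    }

    // A dependent member keeps the defined ratio to the member that scaled,
    // e.g. C2:C3 = 2:1 -> when C3 goes to 2 instances, C2 goes to 4.
    public static int dependentInstances(int scaledCount, int ratioSelf, int ratioOther) {
        return (int) Math.ceil((double) scaledCount * ratioSelf / ratioOther);
    }

    public static void main(String[] args) {
        // Iteration 2: C3 predicted memory 85% vs threshold 80%, 1 instance -> 2
        System.out.println(requiredInstances(85, 80, 1));
        // ...and C2 follows C3 with ratio 2:1 -> 4 instances
        System.out.println(dependentInstances(2, 2, 1));
        // Iteration 3: C2 predicted CPU 75% vs threshold 60%, 4 instances -> 5
        System.out.println(requiredInstances(75, 60, 4));
        // Iteration 4: C2 at 36% vs threshold 60%, 5 instances -> scale down to 3
        System.out.println(requiredInstances(36, 60, 5));
    }
}
```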

      Group scaling

      If we define group scaling as true in a group definition, the group itself can scale as a group once each member reaches its maximum in the dependent manner. Please see diagram-02, iteration 5.

      Iteration 5 :  In this scenario, C3's predicted value is 160; the required instance calculation 160/80 × 2 = 4 means 4 C3 instances are needed, but C3's maximum is 3, so it cannot scale beyond 3. In this situation, if group scaling is enabled, Stratos will create a new G2 group instance, which has 2 instances of C2 and one instance of C3. As a result we have two G2 group instances, G2-1 and G2-2. See diagram-03 below for how it looks.

      (Diagram-03: Group Scaling in an Application)

      Application json

      The application JSON is a structured JSON file in which you can define the runtime of your application using cartridges, cartridge groups, dependencies, artifact repositories and autoscaling policies. The application JSON can be converted into an application template that can be reused to deploy the same application with different deployment patterns. The deployment policy is the way to define the deployment pattern, such as high availability, disaster recovery, cloud bursting, or multi-cloud with four or five nines, etc.

      Below is the basic application json structure

      • applicationId
      • alias
      • components
        • groups
          • alias
          • min/max
          • group scaling enable/disable
          • cartridges
            • min/max
            • subscribable info
          • groups
            • alias
            • min/max
            • group scaling enable/disable
            • cartridges
              • min/max
              • subscribable info
        • cartridges
          • min/max
          • subscribable info
        • dependencies
          • startup order
          • termination behavior
          • dependent scaling
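Putting the outline above together, an application JSON could look roughly like the hypothetical skeleton below. Treat every field name here as an illustrative assumption rather than the definitive 4.1.0 schema; the exact attribute names should be checked against the Stratos samples for your build.

```json
{
  "applicationId": "sample-app",
  "alias": "sample-app",
  "components": {
    "groups": [
      {
        "alias": "G2",
        "groupMinInstances": 1,
        "groupMaxInstances": 2,
        "groupScalingEnabled": true,
        "cartridges": [
          { "cartridgeMin": 2, "cartridgeMax": 6,
            "subscribableInfo": { "alias": "C2", "autoscalingPolicy": "cpu-based" } },
          { "cartridgeMin": 1, "cartridgeMax": 3,
            "subscribableInfo": { "alias": "C3", "autoscalingPolicy": "memory-based" } }
        ]
      }
    ],
    "dependencies": {
      "startupOrders": [ "C3,C2" ],
      "terminationBehaviour": "terminate-all",
      "dependentScaling": [ "C2,C3" ]
    }
  }
}
```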

      Deployment policy

      In Stratos, the deployment policy helps define the deployment patterns that need to take place.
      We will look at different deployment policies and how they can achieve different deployment patterns. Before that, let's look at the different concepts used in Stratos.

      Network Partition : NP : A network partition is a logical partition that you can define along network boundaries. Simply, it maps to IaaS regions. See the samples below;
      • NP1: EC2-US-WEST
      • NP2: EC2-US-EAST
      • NP3: OPENSTACK-NY-RegionOne

      Partition : P : A partition is an individual or logical group within a network partition, at a more fine-grained level. For example;
      NP1:P1 ->  US-WEST -> us-west-1 (N. California)
      NP1:P2 ->  US-WEST -> us-west-2 (Oregon)

      Below is the basic structure of the deployment policy.

      + appId
      + applicationPolicy[1..1]
      + networkPartition[1..n]
      + id
      + activeByDefault
      + partition[1..n]
      + id
      + provider
      + properties[1..n]
      + childPolicies[1..n]
      + alias (group alias or cartridge alias)
      + networkPartition[1..n]
      + id
      + algorithm
      + partition[1..n]
      + id
      + max

      applicationPolicy : holds the definitions of all the network partitions and partitions that will be used throughout the application.

      activeByDefault : If true, that network partition will be used by default. If false, it can be used when all other resources are exhausted (in bursting).

      childPolicies : Each child policy refers to the network partition and relevant partitions from the applicationPolicy to define its own deployment pattern. Please note that if you define a childPolicy referring to a group, the underlying clusters (cartridge runtimes)/groups will inherit the same policy.

      max : Maximum number of instances that can be handled by a partition.
      For a group: the maximum number of group instances in a partition.
      For a cluster: the maximum number of members that can be kept for a cluster instance in a partition.

      algorithm : Stratos supports two algorithms, namely “round robin” and “one after another”, which can be used depending on the required scenario.

      Please see a simple example where two child policies are applied at the cartridge runtime level.

      Child policy : sample1
      Partitions : P1, P2
      P1 Max : 4
      P2 Max : 3
      Algorithm : Round robin

      Child policy : sample2
      Partitions : P3, P4
      P3 Max : 2
      P4 Max : 3
      Algorithm : One after another

      (Diagram-04: Partitions and deployment policies)

      In diagram-04 you can see that C2 has the sample1 child policy applied, and all C2 instances are created in P1 and P2 in a round-robin manner. C3 has the sample2 child policy: it creates all C3 instances in P3 until P3 maxes out (which is 2 in this case) and then fills P4, which is the “one after another” algorithm applied.
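The two placement algorithms can be sketched as below. This is an illustration under the semantics described in this post (fill partitions in a cycle vs. fill one partition to its max, then the next), not Stratos source code; the partition maxes are taken from the sample policies above.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two partition-selection algorithms described above.
// Partitions are given as parallel arrays of ids and max capacities;
// instances are assigned one by one and the placement order is returned.
public class PartitionAlgorithms {

    // "round robin": cycle through the partitions, skipping any already at max.
    public static List<String> roundRobin(String[] ids, int[] max, int instances) {
        int[] counts = new int[ids.length];
        List<String> placement = new ArrayList<>();
        int i = 0;
        while (placement.size() < instances) {
            int p = i % ids.length;
            if (counts[p] < max[p]) {
                counts[p]++;
                placement.add(ids[p]);
            }
            i++;
            // stop if every partition is maxed out
            boolean full = true;
            for (int j = 0; j < ids.length; j++) if (counts[j] < max[j]) full = false;
            if (full) break;
        }
        return placement;
    }

    // "one after another": fill the first partition to its max, then move to the next.
    public static List<String> oneAfterAnother(String[] ids, int[] max, int instances) {
        List<String> placement = new ArrayList<>();
        for (int p = 0; p < ids.length && placement.size() < instances; p++) {
            for (int c = 0; c < max[p] && placement.size() < instances; c++) {
                placement.add(ids[p]);
            }
        }
        return placement;
    }

    public static void main(String[] args) {
        // sample1: P1 max 4, P2 max 3, round robin, 5 instances
        System.out.println(roundRobin(new String[]{"P1", "P2"}, new int[]{4, 3}, 5));
        // sample2: P3 max 2, P4 max 3, one after another, 4 instances
        System.out.println(oneAfterAnother(new String[]{"P3", "P4"}, new int[]{2, 3}, 4));
    }
}
```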

      Metadata service

      The Stratos metadata service is responsible for data/information sharing among the runtime services that may need it in a composite application. When creating an application, the Stratos Manager creates a JWT (JSON web token) OAuth token which includes the application id as a claim. This token is passed into the cartridge runtime instance/container as a payload. The cartridge agent uses this token to authenticate and authorize against the application whenever it gets/publishes data from/to the metadata service.

      sanjeewa malalgodaHow to implement custom JWT generator and custom claim retriever and link them in WSO2 API Manager 1.8.0

      Here in this post we will discuss how to use custom code for JWT generation and claims-retrieval logic. I have explained custom JWT generation with API Manager 1.8.0 in a previous post. Moving forward, we will see how we can call a custom claims-retrieval method from the JWT generator implementation. Once everything is configured properly you will see a JWT similar to the one below.

      {"iss":"","exp":"1418619165375","":"admin","":"2","":"DefaultApplication","":"Unlimited","":"/testam/sanjeewa","":"1.0.0","":"Bronze","":"PRODUCTION","":"APPLICATION_USER","":"admin","":"-1234","current_timestamp":"1418618265391","messge":"This is custom JWT"}

      As you can see, the current_timestamp and messge properties are included in the JWT by the customized JWT generator code.

      public Map populateCustomClaims(APIKeyValidationInfoDTO keyValidationInfoDTO, String apiContext, String version, String accessToken)
              throws APIManagementException {
          Long time = System.currentTimeMillis();
          String text = "This is custom JWT";
          Map map = new HashMap();
          map.put("current_timestamp", time.toString());
          map.put("messge", text);
          // If needed, you can generate access-token-based claims and add them to the map.
          return map;
      }

      Also, if you need to generate custom claims based on the access token, you can extend the org.wso2.carbon.apimgt.impl.token.ClaimsRetriever class and implement a method for that as follows.

      public SortedMap getClaims(String endUserName, String accessToken) throws APIManagementException {
          // your implementation should go here
      }

      Then call it inside populateCustomClaims as follows.

      public Map populateCustomClaims(APIKeyValidationInfoDTO keyValidationInfoDTO, String apiContext, String version, String accessToken)
              throws APIManagementException {
          CustomClaimsRetriever claimsRetriever = (CustomClaimsRetriever) getClaimsRetriever();
          if (claimsRetriever != null) {
              String tenantAwareUserName = keyValidationInfoDTO.getEndUserName();
              if (MultitenantConstants.SUPER_TENANT_ID == APIUtil.getTenantId(tenantAwareUserName)) {
                  tenantAwareUserName = MultitenantUtils.getTenantAwareUsername(tenantAwareUserName);
              }
              try {
                  // Call the getClaims method implemented in the custom claims retriever class
                  return claimsRetriever.getClaims(tenantAwareUserName, accessToken);
              } catch (Exception e) {
                  // log and fall through
              }
          }
          return null;
      }
      You can download the complete sample from this URL (Sample Code).

      Iranga MuthuthanthriWSO2 Big Data Platform: Collect Extract and Publish Twitter feed from WSO2 ESB to WSO2 BAM

       WSO2 provides a unique big data platform which combines real-time and batch processing, allowing businesses to take immediate action based on analysis of real-time and historical information, to identify new business opportunities and channels, as well as to act immediately on threats to mitigate business risk.

      Big Data can be described through the 3Vs of data: Volume, Variety and Velocity. For a business, what is valuable is not these three properties themselves but the business insights that can be extracted from Big Data. In the following series of posts I attempt to demonstrate how, utilising the WSO2 Big Data Platform, it is possible to gain meaningful insights from Big Data. This first post demonstrates collecting data for analytics through the product offerings WSO2 ESB (4.8.1) and WSO2 BAM (2.5.0).

      WSO2 ESB 4.8.1

      WSO2 ESB enables enterprises to connect to services such as Twitter and Salesforce through a set of pre-defined connectors. Connectors can be downloaded from the connector store. We will be using the Twitter Connector as the example for this post.

      Step 1: 

      Initialise Twitter Connector 

         <localEntry key="Twitter">

      Step 2

      Setup a BAM Server Profiler

      WSO2 ESB provides a BAM mediator to publish data to BAM. A server profile needs to be created for the mediator to publish data. In this example, for the stream configuration payload properties, we will be getting the values at runtime based on the mediation flow specified in the next step.

      Step 3: Create a Proxy Service for Data Publishing

      The message flow connects to Twitter and uses the search operation to get the Twitter feed. The received Twitter feed data is extracted using the property mediator with JSON expressions. In this example we extract the Twitter user and the text of the feed. The message is then published to BAM through the BAM mediator. The complete configuration for the proxy is given below.

       <proxy name="TwitterProxy"
      transports="http https"
      <log level="full">
      <property name="INIT" value="##Call Proxy###"/>
      < configKey="Twitter">
      <log level="full">
      <property name="RESULT" value="### Search Result ###"/>
      <property xmlns:ns="http://org.apache.synapse/xsd"
      <property xmlns:ns="http://org.apache.synapse/xsd"
      <serverProfile name="Tweet_Stream">
      <streamConfig name="TweetStream" version="1.0.0"/>

      WSO2 BAM 2.5.0

      WSO2 BAM collects published data through defined event streams and persists the data in Cassandra data storage, to be processed through Hadoop for analytics.

      Step 5: View Collected Data

      Login to WSO2 BAM and view Cassandra Explorer for the created Stream.

      Nandika JayawardanaPackt's $5 ebook Bonanza

      Following the success of last year’s offer, Packt Publishing will be celebrating the holiday season with a bigger $5 offer. Check it out here From Thursday 18th December, every eBook and video will be available on the publisher’s website for just $5. Customers are invited to purchase as many as they like before the offer ends on Tuesday January 6th, making it the perfect opportunity to try something new or to take your skills to the next level as 2015 begins.

      Sajith RavindraSample on using Apache Cassandra with WSO2 CEP

      In this post I hope to explain how you can configure WSO2 CEP to store events in Apache Cassandra. This post does not intend to describe how to configure WSO2 CEP to receive events. Hence, I'm using the configuration of sample 0001 and extending it to publish events to Cassandra in addition to publishing them as WSO2Event*.

      In this sample,
      - CEP receives events as WSO2Events
      - Received events are stored in Cassandra.
      For the sake of simplicity there is no processing done, and events are just passed through.

      *WSO2Event is a Thrift based format for events supported by WSO2 CEP.

      Configuring WSO2 CEP Server

      1. Bring up CEP server with sample configuration 0001 by executing following command in <CEP_HOME>/bin,

      ./ -sn 0001 -Ddisable.cassandra.server.startup=false

      By setting disable.cassandra.server.startup=false we bring up the embedded Cassandra server inside the CEP server, and events will be stored in it. But note that this embedded Cassandra is only to be used for testing purposes, and it's recommended to DISABLE it IN PRODUCTION deployments.

      2. Add a Cassandra output event adaptor [1] by navigating to "Configure" -> "Output Event Adaptors". The output adaptor is the component which communicates with the Cassandra server and carries out the transport.

      Note that I have configured the output adaptor to connect to the embedded Cassandra server at localhost:9160 and used "admin" as both the username and password.

      3. Create an event formatter using the "CassandraOutputEventAdaptor" by navigating to "Event Streams" -> "OutFlows" of the stream.

      The event formatter is the component which converts the outbound events to a format which can be stored in Cassandra. Here you have to enter the keyspace name and column family name to be used when storing events in Cassandra.

      Only the 'map' "Output Event Type" can be used with Cassandra; it maps the value of each attribute in an event to key-value pairs. Please refer to [2] for more on map event formatters.

      Now you have configured CEP server to store/publish events to Cassandra.

      Verifying if it works

      To verify that events have really been stored in the Cassandra server, let's use "cassandra-cli".

      1. Log in to the Cassandra server using "admin" as both the username and password.

      ./cassandra-cli -u admin -pw admin

      2. Execute the following commands,
      USE TestEventKeyspace;        <- Use the keyspace configured in the event formatter
      LIST TestEventColumnFamily;   <- List all rows in the column family
      If events were successfully stored, something similar to the following should be shown.


      [1] - Output cassandra event adaptor - WSO2 CEP Documentation
      [2] - 

      sanjeewa malalgodaHow to use account lock/ unlock feature in WSO2 API Manager 1.6.0

      You may use the account lock/unlock feature to block user token generation. I have tried this on my local machine. Here are the steps I followed.

      I installed the following features into API Manager 1.6.0 from the p2 repository. For this I used IS 4.5.0 features.
      User Profiles Feature
      Claim Management Feature
      Account Recovery and Credential Management Feature

      Create a new user named testuser and grant the subscriber permission.

      Then install the required features into APIM 1.6.0 and restart the server.

      Then lock the test user as follows:
      Go to the claim management UI and mark the accountLocked claim as supported by default.

      Then go to the users list, select the required user and lock the account.

      I enabled the following property in the configuration file.


      I restarted the server to make sure this is not a claim cache issue. Now this account is locked and cannot be used anymore.

      Now if you try to generate a token you should see something like this.

      curl -k -d "grant_type=password&username=testuser&password=testuser&scope=PRODUCTION" -H "Authorization: Basic ZkZlZkRFY0dtNDFJVk50VUl2YXdMeDJubUxFYTozNG9aTmZhQmpHWHdUQmo1N19mT045dHpqaUVh, Content-Type: application/x-www-form-urlencoded" https://localhost:9443/oauth2/token

      {"error":"invalid_grant","error_description":"Provided Authorization Grant is invalid."}

      In the back-end logs you should see this.

      [2014-12-18 16:56:28,832]  WARN {org.wso2.carbon.identity.mgt.IdentityMgtEventListener} -  User account is locked for user : testuser. cannot login until the account is unlocked
      [2014-12-18 16:56:28,833] ERROR {org.wso2.carbon.identity.oauth2.token.handlers.grant.PasswordGrantHandler} -  Error when authenticating the user for OAuth Authorization.
      org.wso2.carbon.user.core.UserStoreException: 17003
          at org.wso2.carbon.identity.mgt.IdentityMgtEventListener.doPreAuthenticate(

      If you need more information please visit this article.

      Umesha GunasingheHow to enable audit logs for WSO2 API-M

      In API-M there are no audit logs enabled by default. If you consider IS, start up the server and log in as admin: under the [IS-HOME]/repository/logs folder you can see there is a file called audit.log.

      But this is not the case with WSO2 API-M; the audit logs are not enabled by default. You have to enable them manually in the configuration files, but this can be done in a few easy steps.

      1) Download WSO2 API Manager
      2) Then extract it to a folder
      3) Go to [API-M HOME]/repository/conf/ file and add the following configuration for the log file

      log4j.logger.AUDIT_LOG=INFO, AUDIT_LOGFILE

      then add the following set of configurations...

      # Appender config to AUDIT_LOGFILE
      # (the appender definition lines below follow the standard Carbon audit appender;
      #  adjust the file path if your layout differs)
      log4j.appender.AUDIT_LOGFILE=org.apache.log4j.DailyRollingFileAppender
      log4j.appender.AUDIT_LOGFILE.File=${carbon.home}/repository/logs/audit.log
      log4j.appender.AUDIT_LOGFILE.Append=true
      log4j.appender.AUDIT_LOGFILE.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
      log4j.appender.AUDIT_LOGFILE.layout.ConversionPattern=[%d] %P%5p - %x %m %n
      log4j.appender.AUDIT_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
      log4j.appender.AUDIT_LOGFILE.threshold=INFO

      4) Save the configurations and start the server

      5) TA-DA now you have the audit logs in API-M :)

      Chathurika MahaarachchiSolving Compound class names are not supported error in Selenium

      When running your Selenium-based test cases, you may come across an issue like:
      "org.openqa.selenium.InvalidSelectorException: Compound class names are not supported. Consider searching for one class name and filtering the results or use CSS selectors."

      This issue occurs because Selenium cannot find elements by name or className when the value contains white space.
      Here is a simple way to resolve the issue.

      Imagine you have a div like div class="ddArrow arrowoff"; if you try to use the class name to identify the element, you will get the above error.

        @FindBy(how = How.CLASS_NAME, using = "ddArrow arrowoff")
          private static WebElement ddarrow;

      To resolve the issue you need to use a cssSelector. Since multiple CSS classes are selected by chaining them with dots, the selector will be ".ddArrow.arrowoff".
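As a small sketch, the conversion from a compound class name to the equivalent CSS class selector is just a matter of joining the classes with dots (the class and method names below are mine, not Selenium API):

```java
// Sketch: convert a compound class name ("ddArrow arrowoff") into the
// equivalent CSS class selector (".ddArrow.arrowoff") that Selenium accepts
// via By.cssSelector / How.CSS.
public class CompoundClassToCss {

    public static String toCssSelector(String compoundClassName) {
        // Each whitespace-separated class becomes a ".class" segment.
        return "." + compoundClassName.trim().replaceAll("\\s+", ".");
    }

    public static void main(String[] args) {
        System.out.println(toCssSelector("ddArrow arrowoff"));
    }
}
```

With the resulting selector, the failing `@FindBy(how = How.CLASS_NAME, ...)` locator can be replaced by a CSS-based one.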

      Sajith KariyawasamPuppet configs

      Puppet master
          The server name in /etc/puppet/puppet.conf is not needed.
          Set autosign=true in /etc/puppet/puppet.conf.

      Obtain the puppet master hostname by executing "hostname -f".

      Puppet agent
          Set the puppet master hostname obtained with the previous command in /etc/puppet/puppet.conf, if it is resolvable. Otherwise you need to add that hostname to the /etc/hosts file.
          The agent's hostname can be any value; there is no need to have the master's hostname in it. The puppet agent's hostname needs to be resolvable.
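As a quick sketch, the relevant puppet.conf entries on each side look roughly like this. The hostname is a placeholder value, and autosign=true should only be used in trusted environments since it signs all certificate requests automatically:

```ini
# /etc/puppet/puppet.conf on the puppet master (sketch)
[master]
autosign = true

# /etc/puppet/puppet.conf on a puppet agent (sketch; hostname is a placeholder)
[agent]
# value returned by `hostname -f` on the master
server = puppet-master.example.com
```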


      Denis WeerasiriNear Lindt cafe @ Martin Place, Sydney

      Flowers and Messages left at the memorial site in Martin place

      Aruna Sujith KarunarathnaExtending SCIM User Schema Of WSO2 Identity Server

      In this post we are going to extend the SCIM user schema of WSO2 Identity Server and add custom fields. You can find more details about extending the SCIM user schema of WSO2 Identity Server and SCIM user provisioning with WSO2 Identity Server via these links: [1], [2]. In this sample we are going to add a custom field called dateOfBirth to the schema. Follow these steps to enable the custom field.

      Lali DevamanthriThree IT transformations for The Future and How to Get There

      The headache of IT has always been the infrastructure, with IT spending so much time focusing on infrastructure that there's been little time for services or innovation. That ratio is changing. The service model is taking over: software-as-a-service, platform-as-a-service, desktop-as-a-service, infrastructure-as-a-service, you name it. It's all about service. In the future, IT will spend less of its time handling infrastructure and more of its time managing and enabling service relationships throughout the enterprise, whether those relationships are between IT and their business peers or between the dozens of service domains that exist inside and outside of a company.

      But in order to manage and enable those relationships, having insight into its internal workings is critical. IT should have a firm understanding of its own services (if only to be able to compare them financially and operationally with other options). Those who work in IT should become their own service engineers before they can apply their service expertise to other service disciplines.

      The problem is that IT spends much of its time helping every other department implement systems to the detriment of its own: project management, service desk and systems management all tend to be fragmented and frustrating for everyone who comes in contact with them. These systems contribute to IT fire drills when they should be contributing to IT fire prevention. In short, IT requires systems that help it manage IT. Only then can it adequately (and even, in time, superlatively) extend the IT service model to automate service processes for other internal service domains within the enterprise, including HR, facilities, legal, finance, operations, etc. This same model can also be applied to service relationships that extend beyond the walls of the enterprise to customers, providers, suppliers and partners.

      Three IT transformations can help IT get its own house in order to become the proactive partner of the business. By applying these concepts, IT departments at major enterprises are changing the way they engage with their business peers.

      Transformation No. 1: Service Consolidation and Standardization

      The first transformation starts with getting IT systems under control. That involves consolidating fragmented and redundant service systems to standardize operational processes for global usage. Just as ERP systems brought finance, manufacturing, human resources and other applications into a single system, IT needs its own single system of record. Ideally, this “ERP for IT” uses one data model, one code base, one set of APIs and one user interface to give IT staff complete visibility into the business of IT.

      Imagine a single window into everything relating to management: the service portfolio, cost, governance, project portfolios and reporting. Imagine a single window into operations: the service catalog, planned changes, release schedules, incidents, software development phases and service levels, and being able to communicate about them in real time. Imagine a single window into the infrastructure: the configuration management database, automated discovery, asset management, orchestration of automated workflow for cloud and VM lifecycle management, and even a way to create custom applications.

      As Peter Argumaniz, VP of tool and automation at financial services firm AIG, noted at the user conference, his company found itself with 2,000 management tools. It has already consolidated close to 100 service applications, with more being moved onto the ServiceNow Platform this year. “Now we have one system of record,” Argumaniz said, adding that the platform has worked so well that the infrastructure and application development teams are moving onto it, too.

      A director of IT for a major chip manufacturer noted that his company has used ServiceNow to consolidate 70 systems within IT down to 40. “Each of our IT employees uses this platform to make us more service-centric,” he said, “because as our CIO says, when IT is mediocre, our enterprise is mediocre.”

      Transformation No. 2: Consumerized Service Experience.

      Once IT transforms the way it manages its internal workings, it can turn to how it interacts with users. Users aren't generally keen on interacting with IT, because it is more often frustrating than fruitful. Consumerization is the idea of delivering a consumer-like service experience: making access to enterprise systems as easy and intuitive as consumer applications such as online banking, e-commerce or social media. In those scenarios, when consumers have questions, they have a multitude of options. They can get what they need through an intuitive service catalog; they can search for answers using keywords, engage in collaborative social streams and chat in real time with a customer service representative.

      NYSE-Euronext North America CIO Paul Cassell remembered when he first came to the exchange, it was using a competing trouble-ticket application. The traders on the floor hated it so much, “they wouldn't touch it; they would just call IT.” After the exchange deployed ServiceNow, IT worked with the traders to set up a system that was easier to use. “Now the traders enter their requests from the floor, something they would never do before,” says Cassell. The new system completely changed how traders interact with IT, making IT more accessible and integral to business efficiency.

      Transformation No. 3: Service Automation

      This may be the most important transformation of all: moving from manual,
      time-intensive, resource-intensive activities to automated ones. Just as
      IT spends too much time on infrastructure and too little time on
      service, it tends to spend too much time on handling processes
      manually and too little time on analyzing them to see what can be
      automated. This is not a subterfuge designed to reduce numbers in
      IT; it is a way to make IT more efficient and to redeploy staff away
      from repetitive, mindless work toward more strategic capabilities.
      It is an opportunity to systematize what IT does in such a way that
      it can be done faster and more efficiently.

      This kind of automation will drive a new level of efficiency in the
      enterprise, enabling IT to scale the delivery of service to support
      the overwhelming demands of the business. As consumerization
      and self-service come together with automation, the future of
      IT begins to unfold. Operational activities happen automatically
      and correctly. Employees and IT beneficiaries receive faster
      and more consistent service, while IT can scale service delivery
      processes dramatically. Consumerization and self-service becomes
      the backbone of the typical service experience. All
      of this activity will be trackable and auditable.
      Automating a business process also gives enterprises the
      opportunity to rethink IT, to question why it’s done a certain way.
      When Underwriters Laboratories started automating some of
      its IT processes, it discovered that some of those processes
      had been unchanged for decades. A typical change-request form had
      90 fields; it took a long time to fill in, and the need for all those fields
      wasn’t always obvious.

      Alison Collop, director of global IT for Coca-Cola, uses ServiceNow to automate not only third-party access
      management, serving thousands of customers, vendors, bottlers
      and partners, but also its hardware and software provisioning.
      “The systems used to be very fragmented, but now we have a
      catalogue where we present applications,” she said. “It links into
      the ServiceNow workflow for approvals.”
      Bob Melk, senior vice president of IDG Enterprise, reported on
      the most recent results of CIO magazine’s “State of the CIO”
      survey, noting that the percentage of CIOs who felt they were
      perceived as a “service provider” dropped between 2011 and
      2012. The percentage of those who thought they were perceived
      as a “business peer” climbed 5 percent, while those who felt they
      were perceived as a “cost center” dropped 6 percent.
      Melk also noted that CIOs’ priorities were now focused more on
      service than on maintenance compared with previous years.
      “Where CIOs once focused on cost, they’re looking at
      growth. Where they once looked at efficiency, they’re now looking
      at speed and customer satisfaction.”
      Cassell automated software delivery for his company’s
      eight stock exchanges, which all used different bug tracking
      systems. With ServiceNow, development, QA and project
      management are all tied together, so that software delivery
      can happen faster than ever before. “We can deliver hundreds
      of new packages over a weekend, so that IT is no longer the
      bottleneck for new features,” he said.
      Sometimes the automation of processes doesn’t even have to
      be IT related. In many cases, the ServiceNow application has
      been adopted beyond IT. GE Energy sells, services and maintains
      equipment for power generation, water process technology and
      gas turbines. “We use ServiceNow as more than an IT service
      platform,” said Reggie Acloque, information management leader.
      “We’re using it to automate workflow processes of our engineering
      team, which uses it to enter questions about servicing. We use it
      to process claims as well. It’s transformed the way we operate.”

      Umesha GunasingheUse cases with WSO2 IS 5.0.0 - Part 2 - User Provisioning - Part 1

      Let's discuss a user provisioning use case built on the provisioning framework of WSO2 Identity Server 5.0.0.

      With the introduction of the new Identity Server, there are many provisioning capabilities available. There are three major concepts: inbound provisioning, outbound provisioning, and Just-In-Time (JIT) provisioning. Inbound provisioning means provisioning users and groups from an external system into the IS. Outbound provisioning means provisioning users from the IS to other external systems. JIT provisioning means that when a user tries to log in from an external IdP, the user can be created on the fly in the IS. Please read this awesome blog post about the provisioning framework of WSO2 Identity Server.

      Now, let's take a sample scenario and discuss how provisioning would work using the provisioning capabilities of WSO2 IS.

      The above diagram depicts a scenario where a user is provisioned from an external system (inbound provisioning), and in the same flow, once the user is provisioned to IS - A, the user is provisioned onward to other external systems such as Google Apps or another IS (outbound provisioning).

      From an external system, you can provision users with the SCIM or SPML connector, or use the SOAP admin services to add a user. If none of the above can be used, you can always write a custom provisioning connector and plug it into WSO2 Identity Server.
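      As a rough illustration of the inbound SCIM path, the sketch below builds a minimal SCIM 1.1 user-creation payload of the kind an external system could POST to the Identity Server's SCIM Users endpoint. The class name, field values, and endpoint URL mentioned in the comments are illustrative assumptions, not taken from the scenario above:

      ```java
      public class ScimUserSketch {

          // Builds a minimal SCIM 1.1 user-creation payload (field values are illustrative).
          static String buildScimUser(String userName, String password) {
              return "{"
                      + "\"schemas\":[\"urn:scim:schemas:core:1.0\"],"
                      + "\"userName\":\"" + userName + "\","
                      + "\"password\":\"" + password + "\""
                      + "}";
          }

          public static void main(String[] args) {
              String payload = buildScimUser("alice", "secret");
              // An external system would POST this payload, with admin credentials over
              // HTTPS, to the IS SCIM Users endpoint (e.g. https://localhost:9443/wso2/scim/Users).
              System.out.println(payload);
          }
      }
      ```

      In a real deployment, the Content-Type would be application/json and the response would carry the created user's SCIM id.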

      For provisioning users to external systems, there are OOTB connectors shipped with WSO2 IS, or you can write a custom connector according to your requirements.

      Let's talk about how to configure such a provisioning scenario in the next related post.

      Srinath PereraICTER 2014 Invited Talk: Large Scale Data Processing in the Real World: from Analytics to Predictions

      Last week I did an invited talk at the ICTER 2014 conference in Colombo, discussing "Big Data", and the following are the slides.



      Large scale data processing analyses and makes sense of large amounts of data. Although the field itself is not new, it is finding many use cases under the theme "Big Data", where Google itself, IBM Watson, and Google's driverless car are some of the success stories. Spanning many fields, large scale data processing brings together technologies like distributed systems, machine learning, statistics, and the Internet of Things. It is a multi-billion-dollar industry including use cases like targeted advertising, fraud detection, product recommendations, and market surveys. With new technologies like the Internet of Things (IoT), these use cases are expanding to scenarios like smart cities, smart health, and smart agriculture. Some use cases, like urban planning, can be slow and are done in batch mode, while others, like stock markets, need results within milliseconds and are done in streaming fashion. There are different technologies for each case: MapReduce for batch processing, and Complex Event Processing and Stream Processing for real-time use cases. Furthermore, the types of analysis range from basic statistics like the mean to complicated prediction models based on machine learning. In this talk, we discuss the data processing landscape: concepts, use cases, technologies, and open questions, while drawing examples from real world scenarios.

      sanjeewa malalgodaSample JAX-RS web application to test Application servers for basic vulnerabilities

      I have created a web application [1] that we can use for security tests. With this REST service, we can perform basic security tests like file copy, delete, system property read, etc. A sample JMeter test case is also included to verify its functionality. You need to deploy this in the tenant space and call the REST APIs as follows.


      Requests should be sent in the following format:

      HTTP GET - Read file (complete file path)

      HTTP POST - Create file (complete file path)

      HTTP DELETE - Delete file in Server (complete file path)

      HTTP GET - Read file (file path from carbon server home)

      HTTP POST - Create file (file path from carbon server home)

      HTTP DELETE - Delete file in Server (file path from carbon server home)

      HTTP GET - Read system property

      HTTP POST - Copy files in server using carbon Utility methods

      HTTP POST - Delete files in server using carbon Utils

      HTTP POST - Get registryDBConfig as string

      HTTP POST - Get userManagerDBConfig config as string

      HTTP GET - Get network configs as string

      HTTP GET - Get server configuration as string

      HTTP POST - Get network configs as string

      ============ The following operations are not covered by the Java Security Manager ============
      HTTP POST - Generate OOM

      HTTP POST - Generate high CPU

      HTTP POST - Generate system call
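      As a minimal sketch of the kinds of file and property operations such a test service exercises behind its GET/POST/DELETE resources (the class and method names here are illustrative, not the actual application from [1]), each probe simply attempts the operation and reports whether the server permitted it:

      ```java
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;

      public class SecurityProbe {

          // Attempt to read a file given a complete path; returns null if denied or missing.
          static String readFile(String path) {
              try {
                  return new String(Files.readAllBytes(Paths.get(path)));
              } catch (IOException | SecurityException e) {
                  return null;
              }
          }

          // Attempt to create a file at the given path.
          static boolean createFile(String path, String content) {
              try {
                  Files.write(Paths.get(path), content.getBytes());
                  return true;
              } catch (IOException | SecurityException e) {
                  return false;
              }
          }

          // Attempt to delete a file at the given path.
          static boolean deleteFile(String path) {
              try {
                  return Files.deleteIfExists(Paths.get(path));
              } catch (IOException | SecurityException e) {
                  return false;
              }
          }

          public static void main(String[] args) {
              String tmp = System.getProperty("java.io.tmpdir") + java.io.File.separator + "probe.txt";
              System.out.println(createFile(tmp, "test"));  // succeeds where writes are permitted
              System.out.println(readFile(tmp));
              System.out.println(deleteFile(tmp));
              // System property read, as in the HTTP GET probe above
              System.out.println(System.getProperty("java.version") != null);
          }
      }
      ```

      A properly configured Java Security Manager policy would make these probes return false/null rather than succeed.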

      sanjeewa malalgodaHow to write API Manager selenium test case to login publisher and view stats

      In this article, I will share sample code to log in to the API Publisher and view the stats dashboard. You can use similar tests to test API Manager stats dashboard related functionality.

      Add the following source to the integration tests and the full class name to the testng.xml file.

      import org.testng.Assert;
      import org.testng.annotations.Test;
      import org.openqa.selenium.By;
      import org.openqa.selenium.WebDriver;
      import org.openqa.selenium.WebElement;
      import org.openqa.selenium.firefox.FirefoxDriver;
      import org.openqa.selenium.support.ui.ExpectedConditions;
      import org.openqa.selenium.support.ui.WebDriverWait;

      public class DOJOUIElementTestCase {

          @Test(groups = {""}, description = "APIM stats DOJO element test case")
          public void LoginWithEmailUserNameTestCase() throws Exception {
              WebDriver driver = new FirefoxDriver();
              // Navigating to the API Publisher and logging in is assumed to have happened here
              WebDriverWait wait = new WebDriverWait(driver, 10);
              wait.until(ExpectedConditions.elementToBeClickable(By.linkText("Statistics")));
              driver.findElement(By.linkText("API Response Times")).click();
              wait.until(ExpectedConditions.elementToBeClickable(By.id("serviceTimeChart")));
              driver.findElements(By.id("serviceTimeChart")).get(0).click();
              wait.until(ExpectedConditions.elementToBeClickable(By.xpath("//div[contains(@class,'dijitTooltipRight')]")));
              WebElement toolTip = driver.findElement(By.xpath("//div[contains(@class,'dijitTooltipRight')]"));
              Assert.assertEquals(toolTip.getText().contains("ms"), true);
              driver.quit();
          }
      }

      Umesha GunasingheRun Time Governance Use Case with WSO2 GREG and ESB - 1

      Hi y'all,

      Long time no see... How are you all doing? It is Christmas time again... Let's try to learn a run-time governance scenario with WSO2 Governance Registry today. :)

      Let's start understanding the scenario with a diagram:

      We can describe the above diagram as follows:

      1. A custom security policy is uploaded via GREG.
      2. GREG is mounted to the ESB.
      3. A secured proxy is created by applying the custom policy from the registry (referring to the policy in GREG).
      4. A proxy is created for the service hosted in the application server.

      Once the service is invoked via SoapUI, since the security policy is applied at the ESB, it will refer to the policy in the Governance Registry at run-time. Once the security policy is properly validated, the response will be passed back to the invoking party.

      In the next post, let's talk about how to build up the above scenario.

      Bye bye for now...:)

      Chanaka FernandoImplementing Rule based systems with WSO2 products - Learning tutorials and resources

      Here are some resources you can use to learn about implementing rule-based solutions using the WSO2 product stack.

      Writing Business rules with WSO2 Carbon platform

      Samples available at WSO2 BRS documentation

      Integrating business rules and the architecture of rules component

      Integrate business rules with WSO2 ESB and WSO2 BRS

      Integrate business rules with WSO2 BPS and WSO2BRS

      Complex event processing and business rules integrations with SOA

      Developing business rule services with WSO2 Developer Studio

      Asanka Dissanayake[WSO2 ESB] Change local name of an element of the payload

      There was a requirement to change the local name of an element in the payload using a parameter in the incoming payload. For example:

      the user sends the following payload to the ESB


      The requirement was for the value in


      to become the root element of the response message, like below.


      Guess what!!! It is just the following piece of code.

      <sequence xmlns="http://ws.apache.org/ns/synapse" name="createPayload">
           <property name="RESPONSE_ROOT_TAG" expression="//action/text()" type="STRING"/>
           <script language="js">
               var rootElementName = mc.getProperty('RESPONSE_ROOT_TAG');
               // The remainder of the original script was truncated; a typical approach
               // is to rename the root element of the payload in place:
               var payload = mc.getPayloadXML();
               payload.setName(rootElementName);
               mc.setPayloadXML(payload);
           </script>
      </sequence>
      Hope this can save someone’s time :) :)

      Isuru PereraJava JIT Compilation, Inlining and JITWatch

      Dr. Srinath recently shared an InfoQ article with us titled "Is Your Java Application Hostile to JIT Compilation?". In this blog post, I'm writing down what I learnt from that article.

      Overview of Just-In-Time (JIT) compiler

      Java code is usually compiled into platform independent bytecode (class files) using "javac" command. This "javac" command is the Java programming language compiler.

      The JVM is able to load class files and execute the Java bytecode via the Java interpreter. Even though this bytecode is usually interpreted, it might also be compiled into native machine code using the JVM's Just-In-Time (JIT) compiler.

      Unlike a normal compiler, the JIT compiler compiles the code (bytecode) only when required. With the JIT compiler, the JVM monitors the methods executed by the interpreter and identifies the “hot methods” for compilation. After identifying the Java method calls, the JVM compiles the bytecode into more efficient native code.

      In this way, the JVM avoids interpreting a method each time it is executed, thereby improving the run-time performance of the application.

      The -client and -server systems

      It's important to note that there are different JIT compilers for the -client and -server systems. A server application runs for a long time and therefore needs more optimizations, whereas a client application may not need as many optimizations as a server application.


      There are many optimization techniques for JIT compilation, such as “inlining” and "dead code elimination".

      Let's look at "inlining" optimization technique in this blog post.

      The inlining process optimizes the code by substituting the body of a method into the places where that method is called.

      Following are some advantages:

      • Eliminates the need for virtual method lookup
      • Saves the cost of calling another method
      • No need to create a new stack frame
      • No performance penalty for good coding practices

      Inlining depends on the method size. The threshold is configured by “-XX:MaxInlineSize” and the default value is 35 bytes.

      For “hot” methods, which are called in high frequency, the threshold value is increased to 325 bytes. This threshold value is configured by “-XX:FreqInlineSize”.
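      As an illustrative sketch (the class and method names are my own, not from the article), the tiny method below is well under the default 35-byte MaxInlineSize, so a hot loop over it is a good inlining candidate; running it with the “-XX:+PrintCompilation” flag described below shows when HotSpot compiles it:

      ```java
      public class InlineDemo {

          // A tiny method, far below the 35-byte MaxInlineSize default: a good inlining candidate.
          static int add(int a, int b) {
              return a + b;
          }

          public static void main(String[] args) {
              long sum = 0;
              // Call the method enough times for HotSpot to treat it as "hot".
              for (int i = 0; i < 1_000_000; i++) {
                  sum += add(i, 1);
              }
              System.out.println(sum); // prints 500000500000
          }
      }
      ```

      Run it as `java -XX:+PrintCompilation InlineDemo` to see compilation lines for the hot method appear in the output.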

      JarScan tool in JITWatch

      JITWatch is an open source tool developed to give much better insight into how the JIT compiler affects code.

      JarScan is a tool included in JITWatch to analyze jar files and count the bytes of each method’s bytecode.

      With this tool, we can identify the methods, which are too large to JIT.

      PrintCompilation JVM Flag

      The “-XX:+PrintCompilation” flag shows basic information about the HotSpot method compilation.

      It generates logs like:
      37    1      java.lang.String::hashCode (67 bytes)
      124 2 s! java.lang.ClassLoader::loadClass (58 bytes)

      In the example, the first column shows the time in milliseconds since the process started.

      The second column is the compile ID, which tracks an individual method as it is compiled, optimized, and possibly deoptimized again.

      The next column shows additional information in the form of flags (s - “synchronized”, ! - “has exception handlers”).

      The last two columns show the method name and the size of the bytecode in bytes.

      This flag doesn’t have much impact on JIT compiler performance and therefore we can use this flag in production.

      We can use the PrintCompilation output and the JarScan output to determine which methods are compiled.

      There are two minor problems with the PrintCompilation output.

      1. The method signature is not printed, which makes it difficult to identify overloaded methods.
      2. There is no way to direct the log output to a different file.

      Identifying JIT-friendly methods

      The following is a simple process to determine whether methods are JIT-friendly:
      1. Identify methods that are in the critical path for your transactions.
      2. The JarScan output should not list such methods.
      3. The PrintCompilation output should show such methods being compiled.

      Comparison of Java 7 and Java 8 methods

      The InfoQ article compares the “$JAVA_HOME/jre/lib/rt.jar” of JDK 7 and 8 to identify the changes in inlining behaviour.

      Java 7 has 3,684 inline-unfriendly methods and Java 8 has 3,576 such methods. It’s important to know that methods like “split”, “toLowerCase”, and “toUpperCase” in String are not inline-friendly in either Java version. This is due to handling UTF-8 data rather than ASCII.


      The JITWatch tool can analyze the compilation logs generated with the “-XX:+LogCompilation” flag.

      The logs generated by LogCompilation are XML-based and contain a lot of information related to JIT compilation; hence these files are very large.


      This blog post is about the Just-In-Time (JIT) compiler and its "inlining" optimization technique. The JIT compiler mainly helps to optimize run-time performance in the HotSpot JVM.

      With the JITWatch tools and PrintCompilation, we can understand the JIT behaviour of our applications. With a quick analysis, we can figure out performance impacts.

      The important point is that if a method is too large, the inlining optimization will not be used. Therefore, it's important to write JIT-friendly methods when the performance of a system matters.

      It’s also important to measure the performance of the original system and compare it after applying fixes. We should never apply performance-driven changes blindly.

      Chanaka Fernando12 Useful tcpdump commands you can use to troubleshoot network issues

      tcpdump is one of the most powerful and widely used command-line packet sniffer or packet analyzer tools, used to capture or filter TCP/IP packets received or transferred over a network on a specific interface. It is available on most Linux/Unix based operating systems. tcpdump also gives us an option to save captured packets to a file for future analysis. It saves the file in pcap format, which can be viewed with the tcpdump command or with an open source GUI tool called Wireshark (a network protocol analyzer) that reads tcpdump pcap files.

      How to Install tcpdump in Linux

      Many Linux distributions already ship with the tcpdump tool; if you don’t have it on your system, you can install it using the following yum command.
      # yum install tcpdump
      Once the tcpdump tool is installed on your system, you can continue to browse the following commands with their examples.

      1. Capture Packets from Specific Interface

      When you execute the tcpdump command without options, it captures from all the interfaces and the screen scrolls until you interrupt it; with the -i switch, it captures only from the desired interface.
      # tcpdump -i eth0

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      11:33:31.976358 IP > Flags [P.], seq 3500440357:3500440553, ack 3652628334, win 18760, length 196
      11:33:31.976603 IP > Flags [.], ack 196, win 64487, length 0
      11:33:31.977243 ARP, Request who-has tell, length 28
      11:33:31.977359 ARP, Reply is-at 00:14:5e:67:26:1d (oui Unknown), length 46
      11:33:31.977367 IP > 4240+ PTR? (44)
      11:33:31.977599 IP > 4240 NXDomain 0/1/0 (121)
      11:33:31.977742 IP > 40988+ PTR? (44)
      11:33:32.028747 IP > NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST
      11:33:32.112045 IP > NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST
      11:33:32.115606 IP > NBT UDP PACKET(137): QUERY; REQUEST; BROADCAST
      11:33:32.156576 ARP, Request who-has tell, length 46
      11:33:32.348738 IP > 40988 NXDomain 0/1/0 (121)

      2. Capture Only N Number of Packets

      When you run the tcpdump command, it will capture packets for the specified interface until you interrupt it (Ctrl+C). Using the -c option, you can capture a specified number of packets. The example below captures only 5 packets.
      # tcpdump -c 5 -i eth0

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      11:40:20.281355 IP > Flags [P.], seq 3500447285:3500447481, ack 3652629474, win 18760, length 196
      11:40:20.281586 IP > Flags [.], ack 196, win 65235, length 0
      11:40:20.282244 ARP, Request who-has tell, length 28
      11:40:20.282360 ARP, Reply is-at 00:14:5e:67:26:1d (oui Unknown), length 46
      11:40:20.282369 IP > 49504+ PTR? (44)
      11:40:20.332494 IP > Flags [P.], seq 3058424861:3058424914, ack 693912021, win 64190, length 53 NBT Session Packet: Session Message
      6 packets captured
      23 packets received by filter
      0 packets dropped by kernel

      3. Print Captured Packets in ASCII

      The tcpdump command below with the -A option displays the packets in ASCII format.
      # tcpdump -A -i eth0

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      09:31:31.347508 IP > Flags [P.], seq 3329372346:3329372542, ack 4193416789, win 17688, length 196
      09:31:31.347760 IP > Flags [.], ack 196, win 64351, length 0
      ^C09:31:31.349560 IP > 11148+ PTR? (42)

      3 packets captured
      11 packets received by filter
      0 packets dropped by kernel

      4. Display Available Interfaces

      To list the available interfaces on the system, run the following command with the -D option.
      # tcpdump -D

      3.usbmon1 (USB bus number 1)
      4.usbmon2 (USB bus number 2)
      5.usbmon3 (USB bus number 3)
      6.usbmon4 (USB bus number 4)
      7.usbmon5 (USB bus number 5)
      8.any (Pseudo-device that captures on all interfaces)

      5. Display Captured Packets in HEX and ASCII

      The following command with the -XX option captures the data of each packet, including its link-level header, in HEX and ASCII format.
      # tcpdump -XX -i eth0

      11:51:18.974360 IP > Flags [P.], seq 3509235537:3509235733, ack 3652638190, win 18760, length 196
      0x0000: b8ac 6f2e 57b3 0001 6c99 1468 0800 4510 ..o.W...l..h..E.
      0x0010: 00ec 8783 4000 4006 275d ac10 197e ac10 ....@.@.']...~..
      0x0020: 197d 0016 1129 d12a af51 d9b6 d5ee 5018 .}...).*.Q....P.
      0x0030: 4948 8bfa 0000 0e12 ea4d 22d1 67c0 f123 IH.......M".g..#
      0x0040: 9013 8f68 aa70 29f3 2efc c512 5660 4fe8 ...h.p).....V`O.
      0x0050: 590a d631 f939 dd06 e36a 69ed cac2 95b6 Y..1.9...ji.....
      0x0060: f8ba b42a 344b 8e56 a5c4 b3a2 ed82 c3a1 ...*4K.V........
      0x0070: 80c8 7980 11ac 9bd7 5b01 18d5 8180 4536 ..y.....[.....E6
      0x0080: 30fd 4f6d 4190 f66f 2e24 e877 ed23 8eb0 0.OmA..o.$.w.#..
      0x0090: 5a1d f3ec 4be4 e0fb 8553 7c85 17d9 866f Z...K....S|....o
      0x00a0: c279 0d9c 8f9d 445b 7b01 81eb 1b63 7f12 .y....D[{....c..
      0x00b0: 71b3 1357 52c7 cf00 95c6 c9f6 63b1 ca51 q..WR.......c..Q
      0x00c0: 0ac6 456e 0620 38e6 10cb 6139 fb2a a756 ..En..8...a9.*.V
      0x00d0: 37d6 c5f3 f5f3 d8e8 3316 d14f d7ab fd93 7.......3..O....
      0x00e0: 1137 61c1 6a5c b4d1 ddda 380a f782 d983 .7a.j\....8.....
      0x00f0: 62ff a5a9 bb39 4f80 668a b....9O.f.
      11:51:18.974759 IP > 14620+ PTR? (44)
      0x0000: 0014 5e67 261d 0001 6c99 1468 0800 4500 ..^g&...l..h..E.
      0x0010: 0048 5a83 4000 4011 5e25 ac10 197e ac10 .HZ.@.@.^%...~..
      0x0020: 105e ee18 0035 0034 8242 391c 0100 0001 .^...5.4.B9.....
      0x0030: 0000 0000 0000 0331 3235 0232 3502 3136 .......125.25.16
      0x0040: 0331 3732 0769 6e2d 6164 6472 0461 7270
      0x0050: 6100 000c 0001 a.....

      6. Capture and Save Packets in a File

      As mentioned, tcpdump can capture and save packets to a file in .pcap format; to do this, just execute the command with the -w option.
      # tcpdump -w 0001.pcap -i eth0

      tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      4 packets captured
      4 packets received by filter
      0 packets dropped by kernel

      7. Read Captured Packets File

      To read and analyze the captured packet file 0001.pcap, use the command with the -r option, as shown below.
      # tcpdump -r 0001.pcap

      reading from file 0001.pcap, link-type EN10MB (Ethernet)
      09:59:34.839117 IP > Flags [P.], seq 3353041614:3353041746, ack 4193563273, win 18760, length 132
      09:59:34.963022 IP > Flags [.], ack 132, win 65351, length 0
      09:59:36.935309 IP > NBT UDP PACKET(138)
      09:59:37.528731 IP > Flags [P.], seq 1:53, ack 132, win 65351, length 5

      8. Capture IP address Packets

      To capture packets and display IP addresses numerically (instead of resolving them to hostnames), run the following command with the -n option.
      # tcpdump -n -i eth0

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      12:07:03.952358 IP > Flags [P.], seq 3509512873:3509513069, ack 3652639034, win 18760, length 196
      12:07:03.952602 IP > Flags [.], ack 196, win 64171, length 0
      12:07:03.953311 IP > Flags [P.], seq 196:504, ack 1, win 18760, length 308
      12:07:03.954288 IP > Flags [P.], seq 504:668, ack 1, win 18760, length 164
      12:07:03.954502 IP > Flags [.], ack 668, win 65535, length 0
      12:07:03.955298 IP > Flags [P.], seq 668:944, ack 1, win 18760, length 276
      12:07:03.956299 IP > Flags [P.], seq 944:1236, ack 1, win 18760, length 292
      12:07:03.956535 IP > Flags [.], ack 1236, win 64967, length 0

      9. Capture Only TCP Packets

      To capture only TCP packets, run the following command with the tcp filter.
      # tcpdump -i eth0 tcp

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      12:10:36.216358 IP > Flags [P.], seq 3509646029:3509646225, ack 3652640142, win 18760, length 196
      12:10:36.216592 IP > Flags [.], ack 196, win 64687, length 0
      12:10:36.219069 IP > Flags [P.], seq 196:504, ack 1, win 18760, length 308
      12:10:36.220039 IP > Flags [P.], seq 504:668, ack 1, win 18760, length 164
      12:10:36.220260 IP > Flags [.], ack 668, win 64215, length 0
      12:10:36.222045 IP > Flags [P.], seq 668:944, ack 1, win 18760, length 276
      12:10:36.223036 IP > Flags [P.], seq 944:1108, ack 1, win 18760, length 164
      12:10:36.223252 IP > Flags [.], ack 1108, win 65535, length 0
      ^C12:10:36.223461 IP > Flags [.], seq 283256512:283256513, ack 550465221, win 65531, length 1[|SMB]

      10. Capture Packet from Specific Port

      Let’s say you want to capture packets for a specific port, say 22; execute the command below, specifying port number 22 as shown.
      # tcpdump -i eth0 port 22

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      10:37:49.056927 IP > Flags [P.], seq 3364204694:3364204890, ack 4193655445, win 20904, length 196
      10:37:49.196436 IP > Flags [P.], seq 4294967244:196, ack 1, win 20904, length 248
      10:37:49.196615 IP > Flags [.], ack 196, win 64491, length 0
      10:37:49.379298 IP > Flags [P.], seq 196:616, ack 1, win 20904, length 420
      10:37:49.381080 IP > Flags [P.], seq 616:780, ack 1, win 20904, length 164
      10:37:49.381322 IP > Flags [.], ack 780, win 65535, length 0

      11. Capture Packets from source IP

      To capture packets from a specific source IP, use the command as follows.
      # tcpdump -i eth0 src

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      10:49:15.746474 IP > Flags [P.], seq 3364578842:3364579038, ack 4193668445, win 20904, length 196
      10:49:15.748554 IP > 11289+ PTR? (42)
      10:49:15.912165 IP > 53106+ PTR? (42)
      10:49:16.074720 IP > 38447+ PTR? (38)

      12. Capture Packets from destination IP

      To capture packets to a specific destination IP, use the command as follows.
      # tcpdump -i eth0 dst

      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
      10:55:01.798591 IP > Flags [.], ack 2480401451, win 318, options [nop,nop,TS val 7955710 ecr 804759402], length 0
      10:55:05.527476 IP > Flags [F.], seq 2521556029, ack 2164168606, win 245, options [nop,nop,TS val 7959439 ecr 804759284], length 0
      10:55:05.626027 IP > Flags [.], ack 2, win 245, options [nop,nop,TS val 7959537 ecr 804759787], length 0
      This article may help you explore the tcpdump command in depth and to capture and analyze packets in the future. There are a number of options available; you can use them as per your requirements.

      This blog post is an extract from the following site.

      Chanaka FernandoWSO2 ESB calling a REST endpoint which expects a not chunked request

      WSO2 ESB provides the following property mediator to remove the "chunked" transfer encoding when calling SOAP endpoints. You can use the configuration below to call a SOAP endpoint which expects a non-chunked request (a request with a "Content-Length" header).

      <property name="DISABLE_CHUNKING" value="true" scope="axis2"/>

      Here is a sample proxy service configuration.

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- The original sample was truncated; the proxy name and sequence structure below are reconstructed -->
      <proxy xmlns="http://ws.apache.org/ns/synapse" name="SampleProxy" startOnLoad="true">
         <target>
            <inSequence>
               <property name="DISABLE_CHUNKING" value="true" scope="axis2"/>
            </inSequence>
            <endpoint>
               <address uri="http://localhost:9764/services/HelloService"/>
            </endpoint>
         </target>
         <publishWSDL uri=""/>
      </proxy>

      If you need to call a REST endpoint without "chunked" encoding, sometimes the above property mediator will not work. In that situation, if you need to send a "Content-Length" header (which means a non-chunked request), you can use the following two properties.

               <property name="FORCE_HTTP_CONTENT_LENGTH" value="true" scope="axis2"></property>
               <property name="COPY_CONTENT_LENGTH_FROM_INCOMING" value="true" scope="axis2"></property>

      Here is a sample API definition for WSO2 ESB.

      <api xmlns="" name="SampleAPI" context="test">
         <resource methods="POST" url-mapping="/*">
               <log level="custom">
                  <property name="msg" value="Executing IN sequence"></property>
               <property name="FORCE_HTTP_CONTENT_LENGTH" value="true" scope="axis2"></property>
               <property name="COPY_CONTENT_LENGTH_FROM_INCOMING" value="true" scope="axis2"></property>
                     <address uri="" format="rest"></address>
               <log level="custom">
                  <property name="msg" value="Sending response"></property>
               <log level="custom">
                  <property name="msg" value="Error occurred "></property>

      Lali DevamanthriCyber Monday with Couchbase Server

      Wal-Mart won Cyber Monday by leveraging the performance and scalability of Couchbase Server, an enterprise NoSQL database. Performance is key to meeting user experience requirements, web or mobile; it ensures customers can browse, shop, and ultimately purchase products online.

      What is Couchbase Server?
Couchbase Server is a simple, fast, elastic NoSQL database, optimized for the data management needs of interactive web applications. Couchbase Server makes it easy to optimally match resources to the changing needs of an application by automatically distributing data and I/O across commodity servers or virtual machines. It scales out and supports live cluster topology changes while continuing to service data operations. Its managed object caching technology delivers consistent, sub-millisecond random reads, while sustaining high-throughput writes. As a document-oriented database, Couchbase Server accommodates changing data management requirements without the burden of schema management.

      System architecture
      A Couchbase Server is a computer (e.g., commodity Intel server, VMware virtual machine, Amazon machine instance) running Couchbase Server software. Couchbase Server runs on 32- and 64-bit Linux, Windows and Mac operating systems. The source code is a mix of C, C++ and Erlang, with some utility functionality authored in Python.

      Each server in a Couchbase Server cluster runs identical Couchbase Server software,
      meaning “all Couchbase Server nodes are created equal.” A number of benefits flow from the decision to avoid special-case nodes running differentiated software or exhibiting differentiated functionality.


The following figure shows the data flow between the application and the DB cluster.


Samisa Abeysinghe: Cloud IDEs

      Cloud is evolving.

      Cloud IDEs are the “next” big thing.

      They are quite interesting given the novel approaches that they bring to the software development arena.

Some of the advantages they bring include:
• Code available from anywhere
• Simple code sharing
• Central code quality and best-practices governance
• The next level of continuous integration and continuous builds
• Team dynamics

While it may look like nothing beats the native desktop for implementing software with an IDE, the collaboration made possible by cloud IDEs is quite revolutionary.

Sivajothy Vanjikumaran: How to list the tables in MS SQL Server that hold data

This query helps to see the data size details of the tables in an MS SQL Server database.

SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    t.Name
Sivajothy Vanjikumaran: Bind WSO2 Management Console to a certain IP Address

There are many security measures to consider when deploying WSO2 server products. One of them is to restrict access to the server to certain IP addresses.

In WSO2 servers, it is possible to restrict access via a Tomcat valve.


<Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127\.0\.0\.1"/>
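The allow attribute takes a comma-separated list of regular expressions matched against the client address, so (with hypothetical addresses, as a sketch) localhost plus a private subnet could be allowed like this:

```xml
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
       allow="127\.0\.0\.1,192\.168\.1\.\d+"/>
```

Requests from any other address will be rejected by the valve.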

Sivajothy Vanjikumaran: Remove the payload and send a POST request to the backend via WSO2 ESB

There are situations where the back-end does not need a payload for a POST request, yet the message mediated through WSO2 ESB still carries one before the back-end call. In order to drop the message payload, you need to perform two tasks in WSO2 ESB.

An example demonstrating the use case is given below.
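As a sketch of those two tasks (the mediator choice and the endpoint URL are assumptions, not from the original post): first replace the current body with an empty payload, then tell the transport not to send an entity body at all.

```xml
<!-- Sketch only: mediator choice and endpoint URL are assumptions. -->
<!-- Task 1: replace the current message body with an empty payload. -->
<payloadFactory media-type="xml">
   <format>
      <empty xmlns=""/>
   </format>
   <args/>
</payloadFactory>
<!-- Task 2: instruct the transport to send the POST without an entity body. -->
<property name="NO_ENTITY_BODY" value="true" scope="axis2" type="BOOLEAN"/>
<send>
   <endpoint>
      <address uri="http://localhost:8080/backend"/>
   </endpoint>
</send>
```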

Chamila Wijayarathna: Xbotix (Ruhuna Robotics Challenge) 2014

Xbotix is a robotics competition organised by the Faculty of Engineering, University of Ruhuna. This was the 9th robotics competition I participated in, and our team won the championship.

The task the robot had to perform in the competition can be found at . Our robot was able to finish this task in 16 seconds, which made us the champions of the competition.

Following are the components we used to build the robot.

      • Teensy 3.1 processing board
      • Pololu QTR Sensor Panel
      • 37D mm Gearmotors
      • Sharp Analog Distance Sensors
      • TB6612FNG Dual Motor Driver

      Other Team Members -:
      Dimuthu Upeksha
      Supun Tharanga
      Maduranga Siriwardena

Heshan Suriyaarachchi: Install the Yeoman toolset

      1. Prerequisite: Node.js and NPM should be installed in the system.

      2. Install Yeoman tools
      heshans@15mbp-08077.local:~/Dev$npm install --global yo bower grunt-cli

      3. Check installed versions.
      heshans@15mbp-08077.local:~/Dev$yo --version && bower --version && grunt --version



      grunt-cli v0.1.13

Heshan Suriyaarachchi: Install Node.js and NPM on Mac OSX

      I’m using Homebrew for the installation. If you don't have it installed, please install it.

      1. Install node
      heshans@15mbp-08077.local:~/Dev$brew install node

      2. Check installed versions.
      heshans@15mbp-08077.local:~/Dev$node -v


      heshans@15mbp-08077.local:~/Dev$npm -v


      3. Update Homebrew.
      heshans@15mbp-08077.local:~/Dev$brew update 

      Already up-to-date.

      4. Upgrade Node.
      heshans@15mbp-08077.local:~/Dev$brew upgrade node

Isuru Perera: Oracle Java Installation script for Ubuntu

A few months ago, I wrote a blog post on installing Oracle JDK 7 (Java Development Kit) on Ubuntu. It has several steps to install the JDK on Ubuntu.

Every time there is a new version, I upgrade Java on my laptop. Since I perform a few repetitive steps for every Java installation, I wrote a simple installation script for it.

      The installation script is available at GitHub:

You just have to run "" with root privileges once you download the JDK from Oracle. It also supports the installation of the JDK demos and the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files".

Please refer to the README on GitHub.

      The script supports JDK 7 and JDK 8. Please try the installation scripts and let me know any feedback! :)

Isuru Perera: Enabling Java Security Manager for WSO2 products

Why is the Java Security Manager needed?

In Java, the Security Manager is available for applications to enforce various security policies. The Security Manager helps prevent untrusted code from performing malicious actions on the system.

You need to enable the Security Manager if you plan to host any untrusted user applications in WSO2 products, especially in products like WSO2 Application Server.

The security policies should explicitly allow actions performed by the code base. If any of the actions are not allowed by the security policy, there will be a SecurityException.

For more information on this, you can refer to the Java SE 7 Security Documentation.

      Security Policy Guidelines for WSO2 Products

When enabling the Security Manager for WSO2 products, it is recommended to give all permissions to all jars inside the WSO2 product. For that, we plan to sign all jars using a common key and grant all permissions to the signed code by using the "signedBy" grant as follows.

grant signedBy "<signer>" {
    permission java.security.AllPermission;
};
We also recommend allowing all property reads; WSO2 has a customized Carbon Security Manager to deny certain system properties.

One of the main reasons is that in a Java security policy we need to explicitly mention which properties are allowed, and if there are various user applications, we cannot have a pre-defined list of system properties. Therefore, the Carbon Security Manager's approach is to define a list of denied properties using the system property "". This approach basically changes the Java Security Manager's rule of "deny all, allow specified" to "allow all, deny specified".

There is another system property named "restricted.packages" to control package access. However, this system property is not working in the latest Carbon, and I have created the CARBON-14967 JIRA to fix that properly in a future Carbon release.

Signing all JARs inside the WSO2 product

      To sign the jars, we need a key. We can use the keytool command to generate a key.

      $ keytool -genkey -alias signFiles -keyalg RSA -keystore signkeystore.jks -validity 3650 -dname "CN=Isuru,OU=Engineering, O=WSO2, L=Colombo, ST=Western, C=LK"
      Enter keystore password:
      Re-enter new password:
      Enter key password for
      (RETURN if same as keystore password):

The above keytool command creates a new keystore file. If you omit the -dname argument, you will be prompted for all the key details.

      Now extract the WSO2 product. I will be taking WSO2 Application Server as an example.

      $ unzip -q  ~/wso2-packs/

Let's create two scripts to sign the jars. The first script will find all jars, and the second will be used to sign a jar using the keystore we created earlier. The first script:

if [[ ! -d $1 ]]; then
  echo "Please specify a target directory"
  exit 1
fi

for jarfile in `find $1 -type f -iname \*.jar`
do
  ./ $jarfile
done

The second script:


      set -e



      signjar="$JAVA_HOME/bin/jarsigner -sigalg MD5withRSA -digestalg SHA1 -keystore $keystore_file -storepass $keystore_storepass -keypass $keystore_keypass"
      verifyjar="$JAVA_HOME/bin/jarsigner -keystore $keystore_file -verify"

      echo "Signing $jarfile"
      $signjar $jarfile $keystore_keyalias

      echo "Verifying $jarfile"
      $verifyjar $jarfile

# Check whether the verification was successful.
if [ $? -eq 1 ]; then
    echo "Verification failed for $jarfile"
fi

Now we can see the following files.

      $ ls -l
      -rwxrwxr-x 1 isuru isuru 602 Dec 9 13:05
      -rwxrwxr-x 1 isuru isuru 174 Dec 9 12:56
      -rw-rw-r-- 1 isuru isuru 2235 Dec 9 12:58 signkeystore.jks
      drwxr-xr-x 11 isuru isuru 4096 Dec 6 2013 wso2as-5.2.1

      When we run, all JARs found inside WSO2 Application Server will be signed using the "signFiles" key.

      $ ./ wso2as-5.2.1/ > log

      Configuring WSO2 Product to use Java Security Manager

To configure the Java Security Manager, we need to pass a few arguments to the main Java process.

The Java Security Manager can be enabled by using the "" system property. We will specify the WSO2 Carbon Security Manager using this argument.

      We also need to specify the security policy file using "" system property.

      As I mentioned earlier, we will also set "restricted.packages" & "" system properties.

Following is the recommended set of values. Edit the startup script and add the following lines just before the line " org.wso2.carbon.bootstrap.Bootstrap $*":

 \$CARBON_HOME/repository/conf/sec.policy \
      -Drestricted.packages=sun.,,com.sun.xml.internal.bind.,com.sun.imageio.,org.wso2.carbon. \,, \

      Exporting signFiles public key certificate and importing it to wso2carbon.jks

We need to import the signFiles public key certificate into wso2carbon.jks, as the security policy file will be referring to the signFiles signer certificate from wso2carbon.jks (as specified by its first line).

      $ keytool -export -keystore signkeystore.jks -alias signFiles -file sign-cert.cer
      $ keytool -import -alias signFiles -file sign-cert.cer -keystore wso2as-5.2.1/repository/resources/security/wso2carbon.jks

      Note: wso2carbon.jks' keystore password is "wso2carbon".

      The Security Policy File

      As specified in the system property "", we will keep the security policy file at $CARBON_HOME/repository/conf/sec.policy

The following policy file should be enough for starting up WSO2 Application Server and deploying sample JSF & CXF webapps.

      keystore "file:${user.dir}/repository/resources/security/wso2carbon.jks", "JKS";

      // ========= Carbon Server Permissions ===================================
      grant {
      // Allow socket connections for any host
permission java.net.SocketPermission "*:1-65535", "connect,resolve";

// Allow all property reads. Use the denied system properties list to restrict specific properties
      permission java.util.PropertyPermission "*", "read";

      permission java.lang.RuntimePermission "getClassLoader";

      // CarbonContext APIs require this permission
permission java.lang.management.ManagementPermission "control";

      // Required by any component reading XMLs. For example: org.wso2.carbon.databridge.agent.thrift:4.2.1.
      permission java.lang.RuntimePermission "";

      // Required by org.wso2.carbon.ndatasource.core:4.2.0. This is only necessary after adding above permission.
permission java.lang.RuntimePermission "";
};

// ========= Platform signed code permissions ===========================
grant signedBy "signFiles" {
    permission java.security.AllPermission;
};

      // ========= Granting permissions to webapps ============================
      grant codeBase "file:${carbon.home}/repository/deployment/server/webapps/-" {

      // Required by webapps. For example JSF apps.
      permission java.lang.reflect.ReflectPermission "suppressAccessChecks";

      // Required by webapps. For example JSF apps require this to initialize com.sun.faces.config.ConfigureListener
      permission java.lang.RuntimePermission "setContextClassLoader";

      // Required by webapps to make HttpsURLConnection etc.
      permission java.lang.RuntimePermission "modifyThreadGroup";

      // Required by webapps. For example JSF apps need to invoke annotated methods like @PreDestroy
      permission java.lang.RuntimePermission "accessDeclaredMembers";

      // Required by webapps. For example JSF apps
      permission java.lang.RuntimePermission "";

      // Required by webapps. For example JSF EL
      permission java.lang.RuntimePermission "getClassLoader";

      // Required by CXF app. Needed when invoking services
      permission javax.xml.bind.JAXBPermission "setDatatypeConverter";

      // File reads required by JSF (Sun Mojarra & MyFaces require these)
      // MyFaces has a fix
permission java.io.FilePermission "/META-INF", "read";
permission java.io.FilePermission "/META-INF/-", "read";

// OSGi permissions are required to resolve bundles. Required by JSF
permission org.osgi.framework.AdminPermission "*", "resolve,resource";
};

The security policies may vary depending on your requirements. I recommend testing your application thoroughly in a development environment.

      Troubleshooting Java Security

      Java provides the "" system property to set various debugging options and monitor security access.

I recommend adding the following line to whenever you need to troubleshoot an issue with Java security.


      After adding that line, all the debug information will be printed to standard output. To check the logs, we can start the server using nohup.

      $ nohup ./ &

      Then we can grep the nohup.out and look for access denied messages.

      $ tailf nohup.out | grep denied

      Concerns with Java Security Policy

There are a few concerns with the current permission model in WSO2 products.

• Use of ManagementPermission instead of Carbon-specific permissions. The real ManagementPermission is used for a different purpose. I created the CARBON-14966 JIRA to fix that.
• Ideally, the permission 'java.lang.management.ManagementPermission "control"' should not be specified in the policy file, as it is only required for privileged actions in Carbon. However, due to indirect usage of such privileged actions within Carbon code, we need to specify that permission. This also needs to be fixed.
If you also encounter any issues when using the Java Security Manager, please discuss them on our developer mailing list.

Sivajothy Vanjikumaran: How to delete an element in an array of JSON objects in JavaScript

Recently we had a very basic requirement: deleting a key-value pair in a JSON array using JavaScript.

      Input data


      Expected business logic

When the type of the id is "video" and websiteUrl is null, websiteUrl should be removed from the payload.

      Output data


In order to do this, you can use the delete operator on the JSON object in JavaScript.

      Example code
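The following sketch implements the described logic; the exact payload shape and field nesting are assumptions, since the input and output data are not shown here.

```javascript
// Hypothetical payload shape; only the type/websiteUrl rule is from the post.
const payload = [
  { id: "123", type: "video", websiteUrl: null },
  { id: "456", type: "article", websiteUrl: "http://example.com" }
];

payload.forEach(function (item) {
  // When the type is "video" and websiteUrl is null, drop the key entirely.
  if (item.type === "video" && item.websiteUrl === null) {
    delete item.websiteUrl;
  }
});
```

After this runs, the first object no longer has a websiteUrl key at all (rather than carrying websiteUrl: null), while non-matching objects are untouched.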

sanjeewa malalgoda: Configure WSO2 API Manager 1.8.0 with a reverse proxy (with a proxy context path)

      Remove current installation of Nginx
      sudo apt-get purge nginx nginx-common nginx-full

      Install Nginx
      sudo apt-get install nginx

      Edit configurations
      sudo vi /etc/nginx/sites-enabled/default

Create SSL certificates and copy them to the ssl folder.
      sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

       Sample configuration:

server {
       listen 443;
       ssl on;
       ssl_certificate /etc/nginx/ssl/nginx.crt;
       ssl_certificate_key /etc/nginx/ssl/nginx.key;

       location /apimanager/carbon {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/carbon/;
           proxy_redirect https://localhost:9443/carbon/ https://localhost/apimanager/carbon/;
           proxy_cookie_path / /apimanager/carbon/;
       }

       location /apimanager/publisher/registry {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/registry;
           proxy_redirect https://localhost:9443/registry https://localhost/apimanager/publisher/registry;
           proxy_cookie_path /registry /apimanager/publisher/registry;
       }

       location /apimanager/publisher {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/publisher;
           proxy_redirect https://localhost:9443/publisher https://localhost/apimanager/publisher;
           proxy_cookie_path /publisher /apimanager/publisher;
       }

       location /apimanager/store {
           index index.html;
           proxy_set_header X-Forwarded-Host $host;
           proxy_set_header X-Forwarded-Server $host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_pass https://localhost:9443/store;
           proxy_redirect https://localhost:9443/store https://localhost/apimanager/store;
           proxy_cookie_path /store /apimanager/store;
       }
}

Use the following commands to start and stop Nginx.

      sudo /etc/init.d/nginx start
      sudo /etc/init.d/nginx stop

      API Manager configurations

      Add following API Manager configurations:

In the API Store, edit the wso2am-1.8.0/repository/deployment/server/jaggeryapps/store/site/conf/site.json file and add the following.

  "reverseProxy" : {
       "enabled" : true,
       "host" : "localhost",
       "context" : "/apimanager/store"
  }

In the API Publisher, edit the wso2am-1.8.0/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json file and add the following.

  "reverseProxy" : {
       "enabled" : true,
       "host" : "localhost",
       "context" : "/apimanager/publisher"
  }

Edit /repository/conf/carbon.xml and update the following properties.


      Then start API Manager.
The server URLs would then be something like this: