WSO2 Venus

Sohani Weerasinghe

WSO2 Data Mapper - Creating JSON Schema

This post describes how to use the WSO2 Data Mapper Diagram Editor to create a JSON Schema by adding elements to the Data Mapper tree view.

Applies to        : Runtime -WSO2 ESB 5.0.0
                          Tooling -WSO2 Developer Studio ESB Tooling 5.0.0

Prerequisites  : Install WSO2 ESB Tool - Refer [1] for instructions

The following sections describe how to configure the Data Mapper mediator.

1. Create an ESB Config project
  • Create an ESB Solution project, which contains all the required project types, by right-clicking in the Project Explorer and selecting New -> ESB Solution Project
  • Specify a name for the ESB Config project and select the other projects that need to be created (e.g. Registry Resources project, Composite Application project)
  • Create a proxy service or an API by right-clicking on the ESB Config project and selecting New -> Proxy Service
2. Configure Data Mapper Mediator

  • Drag and drop the Data Mapper mediator into the created proxy, then double-click the mediator to configure it
  • A dialog box appears asking for a prefix for the configuration files that will be deployed to the ESB server. Specify the prefix and select a Registry Resources project in which to save the configuration files.
  • Click OK, and the Data Mapper Diagram Editor opens in the WSO2 Data Mapper graphical perspective as shown below
(Please refer to the section 'Create ESB Configuration Project' at [2] for more information)
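Once configured, the mediator sits in the proxy's in-sequence and points at the generated registry resources. A minimal sketch of the resulting configuration (the proxy name, mapping prefix and registry paths below are illustrative, not what the tool generates for you):

```xml
<proxy name="SampleProxy" startOnLoad="true" transports="http https"
       xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <!-- the prefix chosen in the dialog becomes part of the
                 .dmc and schema resource names stored in the registry -->
            <datamapper config="gov:datamapper/SampleMapping.dmc"
                        inputSchema="gov:datamapper/SampleMapping_inputSchema.json"
                        inputType="JSON"
                        outputSchema="gov:datamapper/SampleMapping_outputSchema.json"
                        outputType="XML"/>
            <respond/>
        </inSequence>
    </target>
</proxy>
```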

3. Generating JSON Schema

There are four types of components in the JSON Schema tree: the root element, arrays, fields and attributes.

  • The root element can be either an object or an array. When creating the tree, first add the root element by right-clicking on the input or output box and selecting "Add new Root Element"
  • In the dialog that appears, fill in the following fields to create the root element:
            - Name : a string value defining the element name
            - Schema Type : the element type (array or object)
            - Id : a string value declaring a unique identifier for the schema
            - isNullable : a boolean value specifying whether the element is nullable (optional field)
            - Namespaces : an array of objects defining URL and prefix values (optional field)
            - Required : specifies the fields that the element must contain (optional field)
            - Schema Value : a custom URI for the schema
  • Adding an array as a child element : to add an array, right-click on the parent element and select "Add new Array"

  • Additional fields for an array are as follows:
              - object has identifiers : if the object has element identifiers
                (e.g. xsi:type), select the checkbox and add the value, type and URL
                         <urn:sObjects xsi:type="urn1:Contact">
                         <urn:sObjects xsi:type="urn1:Add">

              - object holds a value : if the array holds a value, select the
                checkbox and select the data type

                            <urn:sObjects xsi:type="urn1:Contact">Object1

  • When an element identifier is added, it appears as an attribute of the element as shown below
  • Adding a field as a child element : to add a field, right-click on the parent element and select "Add new Field"
  • Adding an attribute : to add an attribute, right-click on the parent element and select "Add new Attribute"

4. Editing the Data Mapper tree nodes (JSON Schema)

  • To edit any field of a node, right-click on the node and select the Edit option, e.g. to edit an object, select "Edit Object"
  • In the dialog box that appears, update the field values as required
5. Deleting the tree nodes
  • To delete a node, right-click on it and select "Delete From Model"; this deletes the node along with its child nodes
6. Enable Nullable for tree nodes

  • To make a particular element nullable, right-click on the node and select "Enable Nullable"; the element becomes nullable, which you can identify by the icon change
  • To disable it, right-click on the node again and select "Disable Nullable"


Chanaka Jayasena

Overriding the default look and feel of GREG 5.3.0

The following list explains the best approach for each use case.

1 ) - You created a new asset type, and you need to change the look and feel of the details page just for that new asset type.
  • To create a new asset type you need to login to the carbon console (username:admin, password:admin)
  • https://<hostname>:9443/carbon/
  • Navigate to Extensions > Configure > Artifacts
  • Click "Add new Artifact" link at the bottom of the page.
  • By default, the "Generic Artifact" area loads the "application" asset type. Note shortName="applications" in the root node; "applications" is the name of the asset type.
  • Browse in to /repository/deployment/server/jaggeryapps/store/extensions/assets
  • Create a folder with name "applications"
  • Now we can override the files in /repository/deployment/server/jaggeryapps/store/extensions/app/greg-store-defaults
  • Since we are overriding the details page, we need to override greg-store-defaults/themes/store/partials/asset.hbs

    Copy the above-mentioned file into the newly created asset extension: /repository/deployment/server/jaggeryapps/store/extensions/assets/applications/themes/store/partials/asset.hbs
  • Make a visible change in the new hbs file.
  • Verify that the asset extension is working by browsing to an application's details page.
    Note: You need to create a new asset of the new type and log in to the store with admin credentials to view the new assets in the store application.
  • You should now be able to view the changes you made.

2 ) - Make the same change as above for an existing asset type (restservice).
  • Extensions cannot be overridden to (n) levels; overriding supports only up to two levels. So we have to change the existing asset extension.
  • Follow the same steps as in the above scenario to override the details page of the "restservice" asset type.
3 ) - Change the look and feel of the whole store application.

  • The ES store default theme (css, hbs, js, etc.) resides in /repository/deployment/server/jaggeryapps/store/themes/store

    These files are overridden in GREG by the "greg-store-defaults" extension. We can't override this extension by creating a new extension, since the extension model does not support (n)-level overriding. So we have to modify the files in the "greg-store-defaults" extension to achieve what we need.

Chanaka Jayasena

Understanding the GREG extension hierarchy

GREG is developed on top of Enterprise Store (ES), so GREG is tied to the ES extension model.

By default, ES ships with a few extensions, and GREG adds its own on top of them. The listings below show the GREG extensions together with the ES extensions shipped by default; in each listing the GREG extensions appear first, followed by the default ES extensions.

GREG - Store & ES - Store

(A) - App Extensions

GREG:
  1. greg-apis
  2. greg-diff
  3. greg-diff-api
  4. greg_impact
  5. greg_store
  6. greg-store-defaults
  7. greg_swagger
  8. soap-viewer

ES (default):
  9. social-reviews
  10. store-apis
  11. store-common
(B) - Asset Extensions

GREG:
  1. endpoint
  2. policy
  3. restservice
  4. schema
  5. soapservice
  6. swagger
  7. wadl
  8. wsdl

ES (default):
  9. default
  10. gadget
  11. site
GREG - Publisher & ES - Publisher

(C) - App Extensions

GREG:
  1. greg-apis
  2. greg-associations
  3. greg-diff
  4. greg-diff-api
  5. greg_impact
  6. greg-permissions
  7. greg_publisher
  8. greg-publisher-defaults
  9. greg_swagger
  10. soap-viewer

ES (default):
  11. publisher-apis
  12. publisher-common
(D) - Asset Extensions

GREG:
  1. endpoint
  2. note
  3. policy
  4. restservice
  5. schema
  6. server
  7. site
  8. soapservice
  9. swagger
  10. wadl
  11. wsdl

ES (default):
  12. default
  13. gadget
  14. site

What is the difference between asset extensions and app extensions?

App extensions provide common functionality applicable across all asset types.

Asset extensions define asset-type-specific functionality.

Where does the functionality common to both applications exist?


What is the overriding hierarchy of the extensions? How do we define it?

Simply creating a folder in the extensions/app or extensions/asset folder makes it an app or asset extension. We can define which extension it overrides by including an app.js or asset.js with the following line:

app.dependencies = ['default'];

The 'default' keyword is the name of the folder/extension it overrides.

For example, (A)-5 overrides (A)-11 ("greg_store" overrides "store-common"). If we open the file "store/extensions/app/greg_store/app.js", the following line is visible at the top:

app.dependencies = ['store-common'];

(A)-6 overrides (B)-9: "greg-store-defaults" overrides the "default" asset extension. Yes, it is possible to override asset extensions from app extensions.

Chathurika Erandi De Silva

Custom Functions and Global Variables in WSO2 Data Mapper

In this post I will discuss the usage of custom functions and global variables in the Data Mapper. If you are new to the Data Mapper functionality in WSO2 ESB, please have a look at WSO2 Data Mapper and the previous posts as well.

The Data Mapper in WSO2 ESB uses JavaScript as its engine, and it provides the user with a graphical operator through which a custom function can be used when mapping values. The inputs for the functions can be taken from the input of the mapping. The following images illustrate the placement of the custom function operator and the global variable operator.


The following illustrates the custom function operator

The following illustrates the Global Variable operator

A Global Variable can be used inside a custom function as below

In this manner, custom functions and global variables can be used to manipulate the values mapped from the input to the output, converting them as required. A sample is depicted below


Chathurika Erandi De Silva

Using the Function Scope in the Property Operator: WSO2 Data Mapper

In this post, let's have a look at using the FUNCTION scope inside the property operator of the Data Mapper. If you are new to the WSO2 Data Mapper, please take a few moments to read the documentation and the previous posts.

We need to obtain a value from the function scope and map it as an output value in the data mapper. Here we are using the property operator to get the value from the function scope and to map it as relevant.

The WSO2 ESB Tooling will be used to create the artifacts. First we need to create an ESB Config project or an ESB Solution project (both can be found on the Dashboard itself).

Next we create a template of type sequence to hold the Data Mapper. A sample template design view and source view is given below.
<template name="DataMapperTemplate" xmlns="">
   <parameter name="testParam"/>
   <sequence>
       <log level="full"/>
       <log level="custom">
           <property name="er_prop" value="I am at data mapper"/>
       </log>
       <log level="custom">
           <property expression="$func:testParam" name="my_prop"/>
       </log>
       <datamapper config="gov:datamapper/data_map_property_function_1.dmc" inputSchema="gov:datamapper/data_map_property_function_1_inputSchema.json" inputType="JSON" outputSchema="gov:datamapper/data_map_property_function_1_outputSchema.json" outputType="XML"/>
       <log level="full"/>
       <property name="messageType" scope="axis2" type="STRING" value="application/xml"/>
   </sequence>
</template>

We are using the Data Mapper to convert a JSON payload to XML and to map the values in the JSON to the XML as relevant. Since we have to read the function scope and map the value to the relevant output field, we are using the property operator.


Sample mapping diagram

Next we define a sequence that uses the template defined earlier. For this purpose the Call Template mediator is used. A sample design and source view is given below.


<sequence name="data_map_seq_13" trace="disable" xmlns="">
   <call-template description="" target="DataMapperTemplate">
       <with-param name="testParam" value="ORD_TEST_001"/>
   </call-template>
</sequence>

Above, we are using the previously created template. In the template we defined a parameter; such parameters can be read from the function scope. In the Call Template mediator we set a value for the parameter when the template is called.

Shazni Nazeer

Importing a certificate in Windows

From the Windows start menu:
  • Open the Run window and type "mmc" (without quotes). This opens a console window.
  • Click File -> Add/Remove Snap-in.
  • Click "Certificates" and click Add.
  • Select "Computer account" and click Next.
  • Click Finish.
  • Click Certificates and click OK.
  • Expand "Certificates (Local Computer)".
  • Expand "Trusted Root Certification Authorities" and then Certificates.
This should show all the trusted certificates of the system. Assuming you have the certificate in .cer format, to import it:
  • Right-click Certificates under Trusted Root Certification Authorities in the left pane and select All Tasks -> Import
  • Click Next, browse to your certificate file in the file system and complete the import

Sohani Weerasinghe

Working with Message Processors via WSO2 ESB Tooling

We have introduced a new form editor for Message Processors from WSO2 ESB Tooling 5.0.0 onwards; please find the details below.

Applies to        : Runtime -WSO2 ESB 5.0.0
                          Tooling -WSO2 Developer Studio ESB Tooling 5.0.0

1. If you have already created an ESB Configuration project, right-click on the project and click New -> Message Processor

2. This pops up a dialog box to create a Message Processor; specify the type of processor you are creating, the message store this processor applies to, and values for the other required fields.
3. Click Finish.

4. The created Message Processor opens in a form editor as shown below; you can change the properties via the form editor and view the source from the Source tab of the form editor

Adding Endpoint Key

Adding Sequence Key

Adding Custom Properties
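To make the generated artifact concrete, here is a rough sketch of a scheduled message forwarding processor as it might appear in the source view; the processor name, message store, endpoint key and parameter values below are illustrative placeholders, not output copied from the tool:

```xml
<messageProcessor name="SampleForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="SampleStore"
                  targetEndpoint="SampleEndpoint"
                  xmlns="http://ws.apache.org/ns/synapse">
    <!-- polling interval in milliseconds -->
    <parameter name="interval">1000</parameter>
    <!-- how many times to retry before deactivating -->
    <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>
```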

Sohani Weerasinghe

Working with Local Entries via WSO2 ESB Tooling

We have introduced a new form editor for local entries from WSO2 ESB Tooling 5.0.0 onwards; please find the details below.

Applies to        : Runtime -WSO2 ESB 5.0.0
                          Tooling -WSO2 Developer Studio ESB Tooling 5.0.0

1. If you have already created an ESB Configuration project, right-click on the project and click New -> Local Entry

2. This pops up a dialog box to create a local entry; give it a unique name and specify the type from the available types

3. Then fill in the advanced configuration as described below:
       - In-Line Text Entry: type the text you want to store
       - In-Line XML Entry: type the XML code you want to store
       - Source URL Entry: type or browse to the URL you want to store
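These three entry types correspond to local entry elements roughly like the following sketch (the keys, text values and source URL are placeholders of my own, not values the wizard suggests):

```xml
<!-- In-line text entry: stores a plain string under a key -->
<localEntry key="version" xmlns="http://ws.apache.org/ns/synapse">1.0.0</localEntry>

<!-- In-line XML entry: stores an XML fragment under a key -->
<localEntry key="sampleSchema" xmlns="http://ws.apache.org/ns/synapse">
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"/>
</localEntry>

<!-- Source URL entry: points at an external resource -->
<localEntry key="remoteEntry" src="file:repository/resources/entry.txt"
            xmlns="http://ws.apache.org/ns/synapse"/>
```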

4. Click Finish. 

5. The created local entry opens in a form editor as shown below; you can change the properties via the form editor and view the source from the Source tab of the form editor

  • Select the type via form editor
  • In-Line Text, XML and URL entry


    • Source view

Sohani Weerasinghe

Working with Scheduled Tasks via WSO2 ESB Tooling

We have introduced a new form editor for scheduled tasks from WSO2 ESB Tooling 5.0.0 onwards; please find the details below.

Applies to        : Runtime -WSO2 ESB 5.0.0
                          Tooling -WSO2 Developer Studio ESB Tooling 5.0.0

1. If you have already created an ESB Configuration project, right-click on the project and click New -> Scheduled Task

2. Type a unique name for this scheduled task and specify the group, implementation class, and other options

3. Click Finish

4. The created task opens in a form editor as shown below; you can change the properties via the form editor and view the source from the Source tab of the form editor


5. To add task implementation properties, click the "Task Implementation Properties" button under the 'Task Implementation' section and add the required properties

6. Source view is as follows
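As a sketch of what the source view contains, a task that injects a message into a sequence using Synapse's built-in MessageInjector class might look roughly like this (the task name, sequence name and trigger values are illustrative assumptions):

```xml
<task name="SampleInjectTask"
      class="org.apache.synapse.startup.tasks.MessageInjector"
      group="synapse.simple.quartz"
      xmlns="http://ws.apache.org/ns/synapse">
    <!-- run 10 times, 5 seconds apart -->
    <trigger count="10" interval="5"/>
    <!-- task implementation properties added via the button in step 5 -->
    <property name="injectTo" value="sequence"/>
    <property name="sequenceName" value="SampleSequence"/>
</task>
```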


Sohani Weerasinghe

Working with Endpoints via WSO2 ESB Tooling

We have introduced a new form editor for the endpoint types listed below from WSO2 ESB Tooling 5.0.0 onwards; please find the details below.

- Address Endpoints

- HTTP Endpoints

- Default Endpoints

- WSDL Endpoints

- Template Endpoints

1. If you have already created an ESB Configuration project, right-click on the project and click New -> Endpoint

2. This pops up a dialog box to create an endpoint; give it a unique name and specify the type of endpoint you are creating

3. In the Advanced Configuration section, specify any additional options you need for that endpoint type

4. Click Finish

5. The created endpoint opens in a form editor as shown below; you can change the properties via the form editor and view the source from the Source tab of the form editor.

  • Endpoint form page ( eg: Address Endpoint)
  •  Adding Endpoint properties

    • Adding QOS properties
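As a sketch of what the form editor produces, an address endpoint with a few advanced (error-handling) options might look like the following; the endpoint name and backend URI are placeholders:

```xml
<endpoint name="SampleAddressEP" xmlns="http://ws.apache.org/ns/synapse">
    <address uri="http://localhost:9000/services/SimpleStockQuoteService">
        <!-- options set from the Advanced Configuration section -->
        <suspendOnFailure>
            <initialDuration>1000</initialDuration>
            <progressionFactor>2</progressionFactor>
            <maximumDuration>60000</maximumDuration>
        </suspendOnFailure>
        <markForSuspension>
            <retriesBeforeSuspension>3</retriesBeforeSuspension>
        </markForSuspension>
    </address>
</endpoint>
```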

Sohani Weerasinghe

Working with Message Stores via WSO2 ESB Tooling

We have introduced a new form editor for Message Stores from WSO2 ESB Tooling 5.0.0 onwards; please find the details below.

Applies to        : Runtime -WSO2 ESB 5.0.0
                          Tooling -WSO2 Developer Studio ESB Tooling 5.0.0

1. If you have already created an ESB Configuration project, right-click on the project and click New -> Message Store

2. This will pop up a dialog box to create a Message Store and then you can specify the type of store you are creating, and specify values for the other fields required to create the store type you selected
3. Click Finish. 

4. The created Message Store opens in a form editor as shown below; you can change the properties via the form editor and view the source from the Source tab of the form editor
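For reference, the source view of a message store looks roughly like the sketches below; the store names, broker URL and queue name are illustrative assumptions, not tool output:

```xml
<!-- simple in-memory store -->
<messageStore name="SampleInMemoryStore"
              class="org.apache.synapse.message.store.impl.memory.InMemoryStore"
              xmlns="http://ws.apache.org/ns/synapse"/>

<!-- JMS-backed store; connection details are placeholders -->
<messageStore name="SampleJmsStore"
              class="org.apache.synapse.message.store.impl.jms.JmsStore"
              xmlns="http://ws.apache.org/ns/synapse">
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
    <parameter name="store.jms.destination">SampleQueue</parameter>
</messageStore>
```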

Viraj Rajaguru

WSO2 Data Mapper - Properties operator


WSO2 Data Mapper supports using ESB properties (Synapse properties) while data mapping.

Let's say we want to add the value of an already defined ESB property to the resultant payload. In this use case we have two properties, "appID" and "time", defined previously, and we want to use them while data mapping. See the proxy service below, which defines the two properties before the Data Mapper mediator.

Proxy configuration

<proxy name="DataTransformerWithDataMapper" startOnLoad="true" transports="http https" xmlns="">
    <target>
        <inSequence>
            <property name="appID" scope="default" type="STRING" value="APP0001"/>
            <property expression="get-property(&quot;SYSTEM_DATE&quot;, &quot;yyyy.MM.dd G 'at' HH:mm:ss z&quot;)" name="time" scope="default" type="STRING"/>
            <datamapper config="gov:datamapper/propertyOperator.dmc" inputSchema="gov:datamapper/propertyOperator_inputSchema.json" inputType="XML" outputSchema="gov:datamapper/propertyOperator_outputSchema.json" outputType="XML"/>
        </inSequence>
    </target>
</proxy>

How to add and configure the Property operator

Go to the Data Mapper editor by double-clicking on the Data Mapper mediator. In the tool palette there is an operator called "Properties" under the "Common" category.

This operator can be used to retrieve already defined properties and use them in data mapping.

Drag and drop the "Properties" operator onto the canvas. Right-click on the operator and select "Configure Property Operator". Use the dialog box that opens to provide the property scope (synapse (default), transport, axis2, axis2-client, operation) and the property name.

You can compose your Data Mapping diagram as shown below to retrieve already defined properties and append them to output payload.  

Try it yourself

I have attached a sample input.xml, output.xml and proxy service configuration. Load the input and output files into the input and output of the Data Mapper diagram and do the mapping as shown above using the "Properties" operator. Deploy all the required artifacts into an ESB and invoke the proxy service with input.xml as the payload.

Furthermore, I have attached a Maven Multi Module project which contains all the required projects, input.xml and output.xml in the following location. You can try it yourself.

Download the archive, unzip it, and use the steps mentioned in the following link to import the PropertyOperatorSample project into your workspace.

Shazni Nazeer

Securing WSO2 ESB proxies with Kerberos - part 1 (WSO2 Identity Server as a KDC)

WSO2 Enterprise Service Bus proxies can be secured with various security mechanisms. There are around 20 out-of-the-box security schemes that can be used to secure proxies, and Kerberos is one of them. In this post and the next we go through the setup and configuration of Kerberos with two options.
  • WSO2 Identity Server (IS) as Key Distribution Center (KDC)
  • Active directory (AD) as the KDC
In this post we look into the first option and see how to invoke a secured proxy using a Java client. This post supplements some awesome posts that WSO2 folks have written in the past, notably [1], [2] and [3]. I wanted to show the steps for the latest (or recent) versions of the products: some configurations and locations have changed, and I encountered a few issues setting this up, so I wanted to compile everything in one place. Hopefully this will be helpful for anyone wanting to use Kerberos security with WSO2 Enterprise Service Bus (ESB) proxies, and hence for SOAP calls. In the next post I'll show how to use a Java and a .NET [Windows Communication Foundation (WCF)] client to invoke a secured proxy.

I would like to thank Prabath for permitting me to copy and modify some of the code he used to demonstrate in his posts.

WSO2 Identity Server (IS) as the Key Distribution Center (KDC)

For this procedure we need WSO2 IS and WSO2 ESB set up on two server machines. (You may try this on the same machine with two different port offsets for WSO2 IS and WSO2 ESB.) It is assumed that you have configured the two servers up to this point; see the ESB and IS documentation at [4] and [5] for reference. WSO2 IS can be used in standalone mode (without any external database setup) for this exercise.

In WSO2 IS (here we use IS version 5.0.0), let's set up the KDC.

First we have to enable the KDC. Open embedded-ldap.xml (under <IS_HOME>/repository/conf/) and enable the KDC as below.
<Property name="name">defaultKDC</Property>
<Property name="enabled">true</Property>
<Property name="protocol">UDP</Property>
<Property name="host">localhost</Property>
<Property name="port">${Ports.EmbeddedLDAP.KDCServerPort}</Property>
<Property name="maximumTicketLifeTime">8640000</Property>
<Property name="maximumRenewableLifeTime">604800000</Property>
<Property name="preAuthenticationTimeStampEnabled">false</Property>
If you want to change the default realm of the KDC, change the "realm" property; by default it is WSO2.ORG. We'll keep it as it is in this case for simplicity.
<Property name="realm">WSO2.ORG</Property>
We also need to enable the KDC setting in user-mgt.xml. Enable the following property in the UserStoreManager:
<Property name="kdcEnabled">true</Property>

Create a file named jaas.conf with the following contents and place it inside the <IS_HOME>/repository/conf/security directory.

Server { required

Client { required

Create a file named krb5.conf with the following contents and place it in the <IS_HOME>/repository/conf/security directory. This says your KDC is located on the current machine.

default_realm = WSO2.ORG
default_tkt_enctypes = rc4-hmac des-cbc-md5
default_tgs_enctypes = rc4-hmac des-cbc-md5
dns_lookup_kdc = true
dns_lookup_realm = false

WSO2.ORG = {
kdc =

Now let's start the WSO2 IS server.

Once the server has started, we have to create a service principal (SPN) and a client principal to use with the Kerberos ticket granting system (TGS). Navigate to Configure -> Kerberos KDC -> Service Principals and click "Add new Service Principal". Provide a service principal name, description and password. In this case we used the following service principal name.

SPN Name : esb/localhost@WSO2.ORG

Now create a new user by navigating to Configure -> Users and Roles -> Users -> Add User.

In our case I added a user named "test" with a password. This will be the client principal.

That’s all with WSO2 IS KDC configuration.

Let's now configure security on a sample proxy in WSO2 ESB (we used WSO2 ESB 4.8.1). In my case I secured the default echo proxy; you may do the same for any other proxy.

Configuring the ESB for Kerberos

Add the following to the jaas.conf file and place it in the <ESB_HOME>/repository/conf/security directory. Note the principal name configuration.

Server { required
Add the following to krb5.conf and place it in the <ESB_HOME>/repository/conf/security directory. Configure the IP address of the WSO2 IS in the kdc section. If WSO2 IS is running on the same server as WSO2 ESB, this is the loopback address, which is the case in my scenario. Remember to use all caps for the default_realm.

default_realm = WSO2.ORG
default_tgs_enctypes = des-cbc-md5
default_tkt_enctypes = des-cbc-md5
permitted_enctypes = des-cbc-md5
allow_weak_crypto = true

WSO2.ORG = {
kdc =

.wso2.ORG = WSO2.ORG
wso2.ORG = WSO2.ORG

krb4_convert = true
krb4_get_tickets = false

  • Start the ESB server and navigate to the proxy list.
  • Click the echo proxy to navigate to its dashboard
  • Click the security button
  • Select 'Yes' in the security section
  • Select the Kerberos security mechanism (item number 16) and click Next
  • Provide the SPN name you configured in WSO2 IS together with its password and click Finish
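The net effect of these console steps on the proxy configuration is roughly a policy reference plus security enablement added to the service. A sketch (the policy key below is illustrative; the console generates its own registry path for the stored policy):

```xml
<proxy name="echo" startOnLoad="true" transports="http https"
       xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <send/>
        </inSequence>
    </target>
    <!-- reference to the Kerberos (scenario 16) policy stored in the registry -->
    <policy key="conf:/repository/axis2/service-groups/echo/services/echo/policies/SecScenario16"/>
    <enableSec/>
</proxy>
```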

  • Now try invoking the echo proxy using SOAP UI; it should fail saying "security header does not found". This is because the request did not contain any security information, such as the Kerberos ticket. In fact it is difficult to create a SOAP UI project that works with Kerberos, as the specification requires a particular format for the SOAP request, such as signing the message payload with the Kerberos ticket, including a timestamp, and so on.

    We have written a Java client to cater for this (thanks again, Prabath, for permitting me to use and modify your client). Clients need to implement the same mechanism in Java code to call a Kerberos-secured proxy. Please find the client here. We shall discuss a .NET client in the next post.

    Open this project in IntelliJ IDEA or your favorite IDE.

    Navigate to the configuration file and configure the keyStorePath, keyStorePassword, axis2ClientPath, policyFilePath and serviceEndpoint according to your setup. We are using the default keystore in this setup, and the configuration looks like the following.

    # Kerberos configs

    Note that my ESB runs on a port offset of 5, hence the port 9448.

    Next, open the policy.xml file and make the Rampart configuration changes for client.principal.password (this should be the user you created in WSO2 IS) and the related Kerberos properties. In my case it looks like the following.

    <rampart:RampartConfig xmlns:rampart="">

        <rampart:property name="">test_carbon.super</rampart:property>
        <rampart:property name="client.principal.password">test123</rampart:property>
        <rampart:property name="">esb/localhost@WSO2.ORG</rampart:property>
        <rampart:property name="">repo/conf/krb5.conf</rampart:property>
        <rampart:property name="">repo/conf/jaas.conf</rampart:property>
        <rampart:property name="">true</rampart:property>
    </rampart:RampartConfig>


    Also open the krb5.conf file in the client code and configure your KDC IP address, in this case the IP address of the WSO2 IS server.

    Now if you run the client you should see the response of the echo service, with output like the following in the console.

    Calling service with parameter - Hello Shazni!!!!!!!
    Request = <p:echoString xmlns:p=""><in>Hello Shazni!!!!!!!</in></p:echoString>
    The response is : <ns:echoStringResponse xmlns:ns=""><return>Hello Shazni!!!!!

    This client contacts the WSO2 IS KDC to get a Kerberos ticket and then uses the Rampart library to build the request in compliance with the WS-Security specification and send it to the proxy. On the ESB server end, Rampart resolves the request, WSS4J validates the ticket by decrypting it and validating the signature of the request payload, and the backend service is called if validation passes. The response is then signed with the Kerberos key and sent back, once again in compliance with WS-Security.

    That's all the configuration you need to do. This is a very simple example of how to use the Kerberos security scheme in WSO2 ESB with WSO2 IS.

    In the next post I'll show how to configure the ESB with an AD-based KDC (specifically with a keytab file, instead of the SPN and its password as used in this post).


Sivajothy Vanjikumaran

Changing the authentication error message based on the Accept header in WSO2 API Manager

When you send a request with an Accept header and the API call fails due to an authentication failure, WSO2 API Manager returns application/xml as the response by default. However, you may want the Accept header to be honored in authentication failure cases as well. In my sample, I have introduced handling for JSON, with XML as the default.

sanjeewa malalgoda

How to address the API Store console API invocation issue when the browser has completely disabled CORS and prevents calls to other domains - WSO2 API Manager

In Internet Explorer we can set "Access data sources across domains" as follows. This option specifies whether components that connect to data sources should be allowed to connect to a different server to obtain data. It applies only to data binding, such as Active Data Objects. The settings are:
    - Enable: allows database access to any source, even in other domains.
    - Prompt: prompts users before allowing database access to any source in other domains.
    - Disable: allows database access only to the same domain as the page.

    Some organization policies do not allow any cross-origin requests from the web browser, which means we have to prevent browsers from doing CORS. The API Store console function performs a cross-origin request; if you don't allow the web browser to perform CORS, the function simply cannot work.
    The sole reason the CORS specification exists and is supported by standard web browsers is to allow cross-origin requests safely.

    So if you are allowing only the store domain to be accessed from the browser, the only solution is to send gateway requests to the same domain and map them to the gateway.
    In that case we need a reverse proxy or load balancer to front the API Store node. The mapping should be done as follows.

    This should go to the API Store: store_host_name:9443/store

    This should route to the API Gateway: api_gateway_host_name:8243/api_context/version/

    Then, on the store node, we need to configure the gateway URL to point to the reverse proxy/load balancer. That way the browser does not need to send requests to multiple domains: it sends all requests to the same domain, and the sub-context (/gateway, /store) resolves the exact host.

    Bhathiya Jayasekara[WSO2 IS] User Account Locking/Unlocking with Secondary Userstores

In WSO2 Identity Server, you can lock user accounts when they are created and unlock them later. For this feature to work, in each userstore a user attribute should be mapped to the "" claim. In Identity Server, this claim is already mapped to the "accountLock" attribute in the embedded LDAP userstore. So you only have to follow the steps below to enable the "account locking on creation" feature.

    For Primary Userstore

    1) Enable Identity Management Listener in <IS_HOME>/repository/conf/identity/identity.xml

    <EventListener type="org.wso2.carbon.user.core.listener.UserOperationEventListener" name="org.wso2.carbon.identity.mgt.IdentityMgtEventListener" orderId="50" enable="true"/>

2) Do the following configurations in the identity management configuration file under <IS_HOME>/repository/conf/identity/.
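The property snippet itself was lost from this post. As an assumption-laden sketch, in IS 5.x these settings usually live in identity-mgt.properties; the file name and property names below are from memory, so verify them against your IS version's documentation before use:

```properties
# Assumed file: <IS_HOME>/repository/conf/identity/identity-mgt.properties
# Enable the identity management listener features
# Enable account locking
```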


    For more information, you can read this.

3) If you want to see whether an account is locked or not in the user profile, you can set the "" claim to "Supported By Default" by ticking it in the Claim Management UI > > Account Locked > Edit, like this.

    Then you'll see it in your profile like this.

    For a Secondary Userstore

Now, let's try the same with a secondary userstore. Say you have already added a secondary userstore with the domain "WSO2". Now we need to map a user attribute from that userstore to the claim.

Let's say we map the above claim to an attribute named "locked" in your secondary userstore. You can map it like this.

After doing that, user accounts in the secondary userstore will also be locked once they are created.

    That's all. Feel free to ask related questions below.

sanjeewa malalgoda

Add documents with different media types and search their content / Search documents by passing part of a keyword - WSO2 API Manager

The API Manager default document search implementation works only with document content files stored in the registry with the media types defined in registry.xml, as below.

    Text : text/plain
    PDF : application/pdf
    MS word : application/msword
    MS Powerpoint : application/
    MS Excel : application/
    XML : application/xml

Therefore, documents added through a URL will not work, as that type doesn't have document content stored in the registry. As a workaround, we can add .jag/.html document files to the registry as API documents; then we need to write custom indexers for those media types and update the registry.xml file with those indexers.

For HTML files, you may use the already available XML indexer, as HTML is almost the same as XML.

    <indexer class="org.wso2.carbon.apimgt.impl.indexing.indexer.XMLIndexer" mediaTypeRegEx="application/xml" profiles ="default,api-store,api-publisher"/>

Also, when you want to search for part of a string contained in the text, you may use the following search query. Having the * symbol at the start and end will help you find documents that contain that part in any word.


sanjeewa malalgoda

How endpoint timeouts and timeout handler configurations in Synapse relate to each other - WSO2 ESB and API Manager

Sometimes you may notice that after setting the timeout to 10 seconds and configuring the backend to always respond in 18 seconds, you still get the response from the backend service from time to time, even though you should always get a timeout.

According to the current implementation, the timeout handler runs every 15 seconds. This means that even though endpoint timeouts are configured, the TimeoutHandler executes every 15 seconds, and the real timeout happens only after that. In most practical cases, setting endpoint timeouts to very small values like 5s or 10s is rare. So even though we can change this value, it is highly recommended to use these tuning parameters carefully (as it may affect any endpoint with no timeout values, and it causes the handler to run frequently).

When the server starts up, it will print the timeout handler frequency as follows.

    TID: [0] [AM] [2016-07-13 08:24:39,552] INFO {org.apache.synapse.SynapseControllerFactory} - The timeout handler will run every : 15s {org.apache.synapse.SynapseControllerFactory}

If you need to change it, you may need to edit the file and add the following parameter.

The back-end services to which requests have been sent are repeatedly polled for responses at the time interval specified by this parameter. Any endpoints that have timed out are identified during these intervals and are no longer polled. Note that specifying a lower value for this parameter results in a higher overhead on the system.
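The file name was dropped from the original post. As a sketch, in WSO2 ESB/APIM this interval is typically tuned in <Product_HOME>/repository/conf/ with the parameter below; verify the property name against your product's performance tuning documentation:

```properties
# Assumed property in interval (in milliseconds)
# at which the timeout handler runs. Default is 15000; lowering it makes
# endpoint timeouts fire closer to their configured values, at the cost
# of running the handler more frequently.
```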

More information can be found here [1].


Nadeeshaan Gunasinghe

Writing a Custom Message Builder for WSO2 ESB

In WSO2 ESB, when you write a certain mediation logic, you can make various types of alterations to the message body, headers, etc. You can also just pass the incoming raw payload through without any message processing inside the ESB. In situations where message processing is done, WSO2 ESB uses a universal payload format internally: the SOAP message format.

When the ESB receives a payload, the raw payload is converted to the SOAP message format before the content is processed further. For this purpose we use Message Builders. Depending on the receiving message's content type, a particular message builder is selected at run time for building the message.

By default, the below Message Builders are available in WSO2 ESB.
    • SOAPBuilder
    • MIMEBuilder
    • MTOMBuilder
    • ApplicationXMLBuilder
    • XFormURLEncodedBuilder
    • BinaryRelayBuilder
    • PlainTextBuilder
For most message content types, these builders can be used to build the message. If you need to further customize the message building process, WSO2 ESB can be extended with custom message builders as well. In the latter part of this post I'll go through how to write a custom message builder for such requirements.

Users can configure which message builder should be used for a particular content type in axis2.xml (found in the <carbon_home>/repository/conf/axis2/ directory). There you can find a section called <messageBuilders></messageBuilders>.

Let's have a look at the configuration element below.

    <messageBuilder contentType="text/plain" class="org.apache.axis2.format.PlainTextBuilder"/>

    We add this configuration element under the messageBuilder element to configure the message builder to be used for a certain content type. 

According to the example, we have configured the ESB to use PlainTextBuilder for incoming messages having content type text/plain.

Internally in the ESB, selecting the message builder is triggered from the wso2-synapse engine. When the message comes into the ESB and enters the Synapse engine, you can find the class RelayUtils and its method buildMessage(). Then, through getDocument() of DeferredMessageBuilder, the processDocument() method of the builder enabled for the particular content type is called. You can get a clear idea of this procedure by debugging the Synapse code base at the points I have highlighted here.

If you would like to use your own custom message builder instead of the default ones, WSO2 ESB provides the flexibility to add such custom message builders. What you have to do is write your own custom message builder and then enable it for the desired content type. The code snippet below shows such a custom message builder I have written, in order to BASE64-encode an XML entry field.

When writing the custom builder you need to implement the Builder interface and override the processDocument method. Inside processDocument you can define your specific logic to process the content and convert it to the SOAP format accordingly. The sample code below assumes the incoming payload is <sampleElement>Sample Content</sampleElement>. I get the content and encode the text while converting the incoming payload to SOAP, before the content is processed in the mediation flow.

package org.test.builder;


import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.OMNodeEx;
import org.apache.axiom.om.impl.builder.StAXBuilder;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;
import org.apache.axiom.om.util.StAXParserConfiguration;
import org.apache.axiom.om.util.StAXUtils;
import org.apache.axiom.soap.SOAPBody;
import org.apache.axiom.soap.SOAPEnvelope;
import org.apache.axiom.soap.SOAPFactory;
import org.apache.axis2.AxisFault;
import org.apache.axis2.Constants;
import org.apache.axis2.builder.Builder;
import org.apache.axis2.context.MessageContext;
import org.apache.commons.codec.binary.Base64;

import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

/**
 * Created by nadeeshaan on 7/14/16.
 */
public class CustomBuilderForTextXml implements Builder {

    public OMElement processDocument(InputStream inputStream, String contentType,
                                     MessageContext messageContext) throws AxisFault {
        SOAPFactory soapFactory = OMAbstractFactory.getSOAP11Factory();
        SOAPEnvelope soapEnvelope = soapFactory.getDefaultEnvelope();

        PushbackInputStream pushbackInputStream = new PushbackInputStream(inputStream);
        try {
            int byteVal =;
            if (byteVal != -1) {
                pushbackInputStream.unread(byteVal);

                XMLStreamReader xmlReader = StAXUtils.createXMLStreamReader(StAXParserConfiguration.SOAP,
                        pushbackInputStream,
                        (String) messageContext.getProperty(Constants.Configuration.CHARACTER_SET_ENCODING));

                StAXBuilder builder = new StAXOMBuilder(xmlReader);
                OMNodeEx documentElement = (OMNodeEx) builder.getDocumentElement();
                // BASE64 encode the text content of the incoming element
                String elementVal = ((OMElement) documentElement).getText();
                byte[] bytesEncoded = Base64.encodeBase64(elementVal.getBytes());
                ((OMElement) documentElement).setText(new String(bytesEncoded));
                // Attach the processed element to the SOAP body
                SOAPBody body = soapEnvelope.getBody();
            }
        } catch (IOException e) {
            throw AxisFault.makeFault(e);
        } catch (XMLStreamException e) {
            throw AxisFault.makeFault(e);
        }
        return soapEnvelope;
    }
}

Similarly, you can write your own Message Formatters in order to manipulate the outgoing payload from the ESB. Under the message formatters section, add the configuration element below to enable the message formatter for the particular content type.

    <messageFormatter contentType="text/xml" class="org.apache.axis2.transport.http.SOAPMessageFormatter"/>

Find more information about the message builders and formatters in the official documentation.

Chathurika Erandi De Silva

Mapping to CSV and writing the output to a file using WSO2 ESB

Lately we have been discussing the WSO2 Data Mapper, and this post will give the reader an insight into mapping an XML to CSV and writing the output to a file using the VFS transport in WSO2 ESB.

For this scenario I have used a mock service hosted in SoapUI that sends the following XML payload.

    <soapenv:Envelope xmlns:soapenv="">
            <orderItem>Ice coffee</orderItem>

    My need is to map the orderID, orderItem, quantity, unitPrice and totalPrice fields to a CSV file. For the conversion I will be using the data mapper and I will be using the VFS transport to write the mapped content to a csv file.

    First of all, I have created a sequence as following using the data mapper mediator as below using the WSO2 ESB Tooling.


    Following is the configuration of the Data Mapper mediator that maps XML to CSV

    <datamapper config="gov:datamapper/PreBetaRegistryResource_1.dmc" inputSchema="gov:datamapper/PreBetaRegistryResource_1_inputSchema.json" inputType="XML" outputSchema="gov:datamapper/PreBetaRegistryResource_1_outputSchema.json" outputType="CSV"/>

Thereafter, I created a mapping resource and used the response payload for the “IN” and the expected CSV outcome for the “OUT”, as below.

In order to write the response from the data mapper mediator to a file, I have used the transport.vfs.ReplyFileURI, transport.vfs.ReplyFileName and transport.vfs.Append parameters, and then used the send mediator to direct the response to the VFS file location as below (make sure that you have enabled VFSTransportSender in WSO2 ESB).

    VFS endpoint
<endpoint name="VfsEndpoint" xmlns="">
   <address uri="vfs:file:///home/erandi/esb/release410/vfs/write"/>
</endpoint>

    Total Sequence
    <sequence name="pre_beta2_out_seq_1" trace="disable" xmlns="">
       <property name="transport.vfs.ReplyFileURI" scope="transport" type="STRING" value="file:///home/erandi/esb/release410/vfs/write?transport.vfs.Append=true"/>
       <property name="transport.vfs.ReplyFileName" scope="transport" type="STRING" value="order.csv"/>
       <property name="FORCE_SC_ACCEPTED" scope="axis2" type="STRING" value="true"/>
       <property name="REST_URL_POSTFIX" scope="axis2" type="STRING" value=""/>
       <property name="OUT_ONLY" scope="default" type="STRING" value="true"/>
       <log level="full"/>
       <datamapper config="gov:datamapper/PreBetaRegistryResource_1.dmc" inputSchema="gov:datamapper/PreBetaRegistryResource_1_inputSchema.json" inputType="XML" outputSchema="gov:datamapper/PreBetaRegistryResource_1_outputSchema.json" outputType="CSV"/>
       <log level="full"/>
   <send>
      <endpoint key="VfsEndpoint"/>
   </send>
</sequence>

Make sure to add FORCE_SC_ACCEPTED, REST_URL_POSTFIX and OUT_ONLY to avoid problems that might occur with VFS-related path resolving when writing the file.
The above is a sample of how you can get it done. Hope you will try it!

Chathurika Erandi De Silva

Converting from CSV using WSO2 Data Mapper

WSO2 Data Mapper provides the capability of converting to and from XML, JSON, CSV, etc.
The following content explains a very basic scenario of converting from CSV to XML using the data mapper. It is intended for a user who is looking for the "how to" in the simplest manner.

Before reading further, if the VFS transport is new to you, please take a few minutes and read up on it.
    Sample scenario
    Converting a CSV to XML using Data Mapper and using the VFS transport to read the csv contents in the file for ESB to process.

    Sample CSV Input for Data Mapper


    Sample XML Output for Data Mapper

    Scenario building using WSO2 Dev Studio
    Following sequence is created to utilize the data mapper mediator


Note that in the above, the input type for the mediator is configured as CSV. As the second step, we have to create a mapping resource for the data mapper. Following is a sample mapping resource created for the mapping from CSV to XML.


Next we have to associate the sequence with a proxy service that is configured to use the VFS transport; through the VFS transport, the CSV file will be read and processed. The proxy has to be configured using the service parameters relevant to the VFS transport.

    Sample Proxy Configuration

<proxy name="CSVProxy" startOnLoad="true" transports="vfs http https" xmlns="">
   <target>
      <inSequence>
         <sequence key="data_map_csv_1"/>
      </inSequence>
   </target>
   <parameter name="transport.PollInterval">5</parameter>
   <parameter name="transport.vfs.FileURI">file:///home/vfs/in</parameter>
   <parameter name="transport.vfs.ContentType">text/plain</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">file:///home//vfs/fail</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*.csv</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">file:///home/vfs/out</parameter>
</proxy>

When the relevant .csv file is placed in the “IN” folder, it will be processed and converted to XML as follows.

    <?xml version='1.0' encoding='utf-8'?>

Bhathiya Jayasekara

How to Setup a MySQL Cluster in Ubuntu

A MySQL cluster has 3 types of nodes: the management node (ndb_mgmd), SQL nodes (mysqld) and data nodes (ndbd). The terms inside brackets are the names of the servers that should be installed on each type of node.

We can also use a management client to monitor the status of the cluster. For that, it is recommended to run the management client (ndb_mgm) on the management server host.

    1) Introduction to Node Types

    Management Node

    The management server is the process that reads the cluster configuration file and distributes this information to all nodes in the cluster that request it.

    SQL Nodes 

SQL nodes are the nodes through which SQL clients connect to the MySQL cluster.

    Data Nodes

    Data nodes handle all the data in tables using the NDB Cluster storage engine. They are responsible for distributed transaction handling, node recovery, checkpointing to disk, online backup etc.

    In this post, we will be setting up a MySQL cluster with 5 nodes. (1 Management node, 2 SQL nodes and 2 Data nodes)

2) Installation

First, download the MySQL cluster binary distribution from here and extract it to each node's /var/tmp directory.

    > cd /var/tmp 
    > sudo tar -zxvf mysql-5.5.34-ndb-7.2.15-linux2.6-i686.tar.gz

    Management Node

Go to the extracted location and copy ndb_mgm and ndb_mgmd into a suitable directory such as /usr/local/bin.

    > cd /var/tmp/mysql-5.5.34-ndb-7.2.15-linux2.6-i686
    > sudo cp bin/ndb_mgm* /usr/local/bin

(You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndb_mgm and ndb_mgmd have been copied to the executables' directory.)

    Change location to the directory into which you copied the files, and then make both of them executable.

    > cd /usr/local/bin
    > sudo chmod +x ndb_mgm*

    SQL Nodes

    Following steps should be done in all SQL nodes. 

    Check your /etc/passwd and /etc/group files to see whether there is already a mysql group and mysql user on the system. If they are not already present, create a new mysql user group, and then add a mysql user to this group.

    sudo groupadd mysql 
    sudo useradd -g mysql mysql 

Change location to the directory containing the archive, and create a symbolic link named mysql pointing to the extracted directory.

    > cd /var/tmp
> sudo ln -s /var/tmp/mysql-5.5.34-ndb-7.2.15-linux2.6-i686 /usr/local/mysql 

    Change location to the mysql directory and run the supplied script for creating the system databases.

    > cd /usr/local/mysql 
    > scripts/mysql_install_db --user=mysql 

    Set the necessary permissions for the MySQL server and data directories. 

> sudo chown -R root . 
> sudo chown -R mysql data
> sudo chgrp -R mysql . 

    Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up.

> sudo cp support-files/mysql.server /etc/init.d/
    > sudo chmod +x /etc/init.d/mysql.server
    > sudo chkconfig --add mysql.server 

    Data Nodes

Data nodes require only the executable ndbd (single-threaded) or ndbmtd (multi-threaded).

    Do the following steps on all Data nodes.

Go to the extracted location and copy ndbd and ndbmtd to a suitable path.

    > cd /var/tmp/mysql-5.5.34-ndb-7.2.15-linux2.6-i686
    > cp bin/ndbd /usr/local/bin/ndbd
    > cp bin/ndbmtd /usr/local/bin/ndbmtd 

(You can safely delete the directory created by unpacking the downloaded archive, and the files it contains, from /var/tmp once ndbd and ndbmtd have been copied to the executables' directory.)

    Change location to the directory into which you copied the files, and then make both of them executable.

    > cd /usr/local/bin 
> sudo chmod +x ndb*

    3) Cluster Configurations

    Management Node

Create the directory in which the configuration file will reside, and then create the file (config.ini) itself.

> sudo mkdir /var/lib/mysql-cluster
> cd /var/lib/mysql-cluster
> sudo vi config.ini 
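The contents of config.ini were lost from this post (they were likely shown as an image). As a minimal sketch for the 5-node layout described above, assuming hypothetical host names mgmt-host, data-host-1/2 and sql-host-1/2:

```ini
[ndbd default]
# Data is replicated across the two data nodes
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[ndb_mgmd]
# Management node
HostName=mgmt-host
DataDir=/var/lib/mysql-cluster

[ndbd]
# Data node 1
HostName=data-host-1
DataDir=/usr/local/mysql/data

[ndbd]
# Data node 2
HostName=data-host-2
DataDir=/usr/local/mysql/data

[mysqld]
# SQL node 1
HostName=sql-host-1

[mysqld]
# SQL node 2
HostName=sql-host-2
```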

    SQL Nodes & Data Nodes

For these nodes, a my.cnf file should be configured at /etc/my.cnf.

> sudo vi /etc/my.cnf
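The my.cnf contents were also lost from this post. A sketch of what it typically contains on SQL and data nodes, with mgmt-host again a hypothetical management node host name:

```ini
# /etc/my.cnf on SQL nodes and data nodes (sketch)
[mysqld]
# Enable the NDB Cluster storage engine
# Point the node at the management server
ndb-connectstring=mgmt-host

[mysql_cluster]
# Connection string used by the NDB processes (ndbd/ndbmtd)
ndb-connectstring=mgmt-host
```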

    4) Starting the Cluster 

    The management node should be started first, followed by the data nodes, and then finally by SQL nodes.

    Management Node

    > ndb_mgmd -f /var/lib/mysql-cluster/config.ini

Note: The configuration file needs to be pointed to (using -f) only on the initial start-up.

    Data Nodes

    > ndbd

    SQL Nodes 

    > sudo service mysql.server start

5) Checking Cluster Status

    For this we use Management Client (ndb_mgm), which we already have in Management node.

    > ndb_mgm 

    -- NDB Cluster -- Management Client -- 

    ndb_mgm> SHOW 

    Connected to Management Server at: localhost:1186 
    Cluster Configuration 
    [ndbd(NDB)] 2 node(s) 
    id=2 @ (Version: 5.5.34-ndb-7.2.15, Nodegroup: 0, *) 
    id=3 @ (Version: 5.5.34-ndb-7.2.15, Nodegroup: 0) 

    [ndb_mgmd(MGM)] 1 node(s) 
    id=1 @ (Version: 5.5.34-ndb-7.2.15) 

    [mysqld(API)] 2 node(s) 
    id=4 @ (Version: 5.5.34-ndb-7.2.15)
    id=5 @ (Version: 5.5.34-ndb-7.2.15)

    This means our cluster setup is successful. :)


Bhathiya Jayasekara

How to Debug WSO2 Carbon Products using OSGi Console

This post is basically for myself :), and it may be useful for you as well. The intent of this post is to list frequently used and most useful OSGi commands with examples. For the sake of completeness, I will give a small description before stepping into the OSGi commands.

Here I will show how to debug WSO2 Carbon products via the OSGi console. First you have to start the Carbon server with the -DosgiConsole property.

    ./ -DosgiConsole

    Once the server is started properly, you can start trying commands in OSGi console. Here are some mostly used OSGi commands.

    1) ss

Lists the bundles along with their life-cycle states.

    ss <bundle_name>

    Searches for given bundle name and lists matching bundles.

    eg. osgi> ss data

    osgi> ss data
    "Framework is launched."

    id State       Bundle
    36 ACTIVE      gdata-core_1.47.0.wso2v1
    37 ACTIVE      gdata-spreadsheet_3.0.0.wso2v1
    40 ACTIVE      h2-database-engine_1.2.140.wso2v3
    110 ACTIVE      org.eclipse.equinox.p2.metadata_2.1.0.v20110510
          ... ... ... ... ...

    2) b <bundle_Id>

Shows the details of the specified bundle. The bundle_id can be found using the ss command above.

    osgi> b 36

    gdata-core_1.47.0.wso2v1 [36]

      Id=36, Status=ACTIVE      Data Root=/data/products/dss/wso2dss-3.1.1/repository/components/default/configuration/org.eclipse.osgi/bundles/36/data

      "No registered services."
      No services in use.
      Exported packages; version="1.47.0.wso2v1"[exported]; version="1.47.0.wso2v1"[exported]; version="1.47.0.wso2v1"[exported]

      Imported packages; version="13.0.1"< [21]>

        com.sun.mirror.type; version="0.0.0"<unwired><optional>
        com.sun.mirror.util; version="0.0.0"<unwired><optional>
      No fragment bundles
      Named class space
        gdata-core; bundle-version="1.47.0.wso2v1"[provided]
      No required bundles

The ls -c <bundle_id> command lists the declarative service components in the specified bundle.

osgi> ls -c 154 

    Components in bundle org.wso2.carbon.dataservices.core: 
    ID Component details
    26 Component[
    name = dataservices.component
    factory = null
    autoenable = true
    immediate = true
                   ... ... ... ... ...

The comp <component_id> command shows the details of the specified component.

osgi> comp 26 

    Components in bundle org.wso2.carbon.dataservices.core: 

    ID Component details

    26 Component[
    name = dataservices.component
    factory = null
    autoenable = true
    immediate = true
                   ... ... ... ... ...

    6) services

    Displays registered service details.

    services <service_name>

    Displays specified service details.

    eg. services org.eclipse.osgi.framework.log.FrameworkLog

    osgi> services org.eclipse.osgi.framework.log.FrameworkLog

    {org.eclipse.osgi.framework.log.FrameworkLog}={service.ranking=2147483647,, - Equinox,}
      "Registered by bundle:" org.eclipse.osgi_3.8.1.v20120830-144521 [0]
      "Bundles using service"
        org.eclipse.equinox.ds_1.4.0.v20120522-1841 [92]
        org.eclipse.core.runtime_3.8.0.v20120521-2346 [82]
        org.eclipse.equinox.app_1.3.100.v20120522-1841 [88]
    {org.eclipse.osgi.framework.log.FrameworkLog}={service.ranking=-2147483648, performance=true,$1, - Equinox,}
      "Registered by bundle:" org.eclipse.osgi_3.8.1.v20120830-144521 [0]
      "Bundles using service"
        org.eclipse.core.runtime_3.8.0.v20120521-2346 [82]
        org.eclipse.equinox.app_1.3.100.v20120522-1841 [88]

              ... ... ... ... ...

Chathurika Erandi De Silva

Walk Through - Property operator: WSO2 Data Mapper

This is a quick walk-through of the Property operator in WSO2 Data Mapper. The Property operator obtains properties defined in a given scope so they can be used in the mapping process.
The Property operator currently has the capability of fetching property values defined in the synapse (default), axis2, axis2-client, operation and function scopes.

The following example demonstrates a simple user scenario where a property defined in the synapse scope is retrieved and mapped using the data mapper.

    Sample Sequence

    Source view
    <sequence name="data_map_seq_8" trace="disable" xmlns="">
       <property description="" name="test_property" scope="default" type="STRING" value="true"/>
       <log level="full"/>
       <datamapper config="gov:datamapper/PropertyMappingResource_2.dmc" inputSchema="gov:datamapper/PropertyMappingResource_2_inputSchema.json" inputType="JSON" outputSchema="gov:datamapper/PropertyMappingResource_2_outputSchema.json" outputType="XML"/>
       <log level="full"/>
   <property name="messageType" scope="axis2" type="STRING" value="application/xml"/>
</sequence>
    Configuration of property operator

In the above, we have used the name of the previously defined property mediator and the respective scope to get the value we set through the property mediator.

    Overall Sample Mapping Configuration


sanjeewa malalgoda

How different log levels work and affect the product - WSO2 API Manager

In our logs, we log messages in the following order.
    TRACE - Designates finer-grained informational events than the DEBUG.
    DEBUG - Designates fine-grained informational events that are most useful to debug an application.
    INFO - Designates informational messages that highlight the progress of the application at coarse-grained level.
    WARN - Designates potentially harmful situations.
    ERROR - Designates error events that might still allow the application to continue running.
    FATAL - Designates very severe error events that will presumably lead the application to abort.

The threshold defined there filters log entries based on their level. For example, a threshold set to "WARN" will allow a log entry to pass into the appender only if its level is "WARN", "ERROR" or "FATAL"; other entries will be discarded.
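In a log4j file, for instance, the threshold can be set per appender. A sketch, where CARBON_CONSOLE is a hypothetical appender name used for illustration:

```properties
# Entries below WARN are discarded by this appender;
# DEBUG and INFO never reach it regardless of logger levels.
```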

If you have a custom class and you only need debug logs from it, the following code block will work.

private static org.apache.log4j.Logger log = Logger.getLogger(LogClass.class);

Thilini Ishaka

Adding business rules to your enterprise applications

    Business rules help to determine the flow of your processes based on a group of rules you define.

Compared to traditional systems, the addition of business rules to your application has the following advantages.
• Shortened development time
• Changes can be made faster, with lower risk
• Rules are externalized and easily shareable among multiple applications
• Lower cost for modifying the business logic
• Increased Line of Business (LOB) control over implemented decision logic, for compliance and business management
• Reduced or removed reliance on IT for changes in production systems
• Improved process efficiency
    Business Rules capabilities in the WSO2 middleware platform brings the agility of business rules to your SOA toolkit. Based on a solid platform for hosting business rules, WSO2 business rules excel at extending the capabilities of your SOA.

They are used to define, deploy, execute, monitor and maintain the variety and complexity of decision logic used by operational systems within an enterprise.

    SOA is the most prominent architectural pattern used to integrate heterogeneous systems. Therefore if the business decisions written as rules can be exposed as services then business rules can also be integrated to SOA systems.

sanjeewa malalgoda

How to setup WSO2 IS 5.1.0 as Key Manager with API Manager 1.10 with Docker

In this post I will discuss how we can set up IS as key manager with Docker.

For this, you need to download the pre-configured IS-as-key-manager pack. You can find more information about this pattern here (

Also, we need to download the pre-configured pack from this location (

Then clone the Dockerfiles repo with the following command.
     >git clone

Once the checkout is done, copy the JDK and the downloaded IS-as-key-manager pack to the following location.

The file should be as follows.

Then add the following content to the file.

+++ b/common/scripts/
@@ -19,6 +19,12 @@
 set -e
 source /etc/profile.d/
+if [ ! -z $SLEEP ]; then
+       echo "Going to sleep for ${SLEEP}s..."
+       sleep $SLEEP
 prgdir=$(dirname "$0")
 script_path=$(cd "$prgdir"; pwd)
Then download the archive, copy it to the /dockerfiles root and unzip it. It will have all the scripts required to build the docker image.

Now we are going to build the docker image. First, run the following command from the extracted folder (/dockerfiles/wso2is_km).
    >./ -v 5.1.0 

It will build the docker image and add it to the local repo. You can check this by typing the following command.
    >docker images
    >wso2is_km         5.1.0       9483d4962e97        2 hours ago         889.8 MB

Now we have a docker image. Next we are going to set up the deployment using docker-compose, using the image created above for the key manager instance.

My docker-compose file is as follows. You can see one API Manager, a Key Manager, a database server and an Nginx instance there.

version: '2'
services:
  dbms:
    container_name: apim_rdbms
    build:
        context: .
        dockerfile: database/Dockerfile
    environment:
        MYSQL_ROOT_PASSWORD: root
  api-manager:
    container_name: api-manager
    build:
      context: .
      dockerfile: api-manager/Dockerfile
    environment:
      - SLEEP=20
    links:
      - nginx:api-manager
  key-manager:
    container_name: key-manager
    build:
      context: .
      dockerfile: key-manager/Dockerfile
    environment:
      - SLEEP=30
    links:
      - nginx:key-manager
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: nginx/Dockerfile
    environment:
      - SLEEP=100
    ports:
      - "444:9443"
      - "81:9763"
      - "8280:8280"
      - "8243:8243"

Please refer to the deployment diagram below to get a clear idea of the deployment we are going to have.

You can download the complete docker-compose archive from this location ( You need to download and unzip it, then move to that directory (/pattern-simple-iskm).

    Then you can run this deployment using following commands.
    Build and start deployment
    docker-compose up --build

    Stop running deployment.
    docker-compose down 

Clean up containers.
    docker rm -f $(docker ps -qa)

Now you can access the API Manager instance using the following URLs from the host machine.
Store - https://api-manager:444/store
Publisher - https://api-manager:444/publisher
Gateway - https://api-manager:8243/
KeyManager - https://key-manager:444/carbon
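For these host names to resolve from the host machine, they must point at the Docker host where Nginx publishes the ports. A sketch of the assumed /etc/hosts entries (the loopback address assumes a single-machine setup; the names match the compose links above):

```
# /etc/hosts on the host machine (sketch) api-manager key-manager
```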

Lahiru Sandaruwan

Synapse properties available in API Manager mediator extensions

In this post I'm listing the properties that might be useful if you are using WSO2 API Manager mediator extensions. I observed these in a debugging session and have copy-pasted the properties with the sample values that I used.

    "api.ut.HTTP_METHOD" -> "GET"
    "API_ELECTED_RESOURCE" -> "/edit"
    "REST_SUB_REQUEST_PATH" -> "/edit"
    "API_RESOURCE_CACHE_KEY" -> "/test/1.0.0/1.0.0/edit:GET"
    "CORSConfiguration.Enabled" -> "true"
    "api.ut.context" -> "/test/1.0.0"
    "isStatEnabled" -> "false"
    "Access-Control-Allow-Origin" -> "*"
    "api.ut.apiPublisher" -> "admin@carbon.super"
    "TRANSPORT_IN_NAME" -> "https"
    "api.ut.consumerKey" -> "81tFe*********A65Nkga"
    "api.ut.resource" -> "/edit"
    "api.ut.api" -> "Test"
    "api.ut.version" -> "1.0.0"
    "api.ut.userName" -> "admin@carbon.super"
    "" -> "DefaultApplication"
    "Access-Control-Allow-Methods" -> "GET"
    "API_NAME" -> "Test"
    "api.ut.userId" -> "admin@carbon.super"
    "REST_API_CONTEXT" -> "/test/1.0.0"
    "" -> "1"
    "Access-Control-Allow-Headers" -> "authorization,Access-Control-Allow-Origin,Content-Type"
    "REST_FULL_REQUEST_PATH" -> "/test/1.0.0/edit"
    "api.ut.hostName" -> "1*.**.**.***"
    "api.ut.api_version" -> "Test:v1.0.0"
    "SYNAPSE_REST_API" -> "admin--Test:v1.0.0"
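    These properties can be read inside a mediation extension sequence with the Synapse get-property() function. A minimal sketch, where the sequence name and the choice of logged properties are my own illustration:

    ```xml
    <!-- Hypothetical extension sequence: logs a few of the properties listed above -->
    <sequence xmlns="http://ws.apache.org/ns/synapse" name="api-props-log">
       <log level="custom">
          <property name="method" expression="get-property('api.ut.HTTP_METHOD')"/>
          <property name="resource" expression="get-property('API_ELECTED_RESOURCE')"/>
          <property name="user" expression="get-property('api.ut.userName')"/>
       </log>
    </sequence>
    ```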

    sanjeewa malalgoda

    How to deploy WSO2 API Manager with MySQL database and nginx using docker

    Install Docker. Please follow the instructions below; the original document with instructions is available here. The only change is that we need to change the version to 1.7.0, as I have listed here.
    1. Install Docker Engine:
    2. The Docker Toolbox installation includes both Engine and Compose, so Mac and Windows users are done installing. Others should continue to the next step.
    3. Go to the Compose repository release page on GitHub.
    4. Follow the instructions from the release page and run the curl command, which the release page specifies, in your terminal.
      Note: If you get a “Permission denied” error, your /usr/local/bin directory probably isn’t writable and you’ll need to install Compose as the superuser. Run sudo -i, then the two commands below, then exit.
      The following is an example command illustrating the format:
      curl -L`uname -s`-`uname -m` > /usr/local/bin/docker-compose
      If you have problems installing with curl, see Alternative Install Options.
    5. Apply executable permissions to the binary:
      $ chmod +x /usr/local/bin/docker-compose
    6. Optionally, install command completion for the bash and zsh shell.
    7. Test the installation.
      $ docker-compose --version
      docker-compose version: 1.7.0

    Add the following entry to /etc/hosts on the host machine: api-manager
    Then we can refer to the api-manager URL from the host machine and access the API Manager deployment. We are going to have a simple deployment with the following components.
    • One WSO2 API Manager 1.10 instance.
    • An Nginx instance to act as a proxy.
    • One MySQL instance to store data.

    See following docker compose file with server definitions.

    version: '2'
    services:
      dbms:
        container_name: apim_rdbms
        build:
          context: .
          dockerfile: database/Dockerfile
        environment:
          MYSQL_ROOT_PASSWORD: root
      api-manager:
        container_name: api-manager
        build:
          context: .
          dockerfile: api-manager/Dockerfile
        environment:
          - SLEEP=100
        links:
          - nginx:api-manager
      nginx:
        container_name: nginx
        build:
          context: .
          dockerfile: nginx/Dockerfile
        environment:
          - SLEEP=100
        ports:
          - "444:9443"
          - "81:9763"
          - "8280:8280"
          - "8243:8243"

    First we need a Docker base image for API Manager; you can create it using Docker or download it from a Docker repository. If you downloaded it, you can import it into the local repository as follows. You need to follow the same steps for the nginx and MySQL servers as well (these base images are available on the public Docker repository).

    Import image.
    docker load < wso2am.tgz
    List loaded images.
    docker images 
    REPOSITORY        TAG      IMAGE ID       CREATED          SIZE
                      latest   a71fcdc506aa   18 minutes ago   188 MB
    sanjeewa-ubuntu   latest   a71fcdc506aa   18 minutes ago   188 MB
                      1.10.0   7ae5de86a076   3 weeks ago      1.032 GB
                      latest   2fede7433e44   4 weeks ago      182.8 MB
                      latest   0530ac8e24b0   9 weeks ago      228.8 MB
                      latest   b72889fa879c   11 weeks ago     188 MB
                      5.5      783151ba5745   11 weeks ago     256.9 MB

    Now you have the API Manager image in the repo and can use it for your deployments. The complete deployment pattern is available here; you need to download it and unzip the content to a directory. Then you will see the following content.

    As you can see, we have all the configuration files and data required for this deployment. When we start the instances, these config files and artifacts replace the corresponding files in the original images.
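    That overlay step can be sketched as a Dockerfile layer; the base image tag and paths here are assumptions for illustration, since the real paths come from the downloaded archive:

    ```dockerfile
    # Hypothetical overlay: copy deployment-specific config over the base image defaults
    FROM wso2am:1.10.0
    COPY api-manager/conf/ /opt/wso2am/repository/conf/
    ```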

    Build and start deployment
    docker-compose up --build
    Stop running deployment.
    docker-compose down 

    Cleanup files.
    docker rm -f $(docker ps -qa)

    Then access the created instance from the host machine using the following URL.

    Then you can log in to the publisher, store and management console to perform user operations. If you need to set up a more complex deployment, you can design it and create a docker-compose file in the same way we discussed here.

    sanjeewa malalgoda

    How to define Environment specific URL using system variables - WSO2 API Manager and ESB

    Let's see how to define an environment-specific URL using system variables in WSO2 API Manager and ESB.

    Define your endpoint as follows.

    <endpoint name="EP1" template="endPointTemplete"
      uri="http://{system.prop.env}" xmlns="http://ws.apache.org/ns/synapse"/>

    It refers to an endpoint template named 'endPointTemplete'. The config for it is given below.

    <template name="endPointTemplete" xmlns="http://ws.apache.org/ns/synapse">
      <endpoint name="$name">
        <address uri="$uri"/>
      </endpoint>
    </template>

    The proxy service will refer to the EP1 endpoint.
    The environment name is set as a system property at server startup: -Denv=QA
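    Putting the pieces together, a proxy that sends to EP1 could look like the following sketch (the proxy name is hypothetical); the server is then started with the env system property set, e.g. -Denv=QA:

    ```xml
    <!-- Hypothetical proxy service referring to the environment-aware endpoint EP1 -->
    <proxy xmlns="http://ws.apache.org/ns/synapse" name="EnvAwareProxy"
           transports="http https" startOnLoad="true">
       <target>
          <inSequence>
             <send>
                <endpoint key="EP1"/>
             </send>
          </inSequence>
          <outSequence>
             <send/>
          </outSequence>
       </target>
    </proxy>
    ```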

    sanjeewa malalgoda

    Some of the useful docker commands

    List Docker images in system
    docker images 
    sanjeewa@sanjeewa-ThinkPad-X1-Carbon-3rd:~/work/docker$ docker images 
    REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
                      latest   a71fcdc506aa   6 minutes ago   188 MB
    sanjeewa-ubuntu   latest   a71fcdc506aa   6 minutes ago   188 MB
                      latest   b72889fa879c   11 weeks ago    188 MB

    List running docker instances
    docker ps
    Get the differences between instances
    docker diff 
    Commit running instance after some changes to local repository
    docker commit c951e2725fca9519c82d665fba4fc1439a8076a6ca1a266783e0578a7d76fb3f sanjeewa-ubuntu
    Tag a docker image available in the local repo.
    docker tag sanjeewa-ubuntu

    Export image to repo
    docker push
    The push refers to a repository []
    863e132d4800: Pushed
    5f70bf18a086: Pushed
    f75f146a5022: Pushed
    711b0bd2cb6a: Pushed
    595d1d53a534: Pushed
    latest: digest: sha256:5fbd869f1c9a169c8811b34dd30a7f5dfac4335ecb98e3f22c629c9e2bf4a83a size: 1358

    Import image from file system
    docker load < wso2am.tgz
    docker images 
    REPOSITORY        TAG      IMAGE ID       CREATED          SIZE
                      latest   a71fcdc506aa   18 minutes ago   188 MB
    sanjeewa-ubuntu   latest   a71fcdc506aa   18 minutes ago   188 MB
                      1.10.0   7ae5de86a076   3 weeks ago      1.032 GB
                      latest   2fede7433e44   4 weeks ago      182.8 MB
                      latest   0530ac8e24b0   9 weeks ago      228.8 MB
                      latest   b72889fa879c   11 weeks ago     188 MB
                      5.5      783151ba5745   11 weeks ago     256.9 MB

    Run docker instance
    docker run -p 9443:9443
    Creating directory /mnt/

    List running docker instances
    docker ps
    CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS                                                             NAMES
    9867acff0f65   "/usr/local/bin/init."   2 minutes ago       Up 2 minutes        8243/tcp, 8280/tcp, 9763/tcp, 10397/tcp,>9443/tcp   sad_kirch

    Get logs from container
    docker logs -f 9867acff0f65

    Access remote machine terminal
    docker exec -it 9867acff0f65 /bin/bash

    List all meta-data
    docker inspect 3c78e0d8947a
    To stop a running instance
    docker stop 3c78e0d8947a
    Write a docker-compose setup: first edit the Dockerfile and add some commands.
    # run at image build time
    RUN apt-get update && apt-get install apache2 apache2-utils -y
    # copy something from the local machine into the image
    COPY /var/www/
    RUN chmod 755 /var/www/
    EXPOSE 80
    # this will execute only at container run time
    CMD ["/var/www/"]

    Build a docker image from the script file; here 'testapache' is the name and '.' means pick the Dockerfile from the current location.
    docker build -t testapache .
    Edit the compose file
    vim docker-compose.yml
    Bring up the composed pattern
    docker-compose up
    Stop running deployment.
    docker-compose down
    Cleanup files.
    docker rm -f $(docker ps -qa)

    Shazni Nazeer

    Unleashing the Git - part 6 - git branching

    Branching is a mechanism to launch a separate, similar copy of the present workspace for different usage requirements. A branch is a line of development which exists independently of another, where both share a common history.

    Let's understand branching related commands with an example.
    $ git init        // Execute on a preferred location
    Add a file list.txt and add the following content


    $ git add .
    $ git commit -m "Initial commit of list.txt"
    $ git checkout -b mybranch  // Create a new branch called mybranch at the same level as master.
    $ git checkout -b mybranch 64868c9   // Create a new branch called mybranch from the point of the given commit id
    The first command above is equivalent to the following two commands.
    $ git branch mybranch
    $ git checkout mybranch
    $ git branch   // Shows available branches with the current branch with an *. Now you'll be in mybranch
    $ git branch -r // Shows only remote branches.
    $ git branch -a // Shows all available branches
    $ git branch --merged // Shows all branches merged into the current branch
    $ git branch --no-merged // Shows available branches that are not merged with the current branch
    $ git branch --contains <commit id> // Shows the branch that contains the commit id
    Now suppose you opened list.txt and modified it by adding a new line, making it

    $ git add .
    $ git commit -m "Adding a new line in the mybranch"
    $ git checkout master  // Switch back to master. Now list.txt won't have your new line
    $ git merge mybranch  // Merges mybranch into the master branch. This succeeds because there was no conflict in lines; we just added a new line, so git has no problem merging.
    Updating 53a7908..3fd44bc
    Fast-forward
     list.txt | 1 +
     1 file changed, 1 insertion(+)
    Git also commits automatically while merging. If you do not want the commit to happen, you can use the following command.
    $ git merge --no-commit mybranch
    // Now, in the master branch, remove the first line and commit. Then switch to mybranch and open list.txt. You will still see the initial first line we removed in master. Now modify that line in mybranch to change it from abc to xyz and commit. Say you now want to merge the changes from master into this branch. Issue the following command.
    $ git merge master
    Auto-merging list.txt
    CONFLICT (content): Merge conflict in list.txt
    Automatic merge failed; fix conflicts and then commit the result.
    There will be a conflict as both branches have modified the first line.

    // Now if you try to switch to master, you won't be allowed until you fix the conflicts.
    $ git checkout master
    list.txt: needs merge
    error: you need to resolve your current index first
    Now open list.txt in mybranch. You will see the following.
    <<<<<<< HEAD
    xyz
    =======

    >>>>>>> master
    What does this say? HEAD points to your current branch, i.e. mybranch. Whatever you have between <<<<<<< HEAD and ======= is the content of mybranch that causes the conflict, i.e. your xyz modification. Whatever is between ======= and >>>>>>> master is what master has for the offending conflict. In this case it's empty, meaning there's an empty line.

    Now you may decide to keep the xyz change and commit it. What stays and what gets removed is entirely up to you. You may even choose to remove both. So once you switch to the master branch, you won't see the line. Issue the following command in the master branch.
    $ git merge mybranch
    Updating a64cb6d..87fd36d
     list.txt | 1 +
     1 file changed, 1 insertion(+)
    Now you should see the xyz line in the master branch as well.
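    The branch-and-merge walkthrough above can be condensed into a self-contained script; the temporary directory, demo identity, and file contents are placeholders:

    ```shell
    set -e
    tmp=$(mktemp -d) && cd "$tmp"
    git init -q
    git config user.email "demo@example.com"   # placeholder identity for the demo
    git config user.name "Demo"
    main=$(git symbolic-ref --short HEAD)      # 'master' or 'main', depending on git version
    echo "abc" > list.txt
    git add .
    git commit -qm "Initial commit of list.txt"
    git checkout -qb mybranch                  # create and switch to mybranch
    echo "a new line" >> list.txt
    git add .
    git commit -qm "Adding a new line in the mybranch"
    git checkout -q "$main"                    # back on the main branch; the new line is absent
    git merge -q mybranch                      # fast-forward merge brings the new line in
    grep "a new line" list.txt
    ```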

    Following are a few additional commands that are useful.
    $ git merge --log development // Adding a one line log message from each merge commit to the merge message
    $ git merge --no-ff development // Force creation of a merge commit
    $ git merge -m "Forced commit message that from the merging branch" mybranch
    You can use git commit --amend to modify the commit message after the fact too. Here’s an example:
    $ git merge --log --no-ff development
    $ git commit --amend -c HEAD  // Editor launches
    $ git branch -d mybranch  // Deletes the mybranch. If there are unmerged files, git will warn you about it.
    $ git branch -D mybranch // Git doesn't warn you about unmerged files. It just deletes the branch
    $ git push origin :beta // Remove a remote origin's branch named beta
    $ git push <remote name> <branch name>
    $ git push origin master:production  // Push changes from local master branch to remote origin's production branch
    $ git push --force or git push -f   // Push changes even if you don't have certain changes from the remote. Warning: use this with extreme caution because it can cause others to get out of sync with the repository to which you are pushing.

    As an alternative to 'git pull', we can also use 'git fetch' followed by 'git merge @{u}'
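    A runnable sketch of that fetch-then-merge flow, using a local bare repository to stand in for the remote (paths and the demo identity are placeholders):

    ```shell
    set -e
    tmp=$(mktemp -d)
    git init -q --bare "$tmp/origin.git"                  # stand-in for a remote
    git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
    cd "$tmp/work"
    git config user.email "demo@example.com" && git config user.name "Demo"
    echo "one" > f.txt && git add . && git commit -qm "c1"
    git push -q -u origin HEAD                            # publish and set the upstream
    # a second clone pushes a change upstream
    git clone -q "$tmp/origin.git" "$tmp/work2"
    cd "$tmp/work2"
    git config user.email "demo@example.com" && git config user.name "Demo"
    echo "two" >> f.txt && git add . && git commit -qm "c2"
    git push -q
    # back in the first clone: fetch, then merge the upstream tracking branch
    cd "$tmp/work"
    git fetch -q
    git merge -q '@{u}'                                   # same effect as 'git pull'
    grep "two" f.txt
    ```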

    Some more useful commands
    $ git pull <remote name> <remote branch>   // Pulls changes from the <remote branch> of <remote name> into the current local branch
    $ git pull <remote name> <remote branch>:<local branch> // Pulls changes from the <remote branch> of <remote name> into the <local branch>
        // e.g: git pull origin development:team-dev
    $ git pull --rebase origin master // Rebase the current local branch on top of remote origin's master branch

    Dakshitha Ratnayake

    Big Data Analytics with the WSO2 Analytics Platform

    Organizations have more data than ever at their disposal. Actually deriving meaningful insights from that data—and converting knowledge into action—is easier said than done because there’s no single technology that encompasses big data analytics. There are several types of technology that work together to help organizations get the most value from their information. 

    Big data analytics is the process of examining large data sets containing a variety of data types - i.e., big data - to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful business information. The analytical findings can lead to more effective marketing, new revenue opportunities, better customer service, improved operational efficiency, competitive advantages over rival organizations and other business benefits. Data could include Web server logs and Internet click stream data, social media content and social network activity reports, text from customer emails and survey responses, mobile-phone call detail records and machine data captured by sensors connected to the Internet of Things. 

    What does WSO2 offer?

    The WSO2 Analytics Platform, of course. It’s a single platform to address all analytics styles – 
    • Batch Analytics: Analysis of data at rest, running typically every hour or every day, and focused on historical dashboards and reports.
    • Real time Analytics: Analysis of event streams in real-time and detecting patterns and conditions.
    • Predictive Analytics: Using machine learning to create a mathematical model which allows predicting future behavior.
    • Interactive Analytics: Executing queries on the fly on top of data at rest. 
    In a nutshell, you must collect data and feed them to the analytics platform. Then, the analytics platform will analyze the collected data using one or more of the above analytics techniques, and finally communicate the results as alerts, dashboards, notifications etc. 

    Some Terminology

    The WSO2 Analytics Platform processes data as events and also interacts with external systems using events. Let’s get the terminology right.  

    Event = a unit of data comprising a set of attributes
    Event Stream = a sequence of events of a particular type
    Event Stream Definition = the type, or schema, of the events in a stream
    Event Receiver = the component through which events are received over various transport protocols
    Event Publisher = the component through which events (results of data analysis, or even direct events) are published over various transport protocols

    Data Collection

    The WSO2 Analytics platform offers a single API to collect data and it can receive data through event receivers from almost any event source: inbuilt agents in all WSO2 products, Java agents (Thrift, Kafka, JMS), JavaScript clients (WebSockets, REST), IoT (MQTT) and over 100 WSO2 ESB connectors. You can even write a custom agent to collect data from your system and push it to the analytics platform. Basically, events can be received via multiple transports in JSON, XML, Map, Text, and WSO2Event formats, and the platform converts them into streams of canonical WSO2Events to be processed by the server.

    Data Analysis

    Once the data is collected, the data must be analyzed through one or more of the following techniques: 

    Batch Analytics 
    Real-time Analytics
    Interactive Analytics
    Predictive Analytics

    The WSO2 Analytics Platform comprises 3 individual products:

    WSO2 Data Analytics Server – can perform batch, real-time and interactive analytics
    WSO2 Complex Event Processor – used only for real-time analytics
    WSO2 Machine Learner – used only for predictive analytics

    I will not cover the technical details on how each of these techniques acts differently on the collected data in this blog post. Please check links [1] and [2] for more details on the four techniques that the WSO2 Analytics Platform uses to analyze data. 

    Data Publishing

    Events that have resulted after data analysis (or even data that is not analyzed) are published through various transport protocols – including but not limited to SMS, Email, HTTP, JMS, Kafka, MQTT, RDBMS, Logger, WebSockets - through event publishers. You can write extensions to support other transports as well. Data can be pushed to dashboards through WebSockets/REST or services/APIs can be invoked. The data can be pushed to the ESB which will in turn push them to legacy systems or even cloud applications. Moreover, the data can be stored in another database or be published to other systems where the systems have implemented Custom WSO2 Data Receivers. 

    This blog post was merely an overview of what the WSO2 Analytics Platform offers and how it operates. The post is based on the content of a webinar [2] I did a few weeks back.  Please check out the full webinar for detailed information about the WSO2 Analytics Platform and the various applications of the different analytics styles. 

    Chathurika Erandi De Silva

    Freedom is yours: MAP AS YOU LIKE: Season 1

    Conversion and mapping of values is something we hear about every other day. For the systems that make life easier for us, it is quite important that these two functions work well.

    WHAT IF there were a way to get these tasks done by a single unit with a friendly user interface, where simple drag and drop gets the work done?

    OF COURSE I am working on something similar these days and that's why I am writing this to share my experience with this magnificent mega-neutron (literally).

    OK, so I know what I am trying to say is quite cloudy. That's the whole point: good things, better things, have to be revealed slowly, so the absorption will be high and of course the surprise will be much greater.

    Let me introduce the WSO2 DATA MAPPER, which provides a versatile, easy-to-use way of getting your conversions and mappings done, with an interface that makes life much easier for people who don't want to write lines and lines of code to get work done.

    DATA MAPPER provides the functionality of converting and mapping values between Strings, Numbers and Booleans, provides ways to create your own functions, and supports the use of connectors to provide extensibility (your XML straight into an email), etc.

    SEASON 1 is dedicated to showing you a sample of the Boolean and numerical operators that enable flexible mapping.

    Boolean Operators:

    Currently the DATA MAPPER provides AND, OR, and NOT operators for easier mapping of values between conversions. String values can be converted to boolean and then used with the boolean operators, or primitive boolean values can be used directly.

    Numerical operators:

    Numerous numerical operators are provided in the DATA MAPPER to map values using the relevant numerical operations after conversion.

    Chathurika Erandi De Silva

    Content based Routing with Switch Mediator

    Content based routing is used to redirect an incoming request to a particular destination based on the content itself. This extends the capabilities of systems: the back ends don't have to be changed; instead a middle entity navigates requests as needed.

    WSO2 ESB is such a middleman, and content based routing can be achieved in numerous ways. Among them, the Switch mediator is designed to configure multiple routing paths for a request.

    The Switch mediator in WSO2 ESB enables requests to be filtered and routed to an endpoint based on the content. Multiple routes can be configured using the Switch mediator.

    The following sample consists of a Switch mediator where two routes are configured using case statements. The switch component extracts the content to be matched from the request, and each case component matches the extracted content against a defined value. This can be a simple string or a complex XPath result. Further, a default case can be configured to route the request when none of the conditions match.

    <sequence xmlns="http://ws.apache.org/ns/synapse" name="switch_seq_1">
       <switch xmlns:ns="http://org.apache.synapse/xsd" source="...">
          <case regex="mac">
             <header name="Action" scope="default" value="getMacMenu"/>
             <send>
                <endpoint key="conf:/MenuProvider"/>
             </send>
          </case>
          <case regex="other">
             <header name="Action" scope="default" value="getOtherMenu"/>
             <send>
                <endpoint key="conf:/MenuProvider"/>
             </send>
          </case>
       </switch>
    </sequence>

    sanjeewa malalgoda

    How to enable per API logging in WSO2 API Manager - Create log file per API

    Through the log4j properties file it is possible to control logging behavior per API: it can create a separate log file per API, define a log category per proxy, etc. What you need to do is add the following to the file and restart the server.
    If needed, you can add this once to the file and then update the log level etc. from the management console.

    log4j.appender.TestAPI_APPENDER.layout.ConversionPattern=%d{ISO8601} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n
    log4j.category.API_LOGGER.admin--CalculatorAPI= TRACE,  TestAPI_APPENDER
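    The two lines above assume an appender named TestAPI_APPENDER has already been defined. A fuller sketch, where the appender class and file path are assumptions following standard log4j conventions and the log file name mentioned below:

    ```properties
    # Assumed appender definition; only the ConversionPattern and category lines are from the original
    log4j.appender.TestAPI_APPENDER=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.TestAPI_APPENDER.File=${carbon.home}/repository/logs/wso2-APILogs-service.log
    log4j.appender.TestAPI_APPENDER.layout=org.apache.log4j.PatternLayout
    log4j.appender.TestAPI_APPENDER.layout.ConversionPattern=%d{ISO8601} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n
    log4j.category.API_LOGGER.admin--CalculatorAPI=TRACE, TestAPI_APPENDER
    log4j.additivity.API_LOGGER.admin--CalculatorAPI=false
    ```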

    If you enabled it successfully, then when you invoke APIs it will create a wso2-APILogs-service.log file in the repository/logs directory, and you can see that API-specific invocation details are added to that log file. The content will be something like the following.

    2016-07-04 14:26:29,599 [-] [localhost-startStop-1]  INFO admin--CalculatorAPI Initializing API: admin--CalculatorAPI:v1.0
    2016-07-04 14:26:57,546 [-] [PassThroughMessageProcessor-8]  WARN admin--CalculatorAPI ERROR_CODE : 0 ERROR_MESSAGE : Access-Control-Allow-Headers:authorization,Access-Control-Allow-Origin,Content-Type,Access-Control-Allow-Methods:GET,Access-Control-Allow-Origin:*, Unexpected error sending message back
    2016-07-04 14:26:57,548 [-] [PassThroughMessageProcessor-8]  WARN admin--CalculatorAPI Executing fault sequence mediator : fault
    2016-07-04 14:26:57,592 [-] [PassThroughMessageProcessor-8]  INFO admin--CalculatorAPI STATUS = Executing default 'fault' sequence, ERROR_CODE = 0, ERROR_MESSAGE = Access-Control-Allow-Headers:authorization,Access-Control-Allow-Origin,Content-Type,Access-Control-Allow-Methods:GET,Access-Control-Allow-Origin:*, Unexpected error sending message back

    sanjeewa malalgoda

    How to limit API edit to only API owner and prevent API edits from other users in WSO2 API Manager

    All users in the same tenant with the publisher role can edit APIs. Users in a different tenant cannot edit APIs created by users in another tenant, even if they have the publisher role. The super admin and users who have the publisher role in the API's tenant can edit it.

    But we have few other solutions to address similar use cases.

    If you need to prevent API developers from updating running APIs in the system, we have a solution for that: use two different roles, for API creators and publishers, with the relevant permissions. API developers will then have permission only to create and edit APIs; they cannot publish those APIs or change running APIs. Only publishers can review changes with the API developer and publish them to the runtime.

    Another workaround is to create a different role for the APIs you don't want other users to access.
    Then grant login and API create/publish permissions to that role, and manually set the registry permissions of the API resource so that only users with the newly created role can edit and modify it.
    But please note that manual resource-permission changes can lead to issues if you don't know exactly what you are doing.
    Alternatively, you can go to the provider level in the registry (/_system/governance/apimgt/applicationdata/provider/sanjeewa) and control access at that level as follows. Here only the sanjeewa user (who has only the api_creator role) has permission to edit the resource; the other users have the api_publisher role and cannot modify or update it.

    When you control access at the registry level, if someone tries to edit that API, error logs will be printed saying they don't have permission to edit the API, and in the UI they will see an error message along the lines of "error while updating API". In summary, this is not a good solution, but if you need to protect one API from others you may use a similar approach. We do not recommend this for many APIs.



    Service oriented architecture (SOA) is the best way to develop software (services) to achieve any business use case. But SOA requires a high level of coordination and collaboration between many teams within the enterprise, from business teams to IT (information technology) teams, as well as among other teams and departments. This coordination and collaboration can be achieved by implementing a proper SOA governance model, which deals with the tasks, processes and people for defining and managing how services are created, supported and managed.

    Governance is more of a political issue than a technical one. While technology focuses on the interfaces, protocols and specifications, the business worries about how to serve customers. But both the technical and business sides emphasize the requirements to satisfy customers. Governance is involved in all those aspects, even though they are separate efforts and processes. Governance ensures that everyone involved is working together, not contradicting each other, so that the financial side of the business is achieved and the customer is satisfied.

    What is governance

    Exercising governance in SOA (service oriented architecture) means implementing a process to ensure that everything is done according to protocols, defined guidelines, best practices, controls, management, rules, responsibilities and other related factors. Effective SOA governance must consider people, processes, technologies, deliverables and QoS (quality of service) across the entire lifecycle: from identifying the business use case of a service, through implementing, testing and delivering it, up to reuse and until the service is retired or no longer usable.

    SOA governance consists of two phases.
    1. Design time governance
    This includes the process from identifying a business use case to developing and implementing it as a service.
    2. Run time governance
    This includes the process of delivering the service for end users to consume, and enforcing policies to manage and control who can access the service and what they can do with it.

    The reason for having two phases in governance is that enforcing policies and SLAs from within the service itself is expensive and unmanageable. This will be discussed more in the runtime governance section.

    Why Governance

    There are many reasons why proper governance is needed, not only for SOA but for everything we do. If there is no proper planning involved in a project, there is a high chance that the project ends up in chaos.

    Let's imagine a situation where there is no SOA governance involved in service development. Suppose X is working at the ABC financial institute, in the loan branch, and he has implemented a service where customers can view their current loan balance. Someone (let's say Y) in the savings branch implements another service which can be used to view a customer's current account balance. The account balance should show the loan balance too, and Y does not know that there is an existing service which he can use to get the loan balance. So Y implements a new service which shows the loan balance without using the existing service that X created.

    Someone (let's say Z) in another department of the ABC company starts using X's loan balance service in his application. That application starts sending heavy traffic to X's loan service and crashes the server. Now X starts getting calls from people he does not know, complaining that the loan balance service is not working. X finds out that there are a lot of people other than Z in the ABC company who use his loan balance service in their applications. Now X is in big trouble because his service is not working. This causes the ABC company to lose revenue, and also the credibility to sustain its business.

    This is a very simple scenario which could happen in any company or enterprise in the world right now. Where there is a lack of effective SOA governance, repetition of services, unknown consumers of services, and overuse of resources and services can happen. This could lead to major financial losses for a company.

    But when a proper SOA governance process is involved in the service lifecycle, it increases the possibility of achieving business goals and objectives.

    Some reasons for having proper SOA governance in place

    1. Shows well-structured responsibility management to empower people
    2. Makes effectiveness easy to measure
    3. Provides well-defined rules, protocols and policies to meet the business goals
    4. Avoids repetition and encourages reuse of existing services
    5. Provides a mechanism to ensure specifications are followed in detail

    Design Time Governance

    What is Design Time Governance

    SOA design time governance is the process of enforcing rules and protocols from identifying the business use case to implementing it as a service. This relates to the design time service lifecycle, which the following diagram illustrates.

    SOA governance starts with identifying the business use case. Most probably this is done by a business analyst or somebody with domain knowledge of the business. They analyze modern business trends, find business opportunities, and bridge business problems to technology solutions.

    The next stage of the SOA governance lifecycle is implementing the business requirement as services. This is typically done by software developers, who should follow the SOA architecture and specifications when developing those services, and must ensure that the defined protocols, guidelines and rules are followed to the point.

    The implemented service(s) should then be tested by the developers, who should create unit tests, functional tests and so on to complete the testing.

    The next phase of the design time lifecycle is QA testing. This is the stage at which services are tested for validation (whether the developed services fulfil the business requirement) and performance (how load is handled by the service, and the maximum load before it crashes).

    The next stage is sending the service to production for consumption. This is a critical moment where runtime governance starts to come into the picture, and some decisions have to be taken, such as:

    • Who can invoke the service?
    • What can they do with it?
    • How long will the service be available for consumption?
    • How many requests will be served at a given moment?

    Once the service(s) is in production, it belongs to runtime governance, which will be discussed in the runtime governance section.

    The final state of the service lifecycle is retiring the service(s), when the current implementation is no longer valid for current business requirements. Retirement can also happen because a new version of the same service has been implemented.

    Use of tools in Design time Governance

    To enforce proper design time governance we must use tools designed for that purpose. WSO2 GREG is a tool specially designed to cater to these requirements: we capture the metadata of the service at each of the stages discussed so far.

    Runtime Governance

    Why do we have a runtime governance phase in SOA governance? The answer is simple: it is very inefficient and ineffective to build runtime governance capabilities into the service implementation itself. Let us examine the requirements of the runtime governance phase, so we can better understand why runtime and design time governance need to be separated.

    1. Access control
      1. Authentication  - (Who can use the service)
      2. Authorization  - (What they can do with it)
    2. Logging
    3. Enforcing policies
    4. Versioning
    5. Statistics and Monitoring
      1. Response time
      2. Success and failure rates
      3. Per user usage
      4. Per service usage
      5. etc..
    6. etc… (This list is open ended)

    As shown in the above image, the service implementation must be separated from policy enforcement (runtime governance). Otherwise the runtime governance requirements have to be implemented within the service itself, and it becomes a nightmare to manage both the service and the runtime governance requirements together. So separating the service from policy enforcement is the most suitable way to achieve runtime governance.
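    To make that separation concrete, here is a minimal, illustrative Python sketch (not WSO2 code; all names are hypothetical). The service implementation knows nothing about governance, while a separate enforcement layer handles access control, logging and statistics:

    ```python
    # Illustrative sketch: the service stays free of governance concerns,
    # while a separate enforcement layer applies runtime governance policies.

    def loan_balance(customer_id):
        # The service itself knows nothing about who may call it or how often.
        return {"customer": customer_id, "balance": 1200.0}

    class PolicyEnforcer:
        """Sits in front of a service and applies runtime governance policies."""

        def __init__(self, service, allowed_users):
            self.service = service
            self.allowed_users = allowed_users          # authorization policy
            self.stats = {"success": 0, "failure": 0}   # statistics and monitoring
            self.log = []                               # logging

        def invoke(self, user, *args):
            self.log.append((user, args))
            if user not in self.allowed_users:          # access control
                self.stats["failure"] += 1
                raise PermissionError("Invalid Credentials")
            result = self.service(*args)                # delegate to the real service
            self.stats["success"] += 1
            return result
    ```

    Swapping the enforcement layer (for example, for an API gateway such as WSO2 APIM) then requires no change to the service itself.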

    Monitoring and Statistics

    This is one of the most critical requirements in SOA runtime governance. It gives the service provider the ability to measure the effectiveness of the services provided.

    Designtime and Runtime Governance Together

    Please consider the following image.

    As you can see, I have aggregated both design time governance and runtime governance in a single image. The most striking thing to notice here is that runtime governance is used in four stages of design time governance.

    The reason is that runtime governance should not first be implemented when the service is already in production. It should be tested and verified from the service implementation state through to the production state, so policy enforcement should start as soon as implementation of the service begins. If it is not done in those states of the design time lifecycle, there will be a lot of complications when applying the policies in production, and the ramifications will bring a nightmare to DevOps trying to correct issues in the SLA policies. Even those policies must go through testing and validation criteria.

    Use case

    Let’s imagine a use case where a company needs a complete governance process. Company X is a financial company mainly focused on giving its customers a full financial solution spanning savings, loans, fund management and brokering. The company has hundreds of employees, from business analysts to software engineers to DevOps staff, who perform various tasks in the service lifecycle. Business analysts are responsible for analysing the business and proposing solutions; software engineers are responsible for implementing the proposed business solutions as services; DevOps are responsible for delivering the services in an efficient and reliable way.

    So let’s imagine there is a need for a service to view the current account balance. The following activities will be carried out, and questions answered, to provide this view-account-balance functionality as a service.

    1. Identify the use case -
      1. Why is it needed?
      2. What is the gap this service will fill?
      3. How will this benefit the company's revenue?
      4. Is this an urgent requirement?
      5. Are there any other services identical to this one that fulfil the requirement?
    2. Implement the requirement as a service -
      1. What language is used to implement the service?
      2. Is there web service documentation?
      3. Who is responsible for developing the service?
      4. What are the service name and version?
    3. Developer test -
      1. Does the service work without breaking anything?
      2. Has it been checked with a code compatibility tool?
      3. Is error handling done properly?
    4. QA Testing -
      1. Does the implemented service fulfil the business requirement? (acceptance test)
      2. Does the service work smoothly? (functional testing)
      3. Is seamless integration with other services possible? (integration test)
      4. How does the service react to a high load? (load test)
    5. Deploy on a public server for public usage
      1. Enforcing rules and policies. This will be discussed later.
    6. Retiring the service
      1. Is anyone still using this service, or has everyone migrated to the new version?

    Now we have an idea of the details we need to capture in every state of the service lifecycle, so we need some kind of tool to record all of this data. This is where the WSO2 product stack plays its role.

    GREG is for design-time governance and APIM for run-time governance.
    Let’s see how we can use both WSO2 GREG and WSO2 APIM for the above use case.


    WSO2 GREG is a product specially designed to provide the right level of governance for SOA. You can find more information about it at the following URL.
    You can download and try WSO2 GREG from this location.


    WSO2 APIM is a complete solution for managing APIs and routing traffic, and especially for managing runtime governance.
    The documentation for the recent release is available here.
    The latest WSO2 APIM product can be downloaded from this URL.

    Integrated Solution

    Please look at the following diagram. The deployment of both GREG and APIM will be as follows, according to what we discussed in the article.

    As shown on the left side of the image, there is a GREG cluster responsible for design time governance, and there are clusters of API Managers responsible for runtime governance.

    Let me explain the rationale behind the image.
    The GREG cluster is used as the tool for managing metadata related to design time governance. It stores metadata such as schemas, WSDLs, APIs, owners, URLs, policies and so on, and is responsible for migrating the metadata between the different design time lifecycle states.
    As seen in the image, there are separate APIM clusters in each environment, acting as the runtime governance enforcers; they help enforce SLAs on service consumers.
    BAM (DAS) will be used as the monitoring tool that keeps track of usage-related data.

    As a middleware company, WSO2 has a full stack of products to support both design time and runtime governance effectively.

    Chamara SilvaEnable solr indexing debug log of WSO2 servers.

    Most WSO2 products come with an embedded registry that stores several types of service artifacts. Each artifact is stored as a registry resource, kept in the underlying database as blob content. In practical scenarios, WSO2 products (ESB, Governance Registry, API Manager etc.) have millions of registry resources. If a WSO2 product wants to search for a specific

    sanjeewa malalgodaHow to revoke access tokens belonging to a user without logging into the Identity Server dashboard and without having application internal details (WSO2 API Manager, Identity Server)

    We need a full Identity Server profile to do this, as the Identity Server dashboard runs only within a complete Identity Server runtime (not in the API Manager runtime).
    But even from that application we issue exactly the same SOAP calls listed below.
    If you do not run a full Identity Server profile anywhere in the deployment, you have 2 solutions:
    01. As discussed, mitigate the security risk by shortening the access/refresh token lifetime or completely disabling the refresh token.
    02. Implement a custom web application where users can log in and revoke their tokens using the following web service calls. This approach involves some development tasks and UI implementation.

    Here I have listed the complete steps to list your applications and revoke tokens for them.
    You may try this with SOAP UI or any other SOAP service client if required. I assume you have installed the latest patches issued for API Manager.

    Generate an access token using the password grant type (or any other grant type) as follows.
    curl -k -d "grant_type=password&username=sanjeewa&password=sanjeewa" -H "Authorization: Basic ajdNc2pIUzBBMHkwQW9XcUlxcWcyMDZROEdVYTpLNXFPVV9HSDNSZWtNZV91d240U2pUVldscTRh"

    Now use the generated token to access the API service. As you can see, you will get a successful response.
    curl -X GET --header "Accept: application/json" --header "Authorization: Bearer a745bd59ecd37afd84f686b887e9ff5d" ""
    {"answer": "8.0"}

    Now let's revoke the token using the SOAP service.
    URL: https://sanjeewa-ThinkPad-X1-Carbon-3rd:9443/services/OAuthAdminService
    Please note that you need to pass the basic auth credentials of the user, and in the request payload you need to pass the name of your application.

    One limitation here is that you need to know the application name before doing this operation. If you use the Identity Server dashboard you will see all app names,
    but we have a web service for that as well.

    POST https://sanjeewa-ThinkPad-X1-Carbon-3rd:9443/services/OAuthAdminService.OAuthAdminServiceHttpsSoap11Endpoint HTTP/1.1
    Accept-Encoding: gzip,deflate
    Content-Type: text/xml;charset=UTF-8
    SOAPAction: "urn:getAppsAuthorizedByUser"
    Authorization: Basic YWRtaW46YWRtaW4=
    Content-Length: 231
    Host: sanjeewa-ThinkPad-X1-Carbon-3rd:9443
    Connection: Keep-Alive
    User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

    <soapenv:Envelope xmlns:soapenv="" 

    <soapenv:Envelope xmlns:soapenv="">
    <ns:getAppsAuthorizedByUserResponse xmlns:ns="http://org.apache.axis2/xsd" xmlns:ax2434="" xmlns:ax2430="" xmlns:ax2431="">
             <ns:return xsi:type="ax2434:OAuthConsumerAppDTO" xmlns:xsi="">
                <ax2434:OAuthVersion xsi:nil="true"/>
                <ax2434:callbackUrl xsi:nil="true"/>
                <ax2434:grantTypes>urn:ietf:params:oauth:grant-type:saml2-bearer iwa:ntlm implicit refresh_token client_credentials authorization_code password</ax2434:grantTypes>
                <ax2434:oauthConsumerKey>IVPk3cKAsyo2tZImmvv0cD7kkXAa</ax2434:oauthConsumerKey>
                <ax2434:oauthConsumerSecret xsi:nil="true"/>
             <ns:return xsi:type="ax2434:OAuthConsumerAppDTO" xmlns:xsi="">
                <ax2434:OAuthVersion xsi:nil="true"/>
                <ax2434:applicationName>admin_DefaultApplication_PRODUCTION</ax2434:applicationName>
                <ax2434:callbackUrl xsi:nil="true"/>
                <ax2434:grantTypes>urn:ietf:params:oauth:grant-type:saml2-bearer iwa:ntlm implicit refresh_token client_credentials authorization_code password</ax2434:grantTypes>
                <ax2434:oauthConsumerKey>j7MsjHS0A0y0AoWqIqqg206Q8GUa</ax2434:oauthConsumerKey>
                <ax2434:oauthConsumerSecret xsi:nil="true"/>

    As you can see, here you can list all applications belonging to the authorized user.
    Then we can pick the required application and revoke tokens for it. Please refer to the steps listed below.

    Complete Request:
    POST https://sanjeewa-ThinkPad-X1-Carbon-3rd:9443/services/OAuthAdminService.OAuthAdminServiceHttpsSoap11Endpoint HTTP/1.1
    Accept-Encoding: gzip,deflate
    Content-Type: text/xml;charset=UTF-8
    SOAPAction: "urn:revokeAuthzForAppsByResoureOwner"
    Authorization: Basic c2FuamVld2E6c2FuamVld2E=
    Content-Length: 811
    Host: sanjeewa-ThinkPad-X1-Carbon-3rd:9443
    Connection: Keep-Alive
    User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

    <soapenv:Envelope xmlns:soapenv="" 
                <!--Zero or more repetitions:-->

    <soapenv:Envelope xmlns:soapenv="">
          <ns:revokeAuthzForAppsByResoureOwnerResponse xmlns:ns="http://org.apache.axis2/xsd">
             <ns:return xsi:type="ax2434:OAuthRevocationResponseDTO" xmlns:ax2434="" xmlns:ax2430="" xmlns:xsi="" xmlns:ax2431="">
                <ax2434:errorCode xsi:nil="true"/>
                <ax2434:errorMsg xsi:nil="true"/>

    Now we have revoked all tokens obtained for the default application. Let's try to invoke the API again and see what happens.
    curl -X GET --header "Accept: application/json" --header "Authorization: Bearer 2824ce3682f3cc1396a32dbc0dd4f92a" ""
    <ams:fault xmlns:ams=""
    <ams:message>Invalid Credentials</ams:message>
    <ams:description>Access failure for API: /calc/1.0, version: 1.0. Make sure your have given the correct access token</ams:description></ams:fault>

    Now you will see that the access token is revoked and cannot be used anymore.
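    The revocation call can also be scripted. The following hedged Python sketch only builds the request headers and SOAP envelope for the revokeAuthzForAppsByResoureOwner operation used in this post; the payload element names (revokeRequestDTO, apps) are assumptions, not taken from the actual OAuthAdminService WSDL, so verify them against the service before use. Sending the request (e.g. with urllib) is left to the caller.

    ```python
    import base64

    # Skeleton of the SOAP envelope; the inner element names are assumed,
    # not confirmed against the OAuthAdminService WSDL.
    SOAP_TEMPLATE = """<soapenv:Envelope
      xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
      xmlns:xsd="http://org.apache.axis2/xsd">
      <soapenv:Body>
        <xsd:revokeAuthzForAppsByResoureOwner>
          <xsd:revokeRequestDTO>
            <xsd:apps>{app_name}</xsd:apps>
          </xsd:revokeRequestDTO>
        </xsd:revokeAuthzForAppsByResoureOwner>
      </soapenv:Body>
    </soapenv:Envelope>"""

    def build_revoke_request(app_name, username, password):
        """Return (headers, body) for the SOAP call; posting it is up to the caller."""
        credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
        headers = {
            "Content-Type": "text/xml;charset=UTF-8",
            "SOAPAction": '"urn:revokeAuthzForAppsByResoureOwner"',
            "Authorization": "Basic " + credentials,   # user's basic auth credentials
        }
        return headers, SOAP_TEMPLATE.format(app_name=app_name)
    ```

    For the user sanjeewa/sanjeewa this produces the same Authorization header shown in the request above.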


    sanjeewa malalgodaHow to recover application if application owner is deleted or blocked in API Store- WSO2 API Manager.

    If you are trying to recover applications created by one user, we have a quick hack. We discussed how to solve this for APIM 1.9.0 and later versions in this post.

    Now let's see how we can do the same for APIM 1.8.0.
    • You need to create a user with the application subscriber role using the Management Console, or use an existing user who has application subscriber permissions.
    • The SQL statements below need to be run against the WSO2AM_DB.
    • Identify the SUBSCRIBER_ID and USER_ID of both users (the user who left the organization and the user to whom we are transferring application ownership) from the AM_SUBSCRIBER table (eg: SELECT * FROM AM_SUBSCRIBER).
    • Identify the application that needs ownership transferred from the AM_APPLICATION table.
    • Update the identified application's SUBSCRIBER_ID in the AM_APPLICATION table with the newly created user's SUBSCRIBER_ID.
    • Update the SUBSCRIBER_ID to the new user's SUBSCRIBER_ID in AM_APPLICATION_REGISTRATION (identify the entries to update based on the old user's SUBSCRIBER_ID).
    • Update AUTHZ_USER with the newly created user's USER_ID in the IDN_OAUTH2_ACCESS_TOKEN table (identify the entries to update based on the old user's USER_ID).
    • Update USERNAME with the newly created user's USER_ID in the IDN_OAUTH_CONSUMER_APPS table (identify the entries to update based on the old user's USER_ID/USERNAME).
    Note: In the above examples, the new user's SUBSCRIBER_ID=2 and USER_ID=user2
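    As an illustration of the ownership-transfer update, here is a small Python/sqlite sketch. The real tables live in the MySQL WSO2AM_DB and have many more columns; the table shapes and IDs below are simplified assumptions for demonstration only.

    ```python
    import sqlite3

    # Simplified, assumed table shapes; the real schema has more columns.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE AM_SUBSCRIBER (SUBSCRIBER_ID INTEGER, USER_ID TEXT);
    CREATE TABLE AM_APPLICATION (APPLICATION_ID INTEGER, NAME TEXT, SUBSCRIBER_ID INTEGER);
    INSERT INTO AM_SUBSCRIBER VALUES (1, 'olduser'), (2, 'user2');
    INSERT INTO AM_APPLICATION VALUES (10, 'test-app', 1);
    """)

    # Point the application at the new subscriber (SUBSCRIBER_ID=2).
    conn.execute("UPDATE AM_APPLICATION SET SUBSCRIBER_ID = 2 WHERE APPLICATION_ID = 10")

    # Verify the application now resolves to the new owner.
    new_owner = conn.execute(
        "SELECT s.USER_ID FROM AM_APPLICATION a JOIN AM_SUBSCRIBER s "
        "ON a.SUBSCRIBER_ID = s.SUBSCRIBER_ID WHERE a.APPLICATION_ID = 10"
    ).fetchone()[0]
    ```

    The same UPDATE pattern applies to the other tables listed above, keyed on the old user's SUBSCRIBER_ID or USER_ID.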

    Please note that this is a hack and not a formal solution. We are providing the above workaround on the assumption that you are having issues accessing applications whose owner has left the organization.

    sanjeewa malalgodaHow to revoke access tokens generated by a client application on behalf of a user - WSO2 API Manager

    First let me explain how OAuth 2.0 works when securing APIs in WSO2 API Manager.

    01. The application owner/designer will create an application by bundling a set of APIs, and create an OAuth application for it.
    02. Then they will embed the application access token, consumer key and secret within the application and release it to the client app store.
    03. When a client wants to use the application, he provides user credentials (or an authentication challenge) and gets an access token and a refresh token.
    04. The access token has a limited validity period (by default 1 hour) and needs to be renewed after that time.
    05. The same applies to the refresh token, so the application cannot use an access token for more than 1 hour without user credentials.
    06. After one hour (again, this is configurable; if needed you can reduce the time to 5 minutes or so) the application will not be able to do anything on behalf of the user.

    From the end user's point of view, the underlying authentication mechanism does not matter (whether it is OAuth or anything else), and in most cases we don't want to revoke tokens specifically unless the client application thinks the end user is misusing the application.

    But in some cases the user does not want the app to proceed with the tokens generated on his behalf. In that case the user needs to revoke the tokens himself.
    If the user does not trust the client application (if the device is stolen or the app seems to misbehave), he should log into another system (usually the authorization server) and ask it to revoke the access tokens belonging to him.

    If the user is willing to do something like that, we can use the Identity Server dashboard (please refer to the attached screenshot of the Identity Server dashboard, where we list applications that obtained tokens on behalf of the user).

    If you remove an application from the authorized apps, all tokens obtained by the app will be revoked. Users can log into their user profile, see the applications that generated tokens, and revoke them if needed.

    Please note that you need to install the API Management features on top of Identity Server to enable this. Alternatively, we can direct users to a web app (implemented using SOAP services to revoke tokens) where they can list active access tokens and revoke them.

    And if you believe a client application misuses the refresh token to generate tokens again and again on behalf of the user, we can completely disable the refresh token grant handler (the configuration is available in the identity.xml configuration file), or reduce the refresh token validity period. With that, I believe we can solve issues caused by misuse of the refresh token. Please see the screenshot below, where you can disable the refresh grant per app, or disable it completely at the server level.

    sanjeewa malalgodaHow to search specific API by name using new REST API in WSO2 API Manager 1.10

    Let's say we have 3 APIs created, named API, API1 and API2.
    The curl command below will retrieve the details of all 3 APIs, because all of them contain "API" as part of the name.

    curl -H "Authorization: Bearer 8551158c1"'API'
    curl -H "Authorization: Bearer 94de782ddd64d3fea012bed4a71d764b"'Name:API'

    But sometimes you may feel it didn't return the exact results you wanted (only API, not API1 and API2).
    The reason is that for matching we use a regex that matches any string containing the term anywhere in the tested string.
    So if you need an exact string match, there is a small hack for that. If you invoke the API as follows, adding ^ before the search term and $ after it, it will return only the exact match. See the following command. The same applies to the owner, content, version and other parameters too.

    curl -H "Authorization: Bearer 94de782ddd64d3fea012bed4a71d764b"'Name:^CalculatorAPI$'
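    The anchoring behaviour can be sketched in a few lines of Python (illustrative only; the actual matching happens server-side in the API, not in this code):

    ```python
    import re

    names = ["API", "API1", "API2", "CalculatorAPI"]

    # Unanchored search: matches the term anywhere in the name.
    loose = [n for n in names if re.search("API", n)]

    # Anchored with ^ and $: matches only the exact name.
    exact = [n for n in names if re.search("^API$", n)]
    ```

    The loose pattern matches all four names, while the anchored pattern matches only "API".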

    Dinusha SenanayakaHow to configure WSO2 App Manager with MySQL ?

    All WSO2 products come with an internal H2 database (or several) by default, and any of these products can be configured with a database other than H2. App Manager contains several databases unique to App Manager that are not common to other WSO2 products. If you are familiar with changing the default database of any WSO2 product, the concept is similar for App Manager. Anyway, here I'm explaining step by step what each database in App Manager does and how we can configure them to use MySQL.

    We have following five datasources defined in the {AppM_Home}/repository/conf/datasources/master-datasources.xml file.

    [1] <name>jdbc/WSO2CarbonDB</name> - Used for registry data storage, user store and permission store
    [2] <name>jdbc/WSO2AM_DB</name> - Used to store details of apps created in App Manager
    [3] <name>jdbc/WSO2AM_STATS_DB</name> - Stats storage; needed only when the Data Analytics Server is configured
    [4] <name>jdbc/ES_Storage</name> - Used to store thumbnail/banner images of apps
    [5] <name>jdbc/WSO2SocialDB</name> - Used to store comments and ratings added to apps by store users

    This guide explains configuring a standalone App Manager instance with a MySQL DB. If it is a cluster deployment, some additional configuration is required beyond what is explained in this post.

    Step 0: Login to the MySQL server and create 4 databases.
    mysql> create database appm_regdb;
    Query OK, 1 row affected (0.01 sec)

    mysql> create database appm_appdb;
    Query OK, 1 row affected (0.00 sec)

    mysql> create database appm_esdb;
    Query OK, 1 row affected (0.00 sec)

    mysql> create database appm_socialdb;
    Query OK, 1 row affected (0.00 sec)

    Step 1: Change the datasources mentioned previously (other than STATS_DB) as follows, pointing them to the MySQL databases created above. You need to replace the config with a valid database username/password.
    Note that the <defaultAutoCommit>false</defaultAutoCommit> property is mandatory for the "jdbc/WSO2AM_DB" datasource.

    File location: {AppM_Home}/repository/conf/datasources/master-datasources.xml

                <description>The datasource used for registry and user manager</description>
                <definition type="RDBMS">
                        <validationQuery>SELECT 1</validationQuery>

                <description>The datasource used for APP Manager database</description>
                <definition type="RDBMS">
                        <validationQuery>SELECT 1</validationQuery>

               <description>The datasource used for by the Jaggery Storage Manager</description>
               <definition type="RDBMS">

                <description>The datasource used for social framework</description>
                <definition type="RDBMS">
                        <validationQuery>SELECT 1</validationQuery>
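    For reference, a complete datasource entry in master-datasources.xml typically looks like the sketch below. This is a hedged example: the URL, database name, credentials and driver class are placeholders for your environment, not values from this post.

    ```xml
    <!-- Illustrative MySQL datasource entry; replace URL and credentials
         with values for your environment. -->
    <datasource>
        <name>WSO2AM_DB</name>
        <description>The datasource used for APP Manager database</description>
        <jndiConfig>
            <name>jdbc/WSO2AM_DB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/appm_appdb</url>
                <username>appm_user</username>
                <password>appm_pass</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <validationQuery>SELECT 1</validationQuery>
                <defaultAutoCommit>false</defaultAutoCommit>
            </configuration>
        </definition>
    </datasource>
    ```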

    Step 2: Copy MySQL jdbc driver to {AppM_Home}/repository/components/lib directory

    Step 3: Start the server with -Dsetup option. This will create the required tables in above four databases.
    sh -Dsetup

    That's all.

    Evanthika AmarasiriHow to list admin services used by WSO2 carbon based servers

    WSO2 products provide admin services to perform various tasks, but there is no documentation listing the services that are provided. Therefore, to list all these admin services, all you have to do is start the server with -DosgiConsole and type the command listAdminServices in the OSGi console.

    This is clearly explained in the stack overflow question at [1].

    [1] -

    sanjeewa malalgodaHow to recover an application if the application owner is deleted or blocked in the API Store - WSO2 API Manager

    From API Manager 1.9.0 onward we can address this using the subscription sharing feature. All users within the same group can add, remove and update API subscriptions, delete apps, etc.
    So basically all users in the same group can do anything with the application, and if one user leaves the organization, the others in the group can act as the application owner.

    Now let's see how we can recover an application in API Manager 1.8.0 and lower versions. (You can use the same approach in API Manager 1.9.0 and later if needed.)

    Create a new user from the Management Console, or use any other existing user to whom the application ownership should be transferred.
    Assign application-specific roles to that user (if they exist).

    Then update CREATED_BY and SUBSCRIBER_ID to the new user's values as follows.
    mysql> select * from AM_APPLICATION;
    |              1 | DefaultApplication |             1 | Unlimited        | NULL         | NULL        | APPROVED           |          | admin      | 2016-06-24 13:07:40 | NULL       | 0000-00-00 00:00:00 |
    |              2 | test-admin         |             1 | Unlimited        |              |             | APPROVED           |          | admin      | 2016-06-28 17:44:44 | NULL       | 0000-00-00 00:00:00 |
    |              3 | DefaultApplication |             2 | Unlimited        | NULL         | NULL        | APPROVED           |          | sanjeewa   | 2016-06-28 17:46:49 | NULL       | 0000-00-00 00:00:00 |

    mysql> update AM_APPLICATION set CREATED_BY ='sanjeewa' where NAME = 'test-admin';
    mysql> update AM_APPLICATION set SUBSCRIBER_ID ='2' where NAME = 'test-admin';

    Now table will look like following
    mysql> select * from AM_APPLICATION;
    |              1 | DefaultApplication |             1 | Unlimited        | NULL         | NULL        | APPROVED           |          | admin      | 2016-06-24 13:07:40 | NULL       | 0000-00-00 00:00:00 |
    |              2 | test-admin         |             2 | Unlimited        |              |             | APPROVED           |          | sanjeewa   | 2016-06-28 17:51:03 | NULL       | 0000-00-00 00:00:00 |
    |              3 | DefaultApplication |             2 | Unlimited        | NULL         | NULL        | APPROVED           |          | sanjeewa   | 2016-06-28 17:46:49 | NULL       | 0000-00-00 00:00:00 |

    Then go to the API Store and log in as the new user.
    You can generate new access tokens and add new API subscriptions to the application.

    sanjeewa malalgodaHow to enforce users to add only https URLs for call back URL when you create Application in API Store

    Even though it is not required, TLS is strongly recommended for client applications. Since it is not mandated by the spec, we let users add both http and https URLs. But if you need to let users add only HTTPS URLs, there is a solution for that as well. Since all users come to the API Store to create applications, we can restrict them to HTTPS URLs there. You can do this with the following steps.

    (1) Navigate to "/repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes" directory.
    (2) Create a directory with the name of your subtheme. For example "test".
    (3) Copy the "/wso2am-1.10.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/templates/application/application-add/js/application-add.js" to the new subtheme location "repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes/test/templates/application/application-add/js/application-add.js".
    (4) Update the $("#appAddForm").validate call in the copied file as follows.

    You should replace

    submitHandler: function(form) {
        applicationAdd();
    }

    with the following, which submits only when the callback URL is an https URL:

    submitHandler: function(form) {
        var callbackURLTest = $("#callback-url").val();
        var pattern = /^((https):\/\/)/;
        if (pattern.test(callbackURLTest)) {
            applicationAdd();
        } else {
            window.alert("Please enter a valid URL for the Callback URL. It is recommended to use an https URL.");
        }
    }


    (5) Then edit the "/repository/deployment/server/jaggeryapps/store/site/conf/site.json" file as below to make the new subtheme the default theme.
    "theme" :
    { "base" : "fancy", "subtheme" : "test" }

    Then users will be able to add only HTTPS URLs when they create applications in the API Store.
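    The core of the check in step (4) is just a regular-expression test. Here is a minimal Python sketch of the same https-only validation (illustrative only, not part of the API Store code):

```python
import re

# Same idea as the JavaScript pattern /^((https):\/\/)/ used above:
# accept a callback URL only when it starts with https://.
HTTPS_PATTERN = re.compile(r"^https://")

def is_https_callback(url: str) -> bool:
    """Return True only for URLs that use the https scheme."""
    return bool(HTTPS_PATTERN.match(url))
```

    An http:// URL fails the test, which is exactly the case where the modified form shows the alert instead of submitting.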

    Krishantha Samaraweera: Test Automation Architecture

    The following three repos are used to facilitate test automation for WSO2 products. Each component is described later in this post.
    1. Test Automation Framework (git repo link - carbon-platform-integration)
    2. Carbon Automation Test Utils (git repo link - carbon-platform-integration-utils)
    3. Collection of all test suites and a test runner (git repo link - carbon-platform-automated-test-suite)

    Test Automation Framework components

    Architecture - AutomationFramework (2).png

    Components related to TestNG are marked in dark red.

    Automation framework engine

    • Automation Context Builder - Processes the automation.xml given through the test module and makes all configurations available through the Automation Context.

    • TestNG Extension Executor - Responsible for executing internal/external extension classes at the various states of the TestNG listeners.

    Pluggable utilities to the test execution (TestNG)

    There are several interfaces that allow you to modify TestNG's behaviour. These interfaces are broadly called "TestNG Listeners". The Test Automation Framework supports executing internal/external extension classes at the various states of the TestNG listeners. Users can define the class paths in the automation.xml file under the desired TestNG listener states.
    The Automation Framework uses Java reflection to execute these classes. Users are expected to implement the interfaces provided by the framework when developing external extension classes to be executed in a test. TAF provides five interfaces for the different TestNG listeners, each with methods defined to be in line with the corresponding listener. The interfaces are:
    • ExecutionListenerExtension
    • ReportListenerExtension
    • SuiteListenerExtension
    • TestListenerExtension
    • TransformListenerExtension
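    The reflection-based loading described above can be pictured with a small sketch. This is a hypothetical Python analogue of what TAF does with Java reflection; `collections.Counter` merely stands in for a class path that would be listed in automation.xml:

```python
import importlib

def load_extension(class_path: str):
    """Instantiate a class from its fully qualified dotted path,
    the way a framework instantiates listener extensions via reflection."""
    module_name, _, class_name = class_path.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)()

# Stand-in for a class path configured under a TestNG listener state.
extension = load_extension("collections.Counter")
```

    The framework only needs the dotted name from the configuration file; it never imports the extension class at compile time.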

    Automation Framework Common Extensions (Java)

    This module consists of a set of default pluggable modules common to the platform. These modules can be included in the test execution by registering them in the test configuration file (automation.xml). TAF provides common modules such as:
    • CarbonServerExtension - Handles startup and shutdown of the Carbon server; coverage generation is also part of this class.
    • Axis2ServerExtension - Handles Axis2 server startup and shutdown, providing a backend for integration tests.
    The extensions module also contains classes that facilitate third-party framework integration:
    • SeleniumBrowserManager - Creates a Selenium WebDriver instance based on the browser configuration given in automation.xml.
    • JmeterTestRunner - Executes JMeter test scripts in headless mode and injects the results into the Surefire test reports.
    Users can also add platform-wide common modules to the Automation Framework Extensions. Test-specific pluggable modules should be kept on the test module side; those modules can likewise be used in tests by registering them in the automation configuration file.

    Automation Framework Utils (Java)

    Utility components that provide frequently used functionality, such as sending SOAP and REST requests to a known endpoint or monitoring a port to determine whether it's available. You can add the Utils module to your test case to get the functionality you need. Some sample utilities are:
    • TCPMon Listener
    • FTP Server Manager
    • SFTP Server Manager
    • SimpleHTTPServer
    • ActiveMQ Server
    • Axis2 clients
    • JsonClients
    • MutualSSLClient
    • HTTPClient
    • HTTPURLConnectionClient
    • Wire message monitor
    • Proxy Server
    • Tomcat Server
    • Simple Web Server (To facilitate content type testing)
    • FileReader, XMLFileReader
    • WireMonitor
    • Concurrent request Generator
    • Database Manager (DAO)
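    As an illustration of the kind of helper the Utils module offers (this sketch is not the actual TAF utility), probing a TCP port to determine whether it is available can be done like this:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

    A test can call this in a loop to wait for a server to finish starting before sending requests.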

    Test Framework Unit Tests - Unit test classes to verify context builder and utility classes, depends on TestNG for unit testing.

    Carbon Automation Test Utils

    Carbon Platform Utils -  AutomationFramework.png

    Common Framework Tests

    There are integration test scenarios common to all WSO2 products. Implementing tests for those common scenarios in each product might introduce test duplication and test management difficulties. Thus, a set of common tests are introduced under the framework utilities module which can be used directly by extending the test classes.
    1. DistributionValidationTest
    2. JaggeryServerTest
    3. OSGIServerBundleStatusTest
    4. ServerStartupBaseTest

    Common Admin Clients

    This module consists of a set of admin clients that invoke backend admin services, together with supportive utility methods common to the WSO2 product platform.
    • ServerAdminClient
    • LogViewerAdminClient
    • UserManagementClient
    • SecurityAdminServiceClient

    Common Test Utils

    Frequently used test utility classes which depend on platform dependencies.
    • SecureAxis2ServiceClient
    • TestCoverageReportUtil - (To merge coverage reports)
    • ServerConfigurationManager (To change Carbon configuration files)
    • LoginLogoutClient

    Common Extensions

    Consists of pluggable classes that allow plugging additional extensions into the test execution flow. These frequently used test framework extension classes are available by default and use platform dependencies.

    e.g UserPopulatorExtension - For user population/deletion operations

    Platform Automated Test Suite

    Builds a distribution containing all integration and platform test jars released with each product. It contains an Ant-based test executor to run the test cases in each test jar file. This Ant script is based on the TestNG Ant task, which can be used as a test case runner. The Ant script also contains a mail task that can be configured to send a notification mail upon test execution. The PATS distribution can be used to run the tests against a pre-configured product cluster/distribution.

    PATS - AutomationFramework.png

    Nadeeshaan Gunasinghe: Dynamically Selecting SOAP Message Elements in WSO2 ESB

    Recently I faced a requirement where I had to dynamically select elements from a SOAP response message coming to WSO2 ESB. The use case is as follows.

    I have a proxy service that receives a request from a client; let's say the request looks like the following.

    <Request>
      <Values>
        <value>2</value>
      </Values>
    </Request>

    The value can change with each request. When the request is sent from the ESB to the backend server, we get a response like the following.

    <Response>
      <Events>
        <Event><TestEntry>Entry Val1</TestEntry></Event>
        <Event><TestEntry>Entry Val2</TestEntry></Event>
        <Event><TestEntry>Entry Val3</TestEntry></Event>
        <Event><TestEntry>Entry Val4</TestEntry></Event>
        <Event><TestEntry>Entry Val5</TestEntry></Event>
      </Events>
    </Response>

    Depending on the value specified in the request, we need to extract that number of event entries from the response (if the value in the request is 2, we extract two event entries).

    To achieve this requirement I used the configurations below.

    Sequence Calling the xslt Transform

    Use the xslt mediator to transform the incoming payload to ESB

    <sequence name="get_document_list_seq" trace="disable" xmlns="">
      <property expression="//value" name="limit"
                scope="default" type="STRING" xmlns:ns="http://org.apache.synapse/xsd"/>
      <xslt key="payload_transform" source="//Response"
            xmlns:ns="http://org.apache.synapse/xsd"
            xmlns:s11="" xmlns:tem="">
        <property expression="get-property('limit')" name="PARAM_NAME"/>
      </xslt>
      <respond/>
    </sequence>

    XSLT Transformation to transform the payload and extract the elements

    The property named PARAM_NAME, passed to the XSLT from the sequence above, is used to determine the number of elements to be extracted.

    <localEntry key="payload_transform" xmlns="">
      <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output indent="yes" method="xml" omit-xml-declaration="yes"/>
        <xsl:param name="PARAM_NAME"/>
        <xsl:template match="/">
          <list>
            <responseList>
              <xsl:for-each select="//Response/Events/Event[position()&lt;=number($PARAM_NAME)]">
                <entry>
                  <xsl:value-of select="TestEntry"/>
                </entry>
              </xsl:for-each>
            </responseList>
          </list>
        </xsl:template>
      </xsl:stylesheet>
    </localEntry>
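    To see what the transformation does without running the ESB, here is an illustrative Python sketch of the same selection logic (take the first N Event entries, as the XPath position() filter does):

```python
import xml.etree.ElementTree as ET

# Sample backend response, as in the article above.
RESPONSE = """<Response><Events>
<Event><TestEntry>Entry Val1</TestEntry></Event>
<Event><TestEntry>Entry Val2</TestEntry></Event>
<Event><TestEntry>Entry Val3</TestEntry></Event>
</Events></Response>"""

def first_n_entries(xml_text: str, limit: int):
    """Mimic Event[position() <= $PARAM_NAME]: keep the first `limit` entries."""
    root = ET.fromstring(xml_text)
    entries = root.findall("./Events/Event/TestEntry")
    return [entry.text for entry in entries[:limit]]
```

    Calling first_n_entries(RESPONSE, 2) returns the first two entry values, just as a request carrying <value>2</value> would cause the XSLT to emit two entry elements.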

    Chamila Wijayarathna: Setting Up WSO2 API Manager Cluster in a Single Server Using Puppet

    In this blog, I am going to write about setting up a WSO2 API Manager cluster with 4 API Manager instances running on different profiles as Gateway, Publisher, Store and Key Manager. More details about these profiles and how they operate can be found at [1]. If you are setting up the cluster on different server instances, this is quite straightforward, and [2] and [3] will guide you on how to do that.

    But in some cases, due to resource limitations, you may need to use the same server to host more than one instance. Doing this is a bit trickier than what I mentioned previously, and in this blog I am going to explain how to achieve it.

    Other than the nodes on which you are deploying the APIM instances, you need a separate node to use as the puppet master.

    Install puppet master in that node by following the information given at section 2.1.1 of [2].

    Install the puppet agent on the nodes where you are going to install the APIM instances by following section 2.1.2 of [2].

    Configure puppet master node and all client agent nodes with configurations mentioned in section 2.2 at [2].

    Now login to puppet master and add following custom configurations there.

    In /etc/puppet/hieradata/wso2/common.yaml, add the IP addresses and host names for VMs in your setup under wso2::hosts_mapping: as follows.

    # Host mapping to be made in /etc/hosts
        ip_address: 127.0.0.1
        hostname: localhost
        ip_address: <puppet master ip>
        hostname: <puppet master host>
        ip_address: <vm 1 ip>
        hostname: <vm 1 host>
        ip_address: <vm 2 ip>
        hostname: <vm 2 host>
        ip_address: <vm 3 ip>
        hostname: <vm 3 host>
        ip_address: <vm 4 ip>
        hostname: <vm 4 host>

    Change the install_dir property of /etc/puppet/hieradata/wso2/common.yaml as follows.

     wso2::install_dir: "/mnt/%{::ipaddress}/%{::product_profile}"

    Now, in the YAML file for each profile, specify the port offset you are going to use for the APIM instance running on that profile. For example, if you are using port offset 1 for the Store, add the following to /etc/puppet/hieradata/production/wso2/wso2am/1.10.0/default/api-store.yaml.

      offset: 1

    You need to specify port offsets for all profiles that should run with an offset.
    Also, define a different service name in each profile YAML you are going to use, as follows.

    Eg : wso2::service_name: wso2am_store

    Then follow section 3 at [2] to start the servers, but instead of using the script given at [2], use the following.

    echo "#####################################################"
    echo "                   Starting cleanup "
    echo "#####################################################"
    #rm -rf /mnt/*
    sed -i '/environment/d' /etc/puppet/puppet.conf
    echo "#####################################################"
    echo "               Setting up environment "
    echo "#####################################################"
    rm -f /etc/facter/facts.d/deployment_pattern.txt
    mkdir -p /etc/facter/facts.d

    while read -r line; do declare  $line; done < deployment.conf  

    echo product_name=$product_name >> /etc/facter/facts.d/deployment_pattern.txt  
    echo product_version=$product_version >> /etc/facter/facts.d/deployment_pattern.txt  
    echo product_profile=$product_profile >> /etc/facter/facts.d/deployment_pattern.txt  
    echo vm_type=$vm_type >> /etc/facter/facts.d/deployment_pattern.txt  
    echo platform=$platform >> /etc/facter/facts.d/deployment_pattern.txt

    echo "#####################################################"  
    echo "                    Installing "  
    echo "#####################################################"  

    puppet agent --enable  
    puppet agent -vt  
    puppet agent --disable

    For each instance, you will have to change deployment.conf and run the script above.


    Shazni Nazeer: Unleashing the Git - part 5 - git stashing

    More often than not, there are situations where you have made some changes and need to temporarily hide them from the working tree because they are not ready yet. Stashing comes into play in such situations. For example, when you have made changes on a branch and want to switch to another branch without committing them, stash is the easiest option.

    Say you have changed some files. Issuing git status would show what has changed.
    $ git stash // Temporarily hide your changes. Now issue git status; you won't see the modifications. Equivalent to $ git stash save
    $ git stash save --patch // Opens the text editor to choose which portions to stash
    $ git stash apply // Apply the last stashed change and keep the stash on the stack. This is normally not done; instead, the following is used
    $ git stash pop // Apply the last stashed change and remove it from the stack
    $ git stash list // Shows available stashes.
    Stashes have names in the format stash@{#}, where # is 0 for the most recent, 1 for the second most recent, and so on.
    $ git stash drop <stash name>   // e.g: git stash drop stash@{0} to remove the first stash
    $ git stash clear // Remove all the stashed changes
    $ git stash branch <stash name>  // Create a branch from an existing stash
    In the next part of this series we'll discuss git branching in detail.

    Shazni Nazeer: Unleashing the Git - part 4 - Git Tagging

    Tagging comes in very handy when managing a repository. Tags and branches are similar concepts, but tags are read-only. There are two types of tags: 1) lightweight and 2) annotated.
    $ git tag Basic_Features    // Creates a Basic_Features tag.
    $ git tag            // Shows available tags
    $ git show Basic_Features   // Shows the details of the commit on which the Basic_Features tag was added
    $ git tag beta1 HEAD^ // Creates a tag from the next-to-last commit
    Earlier I showed how to check out code with commit id. We can also use the tags to checkout code of a particular stage using the following command.
    $ git checkout Basic_Features
    All of the above are lightweight/unannotated tags. Creating an annotated tag is as simple as creating a lightweight tag, as below:
    $ git tag -a Basic_Messaging_Annotated -m "Annotated tag at Messaging"
    $ git show Basic_Messaging_Annotated   // this will provide the details of the tagger as well.
    Deleting a tag is the same for both tag types.
    $ git tag -d Basic_Messaging_Annotated  // Deletes the annotated tag we created.
    $ git push origin v1.0 // Push local tag v1.0 to remote origin.
    $ git push --tags origin // Push all the local tags to remote origin
    $ git fetch --tags origin // Fetches all the remote origin's tags to the local repository. Note: if you have a local tag with the same name as a remote tag, it will be overwritten

    Sachith Withana: Simple Indexing Guide for WSO2 Data Analytics Server

    Interactive search functionality in WSO2 Data Analytics Server is powered by Apache Lucene [1], a powerful, high-performing, full-text search engine.

    Lucene index data was kept in a database in the first version of the Data Analytics Server [2] (3.0.0), but from DAS 3.0.1 onwards, Lucene indexes are maintained on the local filesystem.
    The Data Analytics Server (DAS) has a separate server profile that enables it to perform as a dedicated indexing node.

    When a DAS node is started with indexing enabled (disableIndexing=false), there are quite a few things going on. But first, let's get the meta-information out of the way.

    Configuration files locations:

    Local shard config: <DAS-HOME>/repository/conf/analytics/local-shard-allocation-config.conf
    Analytics config: <DAS-HOME>/repository/conf/analytics/analytics-config.xml

    What is a Shard?

    All the indexing data is partitioned into units called shards (the default is 6). The partitions can be observed by browsing <DAS-HOME>/repository/data/index_data/.
    Each record belongs to exactly one shard.
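    Conceptually, assigning a record to exactly one of the shards is a hashing problem. The sketch below is only illustrative (DAS's actual hashing scheme may differ):

```python
SHARD_COUNT = 6  # DAS default

def shard_for(record_id: str, shard_count: int = SHARD_COUNT) -> int:
    """Map a record ID deterministically onto exactly one shard."""
    # A simple, stable byte-sum hash keeps this example deterministic.
    return sum(record_id.encode()) % shard_count
```

    Because the mapping is deterministic, every record always lands on the same shard, which is what makes per-shard re-indexing and restoring possible.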

    Digging in ...


    In standalone mode, DAS behaves as a powerful indexer, but it truly shines as a powerhouse when it’s clustered.

    A cluster of DAS indexer nodes behaves as one giant, powerful distributed search engine. Search queries are distributed across all the nodes, resulting in lightning-fast result retrieval. Not only that: since indexes are distributed among the nodes, indexing data in the cluster is extremely fast as well.

    In clustered mode, the aforementioned shards are distributed across the indexing nodes (nodes on which indexing is enabled). For example, in a cluster of 3 indexing nodes with 6 shards, each indexing node is assigned two shards (unless replication is enabled; more on that later).

    The shard allocation is dynamic: if a new node starts up as an indexing node, the shard allocations change to assign some of the shards to the newly spawned node. This is controlled by a global shard allocation mechanism.
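    The distribution of shards over indexing nodes can be pictured with a small sketch (illustrative only; the real global shard allocation mechanism lives inside DAS):

```python
def allocate_shards(nodes, shard_count=6):
    """Spread shard IDs evenly across the given indexing nodes, round-robin."""
    allocation = {node: [] for node in nodes}
    for shard in range(shard_count):
        allocation[nodes[shard % len(nodes)]].append(shard)
    return allocation
```

    With 3 indexer nodes and 6 shards, each node gets two shards; re-running the allocation with a fourth node spreads the same shards over four nodes, which is the effect of a new indexing node joining.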

    Replication factor:
    The replication factor can be configured in the analytics config file as indexReplicationFactor; it decides how many replicas of each record (or shard) are kept. The default is 1.

    Manual Shard configuration:

    There are 3 modes when configuring the local shard allocations of a node. They are,
    1. NORMAL
    2. INIT
    3. RESTORE

    NORMAL means the data for that particular shard already resides on that node.
    For example, upon starting up a single indexing node, the shard configuration would look as follows:
    0, NORMAL
    1, NORMAL
    2, NORMAL
    3, NORMAL
    4, NORMAL
    5, NORMAL

    This means, all the shards (0 through 5) are indexed successfully in that indexer node.

    INIT tells the indexing node to index that particular shard.
    If you restart the server after adding a shard as INIT, that shard will be re-indexed on that node.
    Ex: if the shard allocations are

    1, NORMAL
    2, NORMAL

    and we add the line 4, INIT and restart the server,

    1, NORMAL
    2, NORMAL
    4, INIT

    This indexes the data for the 4th shard on that indexing node; once indexing of the 4th shard is done, the entry returns to the NORMAL state.

    RESTORE lets you copy already-indexed data from a different node or a backup to another node and use that index data. This avoids re-indexing by reusing the available index data. After a successful restore, that node can also serve queries on the corresponding shard.

    Ex: for the same shard allocation above, if we copy the indexed data for the 5th shard, add the line
    5, RESTORE
    and restart the server, the node allocates the 5th shard to itself (to be used for both search and indexing).

    After restoring, that node would also index the incoming data for that shard as well.


    How do I remove a node as an indexing node?

    If you want to remove a node as an indexing node, restart that particular node with indexing disabled (disableIndexing=true). This triggers the global shard allocations to be re-allocated, removing that node from the indexing cluster.

    Then you have the option of restarting the other indexing nodes which will automatically distribute the shards among the available nodes again. 

    Or you can assign the shards manually. In that case, make sure the shards held by the removed node are indexed on other indexing nodes as required.
    You can view the shard list for that particular node in the local indexing shard config (refer above).

    Then you can either use INIT or RESTORE methods to index those shards in other nodes (refer to the Manual Shard configuration section).

    You must restart all the indexing servers for them to pick up the allocation updates after the indexing node has left.

    Mistakenly started up an indexing node?

    If you have mistakenly started up an indexing server, it will have changed the global configuration, which has to be undone manually.
    If the replication factor is 1 or greater, you will still be able to query and get all the data even when this node is down.

    First of all, to remove the node as an indexing node, you can optionally delete its indexing data.
    Indexing data resides at: <DAS-HOME>/repository/data/indexing_data/

    Second, if you want to use the node under another server profile (more info on profiles: [3]), restart the server with the required profile.

    Then follow the steps in the above question on "How do I remove a node as an indexing node?"

    How do I know all my data are indexed?

    The simplest way to do this would be to use the Data Explorer in DAS, refer to [4] for more information.

    There, for a selected table, run the query “*:*” and it should return all the results with the total count.

    For more information or queries please do drop by the mailing lists[4].

    sanjeewa malalgoda: How the refresh token validity period works in WSO2 API Manager 1.10.0 and later versions

    If refresh token renewal is not enabled (using the

    parameter in the identity.xml configuration file), the existing refresh token is reused; otherwise a new refresh token is issued. When issuing a new refresh token, the default refresh token validity period (configured in the

    parameter of identity.xml) is used; otherwise the existing refresh token's validity period is kept.
    That is how the refresh grant handler logic is implemented in the OAuth code.
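    That decision can be sketched as follows. This is a simplified Python illustration of the behaviour described above; the function and field names are mine, not the actual OAuth component's:

```python
def handle_refresh_grant(renewal_enabled: bool, existing_refresh_token: dict,
                         default_validity_seconds: int) -> dict:
    """Reuse the existing refresh token when renewal is disabled;
    otherwise issue a new one with the default validity period."""
    if not renewal_enabled:
        # Renewal disabled: the existing token and its validity are kept.
        return existing_refresh_token
    # Renewal enabled: issue a new token with the default validity period.
    return {"token": "new-refresh-token",
            "validity": default_validity_seconds}
```

    This matches the two curl experiments below: with renewal enabled a fresh refresh token comes back, and with renewal disabled the old one is returned unchanged.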

    First, I will generate an access token using the password grant as follows.
    curl -k -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

    Then refresh the token with the default configurations.
    curl -k -d "grant_type=refresh_token&refresh_token=f0b7e3839143eec6439c10faf1a4c714&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

    As you can see, the refresh token is updated with each token generation request.
    Now I disable refresh token renewal by updating the following parameter.


    curl -k -d "grant_type=refresh_token&refresh_token=4f1bebe8b284b3216efb228d523df452&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

    In this case the refresh token's creation time remains as it is: you get a new access token and refresh token, but the refresh token's creation time stays the same.

    Sachith Withana: Incremental Analytics With WSO2 DAS

    This is the second blog post on WSO2 Data Analytics Server. The first post can be found at [1].


    The duration of the batch process is critical in production environments. A product that does not support incremental processing needs to process the whole dataset in order to process the unprocessed data. With incremental processing, the batch job only processes the data partition that needs to be processed, not the whole (already processed) dataset, which improves efficiency drastically.

    For example, let's say you have a requirement to summarize data for each day. The first time the summarization script runs, it processes the whole dataset and summarizes it. That's where the similarities end between a typical batch process and incremental analytics.

    The next day when the script runs, a batch processing system without incremental analytics support would have to summarize the whole dataset again just to get the last day's summary. With incremental processing, you only process the last day's worth of data, which removes the overhead of re-processing data that has already been processed.

    Think of how it can improve the performance in summarizations starting from minutes running all the way to years.

    Publishing events
    Incremental analytics uses the timestamps of the events sent when retrieving the data for processing. Therefore, when defining streams for incremental analytics, you need to add an extra field to the event payload as _timestamp LONG to facilitate this.

    When sending events, you can either set the timestamp in the _timestamp attribute or set it for each event at event creation.
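    For example, a publisher might construct event payloads like this (an illustrative sketch; all field names except _timestamp are made up for the example):

```python
import time

def make_event(customer_id, phone_type, order_id, timestamp_ms=None):
    """Build an event payload carrying an explicit _timestamp (epoch ms)."""
    if timestamp_ms is None:
        # No explicit timestamp given: set it at event creation time.
        timestamp_ms = int(time.time() * 1000)
    return {"customerID": customer_id, "phoneType": phone_type,
            "orderID": order_id, "_timestamp": timestamp_ms}
```

    Either way, every event carries a LONG epoch-millisecond timestamp that incremental processing can key on.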


    In DAS, when defining the table in the Spark script, you need to add extra parameters to the table definition for it to support incremental analytics.

    If you do not provide these parameters, it will be treated as a typical analytics table, and each query that reads from that table will get the whole table.

    The following is an example of defining a Spark table with incremental analytics.

    create temporary table orders using CarbonAnalytics options (tableName "ORDERS", schema "customerID STRING, phoneType STRING, OrderID STRING, cost DOUBLE, _timestamp LONG -i", incrementalParams "orders, DAY");

    And when you are done with the summarization, you need to commit the status, indicating that reading the data was successful. This is done via



    incrementalParams has two required parameters and one optional parameter:
    incrementalParams "uniqueID, timePeriod, #previousTimePeriods"

    uniqueID : REQUIRED
        This is the unique ID of the incremental analytics definition. When committing the change, you need to use this ID in the incremental table commit command as shown above.

    timePeriod : REQUIRED
        The duration of the time period that you are processing. Ex: DAY

    If you are summarizing per DAY (the specified timePeriod in this case), DAS can process the timestamps of the events and derive the DAY each event belongs to.

    Consider the situation with the following list of received events. The requirement is to get the total number of orders placed per day.

    Customer ID | Phone Type | Order ID | Timestamp
    …           | Nexus 5x   | …        | 26th May 2016 12:00:01
    …           | Galaxy S7  | …        | 27th May 2016 02:00:02
    …           | iPhone 6s  | …        | 27th May 2016 15:32:04
    …           | Moto X     | …        | 27th May 2016 16:22:10
    …           | LG G5      | …        | 27th May 2016 19:42:42

    And the last processed event is:

    Galaxy S7   | 27th May 2016 15:32:04

    The summarized count for the day 27th May 2016 would be 2, since when the script last ran there were only two events for that particular time duration; the other events came later.

    So when the script runs the next time, it needs to update the value for the time duration for the day of 27th May 2016.

    This is where the timePeriod parameter is used: for the last processed event, DAS calculates the time period it belongs to and pulls the data from the beginning of that time period onwards.

    In this case the last processed event (Galaxy S7, 27th May 2016 15:32:04) would trigger DAS to pull data from 27th May 2016 00:00:00 onwards.

    #previousTimePeriods - Optional (int)
        Specifying this value makes DAS pull data from that many previous time periods onwards. For example, if you set this parameter to 30, it would fetch 30 more periods' worth of data as well.

    As per the above example, it would pull from 27th April 2016 00:00:00 onwards.
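    The pull-from point can be computed as in this sketch (assuming a DAY time period; this mirrors the description above, not DAS's actual code):

```python
from datetime import datetime, timedelta

def pull_from(last_processed: datetime, previous_time_periods: int = 0) -> datetime:
    """Floor the last processed timestamp to the start of its DAY,
    then step back #previousTimePeriods days."""
    day_start = last_processed.replace(hour=0, minute=0, second=0, microsecond=0)
    return day_start - timedelta(days=previous_time_periods)
```

    For the last processed event at 27th May 2016 15:32:04, this gives 27th May 2016 00:00:00, and with previous_time_periods=30 it gives 27th April 2016 00:00:00, matching the examples above.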

    For more information or queries do drop by the mailing lists[2].


    sanjeewa malalgoda: Details about the ports in use when WSO2 API Manager is started

    Management console ports. WSO2 products that provide a management console use the following servlet transport ports:
        9443 - HTTPS servlet transport (the default URL of the management console is https://localhost:9443/carbon)
        9763 - HTTP servlet transport

    LDAP server ports
    Provided by default in the WSO2 Carbon platform.
        10389 - Used in WSO2 products that provide an embedded LDAP server

    KDC ports
        8000 - Used to expose the Kerberos key distribution center server

    JMX monitoring ports
    The WSO2 Carbon platform uses TCP ports to monitor a running Carbon instance using a JMX client such as JConsole. By default, JMX is enabled in all products. You can disable it using the /repository/conf/etc/jmx.xml file.
        11111 - RMIRegistry port. Used to monitor Carbon remotely
        9999 - RMIServer port. Used along with the RMIRegistry port when Carbon is monitored from a JMX client that is behind a firewall

    Clustering ports
    To cluster any running Carbon instance, either one of the following ports must be opened.
        45564 - Opened if the membership scheme is multicast
        4000 - Opened if the membership scheme is wka

    Random ports
    Certain ports are randomly opened during server startup. This is due to specific properties and configurations that become effective when the product is started. Note that the IDs of these random ports will change every time the server is started.

        A random TCP port will open at server startup because of a property set in the server startup script. This property is used for the JMX monitoring facility in the JVM.
        A random UDP port is opened at server startup due to the log4j appender (SyslogAppender), which is configured in the /repository/conf/ file.

    For example, these ports were randomly open at server startup:
        tcp 0 0 :::55746 :::
        This port is opened by a property set in the startup script. It is used for the JMX monitoring facility in the JVM, so we don't have control over this port.

        udp 0 0 :::46316 :::
        This port is opened due to the log4j appender (SyslogAppender). You can find this on
        If you don't want this entry in the log file, you can comment it out; it will not harm the server.

    API Manager specific ports.
        10397 - Thrift client and server ports
        8280, 8243 - NIO/PT transport ports
        7711 - Thrift SSL port for secure transport, where the client is authenticated to BAM/CEP: stat pub

    sanjeewa malalgodaHow the refresh token validity period works in WSO2 API Manager 1.9.0 and later versions.

    If refresh token renewal is not enabled (using the

    parameter in the identity.xml configuration file), the existing refresh token is reused; otherwise a new refresh token is issued. When a new refresh token is issued, the default refresh token validity period (which is configured in the

    parameter of identity.xml) is used; otherwise the existing refresh token's validity period applies.
    That is how the refresh grant handler logic is implemented in the OAuth code.

    First, I will generate an access token using the password grant as follows.
    curl -k -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

    Then I refresh the token with the default configurations.
    curl -k -d "grant_type=refresh_token&refresh_token=f0b7e3839143eec6439c10faf1a4c714&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

    As you can see, the refresh token was updated with the token generation request.
    Now I disable refresh token renewal by updating the following parameters.


    Then I generate an access token using the refresh token.
    curl -k -d "grant_type=refresh_token&refresh_token=4f1bebe8b284b3216efb228d523df452&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
    As you can see, this time the refresh token did not change; we get a new access token while keeping the same refresh token.
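    When scripting the /token calls above, the access and refresh tokens can be extracted from the JSON response for reuse. A minimal shell sketch; the sample response below reuses token values from the responses shown in this post:

```shell
# Extract access_token and refresh_token from a /token JSON response (sample values).
TOKEN_RESPONSE='{"access_token":"cc78ea5a2fa491ed23c05288f539b5f5","refresh_token":"4f1bebe8b284b3216efb228d523df452","token_type":"Bearer","expires_in":3600}'

# Pull each field out with sed (each key appears once in the response).
ACCESS_TOKEN=$(printf '%s' "$TOKEN_RESPONSE" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')
REFRESH_TOKEN=$(printf '%s' "$TOKEN_RESPONSE" | sed -n 's/.*"refresh_token":"\([^"]*\)".*/\1/p')

echo "access:  $ACCESS_TOKEN"
echo "refresh: $REFRESH_TOKEN"
```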

    sanjeewa malalgodaHow to add a custom log appender to WSO2 products to direct logs to a separate file (e.g., send mediator logs to a different file)

    Implement a custom log appender class as follows.

    package org.wso2.test.logging;

    import java.io.File;
    import java.io.IOException;
    import java.security.AccessController;
    import java.security.PrivilegedAction;

    import org.apache.log4j.DailyRollingFileAppender;
    import org.apache.log4j.spi.LoggingEvent;
    import org.wso2.carbon.context.CarbonContext;
    import org.wso2.carbon.utils.logging.LoggingUtils;
    import org.wso2.carbon.utils.logging.TenantAwareLoggingEvent;

    public class CustomAppender extends DailyRollingFileAppender {

        // Logs are written under repository/logs/messages/ in CARBON_HOME
        private static final String LOG_FILE_PATH = org.wso2.carbon.utils.CarbonUtils.getCarbonHome() + File.separator + "repository" +
                                                    File.separator + "logs" + File.separator + "messages" + File.separator;

        protected void subAppend(final LoggingEvent loggingEvent) {
            // Resolve the tenant ID of the current thread inside a privileged block
            int tenantId = AccessController.doPrivileged(new PrivilegedAction<Integer>() {
                public Integer run() {
                    return CarbonContext.getThreadLocalCarbonContext().getTenantId();
                }
            });
            String logFileName = "test_file";
            try {
                // Point the appender at the target log file before writing
                this.setFile(LOG_FILE_PATH + logFileName, this.fileAppend, this.bufferedIO, this.bufferSize);
            } catch (IOException ex) {
                // Keep the previously configured file if the new one cannot be opened
            }
            String serviceName = CarbonContext.getThreadLocalCarbonContext().getApplicationName();
            final TenantAwareLoggingEvent tenantAwareLoggingEvent = LoggingUtils
                    .getTenantAwareLogEvent(loggingEvent, tenantId, serviceName);
            AccessController.doPrivileged(new PrivilegedAction<Void>() {
                public Void run() {
                    CustomAppender.super.subAppend(tenantAwareLoggingEvent);
                    return null;
                }
            });
        }
    }
    Then add a NEW_CARBON_LOGFILE appender configuration like the following to the log4j configuration file.

    log4j.appender.NEW_CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m {%c}%n
    log4j.appender.NEW_CARBON_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]

    Then add the org.wso2.test.logging.LoggingClassMediator class to your mediation flow. Please see the sample mediator code below.

    package org.wso2.test.logging;

    import org.apache.log4j.Logger;
    import org.apache.synapse.MessageContext;
    import org.apache.synapse.mediators.AbstractMediator;

    public class LoggingClassMediator extends AbstractMediator {

        private static final Logger log = Logger.getLogger(LoggingClassMediator.class);

        public boolean mediate(MessageContext mc) {
            String apiName = mc.getProperty("SYNAPSE_REST_API").toString();
            String name = "APIName::" + apiName + "::";
            try {
                // These log lines are routed to the custom appender configured above
                log.info(name + "LOGGING MESSAGE " + mc.getProperty("RESPONSE_TIME"));
                log.info(name + "LOGGING MESSAGE " + mc.getProperty("SYNAPSE_REST_API"));
            } catch (Exception e) {
                log.error(name + "ERROR :", e);
            }
            return true;
        }
    }

    Dinusha SenanayakaHow to use App Manager Business Owner functionality ?

    The new WSO2 App Manager release (1.2.0) introduces the capability to define a business owner for each application. (App Manager 1.2.0 is yet to be released at the time of writing; you can download a nightly build from here and try it out until the release is done.)

    1. How to define business owners ?

    Log in to the admin-dashboard as an admin user by accessing the following URL.

    This will give you a UI similar to the one below, where you can define new business owners.

    Click on the "Add Business Owner" option to add new business owners.

    All created business owners are listed in the UI as follows, allowing you to edit or delete them from the list.

    2. How to associate business owner to application ?

    You can log in to the Publisher by accessing the following URL to create a new app.

    In the add-new-web-app UI, you should see a page similar to the following, where you can type and select the business owner for the app.

    Once the required data is filled in and the app is ready to publish to the store, change the app life-cycle state to 'Published' to publish the app into the App Store.

    Once the app is published, users can access it through the App Store via the following URL.

    App users can find the business owner details on the App Overview page, as shown below.

    If you are using the REST APIs to create and publish apps, the following sample commands will help.

    These APIs are protected with OAuth, so you need to generate an OAuth token before invoking them.

    Register an OAuth app and generate the consumer and secret keys
    curl -X POST -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -d  '{"clientName": "Demo_App", "grantType": "password refresh_token"}'  http://localhost:9763/api/appm/oauth/v1.0/register

    Note: the Authorization header should carry base64encoded(username:password) as the value in the above request.
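    The Basic value can be produced with the base64 tool; for the default admin:admin credentials this yields exactly the header value used in the register call above:

```shell
# Build the Basic auth value from username:password (admin:admin here).
# Note: printf, not echo, so no trailing newline sneaks into the encoding.
printf '%s' 'admin:admin' | base64
# -> YWRtaW46YWRtaW4=
```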


    Generate an access token using the above consumer/secret keys
    curl -k -X POST -H "Authorization: Basic MWthTXJDV0ZyOU5lVDFWQ2ZUeGFjSV9QdTBzYTpZTmtSQV8zMHB3T1o2a05USVpDOUI1NHA3TEVh" -H "Content-Type: application/x-www-form-urlencoded" -d 'username=admin&password=admin&grant_type=password&scope=appm:read appm:administration appm:create appm:publish'  https://localhost:9443/oauth2/token

    Note: the Authorization header should carry base64encoded(clientId:clientSecret) as the value in the above request.

    {"access_token":"cc78ea5a2fa491ed23c05288f539b5f5","refresh_token":"3b203c859346a513bd3f94fc6bf202e4","scope":"appm:administration appm:create appm:publish appm:read","token_type":"Bearer","expires_in":3600}

    Add new business owner
    curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" -d '{"site": "", "email": "", "description": "this is a test description", "name": "Beth", "properties": [{"isVisible": true,"value": "0112345678","key": "telephone"}]}' http://localhost:9763/api/appm/publisher/v1.1/administration/businessowner


    Create a new app and set its business owner to the previously added owner
    curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" http://localhost:9763/api/appm/publisher/v1.1/apps/webapp -d '{
      "name": "TravelBooking",
      "version": "1.0.0",
      "displayName": "Travel Booking",
      "description": "description",
      "isSite": "false",
      "context": "/travel",
      "appUrL": "",
      "acsUrl": "",
      "transport": "http",
      "policyGroups": [
        {
          "policyGroupName": "policy1",
          "description": "Policy 1",
          "throttlingTier": "Unlimited",
          "userRoles": [],
          "allowAnonymousAccess": "false"
        },
        {
          "policyGroupName": "policy2",
          "description": "Policy 2",
          "throttlingTier": "Gold",
          "userRoles": [],
          "allowAnonymousAccess": "false"
        },
        {
          "policyGroupName": "policy3",
          "description": "Policy 3",
          "throttlingTier": "Unlimited",
          "userRoles": [],
          "allowAnonymousAccess": "false"
        }
      ],
      "uriTemplates": [
        {
          "urlPattern": "/*",
          "httpVerb": "GET",
          "policyGroupName": "policy1"
        },
        {
          "urlPattern": "/*",
          "httpVerb": "POST",
          "policyGroupName": "policy2"
        },
        {
          "urlPattern": "/pattern1",
          "httpVerb": "POST",
          "policyGroupName": "policy3"
        }
      ]
    }'
    {"AppId": "78012b68-719d-4e14-a8b8-a899d41dc712"}

    Change app lifecycle state to 'Published'
    curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp/change-lifecycle?appId=78012b68-719d-4e14-a8b8-a899d41dc712&action=Submit%20for%20Review"

    curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp/change-lifecycle?appId=78012b68-719d-4e14-a8b8-a899d41dc712&action=Approve"

    curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp/change-lifecycle?appId=78012b68-719d-4e14-a8b8-a899d41dc712&action=Publish"

    Retrieve App info in store
    curl -X GET -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" http://localhost:9763/api/appm/store/v1.1/apps/webapp/id/78012b68-719d-4e14-a8b8-a899d41dc712

    {"businessOwnerId":"1","isSite":"false","isDefaultVersion":true,"screenshots":[],"customProperties":[],"tags":[],"rating":0.0,"transport":["http"],"lifecycle":"WebAppLifeCycle","lifecycleState":"PUBLISHED","description":"description","version":"1.0.0","provider":"admin","name":"TravelBooking1","context":"/travel1","id":"78012b68-719d-4e14-a8b8-a899d41dc712","type":"webapp","displayName":"Travel Booking"}

    Retrieve business owner details
    curl -X GET -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" http://localhost:9763/api/appm/store/v1.1/businessowner/1

    {"site":"","email":"","description":"this is a test description","name":"Beth","properties":[{"isVisible":true,"value":"0112345678","key":"telephone"}],"id":1}

    sanjeewa malalgodaHow to change pooling configurations for connections made from the API Gateway to the backend - WSO2 API Manager, ESB

    The only parameter related to gateway-to-backend connection pooling is
    and its default value is the integer max value. Worker pool size does not have a direct relationship with client-to-gateway or gateway-to-backend connections; worker threads are just the processing threads available within the server.

    So whenever you need to change the pooling behaviour from the gateway to the backend, you can tune the following parameter in the "" file. = 20

    Bhathiya Jayasekara[WSO2 APIM] Setting up API Manager Distributed Setup with Puppet Scripts

    In this post we are going to use puppet to set up a 4-node API Manager distributed setup. You can find the puppet scripts I used in this git repo.

    NOTE: This blog post can be useful to troubleshoot any issues you get while working with puppet.

    My puppet scripts contain the IPs of the nodes I used, listed below. You have to replace them with yours.

    Puppet Master/MySQL :
    Key Manager:

    That's just some information. Now let's start setting up each node, one by one.

    1) Configure Puppet Master/ MySQL Node 

    1. Install NTP, Puppet Master and MySQL.

    > sudo su
    > ntpdate ; apt-get update && sudo apt-get -y install ntp ; service ntp restart
    > cd /tmp
    > wget
    > dpkg -i puppetlabs-release-trusty.deb
    > apt-get update
    > apt-get install puppetmaster
    > apt-get install mysql-server

    2. Change hostname in /etc/hostname to puppet (This might need a reboot)

    3. Update /etc/hosts with below entry. puppet

    4. Download and copy directory to /etc/puppet

    5. Replace IPs in copied puppet scripts. 

    6. Before restarting the puppet master, clean all certificates, including the puppet master's certificate, which has its old DNS alt names.

    > puppet cert clean --all

    7. Restart puppet master

    > service puppetmaster restart

    8. Download and copy jdk-7u79-linux-x64.tar.gz to /etc/puppet/environments/production/modules/wso2base/files/jdk-7u79-linux-x64.tar.gz

    9. Download and copy to 

    10. Download and copy directory to /opt/db_scripts

    11. Unzip and copy wso2am-2.0.0-SNAPSHOT/dbscripts directory to /opt/db_scripts/dbscripts

    12. Download and copy file to /opt/ (Copy required private keys as well, to ssh to puppet agent nodes)

    13. Open and update script as required, and set read/execution rights.

    > chmod 755

    2) Configure Puppet Agents 

    Repeat these steps in each agent node.

    1. Install Puppet.

    > sudo su
    > apt-get update
    > apt-get install puppet

    2. Change hostname in /etc/hostname to apim-node-1 (This might need a reboot)

    3. Update /etc/hosts with puppet master's host entry. puppet

    4. Download and copy file to /opt/

    5. Set execution rights.

    > chmod 755 

    6. Download and copy file to /opt/deployment.conf (Edit this as required. For example, product_profile should be one of api-store, api-publisher, api-key-manager and gateway-manager)

    3) Execute Database and Puppet Scripts

    Go to /opt on the puppet master and run ./ (or you can run it on each agent node.)

    If you have any questions, please post below.


    Rushmin FernandoWSO2 App Manager 1.2.0 - How to use custom app properties in ReST APIs

    WSO2 App Manager supports defining and using custom properties for app types. In order to add a new custom property, the relevant RXT file (registry extension) and a couple of other files should be amended. But these custom properties are not marked as 'custom' properties anywhere; once defined, they are treated just like any other field.

    With the introduction of the new ReST API implementation in App Manager 1.2.0, it was a bit challenging to expose these custom fields through the APIs. The new ReST APIs are documented using Swagger, so when the relevant API response models are defined, the custom fields can't be added as named properties, since they are dynamic. The solution is to have a field, which is a map, to represent the custom properties. The need to mark custom fields as 'custom' had to be addressed too.

    In App Manager, this has been addressed by having another configuration to store the custom properties.

    Where is the definitions file

    In the registry there are JSON resources which are custom property definitions. There is a definition file per app type.

    e.g. Definition file for web apps -  

    What does a definition file look like

    As of now the custom property definitions file only has the names of the custom properties.


    How do I persist these custom properties for an app using the ReST API

    The request payload should contain the custom properties as below.

      "name": "app1",
      "version": "1.0",
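    For illustration, a payload carrying such dynamic fields might look like the sketch below. The "customProperties" field name appears in the store responses elsewhere in this post, but the entry shape ("name"/"value" pairs) shown here is an assumption; check the definition file for the real property names:

```shell
# Hypothetical app payload with a custom-properties list (entry shape assumed).
cat > /tmp/app-payload.json <<'EOF'
{
  "name": "app1",
  "version": "1.0",
  "customProperties": [
    { "name": "prop1", "value": "value1" }
  ]
}
EOF
# Sanity-check that the payload is valid JSON before posting it.
python3 -m json.tool /tmp/app-payload.json
```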

    sanjeewa malalgodaHow to Create Service Metadata Repository for WSO2 Products(WSO2 ESB, DSS, AS)

    Sometimes we need to store all service metadata in a single place and maintain changes, life cycles, etc. We can implement this as an automated process.
    Here is the detailed flow.
    • In Jenkins, we deploy a scheduled task that triggers an event periodically.
    • The periodic task calls the WSO2 ESB, DSS and App Server admin services to get service metadata. In the ESB we can call the proxy service admin service to list all proxy services deployed in the ESB. From the same call we can get the WSDLs associated with the services. Please refer to this article for more information about admin services and how to use them.
    • In the same way, we can call all the services and get the complete service data.
    • Then we can call the Registry REST API and push that information. Please refer to this article for more information about the Registry REST API.

    If we consider proxy service details, we can follow the approach listed below:
    create a web service client for the admin service and invoke it from the client.
    See the following SoapUI sample, which retrieves all proxy services deployed in the ESB.


    You will see a response like the one below. There you will find all the details related to a given proxy service, such as WSDLs, service status, service type etc. So you can list all the service metadata using the information retrieved from this web service call.

    <soapenv:Envelope xmlns:soapenv="">
       <ns:listServicesResponse xmlns:ns="http://org.apache.axis2/xsd">
          <ns:return xsi:type="ax2539:ServiceMetaDataWrapper" xmlns:ax2541=""
                     xmlns:ax2542="" xmlns:ax2545="" xmlns:xsi="">
             <ax2539:services xsi:type="ax2539:ServiceMetaData">
                <ax2539:description xsi:nil="true"/>
                <ax2539:eprs xsi:nil="true"/>
                <ax2539:mtomStatus xsi:nil="true"/>
                <ax2539:operations xsi:nil="true"/>
                <ax2539:scope xsi:nil="true"/>
                <ax2539:securityScenarioId xsi:nil="true"/>
                <ax2539:serviceDeployedTime>1970-01-01 05:30:00</ax2539:serviceDeployedTime>
                <ax2539:serviceId xsi:nil="true"/>
                <ax2539:serviceUpTime>16969day(s) 6hr(s) 20min(s)</ax2539:serviceUpTime>
                <ax2539:serviceVersion xsi:nil="true"/>
                <ax2539:wsdlPortTypes xsi:nil="true"/>
                <ax2539:wsdlPorts xsi:nil="true"/>
             </ax2539:services>
             <ax2539:services xsi:type="ax2539:ServiceMetaData">
                <ax2539:description xsi:nil="true"/>
                <ax2539:eprs xsi:nil="true"/>
                <ax2539:mtomStatus xsi:nil="true"/>
                <ax2539:operations xsi:nil="true"/>
                <ax2539:scope xsi:nil="true"/>
                <ax2539:securityScenarioId xsi:nil="true"/>
                <ax2539:serviceDeployedTime>1970-01-01 05:30:00</ax2539:serviceDeployedTime>
                <ax2539:serviceId xsi:nil="true"/>
                <ax2539:serviceUpTime>16969day(s) 6hr(s) 20min(s)</ax2539:serviceUpTime>
                <ax2539:serviceVersion xsi:nil="true"/>
                <ax2539:wsdlPortTypes xsi:nil="true"/>
                <ax2539:wsdlPorts xsi:nil="true"/>
             </ax2539:services>
          </ns:return>
       </ns:listServicesResponse>
    </soapenv:Envelope>
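    The same listServices call can also be scripted without SoapUI. The sketch below is a hedged reconstruction: the ServiceAdmin service path, the SOAPAction value and the empty parameter list are assumptions (only the listServices operation name and the axis2 namespace come from the response above):

```shell
# Hypothetical SOAP request to the ESB's ServiceAdmin admin service (names/params assumed).
cat > /tmp/list_services.xml <<'EOF'
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:xsd="http://org.apache.axis2/xsd">
  <soapenv:Body>
    <xsd:listServices/>
  </soapenv:Body>
</soapenv:Envelope>
EOF
# Post it with basic auth on the management HTTPS port (commented out: needs a running ESB):
# curl -k -u admin:admin -H 'Content-Type: text/xml' -H 'SOAPAction: urn:listServices' \
#      -d @/tmp/list_services.xml https://localhost:9443/services/ServiceAdmin
```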

    We can automate this service metadata retrieval process and persist the data to the registry. Please refer to the diagram below to understand the flow for this use case. The discovery agent communicates with the servers and uses a REST client to push events to the registry.


    Udara LiyanagePublish WSO2 Carbon logs to Logstash/Elasticsearh/Kibana (ELK) using Filebeat

    The Logstash, Elasticsearch and Kibana triple, a.k.a. ELK, is a widely used log
    analysis tool set. This how-to guide explains how to publish logs of WSO2 Carbon
    servers to the ELK platform.

    # Setup ELK

    You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK yourself, but I am a Docker fan,
    so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this Docker image does not come
    with a Logstash receiver for beats events, so I added the Logstash configuration below to receive beats events and created my own
    Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

    input {
      beats {
        type => beats
        port => 7000
      }
    }
    output {
      elasticsearch {
        hosts => "localhost:9200"
      }
      stdout { codec => rubydebug }
    }

    The above configuration causes Logstash to listen on port 7000 (input section) and forward the logs to Elasticsearch, which runs on port 9200
    of the Docker container.

    Now start the Docker container as
    docker run -d -p 7000:7000 -p 5601:5601 udaraliyanage/elklog4

    port 7000 => Logstash
    port 5601 => Kibana

    # Setup Carbon Server to publish logs to Logstash

    * Download the Filebeat deb file from [2] and install it
    dpkg -i filebeat_1.2.3_amd64.deb

    * Create a Filebeat configuration file /etc/carbon_beats.yml with the following content.

    Please make sure to provide the correct wso2carbon.log file location in the paths section. You can list multiple Carbon log files as well
    if you are running multiple Carbon servers on your machine.


    filebeat:
      prospectors:
        -
          paths:
            - /opt/wso2as-5.3.0/repository/logs/wso2carbon.log
          input_type: log
          document_type: appserver_log
    output:
      logstash:
        hosts: ["localhost:7000"]
      console:
        pretty: true
    logging:
      files:
        rotateeverybytes: 10485760 # = 10MB

    * Now start the Carbon server: ./bin/ start

    # View logs from Kibana by visiting http://localhost:5601


    Udara LiyanagePublish WSO2 Carbon logs to Logstash/Elasticsearh/Kibana (ELK) using Log4j SocketAppender

    I assume you know that the Logstash, Elasticsearch and Kibana stack, a.k.a. ELK, is a widely used log analysis tool set. This how-to guide explains how to publish logs of WSO2 Carbon
    servers to the ELK platform.

    # Setup ELK

    You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK yourself, but I am a Docker fan, so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this Docker image does not come with a Logstash receiver for log4j events, so I added the Logstash configuration below to receive log4j events and created my own Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

    input {
      log4j {
        mode => server
        host => ""
        port => 6000
        type => "log4j"
      }
    }
    output {
      elasticsearch {
        hosts => "localhost:9200"
      }
      stdout { codec => rubydebug }
    }

    The above configuration causes Logstash to listen on port 6000 (input section) and forward the logs to Elasticsearch, which runs on port 9200
    of the Docker container.

    Now start the Docker container as
    `docker run -d -p 6000:6000 -p 5601:5601 udaraliyanage/elklog4j`

    port 6000 => Logstash
    port 5601 => Kibana

    # Setup Carbon Server to publish logs to Logstash

    * Download the Logstash JSON event layout dependency jar from [3] and place it in $CARBON_HOME/repository/components/lib.
    This converts the log events to binary format and streams them to
    a remote log4j host, in our case Logstash running on port 6000.

    * Add the following log4j appender configuration to the Carbon server by editing the $CARBON_HOME/repository/conf/ file

    log4j.appender.tcp.layout.ConversionPattern=[%d] %P%5p {%c} - %x %m%n

    RemoteHost => the Logstash server we want to publish events to; localhost:6000 in our case.
    Application => the name of the application which publishes the log. It is useful for whoever views the logs from Kibana, so that they can find which server a particular log was received from.
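    The full appender entry might look like the sketch below. RemoteHost, Port, ReconnectionDelay and Application are standard log4j 1.x SocketAppender properties, but the concrete values and the appender name "tcp" (taken from the layout line above) are assumptions; the snippet writes to /tmp purely for illustration, while in practice these lines belong in the Carbon log4j configuration file:

```shell
# Hedged sketch of a log4j SocketAppender entry (values assumed; /tmp used for illustration).
cat > /tmp/log4j-socket.properties <<'EOF'
log4j.appender.tcp=org.apache.log4j.net.SocketAppender
log4j.appender.tcp.RemoteHost=localhost
log4j.appender.tcp.Port=6000
log4j.appender.tcp.ReconnectionDelay=10000
log4j.appender.tcp.Application=wso2carbon
EOF
cat /tmp/log4j-socket.properties
```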

    * Now start the Carbon server: ./bin/ start

    # View logs from Kibana by visiting http://localhost:5601


    Shazni NazeerUnleashing the Git - part 3 - Working with remote repositories

    Often we need to share the files we modify and make them available to other users. This is typically the case in software development projects, where several people work on a set of files and all need to make changes. The solution is to host the repository in a centralised place, with everyone working on their own copies before committing to the central place. There are many online Git repository providers; examples are GitHub, Bitbucket etc.

    Choose whatever best suits you and sign up for an account. Most online repositories provide free accounts. I've created one on Bitbucket, with a repository named online_workbench.
    $ git remote add origin  // Syntax is git remote add <name> <repository URL>
    $ git remote rm <name> // To remove a remote from your local repository
    $ git push -u origin master
    Password for '':
    Counting objects: 4, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (4/4), done.
    Writing objects: 100% (4/4), 88.01 KiB, done.
    Total 4 (delta 0), reused 0 (delta 0)
     * [new branch]      master -> master
    Branch master set up to track remote branch master from origin.

    After you enter your password, you'll have your repository online, and its local location is your local $HOME/example directory.

    'git push -u origin master' makes Git aware that all pull and push operations should default to the master branch. If '-u origin master' is not specified, you will have to specify the origin every time you issue a 'pull' or 'push' command.

    Ok. Now, if someone needs to get a local copy and work on this remote repository, what do they need to do? Only a few steps are involved.

    First we need to clone the repository into a directory. Navigate or create a new directory where you need the local clone of the repository and issue the following command.
    $ git clone <remote repository location> <Path you want the clone to be>
    If you omit the <Path you want the clone to be>, it will be in the current directory.

    Depending on how the firewall on your computer or local area network (LAN) is configured, you might get an error trying to clone a remote repository over the network. Git uses SSH by default to transfer changes
    over the network, but it also uses the Git protocol (signified by git:// at the beginning of the URI) on port 9418. If you have trouble communicating with a remote repository, check with your local network administrator to make sure communication on port 22 (the port SSH communicates on) and port 9418 is open.
    $ git clone --depth 50 <remote repository location>  // Create a shallow clone with the last fifty commits
    $ git clone --depth 50 <remote repository location>  // Create a shallow clone with the last fifty commits

    Then, if we do the modification to the files or added files, next we need to stage it for commits. Issue the following commands.
    $ git add *            // Stages all your changes; similar to git add -A
    // git add -u stages only previously added (updated) files. The -p option presents changes in sections, so you can choose whether to include each change.
    $ git commit -m 'My commit comment'     // Commit the changes to the local repository
    $ git pull        // Checks whether there are any unsynced updates in the remote location and, if any exist, syncs the local repository with the server
    $ git push        // Add the changes made by you to the server
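    The whole share-and-sync cycle can be rehearsed locally before touching a hosted remote; in the sketch below a bare repository stands in for Bitbucket, and the paths, identity and commit message are illustrative (pushing HEAD avoids assuming whether the default branch is named master or main):

```shell
# Rehearse the push/pull cycle locally: a bare repo stands in for Bitbucket.
rm -rf /tmp/online_workbench.git /tmp/work
git init --bare /tmp/online_workbench.git          # the "remote" repository
git clone /tmp/online_workbench.git /tmp/work      # local working copy
cd /tmp/work
git config user.email "you@example.com"            # identity required for committing
git config user.name "you"
echo "hello" > notes.txt
git add *                                          # stage everything (like git add -A)
git commit -m 'My commit comment'
git push -u origin HEAD                            # HEAD = current branch (master or main)
git pull                                           # now syncs without naming the remote
```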

    sanjeewa malalgodaHow to define environment specific parameters and resolve them during build time using maven - WSO2 ESB endpoints

    Let's say we have multiple environments named DEV, QA and PROD; you need different Carbon Application (C-App) projects to group the dynamic artifacts, such as endpoints, WSDLs and policies, for the different environments.

    To address this, we can keep a property file along with the artifacts to store environment-specific parameters. Maven can then build the artifacts based on the properties in the property file, creating artifacts that are specific to the given environment.

    Given below is an example. An endpoint value should differ from one environment to another; we can follow the approach below to achieve this.

    Define the endpoint EP1 as below, with a token for the hostname that can be replaced later.
        <address trace="disable" uri="http://@replace_token/path"/>

    Add the maven-antrun-plugin to replace the token using a maven property, as below, in the pom file.
                <replace token= "@replace_token" value="${hostName}" dir="${basedir}/target/capp/artifacts/endpoint/EP1/">                                 
                  <include name="**/*.xml"/>

    Now build the project providing the maven property hostName as below:
    mvn clean install -DhostName=
    Instead of giving the hostname manually as above, we can configure Jenkins or any other build server to pick the hostname from a config file and feed it to the build process as a system property.

    Once we complete the environment-specific builds, we will have one CAR file per environment; they contain the same endpoints, but those endpoints actually point to different URLs, as follows.

    |___ End Point Project
            |__ backendEP.xml     //points to dev backend,
            |__ backendEP.xml     //points to qa backend,
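    The build-time token replacement performed by the antrun task can be mimicked on the command line; in the sketch below the hostname value and the wrapping <endpoint> element are illustrative (the original post only shows the inner address element):

```shell
# Simulate the antrun token replacement: @replace_token becomes the per-environment host.
hostName=dev.wso2.example          # assumed value; normally passed as -DhostName=...
mkdir -p /tmp/capp/endpoint/EP1
cat > /tmp/capp/endpoint/EP1/backendEP.xml <<'EOF'
<endpoint xmlns="http://ws.apache.org/ns/synapse" name="EP1">
    <address trace="disable" uri="http://@replace_token/path"/>
</endpoint>
EOF
sed -i "s/@replace_token/${hostName}/" /tmp/capp/endpoint/EP1/backendEP.xml
grep uri /tmp/capp/endpoint/EP1/backendEP.xml
```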

    sanjeewa malalgodaBuilding Services with WSO2 Microservices Framework for Java

    Many organizations today are leveraging microservices architecture (MSA), which is becoming increasingly popular because of its many potential advantages. This webinar introduces the WSO2 Microservices Framework for Java (MSF4J), which provides the necessary framework and tooling to build an MSA solution. Recently I presented about microservices at a Minneapolis, MN user group meetup ( ). During that session we discussed the following topics:
    • What is Microservice?
    • Why Microservice?
    • Microservices outer architecture.
    • WSO2 Microservice movement.
    • Introduction to WSO2 MSF4J
    • Implementation of WSO2 MSF4J
    • Develop Microservices with MSF4J(security, metrics etc. )
    • Demo.

    Afkham AzeezMicroservices Circuit Breaker Implementation

    Circuit breaker


    Circuit breaker is a pattern used for fault tolerance and the term was first introduced by Michael Nygard in his famous book titled "Release It!". The idea is, rather than wasting valuable resources trying to invoke an operation that keeps failing, the system backs off for a period of time, and later tries to see whether the operation that was originally failing works.

    A good example would be a service receiving a request which in turn leads to a database call. At some point, the connectivity to the database could fail. After a series of failed calls, the circuit trips, and there will be no further attempts to connect to the database for a period of time. We call this the "open" state of the circuit breaker. During this period, the callers of the service will be served from a cache. After this period has elapsed, the next call to the service will result in a call to the database. This stage is called the "half-open" state. If this call succeeds, the circuit breaker goes back to the closed state and all subsequent calls will result in calls to the database. However, if the database call during the half-open state fails, the circuit breaker goes back to the open state and remains there for a period of time before transitioning to the half-open state again.

    Other typical examples of the circuit breaker pattern being useful would be a service making a call to another service, and a client making a call to a service. In both cases, the calls could fail, and instead of indefinitely trying to call the relevant service, the circuit breaker would introduce some back-off period, before attempting to call the service which was failing.
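    The closed/open/half-open cycle described above can be sketched as a toy state machine; the threshold and the always-failing database call are invented purely for illustration:

```shell
# Toy circuit breaker: after `threshold` consecutive failures the circuit opens
# and subsequent callers are served from cache instead of hitting the database.
state=closed; failures=0; threshold=3
call_db() { return 1; }            # stand-in for a database call that keeps failing
for i in 1 2 3 4 5; do
  if [ "$state" = "open" ]; then
    echo "call $i: circuit open, served from cache"
    continue
  fi
  if call_db; then
    failures=0
    echo "call $i: db ok"
  else
    failures=$((failures + 1))
    echo "call $i: db failed ($failures/$threshold)"
    [ "$failures" -ge "$threshold" ] && state=open   # trip the circuit
  fi
done
echo "final state: $state"
```

After three failures the circuit opens, so calls 4 and 5 are served from cache; a real implementation would also add the half-open probe after a back-off period.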

    Implementation with WSO2 MSF4J

I will demonstrate how a circuit breaker can be implemented using the WSO2 Microservices Framework for Java (MSF4J) & Netflix Hystrix. We take the stockquote service sample and enable the circuit breaker. Assume that the stock quotes are loaded from a database. We wrap the calls to this database in a Hystrix command. If database calls fail, the circuit trips and stock quotes are served from a cache.

    The complete code is available at

NOTE: To keep things simple and focus on the implementation of the circuit breaker pattern, rather than make actual database calls, we have a class called org.example.service.StockQuoteDatabase, and calls to its getStock method can result in timeouts or failures. To see an MSF4J example of how to make actual database calls, see

    The complete call sequence is shown below. StockQuoteService is an MSF4J microservice.

    Configuring the Circuit Breaker

The circuit breaker is configured as shown below.

We enable the circuit breaker & timeout, set the failure threshold that triggers circuit tripping to 50, and set the timeout to 10ms, so any database call that takes more than 10ms will also be registered as a failure. For other configuration parameters, please see

    Building and Running the Sample

Check out the code and use Maven to build the sample.

    mvn clean package

    Next run the MSF4J service.

    java -jar target/stockquote-0.1-SNAPSHOT.jar 

Now let's use cURL to repeatedly invoke the service. Run the following command:

    while true; do curl -v http://localhost:8080/stockquote/IBM ; done

The above command will keep invoking the service. Observe the output of the service in the terminal. You will see that some of the calls fail on the service side, and you will be able to see the circuit breaker fallback in action, the circuit breaker tripping, then going into the half-open state, and then closing.

Chathurika Erandi De Silva - WSO2 ESB: Connectors, DATA MAPPER -> Nutshell

    Sample Scenario

We will provide a query to Salesforce and obtain data. Next, we will use this data to generate an email using the Google Gmail API. With WSO2 ESB, we have the capability of using the Salesforce and Gmail connectors. These connectors contain many operations that are useful for performing different tasks with the relevant apps. For this sample, I will be using the query operation of the Salesforce connector and the createAMail operation of the Gmail connector.

    Section 1

    Setting up Salesforce Account

In order to execute the sample with the ESB, a Salesforce account should be set up in the following manner:

    1. Create a Salesforce Free Developer Account
    2. Go to Personal Settings
3. Reset your security token ("Reset My Security Token") and obtain the new token

The above token should be appended to the password and used as <password><token> in ESB Salesforce connector operations.

    Obtaining information from Google Gmail API

The WSO2 ESB Gmail connector operations require the user ID, access token, client ID, client secret and refresh token to call the Gmail API. Follow the steps below to retrieve that information.

    1. Register a project in Google Developer Console
    2. Enable Gmail API for the project
    3. Obtain Client ID and Client Secret for the project by generating credentials
    4. Provide an authenticated Redirect URL for the project.
5. Give the following request in the browser to obtain the code:
<redirect_uri>&response_type=code&client_id=<client_id>&scope=

    This will give a code as below

    <Redirect URL>?code=<code>


6. Send the following payload to the token endpoint

    HTTP Method: POST

    code: <code obtained in the above step>
    client_id: <client_id obtained above>
    client_secret: <client_secret obtained above>
    redirect_uri: <redirect uri authorized for the web client in the project>
    This will give you an output as below

     "access_token": "ya29.Ci8CA7JMJYDrKqWsa-jaYUQhuKnQsx4vYdUin7bvjToReA9FD6Z5GeRHeBozFlLowg",
     "token_type": "Bearer",
     "expires_in": 3600,
     "refresh_token": "1/RUjHwS-5pW9HEJ7U8HfZTQPdG-fj7juqeBtAKhScNeg"

Now we are ready to go through the second part of the post.

    Section 2

1. Create an ESB Config Project using WSO2 ESB Tooling
    2. Add the SalesForce Connector and the Gmail Connector to the project
    3. Create a Sequence

In this sample scenario, I am reading the request and obtaining the query that will be sent to Salesforce, the subject of the mail to be generated, and the recipient of the email. This information is set as message context properties to be used later.

    <property expression="//test:query" name="Query" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>
       <property expression="//test:subject" name="Subject" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>
       <property expression="//test:recipient" name="Recipient" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>

    3.a.  Add the query operation from the SalesForce connector
    3.b. Create a new configuration (in Properties view of the Dev Studio) for the query connector and provide the below information

    Configuration name: <name for the configuration>
    Username: <username of the salesforce account>
    Password: <password<token> of salesforce account>
    Login URL: <specific login URL for the salesforce>

3.c. In the Properties view of the query operation, provide the parameters as shown in the image.


    The source view will be as below and a local entry named salesforce should be created in the project under local entries.

    <salesforce.query configKey="salesforce">

This will return a set of data in XML format, as below.


    <?xml version="1.0" encoding="UTF-8"?>
                   <type>API REQUESTS</type>
               <result xsi:type="QueryResult">
                   <queryLocator xsi:nil="true"/>
                   <records xsi:type="sf:sObject">
                       <sf:Id xsi:nil="true"/>
                       <sf:MasterRecordId xsi:nil="true"/>
                       <sf:Name>Burlington Textiles Corp of America</sf:Name>
                       <sf:Phone>(336) 222-7000</sf:Phone>
                       <sf:ShippingCountry xsi:nil="true"/>

3.d. Add an Iterate mediator to the sequence. This will iterate through the obtained XML content.

3.e. Add a Data Mapper mediator to map the XML entities to Gmail email components as below.

    Data Mapper mediator configuration


For the input and output types of the mapping, use XML and connector respectively. The output connector type will be Gmail.


3.f. Next, add the createAMail operation from the Gmail connector to the sequence.

The final sequence view will be as follows.


    3.g. Create a new configuration with the createAMail operation as below

    Configuration Name: <provide a name for the configuration>
    User ID: <provide the username using which the google project was created before>
    Access Token: <Access Token obtained in section 1>
    Client ID: <Client ID obtained in section 1>
    Refresh Token: <Refresh Token obtained in section 1>

3.h. Configure createAMail as shown in the image below.


    Source view of configuration

     <gmail.createAMail configKey="gmail">

After this point, there will be another local entry, named gmail, created in the project.

    4. Create an Inbound Endpoint and associate the above sequence

    5. Create a Connector Explorer Project in the workspace and add the SalesForce, Gmail connectors to it


    6. Create a CAR file with the following

    ESB Config Project
    Registry Resource Project for Data Mapper
    Connector Explorer Project

    7. Deploy the CAR file in the WSO2 ESB

    Invoke the inbound endpoint

    Sample Request

    <soapenv:Envelope xmlns:soapenv="" xmlns:test="org.wso2.sample">
        <test:query>select MasterRecordId,name,AccountNumber,Phone,BillingCountry,BillingPostalCode,BillingState,BillingCity,ShippingCountry from Account WHERE BillingCountry='USA'</test:query>
        <test:subject>Test Salesforce</test:subject>

    A mail will be sent to the recipient after invocation with the given subject and the mapped data as the message body.

Srinath Perera - Rolling Window Regression: a Simple Approach for Time Series Next Value Predictions

Given a time series, predicting the next value is a problem that has fascinated programmers for a long time. Obviously, a key reason for this attention is stock markets, which promised untold riches if you could crack it. However, except for a few (see A rare interview with the mathematician who cracked Wall Street), those riches have proved elusive.

Thanks to IoT (Internet of Things), time series analysis is poised to make a comeback into the limelight. IoT lets us place ubiquitous sensors everywhere, collect data, and act on that data. IoT devices collect data over time, and the resulting data is almost always time series data.

Following are a few use cases for time series prediction.

    1. Power load prediction
    2. Demand prediction for Retail Stores
    3. Services (e.g. airline check in counters, government offices) client prediction
    4. Revenue forecasts
    5. ICU care vital monitoring
    6. Yield and crop prediction

    Let’s explore the techniques available for time series forecasts.

The first question is: "isn't this just regression?". It is close, but not the same as regression. In a time series, each value is affected by the values just preceding it. For example, if there is a lot of traffic at 4:55 at a junction, chances are that there will be some traffic at 4:56 as well. This is called autocorrelation. If you are doing regression, you will only consider x(t), while due to autocorrelation, x(t-1), x(t-2), … will also affect the outcome. So we can think of time series forecasts as regression that factors in autocorrelation as well.

For this discussion, let's consider the "Individual household electric power consumption Data Set", which is data collected from one household over four years at one-minute intervals. Let's only consider three fields, and the data set will look like the following.

The first question to ask is: how do we measure success? We do this via a loss function, which we try to minimize. There are several loss functions, and they have different pros and cons.

1. MAE (Mean Absolute Error) — here all errors, big and small, are treated equally.
2. Root Mean Square Error (RMSE) — this penalizes large errors due to the squared term. For example, with errors [0.5, 0.5] and [0.1, 0.9], MAE is 0.5 for both, while RMSE is 0.5 and about 0.64 respectively.
3. MAPE (Mean Absolute Percentage Error) — since #1 and #2 depend on the value range of the target variable, they cannot be compared across data sets. In contrast, MAPE is a percentage, hence relative. It is like accuracy in a classification problem, where everyone knows 99% accuracy is pretty good.
4. RMSEP (Root Mean Square Percentage Error) — this is a hybrid between #2 and #3.
5. Almost Correct Predictions Error rate (AC_errorRate) — the percentage of predictions that are within p% of the true value.
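As a quick sanity check, most of these metrics can be written out directly. A sketch (the class and method names are mine, for illustration):

```java
// Sketch of the loss functions above; 'actual' and 'predicted' are aligned arrays.
class ForecastMetrics {

    // Mean Absolute Error: all errors, big and small, weighted equally.
    static double mae(double[] actual, double[] predicted) {
        double sum = 0;
        for (int i = 0; i < actual.length; i++) sum += Math.abs(actual[i] - predicted[i]);
        return sum / actual.length;
    }

    // Root Mean Square Error: the squared term penalizes large errors.
    static double rmse(double[] actual, double[] predicted) {
        double sum = 0;
        for (int i = 0; i < actual.length; i++) {
            double e = actual[i] - predicted[i];
            sum += e * e;
        }
        return Math.sqrt(sum / actual.length);
    }

    // Mean Absolute Percentage Error: relative, so comparable across data sets.
    static double mape(double[] actual, double[] predicted) {
        double sum = 0;
        for (int i = 0; i < actual.length; i++)
            sum += Math.abs((actual[i] - predicted[i]) / actual[i]);
        return 100.0 * sum / actual.length;
    }

    // Almost-correct rate: percentage of predictions within p% of the true value.
    static double almostCorrectRate(double[] actual, double[] predicted, double p) {
        int ok = 0;
        for (int i = 0; i < actual.length; i++)
            if (Math.abs(actual[i] - predicted[i]) <= Math.abs(actual[i]) * p / 100.0) ok++;
        return 100.0 * ok / actual.length;
    }
}
```

For error vectors [0.5, 0.5] and [0.1, 0.9], mae returns 0.5 for both, while rmse returns 0.5 and roughly 0.64, showing how the squared term penalizes the single large error.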

    If we are trying to forecast the next value, we have several choices.

    ARIMA Model

The gold standard for this kind of problem is the ARIMA model. The core idea behind ARIMA is to break the time series into different components, such as a trend component, a seasonality component, etc., and carefully estimate a model for each component. See Using R for Time Series Analysis for a good overview.

However, ARIMA has an unfortunate problem. It needs an expert (a good statistics degree or a grad student) to calibrate the model parameters. If you want to do multivariate ARIMA, that is, to factor in multiple fields, then things get even harder.

    However, R has a function called auto.arima, which estimates model parameters for you. I tried that out.

x_train <- <training data set>
x_test <- <test data set>
powerTs <- ts(x_train, frequency=525600, start=c(2006,503604))
arimaModel <- auto.arima(powerTs)
powerforecast <- forecast.Arima(arimaModel, h=length(x_test))

You can find a detailed discussion of how to do ARIMA in the links given above. I only used 200k rows from the data set, as our focus is mid-size data sets. It gave a MAPE of 19.5.

    Temporal Features

The second approach is to come up with a list of features that capture the temporal aspects, so that the autocorrelation information is not lost. For example, stock market technical analysis uses features built from moving averages. In the simple case, an analyst will track the 7-day and 21-day moving averages and take decisions based on cross-over points between those values.

    Following are some feature ideas

    1. collection of moving averages/ medians(e.g. 7, 14, 30, 90 day)
    2. Time since certain event
    3. Time between two events
    4. Mathematical measures such as Entropy, Z-scores etc.
    5. X(t) raised to functions such as power(X(t),n), cos((X(t)/k)) etc

A common trick is to apply those features with techniques like Random Forest and Gradient Boosting, which can provide the relative feature importance. We can use that data to keep good features and drop ineffective features.

I will not dwell too much on this topic. However, with some hard work, this method has been shown to give very good results. For example, most competitions are won using this method.

The downside, however, is that crafting features is a black art. It takes lots of work and experience to craft the features.

    Rolling Windows based Regression

Now we get to the interesting part. It seems there is another method that gives pretty good results without lots of hand holding.

The idea is that to predict X(t+1), the next value in a time series, we feed the model not only X(t), but also X(t-1), X(t-2), etc. A similar idea has been discussed in Rolling Analysis of Time Series, although it is used to solve a different problem.

    Let’s look at an example. Let’s say that we need to predict x(t+1) given X(t). Then the source and target variables will look like following.

The data set would look like the following after being transformed with a rolling window of three.
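The transformation itself is mechanical: slide a window of size w over the series and emit the window as features, with the next value as the target. A sketch (the names are mine, for illustration):

```java
// Turns a series into rows of [X(t-w+1) .. X(t)] -> X(t+1) for a window of size w.
class RollingWindow {
    // Each returned row has length w+1: the first w entries are the features,
    // the last entry is the target (the next value in the series).
    static double[][] transform(double[] series, int w) {
        int rows = series.length - w;   // each row needs w inputs plus one target
        double[][] out = new double[rows][w + 1];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j <= w; j++) {
                out[i][j] = series[i + j];
            }
        }
        return out;
    }
}
```

For the series [1, 2, 3, 4, 5] with w = 3, this yields the rows [1, 2, 3 -> 4] and [2, 3, 4 -> 5].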

Then, we will use the above transformed data set with a well-known regression algorithm, such as linear regression or Random Forest regression. The expectation is that the regression algorithm will figure out the autocorrelation coefficients from X(t-2) to X(t).

For example, with the above data set, applying linear regression on the transformed data set using a rolling window of 14 data points provided the following results. Here AC_errorRate considers a forecast correct if it is within 10% of the actual value.

    LR AC_errorRate=44.0 RMSEP=29.4632 MAPE=13.3814 RMSE=0.261307

This is pretty interesting, as this beats auto ARIMA right away (MAPE 19.5 vs 13.4 with rolling windows).

So far we have only tried linear regression. I then tried out several other methods, and the results are given below.

Linear regression still does pretty well; however, it is weak at keeping the error rate within 10%. Deep learning is better on that aspect, although it took some serious tuning. Please note that tests are done with 200k data points, as my main focus is on small data sets.

I got the best results from a neural network with 2 hidden layers of 20 units each, zero dropout or regularisation, the "relu" activation function, and the optimizer Adam(lr=0.001), running for 500 epochs. The network is implemented with Keras. While tuning, I found articles [1] and [2] pretty useful.

Then I tried out the same idea with a few more data sets.

    1. Milk production Data set ( small < 200 data points)
    2. Bike sharing Data set (about 18,000 data points)
    3. USD to Euro Exchange rate ( about 6500 data points)
    4. Apple Stocks Prices (about 13000 data points)

Forecasts are done as univariate time series; that is, we only consider time stamps and the value we are forecasting. Any missing value is imputed using padding (using the most recent value). For all tests, we used a window of size 14 as the rolling window.
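Padding, i.e. forward filling with the most recent observed value, can be sketched as follows (the names are mine; NaN marks a missing observation):

```java
// Forward-fill imputation: replace each missing value (NaN) with the most recent
// observed value. A leading NaN has no predecessor, so it is left for the caller.
class Imputation {
    static double[] forwardFill(double[] series) {
        double[] out = series.clone();
        for (int i = 1; i < out.length; i++) {
            if (Double.isNaN(out[i])) {
                out[i] = out[i - 1];   // carry the last observed value forward
            }
        }
        return out;
    }
}
```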

The following table shows the results. Here, except for Auto.Arima, the other methods use a rolling-window-based data set.

There is no clear winner. However, the rolling window method we discussed, coupled with a regression algorithm, seems to work pretty well.


We discussed three methods, ARIMA, using features to represent time effects, and rolling windows, for doing time series next-value forecasts with medium-size data sets.

Among the three, the third method provides good results comparable with the auto ARIMA model, although it needs minimal hand holding from the end user.

    Hence we believe that “Rolling Window based Regression” is a useful addition for the forecaster’s bag of tricks!

However, this does not discredit ARIMA, as with expert tuning it will do much better. At the same time, with hand-crafted features, methods two and three will also do better.

One crucial consideration is picking the size of the window for the rolling window method. Often we can get a good idea from the domain. Users can also do a parameter search on the window size.

Following are a few things that need further exploration.

    • Can we use RNN and CNN? I tried RNN, but could not get good results so far.
    • It might be useful to feed other features such as time of day, day of the week, and also moving averages of different time windows.


    1. An overview of gradient descent optimization algorithms
    2. CS231n Convolutional Neural Networks for Visual Recognition

Chathurika Erandi De Silva - JSON to XML mapping using Data Mapper: Quick and Simple guide

This post is a quick and simple walk-through of creating a simple mapping between JSON and XML using the Data Mapper, aimed at beginners. If you are new to the Data Mapper, please read this before continuing with this post.

For this sample, the below JSON and XML will be used.



     "order": [{

       "orderid": "1",

       "ordername": "Test",
       "customer": "test"

    <?xml version="1.0" encoding="UTF-8" ?>
    <soapenv:Envelope xmlns:soapenv="">

1. Create an ESB Config project using the Eclipse IDE (the WSO2 ESB Tooling component should be installed)
    2. Create a sequence
3. Include the DataMapper mediator: the following image illustrates a sample sequence, where the output from the DataMapper mediator is returned to the client.

4. Create the mapping file by double clicking on the DataMapper mediator
5. Include the above JSON schema as the input and the XML schema as the output
6. Map the values by connecting the variables in the input and output fields
7. Create an API and include the above created sequence

When the API is invoked, the converted and mapped XML is returned to the client as below.

Chanaka Jayasena - Role of GREG permission-mappings.xml

GREG has a permission-mappings.xml file. We can find it at <GREG_HOME>/repository/conf/etc/permission-mappings.xml

    Each entry has three attributes.
    • managementPermission
    • resourcePermission
    • resourcePaths


There are default configurations in this file. These entries map each permission in the permission tree to resource paths, and assign permissions on those paths.

With such a line in the permission-mappings.xml, a user who is assigned the permission "/permission/admin/manage/resources/govern/server/list" will be able to perform GET operations on registry resources stored at "/_system/governance/trunk/servers". We can provide multiple resource paths by separating them with commas.
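For illustration, such a mapping entry might look like the following. The element name and the action URI here are my assumptions, so check the default entries in your distribution's file for the exact form; only the three attribute names are taken from the description above:

```xml
<!-- Illustrative entry only: element name and resourcePermission action URI assumed -->
<PermissionMapping
    managementPermission="/permission/admin/manage/resources/govern/server/list"
    resourcePermission="http://www.wso2.org/projects/registry/actions/get"
    resourcePaths="/_system/governance/trunk/servers"/>
```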

    There are 3 types of permissions you can apply.

We can use these permissions to control each permission tree item's behavior.

The documentation link below describes the default behavior implemented with this permission-mappings.xml.

Chamila Wijayarathna - Updating RXTs and Lifecycles by a Non Admin User in WSO2 GREG

We can achieve this by creating two new roles with a specific set of permissions for each of the operations, and adding your users to those roles.

Then, in the Main tab of the management console, select Browse in the Resources section. Then go to the /_system/config/repository/components/org.wso2.carbon.governance/configuration section, as in the following.

From there, give read, write and delete permissions to the new role we created, as in the following image.

Now you can assign users to your role, and those users will be able to update RXTs.

After creating this role, you can assign users to it, and those users will be able to update Lifecycles.

Lahiru Sandaruwan - Access JWT Token in a Mediator Extension in WSO2 API Manager

There can be requirements for filtering requests at the API Manager layer based on user claims. That can be easily done using a mediator extension in WSO2 API Manager.

    See the references, [1] for enabling JWT and [2] for adding mediator extensions.

Please see the sample I tested below. I used JavaScript to decode the JWT token, using [3] for help.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="" name="Test:v1.0.0--In">
    <log level="custom">
        <property name="--TRACE-- " value="API Mediation Extension"/>
    </log>
    <property name="authheader" expression="get-property('transport','X-JWT-Assertion')"></property>
    <script language="js"> var temp_auth = mc.getProperty('authheader').trim();var val = new Array();val= temp_auth.split("\\."); var auth=val[1];var jsonStr =, "UTF-8"); var tempStr = new Array();tempStr= jsonStr.split('\":\"'); var decoded = new Array();decoded = tempStr[1].split("\"");mc.setProperty("enduser",decoded[0]); </script>
    <log level="custom">
        <property name=" Enduser " expression="get-property('enduser')"/>
    </log>
</sequence>
I created an API and engaged the above sample as a mediation extension. I retrieved the "enduser" claim as an example.
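The decoding step in that script boils down to Base64-decoding the middle segment of the JWT inside the JVM. A standalone Java sketch of that step (the class and method names are mine; real JWT segments are base64url-encoded, hence the URL decoder):

```java
// A JWT is three dot-separated segments: header.payload.signature.
// This decodes the payload (second segment) into its JSON string.
class JwtPayload {
    static String decodePayload(String jwt) {
        String payloadB64 = jwt.split("\\.")[1];
        byte[] bytes = java.util.Base64.getUrlDecoder().decode(payloadB64);
        return new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
    }
}
```

From the decoded JSON, the desired claim (for example the end-user claim) can then be extracted, as the script mediator sample does with string splitting.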

Ushani Balasooriya - Difference in API and User Level Advance Resource throttling in WSO2 API Manager 2.0

From the WSO2 API Manager 2.0 throttling implementation onward, two different throttling levels have been introduced in resource level throttling.

When you log in to the admin dashboard, you can see the advanced resource throttling tier configurations under the throttle policies section, as given in the screenshot below.

When you add a resource tier, you can select either the API level or the user level, as below.

    API Level Resource Tier

For an API level policy, the tier is the shared quota of all applications that invoke the API. If someone selects an API level policy, then selecting a resource level policy is disabled.

As an example, if two users subscribe to the same API, the request count or bandwidth defined for the API applies to both users as a shared quota. So if you have defined 10000 requests per minute, both users share that amount.

    User Level Resource Tier

For a user level policy, the quota is assigned per user of the API. A single user gets that quota even when accessing the particular API from multiple applications; simply put, at the user level the throttle key is associated with the user name.

As an example, if you have selected the user level and two users are subscribed to the same API, the count defined in the tier is assigned to each user. So if you have defined 10000 requests per minute, each user gets 10000 requests per minute, for a total of 20000 requests.

Prabath Siriwardena - Ten Talks On Microservices You Cannot Miss At Any Cost!

The Internet is flooded with articles, talks and panel discussions on microservices. According to Google Trends, the word "microservices" has shown a steep upward curve since mid-2014. Finding the best talks among all the published talks on microservices is a hard job, and I might be off track in picking the best 10; my apologies if your most awesome microservices talk is missing here, and please feel free to add a link to it as a comment. To add one more to the pile of microservices talks we already have, I will be doing a talk on Microservices Security at the Cloud Identity Summit, New Orleans next Monday.


    Dimuthu De Lanerolle

Sample UI test for GREG LifeCycle scenarios

    Refer :


    *Copyright (c) 2005-2015, WSO2 Inc. ( All Rights Reserved.
    *WSO2 Inc. licenses this file to you under the Apache License,
    *Version 2.0 (the "License"); you may not use this file except
    *in compliance with the License.
    *You may obtain a copy of the License at
    *Unless required by applicable law or agreed to in writing,
*software distributed under the License is distributed on an
*"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
*KIND, either express or implied.  See the License for the
    *specific language governing permissions and limitations
    *under the License.
    package org.wso2.carbon.greg.ui.test.lifecycle;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.testng.annotations.AfterClass;
    import org.testng.annotations.BeforeClass;
    import org.testng.annotations.Test;
    import org.wso2.carbon.automation.engine.context.TestUserMode;
    import org.wso2.carbon.automation.engine.context.beans.User;
    import org.wso2.carbon.automation.extensions.selenium.BrowserManager;

    import org.wso2.greg.integration.common.utils.GREGIntegrationUIBaseTest;

    import static org.testng.Assert.assertTrue;

     * This UI test class covers a full testing scenario of,
     *  1. Uploading a LC to greg,
     *  2. Implementation of rest service,
     *  3. Adding the LC to rest service,
     *  4. Attaching of LC for rest service
     *  5. Promoting of rest service
     *  6. Store related operations - JIRA STORE-1156
    public class LifeCycleSmokeUITestCase extends GREGIntegrationUIBaseTest {

        private WebDriver driver;
        private User userInfo;

        private String restServiceName = "DimuthuD";
        private UIElementMapper uiElementMapper;

        @BeforeClass(alwaysRun = true)
        public void setUp() throws Exception {

            userInfo = automationContext.getContextTenant().getContextUser();
            driver = BrowserManager.getWebDriver();
            this.uiElementMapper = UIElementMapper.getInstance();

        @AfterClass(alwaysRun = true)
        public void tearDown() throws Exception {

        @Test(groups = "wso2.greg", description = "verify login to governance registry")
        public void performingLoginToManagementConsole() throws Exception {

            LoginPage test = new LoginPage(driver);

            driver.get(getLoginURL() + "admin/index.jsp?loginStatus=true");
            LifeCycleHomePage lifeCycleHomePage = new LifeCycleHomePage(driver);

            String lifeCycle = "<aspect name=\"SampleLifeCycle\" class=\"org.wso2.carbon.governance.registry.extensions.aspects.DefaultLifeCycle\">\n" +
                    "    <configuration type=\"literal\">\n" +
                    "        <lifecycle>\n" +
                    "            <scxml xmlns=\"\"\n" +
                    "                   version=\"1.0\"\n" +
                    "                   initialstate=\"Development\">\n" +
                    "                <state id=\"Development\">\n" +
                    "                    <datamodel>\n" +
                    "                        <data name=\"checkItems\">\n" +
                    "                            <item name=\"Code Completed\" forEvent=\"\">\n" +
                    "                            </item>\n" +
                    "                            <item name=\"WSDL, Schema Created\" forEvent=\"\">\n" +
                    "                            </item>\n" +
                    "                            <item name=\"QoS Created\" forEvent=\"\">\n" +
                    "                            </item>\n" +
                    "                        </data>\n" +
                    "                    </datamodel>\n" +
                    "                    <transition event=\"Promote\" target=\"Tested\"/>\n" +
                    "                    <checkpoints>\n" +
                    "                        <checkpoint id=\"DevelopmentOne\" durationColour=\"green\">\n" +
                    "                            <boundary min=\"0d:0h:0m:0s\" max=\"1d:0h:00m:20s\"/>\n" +
                    "                        </checkpoint>\n" +
                    "                        <checkpoint id=\"DevelopmentTwo\" durationColour=\"red\">\n" +
                    "                            <boundary min=\"1d:0h:00m:20s\" max=\"23d:2h:5m:52s\"/>\n" +
                    "                        </checkpoint>\n" +
                    "                    </checkpoints>\n" +
                    "                </state>\n" +
                    "                <state id=\"Tested\">\n" +
                    "                    <datamodel>\n" +
                    "                        <data name=\"checkItems\">\n" +
                    "                            <item name=\"Effective Inspection Completed\" forEvent=\"\">\n" +
                    "                            </item>\n" +
                    "                            <item name=\"Test Cases Passed\" forEvent=\"\">\n" +
                    "                            </item>\n" +
                    "                            <item name=\"Smoke Test Passed\" forEvent=\"\">\n" +
                    "                            </item>\n" +
                    "                        </data>\n" +
                    "                    </datamodel>\n" +
                    "                    <transition event=\"Promote\" target=\"Production\"/>\n" +
                    "                    <transition event=\"Demote\" target=\"Development\"/>\n" +
                    "                </state>\n" +
                    "                <state id=\"Production\">\n" +
                    "                    <transition event=\"Demote\" target=\"Tested\"/>\n" +
                    "                </state>\n" +
                    "            </scxml>\n" +
                    "        </lifecycle>\n" +
                    "    </configuration>\n" +


            driver.get(getLoginURL() + "lcm/lcm.jsp?region=region3&item=governance_lcm_menu");
            LifeCyclesPage lifeCyclesPage = new LifeCyclesPage(driver);

            assertTrue(lifeCyclesPage.checkOnUploadedLifeCycle("SampleLifeCycle"), "Sample Life Cycle Could Not Be Found");



  "Login test case is completed ");

        @Test(groups = "wso2.greg", description = "logging to publisher",
                dependsOnMethods = "performingLoginToManagementConsole")
        public void performingLoginToPublisher() throws Exception {

            // Setting publisher home page

            PublisherLoginPage test = new PublisherLoginPage(driver);

            // performing login to publisher
            test.loginAs(userInfo.getUserName(), userInfo.getPassword());

            driver.get(getPublisherUrl().split("/apis")[0] + "/pages/gc-landing");

            PublisherHomePage publisherHomePage = new PublisherHomePage(driver);

            //adding rest service
            publisherHomePage.createRestService(restServiceName, "/lana", "1.2.5");

            driver.findElement(By.linkText("Sign out")).click();


        @Test(groups = "wso2.greg", description = "Adding of LC to the rest service",
                dependsOnMethods = "performingLoginToPublisher")
        public void addingLCToRestService() throws Exception {


            LoginPage loginPage = new LoginPage(driver);




        @Test(groups = "wso2.greg", description = "Promoting of rest service with the LC",
                dependsOnMethods = "addingLCToRestService")
        public void lifeCycleEventsOfRestService() throws Exception {

            // Setting publisher home page

            PublisherLoginPage publisherLoginPage = new PublisherLoginPage(driver);

            // performing login to publisher
            publisherLoginPage.loginAs(userInfo.getUserName(), userInfo.getPassword());

            driver.get(getPublisherUrl().split("/apis")[0] + "/pages/gc-landing");

            driver.findElement(By.linkText("REST Services")).click();


            driver.findElement(By.linkText("Other lifecycles")).click();


            // removal of added rest service
        }

    Dimuthu De Lanerolle

    [1] Docker + Java 

    The code segment below can be used to push your Docker images to the public Docker registry/hub.

      private StringBuffer output = new StringBuffer();

      private String gitPush(String command) {

          Process process;
          try {
              process = Runtime.getRuntime().exec(command);
              BufferedReader reader =
                      new BufferedReader(new InputStreamReader(process.getInputStream()));

              String line;
              while ((line = reader.readLine()) != null) {
                  output.append(line).append("\n");
              }
          } catch (Exception e) {
              e.printStackTrace();
          }
          return output.toString();
      }


    Build a Docker image using fabric8

    public static boolean buildDockerImage(String dockerUrl, String image, String imageFolder)
                throws InterruptedException, IOException {

            Config config = new ConfigBuilder()
                    .withDockerUrl(dockerUrl)
                    .build();

            DockerClient client = new DefaultDockerClient(config);
            final CountDownLatch buildDone = new CountDownLatch(1);

            OutputHandle handle = client.image().build()
                    .withRepositoryName(image)
                    .usingListener(new EventListener() {
                        public void onSuccess(String message) {
                  "Success:" + message);
                            buildDone.countDown();
                        }

                        public void onError(String message) {
                            log.error("Failure:" + message);
                            buildDone.countDown();
                        }

                        public void onEvent(String event) {
                  "Event:" + event);
                        }
                    })
                    .fromFolder(imageFolder);

            buildDone.await();
            handle.close();
            return true;
    }

    Chanaka Jayasena: Adding custom validations to WSO2 Enterprise Store - Publisher asset creation page.

    Writing an asset extension

    Asset extensions allow you to add custom functionality, and a custom look and feel, to a certain asset type in WSO2 Enterprise Store. In this example, I am going to create a new asset type and add a custom validation to its asset creation page.

    Download the WSO2 ES product distribution and extract it to your local drive. Start the server and open the admin console URL in a browser.


    Navigate to Extensions > Configure > Artifact Types and click the "Add new Artifact" link.

    This will load the UI with a new default artifact.

    Notice that it has the following attributes in its root node.

    shortName="applications" singularLabel="Enterprise Application" pluralLabel="Enterprise Applications"

    "applications" is the name of new asset type we are going to add.

    In our example we are going to add a custom validation to the asset creation page. Let's add the fields we are going to validate.

    Our intention is to provide 4 checkboxes and display an error message unless at least one of them is selected. Basically, if the user does not check any of the four checkboxes we provide, an error message will be displayed.

     <table name="Tier Availability">  
        <field type="checkbox">
            <name>Bronze</name>
        </field>
        <field type="checkbox">
            <name>Unlimited</name>
        </field>
        <field type="checkbox">
            <name>Gold</name>
        </field>
        <field type="checkbox">
            <name>Silver</name>
        </field>
     </table>

    Restart the server.
    Now when we go to the publisher app, the new content type is available.

    Now click the Add button to add a new asset of "Enterprise Applications".

    You will notice that the new section we added in the rxt is available on this page.

    We are going to add a custom validation to these four checkboxes. It will validate that at least one of the checkboxes is checked and, if not, show an error message below the checkboxes.

    We can't make any changes to the core web application, but it is possible to add custom behavior via ES extensions. Since this is an asset-specific customization which needs to apply only to the new "applications" asset type, we need to create an "asset extension". There is one other extension type called an "app extension", which applies to all the asset types across the whole web application.

    Navigate to "repository/deployment/server/jaggeryapps/publisher/extensions/assets" folder. Create a new folder and name it "applications". Note that the "applications" is the value of "shortName" attribute we gave on rxt creation. The complete path to the new folder is "repository/deployment/server/jaggeryapps/publisher/extensions/assets/applications".

    Now, using this folder, we can override the default files of the core application. We need to add a client-side javascript file to the add-asset page, and in that file initialize the client-side event registrations that will validate the checkboxes.

    Note the folder structure in "repository/deployment/server/jaggeryapps/publisher/" ( say [A] ). We can override the files from above to "repository/deployment/server/jaggeryapps/publisher/extensions/assets/applications" ( say [B] ).

    Copy [A]/themes/default/helpers/create_asset.js to [B]/themes/default/helpers/create_asset.js.

    Add a new client-side script of your choice to the list of js files in [B]/themes/default/helpers/create_asset.js. I added a file called 'custom-checkbox-validate.js'. The content of [B]/themes/default/helpers/create_asset.js will be as follows.

    var resources = function (page, meta) {
        return {
            js: ['custom-checkbox-validate.js'],
            css: ['bootstrap-select.min.css', 'datepick/smoothness.datepick.css', 'date_picker/datepicker/base.css', 'date_picker/datepicker/clean.css', 'select2.min.css'],
            code: ['publisher.assets.hbs']
        };
    };

    Now create the new file [B]/themes/default/js/custom-checkbox-validate.js
    Put an alert('') in the above file and see whether you get a browser alert message.

    If you are not getting the alert message, the following are probable causes:

    • You haven't restarted the server after creating the extension.
    • The asset type is not matching the folder name.
    • The folder structure in the extension is not aligned with the core application folder structure.

    Update custom-checkbox-validate.js with the following content. The script below validates that at least one of the four checkboxes is checked.

    $(document).ready(function () {
        validator.addCustomMethod(function () {
            //Get the 4 checkboxes as jQuery objects.
            var bronze = $('#tierAvailability_bronze');
            var unlimited = $('#tierAvailability_unlimited');
            var gold = $('#tierAvailability_gold');
            var silver = $('#tierAvailability_silver');
            var errorMessage = "You need to check at least one from the tiers";

            /*
             * Custom event handler for the four checkboxes.
             * The error by default is shown after the input element.
             */
            var checkboxClickCustomHandler = function () {
                if (':checked') ||':checked') ||
              ':checked') ||':checked')) {
                    // at least one checkbox is checked - nothing to do
                } else {
                    validator.showErrors(bronze, errorMessage);
                }
            };

            //Register the event handler for the four checkboxes.
  ;
  ;
  ;
  ;

            // Do the custom validation where it checks that at least one checkbox is checked.
            // The return type is a json of the format {message:"", element:}
            if (':checked') ||':checked') ||
      ':checked') ||':checked')) {
                return {message: "", element: bronze};
            } else {
                return {message: errorMessage, element: bronze};
            }
        });
    });

    Chintana Wilamuna: SAML2 SSO for Ruby on Rails apps with Identity Server

    The Identity Server documentation has samples on how to configure SAML2 SSO for a Java webapp. SAML2 allows decoupling service providers from identity providers. The only requirement is the ability to create and process SAML2 messages, which are XML. In an identity management scenario, the service provider (referred to as SP) is typically a web application. The identity provider (or IdP) is any system that provides a user repository, authentication and user profile details to the service provider application.


    If you want to skip the rest of the post and get going right away, follow these steps:

    1. Download and start Identity Server
    2. Clone the modified rails project from here -
    3. Create a cert keypair and update the SP cert settings in rails app
    4. Upload public cert to Identity Server
    5. Add an SP entry for the rails app in Identity Server (Issuer is already mentioned in app/models/account.rb)
    6. Login to Rails app - use admin/admin default credentials. You can add more users and use their accounts to login as well

    Changes to the rails app

    The rest of the post covers the details and changes I had to make to get single sign-on as well as single sign-out working.

    I’m using the sample skeleton rails project that’s integrated with ruby-saml. We need to configure details on connecting to Identity Server. SAML settings are in app/models/account.rb.

    First up we need to change IdP settings to match what we have in Identity Server.

    settings.issuer                         = "railsapp"
    settings.idp_entity_id                  = "localhost"
    settings.idp_sso_target_url             = "https://localhost:9443/samlsso"
    settings.idp_slo_target_url             = "https://localhost:9443/samlsso"
    settings.idp_cert                       = "-----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----"

    The certificate here is the default certificate that comes with Identity Server 5.1.0. I’ve truncated it for brevity. Then we need the following 2 entries for decrypting the encrypted SAML assertions. Certs are truncated.

    settings.certificate                    = "-----BEGIN CERTIFICATE-----
    MIICfjCCAeegAwIBAgIEFFIb3D ...
    -----END CERTIFICATE-----"
    settings.private_key                    = "-----BEGIN PRIVATE KEY-----
    MIICdgIBADANBgkqhkiG9w0BAQEFAA ...
    -----END PRIVATE KEY-----"

    When we’re sending logout requests we need to include the session index with the request, so that Identity Server will log out the correct user. The rails sample doesn’t send the session index by default, so you’ll see an exception saying the session index cannot be null when you do single logout. Changes can be found here.

    Then we need to make the saml/logout route accessible through HTTP POST by adding an additional route,

    post :logout

    Service provider configuration in Identity Server

    At the Identity Server we need to register the rails app as a service provider.

    Then click on Inbound Authentication Configuration, click SAML2 Web SSO Configuration.

    In the above configuration http://localhost:3000/saml/acs is the assertion consumer URL. Certificate is the certs created earlier. Also we need to configure single logout URL - http://localhost:3000/saml/logout

    Tracing SAML messages

    I’m using SAML Chrome extension to monitor SAML messages. First SAML call is for authentication. This is the message sent from rails app to Identity Server.

    Second message is SAML response message sent from Identity Server to rails application. As you can see in the below screenshot SAML response is encrypted.

    Let’s do single logout from rails app. Here’s the logout request with session index.

    Then the logout response we get from Identity Server.

    With a library that supports processing SAML requests almost any web app can be integrated into Identity Server using SAML2.

    Afkham Azeez: This blog has reached EOL

    After a decade, I am shutting down this blog. I will be writing on my publication on Medium. Medium is much more user-friendly than Blogger and the user experience is awesome.

    Even though I will not make any new posts here, I will still retain the posts in this blog because many of them have been referenced from other places.

    Follow me on Medium:

    Ushani Balasooriya: Scenarios to Understand Subscription Tier Throttling in WSO2 API Manager 2.0

    • With the new throttling implementation, the subscription level tier quota defines the limit with which a particular application can access a particular API. 

    • So basically the throttling policy for a subscription level tier is keyed by: appId + apiName + version. 
    • This can be defined as a request count or bandwidth. 
    • If an application is used by 1000 users and subscribed to a 50000 Req/Min tier, then all 1000 subscribed users together can invoke a maximum of 50000 requests per minute, since the subscription level tier policy does not consider the user identity.
    • With the previous throttling implementation, any single application user could use the full limit of 50000 Req/Min. 

    • When configuring a subscription level tier, burst/rate limiting is also introduced to control the maximum requests that can be sent by a particular subscriber within a given period.
    • So if the burst limit is configured as 1000 Request/s, each user will be able to send at most 1000 requests per second until the 50000 requests for that minute are reached.
    • If there are 10 users subscribed via 10 different applications, each user gets 50000 requests, with the limitation of sending bursts of at most 1000 requests per second.
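The shared, per-application counting described above can be sketched as a toy Java simulation. This is only an illustration, not WSO2 code: the class and method names are hypothetical, and the 2000 requests/minute limit and the 900/1101 request split are taken from the Silver tier scenarios in this post.

```java
import java.util.HashMap;
import java.util.Map;

public class SubscriptionThrottleDemo {

    // The subscription policy is keyed by appId + apiName + version, not by user
    static final int LIMIT = 2000; // Silver tier: 2000 requests per minute
    static final Map<String, Integer> counters = new HashMap<>();

    static boolean allow(String appId, String apiName, String version) {
        String key = appId + ":" + apiName + ":" + version;
        int used = counters.getOrDefault(key, 0);
        if (used >= LIMIT) {
            return false; // throttled out
        }
        counters.put(key, used + 1);
        return true;
    }

    public static void main(String[] args) {
        // user1 sends 900 requests, then user2 sends 1101 through the same app:
        // the counter is shared, so the 2001st request overall gets throttled
        boolean throttled = false;
        for (int i = 0; i < 900; i++) {
            allow("app1", "API1", "1.0.0"); // user1's requests
        }
        for (int i = 0; i < 1101; i++) {
            if (!allow("app1", "API1", "1.0.0")) { // user2's requests
                throttled = true;
            }
        }
        System.out.println("user2 throttled: " + throttled);
    }
}
```

Because the user identity is not part of the key, it makes no difference which user sends the request that crosses the limit.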

    Scenario 1: Different users using the same application with their user tokens

    Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via the same application, when the quota limit is 'request count' and there is no burst limit.


    1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
    2. A subscription tier should be created as below.
    • Tier Name : Silver
    • Request Count : 2000
    • Unit Time : 1 minute
    • Burst Control (Rate Limiting) : 0
    • Stop on Quota Reach : Not selected
    • Billing Plan : Free or Commercial
    • Custom Attributes : None
    • Permissions : not defined
    3. API 1 should have been created and published as below by a publisher,
    • Subscription Tiers : Silver
    • GET resource level Tier : Unlimited is set
    4. A developer subscribes to API1
    • Application created with an Unlimited Tier. app1
    • Subscribe using the application with Silver
    5. Generate production keys for the particular application app1 and retrieve the consumer key and secret.
    6. User1 and User2 should generate their user tokens using the consumer key and secret generated in the above step.
    User1 using app1 :

    User1 Token = curl -k -d "grant_type=password&username=<Username>
    &password=<Password>" -H "Authorization: Basic <app1_Token>"
    User2 using app1 :

    User2 Token = curl -k -d "grant_type=password&username=<Username>
    &password=<Password>" -H "Authorization: Basic <app1_Token>"
    Authorization: Basic <app1_token> =
    <Base64encode(consumerkey:consumer secret of app1)>
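The <Base64encode(consumer key:consumer secret)> placeholder above can be produced with standard Java, as a quick sketch. The consumer key and secret values here are hypothetical placeholders; use the ones generated for your application.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    public static void main(String[] args) {
        // hypothetical consumer key and secret of app1
        String consumerKey = "myConsumerKey";
        String consumerSecret = "myConsumerSecret";

        // Base64encode(consumerkey:consumersecret) for the Basic auth header
        String encoded = Base64.getEncoder().encodeToString(
                (consumerKey + ":" + consumerSecret).getBytes(StandardCharsets.UTF_8));

        System.out.println("Authorization: Basic " + encoded);
    }
}
```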

    Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens
    • User1 : 900 using user1 token
    • User2 : 1101 using user2 token
    Expected Result: The user who sends the 2001st request should be notified as throttled out.

    Scenario 2: Same user using different applications with their user tokens

    Throttle out an API by a subscription tier when the same user invokes a particular API subscribed via different applications, when the quota limit is 'request count' and there is no burst limit.


    1. API Manager should be up and running and user1 should be signed up with subscriber permission.
    2. A subscription tier should be created as below.
    • Tier Name : Silver
    • Request Count : 2000
    • Unit Time : 1 minute
    • Burst Control (Rate Limiting) : 0
    • Stop on Quota Reach : Not selected
    • Billing Plan : Free or Commercial
    • Custom Attributes : None
    • Permissions : not defined
    3. API 1 should have been created and published as below by a publisher,
    • Subscription Tiers : Silver
    • GET resource level Tier : Unlimited is set
    4. A developer subscribes to API1 
    • 2 Applications created with an Unlimited Tier. app1 and app2
    • Subscribe API1 using the applications (app1 and app2) with Silver
    5. Generate production keys for the particular applications app1 and app2 and retrieve the consumer keys and secrets.
    6. User1 should generate user tokens using the consumer key and secret generated in the above step for both apps

    User1 using app1 :

    User1 Token1 = curl -k -d "grant_type=password&username=<Username>&password=<Password>"
     -H "Authorization: Basic <app1_Token>"
    User1 using app2 :

    User1 Token2 = curl -k -d "grant_type=password&username=<Username>&password=<Password>"
     -H "Authorization: Basic <app2_Token>"
    Authorization: Basic <app1_token>
    = <Base64encode(consumerkey:consumer secret of app1)>

    Authorization: Basic <app2_token>
     = <Base64encode(consumerkey:consumer secret of app2)>

    Step: User 1 invokes the GET resource as below within a minute using user token1 and token2

    • 900 requests using user1 token1
    • 1101 requests using user1 token2
    Expected Result: The user will be able to invoke all the requests successfully.

    Step: User 1 invokes the GET resource as below within a minute using user token1 and token2

    • 2000 using user1 token1 
    • 2001 using user1 token2
    Expected Result: When user1 invokes the 2001st request using token2, the user will be notified as throttled out while the other requests will be successful.

    Scenario 3: Different users via different applications using their user tokens

    Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via different applications, when the quota limit is 'request count' and there is no burst limit.


    1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
    2. A subscription tier should be created as below.
    • Tier Name : Silver
    • Request Count : 2000
    • Unit Time : 1 minute
    • Burst Control (Rate Limiting) : 0
    • Stop on Quota Reach : Not selected
    • Billing Plan : Free or Commercial
    • Custom Attributes : None
    • Permissions : not defined
    3. API 1 should have been created and published as below by a publisher,
    • Subscription Tiers : Silver
    • GET resource level Tier : Unlimited is set
    4. A developer subscribes to API1
    • Application 1 and application 2 created with an Unlimited Tier. app1 and app2
    • Subscribe using the applications with Silver
    5. Generate production keys for the particular applications app1 and app2 for the 2 different users and retrieve the consumer keys and secrets.
    6. User1 and User2 should generate their user tokens using the consumer key and secret generated in the above step.

    User 1 using app1 :

    User token 1 = curl -k -d "grant_type=password&username=user1&password=user1" 
    -H "Authorization: Basic <app1_Token>"
    User 2 using app2 :

    User token 2 = curl -k -d "grant_type=password&username=user2&password=user2" 
    -H "Authorization: Basic <app2_Token>"

    Authorization: Basic <app1_Token>
     = <Base64encode(consumerkey:consumer secret of app1)>

    Authorization: Basic <app2_Token>
    = <Base64encode(consumerkey:consumer secret of app2)>

    Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens

    • User1 : 900 using user1 token 
    • User2 : 1101 using user2 token
    Expected Result: Both users will be able to invoke successfully.

    Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens

    • User1 : 2000 using user1 token 
    • User2 : 2000 using user2 token
    Expected Result: Both users will be able to invoke successfully.

    Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens

    • User1 : 2001 requests using user1 token 
    • User2 : 2001 requests using user2 token
    Expected Result: Both users will be notified as throttled out on their 2001st request.

    Scenario 4: Different users via the same application using a test access token

    Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via the same application and a test access token (grant_type = client_credentials), when the quota limit is 'request count' and there is no burst limit.

    Preconditions :
    1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
    2. A subscription tier should be created as below.
    • Tier Name : Silver
    • Request Count : 2000
    • Unit Time : 1 minute
    • Burst Control (Rate Limiting) : 0
    • Stop on Quota Reach : Not selected
    • Billing Plan : Free or Commercial
    • Custom Attributes : None
    • Permissions : not defined
    3. API 1 should have been created and published as below by a publisher,
    • Subscription Tiers : Silver
    • GET resource level Tier : Unlimited is set
    4. A developer subscribes to API1 
    • Application created with an Unlimited Tier. app1
    • Subscribe using the application with Silver
    5. Generate production keys for the particular application app1 and retrieve the test access token.
    6. The test access token can be retrieved via the below command.

    Developer generates an access token using app1 :

    Test Access Token = curl -k -d "grant_type=client_credentials"
    -H "Authorization: Basic <app1_Token>"

    Authorization: Basic <app1_token>
     = <Base64encode(consumerkey:consumer secret of app1)>

    Step: User 1 and User 2 invoke the GET resource as below within a minute using the same generated test access token

    • User1 : 900 requests using the test access token 
    • User2 : 1100 requests using the test access token
    Expected Result: Both users will be able to invoke successfully.

    Step: User 1 and User 2 invoke the GET resource as below within a minute using the same test access token

    • User1 : 900 using the test access token 
    • User2 : 1101 using the test access token
    Expected Result: The user who sends the 2001st request should be notified as throttled out.

    Scenario 5: Different users via the same application via user tokens when a burst limit is configured

    Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via the same application, when the quota limit is 'request count' and a burst limit is configured.

    Preconditions :

    1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
    2. A subscription tier should be created as below.
    • Tier Name : Silver
    • Request Count : 2000
    • Unit Time : 1 hour
    • Burst Control (Rate Limiting) : 100 Request/m
    • Stop on Quota Reach : Not selected
    • Billing Plan : Free or Commercial
    • Custom Attributes : None
    • Permissions : not defined
    3. API 1 should have been created and published as below by a publisher,
    • Subscription Tiers : Silver
    • GET resource level Tier : Unlimited is set
    4. A developer subscribes to API1 
    • Application 1 created with an Unlimited Tier. app1
    • Subscribe using the application with Silver
    5. Generate production keys for the particular application app1 and retrieve the consumer key and secret.
    6. User1 and User2 should generate their user tokens using the consumer key and secret generated in the above step.

    User 1 using app1 :

    User token 1 = curl -k -d "grant_type=password&username=user1&password=user1" 
    -H "Authorization: Basic <app1_Token>"
    User 2 using app1 :

    User token 2 = curl -k -d "grant_type=password&username=user2&password=user2" 
    -H "Authorization: Basic <app1_Token>"

    Authorization: Basic <app1_Token>
     = <Base64encode(consumerkey:consumer secret of app1)>

    Step: User1 invokes with 100 requests (a burst) within a minute using user1's user token.
    Expected Result: The user should be able to invoke successfully.

    Step: User 1 tries to send another request within the same minute.
    Expected Result: User1 will be notified as having exceeded the quota until the next minute.

    Step: User 2 invokes with 100 requests (a burst) within the same minute using user2's user token.
    Expected Result: User 2 will be able to invoke successfully.

    Step: User 2 invokes again within the same minute using user2's user token.
    Expected Result: The user should be notified as having exceeded the quota until the next minute.

    Step: User 1 and User 2 invoke the GET resource within an hour with the below requests, sticking to the burst limit (100 requests/m).

    • User1 : 1000 requests using user1 token 
    • User2 : 1001 requests using user2 token
    Expected Result: The user who exceeds the throttling limit by sending the 2001st request will be notified as throttled out until the next hour, since the tier is configured as 2000 req/hour.

    Until then, all the requests will be invoked successfully, sticking to the burst limit.

    Chathurika Erandi De Silva: Message flow debugging with WSO2 Message Flow Debugger

    Mediation Debugger is a feature included in the upcoming WSO2 ESB 5.0.0 release. The feature is bundled as an installable pack for Eclipse Mars. Mediation Debugger gives a developer a UI-based visualization of the mediation flow, so that it can be debugged in an easy and fast manner.

    Using the new debugger we can easily toggle between breakpoints, add skip points, view the message envelope and of course play around with the properties that are passed through the mediation flow.

    A rich graphical interface is provided, so mediation flow debugging becomes quite easy compared to reading through a large XML file to find where the problem is.

    I am not writing the entire story of the Mediation Debugger here, as all the information you need can be gathered from this post. Since the Beta is out, why not download it and play around a bit?

    Lahiru Sandaruwan: How to create a simple mock REST API using WSO2 ESB

    Inspired by this blog by Miyuru for a SOAP mock service, here is the REST equivalent:

    <api xmlns="http://ws.apache.org/ns/synapse" name="SimpleAPI" context="/simple">
       <resource methods="GET">
          <inSequence>
             <payloadFactory media-type="xml">
                <format>
                   <!-- sample static payload returned by the mock API -->
                   <Response xmlns="">
                      <status>OK</status>
                   </Response>
                </format>
                <args/>
             </payloadFactory>
             <respond/>
          </inSequence>
       </resource>
    </api>
    Enter http://localhost:8280/simple in a browser or SOAP UI Rest project to test this. This is tested in WSO2 ESB 4.9.0.

    Shashika Ubhayaratne: How to change the default keystore password on WSO2 servers

    Sometimes you may need to change the default keystore password in WSO2 products for security reasons.

    Here are the steps when changing keystore passwords:

    Step 1:
    Navigate to wso2 server location:
    ex: cd $wso2_server/repository/resources/security

    Step 2:
    Change keystore password:
    keytool -storepasswd -new [new password] -keystore [keystore name]
    ex: keytool -storepasswd -new simplenewpassword -keystore wso2carbon.jks

    Step 3:
    Change Private Key password
    keytool -keypasswd -alias wso2carbon -keystore wso2carbon.jks  
    Enter keystore password: <simplenewpassword>
    Enter key password for <wso2carbon>: wso2carbon
    New key password for <wso2carbon>: <simplenewpassword>
    Re-enter new key password for <wso2carbon>: <simplenewpassword>

    The keystore and private key passwords must be the same in some cases, such as WSO2 BAM. Especially with Thrift, we need to configure one password for both.
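To see the store password in action programmatically, here is a minimal Java sketch (the class name is mine; the password value comes from the keytool example above). It creates an empty JKS keystore in memory, protects it with the new store password, and reloads it with the same password, which is exactly what the server does at startup.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class KeyStorePasswordDemo {
    public static void main(String[] args) throws Exception {
        char[] storePassword = "simplenewpassword".toCharArray();

        // Create an empty JKS keystore and protect it with the new password
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null);

        ByteArrayOutputStream out = new ByteArrayOutputStream();, storePassword);

        // Reloading requires the same store password; a wrong one fails with an IOException
        KeyStore reloaded = KeyStore.getInstance("JKS");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), storePassword);
        System.out.println("reloaded entries: " + reloaded.size());
    }
}
```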

    Step 4:
    Configure wso2 server (example taken here as WSO2 BAM)

    • Change carbon.xml at $wso2_server/repository/conf

    <KeyStore>
        <!-- Keystore file location-->
        <Location>${carbon.home}/repository/resources/security/wso2carbon.jks</Location>
        <!-- Keystore type (JKS/PKCS12 etc.)-->
        <Type>JKS</Type>
        <!-- Keystore password-->
        <Password>simplenewpassword</Password>
        <!-- Private Key alias-->
        <KeyAlias>wso2carbon</KeyAlias>
        <!-- Private Key password-->
        <KeyPassword>simplenewpassword</KeyPassword>
    </KeyStore>
    <RegistryKeyStore>
        <!-- Keystore file location-->
        <Location>${carbon.home}/repository/resources/security/wso2carbon.jks</Location>
        <!-- Keystore type (JKS/PKCS12 etc.)-->
        <Type>JKS</Type>
        <!-- Keystore password-->
        <Password>simplenewpassword</Password>
        <!-- Private Key alias-->
        <KeyAlias>wso2carbon</KeyAlias>
        <!-- Private Key password-->
        <KeyPassword>simplenewpassword</KeyPassword>
    </RegistryKeyStore>

    • Change identity.xml at $wso2_server/repository/conf
    <ReceivePort>${Ports.ThriftEntitlementReceivePort}</ReceivePort>
    <ClientTimeout>10000</ClientTimeout>

    Dimuthu De Lanerolle

    Java Tips .....

    To get directory names inside a particular directory ....

    private String[] getDirectoryNames(String path) {

            File fileName = new File(path);
            String[] directoryNamesArr = fileName.list(new FilenameFilter() {
                public boolean accept(File current, String name) {
                    return new File(current, name).isDirectory();
                }
            });
  "Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
            return directoryNamesArr;
    }

    To retrieve links on a web page ......

     private List<String> getLinks(String url) throws ParserException {
            Parser htmlParser = new Parser(url);
            List<String> links = new LinkedList<String>();

            NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
            for (int x = 0; x < tagNodeList.size(); x++) {
                LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
                String linkName = loopLinks.getLink();
                links.add(linkName);
            }
            return links;
     }

    To search for all files in a directory recursively from the file/s extension/s ......

    private File[] getFilesWithSpecificExtensions(String dirPath) {

            // extension list - Do not specify "."
            List<File> files = (List<File>) FileUtils.listFiles(new File(dirPath),
                    new String[]{"txt"}, true);

            File[] extensionFiles = new File[files.size()];

            Iterator<File> itFileList = files.iterator();
            int count = 0;

            while (itFileList.hasNext()) {
                File file =;
                extensionFiles[count] = file;
                count++;
            }
            return extensionFiles;
    }

    Reading files in a zip

         public static void main(String[] args) throws IOException {
            final ZipFile file = new ZipFile("Your zip file path goes here");
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements()) {
                final ZipEntry entry = entries.nextElement();
                System.out.println("Entry " + entry.getName());
                readInputStream(file.getInputStream(entry));
            }
            file.close();
         }

         private static int readInputStream(final InputStream is) throws IOException {
            final byte[] buf = new byte[8192];
            int read = 0;
            int cntRead;
            while ((cntRead =, 0, buf.length)) >= 0) {
                read += cntRead;
            }
            return read;
         }

    Converting an Object holding a long[] to Long[]

     long[] primitiveArr = (long[]) oo;
            Long[] myLongArray = new Long[primitiveArr.length];
            int i = 0;

            for (long temp : primitiveArr) {
                myLongArray[i++] = temp;
            }
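On Java 8 and later, the same boxing can be done with a stream instead of a manual loop; a small self-contained sketch:

```java
import java.util.Arrays;

public class BoxLongArray {
    public static void main(String[] args) {
        long[] primitive = {1L, 2L, 3L};

        // box each element and collect into a Long[]
        Long[] boxed =;

        System.out.println(Arrays.toString(boxed));
    }
}
```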

    Getting cookie details on HTTP clients

    import org.apache.http.impl.client.DefaultHttpClient;

    HttpClient httpClient = new DefaultHttpClient();

    ((DefaultHttpClient) httpClient).getCookieStore().getCookies(); 

     HttpPost post = new HttpPost(URL);
            post.setHeader("User-Agent", USER_AGENT);
            post.addHeader("Referer",URL );
            List<NameValuePair> urlParameters = new ArrayList<NameValuePair>();
            urlParameters.add(new BasicNameValuePair("username", "admin"));
            urlParameters.add(new BasicNameValuePair("password", "admin"));
            urlParameters.add(new BasicNameValuePair("sessionDataKey", sessionKey));
            post.setEntity(new UrlEncodedFormEntity(urlParameters));
            return httpClient.execute(post);

    Ubuntu Commands

    1. Getting the process listening to a given port (eg: port 9000) 

    sudo netstat -tapen | grep ":9000 "

    Running a bash script from a python script

    import os

    def main():
        # the script path is an example - point this at your own script
        os.system("sh /home/dimuthu/Desktop/Python/")

    if __name__ == "__main__":
        main()

    #Linux shell Script

    echo "Hello Python from Shell";

    public void scriptExecutor() throws IOException {"Start executing the script to trigger the docker build ... ");

        Process p = Runtime.getRuntime().exec(
                "python  /home/dimuthu/Desktop/Python/ ");
        BufferedReader in = new BufferedReader(new InputStreamReader(
                p.getInputStream()));;"Finished executing the script to trigger the docker build ... ");
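As a sketch of an alternative to Runtime.getRuntime().exec(), ProcessBuilder can merge stderr into stdout and capture the full output. The class name is illustrative, and the demo assumes a POSIX `echo` on the PATH; in the real flow the command would be the `python /home/dimuthu/Desktop/Python/` invocation above:

```java

import java.util.stream.Collectors;

public class ScriptRunner {

    // Run a command and capture its (merged) output as a single string
    static String run(String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command)
                .redirectErrorStream(true) // interleave stderr with stdout

        String output;
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            output = in.lines().collect(Collectors.joining("\n"));
        }
        p.waitFor();
        return output;
    }

    public static void main(String[] args) throws Exception {
        // e.g. run("python", "/home/dimuthu/Desktop/Python/");
        System.out.println(run("echo", "hello")); // prints "hello"
    }
}
```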


    Chandana Napagoda

    Lifecycle Management with WSO2 Governance Registry

    SOA lifecycle management is one of the core requirements of an enterprise governance suite. WSO2 Governance Registry 5.2.0 supports multiple lifecycle management capabilities out of the box. It also allows asset authors to extend the out-of-the-box lifecycle functionality with their own extensions, based on organizational requirements. Further, WSO2 Governance Registry supports multiple points of extensibility: Handlers, Lifecycles and customized asset UIs (RXT based) are the key types of extensions available.


    A lifecycle is defined with an SCXML-based XML element that contains:
    • A name 
    • One or more states
    • A list of check items with role based access control 
    • One or more actions that are made available based on the items that are satisfied 

    Adding a Lifecycle
    To add a new lifecycle aspect, click on the Lifecycles menu item under the Govern section of the extensions tab in the admin console. It will show you a user interface where you can add your SCXML based lifecycle configuration. A sample configuration will be available for your reference at the point of creation.

    Adding Lifecycle to Asset Type
    The default lifecycle for a given asset type is picked up from the RXT definition. When an asset is created, the lifecycle is automatically attached to the asset instance. The lifecycle attribute should be defined in the RXT definition under the artifactType element.


    Multiple Lifecycle Support

    There can be instances where a given asset goes through more than one lifecycle. For example, a given service can have a development lifecycle as well as a deployment lifecycle. These state changes cannot be visualized through a single lifecycle, and the current lifecycle state depends on the context (development or deployment) you are looking at.

    Adding Multiple Lifecycle to Asset Type
    Adding multiple lifecycles to an asset type can be done in two primary ways.

    Through the Asset Definition (available with G-Reg 5.3.0): Here, you can define multiple lifecycle names in a comma-separated manner. The lifecycle name defined first is considered the default/primary lifecycle. The multiple lifecycles specified in the asset definition (RXT configuration) are attached to the asset when it is created.

    Using Lifecycle Executor
    Using custom executor Java code, you can assign another lifecycle to the asset. Executors are one of the facilitators that help extend WSO2 G-Reg functionality, and they are associated with a Governance Registry lifecycle. A custom lifecycle executor class needs to implement the Execution interface provided by WSO2 G-Reg. You can find more details in the article below [Lifecycles and Aspects].

    Isuru Perera

    Benchmarking Java Locks with Counters

    These days I am analyzing some Java Flight Recordings taken from WSO2 API Manager performance tests, and I found that the main processing threads were in the "BLOCKED" state in some situations.

    The threads were mainly blocked due to "synchronized" methods in Java. Synchronizing methods in a critical section of request processing causes bottlenecks, which impacts throughput and overall latency.

    Then I was thinking whether we could avoid synchronizing the whole method. The main problem with synchronized is that only one thread can run that critical section. When it comes to consumer/producer scenarios, we may need to give read access to data in some threads and write access to a thread to edit data exclusively. Java provides ReadWriteLock for these kinds of scenarios.
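A counter protected by a ReentrantReadWriteLock, as discussed above, looks roughly like this. This is a sketch of the pattern, not the actual benchmark code:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockCounter {

    private final ReadWriteLock lock = new ReentrantReadWriteLock(); // non-fair by default
    private long count = 0;

    // Multiple reader threads may hold the read lock simultaneously
    public long get() {
        try {
            return count;
        } finally {
        }
    }

    // The write lock is exclusive: readers and other writers wait
    public void increment() {
        try {
            count++;
        } finally {
        }
    }

    public static void main(String[] args) {
        ReadWriteLockCounter counter = new ReadWriteLockCounter();
        counter.increment();
        System.out.println(counter.get()); // 1
    }
}
```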

    Java 8 provides another kind of lock named StampedLock. The StampedLock provides an alternative to the standard ReadWriteLock, and it also supports optimistic reads. I'm not going to compare the features and functionalities of each lock type in this blog post. You may read the StampedLock Idioms by Dr. Heinz M. Kabutz.

    I'm more interested in finding out which lock is faster when it is accessed by multiple threads. Let's write a benchmark!

    The code for benchmarks

    There is an article on "Java 8 StampedLocks vs. ReadWriteLocks and Synchronized" by Tal Weiss, who is the CEO of Takipi. In that article, there is a benchmark for Java locks with different counter implementations. I'm using that counters benchmark as the basis for my benchmark. 

    I also found another fork of the same benchmark and it has added the Optimistic Stamped version and Fair mode of ReentrantReadWriteLock. I found out about that from the slides on "Java 8 - Stamped Lock" by Haim Yadid after I got my benchmark results.

    I also looked at the article "Java Synchronization (Mutual Exclusion) Benchmark" by Baptiste Wicht.

    I'm using the popular JMH library for my benchmark. JMH has now become the standard way to write Java microbenchmarks. The benchmarks done by Tal Weiss do not use JMH.

    See JMH Resources by Nitsan Wakart for an introduction to JMH and related links to get more information about JMH.

    I used the Thread Grouping feature in JMH and Group states for benchmarking the different counter implementations.

    This is my first attempt at writing a proper microbenchmark. If there are any problems with the code, please let me know. When we talk about benchmarks, it's important to know that you should not expect the same results in a real-life application, and the code may behave differently at runtime.

    There are 11 counter implementations. I also benchmarked the fair and non-fair modes of ReentrantLock, ReentrantReadWriteLock and Semaphore.

    Class Diagram for Counter implementations

    There are altogether 14 benchmark methods!

    1. Adder - Using LongAdder. This is introduced in Java 8.
    2. Atomic - Using AtomicLong
    3. Dirty - Not using any mechanism to control concurrent access
    4. Lock Fair Mode - Using ReentrantLock
    5. Lock Non-Fair Mode - Using ReentrantLock
    6. Read Write Lock Fair Mode - Using ReentrantReadWriteLock
    7. Read Write Lock Non-Fair Mode - Using ReentrantReadWriteLock
    8. Semaphore Fair Mode - Using Semaphore
    9. Semaphore Non-Fair Mode - Using Semaphore
    10. Stamped - Using  StampedLock
    11. Optimistic Stamped - Using  StampedLock with tryOptimisticRead(); If it fails, I used the read lock. There are no more attempts to tryOptimisticRead().
    12. Synchronized - Using synchronized block with an object
    13. Synchronized Method - Using synchronized keyword in methods
    14. Volatile  - Using volatile keyword for counter variable
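The Optimistic Stamped pattern in item 11 corresponds roughly to the following sketch (not the actual benchmark code): a read first tries tryOptimisticRead() and falls back to the full read lock only if a write intervened, with no further optimistic attempts:

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticStampedCounter {

    private final StampedLock lock = new StampedLock();
    private long count = 0;

    public void increment() {
        long stamp = lock.writeLock();
        try {
            count++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public long get() {
        long stamp = lock.tryOptimisticRead(); // does not block writers
        long current = count;
        if (!lock.validate(stamp)) {
            // A write happened in between: fall back to the read lock (no retry)
            stamp = lock.readLock();
            try {
                current = count;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return current;
    }

    public static void main(String[] args) {
        OptimisticStampedCounter counter = new OptimisticStampedCounter();
        counter.increment();
        counter.increment();
        System.out.println(counter.get()); // 2
    }
}
```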

    The code is available at

    Benchmark Results

    As I mentioned, I used the Thread Grouping feature in JMH and I ran the benchmarks for different thread group distributions. There were 10 iterations after 5 warm-up iterations. I measured only the throughput. Measuring latency would be very difficult (the minimum throughput values already had around six digits).

    The thread group distribution was passed to JMH via the "-tg" argument; the first number was used for "get" (read) operations and the second number for "increment" (write) operations.

    There are many combinations we can use to run the benchmark tests. I used 12 combinations for thread group distribution and those are specified in the benchmark script.

    These 12 combinations include the scenarios tested by Tal Weiss and Baptiste Wicht.

    The benchmark was run on my Thinkpad T530 laptop.

    $ hwinfo --cpu --short
    Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3394 MHz
    Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3333 MHz
    Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3305 MHz
    Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3333 MHz
    $ free -m
                  total   used   free   shared  buff/cache  available
    Mem:          15866   4424   7761      129        3680      11204
    Swap:         18185      0  18185

    Note: I added the "Dirty" counter only to compare the results, but I omitted it from the benchmark as no one wants to keep a dirty counter in their code. :)

    I have committed all results to the Github repository and I used gnuplot for the graphs.

    It's very important to note that the graphs show the throughput for both reader and writer threads. If you need to look at individual reader and writer throughput, you can refer to the results at

    Let's see the results!

    1 Reader, 1 Writer

    2 Readers, 2 Writers

    4 Readers, 4 Writers

    5 Readers, 5 Writers

    10 Readers, 10 Writers

    16 Readers, 16 Writers

    64 Readers, 64 Writers

    128 Readers, 128 Writers

    1 Reader, 19 Writers

    19 Readers, 1 Writer

    4 Readers, 16 Writers

    16 Readers, 4 Writers


    Following are some conclusions we can draw from the above results:

    1. The Optimistic Stamped counter has much better throughput when there is high contention.
    2. The fair modes of the locks are very slow.
    3. The Adder counter has better throughput than the Atomic counter when there are more writers.
    4. When there are fewer threads, the Synchronized and Synchronized Method counters have better throughput than a Read Write Lock (in non-fair mode, which is the default).
    5. The Lock counter also has better throughput than the Read Write Lock when there are fewer threads.

    The Adder, Atomic and Volatile counter examples do not provide mutual exclusion, but they are thread-safe ways to keep a count. You may refer to the benchmark results for the other counters with Java locks if you want mutual exclusion around some logic in your code.
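As an illustration of such a plain thread-safe count, a minimal LongAdder counter shared by several writer threads can look like this (illustrative, not the benchmark code):

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderCounterDemo {

    public static void main(String[] args) throws InterruptedException {
        LongAdder adder = new LongAdder();
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            // Each writer thread increments the shared counter 1000 times
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    adder.increment();
                }
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            t.join(); // wait for all writers to finish
        }
        System.out.println(adder.sum()); // 4000
    }
}
```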

    In this benchmark, the read write lock has performed poorly. The reason could be that there are writers continuously trying to access the write lock. There may be situations that a write lock may be required less frequently and therefore this benchmark is probably not a good way to evaluate performance for read write locks.

    Please make sure that you run the benchmarks for your scenarios before making a decision based on these results. Even my benchmarks give slightly different results for each run. So, it's not a good idea to rely entirely on benchmarks and you must test the performance of the overall application.

    If there are any questions or comments on the results or regarding benchmark code, please let me know.

    Prabath Siriwardena

    Building Microservices ~ Designing Fine-grained Systems

    The book Building Microservices by Sam Newman is one of the very first on the subject. It's a must-read for anyone who talks about, designs, or builds microservices; I strongly recommend buying it! This article reviews the book while highlighting the key takeaways from each chapter.

    Jayanga Dissanayake

    Deploying artifacts to WSO2 Servers using Admin Services

    In this post I am going to show you, how to deploy artifacts on WSO2 Enterprise Service Bus [1] and WSO2 Business Process Server [2] using Admin Services [3]

    The usual practice for WSO2 artifact deployment is to enable DepSync [4] (Deployment Synchronization) and upload the artifacts via the management console of the master node, which then uploads the artifacts to the configured SVN repository and notifies the worker nodes about the new artifact via a cluster message. The worker nodes then download the new artifacts from the SVN repository and apply them.

    In this approach you have to log in to the management console and deploy the artifacts manually.

    With the increasing use of continuous integration tools, people are looking into the possibility of automating this task. A simple solution is to configure a remote file copy into the relevant directory under [WSO2_SERVER_HOME]/repository/deployment/server, but this is a very low-level solution.

    The following shows how to use Admin Services to do the same in a much easier and more manageable manner.

    NOTE: Usually all WSO2 servers accept deployables as .car files, but WSO2 BPS prefers .zip for deploying BPELs.

    For ESB:
    1. Call 'deleteApplication' in the ApplicationAdmin service to delete the
      existing application
    2. Wait for 1 min.
    3. Call 'uploadApp' in the CarbonAppUploader service
    4. Wait for 1 min.
    5. Call 'getAppData' in ApplicationAdmin; if it returns application data,
      continue. Else break.

    For BPS:
    1. Call 'listDeployedPackagesPaginated' in
      BPELPackageManagementService with page=0 and
    2. Save the information
    3. Use 'uploadService' in BPELUploader to upload the new BPEL zip
    4. Call 'listDeployedPackagesPaginated' in
      BPELPackageManagementService again at 15-second intervals for 3 mins.
    5. If the name changes (due to a version upgrade, e.g.
      HelloWorld2-4), continue. (The deployment succeeded.)
    6. If the name doesn't change for 3 mins, break. The deployment has
      issues and needs human intervention.
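The BPS polling in steps 4-6 can be sketched as a generic helper. The class and method names, attempt counts and intervals below are illustrative; in the real flow the name lookup would call 'listDeployedPackagesPaginated' through the service stub:

```java
import java.util.function.Supplier;

public class DeploymentPoller {

    // Poll until the deployed package name differs from 'oldName',
    // checking 'attempts' times with 'intervalMillis' between checks
    static boolean waitForNameChange(Supplier<String> currentName, String oldName,
                                     int attempts, long intervalMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (!oldName.equals(currentName.get())) {
                return true; // name changed, e.g. HelloWorld2-3 -> HelloWorld2-4
            }
            Thread.sleep(intervalMillis);
        }
        return false; // no change: needs human intervention
    }

    public static void main(String[] args) throws InterruptedException {
        // In the real flow, the supplier would call listDeployedPackagesPaginated
        System.out.println(waitForNameChange(() -> "HelloWorld2-4", "HelloWorld2-3", 12, 10)); // true
    }
}
```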


    Chathurika Erandi De Silva

    Simple HTTP Inbound Endpoint Sample: How to

    What is an Inbound endpoint?

    As per my understanding, an inbound endpoint is an entry point. Using this entry point, a message can be mediated directly from the transport layer to the mediation layer. Read more...

    Following is a very simple demonstration on Inbound Endpoints using WSO2 ESB

    1. Create a sequence

    2. Save in Registry

    3. Create an Inbound HTTP endpoint using the above sequence

    Now it's time to see how to send requests. As I explained at the start of this post, the inbound endpoint is an entry point for a message. As the third step above illustrates, a port is assigned to the inbound endpoint. When incoming traffic is directed to that port, the inbound endpoint receives it and passes it straight to the sequence defined with it. Here the axis2 layer is skipped.

    In the above scenario the request should be directed to http://localhost:8085/ as given below

    Then the request is directed to the inbound endpoint and directly to the sequence

    Shashika Ubhayaratne

    How to resolve "File Upload Failure" when importing a schema with a dependency in WSO2 GREG

    Schema is one of the main asset models used in WSO2 GREG and you can find more information on

    There can be situations where you want to import a schema into GREG that imports another schema (i.e., it has a dependency).

    1. Lets say you have a schema file.
    example: original.xsd
     <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="" targetNamespace="urn:listing1">
      <xsd:complexType name="Phone1">
        <xsd:sequence>
          <xsd:element name="areaCode1" type="xsd:int"/>
          <xsd:element name="exchange1" type="xsd:int"/>
          <xsd:element name="number1" type="xsd:int"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:schema>

    2. Import above schema on publisher as per the instructions given on

    3. Now, you need to import another schema which import/ has reference to previous schema
    example: link.xsd
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="" targetNamespace="urn:listing">
      <xsd:import namespace="urn:listing1" schemaLocation="original.xsd"/>
      <xsd:complexType name="Phone">
        <xsd:sequence>
          <xsd:element name="areaCode" type="xsd:int"/>
          <xsd:element name="exchange" type="xsd:int"/>
          <xsd:element name="number" type="xsd:int"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:schema>

    Issue: You may encounter an error similar to following:
    ERROR {org.wso2.carbon.registry.extensions.handlers.utils.SchemaProcessor} - Could not read the XML Schema Definition file. this.schema.needs Could not evaluate Schema Definition. This Schema contains Schema Includes that were not resolved
    Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: Could not read the XML Schema Definition file. this.schema.needs
    at org.wso2.carbon.registry.extensions.handlers.utils.SchemaProcessor.putSchemaToRegistry(
    at org.wso2.carbon.registry.extensions.handlers.XSDMediaTypeHandler.processSchemaUpload(
    at org.wso2.carbon.registry.extensions.handlers.XSDMediaTypeHandler.put(
    at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.put(
    at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.put(
    at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(
    at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(
    at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(
    at org.wso2.carbon.registry.core.session.UserRegistry.access$1000(
    at org.wso2.carbon.registry.core.session.UserRegistry$
    at org.wso2.carbon.registry.core.session.UserRegistry$
    at Method)
    at org.wso2.carbon.registry.core.session.UserRegistry.put(

    Solution 1:
    Zip all schemas together and upload

    Solution 2:
    Specify the absolute path to the dependent schema file in the import (the path below is a placeholder):
     <xsd:schema xmlns:xsd="" targetNamespace="urn:listing">
      <xsd:import namespace="urn:listing1" schemaLocation="/path/to/original.xsd"/>

    sanjeewa malalgoda

    How to disable throttling completely or partially for a given API - WSO2 API Manager 1.10 and below versions

    Sometimes a particular requirement (e.g., allowing any number of unauthenticated requests) applies to only a few APIs in your deployment. In that case we usually know those APIs at the time we design the system, so one option is to remove the throttling handler from the handler list of the given API. Then requests dispatched to that API will not perform any throttling-related operations. To do that, edit the synapse API definition manually and remove the handler from there.

    We usually do not recommend this, because if you update the API again from the publisher the handler may be added again (each update from the publisher UI replaces the current synapse configuration). But if you have only one or two APIs related to this use case, and they will not be updated frequently, this approach can work.

    Another approach is to update the velocity template so that it does not add the throttling handler for a few predefined APIs. In that case, even if you update the API from the publisher, the deployer will still keep the throttling handler out of the synapse configuration. To do this we need to know the list of APIs that do not require throttling. Note that throttling is then disabled for all resources in those APIs.

    Sometimes you may wonder what the impact is of having a very large max request count for the unauthenticated tier.
    In terms of performance, throttling does not add a huge delay to the request; throttling alone takes less than 10% of the complete gateway processing time. So we can confirm that a large max request count for the unauthenticated tier will not cause a major performance issue. If you don't need to disable throttling for the entire API and only need to allow any number of unauthenticated requests at tier level, that is the only option we have now.

    Please consider above facts and see what is the best solution for your use case. If you need further assistance or any clarifications please let us know. We would like to discuss further and help you to find the best possible solution for your use case.

    Chathurika Erandi De Silva

    Encoded context to URI using REST_URL_POSTFIX with query parameters

    WSO2 ESB provides a property called REST_URL_POSTFIX that can be used to append context to the target endpoint when invoking a REST endpoint.

    With the upcoming ESB 5.0.0 release, the value of REST_URL_POSTFIX can contain non-standard special characters such as spaces, and these will be encoded when sent to the backend. This provides versatility, because we can't expect every resource path to be free of non-standard special characters.

    In order to demonstrate this, I have a REST service with the following context path

    user/users address/address new/2016.05

    You can see this contains standard as well as non-standard characters.

    Furthermore, I am sending the values needed for service execution as query parameters, and while appending the above context to the target endpoint, I need to send the query parameters as well.

    The request is as follows


    In order to achieve my requirement I have created the following sequence

    Afterwards I have created a simple API in WSO2 ESB  and  used the above sequence as below

    When invoked, the following log entry is visible in the console (wire logs should be enabled), indicating the mission was accomplished.

    [2016-05-20 15:13:42,549] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /SampleRestService/restservice/TestUserService/user/users%20address/address%20new/2016.05?id=1&name=jane&address=wso2 HTTP/1.1[\r][\n]"

    Imesh Gunaratne

    Edgar, this is very interesting!

    Great work! A quick question, did you also deploy the metrics dashboard on Heroku?

    Evanthika Amarasiri

    [WSO2 Governance Registry] - How to analyse the history of registry resources

    Assume that you are working on a setup where you need to analyse the history of registry resources. One might want to know what type of operations have been performed on a resource throughout its lifetime. This is possible with a simple DB query.

    select * from REG_LOG where REG_PATH='resource_name';

    i.e. select * from REG_LOG where REG_PATH='/_system/governance/apimgt/statistics/ga-config.xml';

    As an example, assume I want to find out the actions taken on the resource ga-config.xml. So when I query the table REG_LOG, below is the result I would receive.

    When you look at the above result set, you will notice that the REG_ACTION column shows different values in each row. The actions that these values represent are configured in the class linked at [1]. For example, REG_ACTION=10 means that the resource has been moved from its current location. REG_ACTION=7 means that it has been deleted from the system. Likewise, when you go through [1], you can find the rest of the actions that can be taken on these registry resources.

    Therefore, as explained above, by going through the REG_LOG table of the registry database, you can audit the actions taken on each and every resource.

    [1] -

    Chandana Napagoda

    G-Reg and ESB integration scenarios for Governance

    WSO2 Enterprise Service Bus (ESB) employs WSO2 Governance Registry for storing configuration elements and resources such as WSDLs, policies, service metadata, etc. By default, WSO2 ESB ships with an embedded Registry, which is entirely based on WSO2 Governance Registry (G-Reg). Further, based on your requirements, you can connect to a remotely running WSO2 Governance Registry using a remote JDBC connection, known as a 'JDBC registry mount'.

    Beyond the registry/repository aspect of WSO2 G-Reg, its primary use cases are design-time governance and runtime governance with seamless lifecycle management, known as the governance aspect of WSO2 G-Reg. This governance aspect provides more flexibility for integration with WSO2 ESB.

    When integrating WSO2 ESB with WSO2 G-Reg in the governance aspect, there are three options available. They are:

    1). Share Registry space with both ESB and G-Reg
    2). Use G-Reg to push artifacts into ESB node
    3). ESB pulls artifacts from the G-Reg when needed

    Let's go through the advantages and disadvantages of each option. Here we consider a scenario where metadata corresponding to ESB artifacts, such as endpoints, is stored in G-Reg as asset types. Each asset type has its own lifecycle (e.g., the ESB Endpoint RXT has its own lifecycle). With a G-Reg lifecycle transition, synapse configurations (e.g., endpoints) are created; these become the runtime configurations of the ESB.

    Share Registry space with both ESB and G-Reg

    The embedded Registry of every WSO2 product consists of three partitions: local, config and governance.

    Local Partition : Used to store configuration and runtime data that is local to the server.
    Configuration Partition : Used to store product-specific configurations. This partition can be shared across multiple instances of the same product.
    Governance Partition : Used to store configuration and data that are shared across the whole platform. This partition typically includes services, service descriptions, endpoints and data sources
    How the integration should work:
    When sharing registry spaces between the ESB and G-Reg products, we share the governance partition only, using JDBC. When a G-Reg lifecycle transition happens on the ESB endpoint RXT, it creates the ESB synapse endpoint configuration and copies it into the relevant registry location using a Copy Executor. The ESB can then retrieve that endpoint synapse configuration from the shared registry when required.

    Advantages:
    • Easy to configure
    • Reduced amount of custom code implementation

    Disadvantages:
    • If servers are deployed across data centers, JDBC connections will be created between data centers (possibly through WAN or public networks).
    • With the number of environments, there will be many database mounts.
    • The ESB registry space will be exposed via G-Reg.

    Use G-Reg to push artifacts into the ESB node
    How the integration should work:
    In this pattern, G-Reg creates synapse endpoints and pushes them into the relevant ESB setup (e.g., Dev/QA/Prod) using Remote Registry operations. After G-Reg pushes the appropriate synapse configuration into the ESB, APIs or services can consume it.

    [Image: G-Reg push architecture]

    Advantages:
    • Provides more flexibility from the G-Reg side to manage ESB assets
    • Can plug in multiple ESB environments on the go
    • Can restrict ESB API/Service invocation until the G-Reg lifecycle operation is completed

    ESB pulls artifacts from the G-Reg

    How the integration should work:

    In this pattern, when a lifecycle transition happens, G-Reg creates synapse-level endpoints in the relevant registry location.

    When an API or service invocation happens, the ESB first looks up the endpoint in its own registry. If it is not available, it pulls the endpoint from G-Reg using Remote Registry operations. Here the ESB-side endpoint lookup has to be implemented as a custom implementation.

    [Image: ESB pull architecture]

    Advantages:
    • User might be able to deploy the ESB API/Service before the G-Reg lifecycle transition happens.

    Disadvantages:
    • The first API/Service call is delayed until the remote API call completes
    • The first API/Service call fails if the G-Reg lifecycle transition is not completed
    • Less control compared to options 1 and 2

    Chanaka Fernando

    WSO2 ESB 5.0.0 Beta Released

    The WSO2 team is happy to announce the beta release of WSO2 ESB 5.0.0. This version of the ESB has major improvements to its usability in real production deployments as well as development environments. Here are the main features of ESB 5.0.0.

    The mediation debugger provides the capability to debug mediation flows from the WSO2 Developer Studio tooling platform. It allows users to view/edit/delete properties and the payload of the messages passing through each and every mediator.
    You can find more information about this feature in the post below.
    Analytics for WSO2 ESB 5.0.0 (Beta) —

    Malintha Adikari

    K-Means clustering with Scikit-learn

    K-Means clustering is a popular unsupervised classification algorithm. In simple terms, we have an unlabeled dataset: a dataset without any clue about how to categorize each row. Below is an example of a few rows from an unlabeled dataset about crime data in the USA. There is one row for each state and a set of features related to crime information. One thing we can do with this data is find similarities between the states; in other words, we can prepare a few buckets and put states into those buckets based on similarities in their crime information.








    Now let's discuss how we can implement K-Means cluster for our dataset with Scikit-learn. You can download USA crime dataset from my github location.

    Import KMeans from Scikit-learn.

    from sklearn.cluster import KMeans

    Load your data file into a Pandas dataframe

    import pandas as pd

    # The original post used a helper (Utils.get_dataframe); plain pandas works too
    df = pd.read_csv("crime_data.csv")

    Create a KMeans model, providing the required number of clusters. Here I have set the number of clusters to 5.

    KMeans_model = KMeans(n_clusters=5, random_state=1)

    Refine your data by removing non-numeric data, unimportant features, etc.

    df.drop(['crime$cluster'], inplace=True, axis=1)
    df.rename(columns={df.columns[0]: 'State'}, inplace=True)

    Select only numeric data in your dataset.

    numeric_columns = df._get_numeric_data()

    Train the KMeans clustering model

    # Fit the model on the numeric features

    Now you can see the label of each row in your training dataset.

    labels = KMeans_model.labels_

    Predict a new state's crime cluster as follows

    print(KMeans_model.predict([[15, 236, 58, 21.2]]))

    Malintha Adikari

    Visualization in Machine Learning

    Scatter Plots

    We can visualize the correlations (relationships between two variables) between features, or between features and the classes, using scatter plots. In a scatter plot we can use an n-dimensional space to visualize correlations between n variables. We plot the data points there and can then use the output to determine correlations between the variables. Following is a sample 3-D scatter plot.

    Chathurika Erandi De Silva

    Statistics and ESB -> ESB Analytics Server: Message Tracing

    This is the second post on the ESB Analytics server, and I hope you have read the previous one.

    When the ESB receives a request, it is taken in as a message. This message consists of a header and a body. The Analytics server provides a comprehensive way of viewing the message the ESB works with throughout the cycle; this is called tracing.

    Normally the ESB takes in a request, mediates it through some logic and then sends it to the backend. The response from the backend is again mediated through some logic and then returned to the client. The Analytics server graphically illustrates this flow, so the message flow can be easily viewed and understood.

    Sample Message Flow

    Further, it provides a graphical view of message tracing by providing details on the message passed through the ESB. Transport properties and message context properties are shown for the respective mediators in the flow.

    Sample Mediator Properties

    Basically, the capability of viewing the message flow and tracing it graphically is provided, which is user friendly and simple.

    sanjeewa malalgoda

    How to add a SOAP and WSDL based API to WSO2 API Manager via REST API

    If you are using the old jaggery API, you can add an API the same way the jaggery application does. To do that, follow the steps below. Since exactly these 3 steps (design > implement > manage) are used only by jaggery applications, they are not listed in the API documents, so I have listed them here for your reference. One thing to note is that we cannot add a SOAP endpoint with swagger content (SOAP APIs cannot be defined with swagger content).

    Steps to create a SOAP API with a WSDL:

    Log in and obtain a session.
    curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin'

    Design API
    curl -F name="test-api" -F version="1.0" -F provider="admin" -F context="/test-apicontext" -F visibility="public" -F roles="" -F wsdl="" -F apiThumb="" -F description="" -F tags="testtag" -F action="design" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & Application User","throttling_tier":"Unlimited","method":"GET","parameters":[]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

    Implement API
    curl -F implementation_methods="endpoint" -F endpoint_type="http" -F endpoint_config='{"production_endpoints":{"url":"http://appserver/resource/ycrurlprod","config":null},"endpoint_type":"http"}' -F production_endpoints="http://appserver/resource/ycrurlprod" -F sandbox_endpoints="" -F endpointType="nonsecured" -F epUsername="" -F epPassword="" -F wsdl="" -F wadl="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="implement" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & Application User","throttling_tier":"Unlimited","method":"GET","parameters":[]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

    Manage API.
    curl -F default_version_checked="" -F tier="Unlimited" -F transport_http="http" -F transport_https="https" -F inSequence="none" -F outSequence="none" -F faultSequence="none" -F responseCache="disabled" -F cacheTimeout="300" -F subscriptions="current_tenant" -F tenants="" -F bizOwner="" -F bizOwnerMail="" -F techOwner="" -F techOwnerMail="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="manage" -F swagger='{"paths":{"/*":{"post":{"responses":{"201":{"description":"Created"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"},"get":{"responses":{"200":{"description":"OK"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"},"delete":{"responses":{"200":{"description":"OK"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"}}},"swagger":"2.0","info":{"title":"testAPI","version":"1.0.0"}}' -F outSeq="" -F faultSeq="json_fault" -F tiersCollection="Unlimited" -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag
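    The three publisher calls above differ only in their "action" form field plus step-specific parameters; all of them POST to the same add.jag endpoint using the session cookie obtained from the login call. A minimal sketch of that shared skeleton (build_step is a hypothetical helper, not part of the product):

    ```python
    # Endpoint shared by the design/implement/manage steps (from the curl commands above)
    ADD_JAG = "https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag"

    def build_step(action, name="test-api", version="1.0", provider="admin", **extra):
        """Assemble the common -F form fields for one publisher step."""
        fields = {"action": action, "name": name, "version": version, "provider": provider}
        fields.update(extra)
        return fields

    # design -> implement -> manage, in order; each dict would be POSTed as
    # multipart form data with the login session cookie attached.
    steps = [
        build_step("design", context="/test-apicontext", visibility="public"),
        build_step("implement", implementation_methods="endpoint", endpoint_type="http"),
        build_step("manage", tier="Unlimited", tiersCollection="Unlimited"),
    ]
    ```

    Keeping the shared fields in one helper makes it obvious that the three calls form a single stateful flow against one endpoint, distinguished only by the action parameter.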

    sanjeewa malalgodaWSO2 API Manager how to change some resource stored in registry for each tenant load.

    As you all know, in API Manager we store tiers and a lot of other data in the registry. In some scenarios we may need to modify and update that data before tenant users use it. In such cases we can write a tenant service creator listener and do what we need. In this article we will see how we can change the tiers.xml file before the tenant is loaded into the system. Please note that with this change we cannot change tier values from the UI, as this code replaces them on each tenant load.

    Java code.

    import org.apache.axis2.context.ConfigurationContext;
    import org.apache.commons.io.IOUtils;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.wso2.carbon.context.PrivilegedCarbonContext;
    import org.wso2.carbon.registry.core.exceptions.RegistryException;
    import org.wso2.carbon.registry.core.session.UserRegistry;
    import org.wso2.carbon.utils.AbstractAxis2ConfigurationContextObserver;
    import org.wso2.carbon.registry.core.Resource;
    import org.wso2.carbon.apimgt.impl.internal.ServiceReferenceHolder;
    import org.wso2.carbon.apimgt.impl.APIConstants;

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class CustomTenantServiceCreator extends AbstractAxis2ConfigurationContextObserver {

        private static final Log log = LogFactory.getLog(CustomTenantServiceCreator.class);

        public void createdConfigurationContext(ConfigurationContext configurationContext) {
            try {
                int tenantId = PrivilegedCarbonContext.getThreadLocalCarbonContext().getTenantId();
                UserRegistry registry = ServiceReferenceHolder.getInstance().getRegistryService()
                        .getGovernanceSystemRegistry(tenantId);
                // Read the replacement tiers.xml; the path below is a placeholder -
                // point it at the file you want loaded for every tenant
                InputStream inputStream = new FileInputStream("/path/to/custom-tiers.xml");
                byte[] data = IOUtils.toByteArray(inputStream);
                Resource resource = registry.newResource();
                resource.setContent(data);
                registry.put(APIConstants.API_TIER_LOCATION, resource);
            } catch (RegistryException e) {
                log.error("Error while updating tiers.xml in the tenant registry", e);
            } catch (IOException e) {
                log.error("Error while reading the custom tiers.xml", e);
            }
        }
    }
    The listener is registered through a simple OSGi declarative services component:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.osgi.framework.BundleContext;
    import org.osgi.service.component.ComponentContext;
    import org.wso2.carbon.utils.Axis2ConfigurationContextObserver;
    import org.wso2.carbon.apimgt.impl.APIManagerConfigurationService;
    import org.wso2.carbon.apimgt.impl.APIManagerConfiguration;

    /**
     * @scr.component name="" immediate="true"
     * @scr.reference name="api.manager.config.service"
     *                interface="org.wso2.carbon.apimgt.impl.APIManagerConfigurationService"
     *                cardinality="1..1"
     *                policy="dynamic" bind="setAPIManagerConfigurationService"
     *                unbind="unsetAPIManagerConfigurationService"
     */
    public class CustomObserverRegistryComponent {

        private static final Log log = LogFactory.getLog(CustomObserverRegistryComponent.class);
        public static final String TOPICS_ROOT = "forumtopics";
        private static APIManagerConfiguration configuration = null;

        protected void activate(ComponentContext componentContext) throws Exception {
            try {
                if (log.isDebugEnabled()) {
                    log.debug("Forum Registry Component Activated");
                }
                CustomTenantServiceCreator tenantServiceCreator = new CustomTenantServiceCreator();
                BundleContext bundleContext = componentContext.getBundleContext();
                bundleContext.registerService(Axis2ConfigurationContextObserver.class.getName(),
                        tenantServiceCreator, null);
            } catch (Exception e) {
                log.error("Could not activate Forum Registry Component " + e.getMessage());
                throw e;
            }
        }

        protected void setAPIManagerConfigurationService(APIManagerConfigurationService amcService) {
            log.debug("API manager configuration service bound to the API host objects");
            configuration = amcService.getAPIManagerConfiguration();
        }

        protected void unsetAPIManagerConfigurationService(APIManagerConfigurationService amcService) {
            log.debug("API manager configuration service unbound from the API host objects");
            configuration = null;
        }
    }

    Complete source code for project

    Once the tenant is loaded you will see the updated values as follows.

    Prabath SiriwardenaEnabling FIDO U2F Multi-Factor Authentication for the AWS Management Console with the WSO2 Identity Server

    This tutorial on Medium explains how to enable authentication for the AWS Management Console against the corporate LDAP server and then enable multi-factor authentication (MFA) with FIDO. FIDO is fast becoming the de facto standard for MFA, backed by top players in the industry including Google, PayPal, Microsoft, Alibaba, Mozilla, eBay and many more.

    Malintha AdikariModel Evaluation With Cross Validation

    We can use cross validation to evaluate the prediction accuracy of a model. We keep a subset of our dataset aside, unused for training, so it is new, unseen data for the model once we have trained it with the rest. Then we use that held-out subset to evaluate the accuracy of the trained model. In other words, we first partition the data into a training dataset and a test dataset, train the model with the training dataset, and finally evaluate the model with the test dataset. This process is called "cross validation".

    In this blog post I would like to demonstrate how we can cross validate a decision tree classification model built using scikit-learn + Pandas. Please visit the decision-tree-classification-using-scikit-learn post if you haven't created your classification model yet. As a recap, at this point we have a decision tree model which predicts whether a given person on the Titanic is going to survive the tragedy or die in the cold, dark sea :(.

    In the previous blog post we used the entire Titanic dataset for training the model. Let's see how we can use only 80% of the data for training and the remaining 20% for evaluation.

    # separating 80% data for training
    train = df.sample(frac=0.8, random_state=1)

    # rest 20% data for evaluation purpose
    test = df.loc[~df.index.isin(train.index)]
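    To see why this pair of lines yields a clean 80/20 partition, here is a pure-Python sketch of the same idea (no Pandas needed; train_test_split_indices is an illustrative helper): sampling a random 80% of the row indices and taking the complement guarantees the two sets are disjoint and together cover every row.

    ```python
    import random

    def train_test_split_indices(n_rows, frac=0.8, seed=1):
        """Mimic df.sample(frac=0.8, random_state=1) plus the
        ~df.index.isin(train.index) complement on plain row indices."""
        rng = random.Random(seed)
        train_idx = set(rng.sample(range(n_rows), int(n_rows * frac)))
        test_idx = [i for i in range(n_rows) if i not in train_idx]
        return train_idx, test_idx

    train_idx, test_idx = train_test_split_indices(10)
    # 8 rows end up in training and 2 in testing; no row appears in both sets.
    ```

    Fixing the seed (like random_state=1 above) makes the split reproducible, so the evaluation numbers can be compared across runs.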

    Then we train the model as usual, but using only the training dataset:

    dt = DecisionTreeClassifier(min_samples_split=20, random_state=9)
    dt.fit(train[features], train["Survived"])

    Then we predict the result for the remaining 20% of the data.

    predictions = dt.predict(test[features])

    Then we can calculate the Mean Squared Error of the predictions vs. the actual values as a measure of the prediction accuracy of the trained model.

    We can use scikit-learn's built-in mean squared error function for this. First import it into the current module.

    from sklearn.metrics import mean_squared_error

    Then we can do the calculation as follows:

    mse = mean_squared_error(predictions, test["Survived"])

    You can play with the data partition ratio and the features and observe the variation of the Mean Squared Error with those parameters.
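    For 0/1 labels such as Survived, the mean squared error reduces to the fraction of wrong predictions, since each miss contributes a squared error of exactly 1. A small pure-Python sketch of the computation (mean_squared_error_manual is an illustrative stand-in for the scikit-learn function):

    ```python
    def mean_squared_error_manual(y_true, y_pred):
        """Average of squared differences; for binary labels this equals
        the misclassification rate."""
        return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

    actual = [1, 0, 1, 1, 0]
    predicted = [1, 0, 0, 1, 1]
    mse = mean_squared_error_manual(actual, predicted)  # 2 wrong out of 5 -> 0.4
    ```

    So an MSE of 0.4 here means 40% of the test passengers were misclassified; lower is better.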

    sanjeewa malalgodaHow to avoid issue in default APIs in WSO2 API Manager 1.10

    In API Manager 1.10 you may see an issue in mapping resources when you create another version of an API and make it the default version. In this post let's see how we can overcome that issue.

    Let's say we have a resource with a path parameter like this.

    Then we will create another API version and make it the default.
    As you can see from the XML generated in the synapse configuration for the API, the resource is created correctly in admin--resource_v1.0.xml:

    <resource methods="GET" uri-template="/resource/{resourceId} " faultSequence="fault">

    But if you check the newly created default version, you will see the following.

    <resource methods="GET"

    Therefore, we cannot call that resource of the API via the gateway using the API's default version.
    Assume we have an API named testAPI with 3 versions: 1.0.0, 2.0.0 and 3.0.0.
    By defining a default API, what we do is just create a proxy for the default version: a default proxy which can accept any URL pattern.
    For that we recommend using the /* pattern. The default API only mediates requests to the correct version. Say the default version is 2.0.0; then the default API will forward requests to that version. So you can keep all your resources in version 2.0.0, where they will be processed, and you can have any complex URL pattern there.
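    Conceptually, the default API just rewrites the un-versioned context onto whichever version is marked as default; a toy model of that routing (route_default is purely illustrative, not gateway code):

    ```python
    def route_default(path, api_context="/testAPI", default_version="2.0.0"):
        """Rewrite an un-versioned request path onto the default version's context."""
        if not path.startswith(api_context):
            raise ValueError("path does not belong to this API")
        rest = path[len(api_context):]
        return api_context + "/" + default_version + rest

    # Any resource pattern under the API context is forwarded unchanged:
    forwarded = route_default("/testAPI/resource/123")  # -> "/testAPI/2.0.0/resource/123"
    ```

    This is why the default API itself only needs a catch-all resource: the versioned API it forwards to already holds the real resource definitions.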

    So for the default API, a resource definition which matches any request is sufficient. Here is the configuration to match it.

      <resource methods="POST PATCH GET DELETE HEAD PUT" url-mapping="/*" faultSequence="fault">

    To confirm this you can look at the content of the default API. There you will see it points to the actual API with the given version, so all your resources remain in the versioned API as they are.

    Here I have listed the complete velocity template file for the default API.
    Please copy it and replace the wso2am-1.10.0/repository/resources/api_templates/default_api_template.xml file.

    <api xmlns="http://ws.apache.org/ns/synapse" name="$!apiName" context="$!apiContext" transports="$!transport">
       <resource methods="POST PATCH GET DELETE HEAD PUT" url-mapping="/*" faultSequence="fault">
          <inSequence>
             <property name="isDefault" expression="$trp:WSO2_AM_API_DEFAULT_VERSION"/>
             <filter source="$ctx:isDefault" regex="true">
                <then>
                    <log level="custom">
                        <property name="STATUS" value="Faulty invoking through default API.Dropping message to avoid recursion.."/>
                    </log>
                    <payloadFactory media-type="xml">
                        <format>
                            <am:fault xmlns:am="http://wso2.org/apimanager">
                                <am:type>Status report</am:type>
                                <am:message>Internal Server Error</am:message>
                                <am:description>Faulty invoking through default API</am:description>
                            </am:fault>
                        </format>
                    </payloadFactory>
                    <property name="HTTP_SC" value="500" scope="axis2"/>
                    <property name="RESPONSE" value="true"/>
                    <header name="To" action="remove"/>
                    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
                    <property name="ContentType" scope="axis2" action="remove"/>
                    <property name="Authorization" scope="transport" action="remove"/>
                    <property name="Host" scope="transport" action="remove"/>
                    <property name="Accept" scope="transport" action="remove"/>
                    <send/>
                </then>
                <else>
                    <header name="WSO2_AM_API_DEFAULT_VERSION" scope="transport" value="true"/>
                    #if( $transport == "https" )
                    <property name="uri.var.portnum" expression="get-property('https.nio.port')"/>
                    #else
                    <property name="uri.var.portnum" expression="get-property('http.nio.port')"/>
                    #end
                    <send>
                        <endpoint>
                        #if( $transport == "https" )
                            <http uri-template="https://localhost:{uri.var.portnum}/$!{fwdApiContext}"/>
                        #else
                            <http uri-template="http://localhost:{uri.var.portnum}/$!{fwdApiContext}"/>
                        #end
                        </endpoint>
                    </send>
                </else>
             </filter>
          </inSequence>
          <outSequence>
             <send/>
          </outSequence>
       </resource>
       <handlers>
           <handler class="org.wso2.carbon.apimgt.gateway.handlers.common.SynapsePropertiesHandler"/>
       </handlers>
    </api>

    Evanthika AmarasiriHow to solve the famous token regeneration issue in an API-M cluster

    In an API Manager clustered environment (in my case, a publisher, a store, two gateway nodes and two key manager nodes fronted by a WSO2 ELB 2.1.1), if you come across an error while regenerating tokens saying Error in getting new accessToken, with an exception like the one below at the Key Manager node, then it is due to a configuration issue.

    TID: [0] [AM] [2014-09-19 05:41:28,321]  INFO {} -  'Administrator@carbon.super [-1234]' logged in at [2014-09-19 05:41:28,321-0400] {}
    TID: [0] [AM] [2014-09-19 05:41:28,537] ERROR {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService} -  Error in getting new accessToken {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService}
    TID: [0] [AM] [2014-09-19 05:41:28,538] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} -  Error in getting new accessToken {org.apache.axis2.rpc.receivers.RPCMessageReceiver}
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(
        at java.lang.reflect.Method.invoke(
        at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(
        at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(
        at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(
        at org.apache.axis2.receivers.AbstractMessageReceiver.receive(
        at org.apache.axis2.engine.AxisEngine.receive(
        at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(
        at org.apache.axis2.transport.http.AxisServlet.doPost(
        at org.wso2.carbon.core.transports.CarbonServlet.doPost(
        at javax.servlet.http.HttpServlet.service(
        at javax.servlet.http.HttpServlet.service(
        at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(
        at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(
        at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(
        at javax.servlet.http.HttpServlet.service(
        at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(
        at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(
        at org.apache.catalina.core.StandardWrapperValve.invoke(
        at org.apache.catalina.core.StandardContextValve.invoke(
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(
        at org.apache.catalina.core.StandardHostValve.invoke(
        at org.apache.catalina.valves.ErrorReportValve.invoke(
        at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(
        at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(
        at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(
        at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(
        at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(
        at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(
        at org.apache.catalina.valves.AccessLogValve.invoke(
        at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(
        at org.apache.catalina.core.StandardEngineValve.invoke(
        at org.apache.catalina.connector.CoyoteAdapter.service(
        at org.apache.coyote.http11.AbstractHttp11Processor.process(
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$
    Caused by:
    org.wso2.carbon.apimgt.keymgt.APIKeyMgtException: Error in getting new accessToken
        at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(
        ... 45 more
    Caused by:
    java.lang.RuntimeException: Token revoke failed : HTTP error code : 404
        at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(
        ... 45 more

    This is what you have to do to solve this issue.

    1. In your Gateway nodes, you need to change the host and the port values of the below APIs that reside under $APIM_HOME/repository/deployment/server/synapse-configs/default/api:
       _TokenAPI_.xml
       _AuthorizeAPI_.xml
       _RevokeAPI_.xml
    2. If you get an HTTP 302 error at the Key Manager side while regenerating the token, check the RevokeURL in the api-manager.xml of the Key Manager node to see whether it is pointing to the NIO port of the Gateway node.

    sanjeewa malalgodaHow to change API Manager's authentication failure message to match the message's content type.

    The error response sent from the WSO2 gateway is usually in XML format. If needed, we can change this behavior: there is an extension point to customize error message generation. For auth failures and throttling failures we have handlers that generate the messages.

    For auth failures the following sequence will be used.

    The handler can be updated to dynamically use the content type and return the response in the correct format.
    Once you change it, it will work for both XML and JSON calls.

    Change the file to add a dynamic lookup of the message contentType, i.e.:

    <sequence name="auth_failure_handler" xmlns="http://ws.apache.org/ns/synapse">

    <property name="error_message_type" value="application/json"/>
    <sequence key="cors_request_handler"/>

    <sequence name="auth_failure_handler" xmlns="http://ws.apache.org/ns/synapse">
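    The dynamic lookup described above amounts to plain content negotiation: return JSON when the incoming message is JSON, XML otherwise. A toy sketch of that decision (pick_error_media_type is an illustrative helper, not an API Manager function):

    ```python
    def pick_error_media_type(content_type_header):
        """Choose the error payload media type from the incoming message's content type."""
        if content_type_header and "application/json" in content_type_header.lower():
            return "application/json"
        return "application/xml"  # the gateway's usual default error format

    # A JSON client gets a JSON fault; everything else falls back to XML.
    media_type = pick_error_media_type("application/json")
    ```

    In the sequence itself the same decision would be made by reading the message's content type property and setting error_message_type accordingly, instead of hard-coding one value.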