WSO2 Venus

Evanthika Amarasiri: How to list admin services used by WSO2 Carbon based servers

WSO2 products expose admin services for performing various management tasks, but there is no documentation listing the services that are provided. To list all these admin services, start the server with the -DosgiConsole option and type the command listAdminServices in the OSGi console.
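For example, on Linux the interaction looks roughly like this (a sketch; the exact console output differs by product and version):

sh bin/wso2server.sh -DosgiConsole
# once the server finishes starting, press Enter to get the osgi> prompt
osgi> listAdminServices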

This is clearly explained in the stack overflow question at [1].

[1] - http://stackoverflow.com/questions/21219907/list-admin-services-used-by-wso2-carbon-based-servers

Sanjeewa Malalgoda: How to recover an application if the application owner is deleted or blocked in the API Store - WSO2 API Manager



From API Manager 1.9.0 onward, we can address this using the subscription sharing feature. All users within the same group can add, update and remove API subscriptions, delete applications, etc.
So basically, any user in the group can do anything to the application. If one user leaves the organization, the others in the group can act as the application owner.

Now let's see how we can recover an application in API Manager 1.8.0 and earlier versions. Note that you can use the same approach in API Manager 1.9.0 and later versions if needed.

Create a new user from the management console, or pick any existing user to whom application ownership should be transferred.
Assign any application-specific roles to that user (if such roles exist).




Then update the CREATED_BY and SUBSCRIBER_ID columns to the new user's values, as follows.
mysql> select * from AM_APPLICATION;
+----------------+--------------------+---------------+------------------+--------------+-------------+--------------------+----------+------------+---------------------+------------+---------------------+
| APPLICATION_ID | NAME               | SUBSCRIBER_ID | APPLICATION_TIER | CALLBACK_URL | DESCRIPTION | APPLICATION_STATUS | GROUP_ID | CREATED_BY | CREATED_TIME        | UPDATED_BY | UPDATED_TIME        |
+----------------+--------------------+---------------+------------------+--------------+-------------+--------------------+----------+------------+---------------------+------------+---------------------+
|              1 | DefaultApplication |             1 | Unlimited        | NULL         | NULL        | APPROVED           |          | admin      | 2016-06-24 13:07:40 | NULL       | 0000-00-00 00:00:00 |
|              2 | test-admin         |             1 | Unlimited        |              |             | APPROVED           |          | admin      | 2016-06-28 17:44:44 | NULL       | 0000-00-00 00:00:00 |
|              3 | DefaultApplication |             2 | Unlimited        | NULL         | NULL        | APPROVED           |          | sanjeewa   | 2016-06-28 17:46:49 | NULL       | 0000-00-00 00:00:00 |
+----------------+--------------------+---------------+------------------+--------------+-------------+--------------------+----------+------------+---------------------+------------+---------------------+


mysql> update AM_APPLICATION set CREATED_BY ='sanjeewa' where NAME = 'test-admin';
mysql> update AM_APPLICATION set SUBSCRIBER_ID ='2' where NAME = 'test-admin';

Now the table will look like the following:
mysql> select * from AM_APPLICATION;
+----------------+--------------------+---------------+------------------+--------------+-------------+--------------------+----------+------------+---------------------+------------+---------------------+
| APPLICATION_ID | NAME               | SUBSCRIBER_ID | APPLICATION_TIER | CALLBACK_URL | DESCRIPTION | APPLICATION_STATUS | GROUP_ID | CREATED_BY | CREATED_TIME        | UPDATED_BY | UPDATED_TIME        |
+----------------+--------------------+---------------+------------------+--------------+-------------+--------------------+----------+------------+---------------------+------------+---------------------+
|              1 | DefaultApplication |             1 | Unlimited        | NULL         | NULL        | APPROVED           |          | admin      | 2016-06-24 13:07:40 | NULL       | 0000-00-00 00:00:00 |
|              2 | test-admin         |             2 | Unlimited        |              |             | APPROVED           |          | sanjeewa   | 2016-06-28 17:51:03 | NULL       | 0000-00-00 00:00:00 |
|              3 | DefaultApplication |             2 | Unlimited        | NULL         | NULL        | APPROVED           |          | sanjeewa   | 2016-06-28 17:46:49 | NULL       | 0000-00-00 00:00:00 |
+----------------+--------------------+---------------+------------------+--------------+-------------+--------------------+----------+------------+---------------------+------------+---------------------+


Then go to the API Store and log in as the new user.
You can generate new access tokens and add new API subscriptions to the application.

Sanjeewa Malalgoda: How to enforce users to add only HTTPS callback URLs when creating an application in the API Store

Even though it is not required, TLS is strongly recommended for client applications. Since it is not mandated by the spec, we let users add both HTTP and HTTPS URLs. But if you need to let users add only HTTPS URLs, there is a solution for that as well. Since all users come to the API Store to create applications, we can restrict them to adding only HTTPS URLs there. You can do this with the following steps.

(1) Navigate to "/repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes" directory.
(2) Create a directory with the name of your subtheme. For example "test".
(3) Copy the "/wso2am-1.10.0/repository/deployment/server/jaggeryapps/store/site/themes/fancy/templates/application/application-add/js/application-add.js" to the new subtheme location "repository/deployment/server/jaggeryapps/store/site/themes/fancy/subthemes/test/templates/application/application-add/js/application-add.js".
(4) Update the $("#appAddForm").validate call in the copied file as follows.

You should replace,

$("#appAddForm").validate({
    submitHandler: function(form) {
        applicationAdd();
    }
});

With the following:

$("#appAddForm").validate({
    submitHandler: function(form) {
        var callbackURLTest = $("#callback-url").val();
        var pattern = /^((https):\/\/)/;
        if (pattern.test(callbackURLTest)) {
            applicationAdd();
        } else {
            window.alert("Please enter a valid callback URL. It is recommended to use an HTTPS URL.");
        }
    }
});

(5) Then edit the "/repository/deployment/server/jaggeryapps/store/site/conf/site.json" file as below in order to make the new subtheme the default theme.
"theme" :
{ "base" : "fancy", "subtheme" : "test" }

Then users will be able to add only HTTPS URLs when they create applications in the API Store.

Krishantha Samaraweera: Test Automation Architecture


The following three repositories are used to facilitate test automation for WSO2 products. Each component is described later in this post.
  1. Test Automation Framework (git repo link - carbon-platform-integration)
  2. Carbon Automation Test Utils (git repo link - carbon-platform-integration-utils)
  3. Collection of all test suites and a test runner (git repo link - carbon-platform-automated-test-suite)

Test Automation Framework components

[Image: Architecture - AutomationFramework (2).png - Test Automation Framework architecture]

Components related to TestNG are marked in dark red.

Automation framework engine

  • Automation Context Builder - Processes the automation.xml provided with the test module and makes all configurations available through the Automation Context.

  • TestNG extension Executor -  Responsible for execution of internal/external extension classes in various states of the TestNG listeners.

Pluggable utilities to the test execution (TestNG)

There are several interfaces that allow you to modify TestNG's behaviour. These interfaces are broadly called "TestNG Listeners". The Test Automation Framework supports the execution of internal/external extension classes at various states of the TestNG listeners. Users can define the class paths in the automation.xml file under the desired TestNG listener states.
The Automation Framework uses Java reflection to execute these classes. Users are expected to use the interfaces provided by the framework to develop external extension classes to be executed in the test run. There are 5 interfaces provided by TAF for different TestNG listeners. These interfaces have specific methods defined to be in line with the corresponding TestNG listeners. The interfaces are:
  • ExecutionListenerExtension
  • ReportListenerExtension
  • SuiteListenerExtension
  • TestListenerExtension
  • TransformListenerExtension

Automation Framework Common Extensions (Java)

This module consists of a set of default pluggable modules common to the platform. These pluggable modules can be executed during the test run by registering them in the test configuration file (automation.xml). TAF provides common modules such as:
  • CarbonServerExtension - Handles startup and shutdown of the Carbon server; coverage generation is also part of this class.
  • Axis2ServerExtension - Starts and stops the Axis2 server, which acts as the backend for integration tests.
The extensions module also contains classes that facilitate third-party framework integration:
  • SeleniumBrowserManager - Creates a Selenium WebDriver instance based on the browser configuration given in automation.xml.
  • JmeterTestRunner - Executes JMeter test scripts in headless mode and injects the results into the Surefire test result reports.
Users can also add platform-wide common modules to the Automation Framework Extensions. Test-specific pluggable modules should be kept on the test module side; those modules can also be used in tests by registering them in the automation configuration file.

Automation Framework Utils (Java)

Utility components that provide frequently used functionality, such as sending SOAP and REST requests to a known endpoint or monitoring a port to determine whether it's available. You can add the Utils module to your test case to get the functionality you need. Some sample utilities are:
  • TCPMon Listener
  • FTP Server Manager
  • SFTP Server Manager
  • SimpleHTTPServer
  • ActiveMQ Server
  • Axis2 clients
  • JsonClients
  • MutualSSLClient
  • HTTPClient
  • HTTPURLConnectionClient
  • Wire message monitor
  • Proxy Server
  • Tomcat Server
  • Simple Web Server (To facilitate content type testing)
  • FileReader, XMLFileReader
  • WireMonitor
  • Concurrent request Generator
  • Database Manager (DAO)

Test Framework Unit Tests - Unit test classes to verify the context builder and utility classes; they depend on TestNG for unit testing.

Carbon Automation Test Utils


[Image: Carbon Platform Utils - AutomationFramework.png]


Common Framework Tests


There are integration test scenarios common to all WSO2 products. Implementing tests for those common scenarios in each product might introduce test duplication and test management difficulties. Thus, a set of common tests is introduced under the framework utilities module, which can be used directly by extending the test classes.
e.g
  1. DistributionValidationTest
  2. JaggeryServerTest
  3. OSGIServerBundleStatusTest
  4. ServerStartupBaseTest

Common Admin Clients


This module consists of a set of admin clients for invoking backend admin services, together with supporting utility methods common to the WSO2 product platform.
e.g
  • ServerAdminClient
  • LogViewerAdminClient
  • UserManagementClient
  • SecurityAdminServiceClient

Common Test Utils


Frequently used test utility classes that depend on platform dependencies.
e.g
  • SecureAxis2ServiceClient
  • TestCoverageReportUtil - (To merge coverage reports)
  • ServerConfigurationManager (To change Carbon configuration files)
  • LoginLogoutClient

Common Extensions


Consists of pluggable classes that allow additional extensions to be plugged into the test execution flow. Frequently used test framework extension classes are available by default; these classes use platform dependencies.

e.g UserPopulatorExtension - For user population/deletion operations

Platform Automated Test Suite


Builds a distribution containing all integration and platform test jars released with each product. It contains an Ant-based test executor to run the test cases in each test jar file. The Ant script is based on the TestNG Ant task, which can be used as a test case runner. The Ant script also contains a mail task which can be configured to send out a notification mail upon test execution. The PATS distribution can be used to run the set of tests against a pre-configured product cluster/distribution.


[Image: PATS - AutomationFramework.png]

Nadeeshaan Gunasinghe: Dynamically Selecting SOAP Message Elements in WSO2 ESB

Recently I faced a requirement where I had to dynamically select elements from a SOAP response message received by WSO2 ESB. The use case is as follows.

I have a proxy service which receives a request from a client; let's say the request looks like the following.

<Request>
    <Values>
        <value>2</value>
    </Values>
</Request>

The value can change with each request. When the request is sent to the backend server from the ESB, we get a response like the following:

<Response>
    <Events>
        <Event><TestEntry>Entry Val1</TestEntry></Event>
        <Event><TestEntry>Entry Val2</TestEntry></Event>
        <Event><TestEntry>Entry Val3</TestEntry></Event>
        <Event><TestEntry>Entry Val4</TestEntry></Event>
        <Event><TestEntry>Entry Val5</TestEntry></Event>
    </Events>
</Response>

Depending on the value specified in the request, we need to extract that number of event entries from the response (if the value in the request is 2, then we have to extract two event entries from the response).

In order to achieve this requirement, I used the configurations below.


Sequence Calling the XSLT Transform

Use the XSLT mediator to transform the payload coming into the ESB.


<sequence name="get_document_list_seq" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
    <property expression="//value" name="limit"
              scope="default" type="STRING" xmlns:ns="http://org.apache.synapse/xsd"/>
    <xslt key="payload_transform" source="//Response"
          xmlns:ns="http://org.apache.synapse/xsd"
          xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/" xmlns:tem="http://tempuri.org/">
        <property expression="get-property('limit')" name="PARAM_NAME"/>
    </xslt>
    <respond/>
</sequence>


XSLT Transformation to transform the payload and extract the elements

The property named PARAM_NAME, which is passed to the XSLT from the sequence above, is used to determine the number of elements to be extracted.

<localEntry key="payload_transform" xmlns="http://ws.apache.org/ns/synapse">
    <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output indent="yes" method="xml" omit-xml-declaration="yes"/>
        <xsl:param name="PARAM_NAME"/>
        <xsl:template match="/">
            <list>
                <responseList>
                    <xsl:for-each select="//Response/Events/Event[position()&lt;=number($PARAM_NAME)]">
                        <entry>
                            <xsl:value-of select="TestEntry"/>
                        </entry>
                    </xsl:for-each>
                </responseList>
            </list>
        </xsl:template>
    </xsl:stylesheet>
</localEntry>

Chamila Wijayarathna: Setting Up a WSO2 API Manager Cluster in a Single Server Using Puppet

In this blog, I am going to write about setting up a WSO2 API Manager cluster with 4 API Manager instances running on different profiles: Gateway, Publisher, Store and Key Manager. More details about these profiles and how they operate can be found at [1]. If we are setting up the cluster on different server instances, this is quite straightforward, and [2] and [3] will guide you on how to do that.

But in some cases, due to resource limitations, you may need to use the same server to host more than one instance, each serving a different purpose. Doing this is a bit trickier than the case mentioned above, and in this blog I am going to explain how to achieve it.

Other than the nodes on which you are deploying the APIM instances, you need a separate node to use as the puppet master.

Install puppet master in that node by following the information given at section 2.1.1 of [2].

Install the puppet agent on the nodes where you are going to install the APIM instances by following section 2.1.2 of [2].

Configure puppet master node and all client agent nodes with configurations mentioned in section 2.2 at [2].

Now log in to the puppet master and add the following custom configurations there.

In /etc/puppet/hieradata/wso2/common.yaml, add the IP addresses and host names of the VMs in your setup under wso2::hosts_mapping: as follows.

# Host mapping to be made in etc/hosts
wso2::hosts_mapping:
  localhost:
    ip_address: 127.0.0.1
    hostname: localhost
  puppet:
    ip_address: <puppet master ip>
    hostname: <puppet master host>
  vm1:
    ip_address: <vm 1 ip>
    hostname: <vm 1 host>
  vm2:
    ip_address: <vm 2 ip>
    hostname: <vm 2 host>
  vm3:
    ip_address: <vm 3 ip>
    hostname: <vm 3 host>
  vm4:
    ip_address: <vm 4 ip>
    hostname: <vm 4 host>

Change the install_dir property of /etc/puppet/hieradata/wso2/common.yaml as follows.

 wso2::install_dir: "/mnt/%{::ipaddress}/%{::product_profile}"

Now, in the YAML file related to each profile, you need to specify the port offset you are going to use for the APIM instance running on that profile. For example, if you are using port offset 1 for the Store, set it in /etc/puppet/hieradata/production/wso2/wso2am/1.10.0/default/api-store.yaml as follows.

 wso2::ports:
  offset: 1

You need to specify port offsets for all profiles that should run with an offset.
Also define a different service name in each profile YAML you are going to use, as follows.

Eg : wso2::service_name: wso2am_store

Then follow section 3 at [2] to start the servers. Instead of using the setup.sh given at [2], please use the following.

#!/bin/bash
echo "#####################################################"
echo "                   Starting cleanup "
echo "#####################################################"
#rm -rf /mnt/*
sed -i '/environment/d' /etc/puppet/puppet.conf
echo "#####################################################"
echo "               Setting up environment "
echo "#####################################################"
rm -f /etc/facter/facts.d/deployment_pattern.txt
mkdir -p /etc/facter/facts.d

while read -r line; do declare  $line; done < deployment.conf  

echo product_name=$product_name >> /etc/facter/facts.d/deployment_pattern.txt  
echo product_version=$product_version >> /etc/facter/facts.d/deployment_pattern.txt  
echo product_profile=$product_profile >> /etc/facter/facts.d/deployment_pattern.txt  
echo vm_type=$vm_type >> /etc/facter/facts.d/deployment_pattern.txt  
echo platform=$platform >> /etc/facter/facts.d/deployment_pattern.txt

echo "#####################################################"  
echo "                    Installing "  
echo "#####################################################"  

puppet agent --enable  
puppet agent -vt  
puppet agent --disable

For each instance, you will have to change deployment.conf accordingly and run setup.sh; a sample deployment.conf is shown below.
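For example, a deployment.conf for the Store node could look like the following (the values are illustrative; product_profile, version, etc. must match the profile YAML files and Hiera data you configured above):

product_name=wso2am
product_version=1.10.0
product_profile=api-store
vm_type=default
platform=default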

[1]. https://docs.wso2.com/display/AM1100/Product+Profiles
[2]. https://github.com/wso2/puppet-modules/wiki/Use-WSO2-Puppet-Modules-in-puppet-master-agent-Environment
[3]. https://movingaheadblog.blogspot.com/2016/03/wso2-apim-setting-up-api-manager.html

Shazni Nazeer: Unleashing the Git - part 5 - git stashing

More often than not, there are situations where you have made some changes and need to temporarily hide them from the working tree because they are not complete yet. Stashing comes into play in such situations; for example, when you have made changes in a branch and want to switch to another branch without committing the changes in the current branch, stashing is the easiest option.

Say you have changed some files. Issuing the git status would show what has changed.
$ git stash // Temporarily hide your changes. Now issue git status. You won't see the modifications. Equivalent to $ git stash save
$ git stash save --patch // Opens the text editor to choose what portion to stash
$ git stash apply // Apply the last stashed change and keep the stash in stack. But this is normally not done. Instead following is done
$ git stash pop // Apply the last stashed change and remove it from stack
$ git stash list // Shows available stashes.
Stashes are named in the format stash@{#}, where # is 0 for the most recent stash, 1 for the second most recent, and so on.
$ git stash drop <stash name>   // e.g: git stash drop stash@{0} to remove the first stash
$ git stash clear // Remove all the stashed changes
$ git stash branch <stash name>  // Create a branch from an existing stash
In the next part of this series we'll discuss git branching in detail.

Shazni Nazeer: Unleashing the Git - part 4 - Git Tagging

Tagging comes in very handy when managing a repository. Tags and branches are similar concepts, but tags are read-only. There are two types of tags: 1) lightweight and 2) annotated.
$ git tag Basic_Features    // Creates a Basic_Features tag.
$ git tag            // Shows available tags
Basic_Features
$ git show Basic_Features   // Shows the details of the commit on which the Basic_Features tag was added
$ git tag beta1 HEAD^ // Creates a tag from the next-to-last commit
Earlier I showed how to check out code with a commit id. We can also use tags to check out the code of a particular stage using the following command.
$ git checkout Basic_Features
All of the above are lightweight (unannotated) tags. Creating an annotated tag is as simple as creating a lightweight one, as shown below:
$ git tag -a Basic_Messaging_Annotated -m "Annotated tag at Messaging"
$ git show Basic_Messaging_Annotated   // this will provide the details of the tagger as well.
Deleting a tag is the same for both tag types.
$ git tag -d Basic_Messaging_Annotated  // Deletes the annotated tag we created.
$ git push origin v1.0 // Push local tag v1.0 to remote origin.
$ git push --tags origin // Push all the local tags to remote origin
$ git fetch --tags origin // Fetches all of the remote origin's tags to the local repository. Note: if you have a local tag with the same name as a remote tag, it will be overwritten

Sachith Withana: Simple Indexing Guide for WSO2 Data Analytics Server

Interactive search functionality in WSO2 Data Analytics Server is powered by Apache Lucene [1], a powerful, high-performing, full-text search engine.

Lucene index data was kept in a database in the first version of the Data Analytics Server [2] (3.0.0), but from DAS 3.0.1 onwards, Lucene indexes are maintained in the local filesystem.
The Data Analytics Server (DAS) has a separate server profile which enables it to act as a dedicated indexing node.

When a DAS node is started with indexing enabled (disableIndexing=false), there are quite a few things going on. But first, let's get the meta-information out of the way.
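For reference, an indexing node can be started with the property set explicitly (a sketch; disableIndexing defaults to false, so a plain start also enables indexing):

sh <DAS-HOME>/bin/wso2server.sh -DdisableIndexing=false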

Configuration files locations:

Local shard config: <DAS-HOME>/repository/conf/analytics/local-shard-allocation-config.conf
Analytics config: <DAS-HOME>/repository/conf/analytics/analytics-config.xml

What is a Shard?

All the indexing data is partitioned into units called shards (the default is 6). The partitioned data can be observed by browsing <DAS-HOME>/repository/data/index_data/.
Each record belongs to exactly one shard.

Digging in ...

Clustering:

In standalone mode, DAS behaves as a powerful indexer, but it truly shines as a powerhouse when it’s clustered.

A cluster of DAS indexer nodes behaves as one giant, powerful distributed search engine. Search queries are distributed across all the nodes, resulting in lightning-fast result retrieval. Not only that, since indexes are distributed among the nodes, indexing data in the cluster is extremely fast.

In clustering mode, the aforementioned shards are distributed across the indexing nodes (nodes on which indexing is enabled). For example, in a cluster with 3 indexing nodes and 6 shards, each indexing node would be assigned two shards (unless replication is enabled, more on that later).

The shard allocation is dynamic: if a new node starts up as an indexing node, the shard allocation changes to assign some of the shards to the newly spawned indexing node. This is controlled using a global shard allocation mechanism.

Replication factor:
The replication factor can be configured in the analytics config file as indexReplicationFactor; it decides how many replicas of each record (or shard) are kept. The default is 1.

Manual Shard configuration:


There are 3 modes when configuring the local shard allocations of a node. They are,
  1. NORMAL
  2. INIT
  3. RESTORE

NORMAL means the data for that particular shard already resides on that node.
For example, upon starting up a single indexing node, the shard configuration would look as follows:
0, NORMAL
1, NORMAL
2, NORMAL
3, NORMAL
4, NORMAL
5, NORMAL

This means, all the shards (0 through 5) are indexed successfully in that indexer node.

INIT tells the indexing node to index that particular shard.
If you restart the server after adding a shard as INIT, that shard will be re-indexed on that node.
Ex: if the shard allocations are

1, NORMAL
2, NORMAL

and we add the line 4, INIT and restart the server,

1, NORMAL
2, NORMAL
4, INIT

This would index the data for the 4th shard on that indexing node, and you will see that the entry returns to the NORMAL state once indexing of the 4th shard is done.

RESTORE lets you copy the indexed data from a different node or a backup to another node and use that index data. This avoids re-indexing by reusing the already available index data. After a successful restore, that node is able to serve queries on the corresponding shard as well.

Ex: for the same shard allocation above, if we copy the indexed data for the 5th shard, add the line
5, RESTORE
and restart the server, the node will then have the 5th shard allocated to it (and will use it for both searching and indexing).

After restoring, that node would also index the incoming data for that shard as well.

FAQ!


How do I remove a node as an indexing node?


If you want to remove a node as an indexing node, you have to restart that particular node with indexing disabled (disableIndexing=true); this triggers the global shard allocations to be re-allocated, removing that node from the indexing cluster.
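For example, a sketch of the restart command using the same system property mentioned above:

sh <DAS-HOME>/bin/wso2server.sh -DdisableIndexing=true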

Then you have the option of restarting the other indexing nodes which will automatically distribute the shards among the available nodes again. 

Or you can manually assign the shards. In that case, make sure the shards that were held by the removed node are indexed on other indexing nodes as required.
You can view the shard list for that particular node in the local indexing shard config (refer above).

Then you can either use INIT or RESTORE methods to index those shards in other nodes (refer to the Manual Shard configuration section).

You must restart all the indexing servers for them to receive the indexing updates after the indexing node has left.

Mistakenly started up an indexing node?


If you have mistakenly started up an indexing server, it will change the global configurations, and this has to be undone manually.
If the replication factor is equal to or greater than 1, you will still be able to query and get all the data even while this node is down.

First of all, to remove the node as an indexing node, you can delete its indexing data (optional).
Indexing data resides at: <DAS-HOME>/repository/data/indexing_data/

Second, if you want to use the node in another server profile (more info on profiles: [3]), restart the server with the required profile.

Then follow the steps in the above question on "How do I remove a node as an indexing node?"

How do I know all my data are indexed?


The simplest way to do this would be to use the Data Explorer in DAS; refer to [4] for more information.

There, for a selected table, run the query “*:*” and it should return all the results with the total count.

For more information or queries please do drop by the mailing lists[4].

[1] https://lucene.apache.org/core/
[2] http://wso2.com/products/data-analytics-server/
[3] https://docs.wso2.com/display/CLUSTER44x/Fully+Distributed+High+Availability+Deployment+-+DAS+3.0.1
[4] http://wso2.com/products/data-analytics-server/

Sanjeewa Malalgoda: How refresh token validity period works in WSO2 API Manager 1.10.0 and later versions

If refresh token renewal is not enabled (using the <RenewRefreshTokenForRefreshGrant> parameter in the identity.xml configuration file), the existing refresh token is reused; otherwise a new refresh token is issued. When a new refresh token is issued, the default refresh token validity period is used (configured via the <RefreshTokenValidityPeriod> parameter in identity.xml); otherwise the existing refresh token's validity period applies.
That is how the refresh grant handler logic is implemented in the OAuth code.

First I will generate an access token using the password grant as follows.
curl -k -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
{"scope":"default","token_type":"Bearer","expires_in":3600,"refresh_token":"f0b7e3839143eec6439c10faf1a4c714","access_token":"03a3b01747b92ea714f171abc02791ba"}

Then refresh the token with the default configurations.
curl -k -d "grant_type=refresh_token&refresh_token=f0b7e3839143eec6439c10faf1a4c714&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
{"scope":"default","token_type":"Bearer","expires_in":3600,"refresh_token":"4f1bebe8b284b3216efb228d523df452","access_token":"d8db6c3892a48adf1a81f320a4a46a66"}

As you can see, the refresh token was updated with the token generation request.
Now I disable refresh token renewal by updating the following parameter.

<RenewRefreshTokenForRefreshGrant>false</RenewRefreshTokenForRefreshGrant>



curl -k -d "grant_type=refresh_token&refresh_token=4f1bebe8b284b3216efb228d523df452&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
{"scope":"default","token_type":"Bearer","expires_in":3600,"refresh_token":"4f1bebe8b284b3216efb228d523df452","access_token":"d7d2603c07fcadb9faf9593107bfbedd"}

In this case the refresh token's created time remains as it is: a new access token is issued along with the refresh token, but the refresh token's created time stays the same.

Sachith Withana: Incremental Analytics With WSO2 DAS

This is the second blog post on WSO2 Data Analytics Server. The first post can be found at [1].

Introduction

The duration of the batch process is critical in production environments. A product that does not support incremental processing needs to process the whole dataset in order to handle the unprocessed data. With incremental processing, the batch job only processes the partition of data that actually needs processing, not the whole (already processed) dataset, which improves efficiency drastically.

For example let’s say you have a requirement to summarize data for each day. The first time the summarization script is run, it would process the whole data set and summarize the data. That’s where the similarities end between a typical batch process and incremental analytics.

The next day when the script is run, a batch processing system without incremental analytics support would have to summarize the whole dataset in order to get the last day's summarization. With incremental processing, you only process the last day's worth of data and summarize it, which avoids the overhead of processing the already-processed data again.

Think of how it can improve the performance in summarizations starting from minutes running all the way to years.

Publishing events
Incremental analytics uses the timestamps of the events sent when retrieving the data for processing. Therefore, when defining streams for incremental analytics, you need to add an extra field to the event payload as _timestamp LONG to facilitate this.

When sending the events you have the ability to either add the timestamp to the _timestamp attribute or set it for each event at event creation.

Syntax


In DAS, in the spark script, when defining the table, you need to add extra parameters to the table definition for it to support incremental analytics.

If you do not provide these parameters, it will be treated as a typical analytics table, and each query that reads from that table will get the whole table.

The following is an example of defining a Spark table with incremental analytics.

create temporary table orders using CarbonAnalytics options (tableName "ORDERS", schema "customerID STRING, phoneType STRING, OrderID STRING, cost DOUBLE, _timestamp LONG -i", incrementalParams "orders, DAY");

When you are done with the summarization, you need to commit the status, indicating that reading the data was successful. This is done via:

INCREMENTAL_TABLE_COMMIT orders;

Parameters


incrementalParams has two required parameters and an optional parameter:
incrementalParams "uniqueID, timePeriod, #previousTimePeriods"

uniqueID : REQUIRED
    This is the unique ID of the incremental analytics definition. When committing the change, you need to use this ID in the incremental table commit command as shown above.

timePeriod: REQUIRED (DAY/MONTH/YEAR)
    The duration of the time period that you are processing. Ex: DAY

If you are summarizing per DAY (the specified timePeriod in this case), then DAS has the ability to process the timestamp of each event and get the DAY it belongs to.

Consider the situation with the following list of received events. The requirement is to get the total number of orders placed per day.

Customer ID | Phone Type | Order ID   | Cost | _timestamp
1           | Nexus 5x   | 33slsa2s   | 400  | 26th May 2016 12:00:01
12          | Galaxy S7  | kskds221   | 600  | 27th May 2016 02:00:02
43          | iPhone 6s  | sadl3122   | 700  | 27th May 2016 15:32:04
2           | Moto X     | sdda221s   | 350  | 27th May 2016 16:22:10
32          | LG G5      | lka2s24dkQ | 550  | 27th May 2016 19:42:42

And the last processed event is:

12          | Galaxy S7  | kskds221   | 600  | 27th May 2016 15:32:04

In the summarized table, the count for the day 27th May 2016 would be 2, since when the script last ran there were only two events for that time period; the other events came later.

So when the script runs the next time, it needs to update the value for the time duration for the day of 27th May 2016.

This is where the timePeriod parameter is used. For the last processed event, DAS calculates the “time period” it belongs to and pulls the data from the beginning of that time period onwards.

In this case the last processed event,

12          | Galaxy S7  | kskds221   | 600  | 27th May 2016 15:32:04

would trigger DAS to pull data from 27th May 2016 00:00:00 onwards.

#previousTimePeriods - Optional (int)
    Specifying this value allows DAS to pull data from that many previous time periods as well. For example, if you set this parameter to 30, it would fetch 30 more time periods' worth of data.

As per the above example, it would pull from 27th April 2016 00:00:00 onwards.

For more information or queries do drop by the mailing lists[2].

[1] http://sachithdhanushka.blogspot.com/2016/04/wso2-data-analytics-server-introduction.html
[2] http://wso2.com/products/data-analytics-server/

Sanjeewa Malalgoda: Details about ports in use when WSO2 API Manager is started

Management console ports. WSO2 products that provide a management console use the following servlet transport ports:
    9443 - HTTPS servlet transport (the default URL of the management console is https://localhost:9443/carbon)
    9763 - HTTP servlet transport

LDAP server ports
Provided by default in the WSO2 Carbon platform.
    10389 - Used in WSO2 products that provide an embedded LDAP server

KDC ports
    8000 - Used to expose the Kerberos key distribution center server

JMX monitoring ports
The WSO2 Carbon platform uses TCP ports to monitor a running Carbon instance using a JMX client such as JConsole. By default, JMX is enabled in all products. You can disable it using the /repository/conf/etc/jmx.xml file.
    11111 - RMIRegistry port. Used to monitor Carbon remotely
    9999 - RMIServer port. Used along with the RMIRegistry port when Carbon is monitored from a JMX client that is behind a firewall

Clustering ports
To cluster any running Carbon instance, either one of the following ports must be opened.
    45564 - Opened if the membership scheme is multicast
    4000 - Opened if the membership scheme is wka

Random ports
Certain ports are randomly opened during server startup. This is due to specific properties and configurations that become effective when the product is started. Note that the IDs of these random ports will change every time the server is started.

    A random TCP port will open at server startup because of the -Dcom.sun.management.jmxremote property set in the server startup script. This property is used for the JMX monitoring facility in JVM.
    A random UDP port is opened at server startup due to the log4j appender (SyslogAppender), which is configured in the /repository/conf/log4j.properties file.

These ports are opened randomly at server startup:
    tcp 0 0 :::55746 :::
    This port is opened by the -Dcom.sun.management.jmxremote property set in the startup script. It is used for the JMX monitoring facility in the JVM, so we do not have control over this port.

    udp 0 0 :::46316 :::
    This port is opened due to the log4j appender (SyslogAppender). You can find it in
    /repository/conf/log4j.properties
    If you do not want these logs in the log file, you can comment the appender out; it will not harm the server.

API Manager specific ports.
    10397 - Thrift client and server ports
    8280, 8243 - NIO/PT transport ports
    7711 - Thrift SSL port for secure transport, where the client is authenticated to BAM/CEP: stat pub
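A quick way to cross-check which of these ports are actually open on a running node is to list the listening sockets of the server process, for example (assuming a Linux host with net-tools installed):

netstat -tulnp | grep java

If netstat is not available, ss -tulnp gives similar output.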

Sanjeewa Malalgoda: How refresh token validity period works in WSO2 API Manager 1.9.0 and later versions

If refresh token renewal is not enabled (using the <RenewRefreshTokenForRefreshGrant> parameter in the identity.xml configuration file), the existing refresh token is reused; otherwise a new refresh token is issued. When a new refresh token is issued, the default refresh token validity period is used (configured via the <RefreshTokenValidityPeriod> parameter in identity.xml); otherwise the existing refresh token's validity period applies.
That is how the refresh grant handler logic is implemented in the OAuth code.

First I will generate an access token using the password grant as follows.
curl -k -d "grant_type=password&username=admin&password=admin" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
{"scope":"default","token_type":"Bearer","expires_in":3600,"refresh_token":"f0b7e3839143eec6439c10faf1a4c714","access_token":"03a3b01747b92ea714f171abc02791ba"}

Then refresh the token with the default configurations.
curl -k -d "grant_type=refresh_token&refresh_token=f0b7e3839143eec6439c10faf1a4c714&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
{"scope":"default","token_type":"Bearer","expires_in":3600,"refresh_token":"4f1bebe8b284b3216efb228d523df452","access_token":"d8db6c3892a48adf1a81f320a4a46a66"}

As you can see, the refresh token was updated with the token generation request.
Now I disable refresh token renewal by updating the following parameter.

<RenewRefreshTokenForRefreshGrant>false</RenewRefreshTokenForRefreshGrant>

Then generate an access token using the refresh token.
curl -k -d "grant_type=refresh_token&refresh_token=4f1bebe8b284b3216efb228d523df452&scope=PRODUCTION" -H "Authorization: Basic X3kzVVRFSnNLdUNKZlBwOUpNUlNiV3drbFE4YTpmSzVPUzZFNEJfaW8xSFk1SGZsZjVPeWFreW9h" -H "Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token
{"scope":"default","token_type":"Bearer","expires_in":3600,"refresh_token":"4f1bebe8b284b3216efb228d523df452","access_token":"d7d2603c07fcadb9faf9593107bfbedd"}
As you can see, the refresh token did not change and remains as it is, while a new access token is issued with the same refresh token.

Sanjeewa Malalgoda: How to add a custom log appender to WSO2 products to direct logs to a separate file (send mediator logs to a different file)

Implement a custom log appender class as follows.

package org.wso2.test.logging;
import org.apache.log4j.DailyRollingFileAppender;
import org.apache.log4j.spi.LoggingEvent;
import org.wso2.carbon.context.CarbonContext;
import org.wso2.carbon.utils.logging.LoggingUtils;
import org.wso2.carbon.utils.logging.TenantAwareLoggingEvent;
import java.io.File;
import java.io.IOException;
import java.security.AccessController;
import java.security.PrivilegedAction;
public class CustomAppender extends DailyRollingFileAppender {
    private static final String LOG_FILE_PATH = org.wso2.carbon.utils.CarbonUtils.getCarbonHome() + File.separator + "repository" +
                                                File.separator + "logs" + File.separator + "messages" + File.separator;

    @Override
    protected void subAppend(LoggingEvent loggingEvent) {
        // Resolve the tenant id of the current thread inside a privileged block
        int tenantId = AccessController.doPrivileged(new PrivilegedAction<Integer>() {
            public Integer run() {
                return CarbonContext.getThreadLocalCarbonContext().getTenantId();
            }
        });

        String logFileName = "test_file";
        try {
            // Redirect this appender to the separate log file
            this.setFile(LOG_FILE_PATH + logFileName, this.fileAppend, this.bufferedIO, this.bufferSize);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
        String serviceName = CarbonContext.getThreadLocalCarbonContext().getApplicationName();
        final TenantAwareLoggingEvent tenantAwareLoggingEvent = LoggingUtils
                .getTenantAwareLogEvent(loggingEvent, tenantId, serviceName);
        AccessController.doPrivileged(new PrivilegedAction<Void>() {
            public Void run() {
                CustomAppender.super.subAppend(tenantAwareLoggingEvent);
                return null; // nothing to return
            }
        });
    }
}


Then add following to log4j.properties file.

log4j.logger.org.wso2.test.logging.LoggingClassMediator=INFO, NEW_CARBON_LOGFILE

log4j.appender.NEW_CARBON_LOGFILE=org.wso2.test.logging.CustomAppender
log4j.appender.NEW_CARBON_LOGFILE.File=${carbon.home}/repository/logs/${instance.log}/wso2carbon${instance.log}.log
#log4j.appender.NEW_CARBON_LOGFILE.Append=true
log4j.appender.NEW_CARBON_LOGFILE.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.NEW_CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m {%c}%n
log4j.appender.NEW_CARBON_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
log4j.appender.NEW_CARBON_LOGFILE.threshold=INFO
log4j.appender.NEW_CARBON_LOGFILE.MaxFileSize=5kb


Then add org.wso2.test.logging.LoggingClassMediator class to your mediation flow. Please see sample mediator code below.

package org.wso2.test.logging;

import org.apache.log4j.Logger;
import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class LoggingClassMediator extends AbstractMediator {

    private static final Logger log = Logger.getLogger(LoggingClassMediator.class);

    public LoggingClassMediator() {
    }

    public boolean mediate(MessageContext mc) {
        String apiName = mc.getProperty("SYNAPSE_REST_API").toString();
        String name = "APIName::" + apiName + "::";
        try {
            log.info(name + "LOGGING MESSAGE " + mc.getProperty("RESPONSE_TIME"));
            log.info(name + "LOGGING MESSAGE " + mc.getProperty("SYNAPSE_REST_API"));
        } catch (Exception e) {
            log.error(name + "ERROR :", e);
        }
        return true;
    }
}

Dinusha Senanayaka: How to use the App Manager Business Owner functionality?

The new WSO2 App Manager release (1.2.0) introduces the capability to define a business owner for each application. (App Manager 1.2.0 was yet to be released at the time of writing; you can download a nightly build and try it out until the release is done.)

1. How to define business owners ?

Log in as an admin user to the admin-dashboard by accessing the following URL.
https://localhost:9443/admin-dashboard

This will give you a UI similar to the one below, where you can define new business owners.


Click on "Add Business Owner" option to add new business owners.


All created business owners are listed in the UI as follows, and you can edit or delete them from the list.




2. How to associate business owner to application ?

You can log in to the Publisher by accessing the following URL to create a new app.
https://localhost:9443/publisher 

In the "add new web app" UI, you should see a page similar to the following, where you can type and select the business owner for the app.



Once the required data is filled in and the app is ready to publish to the store, change the app life-cycle state to 'Published' to publish the app into the App Store.



Once the app is published, users can access it through the App Store at the following URL.
https://localhost:9443/store

App users can find the business owner details on the App Overview page, as shown below.





If you are using the REST APIs to create and publish apps, the following sample commands would help.

These APIs are protected with OAuth, and you need to generate an OAuth token before invoking the APIs.

Register an OAuth app and generate the consumer key and secret key
Request:
curl -X POST -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -d  '{"clientName": "Demo_App", "grantType": "password refresh_token"}'  http://localhost:9763/api/appm/oauth/v1.0/register

Note: Authorization header should pass base64encoded(username:password) as the value in above request.

Response:
{"clientId":"1kaMrCWFr9NeT1VCfTxacI_Pu0sa","clientName":"Demo_App","callBackURL":null,"clientSecret":"YNkRA_30pwOZ6kNTIZC9B54p7LEa"}

Generate access token using above consumer/secret keys
Request:
curl -k -X POST -H "Authorization: Basic MWthTXJDV0ZyOU5lVDFWQ2ZUeGFjSV9QdTBzYTpZTmtSQV8zMHB3T1o2a05USVpDOUI1NHA3TEVh" -H "Content-Type: application/x-www-form-urlencoded" -d 'username=admin&password=admin&grant_type=password&scope=appm:read appm:administration appm:create appm:publish'  https://localhost:9443/oauth2/token

Note: Authorization header should pass base64encoded(clientId:clientSecret) as the value in above request.

Response:
{"access_token":"cc78ea5a2fa491ed23c05288f539b5f5","refresh_token":"3b203c859346a513bd3f94fc6bf202e4","scope":"appm:administration appm:create appm:publish appm:read","token_type":"Bearer","expires_in":3600}


Add new business owner
Request:
curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" -d '{"site": "http://wso2.com", "email": "beth@gmail.com", "description": "this is a test description", "name": "Beth", "properties": [{"isVisible": true,"value": "0112345678","key": "telephone"}]}' http://localhost:9763/api/appm/publisher/v1.1/administration/businessowner

Response:
{"id":1}


Create a new app and set its business owner to the previously added business owner
curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" http://localhost:9763/api/appm/publisher/v1.1/apps/webapp -d '{
  "name": "TravelBooking",
  "version": "1.0.0",
  "isDefaultVersion":true,
  "displayName": "Travel Booking",
  "description": "description",
  "businessOwnerId":"1",
  "isSite": "false",
  "context": "/travel",
  "appUrL": "http://www.google.lk",
  "acsUrl": "",
  "transport": "http",
  "policyGroups": [
    {
      "policyGroupName": "policy1",
      "description": "Policy 1",
      "throttlingTier": "Unlimited",
      "userRoles": [
        "role1"
      ],
      "allowAnonymousAccess": "false"
    },
    {
      "policyGroupName": "policy2",
      "description": "Policy 2",
      "throttlingTier": "Gold",
      "userRoles": [
        "role2"
      ],
      "allowAnonymousAccess": "false"
    },
    {
      "policyGroupName": "policy3",
      "description": "Policy 3",
      "throttlingTier": "Unlimited",
      "userRoles": [
        "role3"
      ],
      "allowAnonymousAccess": "false"
    }
  ],
  "uriTemplates": [
    {
      "urlPattern": "/*",
      "httpVerb": "GET",
      "policyGroupName": "policy1"
    },
    {
      "urlPattern": "/*",
      "httpVerb": "POST",
      "policyGroupName": "policy2"
    },
    {
      "urlPattern": "/pattern1",
      "httpVerb": "POST",
      "policyGroupName": "policy3"
    }
  ]
}'

Response:
{"AppId": "78012b68-719d-4e14-a8b8-a899d41dc712"}


Change app lifecycle state to 'Published'
curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp/change-lifecycle?appId=78012b68-719d-4e14-a8b8-a899d41dc712&action=Submit%20for%20Review"

curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp/change-lifecycle?appId=78012b68-719d-4e14-a8b8-a899d41dc712&action=Approve"

curl -X POST -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" "http://localhost:9763/api/appm/publisher/v1.1/apps/webapp/change-lifecycle?appId=78012b68-719d-4e14-a8b8-a899d41dc712&action=Publish"

Retrieve App info in store
Request:
curl -X GET -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" http://localhost:9763/api/appm/store/v1.1/apps/webapp/id/78012b68-719d-4e14-a8b8-a899d41dc712

Response:
{"businessOwnerId":"1","isSite":"false","isDefaultVersion":true,"screenshots":[],"customProperties":[],"tags":[],"rating":0.0,"transport":["http"],"lifecycle":"WebAppLifeCycle","lifecycleState":"PUBLISHED","description":"description","version":"1.0.0","provider":"admin","name":"TravelBooking1","context":"/travel1","id":"78012b68-719d-4e14-a8b8-a899d41dc712","type":"webapp","displayName":"Travel Booking"}

Retrieve business owner details
Request:
curl -X GET -H "Authorization: Bearer cc78ea5a2fa491ed23c05288f539b5f5" -H "Content-Type: application/json" http://localhost:9763/api/appm/store/v1.1/businessowner/1

Response:
{"site":"http://wso2.com","email":"beth@gmail.com","description":"this is a test description","name":"Beth","properties":[{"isVisible":true,"value":"0112345678","key":"telephone"}],"id":1}

Sanjeewa Malalgoda: How to change pooling configurations for connections made from the API Gateway to the backend - WSO2 API Manager, ESB

The only connection pooling related parameter for Gateway-to-backend connections is http.max.connection.per.host.port, and its default value is the integer max value. The worker pool size does not have a direct relationship with client-to-gateway or gateway-to-backend connections; worker threads are just the processing threads available within the server.

So whenever you need to change the pooling behaviour from the gateway to the backend, you can tune the following parameter in the "passthru-http.properties" file:


http.max.connection.per.host.port = 20

Bhathiya Jayasekara: [WSO2 APIM] Setting up API Manager Distributed Setup with Puppet Scripts





In this post we are going to use Puppet to set up a 4-node API Manager distributed deployment. You can find the puppet scripts I used in this git repo.

NOTE: This blog post can be useful to troubleshoot any issues you get while working with puppet.

My puppet scripts use the node IPs listed below. You have to replace them with yours.

Puppet Master/MySQL :   192.168.57.92
Publisher:   192.168.57.93
Store:   192.168.57.94
Key Manager:   192.168.57.96
Gateway:   192.168.57.97

That's just some information. Now let's start setting up each node, one by one.

1) Configure Puppet Master/ MySQL Node 

1. Install NTP, Puppet Master and MySQL.

> sudo su
> ntpdate pool.ntp.org ; apt-get update && sudo apt-get -y install ntp ; service ntp restart
> cd /tmp
> wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
> dpkg -i puppetlabs-release-trusty.deb
> apt-get update
> apt-get install puppetmaster
> apt-get install mysql-server


2. Change hostname in /etc/hostname to puppet (This might need a reboot)

3. Update /etc/hosts with below entry. 

127.0.0.1 puppet


4. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/puppet directory to /etc/puppet

5. Replace IPs in copied puppet scripts. 

6. Before restarting the puppet master, clean all certificates, including the puppet master's certificate, which has its old DNS alt names.

> puppet cert clean --all


7. Restart puppet master

> service puppetmaster restart

8. Download and copy jdk-7u79-linux-x64.tar.gz to /etc/puppet/environments/production/modules/wso2base/files/jdk-7u79-linux-x64.tar.gz

9. Download and copy wso2am-2.0.0-SNAPSHOT.zip to 
/etc/puppet/environments/production/modules/wso2am/files/wso2am-2.0.0-SNAPSHOT.zip

10. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/db_scripts directory to /opt/db_scripts

11. Unzip wso2am-2.0.0-SNAPSHOT.zip and copy wso2am-2.0.0-SNAPSHOT/dbscripts directory to /opt/db_scripts/dbscripts

12. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/run_puppet.sh file to /opt/run_puppet.sh (Copy required private keys as well, to ssh to puppet agent nodes)

13. Open and update run_puppet.sh script as required, and set read/execution rights.

> chmod 755 run_puppet.sh


2) Configure Puppet Agents 

Repeat these steps in each agent node.

1. Install Puppet.

> sudo su
> apt-get update
> apt-get install puppet


2. Change hostname in /etc/hostname to apim-node-1 (This might need a reboot)

3. Update /etc/hosts with puppet master's host entry.

192.168.57.92 puppet

4. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/puppet-agents/setup.sh file to /opt/setup.sh

5. Set execution rights.

> chmod 755 setup.sh


6. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/puppet-agents/deployment.conf file to /opt/deployment.conf (Edit this as required. For example, product_profile should be one of api-store, api-publisher, api-key-manager and gateway-manager)


3) Execute Database and Puppet Scripts

Go to /opt in puppet master and run ./run_puppet.sh (or you can run setup.sh in each agent node.)

If you have any questions, please post below.


References:
[1] https://github.com/wso2/puppet-modules/wiki/Use-WSO2-Puppet-Modules-in-puppet-master-agent-Environment
[2] https://github.com/wso2/puppet-modules/

Rushmin Fernando: WSO2 App Manager 1.2.0 - How to use custom app properties in ReST APIs


WSO2 App Manager supports defining and using custom properties for app types. In order to add a new custom property, the relevant RXT file (registry extension) and a couple of other files should be amended. But these custom properties are not marked as 'custom' properties anywhere; once defined, they are treated just like any other field.

With the introduction of the new ReST API implementation in App Manager 1.2.0, it was a bit challenging to expose these custom fields through the APIs. The new ReST APIs are documented using Swagger, so when the relevant API response models are defined, the custom fields can't be added as named properties since they are dynamic. The solution is to have a field, which is a map, to represent the custom properties. The need to mark custom fields as 'custom' had to be addressed too.

In App Manager, this has been addressed by having another configuration to store the custom properties.


Where is the definitions file


In the registry there are JSON resources which are custom property definitions. There is a definition file per app type.

e.g. Definition file for web apps -  
         /_system/governance/appmgt/applicationdata/custom-property-definitions/webapp.json

What does a definition file look like


As of now the custom property definitions file only has the names of the custom properties.

e.g.
{"customPropertyDefinitions":[{"name":"overview_custom1"}]}

How do I persist these custom properties for an app using the ReST API


The request payload should contain the custom properties as below.

{
  "name": "app1",
  "version": "1.0",
  "isDefaultVersion":true,
  .
  .
  .
  "customProperties":[
    {
       "name":"overview_custom1",
       "value":"custom_property_1"
    }
  ]
}





sanjeewa malalgodaHow to Create Service Metadata Repository for WSO2 Products(WSO2 ESB, DSS, AS)


Sometimes we need to store all service metadata in a single place and maintain changes, life cycles, etc. We can implement this as an automated process.
Here is the detailed flow.
  • In Jenkins we can deploy a scheduled task to trigger an event periodically.
  • The periodic task will call the admin services of WSO2 ESB, DSS and App Server to get service metadata. In ESB we can call the proxy service admin service to list all proxy services deployed in the ESB. From the same call we can get the WSDLs associated with the services. Please refer to this article for more information about admin services and how we can use them.
  • In the same way we can call all services and get the complete service data.
  • Then we can call the registry REST API and push that information. Please refer to this article for more information about the Registry REST API.

If we consider proxy service details, we can follow the approach listed below.
Create a web service client for the https://127.0.0.1:9444/services/ServiceAdmin admin service and invoke it from the client.
See the following SoapUI sample to get all proxy services deployed in the ESB.

Screenshot from 2016-06-17 11-50-59.png
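
For reference, the listServices request sent from SoapUI looks roughly like the envelope below. The parameter names here are assumptions based on a typical Axis2 admin service, so verify them against the generated WSDL at https://127.0.0.1:9444/services/ServiceAdmin?wsdl before use:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:xsd="http://org.apache.axis2/xsd">
   <soapenv:Header/>
   <soapenv:Body>
      <!-- Hypothetical parameter names; check the ServiceAdmin WSDL -->
      <xsd:listServices>
         <xsd:serviceTypeFilter>ALL</xsd:serviceTypeFilter>
         <xsd:serviceSearchString></xsd:serviceSearchString>
         <xsd:pageNumber>0</xsd:pageNumber>
      </xsd:listServices>
   </soapenv:Body>
</soapenv:Envelope>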


You will see a response like the one below. It contains all the details related to each proxy service, such as WSDLs, service status, service type, etc. So you can list all service metadata using the information retrieved from this web service call.


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
 <soapenv:Body>
    <ns:listServicesResponse xmlns:ns="http://org.apache.axis2/xsd">
       <ns:return xsi:type="ax2539:ServiceMetaDataWrapper" xmlns:ax2541="http://neethi.apache.org/xsd" 
xmlns:ax2539="http://mgt.service.carbon.wso2.org/xsd" 
xmlns:ax2542="http://util.java/xsd" xmlns:ax2545="http://utils.carbon.wso2.org/xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <ax2539:numberOfActiveServices>5</ax2539:numberOfActiveServices>
          <ax2539:numberOfCorrectServiceGroups>5</ax2539:numberOfCorrectServiceGroups>
          <ax2539:numberOfFaultyServiceGroups>0</ax2539:numberOfFaultyServiceGroups>
          <ax2539:numberOfPages>1</ax2539:numberOfPages>
          <ax2539:serviceTypes>axis2</ax2539:serviceTypes>
          <ax2539:serviceTypes>proxy</ax2539:serviceTypes>
          <ax2539:serviceTypes>sts</ax2539:serviceTypes>
          <ax2539:services xsi:type="ax2539:ServiceMetaData">
             <ax2539:CAppArtifact>false</ax2539:CAppArtifact>
             <ax2539:active>true</ax2539:active>
             <ax2539:description xsi:nil="true"/>
             <ax2539:disableDeletion>false</ax2539:disableDeletion>
             <ax2539:disableTryit>false</ax2539:disableTryit>
             <ax2539:eprs xsi:nil="true"/>
             <ax2539:foundWebResources>false</ax2539:foundWebResources>
             <ax2539:mtomStatus xsi:nil="true"/>
             <ax2539:name>testproxy</ax2539:name>
             <ax2539:operations xsi:nil="true"/>
             <ax2539:scope xsi:nil="true"/>
             <ax2539:securityScenarioId xsi:nil="true"/>
             <ax2539:serviceDeployedTime>1970-01-01 05:30:00</ax2539:serviceDeployedTime>
             <ax2539:serviceGroupName>testproxy</ax2539:serviceGroupName>
             <ax2539:serviceId xsi:nil="true"/>
             <ax2539:serviceType>proxy</ax2539:serviceType>
             <ax2539:serviceUpTime>16969day(s) 6hr(s) 20min(s)</ax2539:serviceUpTime>
             <ax2539:serviceVersion xsi:nil="true"/>
             <ax2539:tryitURL>/services/testproxy?tryit</ax2539:tryitURL>
             <ax2539:wsdlPortTypes xsi:nil="true"/>
             <ax2539:wsdlPorts xsi:nil="true"/>
             <ax2539:wsdlURLs>http://sanjeewa-ThinkPad-X1-Carbon-3rd:8281/services/testproxy?wsdl</ax2539:wsdlURLs>
             <ax2539:wsdlURLs>http://sanjeewa-ThinkPad-X1-Carbon-3rd:8281/services/testproxy?wsdl2</ax2539:wsdlURLs>
          </ax2539:services>
          <ax2539:services xsi:type="ax2539:ServiceMetaData">
             <ax2539:CAppArtifact>false</ax2539:CAppArtifact>
             <ax2539:active>true</ax2539:active>
             <ax2539:description xsi:nil="true"/>
             <ax2539:disableDeletion>false</ax2539:disableDeletion>
             <ax2539:disableTryit>false</ax2539:disableTryit>
             <ax2539:eprs xsi:nil="true"/>
             <ax2539:foundWebResources>false</ax2539:foundWebResources>
             <ax2539:mtomStatus xsi:nil="true"/>
             <ax2539:name>testp</ax2539:name>
             <ax2539:operations xsi:nil="true"/>
             <ax2539:scope xsi:nil="true"/>
             <ax2539:securityScenarioId xsi:nil="true"/>
             <ax2539:serviceDeployedTime>1970-01-01 05:30:00</ax2539:serviceDeployedTime>
             <ax2539:serviceGroupName>testp</ax2539:serviceGroupName>
             <ax2539:serviceId xsi:nil="true"/>
             <ax2539:serviceType>proxy</ax2539:serviceType>
             <ax2539:serviceUpTime>16969day(s) 6hr(s) 20min(s)</ax2539:serviceUpTime>
             <ax2539:serviceVersion xsi:nil="true"/>
             <ax2539:tryitURL>/services/testp?tryit</ax2539:tryitURL>
             <ax2539:wsdlPortTypes xsi:nil="true"/>
             <ax2539:wsdlPorts xsi:nil="true"/>
      <ax2539:wsdlURLs>http://sanjeewa-ThinkPad-X1-Carbon-3rd:8281/services/testp?wsdl</ax2539:wsdlURLs>
    <ax2539:wsdlURLs>http://sanjeewa-ThinkPad-X1-Carbon-3rd:8281/services/testp?wsdl2</ax2539:wsdlURLs>
          </ax2539:services>
       </ns:return>
    </ns:listServicesResponse>
 </soapenv:Body>
</soapenv:Envelope>

We can automate this service metadata retrieval process and persist the data to the registry. Please refer to the diagram below to understand the flow for this use case. A discovery agent communicates with the servers and uses a REST client to push events to the registry.

Untitled drawing(1).jpg
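
As a sketch of the push step, the discovery agent could call the Registry REST API with a request along the lines of the one below; the resource path segment is only a placeholder, so take the exact URL format from the Registry REST API article referenced above:

curl -k -u admin:admin -X PUT \
     -H "Content-Type: application/xml" \
     --data @testproxy-metadata.xml \
     "https://localhost:9443/<registry-rest-api-path>/_system/governance/services/testproxy"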

Udara LiyanagePublish WSO2 Carbon logs to Logstash/Elasticsearch/Kibana (ELK) using Filebeat

You know that Logstash, Elasticsearch and Kibana, a.k.a. the ELK stack, is a widely used log
analysis tool set. This howto guide explains how to publish logs of WSO2 Carbon
servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK. But I am a Docker fan,
so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this image does not come
with a Logstash receiver for beats events, so I added the Logstash configuration below to receive beats events and created my own
Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

input {
  beats {
    type => beats
    port => 7000
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 7000 (input section) and forward the logs to Elasticsearch, which is running on port 9200
of the Docker container.

Now start the Docker container as
docker run -d -p 7000:7000 -p 5601:5601 udaraliyanage/elklog4

port 7000 => Logstash
port 5601 => Kibana

# Setup Carbon Server to publish logs to Logstash

* Download the filebeat deb file from [1] and install it:
dpkg -i filebeat_1.2.3_amd64.deb

* Create a filebeat configuration file /etc/carbon_beats.yml with following content.

Please make sure to provide the correct wso2carbon.log file location in the paths section. You can provide multiple Carbon log files as well
if you are running multiple Carbon servers on your machine.

filebeat:
  prospectors:
    -
      paths:
        - /opt/wso2as-5.3.0/repository/logs/wso2carbon.log
      input_type: log
      document_type: appserver_log
output:
  logstash:
    hosts: ["localhost:7000"]
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
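
* Start filebeat with this configuration file. A typical invocation (the flags may differ slightly between filebeat versions; -c points at the configuration file and -e logs to the console) is:

filebeat -e -c /etc/carbon_beats.yml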

* Now start the Carbon server: `./bin/wso2server.sh start`

# View logs from Kibana by visiting http://localhost:5601

[1] https://www.elastic.co/products/beats/filebeat
[2] https://hub.docker.com/r/sebp/elk/


Udara LiyanagePublish WSO2 Carbon logs to Logstash/Elasticsearch/Kibana (ELK) using Log4j SocketAppender

I assume you know that Logstash, Elasticsearch and Kibana, a.k.a. the ELK stack, is a widely used log analysis tool set. This howto guide explains how to publish logs of WSO2 Carbon
servers to the ELK platform.

# Setup ELK

You can download the Logstash, Elasticsearch and Kibana binaries one by one and set up ELK. But I am a Docker fan, so I use a preconfigured Docker image. Most people use the sebp/elk Docker image. By default this image does not come with a Logstash receiver for log4j events, so I added the Logstash configuration below to receive log4j events and created my own Docker image, udaraliyanage/elk. You can either use my Docker image or add the Logstash configuration below to the default image.

input {
  log4j {
    mode => server
    host => "0.0.0.0"
    port => 6000
    type => "log4j"
  }
}
output {
  elasticsearch {
      hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}

The above configuration causes Logstash to listen on port 6000 (input section) and forward the logs to Elasticsearch, which is running on port 9200
of the Docker container.

Now start the Docker container as
`docker run -d -p 6000:6000 -p 5601:5601 udaraliyanage/elklog4j`

port 6000 => Logstash
port 5601 => Kibana

# Setup Carbon Server to publish logs to Logstash

* Download the Logstash jsonevent-layout dependency jar from [3] and place it in $CARBON_HOME/repository/components/lib.
This converts the log events and streams them to
a remote log4j host, in our case Logstash running on port 6000.

* Add the following log4j appender configuration to the Carbon server by editing the $CARBON_HOME/repository/conf/log4j.properties file.

log4j.rootLogger=INFO, CARBON_CONSOLE, CARBON_LOGFILE, CARBON_MEMORY,tcp

log4j.appender.tcp=org.apache.log4j.net.SocketAppender
log4j.appender.tcp.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
log4j.appender.tcp.layout.ConversionPattern=[%d] %P%5p {%c} - %x %m%n
log4j.appender.tcp.layout.TenantPattern=%U%@%D[%T]
log4j.appender.tcp.Port=6000
log4j.appender.tcp.RemoteHost=localhost
log4j.appender.tcp.ReconnectionDelay=10000
log4j.appender.tcp.threshold=DEBUG
log4j.appender.tcp.Application=myCarbonApp

RemoteHost => the Logstash server to which we want to publish events; it is localhost (port 6000) in our case.
Application => the name of the application that publishes the logs. It is useful for anyone viewing the logs in Kibana, as it shows which server a particular log entry came from.

* Now start the Carbon server: `./bin/wso2server.sh start`

# View logs from Kibana by visiting http://localhost:5601

[1] https://hub.docker.com/r/sebp/elk/
[2] https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
[3] http://mvnrepository.com/artifact/net.logstash.log4j/jsonevent-layout/1.7


Shazni NazeerUnleashing the Git - part 3 - Working with remote repositories

More often than not, we need to share the files that we modify and make them available to other users. This is typically the case in software development projects, where several people work on a set of files and all of them need to make changes. The solution is to host the repository in a centralised place, with everyone working on their own copies before they commit back to the central repository. There are many online git repository providers, for example GitHub and Bitbucket.

Choose whatever best suits you and sign up for an account. Most online repository providers offer free accounts. I've created one in Bitbucket and created a repository named online_workbench.
$ git remote add origin https://shazni@bitbucket.org/shazni/online_workbench.git  // Syntax is git remote add <name> <repository URL>
$ git remote rm <name> // To remove a remote from your local repository
$ git push -u origin master
Password for 'https://shazni@bitbucket.org':
Counting objects: 4, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 88.01 KiB, done.
Total 4 (delta 0), reused 0 (delta 0)
To https://shazni@bitbucket.org/shazni/online_workbench.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.

After you enter your password, your repository will be online, and its local copy is your local $HOME/example directory.

'git push -u origin master' makes git aware that all pull and push operations should default to the master branch. If '-u origin master' is not specified, you will have to specify the remote and branch every time you issue a 'pull' or 'push' command.
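
For example, without the upstream set, each push needs the remote and branch spelled out:
$ git push origin master           // push explicitly, since no upstream branch is configured
$ git branch --set-upstream-to=origin/master master   // or configure the upstream afterwards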

OK, now if someone needs to get a local copy and work on this remote repository, what does he/she need to do? Only a few steps are involved.

First we need to clone the repository into a directory. Navigate to or create a new directory where you need the local clone of the repository and issue the following command.
$ git clone <remote repository location> <Path you want the clone to be>
If you omit the <Path you want the clone to be>, it will be in the current directory.

Depending on how the firewall on your computer or local area network (LAN) is configured, you might get an error trying to clone a remote repository over the network. Git uses SSH by default to transfer changes
over the network, but it also uses the Git protocol (signified by having git:// at the beginning of the URI) on port 9418. Check with your local network administrator to make sure communication on ports 22—the port SSH communicates on—and 9418 are open if you have trouble communicating with a remote repository.
$ git clone --depth 50 <remote repository location>  // Create a shallow clone with the last fifty commits
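
For example, a clone over the Git protocol instead of SSH/HTTPS would look like the following (the host and repository name are placeholders):
$ git clone git://git.example.com/online_workbench.git my_workbench   // uses port 9418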

Then, if we modify files or add new files, we next need to stage them for commit. Issue the following commands.
$ git add *            // Stages and adds all your changes. This is similar to git add -A
// If you specify git add -u, only previously added (tracked) and updated files will be staged. The -p parameter presents your changes in hunks so that you can choose whether to include each change or not.
$ git commit -m 'My commit comment'     // Commit the changes to the local repository
$ git pull        // Checks whether there are any unsynced updates in the remote location and, if there are, syncs the local copy with the server
$ git push        // Pushes your changes to the server

sanjeewa malalgodaHow to define environment specific parameters and resolve them during build time using maven - WSO2 ESB endpoints

Let's say we have multiple environments named DEV, QA and PROD. You need to have different Carbon Application (C-App) projects to group the dynamic artifacts, such as endpoints, WSDLs and policies, for the different environments.

To address this we can keep a property file along with the artifacts to store environment-specific parameters. Maven can then build the artifacts based on the properties in that file, producing artifacts specific to the given environment.

Given below is an example of this. An endpoint value should be different from one environment to another. We can follow the below approach to achieve this.

Define the endpoint EP1 as below, having a token for hostname which could be replaced later.
    <address trace="disable" uri="http://@replace_token/path"/>

Add the maven antrun plugin to the pom file to replace the token with a maven property, as below.
  <plugin>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
      <execution>
        <phase>prepare-package</phase>
        <configuration>
          <tasks>
            <replace token="@replace_token" value="${hostName}" dir="${basedir}/target/capp/artifacts/endpoint/EP1/">
              <include name="**/*.xml"/>
            </replace>
          </tasks>
        </configuration>
        <goals>
          <goal>run</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

Now build the project providing the maven property hostName as below,
mvn clean install -DhostName=127.0.0.1
Instead of giving the hostname manually as above, we can now configure Jenkins or any other build server to pick up the hostname from a config file and feed it as a system property to the build process.
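
A minimal sketch of such a build step, assuming the environment values live in a file such as env/dev.properties containing a line hostName=dev.wso2as.com (the file name and key are illustrative):

HOST_NAME=$(grep '^hostName=' env/dev.properties | cut -d'=' -f2)
mvn clean install -DhostName=${HOST_NAME}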


Once we complete the environment-specific builds we will have one CAR file per environment; they contain the same endpoints, but the endpoints actually point to different URLs, as follows.

|___ End Point Project
    |___DEV
        |__ backendEP.xml     //points to dev backend,   http://dev.wso2as.com/9763
    |___QA
        |__ backendEP.xml     //points to qa backend,      http://qa.wso2as.com/9763


sanjeewa malalgodaBuilding Services with WSO2 Microservices Framework for Java

Many organizations today are leveraging microservices architecture (MSA), which is becoming increasingly popular because of its many potential advantages. This webinar introduces WSO2 Microservices Framework for Java (MSF4J), which provides the necessary framework and tooling to build an MSA solution. Recently I presented about microservices at the Minneapolis, MN user group meetup (http://www.meetup.com/WSO2-Middleware/events/231390337). During that session we discussed the following topics.
  • What is Microservice?
  • Why Microservice?
  • Microservices outer architecture.
  • WSO2 Microservice movement.
  • Introduction to WSO2 MSF4J
  • Implementation of WSO2 MSF4J
  • Develop Microservices with MSF4J(security, metrics etc. )
  • Demo.



Afkham AzeezMicroservices Circuit Breaker Implementation



Circuit breaker


Introduction

Circuit breaker is a pattern used for fault tolerance and the term was first introduced by Michael Nygard in his famous book titled "Release It!". The idea is, rather than wasting valuable resources trying to invoke an operation that keeps failing, the system backs off for a period of time, and later tries to see whether the operation that was originally failing works.

A good example would be, a service receiving a request, which in turn leads to a database call. At some point in time, the connectivity to the database could fail. After a series of failed calls, the circuit trips, and there will be no further attempts to connect to the database for a period of time. We call this the "open" state of the circuit breaker. During this period, the callers of the service will be served from a cache. After this period has elapsed, the next call to the service will result in a call to the database. This stage of the circuit breaker is called the "half-open" stage. If this call succeeds, then the circuit breaker goes back to the closed stage and all subsequent calls will result in calls to the database. However, if the database call during the half-open state fails, the circuit breaker goes back to the open state and will remain there for a period of time, before transitioning to the half-open state again.

Other typical examples of the circuit breaker pattern being useful would be a service making a call to another service, and a client making a call to a service. In both cases, the calls could fail, and instead of indefinitely trying to call the relevant service, the circuit breaker would introduce some back-off period, before attempting to call the service which was failing.

Implementation with WSO2 MSF4J

I will demonstrate how a circuit breaker can be implemented using the WSO2 Microservices Framework for Java (MSF4J) & Netflix Hystrix. We take the stockquote service sample and enable a circuit breaker on it. Assume that the stock quotes are loaded from a database. We wrap the calls to this database in a Hystrix command. If the database calls fail, the circuit trips and stock quotes are served from a cache.

The complete code is available at https://github.com/afkham/msf4j-circuit-breaker

NOTE: To keep things simple and focus on the implementation of the circuit breaker pattern, rather than making actual database calls, we have a class called org.example.service.StockQuoteDatabase, and calls to its getStock method could result in timeouts or failures. To see an MSF4J example of how to make actual database calls, see https://github.com/sagara-gunathunga/msf4j-intro-webinar-samples/tree/master/HelloWorld-JPA.

The complete call sequence is shown below. StockQuoteService is an MSF4J microservice.



Configuring the Circuit Breaker

The circuit breaker is configured as shown below.

We enable the circuit breaker and timeouts, set the failure threshold that triggers circuit tripping to 50, and set the timeout to 10ms, so any database call that takes more than 10ms is also registered as a failure. For other configuration parameters, please see https://github.com/Netflix/Hystrix/wiki/Configuration
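
The configuration snippet itself is not reproduced here, so the following is only a minimal sketch of what it could look like using Hystrix's standard property setters; the Stock type and the GetStockCommand and StockQuoteCache names are illustrative rather than the exact classes used in the sample repository, and the precise tripping semantics of each threshold are described in the Hystrix configuration wiki linked above.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;

// Hypothetical command wrapping the (simulated) database call.
public class GetStockCommand extends HystrixCommand<Stock> {

    private final StockQuoteDatabase database;
    private final String symbol;

    public GetStockCommand(StockQuoteDatabase database, String symbol) {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("StockQuoteGroup"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withCircuitBreakerEnabled(true)                  // turn the circuit breaker on
                        .withExecutionTimeoutEnabled(true)                // enable timeouts
                        .withExecutionTimeoutInMilliseconds(10)           // calls slower than 10ms count as failures
                        .withCircuitBreakerRequestVolumeThreshold(50)));  // failure volume that can trip the circuit
        this.database = database;
        this.symbol = symbol;
    }

    @Override
    protected Stock run() {
        return database.getStock(symbol);    // primary path: the database call
    }

    @Override
    protected Stock getFallback() {
        return StockQuoteCache.get(symbol);  // served from the cache while the circuit is open
    }
}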

Building and Running the Sample

Checkout the code from https://github.com/afkham/msf4j-circuit-breaker & use Maven to build the sample.

mvn clean package

Next run the MSF4J service.

java -jar target/stockquote-0.1-SNAPSHOT.jar 

Now let's use cURL to repeatedly invoke the service. Run the following command;

while true; do curl -v http://localhost:8080/stockquote/IBM ; done

The above command will keep invoking the service. Observe the output of the service in the terminal. You will see that some of the calls will fail on the service side and you will be able to see the circuit breaker fallback in action and also the circuit breaker tripping, then going into the half-open state, and then closing.








Chathurika Erandi De SilvaWSO2 ESB: Connectors, DATA MAPPER -> Nutshell


Sample Scenario

We will send a query to Salesforce and obtain data. Next we will use this data and generate an email using the Google Gmail API. With WSO2 ESB, we have the capability of using the Salesforce and Gmail connectors. These connectors contain many operations that are useful for performing different tasks with the relevant apps. For this sample, I will be using the query operation of the Salesforce connector and the createAMail operation of the Gmail connector.


Section 1

Setting up Salesforce Account

In order to execute the sample with the ESB, a Salesforce account should be set up in the following manner:

  1. Create a Salesforce Free Developer Account
  2. Go to Personal Settings
  3. Reset the security token and obtain the new token

The above token should be appended to the password (as password<token>) when using the ESB Salesforce connector operations.

Obtaining information from Google Gmail API

The WSO2 ESB Gmail connector operations require the user ID, access token, client ID, client secret and refresh token to call the Gmail API. Follow the steps below to retrieve that information.

  1. Register a project in Google Developer Console
  2. Enable Gmail API for the project
  3. Obtain Client ID and Client Secret for the project by generating credentials
  4. Provide an authorized redirect URL for the project.
  5. Give the following request in the browser to obtain the code

https://accounts.google.com/o/oauth2/auth?redirect_uri=<redirect_uri>&response_type=code&client_id=<client_id>&scope=https://mail.google.com/+https://www.googleapis.com/auth/gmail.compose+https://www.googleapis.com/auth/gmail.insert+https://www.googleapis.com/auth/gmail.labels+https://www.googleapis.com/auth/gmail.modify+https://www.googleapis.com/auth/gmail.readonly+https://www.googleapis.com/auth/gmail.send&approval_prompt=force&access_type=offline

This will give a code as below

<Redirect URL>?code=<code>

E.g.



6.  Send the following payload to the below given endpoint

HTTP Method: POST

Payload:
code: <code obtained in the above step>
client_id: <client_id obtained above>
client_secret: <client_secret obtained above>
redirect_uri: <redirect uri authorized for the web client in the project>
grant_type:authorization_code
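
For reference, this token request is normally sent to Google's standard OAuth 2.0 token endpoint; the URL below is the well-known Google one (it is not taken from the original post), so double-check it against the current Google OAuth documentation:

curl -X POST https://accounts.google.com/o/oauth2/token \
  -d "code=<code obtained above>" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>" \
  -d "redirect_uri=<authorized redirect uri>" \
  -d "grant_type=authorization_code"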
This will give you an output as below

{
 "access_token": "ya29.Ci8CA7JMJYDrKqWsa-jaYUQhuKnQsx4vYdUin7bvjToReA9FD6Z5GeRHeBozFlLowg",
 "token_type": "Bearer",
 "expires_in": 3600,
 "refresh_token": "1/RUjHwS-5pW9HEJ7U8HfZTQPdG-fj7juqeBtAKhScNeg"
}


Now we are ready to go through the second part of the post.


Section 2

  1. Create an ESB Config Project using WSO2 ESB Tooling
  2. Add the SalesForce Connector and the Gmail Connector to the project
  3. Create a Sequence

In this sample scenario I am reading the request and obtaining the query that will be sent to Salesforce, the subject of the mail to be generated and the recipient of the email. This information is set as message context properties which will be used later.

<property expression="//test:query" name="Query" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>
   <property expression="//test:subject" name="Subject" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>
   <property expression="//test:recipient" name="Recipient" scope="default" type="STRING" xmlns:test="org.wso2.sample"/>

3.a.  Add the query operation from the SalesForce connector
3.b. Create a new configuration (in Properties view of the Dev Studio) for the query connector and provide the below information

Configuration name: <name for the configuration>
Username: <username of the salesforce account>
Password: <password<token> of salesforce account>
Login URL: <specific login URL for the salesforce>


3.c. In the Properties view of the query operation, provide the following values as shown in the image

queryOperation.png

The source view will be as below and a local entry named salesforce should be created in the project under local entries.

<salesforce.query configKey="salesforce">
       <batchSize>200</batchSize>
       <queryString>{$ctx:Query}</queryString>
   </salesforce.query>


This will return a set of data in xml format as below

Sample

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope
   xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
   xmlns="urn:partner.soap.sforce.com"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns:sf="urn:sobject.partner.soap.sforce.com">
   <soapenv:Header>
       <LimitInfoHeader>
           <limitInfo>
               <current>11</current>
               <limit>15000</limit>
               <type>API REQUESTS</type>
           </limitInfo>
       </LimitInfoHeader>
   </soapenv:Header>
   <soapenv:Body>
       <queryResponse>
           <result xsi:type="QueryResult">
               <done>true</done>
               <queryLocator xsi:nil="true"/>
               <records xsi:type="sf:sObject">
                   <sf:type>Account</sf:type>
                   <sf:Id xsi:nil="true"/>
                   <sf:MasterRecordId xsi:nil="true"/>
                   <sf:Name>Burlington Textiles Corp of America</sf:Name>
                   <sf:AccountNumber>CD656092</sf:AccountNumber>
                   <sf:Phone>(336) 222-7000</sf:Phone>
                   <sf:BillingCountry>USA</sf:BillingCountry>
                   <sf:BillingPostalCode>27215</sf:BillingPostalCode>
                   <sf:BillingState>NC</sf:BillingState>
                   <sf:BillingCity>Burlington</sf:BillingCity>
                   <sf:ShippingCountry xsi:nil="true"/>
               </records>
            </result>
        </queryResponse>
    </soapenv:Body>
</soapenv:Envelope>




3.d. Add an Iterate mediator to the sequence. This will iterate through the obtained XML content.

3.e. Add a Data Mapper mediator to map the XML entities to Gmail email components as below

Data Mapper mediator configuration

DataMapperSalesForce_2.png

For the input and output types of the mapping, use xml and connector respectively. The output connector type will be Gmail.

salesforceDataMapper.png




3.f. Next, add the createAMail operation from the Gmail connector to the sequence

The final sequence view will be as follows

SalesForceSeq.png

3.g. Create a new configuration with the createAMail operation as below

Configuration Name: <provide a name for the configuration>
User ID: <provide the username using which the google project was created before>
Access Token: <Access Token obtained in section 1>
Client ID: <Client ID obtained in section 1>
Refresh Token: <Refresh Token obtained in section 1>

3.h. Configure the createAMail as shown in the below image

createAMail.png

Source view of configuration

 <gmail.createAMail configKey="gmail">
                   <to>{$ctx:Recipient}</to>
                   <subject>{$ctx:Subject}</subject>
   </gmail.createAMail>


There will be another local entry, named gmail, created in the project after this point.

4. Create an Inbound Endpoint and associate the above sequence

5. Create a Connector Explorer Project in the workspace and add the SalesForce, Gmail connectors to it

connectorExplorer.png


6. Create a CAR file with the following

ESB Config Project
Registry Resource Project for Data Mapper
Connector Explorer Project

7. Deploy the CAR file in the WSO2 ESB

Invoke the inbound endpoint

Sample Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:test="org.wso2.sample">
  <soapenv:Header/>
  <soapenv:Body>
    <test:query>select MasterRecordId,name,AccountNumber,Phone,BillingCountry,BillingPostalCode,BillingState,BillingCity,ShippingCountry from Account WHERE BillingCountry='USA'</test:query>
    <test:subject>Test Salesforce</test:subject>
    <test:recipient>sashikawso2@gmail.com</test:recipient>
  </soapenv:Body>
</soapenv:Envelope>


A mail will be sent to the recipient after invocation with the given subject and the mapped data as the message body.




Srinath PereraRolling Window Regression: a Simple Approach for Time Series Next value Predictions

Given a time series, predicting the next value is a problem that has fascinated programmers for a long time. Obviously, a key reason for this attention is stock markets, which promise untold riches if you can crack them. However, except for a few (see A rare interview with the mathematician who cracked Wall Street), those riches have proved elusive.

Thanks to IoT (Internet of Things), time series analysis is poised to make a comeback into the limelight. IoT lets us place ubiquitous sensors everywhere, collect data, and act on that data. IoT devices collect data over time, and the resulting data are almost always time series data.

Following are a few use cases for time series prediction.

  1. Power load prediction
  2. Demand prediction for Retail Stores
  3. Services (e.g. airline check in counters, government offices) client prediction
  4. Revenue forecasts
  5. ICU care vital monitoring
  6. Yield and crop prediction

Let’s explore the techniques available for time series forecasts.

The first question is, "isn't it just regression?". It is close, but not the same as regression. In a time series, each value is affected by the values just preceding it. For example, if there is a lot of traffic at 4.55 at a junction, chances are that there will be some traffic at 4.56 as well. This is called autocorrelation. Plain regression only considers x(t), while due to autocorrelation, x(t-1), x(t-2), … will also affect the outcome. So we can think of time series forecasts as regression that factors in autocorrelation as well.

For this discussion, let's consider the "Individual household electric power consumption Data Set", which is data collected from one household over four years at one-minute intervals. Let's only consider three fields, so the data set will look like the following.

The first question to ask is how do we measure success? We do this via a loss function, which we try to minimize. There are several loss functions, and they have different pros and cons.

  1. MAE ( Mean absolute error) — here all errors, big and small, are treated equal.
  2. Root Mean Square Error (RMSE) — this penalizes large errors due to the squared term. For example, with errors [0.5, 0.5] and [0.1, 0.9], MAE for both is 0.5, while RMSE is 0.5 and about 0.64 respectively.
  3. MAPE ( Mean Absolute Percentage Error) — Since #1 and #2 depend on the value range of the target variable, they cannot be compared across data sets. In contrast, MAPE is a percentage, hence relative. It is like accuracy in a classification problem, where everyone knows 99% accuracy is pretty good.
  4. RMSEP ( Root Mean Square Percentage Error) — This is a hybrid between #2 and #3.
  5. Almost correct Predictions Error rate (AC_errorRate) — the percentage of predictions that are within p% of the true value

If we are trying to forecast the next value, we have several choices.

ARIMA Model

The gold standard for this kind of problem is the ARIMA model. The core idea behind ARIMA is to break the time series into different components, such as a trend component, a seasonality component, etc., and carefully estimate a model for each component. See Using R for Time Series Analysis for a good overview.

However, ARIMA has an unfortunate problem. It needs an expert (a good statistics degree or a grad student) to calibrate the model parameters. If you want to do multivariate ARIMA, that is, to factor in multiple fields, then things get even harder.

However, R has a function called auto.arima, which estimates model parameters for you. I tried that out.

library("forecast")
....
x_train <- train data set
x_test <- test data set
..
powerTs <- ts(x_train, frequency=525600, start=c(2006,503604))
arimaModel <- auto.arima(powerTs)
powerforecast <- forecast.Arima(arimaModel, h=length(x_test))
accuracy(powerforecast)

You can find a detailed discussion of how to do ARIMA from the links given above. I only used 200k rows from the data set, as our focus is mid-size data sets. It gave a MAPE of 19.5.

Temporal Features

The second approach is to come up with a list of features that capture the temporal aspects so that the autocorrelation information is not lost. For example, stock market technical analysis uses features built from moving averages. In the simple case, an analyst will track 7-day and 21-day moving averages and take decisions based on crossover points between those values.

Following are some feature ideas

  1. collection of moving averages/ medians(e.g. 7, 14, 30, 90 day)
  2. Time since certain event
  3. Time between two events
  4. Mathematical measures such as Entropy, Z-scores etc.
  5. X(t) raised to functions such as power(X(t),n), cos((X(t)/k)) etc

A common trick is to apply those features with techniques like Random Forests and Gradient Boosting, which can provide the relative feature importance. We can use that information to keep good features and drop ineffective features.

I will not dwell too much on this topic. However, with some hard work, this method has been shown to give very good results. For example, most competitions are won using this method (e.g. http://blog.kaggle.com/2016/02/03/rossmann-store-sales-winners-interview-2nd-place-nima-shahbazi/).

The downside, however, is that crafting features is a black art. It takes a lot of work and experience to craft good features.

Rolling Windows based Regression

Now we get to the interesting part. It seems there is another method that gives pretty good results without a lot of hand-holding.

The idea is that to predict X(t+1), the next value in a time series, we feed not only X(t), but also X(t-1), X(t-2), etc. to the model. A similar idea has been discussed in Rolling Analysis of Time Series, although it is used to solve a different problem.

Let's look at an example. Let's say that we need to predict x(t+1) given X(t). Then the source and target variables will look like the following.

The data set would look like the following after being transformed with a rolling window of three.
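
As a small illustration (the numbers here are made up for this sketch and are not from the power consumption data set), a series 10, 12, 13, 15, 16, 18 transformed with a rolling window of three becomes:

X(t-2)  X(t-1)  X(t)   ->  target X(t+1)
  10      12     13    ->  15
  12      13     15    ->  16
  13      15     16    ->  18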

Then we use the transformed data set with a well-known regression algorithm such as linear regression or Random Forest regression. The expectation is that the regression algorithm will figure out the autocorrelation coefficients from X(t-2) to X(t).

For example, with the above data set, applying linear regression on the transformed data set using a rolling window of 14 data points provided the following results. Here AC_errorRate considers a forecast to be correct if it is within 10% of the actual value.

LR AC_errorRate=44.0 RMSEP=29.4632 MAPE=13.3814 RMSE=0.261307

This is pretty interesting, as this beats auto ARIMA right away (MAPE 19.5 vs 13.4 with rolling windows).

So far we have only tried linear regression. Then I tried out several other methods, and the results are given below.

Linear regression still does pretty well; however, it is weak at keeping the error rate within 10%. Deep learning is better on that aspect; however, it took some serious tuning. Please note that the tests are done with 200k data points, as my main focus is on small data sets.

I got the best results from a neural network with 2 hidden layers of 20 units each, with zero dropout or regularisation, the "relu" activation function, and the Adam optimizer (lr=0.001) running for 500 epochs. The network is implemented with Keras. While tuning, I found articles [1] and [2] pretty useful.

Then I tried out the same idea with a few more datasets.

  1. Milk production Data set ( small < 200 data points)
  2. Bike sharing Data set (about 18,000 data points)
  3. USD to Euro Exchange rate ( about 6500 data points)
  4. Apple Stocks Prices (about 13000 data points)

Forecasts are done as univariate time series; that is, we only consider the time stamps and the value we are forecasting. Any missing value is imputed using padding (using the most recent value). For all tests, we used a window of size 14 as the rolling window.

The following table shows the results. Here, except for auto.arima, the other methods use a rolling-window-based data set.

There is no clear winner. However, the rolling window method we discussed, coupled with a regression algorithm, seems to work pretty well.

Conclusion

We discussed three methods for time series next-value forecasts with medium-size data sets: ARIMA, using features to represent time effects, and rolling windows.

Among the three, the third method provides results comparable with the auto ARIMA model while needing only minimal hand-holding from the end user.

Hence we believe that “Rolling Window based Regression” is a useful addition for the forecaster’s bag of tricks!

However, this does not discredit ARIMA; with expert tuning, it will do much better. At the same time, with hand-crafted features, methods two and three will also do better.

One crucial consideration for the rolling window method is picking the size of the window. Often we can get a good idea from the domain. Users can also do a parameter search on the window size.

Following are a few things that need further exploration.

  • Can we use RNN and CNN? I tried RNN, but could not get good results so far.
  • It might be useful to feed other features such as time of day, day of the week, and also moving averages of different time windows.

References

  1. An overview of gradient descent optimization algorithms
  2. CS231n Convolutional Neural Networks for Visual Recognition

Chathurika Erandi De SilvaJSON to XML mapping using Data Mapper: Quick and Simple guide


This post is a quick and simple walkthrough of creating a simple mapping between JSON and XML using the Data Mapper, aimed at beginners. If you are new to the Data Mapper, please read this before continuing with this post.


For this sample, the JSON and XML below will be used.

Json

{
  "order": [{
    "orderid": "1",
    "ordername": "Test",
    "customer": "test"
  }]
}

XML
<?xml version="1.0" encoding="UTF-8" ?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<order>
  <orderid>1</orderid>
  <ordername>Test</ordername>
  <customer>test</customer>
</order>
</soapenv:Body>
</soapenv:Envelope>

Steps
  1. Create an ESB Config project using the Eclipse IDE (the WSO2 ESB Tooling component should be installed)
  2. Create a sequence
  3. Include the Data Mapper mediator: the following image illustrates a sample sequence, where the output from the Data Mapper mediator is returned to the client.


  4. Create the mapping file by double-clicking the Data Mapper mediator
  5. Include the above JSON schema as the input and the XML schema as the output
  6. Map the values by connecting the variables in the input and output fields
  7. Create an API and include the above created sequence
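
For reference, the source view of such a sequence typically looks like the sketch below; the sequence name, registry paths and mapping configuration name are illustrative and will follow whatever name you gave the mapping when it was created:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="jsonToXmlSequence">
   <datamapper config="gov:datamapper/OrderMapping.dmc"
               inputSchema="gov:datamapper/OrderMapping_inputSchema.json"
               outputSchema="gov:datamapper/OrderMapping_outputSchema.json"
               inputType="JSON" outputType="XML"/>
   <respond/>
</sequence>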

When the API is invoked, the converted and mapped XML is returned to the client as below.




Chanaka JayasenaRole of GREG permission-mappings.xml


GREG has a permission-mappings.xml file. It can be found at <GREG_HOME>/repository/conf/etc/permission-mappings.xml.


Each entry has three attributes.
  • managementPermission
  • resourcePermission
  • resourcePaths
 Ex:

managementPermission="/permission/admin/manage/resources/govern/server/list"
resourcePermission="http://www.wso2.org/projects/registry/actions/get"
resourcePaths="/_system/governance/trunk/servers"
/>

There are default configurations in this file. These entries map each permission in the permission tree to resource paths and assign permissions on them.

With the above line in permission-mappings.xml, a user who is assigned the permission "/permission/admin/manage/resources/govern/server/list" will be able to do get operations on registry resources stored at "/_system/governance/trunk/servers". We can provide multiple resource paths by separating them with commas.

There are 3 types of permissions you can apply.

  1. http://www.wso2.org/projects/registry/actions/get
  2. http://www.wso2.org/projects/registry/actions/add
  3. http://www.wso2.org/projects/registry/actions/delete
We can use these permissions to control the behavior of each permission tree item.

The default behavior implemented with this permission-mappings.xml is documented at the following link.

https://docs.wso2.com/display/Governance520/Roles

Chamila WijayarathnaUpdating RXT's and Lifecycles by a Non Admin User in WSO2 GREG

We can achieve both of these by creating two new roles, each with a specific set of permissions for one of the operations, and adding your users to those roles.
UPDATING RXT


Then, in the Main tab of the management console, select Browse in the Resources section. Then go to the /_system/config/repository/components/org.wso2.carbon.governance/configuration section as shown below.

From there, give read, write and delete permissions to the new role we created, as in the following image.

Now you can assign users to your role and those users will be able to update RXTs.
UPDATING LIFE CYCLES

After creating this role, you can assign users to it and those users will be able to update lifecycles.

Lahiru SandaruwanAccess JWT Token in a Mediator Extension in WSO2 API Manager

There can be requirements for filtering requests at the API Manager layer based on user claims.
That can be easily done using a mediator extension in WSO2 API Manager.

See the references, [1] for enabling JWT and [2] for adding mediator extensions.

Please see the sample I tested below. I used JavaScript to decode the JWT token, using [3] as a reference.

<?xml version="1.0" encoding="UTF-8"?>
<sequence
    xmlns="http://ws.apache.org/ns/synapse" name="Test:v1.0.0--In">
    <log level="custom">
        <property name="--TRACE-- " value="API Mediation Extension"/>
    </log>
    <property name="authheader" expression="get-property('transport','X-JWT-Assertion')"></property>
    <script language="js"> var temp_auth = mc.getProperty('authheader').trim();var val = new Array();val= temp_auth.split("\\."); var auth=val[1];var jsonStr = Packages.java.lang.String(Packages.org.apache.axiom.om.util.Base64.decode(auth), "UTF-8"); var tempStr = new Array();tempStr= jsonStr.split('http://wso2.org/claims/enduser\":\"'); var decoded = new Array();decoded = tempStr[1].split("\"");mc.setProperty("enduser",decoded[0]); </script>
    <log level="custom">
        <property name=" Enduser " expression="get-property('enduser')"/>
    </log>
</sequence> 

I created an API and engaged the above sample as a mediation extension. I retrieved the "enduser" claim as an example.
[1] https://docs.wso2.com/display/AM1100/Passing+Enduser+Attributes+to+the+Backend+Using+JWT
[2] https://docs.wso2.com/display/AM1100/Adding+Mediation+Extensions
[3] http://wso2.com/library/articles/2013/07/use-of-json-web-tokens-in-an-api-fa%C3%A7ade-pattern/



Ushani BalasooriyaDifference in API and User Level Advance Resource throttling in WSO2 API Manager 2.0

From the WSO2 API Manager 2.0 throttling implementation onward, two different throttling levels have been introduced for resource-level throttling.

When you log in to the admin dashboard you can see the advanced resource throttling tier configurations under the throttle policies section, as shown in the screenshot below.




When you add a resource tier, you can select either API level or user level as below.


API Level Resource Tier


For an API-level policy, the tier defines a quota shared by all applications that invoke the API.
If someone selects an API-level policy, then selecting a resource-level policy will be disabled.

So, as an example, if there are 2 users subscribed to the same API, the request count or bandwidth defined for the API applies to both users as a shared quota. So if you have defined 10000 requests per minute, both users share that amount.

User Level Resource Tier

For a user-level policy, the quota is assigned per user invoking the API.
The quota applies to a single user, who may access the particular API from multiple applications; simply put, when it is user level, the throttle key is associated with the username.

So, as an example, if you have selected user level and there are 2 users subscribed to the same API, the count defined in the tier is assigned to each user. So if you have defined 10000 requests per minute, each user gets 10000 requests per minute, for a total of 20000 requests.

Prabath SiriwardenaTen Talks On Microservices You Cannot Miss At Any Cost!

The Internet is flooded with articles, talks and panel discussions on microservices. According to Google Trends, the word "microservices" has shown a steep, upward curve since mid 2014. Finding the best talks among all the published talks on microservices is a hard job, and I might be off track in picking the best 10; my apologies if your most awesome microservices talk is missing here, and please feel free to add a link to it as a comment. To add one more to the pile of microservices talks we already have, I will be doing a talk on Microservices Security at the Cloud Identity Summit, New Orleans next Monday.

Read More...

Dimuthu De Lanerolle

Sample UI test for GREG LifeCycle scenarios

Refer :

[1]

https://github.com/wso2/product-greg/blob/master/modules/integration/tests-ui-integration/tests-es-ui/src/test/java/org/wso2/carbon/greg/ui/test/lifecycle/LifeCycleSmokeUITestCase.java


/*
*Copyright (c) 2005-2015, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
*
*WSO2 Inc. licenses this file to you under the Apache License,
*Version 2.0 (the "License"); you may not use this file except
*in compliance with the License.
*You may obtain a copy of the License at
*
*http://www.apache.org/licenses/LICENSE-2.0
*
*Unless required by applicable law or agreed to in writing,
*software distributed under the License is distributed on an
*"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
*KIND, either express or implied.  See the License for the
*specific language governing permissions and limitations
*under the License.
*/
package org.wso2.carbon.greg.ui.test.lifecycle;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.wso2.carbon.automation.engine.context.TestUserMode;
import org.wso2.carbon.automation.engine.context.beans.User;
import org.wso2.carbon.automation.extensions.selenium.BrowserManager;

import org.wso2.greg.integration.common.ui.page.LoginPage;
import org.wso2.greg.integration.common.ui.page.lifecycle.LifeCycleHomePage;
import org.wso2.greg.integration.common.ui.page.lifecycle.LifeCyclesPage;
import org.wso2.greg.integration.common.ui.page.publisher.PublisherHomePage;
import org.wso2.greg.integration.common.ui.page.publisher.PublisherLoginPage;
import org.wso2.greg.integration.common.ui.page.store.StoreLoginPage;
import org.wso2.greg.integration.common.ui.page.util.UIElementMapper;
import org.wso2.greg.integration.common.utils.GREGIntegrationUIBaseTest;

import static org.testng.Assert.assertTrue;

/**
 * This UI test class covers a full testing scenario of,
 *
 *  1. Uploading a LC to greg,
 *  2. Implementation of rest service,
 *  3. Adding the LC to rest service,
 *  4. Attaching of LC for rest service
 *  5. Promoting of rest service
 *  6. Store related operations - JIRA STORE-1156
 *
 */
public class LifeCycleSmokeUITestCase extends GREGIntegrationUIBaseTest {

    private WebDriver driver;
    private User userInfo;

    private String restServiceName = "DimuthuD";
    private UIElementMapper uiElementMapper;

    @BeforeClass(alwaysRun = true)
    public void setUp() throws Exception {

        super.init(TestUserMode.SUPER_TENANT_ADMIN);
        userInfo = automationContext.getContextTenant().getContextUser();
        driver = BrowserManager.getWebDriver();
        this.uiElementMapper = UIElementMapper.getInstance();
    }

    @AfterClass(alwaysRun = true)
    public void tearDown() throws Exception {
        driver.quit();
    }

    @Test(groups = "wso2.greg", description = "verify login to governance registry")
    public void performingLoginToManagementConsole() throws Exception {

        driver.get(getLoginURL());
        LoginPage test = new LoginPage(driver);
        test.loginAs(automationContext.getContextTenant().getContextUser().getUserName(),
                automationContext.getContextTenant().getContextUser().getPassword());

        driver.get(getLoginURL() + "admin/index.jsp?loginStatus=true");
        LifeCycleHomePage lifeCycleHomePage = new LifeCycleHomePage(driver);


        String lifeCycle = "<aspect name=\"SampleLifeCycle\" class=\"org.wso2.carbon.governance.registry.extensions.aspects.DefaultLifeCycle\">\n" +
                "    <configuration type=\"literal\">\n" +
                "        <lifecycle>\n" +
                "            <scxml xmlns=\"http://www.w3.org/2005/07/scxml\"\n" +
                "                   version=\"1.0\"\n" +
                "                   initialstate=\"Development\">\n" +
                "                <state id=\"Development\">\n" +
                "                    <datamodel>\n" +
                "                        <data name=\"checkItems\">\n" +
                "                            <item name=\"Code Completed\" forEvent=\"\">\n" +
                "                            </item>\n" +
                "                            <item name=\"WSDL, Schema Created\" forEvent=\"\">\n" +
                "                            </item>\n" +
                "                            <item name=\"QoS Created\" forEvent=\"\">\n" +
                "                            </item>\n" +
                "                        </data>\n" +
                "                    </datamodel>\n" +
                "                    <transition event=\"Promote\" target=\"Tested\"/>\n" +
                "                    <checkpoints>\n" +
                "                        <checkpoint id=\"DevelopmentOne\" durationColour=\"green\">\n" +
                "                            <boundary min=\"0d:0h:0m:0s\" max=\"1d:0h:00m:20s\"/>\n" +
                "                        </checkpoint>\n" +
                "                        <checkpoint id=\"DevelopmentTwo\" durationColour=\"red\">\n" +
                "                            <boundary min=\"1d:0h:00m:20s\" max=\"23d:2h:5m:52s\"/>\n" +
                "                        </checkpoint>\n" +
                "                    </checkpoints>\n" +
                "                </state>\n" +
                "                <state id=\"Tested\">\n" +
                "                    <datamodel>\n" +
                "                        <data name=\"checkItems\">\n" +
                "                            <item name=\"Effective Inspection Completed\" forEvent=\"\">\n" +
                "                            </item>\n" +
                "                            <item name=\"Test Cases Passed\" forEvent=\"\">\n" +
                "                            </item>\n" +
                "                            <item name=\"Smoke Test Passed\" forEvent=\"\">\n" +
                "                            </item>\n" +
                "                        </data>\n" +
                "                    </datamodel>\n" +
                "                    <transition event=\"Promote\" target=\"Production\"/>\n" +
                "                    <transition event=\"Demote\" target=\"Development\"/>\n" +
                "                </state>\n" +
                "                <state id=\"Production\">\n" +
                "                    <transition event=\"Demote\" target=\"Tested\"/>\n" +
                "                </state>\n" +
                "            </scxml>\n" +
                "        </lifecycle>\n" +
                "    </configuration>\n" +
                "</aspect>\n";


        lifeCycleHomePage.addNewLifeCycle(lifeCycle);

        driver.get(getLoginURL() + "lcm/lcm.jsp?region=region3&item=governance_lcm_menu");
        LifeCyclesPage lifeCyclesPage = new LifeCyclesPage(driver);

        assertTrue(lifeCyclesPage.checkOnUploadedLifeCycle("SampleLifeCycle"), "Sample Life Cycle Could Not Be Found");

        Thread.sleep(6000);

        driver.findElement(By.linkText("Sign-out")).click();

        log.info("Login test case is completed ");
    }

    @Test(groups = "wso2.greg", description = "logging to publisher",
            dependsOnMethods = "performingLoginToManagementConsole")
    public void performingLoginToPublisher() throws Exception {

        // Setting publisher home page
        driver.get(getPublisherUrl().split("\\/apis")[0]);

        PublisherLoginPage test = new PublisherLoginPage(driver);

        // performing login to publisher
        test.loginAs(userInfo.getUserName(), userInfo.getPassword());

        driver.get(getPublisherUrl().split("/apis")[0] + "/pages/gc-landing");

        PublisherHomePage publisherHomePage = new PublisherHomePage(driver);

        //adding rest service
        publisherHomePage.createRestService(restServiceName, "/lana", "1.2.5");

        driver.findElement(By.cssSelector("div.auth-img")).click();
        driver.findElement(By.linkText("Sign out")).click();

        Thread.sleep(3000);
    }


    @Test(groups = "wso2.greg", description = "Adding of LC to the rest service",
            dependsOnMethods = "performingLoginToPublisher")
    public void addingLCToRestService() throws Exception {

        driver.get(getLoginURL());

        LoginPage loginPage = new LoginPage(driver);
        loginPage.loginAs(automationContext.getContextTenant().getContextUser().getUserName(),
                automationContext.getContextTenant().getContextUser().getPassword());

        loginPage.assignLCToRestService(restServiceName);

        driver.findElement(By.linkText("Sign-out")).click();

        Thread.sleep(3000);
    }

    @Test(groups = "wso2.greg", description = "Promoting of rest service with the LC",
            dependsOnMethods = "addingLCToRestService")
    public void lifeCycleEventsOfRestService() throws Exception {

        // Setting publisher home page
        driver.get(getPublisherUrl().split("/apis")[0]);

        PublisherLoginPage publisherLoginPage = new PublisherLoginPage(driver);

        // performing login to publisher
        publisherLoginPage.loginAs(userInfo.getUserName(), userInfo.getPassword());

        driver.get(getPublisherUrl().split("/apis")[0] + "/pages/gc-landing");

        driver.findElement(By.cssSelector("span.btn-asset")).click();
        driver.findElement(By.linkText("REST Services")).click();
        driver.findElement(By.linkText(restServiceName)).click();

        driver.findElement(By.id("Lifecycle")).click();

        driver.findElement(By.linkText("Other lifecycles")).click();
        driver.findElement(By.linkText("SampleLifeCycle")).click();

        driver.findElement(By.xpath(uiElementMapper.getElement("publisher.promote.checkbox1.xpath"))).click();
        Thread.sleep(2000);
        driver.findElement(By.xpath(uiElementMapper.getElement("publisher.promote.checkbox2.xpath"))).click();
        Thread.sleep(2000);
        driver.findElement(By.xpath(uiElementMapper.getElement("publisher.promote.checkbox3.xpath"))).click();
        Thread.sleep(2000);
        driver.findElement(By.id("lcActionPromote")).click();
        Thread.sleep(2000);
        driver.findElement(By.xpath(uiElementMapper.getElement("publisher.promote.checkbox1.xpath"))).click();
        Thread.sleep(2000);
        driver.findElement(By.xpath(uiElementMapper.getElement("publisher.promote.checkbox2.xpath"))).click();
        Thread.sleep(2000);
        driver.findElement(By.xpath(uiElementMapper.getElement("publisher.promote.checkbox3.xpath"))).click();
        Thread.sleep(2000);
        driver.findElement(By.id("lcActionPromote")).click();

        // removal of added rest service
        driver.findElement(By.id("Edit")).click();
        driver.findElement(By.id("Delete")).click();
        driver.findElement(By.id("btn-delete-con")).click();
    }

}

Dimuthu De Lanerolle






[1] Docker + Java 

The code segment below can be used to push your Docker images to a public Docker registry/hub.


  import java.io.BufferedReader;
  import java.io.InputStreamReader;

  private StringBuffer output = new StringBuffer();

  // Runs an external command (e.g. "docker push ...") and returns its standard output
  private String gitPush(String command) {

      Process process;
      try {
          process = Runtime.getRuntime().exec(command);
          process.waitFor();
          BufferedReader reader =
                  new BufferedReader(new InputStreamReader(process.getInputStream()));

          String line;
          while ((line = reader.readLine()) != null) {
              output.append(line).append("\n");
          }

      } catch (Exception e) {
          e.printStackTrace();
      }

      return output.toString();
  }
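
For example, assuming you have already built the image locally and run docker login (the repository name and tag below are hypothetical), the helper above could be invoked like this:

  // hypothetical image name; any CLI command works since the helper just shells out
  String pushLog = gitPush("docker push myaccount/myimage:1.0.0");
  log.info(pushLog);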


Build Docker image using fabric8
---------------------------------------

public static boolean buildDockerImage(String dockerUrl, String image, String imageFolder)
            throws InterruptedException, IOException {

        Config config = new ConfigBuilder()
                .withDockerUrl(dockerUrl)
                .build();

        DockerClient client = new DefaultDockerClient(config);
        final CountDownLatch buildDone = new CountDownLatch(1);


        OutputHandle handle = client.image().build()
                .withRepositoryName(image)
                .usingListener(new EventListener() {
                    @Override
                    public void onSuccess(String message) {
                        log.info("Success:" + message);
                        buildDone.countDown();
                    }

                    @Override
                    public void onError(String message) {
                        log.error("Failure:" + message);
                        buildDone.countDown();
                    }

                    @Override
                    public void onEvent(String event) {
                        log.info("Success:" + event);
                    }
                })
                .fromFolder(imageFolder);

        buildDone.await();
        handle.close();
        client.close();
        return true;
    }
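
As a usage sketch (the socket path, image name and build-context folder below are placeholders; the folder is assumed to contain a Dockerfile):

  boolean built = buildDockerImage("unix:///var/run/docker.sock",
          "myaccount/myimage:1.0.0", "/home/user/myimage-docker-context");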

Chanaka JayasenaAdding custom validations to WSO2 Enterprise Store - Publisher asset creation page.

Writing an asset extension

Asset extensions allow you to add custom functionality and a custom look and feel to a certain asset type in WSO2 Enterprise Store. In this example, I am going to create a new asset type and add a custom validation to the asset creation page.

Download the WSO2 ES product distribution from wso2.com and extract it to your local drive. Start the server and fire up a browser with the admin console URL.

https://:9443/carbon/

Navigate to Extensions > Configure > Artifact Types and click the "Add new Artifact" link.

This will load the UI with a new default artifact.

Notice that it has the following attributes in its root node.

shortName="applications" singularLabel="Enterprise Application" pluralLabel="Enterprise Applications"

"applications" is the name of new asset type we are going to add.

In our example, we are going to add a custom validation to the asset creation page. Let's add the fields we are going to validate.

Our intention is to provide four checkboxes and display an error message if none of them is selected. In other words, if the user does not check any of the four checkboxes we provide, an error message will be displayed.

 <table name="Tier Availability">
    <field type="checkbox">
        <name>Unlimited</name>
    </field>
    <field type="checkbox">
        <name>Gold</name>
    </field>
    <field type="checkbox">
        <name>Silver</name>
    </field>
    <field type="checkbox">
        <name>Bronze</name>
    </field>
 </table>

Restart the server.
Now when we go to the publisher app, the new content type is available.

Now click the Add button to add a new asset of "Enterprise Applications".

You will notice that the new section we added in the RXT is available on this page.

We are going to add a custom validation to these four checkboxes: it will check whether at least one of them is selected and, if not, show an error message below the checkboxes.

We can't make any changes to the core web application, but it's possible to add custom behavior via ES extensions. Since this is an asset-specific customization which needs to apply only to the new "applications" asset type, we need to create an "asset extension". There is one other extension type called an "app extension", which applies to all asset types across the whole web application.

Navigate to "repository/deployment/server/jaggeryapps/publisher/extensions/assets" folder. Create a new folder and name it "applications". Note that the "applications" is the value of "shortName" attribute we gave on rxt creation. The complete path to the new folder is "repository/deployment/server/jaggeryapps/publisher/extensions/assets/applications".

With this folder we can now override the default files of the core application. We need to add a client-side JavaScript file to the add-asset page, and in that file we need to register the client-side events that validate the checkboxes.

Note the folder structure in "repository/deployment/server/jaggeryapps/publisher/" (say [A]). We can override files from [A] in "repository/deployment/server/jaggeryapps/publisher/extensions/assets/applications" (say [B]).

Copy [A]/themes/default/helpers/create_asset.js to [B]/themes/default/helpers/create_asset.js.

Add a new client-side script of your choice to the list of js files in [B]/themes/default/helpers/create_asset.js. I added a file called 'custom-checkbox-validate.js'. The content of [B]/themes/default/helpers/create_asset.js will be as follows.


var resources = function (page, meta) {
    return {
        js: ['libs/jquery.form.min.js', 'publisher-utils.js', 'create_asset.js', 'jquery.cookie.js',
             'date_picker/datepicker.js', 'select2.min.js', 'tags/tags-common.js',
             'tags/tags-init-create-asset.js', 'custom-checkbox-validate.js'],
        css: ['bootstrap-select.min.css', 'datepick/smoothness.datepick.css',
              'date_picker/datepicker/base.css', 'date_picker/datepicker/clean.css', 'select2.min.css'],
        code: ['publisher.assets.hbs']
    };
};

Now create the new file [B]/themes/default/js/custom-checkbox-validate.js.
Put an alert('') in the file and check whether you get a browser alert message.

If you are not getting the alert message, the following are probable causes:

  • You haven't restarted the server after creating the extension.
  • The asset type does not match the folder name.
  • The folder structure in the extension does not align with the core application folder structure.

Update custom-checkbox-validate.js with the following content. The script below validates that at least one of the four checkboxes is checked.


$(document).ready(function () {
    validator.addCustomMethod(function () {
        //Get the 4 checkboxes' jQuery objects.
        var bronze = $('#tierAvailability_bronze');
        var unlimited = $('#tierAvailability_unlimited');
        var gold = $('#tierAvailability_gold');
        var silver = $('#tierAvailability_silver');
        var errorMessage = "You need to check at least one of the tiers";

        /*
         * Custom event handler for the four checkboxes.
         * The error is shown after the input element by default.
         */
        var checkboxClickCustomHandler = function () {
            if (silver.is(':checked') || gold.is(':checked') || unlimited.is(':checked') || bronze.is(':checked')) {
                bronze.removeClass("error-field");
                bronze.next().remove();
            } else {
                validator.showErrors(bronze, errorMessage);
            }
        };

        //Register the event handler for the four checkboxes
        bronze.click(checkboxClickCustomHandler);
        unlimited.click(checkboxClickCustomHandler);
        gold.click(checkboxClickCustomHandler);
        silver.click(checkboxClickCustomHandler);

        // Do the custom validation: check that at least one checkbox is checked.
        // Return type is a json of format {message:"", element:}
        if (silver.is(':checked') || gold.is(':checked') || unlimited.is(':checked') || bronze.is(':checked')) {
            return {message: "", element: bronze};
        } else {
            return {message: errorMessage, element: bronze};
        }
    });
});

Chintana WilamunaSAML2 SSO for Ruby on Rails apps with Identity Server

The Identity Server documentation has samples on how to configure SAML2 SSO for a Java webapp. SAML2 allows decoupling service providers from identity providers. The only requirement is the ability to create/process SAML2 messages, which are XML. In an identity management scenario, the service provider (referred to as SP) is typically a web application. The identity provider (or IdP) is any system that provides a user repository, authentication and user profile details to the service provider application.

TLDR;

If you want to skip the rest of the post and get going right away, follow these steps:

  1. Download and start Identity Server
  2. Clone the modified rails project from here - https://github.com/chintana/ruby-saml-example
  3. Create a cert keypair and update the SP cert settings in rails app
  4. Upload public cert to Identity Server
  5. Add an SP entry for the rails app in Identity Server (the Issuer is already mentioned in app/models/account.rb)
  6. Login to Rails app - use admin/admin default credentials. You can add more users and use their accounts to login as well

Changes to the rails app

The rest of the post covers the details and changes I had to make to get single sign-on as well as single logout working.

I’m using the sample skeleton rails project that’s integrated with ruby-saml. We need to configure details on connecting to Identity Server. SAML settings are in app/models/account.rb.

First up we need to change IdP settings to match what we have in Identity Server.

settings.issuer                         = "railsapp"
settings.idp_entity_id                  = "localhost"
settings.idp_sso_target_url             = "https://localhost:9443/samlsso"
settings.idp_slo_target_url             = "https://localhost:9443/samlsso"
settings.idp_cert                       = "-----BEGIN CERTIFICATE-----
MIICNTCCAZ6gAwIBAg ...
-----END CERTIFICATE-----"

The certificate here is the default certificate that comes with Identity Server 5.1.0. I’ve truncated it for brevity. Then we need the following two entries for decrypting the encrypted SAML assertions. Certs are truncated.

settings.certificate                    = "-----BEGIN CERTIFICATE-----
MIICfjCCAeegAwIBAgIEFFIb3D ...
-----END CERTIFICATE-----"
settings.private_key                    = "-----BEGIN PRIVATE KEY-----
MIICdgIBADANBgkqhkiG9w0BAQEFAA ...
-----END PRIVATE KEY-----"

When we’re sending logout requests we need to include the session index with the request, so that Identity Server will log out the correct user. The Rails sample doesn’t send the session index by default, so you’ll see an exception saying the session index cannot be null when you do single logout. The changes can be found here.

Then we need to make the saml/logout route accessible through HTTP POST, so an additional route is needed:

post :logout

Service provider configuration in Identity Server

At the Identity Server we need to register the rails app as a service provider.

Then click on Inbound Authentication Configuration, click SAML2 Web SSO Configuration.

In the above configuration, http://localhost:3000/saml/acs is the assertion consumer URL, and the certificate is the one created earlier. We also need to configure the single logout URL - http://localhost:3000/saml/logout

Tracing SAML messages

I’m using the SAML Chrome extension to monitor SAML messages. The first SAML call is for authentication. This is the message sent from the rails app to Identity Server.

The second message is the SAML response sent from Identity Server to the rails application. As you can see in the screenshot below, the SAML response is encrypted.

Let’s do single logout from rails app. Here’s the logout request with session index.

Then the logout response we get from Identity Server.

With a library that supports processing SAML requests almost any web app can be integrated into Identity Server using SAML2.

Afkham AzeezThis blog has reached EOL




After a decade, I am shutting down this blog. I will be writing on http://me.afkham.org, which is my publication on Medium. Medium is so much more user-friendly than Blogger, and the user experience is awesome.

Even though I will not make any new posts here, I will still retain the posts in this blog because many of them have been referenced from other places.

Follow me on Medium: https://medium.com/@afkham_azeez

Ushani BalasooriyaScenarios to Understand Subscription Tier Throttling in WSO2 API Manager 2.0


  • With the new throttling implementation, the subscription level tier quota defines the limit with which a particular application can access that API.

  • So basically the subscription level throttling policy is counted per: appId + apiName + version.
  • This can be defined as a request count or as bandwidth.
  • If an application is used by 1000 users and is subscribed to a 50000 Req/Min tier, then all 1000 subscribed users together can invoke a maximum of 50000 requests per minute, since the subscription level tier policy does not consider user identity.
  • With the previous throttling implementation, any application user could access a limit of 50000 Req/Min.

  • When configuring a subscription level tier, burst/rate limiting is also introduced to control the maximum requests that can be sent by a particular subscriber in a given period.
  • So if the burst limit is configured as 1000 requests/s, each user will be able to send at most 1000 requests per second until the 50000-request limit for that minute is reached.
  • If there are 10 users subscribed via 10 different applications, each user gets 50000 requests, with the limitation of sending bursts of at most 1000 requests per second.


Scenario 1 – Different users using the same application with their user tokens


Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via the same application, when the quota limit is 'request count' and there is no burst limit.


Preconditions:

1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
2. A subscription tier should be created as below.
  • Tier Name : Silver
  • Request Count : 2000
  • Unit Time : 1 minute
  • Burst Control (Rate Limiting) : 0
  • Stop on Quota Reach : Not selected
  • Billing Plan : Free or Commercial
  • Custom Attributes : None
  • Permissions : not defined
3. API 1 should have been created and published as below by a publisher,
  • Subscription Tiers : Silver
  • GET resource level Tier : Unlimited is set
4. A developer subscribes to the API1
  • Application created with an Unlimited Tier. app1
  • Subscribe using application with Silver
5. Generate production keys for the particular application app1 and retrieve consumer key and secret.
6. User1 and User2 should generate their user tokens using the consumer key and secret generated in the above step.
User1 using app1 :

User1 Token = curl -k -d "grant_type=password&username=<Username>
&password=<Password>" -H "Authorization: Basic <app1_Token>" 
https://192.168.124.1:8243/token
 
 
User2 using app1 :

User2 Token = curl -k -d "grant_type=password&username=<Username>
&password=<Password>" -H "Authorization: Basic <app1_Token>" 
https://192.168.124.1:8243/token
 
 
Authorization: Basic <app1_token> =
<Base64encode(consumerkey:consumer secret of app1)>
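
As a quick illustration of how that header value is formed, the Basic credentials are just the base64 encoding of consumerKey:consumerSecret. A minimal Java sketch (the key and secret values are placeholders):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

String consumerKey = "xxxxxxxx";      // placeholder
String consumerSecret = "yyyyyyyy";   // placeholder
String basicAuth = "Basic " + Base64.getEncoder()
        .encodeToString((consumerKey + ":" + consumerSecret).getBytes(StandardCharsets.UTF_8));
// use basicAuth as the Authorization header value when calling the /token endpoint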

Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens
  • User1 : 900 requests using user1 token
  • User2 : 1101 requests using user2 token
Expected Result: The user who exceeds the 2000th request should be notified as throttled out.


Scenario 2 : Same user using different applications with their user tokens



Throttle out an API by a subscription tier when the same user invokes a particular API subscribed via different applications, when the quota limit is 'request count' and there is no burst limit.



Preconditions:

1. API Manager should be up and running and user1 should be signed up with subscriber permission.
2. A subscription tier should be created as below.
  • Tier Name : Silver
  • Request Count : 2000
  • Unit Time : 1 minute
  • Burst Control (Rate Limiting) : 0
  • Stop on Quota Reach : Not selected
  • Billing Plan : Free or Commercial
  • Custom Attributes : None
  • Permissions : not defined
3. API 1 should have been created and published as below by a publisher,
  • Subscription Tiers : Silver
  • GET resource level Tier : Unlimited is set
4. A developer subscribes to the API1
  • 2 Applications created with an Unlimited Tier. app1 and app2
  • Subscribe API1 using applications (app1 and app2) with Silver
5. Generate production keys for the particular applications app1 and app2 and retrieve the consumer key and secret.
6. User1 should generate the user tokens using the consumer key and secret generated in the above step for both apps

User1 using app1 :

User1 Token1 = curl -k -d "grant_type=password&username=<Username>&password=<Password>"
 -H "Authorization: Basic <app1_Token>" https://192.168.124.1:8243/token
 
 
User1 using app2 :

User1 Token2 = curl -k -d "grant_type=password&username=<Username>&password=<Password>"
 -H "Authorization: Basic <app2_Token>" https://192.168.124.1:8243/token
 
 
Authorization: Basic <app1_token>
= <Base64encode(consumerkey:consumer secret of app1)>

Authorization: Basic <app2_token>
 = <Base64encode(consumerkey:consumer secret of app2)>


Step: User 1 invokes the GET resource as below within a minute using user token1 and token2

  • 900 requests using user1 token1
  • 1101 requests using user1 token2
Expected Result: The user will be able to invoke all the requests successfully.

Step: User 1 invokes the GET resource as below within a minute using user token1 and token2

  • 2000 requests using user1 token1
  • 2001 requests using user1 token2
Expected Result: When user1 invokes the 2001st request using token2, the user will be notified as throttled out while the other requests will be successful.



Scenario 3 : Different users via different applications using their user tokens


Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via different applications, when the quota limit is request count and there is no burst limit.

Preconditions:

1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
2. A subscription tier should be created as below.
  • Tier Name : Silver
  • Request Count : 2000
  • Unit Time : 1 minute
  • Burst Control (Rate Limiting) : 0
  • Stop on Quota Reach : Not selected
  • Billing Plan : Free or Commercial
  • Custom Attributes : None
  • Permissions : not defined
3. API 1 should have been created and published as below by a publisher,
  • Subscription Tiers : Silver
  • GET resource level Tier : Unlimited is set
4. A developer subscribes to the API1
  • Application 1 and application 2 created with an Unlimited Tier. app1 and app2
  • Subscribe using applications with Silver
5. Generate production keys for the particular applications app1 and app2 for the 2 different users and retrieve the consumer key and secret.
6. User1 and User2 should generate their user tokens using the consumer key and secret generated in the above step.

User 1 using app1 :

User token 1 = curl -k -d "grant_type=password&username=user1&password=user1" 
-H "Authorization: Basic <app1_Token>" https://192.168.124.1:8243/token
 
 
User 2 using app2 :

User token 2 = curl -k -d "grant_type=password&username=user2&password=user2" 
-H "Authorization: Basic <app2_Token>" https://192.168.124.1:8243/token


Authorization: Basic <app1_Token>
 = <Base64encode(consumerkey:consumer secret of app1)>

Authorization: Basic <app2_Token>
= <Base64encode(consumerkey:consumer secret of app2)>

Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens

  • User1 : 900 requests using user1 token
  • User2 : 1101 requests using user2 token
Expected Result: Both users will be able to invoke successfully.

Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens

  • User1 : 2000 requests using user1 token
  • User2 : 2000 requests using user2 token
Expected Result: Both users will be able to invoke successfully.

Step: User 1 and User 2 invoke the GET resource as below within a minute using their user tokens

  • User1 : 2001 requests using user1 token
  • User2 : 2001 requests using user2 token
Expected Result: Both users will be notified as throttled out.



Scenario 4 : Different users via the same application using a test access token


Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via the same application using a test access token (grant_type = client_credentials), when the quota limit is 'request count' and there is no burst limit.



Preconditions :
1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
2. A subscription tier should be created as below.
  • Tier Name : Silver
  • Request Count : 2000
  • Unit Time : 1 minute
  • Burst Control (Rate Limiting) : 0
  • Stop on Quota Reach : Not selected
  • Billing Plan : Free or Commercial
  • Custom Attributes : None
  • Permissions : not defined
3. API 1 should have been created and published as below by a publisher,
  • Subscription Tiers : Silver
  • GET resource level Tier : Unlimited is set
4. A developer subscribes to the API1
  • Application created with an Unlimited Tier. app1
  • Subscribe using application with Silver
5. Generate production keys for the particular application app1 and retrieve the test access token.
6. Test access token can be retrieved via below command.

Developer generates an access token using app1 :

Test Access Token = curl -k -d "grant_type=client_credentials" 
-H "Authorization: Basic <app1_Token>" https://192.168.124.1:8243/token

Authorization: Basic <app1_token>
 = <Base64encode(consumerkey:consumer secret of app1)>

Step: User 1 and User 2 invoke the GET resource as below within a minute using the same test access token generated

  • User1 : 900 requests using the test access token
  • User2 : 1100 requests using the test access token
Expected Result: Both users will be able to invoke successfully.

Step: User 1 and User 2 invoke the GET resource as below within a minute using the same test access token

  • User1 : 900 requests using the test access token
  • User2 : 1101 requests using the test access token
Expected Result: The user who exceeds the 2000th request should be notified as throttled out.

Scenario 5: Different users via the same application via user tokens when the burst limit is configured


Throttle out an API by a subscription tier when a few users from the same tenant invoke a particular API subscribed via the same application, when the quota limit is request count and a burst limit is configured.

Preconditions :

1. API Manager should be up and running and user1 and user2 should be signed up with subscriber permission.
2. A subscription tier should be created as below.
  • Tier Name : Silver
  • Request Count : 2000
  • Unit Time : 1 hour
  • Burst Control (Rate Limiting) : 100 Request/m
  • Stop on Quota Reach : Not selected
  • Billing Plan : Free or Commercial
  • Custom Attributes : None
  • Permissions : not defined
3. API 1 should have been created and published as below by a publisher,
  • Subscription Tiers : Silver
  • GET resource level Tier : Unlimited is set
4. A developer subscribes to the API1
  • Application 1 created with an Unlimited Tier. app1
  • Subscribe using applications with Silver
5. Generate production keys for the particular application app1 and retrieve the consumer key and secret.
6. User1 and User2 should generate their user tokens using the consumer key and secret generated in the above step.

User 1 using app1 :

User token 1 = curl -k -d "grant_type=password&username=user1&password=user1" 
-H "Authorization: Basic <app1_Token>" https://192.168.124.1:8243/token
 
 
User 2 using app1 :

User token 2 = curl -k -d "grant_type=password&username=user2&password=user2" 
-H "Authorization: Basic <app1_Token>" https://192.168.124.1:8243/token

Authorization: Basic <app1_Token>
 = <Base64encode(consumerkey:consumer secret of app1)>

Step: User1 invokes with 100 requests (a burst) within a minute using their user token
Expected Result: The user should be able to invoke successfully.

Step: User 1 tries to send another request within the same minute as the burst
Expected Result: User1 will be notified as having exceeded the quota until the next minute.

Step: User 2 invokes with 100 requests (a burst) within the same minute as the burst above, using user2's user token
Expected Result: User 2 will be able to invoke successfully.

Step: User 2 invokes again within the same minute, using user2's user token
Expected Result: The user should be notified as having exceeded the quota until the next minute.

Step: User 1 and User 2 invoke the GET resource as below within an hour, sticking to the burst limit (100 requests/m)

  • User1 : 1000 requests using user1 token
  • User2 : 1001 requests using user2 token
Expected Result: The user who exceeds the throttling limit by sending the 2001st request will be notified as throttled out until the next hour, since the tier is configured as 2000 req/hour. Until then, all requests sent within the burst limit will be invoked successfully.

Chathurika Erandi De SilvaMessage flow debugging with WSO2 Message Flow Debugger


Mediation Debugger is a feature included in the upcoming WSO2 ESB 5.0.0 release. The feature is bundled as an installable pack for Eclipse Mars. Mediation Debugger gives developers a UI-based visualization of the mediation flow so that they can debug it quickly and easily.

Using the new debugger we can easily toggle breakpoints, add skip points, view the message envelope and, of course, play around with the properties that are passed through the mediation flow.

A rich graphical interface is provided for the user, so mediation flow debugging becomes quite easy compared to reading through a large XML file to find where the problem is.

I am not writing the entire story of the Mediation Debugger here, as all the information you need can be gathered from this post. Since the beta is out, why not download it and play around a bit?


Lahiru SandaruwanHow to create simple mock Rest API using WSO2 ESB

Inspired by this blog by Miyuru for a SOAP mock service, here is a simple mock REST API:

<api xmlns="http://ws.apache.org/ns/synapse" name="SimpleAPI" context="/simple">
   <resource methods="GET">
      <inSequence>
         <payloadFactory media-type="xml">
            <format>
               <Response xmlns="">
                  <status>OK</status>
                  <code>1</code>
               </Response>
            </format>
            <args/>
         </payloadFactory>
         <respond/>
      </inSequence>
   </resource>
</api>


Enter http://localhost:8280/simple in a browser or a SoapUI REST project to test this. This was tested on WSO2 ESB 4.9.0.
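
If you prefer to test it from code, a minimal Java client could look like this (assuming the API above is deployed on a local ESB with the default 8280 HTTP port):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MockApiClient {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:8280/simple").openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // expect the <Response><status>OK</status>... payload
            }
        }
    }
}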

Shashika UbhayaratneHow to change default keystore password on WSO2 servers



Sometimes you may need to change the default keystore password in WSO2 products for security reasons.

Here are the steps to change the keystore passwords:

Step 1:
Navigate to wso2 server location:
ex: cd $wso2_server/repository/resources/security

Step 2:
Change keystore password:
keytool -storepasswd -new [new password] -keystore [keystore name]
ex: keytool -storepasswd -new simplenewpassword -keystore wso2carbon.jks

Step 3:
Change the private key password:
keytool -keypasswd -alias wso2carbon -keystore wso2carbon.jks
Enter keystore password: <simplenewpassword>
Enter key password for <wso2carbon>: wso2carbon
New key password for <wso2carbon>: <simplenewpassword>
Re-enter new key password for <wso2carbon>: <simplenewpassword>
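
If you want to verify the change programmatically, the following is a minimal sketch (it assumes you run it from the security directory and used the example password above):

import java.io.FileInputStream;
import java.security.KeyStore;

public class KeystoreCheck {
    public static void main(String[] args) throws Exception {
        // load the keystore and the private key with the new passwords
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("wso2carbon.jks")) {
            ks.load(in, "simplenewpassword".toCharArray());
        }
        System.out.println(ks.getKey("wso2carbon", "simplenewpassword".toCharArray()) != null
                ? "Keystore and key passwords OK" : "Key not found");
    }
}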

Both the keystore and private key passwords must be the same in some cases, such as WSO2 BAM. Especially with Thrift, we need to configure one password for both.


Step 4:
Configure wso2 server (example taken here as WSO2 BAM)

  • Change carbon.xml at $wso2_server/repository/conf

<KeyStore>
    <!-- Keystore file location-->
    <Location>${carbon.home}/repository/resources/security/wso2carbon.jks</Location>
    <!-- Keystore type (JKS/PKCS12 etc.)-->
    <Type>JKS</Type>
    <!-- Keystore password-->
    <Password>simplenewpassword</Password>
    <!-- Private Key alias-->
    <KeyAlias>wso2carbon</KeyAlias>
    <!-- Private Key password-->
    <KeyPassword>simplenewpassword</KeyPassword>
</KeyStore>
<RegistryKeyStore>
    <!-- Keystore file location-->
    <Location>${carbon.home}/repository/resources/security/wso2carbon.jks</Location>
    <!-- Keystore type (JKS/PKCS12 etc.)-->
    <Type>JKS</Type>
    <!-- Keystore password-->
    <Password>simplenewpassword</Password>
    <!-- Private Key alias-->
    <KeyAlias>wso2carbon</KeyAlias>
    <!-- Private Key password-->
    <KeyPassword>simplenewpassword</KeyPassword>
</RegistryKeyStore>

  • Change identity.xml at $wso2_server/repository/conf

<ThirftBasedEntitlementConfig>
    <EnableThriftService>true</EnableThriftService>
    <ReceivePort>${Ports.ThriftEntitlementReceivePort}</ReceivePort>
    <ClientTimeout>10000</ClientTimeout>
    <KeyStore>
        <Location>${carbon.home}/repository/resources/security/wso2carbon.jks</Location>
        <Password>simplenewpassword</Password>
    </KeyStore>
</ThirftBasedEntitlementConfig>



Dimuthu De Lanerolle

Java Tips .....

To get directory names inside a particular directory ....

private String[] getDirectoryNames(String path) {

        File fileName = new File(path);
        String[] directoryNamesArr = fileName.list(new FilenameFilter() {
            @Override
            public boolean accept(File current, String name) {
                return new File(current, name).isDirectory();
            }
        });
        log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
        return directoryNamesArr;
    }



To retrieve links on a web page ......

 private List<String> getLinks(String url) throws ParserException {
        Parser htmlParser = new Parser(url);
        List<String> links = new LinkedList<String>();

        NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
        for (int x = 0; x < tagNodeList.size(); x++) {
            LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
            String linkName = loopLinks.getLink();
            links.add(linkName);
        }
        return links;
    }


To search for all files in a directory recursively from the file/s extension/s ......

private List<String> getFilesWithSpecificExtensions(String dirPath) {

        // extension list - Do not specify the "."
        List<File> files = (List<File>) FileUtils.listFiles(new File(dirPath),
                new String[]{"txt"}, true);

        // collect the absolute paths of the matching files
        List<String> filePaths = new ArrayList<String>();
        for (File file : files) {
            filePaths.add(file.getAbsolutePath());
        }
        return filePaths;
    }



Reading files in a zip

     public static void main(String[] args) throws IOException {
        final ZipFile file = new ZipFile("Your zip file path goes here");
        try
        {
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements())
            {
                final ZipEntry entry = entries.nextElement();
                System.out.println( "Entry "+ entry.getName() );
                readInputStream( file.getInputStream( entry ) );
            }
        }
        finally
        {
            file.close();
        }
    }
        private static int readInputStream( final InputStream is ) throws IOException {
            final byte[] buf = new byte[8192];
            int read = 0;
            int cntRead;
            while ((cntRead = is.read(buf, 0, buf.length) ) >=0)
            {
                read += cntRead;
            }
            return read;
        } 



Converting Object A to Long[]

 long[] primitiveArray = (long[]) oo;          // oo is an Object that holds a long[]
        Long[] boxedArray = new Long[primitiveArray.length];
        int i = 0;

        for (long temp : primitiveArray) {
            boxedArray[i++] = temp;
        }
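
On Java 8, the same boxing can also be done in a single line (assuming the same primitiveArray variable as above):

 Long[] boxedArray = java.util.Arrays.stream(primitiveArray).boxed().toArray(Long[]::new);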


Getting cookie details on HTTP clients

import org.apache.http.impl.client.DefaultHttpClient;

HttpClient httpClient = new DefaultHttpClient();

((DefaultHttpClient) httpClient).getCookieStore().getCookies(); 

 HttpPost post = new HttpPost(URL);
        post.setHeader("User-Agent", USER_AGENT);
        post.addHeader("Referer",URL );
        List<NameValuePair> urlParameters = new ArrayList<NameValuePair>();
        urlParameters.add(new BasicNameValuePair("username", "admin"));
        urlParameters.add(new BasicNameValuePair("password", "admin"));
        urlParameters.add(new BasicNameValuePair("sessionDataKey", sessionKey));
        post.setEntity(new UrlEncodedFormEntity(urlParameters));
        return httpClient.execute(post);



Ubuntu Commands

1. Getting the process listening to a given port (eg: port 9000) 

sudo netstat -tapen | grep ":9000 "


Running  a bash script from python script

shell.py
-----------

import os

def main():

    os.system("sh hello.sh")

if __name__ == "__main__":
    main()


hello.sh
-----------
#Linux shell Script


echo "Hello Python from Shell";

public void scriptExecutor() throws IOException {

    log.info("Start executing the script to trigger the docker build ... ");

    Process p = Runtime.getRuntime().exec(
            "python  /home/dimuthu/Desktop/Python/shell.py ");
    BufferedReader in = new BufferedReader(new InputStreamReader(
            p.getInputStream()));
    log.info(in.readLine());

    log.info("Finished executing the script to trigger the docker build ... ");

}

Chandana NapagodaLifecycle Management with WSO2 Governance Registry

SOA lifecycle management is one of the core requirements of an enterprise governance suite. WSO2 Governance Registry 5.2.0 supports multiple lifecycle management capabilities out of the box. It also gives asset authors the opportunity to extend the out-of-the-box lifecycle functionality by providing their own extensions, based on organizational requirements. Further, the WSO2 Governance Registry supports multiple points of extensibility: Handlers, Lifecycles and customized asset UIs (RXT based) are the key types of extensions available.

Lifecycle: 

A lifecycle is defined with an SCXML-based XML element that contains:
  • A name 
  • One or more states
  • A list of check items with role based access control 
  • One or more actions that are made available based on the items that are satisfied 

Adding a Lifecycle
To add a new lifecycle aspect, click on the Lifecycles menu item under the Govern section of the extensions tab in the admin console. It will show you a user interface where you can add your SCXML based lifecycle configuration. A sample configuration will be available for your reference at the point of creation.

Adding Lifecycle to Asset Type
The default lifecycle for a given asset type is picked up from the RXT definition. When an asset is created, the lifecycle will automatically be attached to the asset instance. The lifecycle attribute should be defined in the RXT definition under the artifactType element as below.

<lifecycle>ServiceLifeCycle</lifecycle>

Multiple Lifecycle Support

There can be instances where a given asset goes through more than one lifecycle. As an example, a given service can have a development lifecycle as well as a deployment lifecycle. These service-related state changes cannot be visualized via one lifecycle, and the current lifecycle state depends on the context (development or deployment) you are looking at.

Adding Multiple Lifecycle to Asset Type
Adding multiple lifecycles to an Asset Type can be done in two primary methods.

Through the asset definition (available with G-Reg 5.3.0): Here, you can define multiple lifecycle names in a comma-separated manner. The lifecycle name defined first will be considered the default/primary lifecycle. The multiple lifecycles specified in the asset definition (RXT configuration) will be attached to the asset when it is created. An example of a multiple lifecycle configuration is as below,
<lifecycle>ServiceLifeCycle,SampleLifeCycle</lifecycle>

Using Lifecycle Executor
Using custom executor Java code, you can assign another lifecycle to the asset. Executors are one of the facilitators that help extend WSO2 G-Reg functionality, and executors are associated with a Governance Registry lifecycle. This custom lifecycle executor class needs to implement the Execution interface that is provided by WSO2 G-Reg. You can find more details in the article below [Lifecycles and Aspects].
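
As a rough sketch of what such an executor looks like (the package and method signatures below are quoted from memory of the G-Reg extensions API and should be verified against your G-Reg version):

import java.util.Map;

import org.wso2.carbon.governance.registry.extensions.interfaces.Execution;
import org.wso2.carbon.registry.core.jdbc.handlers.RequestContext;

public class SampleLifecycleExecutor implements Execution {

    @Override
    public void init(Map parameterMap) {
        // read any executor parameters defined in the lifecycle SCXML
    }

    @Override
    public boolean execute(RequestContext context, String currentState, String targetState) {
        // e.g. attach an additional lifecycle or generate runtime configuration here
        return true; // returning false aborts the state transition
    }
}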

Isuru PereraBenchmarking Java Locks with Counters

These days I am analyzing some Java Flight Recordings taken from WSO2 API Manager performance tests, and I found out that the main processing threads were in the "BLOCKED" state in some situations.

The threads were mainly blocked due to "synchronized" methods in Java. Synchronizing the methods in a critical section of request processing causes bottlenecks and it has an impact on the throughput and overall latency.

Then I was thinking whether we could avoid synchronizing the whole method. The main problem with synchronized is that only one thread can run that critical section. When it comes to consumer/producer scenarios, we may need to give read access to data to several threads and exclusive write access to a single thread to edit the data. Java provides ReadWriteLock for these kinds of scenarios.

Java 8 provides another kind of lock named StampedLock. The StampedLock provides an alternative to the standard ReadWriteLock and it also supports optimistic reads. I'm not going to compare the features and functionalities of each lock type in this blog post. You may read the StampedLock Idioms by Dr. Heinz M. Kabutz.

I'm more interested in finding out which lock is faster when it is accessed by multiple threads. Let's write a benchmark!


The code for benchmarks


There is an article on "Java 8 StampedLocks vs. ReadWriteLocks and Synchronized" by Tal Weiss, who is the CEO of Takipi. In that article, there is a benchmark for Java locks with different counter implementations. I'm using that counters benchmark as the basis for my benchmark. 

I also found another fork of the same benchmark and it has added the Optimistic Stamped version and Fair mode of ReentrantReadWriteLock. I found out about that from the slides on "Java 8 - Stamped Lock" by Haim Yadid after I got my benchmark results.

I also looked at the article "Java Synchronization (Mutual Exclusion) Benchmark" by Baptiste Wicht.

I'm using the popular JMH library for my benchmark. The JMH has now become the standard way to write Java microbenchmarks. The benchmarks done by Tal Weiss do not use JMH.

See JMH Resources by Nitsan Wakart for an introduction to JMH and related links to get more information about JMH.

I used Thread Grouping feature in JMH and the Group states for benchmarking different counter implementations.

This is my first attempt in writing a proper microbenchmark. If there are any problems with the code, please let me know. When we talk about benchmarks, it's important to know that you should not expect the same results in a real life application and the code may behave differently in runtime.

There are 11 counter implementations. I also benchmarked the fair and non-fair modes of ReentrantLock, ReentrantReadWriteLock and Semaphore.

Class Diagram for Counter implementations

There are altogether 14 benchmark methods!

  1. Adder - Using LongAdder. This is introduced in Java 8.
  2. Atomic - Using AtomicLong
  3. Dirty - Not using any mechanism to control concurrent access
  4. Lock Fair Mode - Using ReentrantLock
  5. Lock Non-Fair Mode - Using ReentrantLock
  6. Read Write Lock Fair Mode - Using ReentrantReadWriteLock
  7. Read Write Lock Non-Fair Mode - Using ReentrantReadWriteLock
  8. Semaphore Fair Mode - Using Semaphore
  9. Semaphore Non-Fair Mode - Using Semaphore
  10. Stamped - Using  StampedLock
  11. Optimistic Stamped - Using  StampedLock with tryOptimisticRead(); If it fails, I used the read lock. There are no more attempts to tryOptimisticRead().
  12. Synchronized - Using synchronized block with an object
  13. Synchronized Method - Using synchronized keyword in methods
  14. Volatile  - Using volatile keyword for counter variable

The code is available at https://github.com/chrishantha/microbenchmarks/tree/v0.0.1-initial-counter-impl
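
To give a concrete idea of the optimistic-read idiom used by the Optimistic Stamped counter, here is a minimal sketch (not the exact benchmark code); it falls back to the read lock once when validation fails, as described in item 11 above:

import java.util.concurrent.locks.StampedLock;

public class OptimisticStampedCounter {
    private final StampedLock lock = new StampedLock();
    private long count;

    public long get() {
        long stamp = lock.tryOptimisticRead();   // optimistic read, no blocking
        long value = count;
        if (!lock.validate(stamp)) {              // a write happened; fall back to the read lock
            stamp = lock.readLock();
            try {
                value = count;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return value;
    }

    public void increment() {
        long stamp = lock.writeLock();
        try {
            count++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}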

Benchmark Results


As I mentioned, I used the Thread Grouping feature in JMH and I ran the benchmarks for different thread group distributions. There were 10 iterations after 5 warm-up iterations. I measured only the "throughput". Measuring latency would be very difficult (as the minimum throughput values had around 6 digits).

The thread group distribution was passed by the "-tg" argument to the JMH and the first number was used for "get" (read) operations and the second number was used for "increment" (write) operations.

There are many combinations we can use to run the benchmark tests. I used 12 combinations for thread group distribution and those are specified in the benchmark script.

These 12 combinations include the scenarios tested by Tal Weiss and Baptiste Wicht.

The benchmark was run on my Thinkpad T530 laptop.

$ hwinfo --cpu --short
cpu:
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3394 MHz
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3333 MHz
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3305 MHz
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3333 MHz
$ free -m
              total       used       free     shared  buff/cache  available
Mem:          15866       4424       7761        129        3680      11204
Swap:         18185          0      18185

Note: I added the "Dirty" counter only to compare the results, but I omitted it from the benchmark as no one wants to keep a dirty counter in their code. :)

I have committed all results to the Github repository and I used gnuplot for the graphs.

It's very important to note that the graphs show the throughput for both reader and writer threads. If you need to look at individual reader and writer throughput, you can refer the results at https://github.com/chrishantha/microbenchmarks/tree/v0.0.1-initial-counter-impl/counters/results

Let's see the results!

1 Reader, 1 Writer

2 Readers, 2 Writers

4 Readers, 4 Writers

5 Readers, 5 Writers

10 Readers, 10 Writers

16 Readers, 16 Writers

64 Readers, 64 Writers

128 Readers, 128 Writers

1 Reader, 19 Writers

19 Readers, 1 Writer

4 Readers, 16 Writers

16 Readers, 4 Writers



Conclusion


Following are some conclusions we can make when looking at above results

  1. Optimistic Stamped counter has much better throughput when there is high contention.
  2. Fair modes of the locks are very slow.
  3. Adder counter has better throughput than Atomic counter when there are more writers.
  4. When there are fewer threads, the Synchronized and Synchronized Method counters have better throughput than using a Read Write Lock (in non-fair mode, which is the default)
  5. The Lock counter also has better throughput than the Read Write Lock when there are fewer threads.

The Adder, Atomic and Volatile counter examples do not show a way to provide mutual exclusion, but those are thread-safe ways to keep a count. You may refer to the benchmark results for the other counters with Java locks if you want mutual exclusion for some logic in your code.

In this benchmark, the read write lock has performed poorly. The reason could be that there are writers continuously trying to access the write lock. There may be situations that a write lock may be required less frequently and therefore this benchmark is probably not a good way to evaluate performance for read write locks.

Please make sure that you run the benchmarks for your scenarios before making a decision based on these results. Even my benchmarks give slightly different results for each run. So, it's not a good idea to rely entirely on benchmarks and you must test the performance of the overall application.


If there are any questions or comments on the results or regarding benchmark code, please let me know.

Prabath SiriwardenaBuilding Microservices ~ Designing Fine-grained Systems

The book Building Microservices by Sam Newman is one of the very first on the subject. It's a great book that anyone who talks about, designs, or builds microservices must read; I strongly recommend buying it! This article reviews the book while highlighting the key takeaways from each chapter.

Jayanga DissanayakeDeploying artifacts to WSO2 Servers using Admin Services

In this post I am going to show you how to deploy artifacts on WSO2 Enterprise Service Bus [1] and WSO2 Business Process Server [2] using Admin Services [3].

The usual practice with WSO2 artifact deployment is to enable DepSync [4] (Deployment Synchronization) and upload the artifacts via the management console of the master node, which will then upload the artifacts to the configured SVN repository and notify the worker nodes about the new artifact via a cluster message. Worker nodes then download the new artifacts from the SVN repository and apply them.

In this approach you have to log in to the management console and do the artifact deployment manually.

With the increasing use of continuous integration tools, people are looking into the possibility of automating this task. There is a simple solution in which you configure a remote file copy into the relevant directory inside [WSO2_SERVER_HOME]/repository/deployment/server. But this is a very low-level solution.
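
For completeness, that low-level option is literally just copying the artifact into the hot-deployment directory. A minimal sketch with hypothetical paths (for a .car file the target folder is carbonapps under the server deployment directory):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CarCopy {
    public static void main(String[] args) throws Exception {
        Path car = Paths.get("/home/user/MyApp_1.0.0.car");                                  // hypothetical path
        Path deployDir = Paths.get("/opt/wso2esb/repository/deployment/server/carbonapps");  // hypothetical path
        Files.copy(car, deployDir.resolve(car.getFileName()), StandardCopyOption.REPLACE_EXISTING);
    }
}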

The following is how to use Admin Services to do the same in a much easier and more manageable manner.

NOTE: Usually all WSO2 servers accept deployables as .car files, but WSO2 BPS prefers .zip for deploying BPELs.

For ESB,
  1. Call 'deleteApplication' in the ApplicationAdmin service and delete the
    existing application
  2. Wait for 1 min.
  3. Call 'uploadApp' in CarbonAppUploader service
  4. Wait for 1 min.
  5. Call 'getAppData' in ApplicationAdmin; if it returns application data,
    continue. Else break
 For BPS,
  1. Call the 'listDeployedPackagesPaginated' in
    BPELPackageManagementService with page=0 and
    packageSearchString=”Name_”
  2. Save the information
    <ns1:version>
    <ns1:name>HelloWorld2‐1</ns1:name>
    <ns1:isLatest>true</ns1:isLatest>
    <ns1:processes/>
    </ns1:version>
  3. Use the 'uploadService' in BPELUploader, to upload the new BPEL zip
    file
  4. Again call the 'listDeployedPackagesPaginated' in
    BPELPackageManagementService with 15 seconds intervals for 3mins.
  5. If it finds that the name has changed (due to the version upgrade, e.g.
    HelloWorld2‐4), then continue. (Deployment is successful)
  6. If the name doesn't change for 3 mins, break. The deployment has some
    issues, hence human intervention is needed

[1] http://wso2.com/products/enterprise-service-bus/
[2] http://wso2.com/products/business-process-server/
[3] https://docs.wso2.com/display/BPS320/Calling+Admin+Services+from+Apps
[4] https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer

Chathurika Erandi De SilvaSimple HTTP Inbound endpoint Sample: How to

What is an Inbound endpoint?

As per my understanding an inbound endpoint is an entry point. Using this entry point, a message can be mediated directly from the transport layer to the mediation layer. Read more...

Following is a very simple demonstration on Inbound Endpoints using WSO2 ESB

1. Create a sequence


2. Save in Registry



3. Create an Inbound HTTP endpoint using the above sequence



Now it's time to see how to send the requests. As I explained at the start of this post, the inbound endpoint is an entry point for a message. If the third step above is inspected, it is illustrated that a port is given for the inbound endpoint. When incoming traffic is directed towards the given port, the inbound endpoint will receive it and pass it straight away to the sequence defined with it. Here the axis2 layer is skipped.

In the above scenario the request should be directed to http://localhost:8085/ as given below


Then the request is directed to the inbound endpoint and directly to the sequence

Shashika UbhayaratneHow to resolve "File Upload Failure" when importing a schema with a dependency in WSO2 GREG


Schema is one of the main asset models used in WSO2 GREG, and you can find more information at https://docs.wso2.com/display/Governance520/Adding+a+Schema.

There can be situations where you want to import a schema to GREG which imports another schema (i.e. it has a dependency).

1. Let's say you have a schema file.
example: original.xsd
 <?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:listing1">
    <xsd:complexType name="Phone1">
        <xsd:sequence>
            <xsd:element name="areaCode1" type="xsd:int"/>
            <xsd:element name="exchange1" type="xsd:int"/>
            <xsd:element name="number1" type="xsd:int"/>
        </xsd:sequence>
    </xsd:complexType>
</xsd:schema>

2. Import above schema on publisher as per the instructions given on https://docs.wso2.com/display/Governance520/Adding+a+Schema.

3. Now, you need to import another schema which imports / has a reference to the previous schema
example: link.xsd
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:listing">
    <xsd:import namespace="urn:listing1"
                schemaLocation="original.xsd"/>
    <xsd:complexType name="Phone">
        <xsd:sequence>
            <xsd:element name="areaCode" type="xsd:int"/>
            <xsd:element name="exchange" type="xsd:int"/>
            <xsd:element name="number" type="xsd:int"/>
        </xsd:sequence>
    </xsd:complexType>
</xsd:schema>

Issue: You may encounter an error similar to the following:
ERROR {org.wso2.carbon.registry.extensions.handlers.utils.SchemaProcessor} - Could not read the XML Schema Definition file. this.schema.needs  
org.apache.ws.commons.schema.XmlSchemaException: Could not evaluate Schema Definition. This Schema contains Schema Includes that were not resolved
at org.apache.ws.commons.schema.SchemaBuilder.handleInclude(SchemaBuilder.java:1676)
at org.apache.ws.commons.schema.SchemaBuilder.handleXmlSchemaElement(SchemaBuilder.java:221)
at org.apache.ws.commons.schema.SchemaBuilder.build(SchemaBuilder.java:121)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:512)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:385)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:425)
....................
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: Could not read the XML Schema Definition file. this.schema.needs
at org.wso2.carbon.registry.extensions.handlers.utils.SchemaProcessor.putSchemaToRegistry(SchemaProcessor.java:137)
at org.wso2.carbon.registry.extensions.handlers.XSDMediaTypeHandler.processSchemaUpload(XSDMediaTypeHandler.java:263)
at org.wso2.carbon.registry.extensions.handlers.XSDMediaTypeHandler.put(XSDMediaTypeHandler.java:186)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.put(HandlerManager.java:2503)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.put(HandlerLifecycleManager.java:957)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(EmbeddedRegistry.java:697)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(CacheBackedRegistry.java:550)
at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:827)
at org.wso2.carbon.registry.core.session.UserRegistry.access$1000(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:803)
at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:800)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.put(UserRegistry.java:800)
at org.wso2.carbon.registry.resource.services.utils.AddResourceUtil.addResource(AddResourceUtil.java:88)

Solution 1:
Zip all schemas together and upload

Solution 2:
Specify the absolute path for dependent schema file:
example:
 <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:listing">  
<xsd:import namespace="urn:listing1"
schemaLocation="http://www.example.com/schema/original.xsd"/>





sanjeewa malalgodaHow to disable throttling completely or partially for a given API - WSO2 API Manager 1.10 and below versions

Sometimes a particular requirement (allowing any number of unauthenticated requests) will apply to only a few APIs in your deployment. If that is the case, we may know those APIs at the time we design the system. So one thing we can do is remove the throttling handler from the handler list of the given API. Then requests dispatched to that API will not perform any throttling-related operations. To do that you need to edit the synapse API definition manually and remove the handler from there.

We usually do not recommend this, because if you update the API again from the Publisher then the handler may be added again (each update from the Publisher UI will replace the current synapse configuration). But if you have only one or two APIs related to this use case, and they will not be updated very frequently, we can use that approach.

Another approach we can follow is to update the velocity template so that it will not add the throttling handler for a few pre-defined APIs. In that case, even if you update the API from the Publisher, the deployer will still leave the throttling handler out of the synapse configuration. To do this we should know the list of APIs which do not require throttling. Also, no throttling will then apply to any resource in those APIs.

Sometimes you may wonder what the impact is of having a large number of max requests for an unauthenticated tier.
If we discuss the performance of throttling, it will not add a huge delay to the request. If we consider throttling alone, it takes less than 10% of the complete gateway processing time. So we can confirm that having a large number for the max request count on an unauthenticated tier will not cause a major performance issue. If you don't need to disable throttling for the entire API and need to allow any number of unauthenticated requests at tier level, then that is the only option we have now.


Please consider above facts and see what is the best solution for your use case. If you need further assistance or any clarifications please let us know. We would like to discuss further and help you to find the best possible solution for your use case.

Chathurika Erandi De SilvaEncoded context to URI using REST_URL_POSTFIX with query parameters

WSO2 ESB provides a property called REST_URL_POSTFIX that can be used to append context to the target endpoint when invoking a REST endpoint.


With the upcoming ESB 5.0.0 release, the value of REST_URL_POSTFIX can contain non-standard special characters such as spaces, and these will be encoded when sending to the backend. This provides versatility, because we can't expect each and every resource path to be free of non-standard special characters.

In order to demonstrate this, I have a REST service with the following context path

user/users address/address new/2016.05

You can see this contains standard as well as non standard  characters.

Furthermore, I am sending the values needed for the service execution as query parameters, and while appending the above context to the target endpoint, I need to send the query parameters as well.

The request is as follows

http://<ip>:8280/testapi?id=1&name=jane&address=wso2

In order to achieve my requirement I have created the following sequence


Afterwards I have created a simple API in WSO2 ESB  and  used the above sequence as below


When invoked, the following log entry is visible in the console (wire logs should be enabled), which indicates the accomplishment of the mission.

[2016-05-20 15:13:42,549] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /SampleRestService/restservice/TestUserService/user/users%20address/address%20new/2016.05?id=1&name=jane&address=wso2 HTTP/1.1[\r][\n]"


Imesh GunaratneEdgar, this is very interesting!

Great work! A quick question, did you also deploy the metrics dashboard on Heroku?

Evanthika Amarasiri[WSO2 Governance Registry] - How to analyse the history of registry resources

Assume that you are working on a setup where you need to analyse the history of registry resources. One might want to know what type of operations have been performed on a resource throughout its lifetime. This is possible with a simple DB query.

select * from REG_LOG where REG_PATH='resource_path';

i.e. select * from REG_LOG where REG_PATH='/_system/governance/apimgt/statistics/ga-config.xml';


As an example, assume I want to find out the actions taken on the resource ga-config.xml. So when I query the table REG_LOG, below is the result I would receive.




When you look at the above result set, you will notice that the column REG_ACTION shows different values in each row. The actions these values represent are defined in the class Activity.java. For example, REG_ACTION=10 means that the resource has been moved from its current location, and REG_ACTION=7 means that it has been deleted from the system. Likewise, by going through [1], you can find the rest of the actions that can be taken on registry resources.
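For example, to audit only deletions, a query like the following can be used (a sketch; the column names follow the standard registry schema, so adjust them for your database):

select REG_PATH, REG_USER_ID, REG_LOGGED_TIME
from REG_LOG
where REG_ACTION = 7
order by REG_LOGGED_TIME desc;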

Therefore, as explained above, by going through the REG_LOG table of the registry database, you can audit the actions taken on each and every resource.

[1] - https://github.com/wso2/carbon-kernel/blob/4.4.x/core/org.wso2.carbon.registry.api/src/main/java/org/wso2/carbon/registry/api/Activity.java

Chandana NapagodaG-Reg and ESB integration scenarios for Governance


WSO2 Enterprise Service Bus (ESB) employs the WSO2 Governance Registry for storing configuration elements and resources such as WSDLs, policies, service metadata, etc. By default, WSO2 ESB ships with an embedded registry, which is entirely based on the WSO2 Governance Registry (G-Reg). Further, based on your requirements, you can connect to a remotely running WSO2 Governance Registry using a remote JDBC connection, known as a ‘JDBC registry mount’.

Other than the registry/repository aspect of WSO2 G-Reg, its primary use cases are design-time governance and runtime governance with seamless lifecycle management. This is known as the governance aspect of WSO2 G-Reg, and it provides more flexibility for integration with WSO2 ESB.

When integrating WSO2 ESB with WSO2 G-Reg from the governance aspect, there are three options available:

1). Share Registry space with both ESB and G-Reg
2). Use G-Reg to push artifacts into ESB node
3). ESB pulls artifacts from the G-Reg when needed

Let’s go through the advantages and disadvantages of each option. Here we are considering a scenario where metadata corresponding to ESB artifacts, such as endpoints, is stored in G-Reg as asset types. Each asset type has its own lifecycle (e.g. the ESB Endpoint RXT has its own lifecycle). Then, with the G-Reg lifecycle transition, synapse configurations (e.g. endpoints) are created; these become the runtime configurations of the ESB.


Share Registry space with both ESB and G-Reg

The embedded registry of every WSO2 product consists of three partitions: local, config and governance.

Local Partition : Used to store configuration and runtime data that is local to the server.
Configuration Partition : Used to store product-specific configurations. This partition can be shared across multiple instances of the same product.
Governance Partition : Used to store configuration and data that are shared across the whole platform. This partition typically includes services, service descriptions, endpoints and data sources
How the integration should work:
When sharing registry spaces between the ESB and G-Reg products, only the governance partition is shared; the governance space is shared using JDBC. When a G-Reg lifecycle transition happens on the ESB endpoint RXT, it creates the ESB synapse endpoint configuration and copies it to the relevant registry location using a copy executor. The ESB can then retrieve that endpoint synapse configuration from the shared registry when required; a typical mount configuration is sketched below.
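As a rough illustration, a JDBC mount of the governance partition in the ESB's registry.xml looks something like the following (the datasource, host and instance id are placeholders; consult the registry mounting documentation for your product versions):

<dbConfig name="sharedregistry">
    <dataSource>jdbc/WSO2SharedRegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://greg.example.com:9443/registry">
    <id>gregInstance</id>
    <dbConfig>sharedregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
<!-- Only the governance partition is mounted and shared -->
<mount path="/_system/governance" overwrite="true">
    <instanceId>gregInstance</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>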
Mount(3).png

Advantages:
  • Easy to configure
  • Reduced amount of custom code implementation

Disadvantages:
  • If servers are deployed across data centers, JDBC connections are created between data centers (possibly over WAN or public networks).
  • With the number of environments, there will be many database mounts.
  • The ESB registry space is exposed via G-Reg.

Use G-Reg to push artifacts into ESB node
How the integration should work:
In this pattern, G-Reg creates the synapse endpoints and pushes them into the relevant ESB setup (e.g. Dev/QA/Prod) using Remote Registry operations. Once G-Reg has pushed the appropriate synapse configuration into the ESB, APIs or services can consume it.
G-Reg Push(1).png

Advantages:
  • Provides more flexibility from the G-Reg side to manage ESB assets
  • Can plug in multiple ESB environments on the go
  • Can restrict ESB API/Service invocation until the G-Reg lifecycle operation is completed

ESB pulls artifacts from the G-Reg

How the integration should work:


In this pattern, when a lifecycle transition happens, G-Reg creates synapse-level endpoints in the relevant registry location.

When an API or service invocation happens, the ESB first looks up the endpoint in its own registry. If it is not available, it pulls the endpoint from G-Reg using Remote Registry operations. Note that the ESB-side endpoint lookup has to be written as a custom implementation.

ESB pull.png

Advantages:
  • Users can deploy the ESB API/Service before the G-Reg lifecycle transition happens.

Disadvantages:
  • The first API/Service call is delayed until the remote registry call completes.
  • The first API/Service call fails if the G-Reg lifecycle transition has not completed.
  • Less control compared to options 1 and 2.

Chanaka FernandoWSO2 ESB 5.0.0 Beta Released

The WSO2 team is happy to announce the beta release of WSO2 ESB 5.0.0. This version of the ESB has major improvements to its usability in real production deployments as well as development environments. Here are the main features of ESB 5.0.0.

The mediation debugger provides the capability to debug mediation flows from the WSO2 Developer Studio tooling platform. It allows users to view/edit/delete properties and the payload of the messages passing through each and every mediator.
You can find more information about this feature in the post below.
Analytics for WSO2 ESB 5.0.0 (Beta) — https://github.com/wso2/analytics-esb/releases/tag/v1.0.0-beta

Malintha AdikariK-Means clustering with Scikit-learn


K-Means clustering is a popular unsupervised learning algorithm. In simple terms, we have an unlabeled dataset: we have the data, but no predefined idea of how each row should be categorized. Below are a few example rows from an unlabeled dataset about crime in the USA. There is one row per state and a set of features related to crime information. One thing we can do with such data is look for similarities between the states; in other words, we can prepare a few buckets and put the states into those buckets based on the similarities in their crime figures.


State        Murder   Assault   UrbanPop   Rape
Alabama      13.2     236       58         21.2
Alaska       10       263       48         44.5
Arizona      8.1      294       80         31
Arkansas     8.8      190       50         19.5
California   9        276       91         40.6


Now let's discuss how we can implement K-Means clustering for our dataset with scikit-learn. You can download the USA crime dataset from my GitHub location.


Import KMeans from Scikit-learn.


from sklearn.cluster import KMeans


Load your datafile into Pandas dataframe


df = Utils.get_dataframe("crime_data.csv")


Create the KMeans model, providing the required number of clusters. Here I have set the number of clusters to 5.


KMeans_model = KMeans(n_clusters=5, random_state=1)


Refine your data by removing non-numeric data, unimportant features, etc.


df.drop(['crime$cluster'], inplace=True, axis=1)
df.rename(columns={df.columns[0]: 'State'}, inplace=True)


Select only numeric data in your dataset.


numeric_columns = df._get_numeric_data()


Train KMeans-clustering model


KMeans_model.fit(numeric_columns)


Now you can see the label of each row in your training dataset.


labels = KMeans_model.labels_
print(labels)


Predict a new state's crime cluster as follows:


print(KMeans_model.predict([[15, 236, 58, 21.2]]))
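As a small follow-up (a sketch assuming the dataframe and column names used above), you can attach the learned labels back to the dataframe to see which states fall into each cluster:

# Attach the cluster labels to the original dataframe and group states by cluster
df["Cluster"] = KMeans_model.labels_
for cluster_id, states in df.groupby("Cluster")["State"]:
    print(cluster_id, list(states))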


Malintha AdikariVisualization in Machine Learning

Scatter Plots

We can visualize correlations (the relationship between two variables) between features, or between features and classes, using scatter plots. In a scatter plot we can use an n-dimensional space to visualize correlations between n variables: we plot the data points and then use the output to judge the correlation between each pair of variables. Following is a sample 3-D scatter plot (from http://rgraphgallery.blogspot.com/2013/04/rg-3d-scatter-plots-with-vertical-lines.html).
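As a quick 2-D illustration (a minimal sketch assuming matplotlib and pandas are installed and that the crime_data.csv file from the previous post is available locally):

import matplotlib.pyplot as plt
import pandas as pd

# Load the crime dataset used in the earlier K-Means post (file name is an assumption)
df = pd.read_csv("crime_data.csv")

# Scatter plot of two features to eyeball their correlation
plt.scatter(df["Murder"], df["Assault"])
plt.xlabel("Murder")
plt.ylabel("Assault")
plt.title("Murder vs. Assault")
plt.show()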


Chathurika Erandi De SilvaStatistics and ESB -> ESB Analytics Server: Message Tracing

This is the second post on the ESB Analytics server, and I hope you have read the previous one.

When the ESB receives a request, it is taken in as a message, which consists of a header and a body. The Analytics server provides a comprehensive way of viewing the message the ESB works with throughout the cycle; this is called tracing.

Normally the ESB takes in a request, mediates it through some logic and then sends it to the backend. The response from the backend is again mediated through some logic and returned to the client. The Analytics server graphically illustrates this flow, so the message flow can be easily viewed and understood.

Sample Message Flow



Further, it provides a graphical view of message tracing by giving details of the message passed through the ESB. Transport properties and message context properties are shown with respect to the mediators in the flow.

Sample Mediator Properties



Basically, the capability of viewing the message flow and tracing it in a graphical mode is provided in a way that is user friendly and simple.

sanjeewa malalgodaHow to add soap and WSDL based API to WSO2 API Manager via REST API.

If you are using the old Jaggery API, you can add an API in the same way the Jaggery application adds it. To do that we need to follow the steps below. Since these exact three steps (design > implement > manage) are only used by the Jaggery application, they are not listed in the API documentation, so I have listed them here for your reference. One thing to note is that we cannot add a SOAP endpoint together with swagger content (SOAP APIs cannot be defined with swagger content).

Steps to create soap API with WSDL.
============================

Login and obtain session.
curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin'

Design API
curl -F name="test-api" -F version="1.0" -F provider="admin" -F context="/test-apicontext" -F visibility="public" -F roles="" -F wsdl="https://svn.apache.org/repos/asf/airavata/sandbox/xbaya-web/test/Calculator.wsdl" -F apiThumb="" -F description="" -F tags="testtag" -F action="design" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & Application User","throttling_tier":"Unlimited","method":"GET","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

Implement API
curl -F implementation_methods="endpoint" -F endpoint_type="http" -F endpoint_config='{"production_endpoints":
{"url":"http://appserver/resource/ycrurlprod","config":null}
,"endpoint_type":"http"}' -F production_endpoints="http://appserver/resource/ycrurlprod" -F sandbox_endpoints="" -F endpointType="nonsecured" -F epUsername="" -F epPassword="" -F wsdl="https://svn.apache.org/repos/asf/airavata/sandbox/xbaya-web/test/Calculator.wsdl" -F wadl="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="implement" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & ApplicationUser","throttling_tier":"Unlimited","method":"GET","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

Manage API.
curl -F default_version_checked="" -F tier="Unlimited" -F transport_http="http" -F transport_https="https" -F inSequence="none" -F outSequence="none" -F faultSequence="none" -F responseCache="disabled" -F cacheTimeout="300" -F subscriptions="current_tenant" -F tenants="" -F bizOwner="" -F bizOwnerMail="" -F techOwner="" -F techOwnerMail="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="manage" -F swagger='{"paths":{"/*":{"post":{"responses":{"201":{"description":"Created"}},"x-auth-type":"Application & Application
User","x-throttling-tier":"Unlimited"},"put":{"responses":{"200":{"description":"OK"}},"x-auth-type"
:"Application & Application User","x-throttling-tier":"Unlimited"},"get":{"responses":{"200":{"description"
:"OK"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"},"delete":{"responses"
:{"200":{"description":"OK"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"
}}},"swagger":"2.0","info":{"title":"testAPI","version":"1.0.0"}}' -F outSeq="" -F faultSeq="json_fault" -F tiersCollection="Unlimited" -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

sanjeewa malalgodaWSO2 API Manager how to change some resource stored in registry for each tenant load.

As you all know, in API Manager we store tiers and a lot of other data in the registry. In some scenarios we may need to modify and update these before a tenant user uses them. In such cases we can write a tenant service creator listener and do what we need. In this article we will see how we can change the tiers.xml file before the tenant is loaded into the system. Please note that with this change we can no longer change tier values from the UI, as this code replaces them on every tenant load.

Java code.

CustomTenantServiceCreator.java

package org.wso2.custom.observer.registry;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.commons.io.IOUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.context.PrivilegedCarbonContext;
import org.wso2.carbon.registry.core.exceptions.RegistryException;
import org.wso2.carbon.registry.core.session.UserRegistry;
import org.wso2.carbon.utils.AbstractAxis2ConfigurationContextObserver;
import org.wso2.carbon.registry.core.Resource;
import org.wso2.carbon.apimgt.impl.internal.ServiceReferenceHolder;
import org.wso2.carbon.apimgt.impl.APIConstants;


import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
public class CustomTenantServiceCreator extends AbstractAxis2ConfigurationContextObserver {
    private static final Log log = LogFactory.getLog(CustomTenantServiceCreator.class);
    @Override
    public void createdConfigurationContext(ConfigurationContext configurationContext) {
        UserRegistry registry = null;
        try {
            String tenantDomain = PrivilegedCarbonContext.getThreadLocalCarbonContext().getTenantDomain();
            int tenantId = PrivilegedCarbonContext.getThreadLocalCarbonContext().getTenantId();
            registry = ServiceReferenceHolder.getInstance().getRegistryService().getGovernanceSystemRegistry(tenantId);
            Resource resource = null;
            resource = registry.newResource();
            resource.setContent("");
            InputStream inputStream =
                    CustomTenantServiceCreator.class.getResourceAsStream("/tiers.xml");
            byte[] data = IOUtils.toByteArray(inputStream);
            resource = registry.newResource();
            resource.setContent(data);
            registry.put(APIConstants.API_TIER_LOCATION, resource);
        } catch (RegistryException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}




CustomObserverRegistryComponent.java

package org.wso2.custom.observer.registry;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.framework.BundleContext;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.utils.Axis2ConfigurationContextObserver;
import org.wso2.carbon.utils.multitenancy.MultitenantConstants;
import org.wso2.carbon.apimgt.impl.APIManagerConfigurationService;
import org.wso2.carbon.apimgt.impl.APIManagerConfiguration;
/**
 * @scr.component name="org.wso2.custom.observer.services" immediate="true"
 * @scr.reference name="api.manager.config.service"
 *                interface=
 *                "org.wso2.carbon.apimgt.impl.APIManagerConfigurationService"
 *                cardinality="1..1"
 *                policy="dynamic" bind="setAPIManagerConfigurationService"
 *                unbind="unsetAPIManagerConfigurationService"
 */
public class CustomObserverRegistryComponent {
    private static final Log log = LogFactory.getLog(CustomObserverRegistryComponent.class);
    public static final String TOPICS_ROOT = "forumtopics";
    private static APIManagerConfiguration configuration = null;
    protected void activate(ComponentContext componentContext) throws Exception {
        if (log.isDebugEnabled()) {
            log.debug("Forum Registry Component Activated");
        }
        try{
            CustomTenantServiceCreator tenantServiceCreator = new CustomTenantServiceCreator();
            BundleContext bundleContext = componentContext.getBundleContext();
            bundleContext.registerService(Axis2ConfigurationContextObserver.class.getName(), tenantServiceCreator, null);
         
        }catch(Exception e){
            log.error("Could not activate Forum Registry Component " + e.getMessage());
            throw e;
        }
    }
 
 
    protected void setAPIManagerConfigurationService(APIManagerConfigurationService amcService) {
        log.debug("API manager configuration service bound to the API host objects");
        configuration = amcService.getAPIManagerConfiguration();
    }

    protected void unsetAPIManagerConfigurationService(APIManagerConfigurationService amcService) {
        log.debug("API manager configuration service unbound from the API host objects");
        configuration = null;
    }
}



Complete source code for project
https://drive.google.com/file/d/0B3OmQJfm2Ft8b3cxU3QwU0MwdWM/view?usp=sharing

Once the tenant is loaded, you will see the updated values as follows.



Prabath SiriwardenaEnabling FIDO U2F Multi-Factor Authentication for the AWS Management Console with the WSO2 Identity Server

This tutorial on Medium explains how to enable authentication for the AWS Management Console against the corporate LDAP server and then enable multi-factor authentication (MFA) with FIDO. FIDO is soon becoming the de facto standard for MFA, backed by the top players in the industry including Google, Paypal, Microsoft, Alibaba, Mozilla, eBay and many more.


Malintha AdikariModel Evaluation With Cross Validation


We can use cross validation to evaluate the prediction accuracy of a model. We keep a subset of our dataset aside and do not use it for training, so it is new, unseen data for the model once the model has been trained with the rest of the data. We can then use that held-out subset to evaluate the accuracy of the trained model. In short, we first partition the data into a training dataset and a test dataset, train the model with the training dataset, and finally evaluate the model with the test dataset. This process is called "cross validation".

In this blog post I would like to demonstrate how we can cross validate a decision tree classification model built using scikit-learn + Pandas. Please visit the decision-tree-classification-using-scikit-learn post if you haven't created your classification model yet. As a recap, at this point we have a decision tree model which predicts whether a given person on the Titanic is going to survive the tragedy or die in the cold, dark sea :(.

In the previous blog post we used the entire Titanic dataset for training the model. Let's see how we can use only 80% of the data for training and the remaining 20% for evaluation.

# separating 80% data for training
train = df.sample(frac=0.8, random_state=1)

# rest 20% data for evaluation purpose
test = df.loc[~df.index.isin(train.index)]

Then we train the model as before, but using only the training dataset.

dt = DecisionTreeClassifier(min_samples_split=20, random_state=9)
dt.fit(train[features], train["Survived"])

Then we predict the result for the remaining 20% of the data.

predictions = dt.predict(test[features])


Then we can calculate Mean Squared Error of the predictions vs. actual values as a measurement of the prediction accuracy of the trained model.

MSE = (1/n) * Σ (predicted_i - actual_i)²

We can use scikit-learn's built-in mean squared error function for this. First import it into the current module.

from sklearn.metrics import mean_squared_error

Then we can do the calculation as follows

mse = mean_squared_error(predictions, test["Survived"])
print(mse)
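Since "Survived" is a binary label, classification accuracy is often easier to interpret than MSE; scikit-learn's accuracy_score can be used on the same predictions (an optional addition to the original example):

from sklearn.metrics import accuracy_score

# Fraction of test passengers whose survival was predicted correctly
accuracy = accuracy_score(test["Survived"], predictions)
print(accuracy)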

You can play with the data partition ratio and the features and observe the variation of the Mean Squared Error with those parameters.


sanjeewa malalgodaHow to avoid issue in default APIs in WSO2 API Manager 1.10

In API Manager 1.10 you may see an issue in mapping resources when you create another version of an API and make it the default version. In this post let's see how we can overcome that issue.

Let's say we have a resource with a path parameter like this:
 /resource/{resourceId}

Then we create another API version and make it the default.
As you can see from the XML generated in the synapse config corresponding to the API, the resource is created correctly in admin--resource_v1.0.xml:

<resource methods="GET" uri-template="/resource/{resourceId} " faultSequence="fault">

But if you check the newly created default version, you will see the following:

<resource methods="GET"
             uri-template="$util.escapeXml($resource.getUriTemplate())"
             faultSequence="fault">

Therefore, we cannot call the resource of the API via the gateway with the API's default version.
Assume we have an API named testAPI with 3 versions: 1.0.0, 2.0.0 and 3.0.0.
By defining a default API, what we do is just create a proxy for the default version. So we create a default proxy which can accept any URL pattern and deploy it.
For that we recommend using the /* pattern; it only mediates requests to the correct version. If the default version is 2.0.0, then the default-version API
forwards the request to that version. So you can keep all your resources in version 2.0.0, where they will be processed, and you can use any complex URL pattern there.

So for the default API, having a resource definition that matches any request is sufficient. Here is the configuration to match it:

  <resource methods="POST PATCH GET DELETE HEAD PUT"
             url-mapping="/*"
             faultSequence="fault">

To confirm this, you can look at the content of the default API; you will see it points to the actual API with the given version. So all your resources remain in the versioned API as they are.

Here I have listed the complete velocity template file for the default API.
Please copy it and replace the wso2am-1.10.0/repository/resources/api_templates/default_api_template.xml file.

<api xmlns="http://ws.apache.org/ns/synapse"  name="$!apiName" context="$!apiContext" transports="$!transport">
   <resource methods="POST PATCH GET DELETE HEAD PUT"
             uri-template="/*"
             faultSequence="fault">
    <inSequence>
        <property name="isDefault" expression="$trp:WSO2_AM_API_DEFAULT_VERSION"/>
        <filter source="$ctx:isDefault" regex="true">
            <then>
                <log level="custom">
                    <property name="STATUS" value="Faulty invoking through default API.Dropping message to avoid recursion.."/>
                </log>
                <payloadFactory media-type="xml">
                    <format>
                        <am:fault xmlns:am="http://wso2.org/apimanager">
                            <am:code>500</am:code>
                            <am:type>Status report</am:type>
                            <am:message>Internal Server Error</am:message>
                            <am:description>Faulty invoking through default API</am:description>
                        </am:fault>
                    </format>
                    <args/>
                </payloadFactory>
                <property name="HTTP_SC" value="500" scope="axis2"/>
                <property name="RESPONSE" value="true"/>
                <header name="To" action="remove"/>
                <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
                <property name="ContentType" scope="axis2" action="remove"/>
                <property name="Authorization" scope="transport" action="remove"/>
                <property name="Host" scope="transport" action="remove"/>
                <property name="Accept" scope="transport" action="remove"/>
                <send/>
            </then>
            <else>
                <header name="WSO2_AM_API_DEFAULT_VERSION" scope="transport" value="true"/>
                #if( $transport == "https" )
                <property name="uri.var.portnum" expression="get-property('https.nio.port')"/>
                #else
                <property name="uri.var.portnum" expression="get-property('http.nio.port')"/>
                #end

            <send>
                <endpoint>
                #if( $transport == "https" )
                <http uri-template="https://localhost:{uri.var.portnum}/$!{fwdApiContext}">
                #else
                <http uri-template="http://localhost:{uri.var.portnum}/$!{fwdApiContext}">
                #end
                        <timeout>
                            <duration>60000</duration>
                            <responseAction>fault</responseAction>
                        </timeout>
                        <suspendOnFailure>
                             <progressionFactor>1.0</progressionFactor>
                        </suspendOnFailure>
                        <markForSuspension>
                            <retriesBeforeSuspension>0</retriesBeforeSuspension>
                            <retryDelay>0</retryDelay>
                        </markForSuspension>
                    </http>
                </endpoint>
            </send>
            </else>
        </filter>
        </inSequence>
        <outSequence>
        <send/>
        </outSequence>
    </resource>
        <handlers>
            <handler class="org.wso2.carbon.apimgt.gateway.handlers.common.SynapsePropertiesHandler"/>
        </handlers>
</api>


Evanthika AmarasiriHow to solve the famous token regeneration issue in an API-M cluster

In an API Manager clustered environment (in my case, I have a publisher, a store, two gateway nodes and two key manager nodes fronted by a WSO2 ELB 2.1.1), if you come across an error saying Error in getting new accessToken while regenerating tokens, with an exception such as the one below on the Key Manager node, then this is due to a configuration issue.

TID: [0] [AM] [2014-09-19 05:41:28,321]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'Administrator@carbon.super [-1234]' logged in at [2014-09-19 05:41:28,321-0400] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [0] [AM] [2014-09-19 05:41:28,537] ERROR {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService} -  Error in getting new accessToken {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService}
TID: [0] [AM] [2014-09-19 05:41:28,538] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} -  Error in getting new accessToken {org.apache.axis2.rpc.receivers.RPCMessageReceiver}
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
    at java.lang.reflect.Method.invoke(Method.java:619)
    at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
    at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
    at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
    at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:172)
    at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:146)
    at org.wso2.carbon.core.transports.CarbonServlet.doPost(CarbonServlet.java:231)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
    at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
    at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
    at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
    at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
    at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
    at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
    at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
    at java.lang.Thread.run(Thread.java:853)
Caused by:
org.wso2.carbon.apimgt.keymgt.APIKeyMgtException: Error in getting new accessToken
    at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(APIKeyMgtSubscriberService.java:281)
    ... 45 more
Caused by:
java.lang.RuntimeException: Token revoke failed : HTTP error code : 404
    at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(APIKeyMgtSubscriberService.java:252)
    ... 45 more


This is what you have to do to solve this issue.

1. On your Gateway nodes, change the host and port values of the APIs below, which reside under $APIM_HOME/repository/deployment/server/synapse-configs/default/api:
   _TokenAPI_.xml
   _AuthorizeAPI_.xml
   _RevokeAPI_.xml
2. If you get an HTTP 302 error on the Key Manager side while regenerating the token, check the RevokeURL in the api-manager.xml of the Key Manager node to see whether it points to the NIO port of the Gateway node; a sketch follows.
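For reference, the relevant entry in api-manager.xml looks roughly like the following (the host and port are placeholders; the NIO/PassThrough HTTPS port is 8243 by default):

<!-- api-manager.xml on the Key Manager node (illustrative values) -->
<RevokeAPIURL>https://gateway.example.com:8243/revoke</RevokeAPIURL>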

sanjeewa malalgodaHow to change API Manager's authentication failure message to match the message's content type.


The error response sent from the WSO2 gateway is usually in XML format. If needed, we can change this behaviour; there is an extension point to customize error message generation. For auth failures and throttling failures we have handlers that generate these messages.

For auth failures, the following sequence is used:
/repository/deployment/server/synapse-configs/default/sequences/_auth_failure_handler.xml

The sequence can be updated to dynamically use the request's Content-Type to return the correct response format.
Once changed, it will work for both XML and JSON calls.

Change the file to add a dynamic lookup of the message's Content-Type, i.e.
From

<sequence name="auth_failure_handler" xmlns="http://ws.apache.org/ns/synapse">

<property name="error_message_type" value="application/json"/>
<sequence key="cors_request_handler"/>
</sequence>

To
<sequence name="auth_failure_handler" xmlns="http://ws.apache.org/ns/synapse">

<property name="error_message_type" value="get-property('transport', 'Content-Type')"/>
<sequence key="cors_request_handler"/>
</sequence>


Malintha AdikariDecision Tree Classification using scikit-learn



Please visit the Preparing Machine Learning development environment blog post if you haven't prepared your development environment yet.

First we have to load data from a dataset. We can use a dataset we already have at hand, or an online dataset. Use the following Python method to load the titanic.csv data file into a Pandas[1] dataframe.

Here I have used my downloaded CSV file. You can download that file from https://github.com/caesar0301/awesome-public-datasets/tree/master/Datasets or Google it.


def load_data():
  df = pand.read_csv("/home/malintha/projects/ML/datasets/titanic.csv");
  return df

Before you use Pandas functions, you have to import the module.

import pandas as pand

Now we can print the first few rows of the dataframe using

print(df.head(), end = "\n\n")

And it will output

PassengerId  Survived  Pclass  Name                                                  Sex     Age  SibSp  Parch  Ticket            Fare     Cabin  Embarked
1            0         3       Braund, Mr. Owen Harris                               male    22   1      0      A/5 21171         7.25            S
2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Thayer)   female  38   1      0      PC 17599          71.2833  C85    C
3            1         3       Heikkinen, Miss. Laina                                female  26   0      0      STON/O2. 3101282  7.925           S
4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)          female  35   1      0      113803            53.1     C123   S



We can remove the "Name", "Ticket" and "PassengerId" features from the dataset, as they are not very important compared to the other features. We can use Pandas' 'drop' facility to remove columns from a dataframe.

df.drop(['Name', 'Ticket', 'PassengerId'], inplace=True, axis=1)

The next task is mapping nominal data to integers in order to create the model in scikit-learn.

Here we have 3 nominal features in our dataset.
  1. Sex
  2. Cabin
  3. Embarked

We can replace the original values with integers using the following code segment.


def map_nominal_to_integers(df):
  df_refined = df.copy()
  sex_types = df_refined['Sex'].unique()
  cabin_types = df_refined['Cabin'].unique()
  embarked_types = df_refined["Embarked"].unique()
  sex_types_to_int = {name: n for n, name in enumerate(sex_types)}
  cabin_types_to_int = {name: n for n, name in enumerate(cabin_types)}
  embarked_types_to_int = {name: n for n, name in enumerate(embarked_types)}
  df_refined["Sex"] = df_refined["Sex"].replace(sex_types_to_int)
  df_refined["Cabin"] = df_refined["Cabin"].replace(cabin_types_to_int)
  df_refined["Embarked"] = df_refined["Embarked"].replace(embarked_types_to_int)
  return df_refined

We have one more step to shape up our dataset. If you look at the refined dataset carefully, you may notice there are "NaN" values for some of the Age values. We should replace these NaNs with an appropriate integer value. Pandas provides a built-in function for this; I will use 0 as the replacement for NaN.

df["Age"].fillna(0, inplace=True)

Now we are all set to build the decision tree from our refined dataset. We have to choose the features and the target value for the decision tree.

features = ['Pclass','Sex','Age','SibSp','Parch','Fare','Cabin','Embarked']
X = df[features]
Y = df["Survived"]

Here, X is the feature set and Y is the target set. Now we build the decision tree. For this you should import the scikit-learn decision tree classifier into your Python module.
from sklearn.tree import DecisionTreeClassifier

And build the decision tree with our feature set and the target set

dt = DecisionTreeClassifier(min_samples_split=20, random_state=9)
dt.fit(X,Y)

Now it is time to do a prediction with our trained decision tree. We can take a sample feature vector and predict the target for it.

Z = [[1, 1, 22.0, 1, 0, 7.25, 0, 0]]
print(dt.predict(Z))


sanjeewa malalgodaNew API Manager Throttling implementation for API Manager 2.0.0

As you know, at the moment we are working on a completely new throttling implementation for the next API Manager release. In this article I will briefly summarize what we are going to do in that release. Please note that these facts depend on the discussions happening on the architecture@wso2.com and developer mailing lists, and this content may change before the release.

Existing API Manager Throttling Design
  • Based on Hazelcast IAtomicLong distributed counters.
  • High performance with accuracy.
  • A bit difficult to design complex policies.
  • Cannot define policies specific to a given API.
  • Can throttle based on request count only.

Advantages of the New Design
  • Based on a Central Policy Server.
  • Extensible and flexible; advanced rules can be defined based on API properties such as headers, users, etc.
  • Efficient Siddhi (https://github.com/wso2/siddhi) based implementation for the throttle core.
  • Efficient DB lookups with the Bloom-filter-based implementation of Siddhi.
  • Throttling policies can be designed based on both request count and bandwidth.

New architecture and message flow.

Screenshot from 2016-05-09 21-37-23.png

Message Flow and How It Works
  • The API Gateway is associated with the new throttle handler.
  • The throttle handler extracts all the relevant properties from the message context and generates throttle keys.
  • To check API-level throttling, the API-level key is the context:version combination.
  • For resources, it is context:version:resource_path:http_method.
  • The throttle handler does a map lookup for throttled events when a new request reaches the API gateway.
  • Once the throttling check is completed, the handler passes the message context to the agent.
  • The throttle data process and publisher agent asynchronously processes the message and pushes events to the Central Policy Server.
  • The Central Policy Server evaluates complex rules based on the events and updates the topic accordingly.
  • All gateway workers fetch throttled events from the database from time to time in an asynchronous manner.
  • Two layers of cache are used to store throttling decisions.
  • Local decisions are based on map lookups.

So far we have identified the following throttle/rate limit conditions:
  • Number of requests per unit time (what we have now). This can be associated with the tier.
  • Amount of data transferred through the gateway per unit time. This can also be associated with the tier.
  • Dynamic rules (such as blocking some IP or API). These should be applied globally.
  • Rate limiting (this should be applied at node level, as replicating counters would cause performance issues). Ex: the number of requests in flight at a given time is 500 for an API.

Content Based Throttling
We have identified some parameters available in the message context which we can use as throttling parameters.
We may use one or more of them to design a policy.
  • IP Address.
  • IP Address Range.
  • Query Parameters.
  • Transport Headers.
  • Http Verb.
  • Resource path.
Policy Design Scenarios
You may design new policies based on request count per unit time interval or bandwidth for a given time period.
Policies can be designed per API; this subsumes the current resource-level throttling implementation.
Example: For the API named “testAPI”, resource “people”, HTTP GET allows 5 requests per minute and POST allows 3 requests per minute.
If our API supports only mobile devices, we can add a policy at API level to check the user agent and throttle.

System administrators can define a set of policies which apply across all APIs.
Example: If the user bob is identified as a fraudulent user, the admin can set a policy that blocks bob.
In the same way we can block a given IP address, user agent, token, etc.

Policies can be applied at multiple levels such as:
  • API Level
  • Application Level
  • Global Level(custom policy and blocking conditions)
  • Subscription Level
We can create a new policy using the admin dashboard user interface. It then creates the policy file and sends it to the central policy server, which deploys it.
Here I have attached some of the admin dashboard images related to throttling policy design.

How to create API/Resource level policy with multiple conditions

policyEditor1.png



policyEditor2.png


How to block certain requests based on API, Application, IP address and User.
blockEntity.png


How to add and use custom policy to handle custom throttling scenarios based on requirements.
customPolicy.png


Key Advantages
  • Ability to design complex throttle policies.
  • Advanced policy designer user interface.
  • Users can design policies with multiple attributes present in request.
    • Ex: transport headers, body content, HTTP verb etc. 
  • Can design tier by combining multiple policies
    • Ex: For given IP range, given HTTP verb, given header limit access.
  • If client is mobile device throttle them based on user agent header.
  • Can design API specific policies.

sanjeewa malalgodaFix WSO2 API Manager Token generation issue due to no matching grant type(Error occurred while calling token endpoint: HTTP error code : 400)


If you have migrated an API Manager setup, you may sometimes see this error due to missing entries in the tables:
"Error occurred while calling token endpoint: HTTP error code : 400"

A missing grant_type in the IDN_OAUTH_CONSUMER_APPS table can cause this error.
The grant_type may be empty for the Default Application in the IDN_OAUTH_CONSUMER_APPS table, and in the IDN_OAUTH2_ACCESS_TOKEN table the grant_type may be NULL.

When you try to generate tokens for that application, you may see an error like the one below.
"Error occurred while calling token endpoint: HTTP error code : 400"
This happens because the token regeneration process tries to match the grant_type of IDN_OAUTH2_ACCESS_TOKEN with the grant_types of IDN_OAUTH_CONSUMER_APPS.

To fix this, we can set the grant_type of the IDN_OAUTH2_ACCESS_TOKEN table to 'client_credentials' and the grant_types of the IDN_OAUTH_CONSUMER_APPS table to 'urn:ietf:params:oauth:grant-type:saml2-bearer iwa:ntlm implicit refresh_token client_credentials authorization_code password', as sketched below.
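The updates would look roughly like the following (a sketch only; table and column names are from the API Manager 1.x schema and the consumer key is a placeholder, so verify against your own database before running):

-- grant type recorded against the existing tokens
update IDN_OAUTH2_ACCESS_TOKEN
set GRANT_TYPE = 'client_credentials'
where CONSUMER_KEY = '<consumer_key_of_affected_app>';

-- allowed grant types of the OAuth application
update IDN_OAUTH_CONSUMER_APPS
set GRANT_TYPES = 'urn:ietf:params:oauth:grant-type:saml2-bearer iwa:ntlm implicit refresh_token client_credentials authorization_code password'
where CONSUMER_KEY = '<consumer_key_of_affected_app>';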

If multiple applications are affected, do the same for all of them, and then restart the servers.
After that, token generation should work again.

Chathurika Erandi De SilvaStatistics redefined -> ESB Analytics Server - First peek


I am writing this blog post as an introduction to the upcoming WSO2 ESB Analytics server, since I am working with it these days. It will be released with ESB 5.0.0 and, once released, can be downloaded from here.

The Analytics server provides a comprehensive graphical view of the requests received by a given proxy service, sequence, API, inbound endpoint or endpoint. These requests can be viewed on an hourly, daily, monthly or yearly basis; if you need finer granularity, a custom-defined time frame can be used as well. Based on the time frame you select, a diagrammatic representation of the overall requests received by that particular entity is given, incorporating both the count and the percentage of successes and failures.


Diagrammatic view of overall request count per proxy





Furthermore, it provides a graphical representation of the message count and message latency against time. These can be used in a production environment to view and understand many important aspects, such as peaks.

Message Count and Message Latency Graphs





The messages passed back and forth in the ESB for each and every request are listed; this is provided by the tracing capability.



By clicking on a particular message, the user can view the entire message flow as well as the properties of the message in detail, which will be discussed in a subsequent post.



 

Sameera JayasomaResolving Startup Order of Carbon Components in WSO2 Carbon 5.0.0

In my previous post https://medium.com/@sameera.jayasoma/startup-order-resolving-mechanisms-in-osgi-48aecde06389, I explained the startup…

Dimuthu De Lanerolle

Troubleshooting Wso2 TAF
=====================


This is a series of important clues to overcome bugs we may encounter while working with WSO2 TAF (Test Automation Framework).


1. Error

When building WSO2 TAF, if you get something like this on the console:

diamond operator is not supported in -source 1.6
  (use -source 7 or higher to enable diamond operator)
  
Solution
Add the Maven compiler plugin to the pom.xml file.


            <plugin>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <version>2.3.1</version>
                   <inherited>true</inherited>
                       <configuration>
                               <source>1.8</source>
                               <target>1.8</target>
                       </configuration>
            </plugin>

Danushka FernandoCreate an application in WSO2 App Cloud using Maven Plugins

In the application development life cycle, continuous integration is an important factor: how easily can something built on a build server be deployed? You can simply use the maven exec plugin to run curl commands that call the REST APIs.

Following is an example. Before calling the create-application API we need to call the login API and establish a logged-in session. To do that we call the login API with -c cookies and the create-application API with -b cookies.

       <plugin>  
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<id>login</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-c</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>-F</argument>
<argument>action=login</argument>
<argument>-F</argument>
<argument>userName=<email @ replaced with .>@<tenant domain></argument>
<argument>-F</argument>
<argument>password=<password></argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/user/login/ajax/login.jag</argument>
</arguments>
</configuration>
</execution>
<execution>
<id>create application</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-b</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/application/application.jag</argument>
<argument>-F</argument>
<argument>action=createApplication</argument>
<argument>-F</argument>
<argument>applicationName=Buzzwords&#x20;Backend</argument>
<argument>-F</argument>
<argument>applicationDescription=API&#x20;Producer&#x20;application&#x20;for&#x20;buzzword&#x20;sample</argument>
<argument>-F</argument>
<argument>conSpecMemory=512</argument>
<argument>-F</argument>
<argument>conSpecCpu=300</argument>
<argument>-F</argument>
<argument>runtime=2</argument>
<argument>-F</argument>
<argument>appTypeName=mss</argument>
<argument>-F</argument>
<argument>applicationRevision=${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</argument>
<argument>-F</argument>
<argument>uploadedFileName=${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>runtimeProperties=runtimeProperties=[{"key":"k1","value":"e1"}]</argument>
<argument>-F</argument>
<argument>tags=[{"key":"k1","value":"t1"}]</argument>
<argument>-F</argument>
<argument>fileupload=@${project.build.directory}/${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>isFileAttached=true</argument>
<argument>-F</argument>
<argument>isNewVersion=true</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>


You don't have to deploy it each time you build it, so you can bind the executions to the deploy phase as above. However, the deploy phase might also try to deploy the artifact to Nexus; to stop that, you can skip the default deploy by adding the following.

       <plugin>  
<artifactId>maven-deploy-plugin</artifactId>
<version>2.7</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>

In App Cloud, to deploy changes we need to create a new version, so we always need to increase the version name in the create request. You can use the build-helper plugin and the replacer plugin in combination. With the following configuration I define a property and, on each deploy, replace it with the next patch version number.

       <plugin>  
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.10</version>
<executions>
<execution>
<phase>deploy</phase>
<id>parse-version</id>
<goals>
<goal>parse-version</goal>
</goals>
<configuration>
<versionString>${appcloud.version}</versionString>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.google.code.maven-replacer-plugin</groupId>
<artifactId>replacer</artifactId>
<version>1.5.3</version>
<executions>
<execution>
<phase>deploy</phase>
<goals>
<goal>replace</goal>
</goals>
</execution>
</executions>
<configuration>
<file>pom.xml</file>
<replacements>
<replacement>
<token>${appcloud.version}</token>
<value>${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</value>
</replacement>
</replacements>
</configuration>
</plugin>


And you need to have a property like below as well.

   <properties>  
<appcloud.version>1.0.7</appcloud.version>
</properties>

The rest of the details of the APIs can be found in [1]. Following is the full build tag and the properties tag in the pom.xml. If you run mvn clean install, this will not get triggered; it will only trigger when you run mvn deploy.


 <build>  
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.10</version>
<executions>
<execution>
<phase>deploy</phase>
<id>parse-version</id>
<goals>
<goal>parse-version</goal>
</goals>
<configuration>
<versionString>${appcloud.version}</versionString>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.google.code.maven-replacer-plugin</groupId>
<artifactId>replacer</artifactId>
<version>1.5.3</version>
<executions>
<execution>
<phase>deploy</phase>
<goals>
<goal>replace</goal>
</goals>
</execution>
</executions>
<configuration>
<file>pom.xml</file>
<replacements>
<replacement>
<token>${appcloud.version}</token>
<value>${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</value>
</replacement>
</replacements>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<id>login</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-c</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>-F</argument>
<argument>action=login</argument>
<argument>-F</argument>
<argument>userName=<email @ replaced with .>@<tenant domain></argument>
<argument>-F</argument>
<argument>password=<password></argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/user/login/ajax/login.jag</argument>
</arguments>
</configuration>
</execution>
<execution>
<id>create application</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-b</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/application/application.jag</argument>
<argument>-F</argument>
<argument>action=createApplication</argument>
<argument>-F</argument>
<argument>applicationName=Buzzwords&#x20;Backend</argument>
<argument>-F</argument>
<argument>applicationDescription=API&#x20;Producer&#x20;application&#x20;for&#x20;buzzword&#x20;sample</argument>
<argument>-F</argument>
<argument>conSpecMemory=512</argument>
<argument>-F</argument>
<argument>conSpecCpu=300</argument>
<argument>-F</argument>
<argument>runtime=2</argument>
<argument>-F</argument>
<argument>appTypeName=mss</argument>
<argument>-F</argument>
<argument>applicationRevision=${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</argument>
<argument>-F</argument>
<argument>uploadedFileName=${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>runtimeProperties=runtimeProperties=[{"key":"k1","value":"e1"}]</argument>
<argument>-F</argument>
<argument>tags=[{"key":"k1","value":"t1"}]</argument>
<argument>-F</argument>
<argument>fileupload=@${project.build.directory}/${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>isFileAttached=true</argument>
<argument>-F</argument>
<argument>isNewVersion=true</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.7</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
<properties>
<microservice.mainClass>org.wso2.carbon.mss.sample.Application</microservice.mainClass>
<appcloud.version>1.0.7</appcloud.version>
</properties>


[1] https://docs.wso2.com/display/AppCloud/Published+APIs

Prabath SiriwardenaHow Netflix secures Microservices with short-lived certificates?

Today we had our 6th Silicon Valley IAM meetup at the WSO2 office in Mountain View. We were glad to have Bryan Payne from Netflix talk on the topic 'PKI at Scale Using Short-Lived Certificates'. Bryan leads the Platform Security team at Netflix; prior to Netflix, he was the Director of Security Research at Nebula.

This post on Medium is written based on Bryan's talk at the meetup and other related resources.

Malintha AdikariPreparing Machine Learning developing environment



1. Installing PyCharm

Download and install PyCharm from https://www.jetbrains.com/pycharm/download/#section=linux

2. Installing pip

pip is an easy-to-use installer for Python packages.

$ sudo apt-get install python-pip python-dev build-essential 
$ sudo pip install --upgrade pip
$ sudo pip install --upgrade virtualenv

3. Installing required python packages

$ pip install numpy
$ pip install scipy
$ pip install scikit-learn

4. Setting up PyCharm


1. Go to the PyCharm download folder and extract the pycharm-{}.tar.gz file into a preferred location.

2. Execute the pycharm.sh file.

3. Click File -> New Project and create a project, providing a project name.

4. Right click on your project -> New -> Python File and create a new Python file, providing a file name.

5. Add the following import lines to your Python file.


HelloWorld.py

import csv
import numpy
import scipy
from sklearn import preprocessing
from sklearn import neighbors
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB
from sklearn import cross_validation
from sklearn.grid_search import GridSearchCV





Note: If you get any import errors:

  • Click File -> Default Settings -> Project Interpreter

  • Select a Python 2.7.x version

  • Click Apply -> OK
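
Once the interpreter is configured, a quick sanity check along the following lines can be run; this is only a minimal sketch using scikit-learn's bundled iris dataset, and it relies on the same (older) scikit-learn modules imported above.

# sanity_check.py - train a small Naive Bayes model to confirm the packages work
from sklearn import datasets, metrics
from sklearn.cross_validation import train_test_split
from sklearn.naive_bayes import GaussianNB

# Load a small sample dataset and split it into training and test sets
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

# Train the classifier and print its accuracy on the held-out data
model = GaussianNB()
model.fit(X_train, y_train)
print(metrics.accuracy_score(y_test, model.predict(X_test)))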


Happy coding...!!!!




 



Lalaji SureshikaSharing applications and subscriptions across multiple application developers through WSO2 API Store



In WSO2 APIM versions before 1.9.0, only the application developer who logs into the API Store can view and manage his applications and subscriptions. But a requirement arose, mainly due to the following two reasons:

-- What if a group of employees in an organization work as developers for an application; how can all of those users get access to the same subscriptions and applications?

-- What if a developer who logs into the API Store leaves the organization, and the organization wants to keep managing the subscriptions and applications he created under the organization's name, while only prohibiting the departed developer from accessing them?

Since the above two requirements are valid from an app development organization's perspective, we have introduced the feature of sharing applications and subscriptions across user groups from APIM 1.9.0 onwards. The API Manager provides the facility for users of a specific logical group to view each other's applications and subscriptions.

We have written this feature with the capability to extend it depending on an organization's requirements, as the attribute that defines the logical user group will vary between organizations. For example:

1) In one organization, sharing of applications and subscriptions needs to be controlled based on user roles.

2) In another scenario, an API Store can be run as a common store across users from multiple organizations, and in that case user grouping has to be done based on an organization attribute.

Because of the above, the flow of sharing apps/subscriptions is as below.


  1. An app developer of an organization tries to log in to the API Store.
  2. The underlying APIM code then checks whether that API Store server's api-manager.xml has the <GroupingExtractor> config enabled and whether a custom Java class implementation is defined inside it.
  3. If so, that Java class implementation runs and a group ID is set for the logged-in user.
  4. Once the app developer has logged in and tries to access the 'My Applications' and 'My Subscriptions' pages, the underlying code returns all the applications and subscriptions saved in the database based on the user's 'Group ID'.
With the above approach, applications and subscriptions are shared based on the 'Group ID' defined by the custom implementation configured in <GroupingExtractor> of api-manager.xml.
By default, we ship a sample Java implementation, "org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl", for this feature, which considers the organization name a user gives at sign-up to the API Store as the group ID. This custom Java implementation extracts the claim http://wso2.org/claims/organization of the user who tries to log in and uses the value specified in that claim as the group ID. This way, all users who specify the same organization name belong to the same group and can therefore view each other's subscriptions and applications.
For more information on the default implementation for sharing subscriptions and applications, please refer to https://docs.wso2.com/display/AM190/Sharing+Applications+and+Subscriptions
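For reference, enabling this default implementation in api-manager.xml (inside the <APIStore> section, as described later in this post) would look roughly like this:

<GroupingExtractor>org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl</GroupingExtractor>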
In a real organization the requirement can be a bit different. The API Manager also provides the flexibility to change this default group ID extracting implementation.
In this blog post, I'll explain how to write a group ID extracting extension based on the use case below.

Requirement
An organization wants to share subscriptions and applications based on the user roles of the organization. They have disabled the 'signup' option for users to access the API Store, and their administrator grants users the rights to access the API Store. Basically, the application developers of that organization can be categorized into two role levels.
  1. Application developers with the 'manager' role
These developers control the subscriptions of mobile applications deployed in the production environment through the API Store.
  2. Application developers with the 'dev' role
These developers control the subscriptions of mobile applications deployed in the testing environment through the API Store.
The requirement is to share the applications and subscriptions within each of these two roles separately.

Solution
The above can be achieved by writing a custom grouping extractor class that sets the 'Group ID' based on user roles.
1. First, write a Java class implementing the org.wso2.carbon.apimgt.api.LoginPostExecutor interface and make it a Maven module.
2. Then implement the logic for the 'getGroupingIdentifiers()' method of the interface.
In this method, it has to extract two separate 'Group ID's: one for users having the 'manager' role and one for users having the 'dev' role. Below is sample logic implementing this method for a similar requirement. You can find the complete code from here.

   public String getGroupingIdentifiers(String loginResponse) {
       JSONObject obj;
       String username = null;
       String groupId = null;
       try {
           obj = new JSONObject(loginResponse);
           //Extract the username from login response
           username = (String) obj.get("user");
           loadConfiguration();
           /*Create client for RemoteUserStoreManagerService and perform user management operation*/
           RoleBasedGroupingExtractor extractor = new RoleBasedGroupingExtractor(true);
           //create web service client for userStoreManager
           extractor.createRemoteUserStoreManager();
           //Get the roles of the user
           String[] roles = extractor.getRolesOfUser(username);
           if (roles != null) { //If user has roles
               //Match the roles to check whether he/she has the manager/dev role
               for (String role : roles) {
                   if (Constants.MANAGER_ROLE.equals(role)) {
                       //Set the group id as role name
                       groupId = Constants.MANAGER_GROUP;
                   } else if (Constants.ADMIN_ROLE.equals(role)) {
                       //Set the group id as role name
                       groupId = Constants.ADMIN_GROUP;
                   }
               }
           }
       } catch (JSONException e) {
           log.error("Exception occurred while trying to get group Identifier from login response");
       } catch (org.wso2.carbon.user.api.UserStoreException e) {
           log.error("Error while checking user existence for " + username);
       } catch (IOException e) {
           log.error("IO Exception occurred while trying to get group Identifier from login response");
       } catch (Exception e) {
           log.error("Exception occurred while trying to get group Identifier from login response");
       }
       //return the group id
       return groupId;
   }
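
The Constants class referenced above is not shown in the snippet; a minimal sketch of what it could contain is given below. The role and group names here are assumptions chosen to match the 'manager'/'dev' requirement, so adjust them to the actual role names in your user store.

public final class Constants {
    // Role names expected in the user store (assumed values)
    public static final String MANAGER_ROLE = "manager";
    public static final String ADMIN_ROLE = "dev";
    // Group IDs returned by the extractor for each role (assumed values)
    public static final String MANAGER_GROUP = "manager_group";
    public static final String ADMIN_GROUP = "dev_group";

    private Constants() {
    }
}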
3. Build the Java Maven module and copy the jar into the {AM_Home}/repository/components/lib folder.
4. Then open the api-manager.xml of the AM server running the API Store, located at {AM_Home}/repository/conf, uncomment the <GroupingExtractor> config inside the <APIStore> config, and add the name of the custom Java class you wrote.
For example: <GroupingExtractor>org.wso2.sample.gropuid.impl.RoleBasedGroupingExtractor</GroupingExtractor>
5. Then restart the APIM server.
6. Then try accessing the API Store as different users with the same 'Group ID' value. For example, log in to the API Store as a developer having the 'manager' role and create a subscription. Then log in as another user who also has the 'manager' role and check his 'My Applications' and 'My Subscriptions' views in the API Store. The second user will be able to see the application and subscription created by the first user in his API Store view, as below.
Then try to log in as an app developer with the 'dev' role as well. He will not be able to see the subscriptions/applications of users with the 'manager' role.
  

  



Kalpa WelivitigodaWSO2 Application Server 6.0.0-M2 Released !

Welcome to WSO2 Application Server 6.0.0, the successor of WSO2 Carbon based Application Server. WSO2 Application Server 6.0.0 is a complete revamp and is based on vanilla Apache Tomcat. WSO2 provides a number of features by means of extensions to Tomcat to add/enhance the functionality. It provides first class support for generic web applications and JAX-RS/JAX-WS web applications. The performance of the server and individual application can be monitored by integrating WSO2 Application Server with WSO2 Data Analytics Server. WSO2 Application Server is an open source project and it is available under the Apache Software License (v2.0).

Read more at https://medium.com/@callkalpa/wso2-application-server-6-0-0-m2-released-97cdc4da1987#.udebn5roi

sanjeewa malalgodaHow to fix file upload issue due to header dropping in WSO2 API Manager 1.10

In the last ESB runtime release we introduced a new property (http.headers.preserve) to preserve headers. As a result, sometimes the Content-Type (or any other header) may not be passed to the back end, and that can cause this type of issue.

To fix it, please add http.headers.preserve = Content-Type to the following file in the product distribution and restart the server:
repository/conf/passthru-http.properties

Hope this solution works for you. This also fixes issues caused by a missing media type (charset) at the Pass-Through Transport level.

sanjeewa malalgodaTuning WSO2 API Manager gateway and key manager in distributed deployment.

I have discussed tuning WSO2 API Manager in a previous post as well. In this article I will list some of the configurations related to a distributed deployment where we have gateways and key managers. Please try adding the configurations below and see how they help to improve performance.

We may tune the Synapse configuration by editing the /repository/conf/synapse.properties file.
synapse.threads.core=100
synapse.threads.max=250
synapse.threads.keepalive=5
synapse.threads.qlen=1000

The validation interval can be increased to avoid frequent connection validations. At the moment it is set to 30000 ms.
<testOnBorrow>true</testOnBorrow>

<validationQuery>SELECT 1</validationQuery>
<validationInterval>120000</validationInterval>

Also consider the following database tuning parameters as per your database administrator's recommendation (I have listed sample values we use for performance tests).
<maxWait>60000</maxWait>

<initialSize>20</initialSize>
<maxActive>150</maxActive>
<maxIdle>60</maxIdle>
<minIdle>40</minIdle>
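
These connection pool parameters typically go inside the relevant datasource definition in the repository/conf/datasources/master-datasources.xml file. The following is only a rough sketch; the datasource name, JDBC URL and credentials are placeholders and should match your own deployment.

<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db-host:3306/apimgtdb</url>
            <username>apimuser</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>120000</validationInterval>
            <maxWait>60000</maxWait>
            <initialSize>20</initialSize>
            <maxActive>150</maxActive>
            <maxIdle>60</maxIdle>
            <minIdle>40</minIdle>
        </configuration>
    </definition>
</datasource>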


Add the following parameters to enable the gateway resource and key caches.
<EnableGatewayKeyCache>true</EnableGatewayKeyCache>

<EnableGatewayResourceCache>true</EnableGatewayResourceCache>


For the key manager the following entry is enough. Since the gateway cache is enabled, we can disable the key manager cache.
But if you have a JWT use case, please enable the following.
<EnableJWTCache>true</EnableJWTCache>



We need HTTP access logs to track incoming and outgoing messages.
But for this deployment, if we assume the key managers are running in a DMZ, there is no need to track HTTP access,
so we may disable HTTP access logs for the key manager. We need to consider this parameter case by case; if you don't use HTTP access logs you can consider this option.
Here I assume we are using the web service based key validation call from the gateway to the key manager (not the Thrift client).

To do that, add the following entry to the /repository/conf/log4j.properties file.
log4j.logger.org.apache.synapse.transport.http.access=OFF

sanjeewa malalgodaHow to avoid swagger console issue in API Manager 1.9.1 due to "Can't read from server. It may not have the appropriate access-control-origin settings." error

Sometimes when you use the API Manager swagger console on the store side you may see this error: "Can't read from server. It may not have the appropriate access-control-origin settings.".



There is one other simple workaround for this issue, and you can use the same for your deployment. If double quotes are used for labels in the swagger document, it can cause this type of issue.

If you can manage with the provided workaround for 1.9, the issue will not be there when you upgrade to the next version (1.10), as mentioned earlier.

So here I have attached one sample with the error and one with the fix.

Problematic swagger definition

{
  "paths": {
    "/*": {
      "put": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "parameters": [
          {
            "schema": {
              "type": "object"
            },
            "description": "Request Body",
            "name": "Payload",
            "required": false,
            "in": "body"
          }
        ],
        "responses": {
          "200": {}
        }
      },
      "post": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "parameters": [
          {
            "schema": {
              "type": "object"
            },
            "description": "Request Body",
            "name": "Payload",
            "required": false,
            "in": "body"
          }
        ],
        "responses": {
          "200": {}
        }
      },
      "get": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "responses": {
          "200": {}
        }
      },
      "delete": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "responses": {
          "200": {}
        }
      },
      "head": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "responses": {
          "200": {}
        }
      }
    }
  },
  "definitions": {
    "Structure test \"test\" ssssss": {
      "properties": {
        "horaireCotation": {
          "description": "Horaire de cotation",
          "type": "string"
        },
        "statut": {
          "type": "string"
        },
        "distanceBarriere": {
          "format": "double",
          "type": "number"
        },
        "premium": {
          "format": "double",
          "type": "number"
        },
        "delta": {
          "format": "double",
          "type": "number"
        },
        "pointMort": {
          "format": "double",
          "type": "number"
        },
        "elasticite": {
          "format": "double",
          "type": "number"
        }
      }
    }
  },
  "swagger": "2.0",
  "info": {
    "title": "hello",
    "version": "1.0"
  }
}

Corrected Swagger definition

{
    'paths': {
        '/*': {
            'put': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'parameters': [
                    {
                        'schema': {
                            'type': 'object'
                        },
                        'description': 'Request Body',
                        'name': 'Payload',
                        'required': false,
                        'in': 'body'
                    }
                ],
                'responses': {
                    '200': {}
                }
            },
            'post': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'parameters': [
                    {
                        'schema': {
                            'type': 'object'
                        },
                        'description': 'Request Body',
                        'name': 'Payload',
                        'required': false,
                        'in': 'body'
                    }
                ],
                'responses': {
                    '200': {}
                }
            },
            'get': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'responses': {
                    '200': {}
                }
            },
            'delete': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'responses': {
                    '200': {}
                }
            },
            'head': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'responses': {
                    '200': {}
                }
            }
        }
    },
    'definitions': {
        'Structure test \"test\" ssssss': {
            'properties': {
                'horaireCotation': {
                    'description': 'Horaire de cotation',
                    'type': 'string'
                },
                'statut': {
                    'type': 'string'
                },
                'distanceBarriere': {
                    'format': 'double',
                    'type': 'number'
                },
                'premium': {
                    'format': 'double',
                    'type': 'number'
                },
                'delta': {
                    'format': 'double',
                    'type': 'number'
                },
                'pointMort': {
                    'format': 'double',
                    'type': 'number'
                },
                'elasticite': {
                    'format': 'double',
                    'type': 'number'
                }
            }
        }
    },
    'swagger': '2.0',
    'info': {
        'title': 'hello',
        'version': '1.0'
    }
}


Also, if you would like to fix this issue by editing the jaggery file, that is also possible. Please find the instructions below.

Edit the store/site/blocks/api-doc/ajax/get.jag file and add the following instead of just print(jsonObj):
print(JSON.stringify(jsonObj));

Prabath SiriwardenaJSON Message Signing Alternatives

In this post we explore the following alternatives available to sign a JSON message and then build a comparison between them.
  • JSON Web Signature (JWS) 
  • JSON Cleartext Signature (JCS) 
  • Concise Binary Object Representation (CBOR) Object Signing 

Chathurika Erandi De SilvaTesting Dynamic Timeout with Property Mediator


Must READ


ESB 500 is the upcoming release of WSO2 Enterprise Service Bus. Since I am working on it these days, I thought of writing on "Dynamic Timeout for Endpoints".

The following sample is currently tested in ESB 500 Alpha.

Read if you want an explanation on Dynamic Timeout for Endpoints
Testing Dynamic Timeout for Endpoints with Property mediator

In order to maintain the dynamic behaviour, query parameters are used to send the timeout value with the request (for testing purposes only).

Sample sequence configuration

<sequence name="dyn_seq_2" xmlns="http://ws.apache.org/ns/synapse">
   <property expression="$url:a" name="timeout" scope="default"
       type="INTEGER" xmlns:ns="http://org.apache.synapse/xsd"/>
   <send>
       <endpoint>
           <address uri="http://<ip>:8080/erandi">
               <timeout>
                   <duration>{get-property('timeout')}</duration>
                   <responseAction>discard</responseAction>
               </timeout>
           </address>
       </endpoint>
   </send>
</sequence>

As illustrated here, using the XPath expression {get-property('timeout')} inside the "duration" element enables us to achieve the dynamic behaviour. When the XPath expression is evaluated, the value referenced is read and used. The Property mediator reads the query parameter "a" and obtains the value passed with it.
Testing the sample

For testing purposes, I have set up a mock REST service using SoapUI with a response delay. Next, we need to invoke the above sequence (you can use either an API or an Inbound Endpoint for this purpose) using query parameters.

Sample Request
http://<ip>:8280/testapi?a=5000

When this service is invoked through the ESB, the following log will be printed, indicating the timeout as configured dynamically.

[2016-05-04 16:07:44,571]  WARN - TimeoutHandler Expiring message ID : urn:uuid:8de1fdf9-8e0e-43a9-9a4b-2ab086fd019e; dropping message after timeout of : 5 seconds

Chathurika Erandi De SilvaHow Do I integrate WSO2 ESB and WSO2 DSS: A basic demonstration


Sample Scenario


Hotel GreatHeights needs to access an externally hosted service to evaluate its customers' loyalty towards the hotel. The external service takes in the customer ID and the customer name and assesses the customer's loyalty towards the hotel based on previous expenditure and so on. Hotel GreatHeights has a system that sends out the guest identification in the form of an ID/passport number and the guest name. Obviously, the input parameters and the expectations of the external system do not match. Further, Hotel GreatHeights has a full database that consists of the guest identification, customer ID, customer name, etc., and it has shared only the needed columns with the external system to maintain privacy. In addition, the hotel does not want to change its system interface.

The above scenario brings out a very simple, basic integration of two legacy systems. In simpler terms, legacy systems are hard to change, and the integration should facilitate the communication between the two.

There are two aspects here as follows

  1. The client legacy system (Hotel GreatHeights system) sends in guest personal identification and name, whereas the end service expects the customer id and customer name
  2. The guest personal identification should be mapped against the hotel database to find the customer id. The hotel is not sharing the guest personal identification number with the external system
In order to address the above aspects, the integrator should query the database using the guest personal identification and find the customer id and thereafter transform the request to match the expectations of the backend service.

Sample Implementation using WSO2 ESB and WSO2 DSS

In order to achieve the objective, the implementation can be categorized as below

  1. Querying the database: A dataservice will be hosted in WSO2 DSS that communicates with the database to obtain necessary values
  2. Transformation: WSO2 ESB mediators will be used to obtain the above queried value and transform the request.

Sample Data Service in WSO2 DSS 




The above data service returns the customer id in the response.


Sample Sequence in WSO2 ESB




The above sequence uses the PayloadFactory mediator and the Call mediator. The first occurrence of the PayloadFactory mediator extracts the guest NIC from the request and uses it in the transformed request to the data service. The transformed request is sent to the data service using the Call mediator, which provides blocking invocation; when the Call mediator is executed in blocking mode, it waits for the response without proceeding to the next mediator. The second occurrence of the PayloadFactory mediator transforms the message as needed by the backend service, using the customer ID obtained through the data service. A rough sketch of such a sequence is given below.
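
The following is only a rough sketch of what such a sequence could look like; the data service endpoint URL, namespaces, element names and payload formats are assumptions for illustration only.

<sequence name="LoyaltySequence" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Keep the guest name for later, since the call response will replace the message body -->
    <property name="guestName" expression="//*[local-name()='guestName']" scope="default"/>
    <!-- Build the data service request using the guest NIC from the incoming message -->
    <payloadFactory media-type="xml">
        <format>
            <ds:getCustomerId xmlns:ds="http://ws.wso2.org/dataservice">
                <ds:guestNIC>$1</ds:guestNIC>
            </ds:getCustomerId>
        </format>
        <args>
            <arg evaluator="xml" expression="//*[local-name()='guestNIC']"/>
        </args>
    </payloadFactory>
    <!-- Blocking call to the data service hosted in WSO2 DSS -->
    <call blocking="true">
        <endpoint>
            <address uri="http://localhost:9764/services/CustomerDataService"/>
        </endpoint>
    </call>
    <!-- Build the request expected by the loyalty back-end using the returned customer id -->
    <payloadFactory media-type="xml">
        <format>
            <loy:evaluateLoyalty xmlns:loy="http://loyalty.example.com">
                <loy:customerId>$1</loy:customerId>
                <loy:customerName>$2</loy:customerName>
            </loy:evaluateLoyalty>
        </format>
        <args>
            <arg evaluator="xml" expression="//*[local-name()='customerId']"/>
            <arg evaluator="xml" expression="$ctx:guestName"/>
        </args>
    </payloadFactory>
    <send>
        <endpoint>
            <address uri="http://loyalty.example.com/services/LoyaltyService"/>
        </endpoint>
    </send>
</sequence>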

Chathurika Erandi De SilvaDynamic Timeout for Endpoints: WSO2 ESB


Must READ


ESB 500 is the upcoming release of WSO2 Enterprise Service Bus. Since I am working on it these days, I thought of writing on "Dynamic Timeout for Endpoints", which is introduced in ESB 500.

The following sample is currently tested in ESB 500 Alpha. 


Introduction
Before we step into the details, let's see why we need dynamic time-out values in the first place. Before ESB 500, the time-out value was static: we could define a time-out value, but we couldn't change it dynamically. To gain more flexibility, we can now provide the time-out value dynamically. This gives us the opportunity to read an incoming request and set the time-out value through it, as well as to define a value outside of the endpoint configuration and use a reference to it inside the endpoint configuration. With this approach we can change the time-out values without changing the endpoint configuration itself.

Hoping I have given you an insight into dynamic time-outs, let's see how to achieve this with WSO2 ESB using a Property mediator, which defines the time-out value outside of the endpoint configuration.

Sample sequence configuration
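
A sketch of such a sequence is given below; the sequence name and endpoint URL are placeholders, and the 20000 ms value matches the 20 second timeout seen in the log output later in this post.

<sequence name="dyn_seq_1" xmlns="http://ws.apache.org/ns/synapse">
   <!-- The timeout value is defined outside the endpoint configuration -->
   <property name="timeout" value="20000" scope="default" type="INTEGER"/>
   <send>
       <endpoint>
           <address uri="http://<ip>:8080/erandi">
               <timeout>
                   <!-- The XPath expression reads the property value at runtime -->
                   <duration>{get-property('timeout')}</duration>
                   <responseAction>discard</responseAction>
               </timeout>
           </address>
       </endpoint>
   </send>
</sequence>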



As illustrated here, using the XPath expression {get-property('timeout')} inside the "duration" element enables us to achieve the dynamic behaviour. When the XPath expression is evaluated, the value referenced is read and used.

Testing the sample

For testing purposes, I have set up a mock service using SoapUI with a response delay. When this service is invoked through the ESB, the following log will be printed, indicating the timeout as configured dynamically.

[2016-05-04 16:07:44,571]  WARN - TimeoutHandler Expiring message ID : urn:uuid:8de1fdf9-8e0e-43a9-9a4b-2ab086fd019e; dropping message after timeout of : 20 seconds


Chamara SilvaHow to enable HTTP wirelog for non synapse products (WSO2)

As we already know, in WSO2 ESB and APIM we can enable wire logs to trace Synapse messages in various situations. But if you want to see the messages going inside non-Synapse based products such as Governance Registry or Application Server, the following wire log properties can be added to the log4j.properties file:

log4j.logger.httpclient.wire.content=DEBUG
log4j.logger.httpclient.wire.header=DEBUG

Thilini IshakaPart 1: Developing a WS-BPEL Process using WSO2 Developer Studio


In this post I am going to discuss the following items.

What's a BPEL Process?
A BPEL Process is a container where you can declare relationships to external partners, declarations for process data, handlers for various purposes and most importantly, the activities to be executed. 

Let's start with designing a simple workflow.

Create a BPEL process that returns the addition of two integer numbers. For the addition operation, the BPEL process invokes an existing web service and gets the result from it. This web service takes two integers and returns the addition of the two integer values. The web service will be hosted on WSO2 AppServer (or you can use any other application server).

Figure 1, shows the axis2 service that we are going to invoke via the BPEL process. 
Create the axis2 archive (.aar) [Right click on the AdderService Project --> Export Project as a deployable archive and save it].

Now Start WSO2 AppServer (Goto AppServer_HOME/bin --> sh wso2server.sh)

Deploy AdderService.aar on AppServer.
1. Copy aar file to AppServer_HOME/repository/deployment/server/axis2services directory
      OR
2. Using the AppServer Management Console (Add --> AAR Service --> upload the service archive) 
We need to keep the WSDL file of the AdderService, as it is required later when developing the BPEL process.

wget http://localhost:9763/services/AdderService?wsdl
Save the AdderService.wsdl to your local file system.

Figure 1

Let's start with designing the BPEL workflow.
Open eclipse which has WSO2 Developer Studio installed. 
Goto Dashboard (Developer Studio --> Open Dashboard menu and click on BPEL Workflow under Business Process Server category)
Figure 2: Create New BPEL Project


Give a project Name, Namespace and select the Template type. As we are going to create a short running bpel process, select the template type as Synchronous.

Synchronous interaction - Suppose a BPEL process invokes a partner service. The BPEL process then waits for the partner service's operation to complete and respond. After receiving this completion response from the partner service, the BPEL process continues its execution flow. This does not apply to the In-Only operations defined in the WSDL of the partner service.

Usually we'll use asynchronous services for long-lasting operations and synchronous services for operations that return a result in a relatively short time.
Figure 3 

Figure 4

Here you can see the template for our business process. The BPEL editor automatically generates receiveInput and replyOutput activities(Figure 5). Also it will generate partnerLink and variables used in these two activities.

Note: It will automatically generate AdderProcessArtifacts.wsdl and AdderProcess.bpel. If we look at the folder structure of the BPEL process, we can easily figure out these two files.

Figure 5

In our BPEL process we need to invoke an external service, which is the AdderService. To invoke this service we need to assign the input variables to the external service's input, and then the reply from the external service to our BPEL process's output. Therefore, we need two assign activities and one invoke activity here.

Let’s add an assign activity in between receiveInput and replyOutput activities. To add assign activity drag it from the Action section of the Palette.

Figure 6 : AdderProcess workflow

Before filling in the invoke activity, you need to import the AdderService.wsdl into your workflow project.
Figure 7

Now start implementing the business logic.  
Goto Properties of invoke activity.
Goto 'Details' tab and from the 'Operation' drop down list, select 'Create Global Partner Link'
Figure 8

Give a Partner Link Name and click OK.
Figure 9


Now you will be prompted with the window shown in Figure 10. Click on 'Add WSDL'.
Figure 10

Select the WSDL file which you have already imported to the workflow project. Click OK.
Figure 11

In the Partner Link Type window, you should select the correct PortType. then click OK.
Figure 12

Give a name to the Partner Link Type and click Next.
Figure 13


Give a name to the partner role and then select the correct PortType. Now click Finish. We have only one role for this invoke activity. If we have multiple roles (partner roles and my roles), we need to click on Next and create the next role.
Figure 14

Now you need to pick the 'add' operation from the Quick Pick box. To do that, double-click on it.
Figure 15


Now you are done with implementing invoke activity. Next step is to implement two assign activities. Before doing that, you need to identify what are the inputs and outputs of your process. We have two integer values as the request parameters and a resulting integer as the response.

Open the AdderProcessArtifacts.wsdl and find the Service, PortType and the binding there. Click on the arrow next to AdderProcessRequest.
Figure 16


Add two integer elements as shown in Figure 17 [to add an element, right click --> Add Element]. Select the element type as int from the drop-down list.
Figure 17


Configure the AdderProcessResponse part similarly to the above step.
There you need to click on the arrow next to AdderProcessResponse.
Figure 18

For the response, you have only one integer element as the output.
Now save the wsdl file and close it.
Figure 19

Go back to the bpel file and start implementing the first assign activity, that is 'AssignInputVars'.
Goto 'Details' tab and click on 'New'.
Do the mapping as shown in Figure 20.
Figure 20


It will automatically prompt for the initialization. Click on 'Yes'.
Figure 21


Figure 22

Now you are done with configuring the First Assign Activity.
Figure 23


Configure the second assign activity, that is 'AssignoutputVars'. 
Figure 24

Allow for the automatic variable initialization for response.
Figure 25

Now you are done with the bpel process flow design. Now open the deploy.xml (Deployment Descriptor).

Here you can specify the process state after deployment (whether it is activated, deactivated or retired), set the process to execute only in memory, and configure the Inbound Interfaces (Services) and Outbound Interfaces (Invokes), etc. A rough sketch of such a descriptor is given below.
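
As a reference only, a minimal sketch of what such a deployment descriptor could look like is given below; the namespaces, service names and port names are assumptions and will differ from the actual generated file.

<deploy xmlns="http://www.apache.org/ode/schemas/dd/2007/03"
        xmlns:pns="http://wso2.org/bps/sample/AdderProcess"
        xmlns:sns="http://wso2.org/bps/sample/AdderService">
    <process name="pns:AdderProcess">
        <!-- Process is active (not deactivated or retired) after deployment -->
        <active>true</active>
        <!-- The service exposed by the process (inbound interface) -->
        <provide partnerLink="client">
            <service name="pns:AdderProcessService" port="AdderProcessPort"/>
        </provide>
        <!-- The partner service invoked by the process (outbound interface) -->
        <invoke partnerLink="AdderServicePL">
            <service name="sns:AdderService" port="AdderServiceHttpSoap11Endpoint"/>
        </invoke>
    </process>
</deploy>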

Figure 26

Now make the BPEL process as a deployable archive (Right click on the AdderProcess workflow --> Export Project as a deployable archive).

Start WSO2 Business Process Server (BPS_HOME/bin --> sh wso2server.sh)
Make the port offset to 1 (Change offset to 1 in BPS_HOME/repository/conf/carbon.xml)

Deploy AdderProcess.zip on WSO2 BPS.
1. Copy zip file to BPS_HOME/repository/deployment/server/bpel directory
      OR
2. Using the BPS Management Console (Processes --> Add --> BPEL Archive(zip) --> upload) 

Figure 27

To test the process, use the TryIt wizard or any other tool (e.g. SoapUI).
Figure 28 : Click on TryIt


Figure 29 : SOAP Request

Figure 30 : SOAP Request/Response 

Here, we get integer (a+b) as the response in the xml output.

Imesh GunaratneHow to Deploy WSO2 Middleware on Cloud Foundry

Cloud Foundry architecture, highlights, drawbacks & steps to deploy WSO2 middleware

Asanka DissanayakeResizing images in one line in Linux

Re-sizing images in one line

I hope you all have had this problem when uploading high quality pics to FB or some other social network.
You can use the following command to resize the images by 50%. You can change the ratio; just replace "50%" with the value you desire.

First, you need to have ImageMagick.

Install ImageMagick with the following command:

sudo apt-get install imagemagick

Then go to the directory that has the photos to be resized.
Run the following command:

mkdir resize;for f in *; do echo "converting $f"; convert $f -resize 50% resize/${f}; done

Then you will see the re-sized files in the resize directory.
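
If you want to confirm the new dimensions, ImageMagick's identify tool can be used on any of the output files; the file name below is just a placeholder.

identify resize/photo.jpg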

Hope this will save someone’s time .. Enjoy !!!

 


Thilina PiyasundaraRunning your WordPress blog on WSO2 App Cloud

WSO2 App Cloud is now supporting Docker base PHP applications. In this blog post I will describe how to install a WordPress blog in this environment. In order to setup a WordPress environment we need to have two things;

  1. Web server with PHP support
  2. MySQL database

If we have both of these we can start setting up WordPress. In WSO2 App Cloud we can use the PHP application type, which is a PHP enabled Apache web server Docker image, as the WordPress hosting web server. App Cloud also provides a database service where you can easily create and manage MySQL databases via the App Cloud user interface (UI).

Note:- 
For the moment WSO2 App Cloud is in beta, therefore these Docker images will have only 12h of lifetime with no data persistence at the file storage level. Data in MySQL databases will be safe unless you override it. If you need more help, don't hesitate to contact Cloud support.

Creating PHP application

Signup or signin to WSO2 App Cloud via http://wso2.com/cloud. Then click on the "App Cloud beta" section.
Then it will redirect you to the App Cloud user interface. Click on 'Add New Application' button on the left hand corner.
This will prompt you to several available applications. Select 'PHP Web Application' box and continue.
Then it will prompt you with a wizard. In that, give a proper name and a version to your application. The name and version will be used to generate the domain name for your application.

There are several options that you can use to upload PHP content to this application. For the moment I will download the wordpress-X.X.X.zip file from the WordPress site and upload it to the application.
In the sections below in the UI you can set the runtime and the container specification. Give the highest Apache version as the runtime and use a minimal container spec, as WordPress does not require much processing power or memory.
If things are all set and the file upload is complete, click on the 'Create' button. You will get the following status pop-up when you click the create button, and it will redirect you to the application when it is complete.
In the application UI, note the URL. Now you can click on the 'Launch App' button so that it will redirect you to your PHP application.
The newly installed WordPress site will look like this.
Now we need to provide database details to it. Therefore, we need to create a database and a user.

Creating database

Go back to the application UI and click on the 'Back to listing' button.
In that UI you can see a button in the top left hand corner called 'Create database'. Click on that.
In the create database UI, give a database name, a database user name and a password. The password needs to pass the password policy, so you can click on 'Generate password' to generate a secure password easily. By the way, if you use the generate password option, make sure you copy the generated password before you proceed with database creation; otherwise you may need to reset the password.

Also note that the tenant domain and a random string will be appended to the end of the database name and the database user name respectively. Therefore, those fields accept only a small number of input characters.
If all is set, click on the 'Create database' button to proceed. After successfully creating the database it will redirect you to a database management user interface like the following.
Now you can use those details to log in to the newly created MySQL database as follows;
$ mysql -h mysql.storage.cloud.wso2.com -p'<password>' -u <username> <database>
eg :-
$ mysql -h mysql.storage.cloud.wso2.com -p'XXXXXXXXX' -u admin_LeWvxS3l wpdb_thilina 
Configuring WordPress

If the database creation is successful and you can log in to it without any issue, we can use those details to configure WordPress.

Go back to the WordPress UI and click on the 'Let's go' button. It will prompt a database configuration wizard. Fill those fields with the details that we got from the previous section.
If the WordPress application can successfully establish a connection with the database using your inputs, it will show you a UI as follows.
On that, click on 'Run the install'. Then WordPress will start populating the database tables and inserting initial data into the given database.

When it is complete it will ask for some basic configurations like the site title, admin user name and password.
Click on 'Install WordPress' after filling in that information. Then it will redirect you to the WordPress admin console login page. Log in to that using the username and password given in the previous section.
So now WordPress is ready to use. But the existing URL is not very attractive. If you have a domain you can use it as the base URL of this application.

Setting custom domain (Optional)

In the application UI, click on the top left three-lines button shown in the following image.
It will show some advanced configurations that we can use. In that list select the last one, the 'Custom URL' option.
It will prompt you with the following user interface. Enter the domain name that you are willing to use.
But before you validate, make sure you add a DNS CNAME for that domain pointing to your application launch URL.

The following is the wizard that I got when adding the CNAME via GoDaddy. This user interface and the options for adding a CNAME will be different for your DNS provider.
You can validate the CNAME by running the 'dig' command in Linux or nslookup in Windows; a sample command is shown below.
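For example, assuming blog.example.com is the custom domain you configured (the domain name here is only a placeholder), the CNAME record can be checked with:

$ dig +short blog.example.com CNAME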
If the CNAME is working click on 'Update'.
 If that is successful you will get the above notification and if you access that domain name it will show your newly created WordPress blog.

Sachith WithanaWSO2 Data Analytics Server: Introduction

WSO2 Data Analytics Server (DAS) is a full blown analytics platform which fuses together batch analytics, stream analytics, predictive analytics and interactive analytics into a single package (how convenient is that?).

This post would (hopefully) act like a FAQ list for DAS newbies!!

Oh, by the way, WSO2 DAS is completely open source AND free (free as in speech and as in free beer :) ). So please be sure to give it a go and play around with the code! And do drop by our mailing lists [2]; we are an open community at WSO2!

Introduction:


The image below summarizes WSO2 DAS elegantly.

Analytics-diagram-v2.jpg

In the simplest terms,
1. It can collect the data from various sources (HTTP, Thrift, External Databases, even the WSO2 ESB! and much more).
2. Run various types of analytics on the data
3. Communicate the results through various ways ( alerts, dashboards ... etc)


So What's underneath?


For batch analytics, we use Apache Spark [3], a clustered large-scale computing framework with up to 100x the performance of Apache Hadoop.

WSO2 Siddhi is one of the most powerful complex event processors around. (Even Uber is using siddhi underneath ;) ). Siddhi powers the real time analytics in DAS. 

Apache Lucene, a high performance full text search engine[4], is powering our interactive analytics.  

Predictive analytics is powered by immensely powerful algorithms that extend Apache Spark's MLlib package. It helps you build models of the data which can then be used to run analytics against!

Okay! what are the "other" advantages? 


I'm glad you asked!

  • Single API for all kinds of processing
    You just want to publish data once and get it over with? Yep, that's how we do it.
  • Pluggable data storage support
    Want to run on HDFS? no problem!
    Already have a Cassandra cluster? we've got that covered!
  • Extremely fast writes!
    Asynchronous and non-blocking nature of our publishing allows extremely fast writes for data!
  • Multiple data publishing methods
    Publish data through PHP, python, java, c++ ...etc
    JMX publishers, log publishers and many more!

What can I do with the analyzed data?

Well, simply put, a lot!

We have built in support for interactive dashboards as shown below. You can view and analyze the data in hundreds of ways.

  

You can also send alerts!! 

You want to send SMSs, Emails for a particular type of occurrence? IT'S BUILT IN!

Want more?
Couple this with WSO2 ESB[5] and the options are limitless.
You can even send pagers, or better yet, trigger a physical alarm!! How cool is that?


WSO2 Data Analytics Server is an extremely powerful tool and I hope this gave a very brief but a fairly comprehensive introduction to the DAS server.

Refer to the documentation for more details [6].

[1] http://wso2.com/products/data-analytics-server/
[2] http://wso2.com/mail/
[3] http://spark.apache.org/
[4] https://lucene.apache.org/core/
[5] http://wso2.com/products/enterprise-service-bus/
[6] https://docs.wso2.com/display/DAS301/WSO2+Data+Analytics+Server+Documentation

Hasitha Aravinda[Sample] Order Processing Process

This sample illustrates usage of WS-BPEL 2.0, WS-HumanTask 1.1 and Rule capabilities in WSO2 Business Process Server and WSO2 Business Rule Server.


Order Processing Flow
  • The client places an order by providing a client ID, item IDs, quantity, shipping address and shipping city.
  • Then the process submits the order information to the invoicing web service, which generates an order ID and calculates the total order value.
  • If the total order value is greater than USD 500, the process requires a human interaction to proceed. When an order requires a human interaction, the process creates a Review HumanTask for regional clerks. If the review task is rejected by one of the regional clerk users, the workflow terminates after notifying the client.
  • Once a regional clerk approves the review task, the workflow invokes the Warehouse Locater rule service to find the nearest warehouse.
  • Once the nearest warehouse is received, the process invokes the Place Order web service to finalize the order.
  • Finally, the user is notified of the estimated delivery date.

This sample contains

Please check out this sample from GitHub.

Sameera JayasomaCarbon JNDI

WSO2 Carbon provides an in-memory JNDI InitialContext implementation. This is available from the WSO2 Carbon 5.0.0. This module also…

Chathurika Erandi De SilvaRafting and Middleware QA are they the same?

Rafting the River Twinkle

Mary is opening a rafting entertainment business based on the river Twinkle. She has a major challenge: her team has to have the best understanding of the river so that they can give the best experience to the customers.

So what did they do? They decided to raft the river first by themselves, because they needed to identify the loopholes and dangers before they take any customers on it.


Rafting and QA?

Don't QA folks do the same thing as Mary and her team did? They perform various activities to identify what works and what is not working. This is crucial because this information is much needed by the customers who will be using a specific product.

QA and Middleware

Designing tests for middleware is not an easy task. It's not the same as designing tests for a simple web app. Middleware testing can be compared to the rafting experience itself, while assuring the quality of a web app is like boating on a large lake.

Assuring the quality of a product such as WSO2 ESB is a challenging task, but I have found the following golden rules of thumb that can be applied to any middleware product.

My Golden Rules of Thumb 

Know where to start

It's important to know where you start designing tests. In order to achieve this, a good understanding of the functionality, as well as how it is to be implemented, is needed. So obviously, the QA has to be technically competent as well as thorough in knowledge of the respective domain. Reading helps a lot, as does trying things out by yourself, so that knowledge can be gained from both.

Have a proper design

A graphical design lets you evaluate your knowledge as well as your competency in the area. QAs working with middleware cannot just stick to black box testing; they have to go for white box testing as well, since they have to ensure the quality of the code itself. So a graphical representation is very valuable in designing the tests and deciding what to test.

Have room for improvement

It is an advantage to be self-driven; finding out about and understanding what you are doing is very important to achieve good middleware testing.

With all of the above, it is easy to put ourselves in the customer's shoes, because in middleware there can be various and unlimited customer demands. If we follow the above rules of thumb, I guess any QA can become a better one, more suited to a middleware platform that changes rapidly.

I'll be discussing more on this, this is just a start...







Prabath SiriwardenaJWT, JWS and JWE for Not So Dummies!

JSON Web Token (JWT) defines a container to transport data between interested parties. It became an IETF standard in May 2015 with the RFC 7519. There are multiple applications of JWT. The OpenID Connect is one of them. In OpenID Connect the id_token is represented as a JWT. Both in securing APIs and Microservices, the JWT is used as a way to propagate and verify end-user identity.
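
As a quick illustration of the container format (the header and claim values below are made-up placeholders), a JWT in its common JWS compact serialization consists of three base64url-encoded parts separated by dots: a JOSE header, a claim set and a signature.

Header (decoded):     {"alg":"RS256","typ":"JWT"}
Claim set (decoded):  {"iss":"https://idp.example.com","sub":"alice","aud":"sample-client","exp":1500000000}
Signature:            computed over base64url(header) + "." + base64url(claim set)

Serialized token:     base64url(header).base64url(claim set).base64url(signature)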


This article on medium explains in detail JWT, JWS and JWE with their applications.

Afkham AzeezAWS Clustering Mode for WSO2 Products



WSO2 Clustering is based on Hazelcast. When WSO2 products are deployed in clustered mode on Amazon EC2, it is recommended to use the AWS clustering mode. As a best practice, add all nodes in a single cluster to the same AWS security group.

To enable AWS clustering mode, you simply have to edit the clustering section in the CARBON_HOME/repository/conf/axis2/axis2.xml file as follows:

Step 1: Enable clustering


<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
enable="true">

Step 2: Change membershipScheme to aws


<parameter name="membershipScheme">aws</parameter>

Step 3: Set localMemberPort to 5701

Any value between 5701 & 5800 is acceptable.
<parameter name="localMemberPort">5701</parameter>


Step 4: Define AWS specific parameters

Here you need to define the AWS access key, secret key & security group. The region, tagKey & tagValue are optional & the region defaults to us-east-1

<parameter name="accessKey">xxxxxxxxxx</parameter>
<parameter name="secretKey">yyyyyyyyyy</parameter>
<parameter name="securityGroup">a_group_name</parameter>
<parameter name="region">us-east-1</parameter>
<parameter name="tagKey">a_tag_key</parameter>
<parameter name="tagValue">a_tag_value</parameter>

Provide the AWS credentials & the security group you created as values of the above configuration items.  Please note that the user account used for operating AWS clustering needs to have the ec2:DescribeAvailabilityZones & ec2:DescribeInstances permissions.

Step 5: Start the server

If everything went well, you should not see any errors when the server starts up, and also see the following log message:

[2015-06-23 09:26:41,674]  INFO - HazelcastClusteringAgent Using aws based membership management scheme

and when new members join the cluster, you should see messages such as the following:
[2015-06-23 09:27:08,044]  INFO - AWSBasedMembershipScheme Member joined [5327e2f9-8260-4612-9083-5e5c5d8ad567]: /10.0.0.172:5701

and when members leave the cluster, you should see messages such as the following:
[2015-06-23 09:28:34,364]  INFO - AWSBasedMembershipScheme Member left [b2a30083-1cf1-46e1-87d3-19c472bb2007]: /10.0.0.245:5701


The complete clustering section in the axis2.xml file is given below:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <parameter name="AvoidInitiation">true</parameter>
    <parameter name="membershipScheme">aws</parameter>
    <parameter name="domain">wso2.carbon.domain</parameter>

    <parameter name="localMemberPort">5701</parameter>
    <parameter name="accessKey">xxxxxxxxxxxx</parameter>
    <parameter name="secretKey">yyyyyyyyyyyy</parameter>
    <parameter name="securityGroup">a_group_name</parameter>
    <parameter name="region">us-east-1</parameter>
    <parameter name="tagKey">a_tag_key</parameter>
    <parameter name="tagValue">a_tag_value</parameter>

    <parameter name="properties">
        <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
        <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
        <property name="subDomain" value="worker"/>
    </parameter>
</clustering>

Afkham AzeezHow AWS Clustering Mode in WSO2 Products Works

In a previous blog post, I explained how to configure WSO2 product clusters to work on Amazon Web Services infrastructure. In this post I will explain how it works.

 WSO2 Clustering is based on Hazelcast.

All nodes having the same set of cluster configuration parameters will belong to the same cluster. What Hazelcast does is, it calls AWS APIs, and then gets a set of nodes that satisfy the specified parameters (region, securityGroup, tagKey, tagValue).

When the Carbon server starts up, it creates a Hazelcast cluster. At that point, it calls EC2 APIs & gets the list of potential members in the cluster. To call the EC2 APIs, it needs the AWS credentials. This is the only time these credentials are used. AWS APIs are only used on startup to learn about other potential members in the cluster.

Once the EC2 instances are retrieved, a Hazelcast node will try to connect to potential members that are running on the same port as its localMember port. By default this port is 5701. If that port is open, it will try to do a Hazelcast handshake and add that member if it belongs to the same cluster domain (group). The new member will repeat the process, trying to connect to the next port (i.e. 5702 by default) in increments of 1, until a port is not reachable.

Here is the pseudocode:

for each EC2 instance e:
    port = localMemberPort
    while canConnect(e, port):
        addMemberIfPossible(e, port)    // a Hazelcast member is running & in the same domain
        port = port + 1

Subsequently, the connections established between members are point to point TCP connections.  Member failures are detected through a TCP ping. So once the member discovery is done, the rest of the interactions in the cluster are same as when the multicast & WKA (Well Known Address) modes are used.

With that facility, you don't have to provide any member IP addresses or hostnames, which may be impossible on an IaaS such as EC2.

NOTE: This scheme of trying to establish connections with open Hazelcast ports from one EC2 instance to another does not violate any AWS security policies because the connection establishment attempts are made from nodes within the same security group to ports which are allowed within that security group.

Prabath SiriwardenaGSMA Mobile Connect vs OpenID Connect

Mobile Connect is an initiative by GSMA. The GSMA represents the interests of mobile operators worldwide, uniting nearly 800 operators with more than 250 companies in the broader mobile ecosystem, including handset and device makers, software companies, equipment providers and internet companies, as well as organizations in adjacent industry sectors. The Mobile Connect initiative by GSMA focuses on building a standard for user authentication and identity services between mobile network operators (MNO) and service providers.


This article on Medium explains the GSMA Mobile Connect API and shows how it differs from the OpenID Connect core specification.

Sameera JayasomaStartup Order Resolving Mechanisms in OSGi

There are a few mechanisms in OSGi to deal with the bundle startup order. The most obvious approach is to use "start levels". The other approach…

Dhananjaya jayasingheApplying security for ESB proxy services...


Security is a major factor we consider when it comes to each and every deployment. WSO2 Enterprise Service Bus is also capable of securing services.

WSO2 ESB 4.8 and previous versions had the capability of applying security to a proxy service from the Admin Console, as in [1].

However, from ESB 4.9.0 we can no longer apply security to a proxy service from the Admin Console of the ESB. We need to use WSO2 Developer Studio version 3.8 for this requirement with ESB 4.9.0.


You can find the documentation on applying security to an ESB 4.9.0 based proxy service here [2]. However, I would like to add a small modification to the doc in [2] at the end.

After securing the proxy according to the document, we need to create the Composite Application Project and export the CAR file. When exporting the CAR file, by default the server role of the Registry project is selected as GovernanceRegistry, as in the image below.




When we deploy that CAR file in the ESB, we get the following exception [3] due to the above server role.

In order to fix the problem, we need to change the server role to ESB as below, since we are going to deploy it in the ESB.






[1] https://docs.wso2.com/display/ESB481/Securing+Proxy+Services
[2] https://docs.wso2.com/display/ESB490/Applying+Security+to+a+Proxy+Service
[3]

 [2016-04-12 14:34:48,658] INFO - ApplicationManager Deploying Carbon Application : MySecondCarProject1_1.0.1.car...  
[2016-04-12 14:34:48,669] INFO - EndpointDeployer Endpoint named 'SimpleStockQuote' has been deployed from file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/SimpleStockQuote_1.0.0/SimpleStockQuote-1.0.0.xml
[2016-04-12 14:34:48,670] INFO - ProxyService Building Axis service for Proxy service : myTestProxy
[2016-04-12 14:34:48,671] WARN - SynapseConfigUtils Cannot convert null to a StreamSource
[2016-04-12 14:34:48,671] ERROR - ProxyServiceDeployer ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2016-04-12 14:34:48,672] ERROR - AbstractSynapseArtifactDeployer Deployment of the Synapse Artifact from file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed!
org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:475)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:112)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
... 22 more
[2016-04-12 14:34:48,673] INFO - AbstractSynapseArtifactDeployer The file has been backed up into : NO_BACKUP_ON_WORKER.INFO
[2016-04-12 14:34:48,673] ERROR - AbstractSynapseArtifactDeployer Deployment of synapse artifact failed. Error reading /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:201)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:475)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:112)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
... 20 more
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
... 22 more
[2016-04-12 14:34:48,674] ERROR - ApplicationManager Error occurred while deploying Carbon Application
org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:213)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:201)
... 20 more
Caused by: org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:475)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:112)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
... 20 more
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
... 22 more



Amila MaharachchiTomcat returns 400 for requests with long headers

We noticed this while troubleshooting an issue which popped up in WSO2 Cloud. We have configured SSO for the API Publisher and Store at WSO2 Identity Server. SSO was working fine except for one scenario. We checked the SSO configuration and couldn't find anything wrong.

Then we checked the load balancer logs. They revealed that the LB was passing the request to the server, i.e. the Identity Server, but getting a 400 back from it. Then we looked at the Identity Server logs and found nothing printed there. However, the Identity Server's access log showed that it was receiving the request but not letting it through; instead it was dropping it as a bad request and returning a 400 response.

We did some searching on the internet and found that this kind of rejection can occur if the header values are too long. In the SAML SSO scenario, there is a Referer header which carries a lengthy value, in our case about 4000 characters long. On searching further, we found the maxHttpHeaderSize property in the Tomcat configuration, which lets us configure the maximum HTTP header size allowed, in bytes. You can read about this config from here.
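In a WSO2 server, the Tomcat connector configuration lives in repository/conf/tomcat/catalina-server.xml. The snippet below is only a sketch of raising the limit on the HTTPS connector; the port and the other attributes shown are illustrative, and Tomcat's default limit is 8 KB (8192 bytes).

<!-- repository/conf/tomcat/catalina-server.xml (attributes trimmed for brevity) -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           scheme="https"
           secure="true"
           maxHttpHeaderSize="16384"/>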

Once we increased that value, everything started working fine. So, I thought of blogging this down for the benefit of people using Tomcat and also WSO2 products, since WSO2 products have Tomcat embedded in them.

Dinusha SenanayakaExposing a SOAP service as a REST API using WSO2 API Manager

This post explains how we can publish an existing SOAP service as a  REST API using WSO2 API Manager.

We will be using a sample data service called "OrderSvc" as the SOAP service, which can be deployed in WSO2 Data Services Server. But this could be any SOAP service.

1. Service Description of ‘OrderSvc’ SOAP Backend Service

This “orderSvc” service exposes a WSDL with 3 operations (“submitOrder”, “cancelOrder”, “getOrderStatus”).


submitOrder operation takes ProductCode and Quantity as parameters.
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:submitOrder>
        <dat:ProductCode>AA_1</dat:ProductCode>
        <dat:Quantity>4</dat:Quantity>
     </dat:submitOrder>
  </soapenv:Body>

</soapenv:Envelope>

cancelOrder operation takes OrderId as a parameter, does an immediate cancellation and returns a confirmation code.
Sample request:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:cancelOrder>
        <dat:OrderId>16</dat:OrderId>
     </dat:cancelOrder>
  </soapenv:Body>
</soapenv:Envelope>

orderStatus operation takes the OrderId as a parameter and returns the order status as the response.
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:orderStatus>
        <dat:OrderId>16</dat:OrderId>
     </dat:orderStatus>
  </soapenv:Body>
</soapenv:Envelope>


We need to expose this "OrderSvc" SOAP service as a REST API using API Manager. Once it is exposed as a REST API, the “submitOrder”, “cancelOrder” and “getOrderStatus” operations should map to REST resources as below, taking the user parameters as URL parameters.

“/submitOrder” (POST) => request does not contain order id or date; response is the full order payload.

“/cancelOrder/{id}” (GET) => does an immediate cancellation and returns a confirmation code.

“/orderStatus/{id}” (GET) => response is the order header (i.e., payload excluding line items).


Deploying the Data-Service :

1. Log in to MySQL and create a database called “demo_service_db”. (This database name can be anything; we need to update the data service (.dbs file) accordingly.)

mysql> create database demo_service_db;
mysql> use demo_service_db;

2. Execute the dbscript given here on the database created above. This will create two tables, ‘CustomerOrder’ and ‘OrderStatus’, and one stored procedure, ‘submitOrder’. It will also insert some sample data into the two tables.

3. Copy the MySQL JDBC driver into the DSS_HOME/repository/components/lib directory.


4. Download the data service file given here. Before deploying this .dbs file, we need to modify the data source section defined in it, i.e. in the downloaded orderSvc.dbs file, change the following properties by providing the correct JDBC URL (it needs to point to the database you created in step 1) and change the username/password of the MySQL connection, if those are different from the ones defined here.


<config id="ds1">
     <property name="driverClassName">com.mysql.jdbc.Driver</property>
     <property name="url">jdbc:mysql://localhost:3306/demo_service_db</property>
     <property name="username">root</property>
     <property name="password">root</property>
  </config>


5. Deploy the orderSvc.dbs file in the Data Services Server by copying it into the “wso2dss-3.2.1/repository/deployment/server/dataservices” directory. Start the server.


6. Before exposing it through API Manager, check whether all three operations work as expected using the try-it tool or SOAP UI.


submitOrder
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:submitOrder>
        <dat:ProductCode>AA_1</dat:ProductCode>
        <dat:Quantity>4</dat:Quantity>
     </dat:submitOrder>
  </soapenv:Body>
</soapenv:Envelope>


Response :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
     <submitOrderResponse xmlns="http://ws.wso2.org/dataservice">
        <OrderId>16</OrderId>
        <ProductCode>AA_1</ProductCode>
        <Quantity>4</Quantity>
     </submitOrderResponse>
  </soapenv:Body>
</soapenv:Envelope>


cancelOrder
Sample request:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:cancelOrder>
        <dat:OrderId>16</dat:OrderId>
     </dat:cancelOrder>
  </soapenv:Body>
</soapenv:Envelope>


Response:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
     <axis2ns1:REQUEST_STATUS xmlns:axis2ns1="http://ws.wso2.org/dataservice">SUCCESSFUL</axis2ns1:REQUEST_STATUS>
  </soapenv:Body>
</soapenv:Envelope>


orderStatus
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:orderStatus>
        <dat:OrderId>16</dat:OrderId>
     </dat:orderStatus>
  </soapenv:Body>
</soapenv:Envelope>


Response:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
     <OrderStatus xmlns="http://ws.wso2.org/dataservice">
        <OrderStatus>CANCELED</OrderStatus>
     </OrderStatus>
  </soapenv:Body>

</soapenv:Envelope>



2. Configuring API Manager

1. Download the custom sequence given here and save it to the APIM registry location “/_system/governance/apimgt/customsequences/in/”. This can be done by logging in to the API Manager Carbon management console.

In the left menu, expand Resources -> Browse -> go to "/_system/governance/apimgt/customsequences/in" -> click "Add Resource" -> browse the file system and upload the "orderSvc_supporting_sequence.xml" sequence downloaded above. Then click "Add". This step saves the downloaded sequence into the registry.


2. Create the orderSvc API by wrapping the orderSvc SOAP service.

Log in to the API Publisher and create an API with the following info.

Name: orderSvc
Context: ordersvc
Version: v1

Resource definition1
URL Pattern: submitOrder
Method: POST

Resource definition2
URL Pattern: cancelOrder/{id}
Method: GET

Resource definition3
URL Pattern: orderStatus/{id}
Method: GET


Endpoint Type: Select the endpoint type as Address Endpoint. Then go to “Advanced Options” and select the message format as “SOAP 1.1”.
Production Endpoint: https://localhost:9446/services/orderSvc/ (give the OrderSvc service endpoint)

Tier Availability : Unlimited

Sequences: Tick the Sequences checkbox and select the previously saved custom sequence under “In Flow”.


Publish the API to the gateway.

We are done with the API creation.

Functionality of the custom sequence "orderSvc_supporting_sequence.xml"

The OrderSvc backend service expects a SOAP request, while the user invokes the API by sending the parameters in the URL (i.e. cancelOrder/{id}, orderStatus/{id}).

This custom sequence takes care of building the SOAP payload required for the cancelOrder and orderStatus operations by looking at the incoming request URI and its parameters.

Using a switch mediator, it reads the request path, i.e.

<switch xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope" xmlns:ns3="http://org.apache.synapse/xsd" source="get-property('REST_SUB_REQUEST_PATH')">

Then it checks the value of the request path using a regular expression and constructs the payload for either cancelOrder or orderStatus according to the matched resource, i.e.

<case regex="/cancelOrder.*">
<payloadFactory media-type="xml">
<format>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header/>
<soapenv:Body xmlns:dat="http://ws.wso2.org/dataservice">
<dat:cancelOrder>
<dat:OrderId>$1</dat:OrderId>
</dat:cancelOrder>
</soapenv:Body>
</soapenv:Envelope>
</format>
<args>
<arg evaluator="xml" expression="get-property('uri.var.id')"/>
</args>
</payloadFactory>
<header name="Action" scope="default" value="urn:cancelOrder"/>
</case>

<case regex="/orderStatus.*">
<payloadFactory media-type="xml">
<format>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header/>
<soapenv:Body xmlns:dat="http://ws.wso2.org/dataservice">
<dat:orderStatus>
<dat:OrderId>$1</dat:OrderId>
</dat:orderStatus>
</soapenv:Body>
</soapenv:Envelope>
</format>
<args>
<arg evaluator="xml" expression="get-property('uri.var.id')"/>
</args>
</payloadFactory>
<header name="Action" scope="default" value="urn:orderStatus"/>
</case>


Test OrderSvc API published in API Manager

Log in to the API Store, subscribe to the OrderSvc API and generate an access token. Invoke the orderStatus resource as given below. This will call the OrderSvc SOAP service and give you the response.

curl -v -H "Authorization: Bearer _smfAGO3U6mhzFLro4bXVEl71Gga" http://localhost:8280/ordersvc/v1/orderStatus/3

Chandana NapagodaConfigure External Solr server with Governance Registry

In WSO2 Governance Registry 5.0.0, we have upgraded the Apache Solr version to the 5.2 release. With that, you can connect WSO2 Governance Registry to an external Solr server or Solr cluster. External Solr integration gives you comprehensive administration interfaces, high scalability and fault tolerance, easy monitoring, and many more Solr capabilities.

Let me explain how you can connect WSO2 Governance Registry server with an external Apache Solr server.

1). First, you have to download Apache Solr 5.x.x from the below location.
http://lucene.apache.org/solr/mirrors-solr-latest-redir.html
Please note that we have verified this with the Solr 5.2.0 and 5.2.1 versions only.

2). Then unzip the Solr zip file. Once unzipped, its contents will look like the following.



The bin folder contains the scripts to start and stop the server. Before starting the Solr server, you have to make sure the JAVA_HOME variable is set properly. Apache Solr ships with an inbuilt Jetty server.

3). You can start the Solr server by issuing the "solr start" command from the bin directory. Once the Solr server has started properly, the following message will be displayed in the console: "Started Solr server on port 8983 (pid=5061). Happy searching!"

By default, the server starts on port "8983" and you can access the Solr admin console by navigating to "http://localhost:8983/solr/".

4). To create a new Solr core, you have to copy the Solr configuration directory (registry-indexing) found in G-REG_HOME/repository/conf/solr to the SOLR_HOME/server/solr/ directory. Please note that only the "registry-indexing" directory needs to be copied from the G-Reg pack. This will create a new Solr core named "registry-indexing".

5). After creating the "registry-indexing" Solr core, you can see it in the Solr admin console as below.



6). To integrate the newly created Solr core with WSO2 Governance Registry, you have to modify the registry.xml file located in the <greg_home>/repository/conf directory. There you have to add "solrServerUrl" under indexingConfiguration as follows, and comment out the "IndexingHandler".

    <!-- This defines index cofiguration which is used in meta data search feature of the registry -->

<indexingConfiguration>
<solrServerUrl>http://localhost:8983/solr/registry-indexing</solrServerUrl>
<startingDelayInSeconds>35</startingDelayInSeconds>
<indexingFrequencyInSeconds>3</indexingFrequencyInSeconds>
<!--number of resources submit for given indexing thread -->
<batchSize>50</batchSize>
<!--number of worker threads for indexing -->
<indexerPoolSize>50</indexerPoolSize>
.................................
</indexingConfiguration>


7). After completing the external Solr configuration as above, you have to start the WSO2 Governance Registry server. If you have configured the external Solr integration properly, you will notice the below log message in the Governance Registry server startup logs (wso2carbon.log).

[2015-07-11 12:50:22,306] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Http Sorl server initiated at: http://localhost:8983/solr/registry-indexing

Further, you can view indexed data by querying via Solr admin console as well.

Happy Indexing and Searching...!!!

Note (added in March 2016): If you are moving from an older G-Reg version to the latest one, you have to replace the existing Solr core (registry-indexing) with the latest one available in the G-Reg pack.

Chamila WijayarathnaExtending WSO2 Identity Server to Engage Workflows with non User Store Operations

In my previous blog, I described adding more control to a user store operation using workflows. By default, Identity Server only supports engaging workflows with user store operations. But is this limited to user store operations? No, you can engage any operation with a workflow, if there exists an interceptor where we can start a workflow when the event occurs.

Before seeing how to achieve this, let's try out a simple example. Here, I am going to demonstrate controlling the 'add service provider' operation using workflows. For this, I am going to use the sample workflow event handler available at [1].

Let's first clone the source code of this sample handler and build it. Then we should put the jar created in the target folder of the handler source into the repository/components/dropins folder of your Identity Server. Now start the Identity Server.

Now, as usual, you first have to create the roles and users required for the approval process and then create a workflow with the desired approval steps, as I described in my previous blog [2].

If you have followed my previous blog [2], the steps up to this point should be very familiar to you. You know that after creating the workflow with approval steps, the next thing to do is to engage the operation with the workflow. Here we are planning to engage the 'add service provider' operation, which is a non user store operation, with this workflow.

In the 'add workflow engagement' page, by default, only user store operations are shown as the operations that can be engaged with a workflow. But now, since we have added the new service-provider workflow handler, it will show service provider related operations in that UI as well.



Now we can fill in the rest of the 'add workflow engagement' form in the usual way.

Now we have engaged the 'add service provider' operation with an approval process. If we add a new service provider, it will not be added directly until it is accepted in the approval process. Only after it is approved will it be shown in the UI and become usable.

So now we know that not only user store operations but other operations can also be engaged with workflows. The most challenging thing here is how we write the custom event handler. I'm not going to describe that part here, even though it's the most important part, because it's already covered in the WSO2 docs at [3].

[1]. https://github.com/wso2/product-is/tree/master/modules/samples/workflow/handler/service-provider
[2]. http://cdwijayarathna.blogspot.com/2016/04/making-use-of-wso2-identity-servers.html
[3]. https://docs.wso2.com/display/IS510/Writing+a+Custom+Event+Handler

Prabath SiriwardenaThirty Solution Patterns with the WSO2 Identity Server

WSO2 offers a comprehensive open source product stack to cater to all needs of a connected business. With its single code base structure, WSO2 products are woven together to solve many complex enterprise-level identity management and security problems. By embracing open standards and supporting most of the industry-leading protocols, the WSO2 Identity Server is capable of seamless integration with a wide array of vendors in the identity management domain. The WSO2 Identity Server is one of the most powerful open source Identity and Entitlement Management servers, released under the business-friendly Apache 2.0 license.


This article on medium explains thirty solution patterns, built with the WSO2 Identity Server and other WSO2 products to solve enterprise-level security and identity management related problems.

Chamara SilvaHow to generate random strings or number from Jmeter

While testing SOAP services, we often need JMeter scripts to generate random strings or numbers as service parameters. I had a SOAP service to which the name (string value) and age (integer value) had to be sent continuously, and each value had to be a random, non-repeating value. I used the Random and RandomString functions to generate these values. The following JMeter scripts may help.
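As a rough sketch of the idea, the request body of a SOAP/HTTP sampler can call JMeter's __RandomString and __Random functions directly; the operation name, element names and namespace below are illustrative.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://sample.wso2.org">
   <soapenv:Body>
      <ser:addUser>
         <!-- random 8-character lowercase name on each iteration -->
         <ser:name>${__RandomString(8,abcdefghijklmnopqrstuvwxyz)}</ser:name>
         <!-- random age between 18 and 60 -->
         <ser:age>${__Random(18,60)}</ser:age>
      </ser:addUser>
   </soapenv:Body>
</soapenv:Envelope>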

Dhananjaya jayasingheHow to get the Client's IP Address in WSO2 API Manager/ WSO2 ESB

Middleware solutions are designed to communicate with multiple parties, and most of them are integrations. While integrating different systems, it is required to validate the requests and collect statistics. When it comes to collecting statistics, the client's / request originator's IP address plays a vital role.

In order to publish the client's IP to the stat collector, we need to extract the client's IP from the request received by the server.

When the deployment contains WSO2 API Manager or WSO2 Enterprise Service Bus, we can obtain the client's IP address using a property mediator in the InSequence.

If the deployment has a load balancer in front of the ESB/API Manager, we can use the X-Forwarded-For header, as explained in Firzhan's blog post.
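A minimal sketch of that approach is given below; the property name is illustrative, and note that X-Forwarded-For can carry a comma-separated list of addresses when there are multiple proxies in the path.

<!-- read the X-Forwarded-For header set by the load balancer -->
<property name="client_ip" expression="get-property('transport', 'X-Forwarded-For')" scope="default"/>
<log level="custom">
   <property name="Client IP (via LB)" expression="get-property('client_ip')"/>
</log>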

In a deployment which does not have a load balancer in front of WSO2 ESB / API Manager, we can use REMOTE_ADDR to obtain the client's IP address.

We can extract it as follows, using a property mediator.


 <property name="api.ut.REMOTE_ADDR"
expression="get-property('axis2','REMOTE_ADDR')"/&gt

Then we can use it in the sequence. As an example, if we extract the IP address as above and log it, the Synapse configuration will look like below.


<property name="api.ut.REMOTE_ADDR"
expression="get-property('axis2','REMOTE_ADDR')"/>
<log level="full">
<property name="Actual Remote Address"
expression="get-property('api.ut.REMOTE_ADDR')"/>
</log>

You can use this in the InSequence of ESB or API Manager to obtain the client's IP Address.

Chathurika Erandi De SilvaWhy Message Enriching ? - A Simple Answer

 What is Message Enriching?


Message enriching normally happens when the incoming request does not contain all the information the backend is expecting. The message can be enriched by inserting data into the request midway, as needed.

Graphically it can be illustrated as below



Message Enriching

Golden Rule of Message Enriching (my version)

Of course there are a lot of use cases where enriching can be used, but to keep things simple they can ultimately be narrowed down to the following three:

1. The message is enriched through a calculation using the existing values
2. The message is enriched using values from environment
3. The message is enriched using values from external systems, databases, etc...

WSO2 ESB into the equation

Now we have to see where WSO2 ESB fits into the picture. The Enrich mediator can be used to achieve message enriching. The following samples are basic demonstrations designed to cover the above mentioned "Golden Rules".

The message is enriched through a calculation using the existing values

For demonstration, I have created a sample sequence with the Enrich mediator in it. This sequence takes in the request, matches a parameter in the request against a predefined one, and enriches the message when the condition is true.

Sample Sequence





In the above, when a request reaches the ESB with customerType as 1, 2, 3 or 4, a reference value is assigned to customerType, because the backend expects customerType to come in as either gold, platinum, silver or bronze.
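Since the sample sequence is shown only as an image, here is a minimal sketch of the idea, assuming the incoming payload has an //order/customerType element (the element names are illustrative): a switch mediator matches the numeric code and an Enrich mediator replaces it with the reference value.

<switch source="//order/customerType">
   <case regex="1">
      <enrich>
         <source type="inline" clone="true"><customerType>gold</customerType></source>
         <target action="replace" xpath="//order/customerType"/>
      </enrich>
   </case>
   <case regex="2">
      <enrich>
         <source type="inline" clone="true"><customerType>platinum</customerType></source>
         <target action="replace" xpath="//order/customerType"/>
      </enrich>
   </case>
   <!-- cases for 3 (silver) and 4 (bronze) follow the same pattern -->
</switch>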

Now let's look at the Golden Rule #2

The message is enriched using values from environment

This rule is relatively simple. If the request is missing a certain value and that value can be obtained from the environment, then it is injected into the request.

Sample Sequence




In the above, a SystemDate element is inserted into the request and its value is later populated through the Enrich mediator.

The final Golden Rule, Rule # 3


The message is enriched using values from external systems, databases, etc...

This is the simplest. Put in simple words, this rule says: if you don't have it, ask someone who does and include it in the request.

Sample Sequence



In the above, the request doesn't have the customer id; it is inserted and populated through the Enrich mediator. The customer id is obtained from the database using the DBLookup mediator, as sketched below.
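Again, the sample sequence is an image in the original post, so the following is only a sketch of the pattern; the database, table, column and XPath expressions are all illustrative. A DBLookup mediator fetches the id into a property, which can then be injected into the payload with an Enrich mediator as in the earlier sketches.

<!-- look up the customer id from an external database into the 'customerId' property -->
<dblookup>
   <connection>
      <pool>
         <driver>com.mysql.jdbc.Driver</driver>
         <url>jdbc:mysql://localhost:3306/customer_db</url>
         <user>root</user>
         <password>root</password>
      </pool>
   </connection>
   <statement>
      <sql>select id from customer where name = ?</sql>
      <parameter expression="//order/customerName" type="VARCHAR"/>
      <result name="customerId" column="id"/>
   </statement>
</dblookup>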

Winding up, the Golden Rules are purely based on my understanding, and of course anyone who reads this may well come up with a better set of Golden Rules.




Asanka DissanayakeValidate a URL with Filter Mediator in WSO2 ESB

In day-to-day development life, you may have come across this requirement a lot of times. When you get a URL as a field in the request, you may need to validate it:
whether the URL has the correct structure, or whether it contains any disallowed characters.

This can be achieved using filter mediator in WSO2 ESB.

The matter is figuring out the correct regular expression. The code structure would be as follows.

<filter source="//url" regex="REGEX">
	<then>
		<log level="custom">
			<property name="propStatus" value="url is valid" />
		</log>
	</then>
	<else>
		<log level="custom">
			<property name="propStatus" value="!!!!url is not valid!!!!" />
		</log>
	</else>
</filter>

Refer to the following table to figure out the regular expression for each use case.

Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*
Use case: http or https with host name/domain name and an optional port
Sample: http://www.asanka.com:2131

Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w\?=&]*
Use case: URL with query parameters; special characters other than ?, &, = not allowed
Sample: https://www.asanka.com:2131/user/info?id=2&role=admin

Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w]*
Use case: URL without query parameters
Sample: https://www.asanka.com:2131/user/info

Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w\W]*
Use case: URL with query parameters and special characters
Sample: https://www.asanka.com:2131/user/info?id=2&role=admin

You can play around with this using the following API:

https://github.com/asanka88/BlogSamples/blob/master/ESB/urlvalidate.xml

Payload:

<data>
<url>https://www.asanka.com/asdasd?a=b</url>
</data>

API URL:

http://localhost:8280/url/validate

 

Try changing the regex and url values from the above table.

 

Happy Coding !!!:)


Asanka DissanayakeCheck for existence of element/property with Filter Mediator in WSO2 ESB

In day-to-day development, sometimes you will need to check for the existence of an element, a property, or something else. In other words, you will need to check whether something is null. It can easily be done in WSO2 ESB using the Filter mediator.

Learning by example is the best way.

Let’s take a simple example. Suppose there is an incoming payload like the one below.

<user>
 <name>asanka</name>
 <role>admin</role>
</user>

Suppose you need to read this field into a property, and suppose <role/> is an optional element. In that case, what are you going to do?

The expected behavior is: if role does not come with the payload, the default role “generic_user” is used.

So the following code segment of filter mediator will do it for you.

<filter xpath="//role">
   <then>
      <log level="custom">
         <property name="propStatus" value="role is available" />
      </log>
      <property name="userRole" expression="//role" />
   </then>
   <else>
      <log level="custom">
         <property name="propStatus" value="!!!!role is not available!!!!" />
      </log>
      <property name="userRole" value="generic_user" />
   </else>
</filter>

The “xpath” attribute in the filter element provides the XPath expression to be evaluated.
If the XPath expression evaluates to “true”, the Synapse code in the “then” block will be executed.
Otherwise, the code in the “else” block will be executed.

If the evaluation of the XPath returns something that is not null, it is considered true. If it is null, it is considered false.
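The log output shown further below also prints the resolved role; the linked sample presumably does that with a log mediator placed after the filter, roughly like the sketch below (the property names are assumptions).

<log level="custom">
   <property name="status" value="====Final user Role===="/>
   <property name="USER_ROLE" expression="get-property('userRole')"/>
</log>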

If you want to play with this, create filter.xml with following content and copy it to

$CARBON_HOME/repository/deployment/server/synapse-configs/default/api

https://github.com/asanka88/BlogSamples/blob/master/ESB/nullcheck.xml

and make an HTTP POST to http://localhost:8280/user/rolecheck with following payloads.

<user>
 <name>asanka</name>
 <role>admin</role>
</user>

Check the log file and you will see following output.

[2016-04-11 22:49:38,041] INFO - LogMediator propStatus = role is available
[2016-04-11 22:49:38,042] INFO - LogMediator status = ====Final user Role====, USER_ROLE = admin

<user>
 <name>asanka</name>
</user>

Check the log file and you will see following output.

[2016-04-11 22:49:43,083] INFO - LogMediator propStatus = !!!!role is not available!!!!
[2016-04-11 22:49:43,084] INFO - LogMediator status = ====Final user Role====, USER_ROLE = generic_user


Hope this helps someone:) happy coding ….


Prabath SiriwardenaSecuring Microservices with OAuth 2.0, JWT and XACML

Microservices is one of the most trending buzzwords, along with the Internet of Things (IoT). Everyone talks about microservices and everyone wants to have microservices implemented. The term ‘microservice’ was first discussed at a software architects workshop in Venice, in May 2011, where it was used to describe a common architectural style the participants had been witnessing for some time. With the granularity of the services and the frequent interactions between them, securing microservices is challenging. This post, which I published on Medium, presents a security model based on OAuth 2.0, JWT and XACML to overcome such challenges.

Ushani BalasooriyaHow to hide credentials used in mediation configuration using Secure Vault in WSO2 ESB

Even though we use Secure Vault to encrypt passwords, it is not possible to use Secure Vault directly in the mediation configuration. As an example, imagine you need to hide a password given in a proxy.

All you have to do is use the Secure Vault Password Management screen in WSO2 ESB.


1. Run sh ciphertool.sh -Dconfigure and enable secure vault
2. Start the WSO2 ESB with
3. Go to  Manage -> Secure Vault Tool and then click Manage Passwords
4. You will see the below screen.
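Once an alias has been added through that screen, it can be referenced from the mediation configuration with the wso2:vault-lookup() XPath function. The line below is only a sketch; the alias name is illustrative.

<!-- the encrypted value stored under 'my.endpoint.password' is resolved at runtime -->
<property name="password" expression="wso2:vault-lookup('my.endpoint.password')" scope="default"/>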