WSO2 Venus

Srinath Perera: Analysis of Retweeting Patterns in Sri Lankan 2015 General Election

Post at https://iwringer.wordpress.com/2015/08/25/analysis-of-retweeting-patterns-in-sri-lankan-general-election/

Lakmal Warusawithana: Configure Apache Stratos with Docker, CoreOS, Flannel and Kubernetes on EC2


UPDATE: This post is slightly outdated. Please refer to the link for updated steps.


Docker, CoreOS, Flannel and Kubernetes are some of the latest cloud technologies. They are integrated into Apache Stratos to make it a more scalable and flexible PaaS, enabling developers/devops to build their cloud applications with ease.


This post focuses on how you can create a Kubernetes cluster using CoreOS and Flannel on top of EC2. The latter part discusses how you can create an application using Docker-based cartridges.

Setting up CoreOS, Flannel and Kubernetes on EC2


This section covers how to create an elastic Kubernetes cluster with 3 worker nodes and a master.


I have followed [1] and also included the workarounds I used to overcome some issues that arose during my testing.

First of all, we need to set up some supporting tools.

Install and configure kubectl


kubectl is the client command line tool provided by the Kubernetes team to monitor and manage a Kubernetes cluster. Since I am using a Mac, the steps below set it up on a Mac, but you can find more details on setting it up on other operating systems at [2].

wget https://storage.googleapis.com/kubernetes-release/release/v0.9.2/bin/darwin/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/

Install and configure AWS Command Line Interface


The steps below are for setting it up on a Mac. For more information please see [3].
 
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo pip install awscli

If you encounter any issues, the following commands may help to resolve them:

sudo pip uninstall six
sudo pip install --upgrade python-heatclient

Create the Kubernetes Security Group


aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"

aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 4500 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes

Save the master and node cloud-configs



Launch the master


Attention: Replace <ami_image_id> below with a suitable CoreOS AMI for AWS. I recommend using the CoreOS alpha channel AMI (ami-f7a5fec7), because I faced many issues with the AMIs of the other channels at the time I tested.

aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://master.yaml

Record the InstanceId for the master.
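If you have the jq JSON processor installed (an assumption on my side, not part of the original steps), you can list the instance IDs of pending/running instances like this:

aws ec2 describe-instances --filters "Name=instance-state-name,Values=pending,running" \
  | jq -r '.Reservations[].Instances[].InstanceId'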




Gather the public and private IPs for the master node:
aws ec2 describe-instances --instance-id <instance-id>
{
   "Reservations": [
       {
           "Instances": [
               {
                   "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
                   "RootDeviceType": "ebs",
                   "State": {
                       "Code": 16,
                       "Name": "running"
                   },
                   "PublicIpAddress": "54.68.97.117",
                   "PrivateIpAddress": "172.31.9.9",
...

Update the node.yaml cloud-config

Edit node.yaml and replace all instances of <master-private-ip> with the private IP address of the master node.
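If you prefer a one-liner, something like the following should work (sed is my addition; it uses the master private IP from the example output above and keeps a .bak backup of node.yaml):

sed -i.bak 's/<master-private-ip>/172.31.9.9/g' node.yaml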

Launch 3 worker nodes

Attention: Replace <ami_image_id> below with a suitable CoreOS AMI for AWS. I recommend using the same ami_image_id used for the master.

aws ec2 run-instances --count 3 --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://node.yaml


Configure the kubectl SSH tunnel

This command enables secure communication between the kubectl client and the Kubernetes API.

ssh -i key-file -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
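As a quick sanity check (my addition, assuming the tunnel is up), the API server should now answer on the forwarded port:

curl http://127.0.0.1:8080/version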

Listing worker nodes

Once the worker instances have fully booted, they will be automatically registered with the Kubernetes API server by the kube-register service running on the master node. It may take a few minutes.

kubectl get minions

Now you have successfully installed and configured a Kubernetes cluster with 3 worker (minion) nodes. If you want to try out more Kubernetes samples please refer to [4].

Let's set up Stratos now.

Configure Apache Stratos


I recommend configuring Stratos on a separate EC2 instance. Create an m3.medium instance from an Ubuntu 14.04 AMI (I used ami-29ebb519). Also make sure you have opened the following ports in the security group used: 9443, 1883, 7711.

SSH into the created instance and follow the steps below to set up Stratos.

  1. Download the Stratos binary distribution ( apache-stratos-version.zip ) and unzip it. This folder will be referred to as <STRATOS-HOME> later.
  2. This can be done using any of the following methods:
    • Method 1 - Download the Stratos binary distribution from the Apache Download Mirrors and unzip it. As of today (09/02/2015) Stratos 4.1.0 has not had a GA release, so I recommend using Method 2 with the master branch until the GA release is available.
    • Method 2 - Build the Stratos source to obtain the binary distribution and unzip it.
      1. git checkout tags/4.1.0-beta-kubernetes-v3
      2. Build Stratos using Maven.
      3. Navigate to the stratos/ directory, which is within the directory that you checked out the source:
        cd <STRATOS-SOURCE-HOME>/  
      4. Use Maven to build the source:
        mvn clean install
      5. Obtain the Stratos binary distribution apache-stratos-version.zip from the <STRATOS-SOURCE-HOME>/products/stratos/modules/distribution/target/ directory and unzip it.
  3. Start ActiveMQ:
    • Download and unzip ActiveMQ.
    • Navigate to the <ACTIVEMQ-HOME>/bin/ directory, which is in the unzipped ActiveMQ distribution.
    • Run the following command to start ActiveMQ.
    • ./activemq start
  4. Start Stratos server:
    • bash <STRATOS-HOME>/bin/stratos.sh start

If you wish, you can tail the log and verify that the Stratos server starts without any issues:
tail -f <STRATOS-HOME>/repository/logs/wso2carbon.log
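The server is typically up once the management console URL line appears, which is the standard WSO2 Carbon startup message (the grep below is my addition):

grep "Mgt Console URL" <STRATOS-HOME>/repository/logs/wso2carbon.log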

Try out the Stratos Kubernetes sample


Apache Stratos samples are located in the following folder of the git repo:

<STRATOS-SOURCE-HOME>/samples/

Here I will use a simple sample called the "single-cartridge" application, which is in the applications folder. First you have to replace the Kubernetes cluster information with the relevant details of the cluster you have set up.

Edit <STRATOS-SOURCE-HOME>/samples/applications/single-cartridge/artifacts/kubernetes/kubernetes-cluster-1.json and change the following highlighted values to suit your environment.

{
   "clusterId": "kubernetes-cluster-1",
   "description": "Kubernetes CoreOS cluster",
   "kubernetesMaster": {
      "hostId": "KubHostMaster1",
      "hostname": "master.dev.kubernetes.example.org",
      "privateIPAddress": "Kube Master Private IP Address",
      "hostIpAddress": "Kube Master Public IP Address",
      "property": [
      ]
   },
   "portRange": {
      "upper": "5000",
      "lower": "4500"
   },
   "kubernetesHosts": [
      {
         "hostId": "KubHostSlave1",
         "hostname": "slave1.dev.kubernetes.example.org",
         "privateIPAddress": "Kube Minion1 Private IP Address",
         "hostIpAddress": "Kube Minion1 Public IP Address",
         "property": [
         ]
      },
      {
         "hostId": "KubHostSlave2",
         "hostname": "slave2.dev.kubernetes.example.org",
         "privateIPAddress": "Kube Minion2 Private IP Address",
         "hostIpAddress": "Kube Minion2 Public IP Address",
         "property": [
         ]
      },
      {
         "hostId": "KubHostSlave3",
         "hostname": "slave3.dev.kubernetes.example.org",
         "privateIPAddress": "Kube Minion3 Private IP Address",
         "hostIpAddress": "Kube Minion3 Public IP Address",
         "property": [
         ]
      }
   ],
   "property": [
      {
         "name": "payload_parameter.MB_IP",
         "value": "Apache Stratos instance Public IP Address"
      },
      {
         "name": "payload_parameter.MB_PORT",
         "value": "1883"
      },
      {
         "name": "payload_parameter.CEP_IP",
         "value": "Apache Stratos instance Public IP Address"
      },
      {
         "name": "payload_parameter.CEP_PORT",
         "value": "7711"
      },
      {
         "name": "payload_parameter.LOG_LEVEL",
         "value": "DEBUG"
      }
   ]
}

To speed up the sample experience you can log in to all 3 minions and pull the Docker image which we are going to use in the sample. This step is not mandatory, but it will cache the Docker image on the configured minions.

docker pull stratos/php:4.1.0-beta

core@ip-10-214-156-131 ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
stratos/php                     4.1.0-beta          f761a71b087b        18 hours ago        418 MB
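If you want to pre-pull the image on every minion in one go, a loop of roughly this shape can be used (a sketch only; replace the IPs and key file with yours):

for ip in <minion1-public-ip> <minion2-public-ip> <minion3-public-ip>; do
  ssh -i key-file core@$ip "docker pull stratos/php:4.1.0-beta"
done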



You can just run the following automated sample script:

<STRATOS-SOURCE-HOME>/samples/applications/simple/single-cartridge-app/scripts/kubernetes/deploy.sh

You can use kubectl commands to view the created pods in the Kubernetes cluster from your local machine over the SSH tunnel.

kubectl get pods

Lakmals-MacBook-Pro:ec2-kubernetes lakmal$ kubectl get pods
POD                                    IP                  CONTAINER(S)                                          IMAGE(S)                 HOST                LABELS                                                     STATUS
8b60e29c-b1d1-11e4-8bbb-22000adc133a   10.244.84.12        php1-php-domain338c238a-4856-4f00-b881-19aecda74cf7   stratos/php:4.1.0-beta   10.214.156.131/     name=php1-php-domain338c238a-4856-4f00-b881-19aecda74cf7   Running



Point your browser to

https://<Stratos-Instance-IP>:9443/console and log in as admin:admin, which is the default.


References






Bhathiya Jayasekara: [WSO2 ESB] How to read a registry property?

Here is how to do it. This will load the 'abc' property of the collection "gov:/data/xml/collectionx" and store it in the 'regProperty' property.
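A minimal sketch of such a property mediator, using the get-property('registry', ...) XPath extension with the path@property syntax (the property and collection names come from the description above):

<property name="regProperty"
          expression="get-property('registry', 'gov:/data/xml/collectionx@abc')"
          scope="default"/>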

How to read a file from registry can be found here.

Bhathiya Jayasekara: [WSO2 ESB] How to read a file from registry and store in a property?

Here is how to do it. This will load the body.xml file from the governance registry and store it in the 'xmlfile' property.
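A minimal sketch of the property mediator, assuming body.xml is stored at gov:/xml/body.xml (adjust the registry path to yours):

<property name="xmlfile"
          expression="get-property('registry', 'gov:/xml/body.xml')"
          type="OM"
          scope="default"/>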

Having the 'registry' scope is the key here. The type is "OM" because it is an XML file.

How to read a registry property can be found here.

Sanjeewa Malalgoda: How to set password validation policy in WSO2 Identity Server


If you need to add a custom password policy, there are multiple layers where you can add it. The first one is the user-mgt.xml file and the other configuration file is identity-mgt.properties.

If the identity management listener is enabled (and only then), user passwords should satisfy both the regEx defined in user-mgt.xml and the one defined in identity-mgt.properties. Otherwise only user-mgt.xml is checked to validate the password policy.

/repository/conf/user-mgt.xml
         
 <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>

The following properties will be picked up only if the identity listener is enabled (Identity.Listener.Enable=true). Otherwise only the configurations in the user management XML take effect.

/repository/conf/security/identity-mgt.properties
Password.policy.extensions.1.min.length=6
Password.policy.extensions.1.max.length=12
Password.policy.extensions.3.pattern=^((?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%&*])).{0,100}$

Evanthika Amarasiri: Common SVN related issues faced with WSO2 products and how they can be solved

Issue 1

TID: [0] [ESB] [2015-07-21 14:49:55,145] ERROR {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} -  Error while attempting to create the directory: http://xx.xx.xx.xx/svn/wso2/-1234 {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
org.tigris.subversion.svnclientadapter.SVNClientException: org.tigris.subversion.javahl.ClientException: svn: authentication cancelled
    at org.tigris.subversion.svnclientadapter.javahl.AbstractJhlClientAdapter.mkdir(AbstractJhlClientAdapter.java:2524)
    at org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository.checkRemoteDirectory(SVNBasedArtifactRepository.java:240)


Reason: The user is not authenticated to write to the provided SVN location, i.e. http://xx.xx.xx.xx/svn/wso2/. When you see this type of error, verify the credentials you have given under the svn configuration in carbon.xml:

    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>false</AutoCommit>
        <AutoCheckout>true</AutoCheckout>
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>http://svnrepo.example.com/repos/</SvnUrl>
        <SvnUser>username</SvnUser>
        <SvnPassword>password</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>


Issue 2

TID: [0] [ESB] [2015-07-21 14:56:49,089] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} -  Deployment synchronization commit for tenant -1234 failed {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask}
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: A repository synchronizer has not been engaged for the file path: /home/wso2/products/wso2esb-4.9.0/repository/deployment/server/
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:116)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)


Reasons:
    (I) SVN version mismatch between the local server and the SVN server (Carbon 4.2.0 products support SVN 1.6 only).

    Solution - Use the SVN kit 1.6 jar in the Carbon server.

    See https://docs.wso2.com/display/CLUSTER420/SVN-based+Deployment+Synchronizer

      (II) If you have configured your server with a different SVN version than the one on the SVN server, the issue will not get resolved even if you later use the correct svnkit jar on the Carbon server side.

      Solution - Remove all the .svn files under $CARBON_HOME/repository/deployment/server folder

      (III) A similar issue can be observed when the SVN server is not reachable.

      Issue 3

       
        [2015-08-28 11:22:27,406] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Deployment synchronization update for tenant -1234 failed
        java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
            at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(DeploymentSynchronizerServiceImpl.java:98)
            at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncUpdate(CarbonDeploymentSchedulerTask.java:179)
            at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:137)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
            at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)
        Caused by: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
            at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromConf(CarbonRepositoryUtils.java:167)
            at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:97)
            at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(DeploymentSynchronizerServiceImpl.java:66)
            ... 9 more

        Reason:

        You will notice this issue when the svnkit jar (for the latest Carbon versions, i.e. 4.4.x, the jar would be svnkit-all-1.8.7.wso2v1.jar) is not available in the $CARBON_HOME/repository/components/dropins folder.

Chamila Wijayarathna: Port OJ to jruby - What is finished and what is yet to be done

In two of my previous blogs, I wrote about my GSoC experiences in the years 2013 and 2014. I was expecting to write a similar blog about this year's experience as well, which will be my last GSoC as an undergraduate, but unfortunately things didn't go the way I expected. I couldn't complete the project, so at the moment I am writing this, only about 50% of the project is done.

The main reason I am writing this blog is that, since I started working at WSO2 recently, I may not be able to contribute a lot of time to implementing this library in the future either. So anyone interested in contributing to this open source library can get an idea of what has been done and what remains to be done.

First of all, I'll give a brief introduction to the project.

The objective of the project was to port the ruby gem 'oj (Optimized JSON)' [1][2] to JRuby. As the name implies, oj was written to provide speed-optimized JSON handling. The methods exposed by this gem can be found at [3]. It provides functionality to parse and load JSON; another interesting function is mimic_JSON. So what I was trying to do is port this library to JRuby, so JRuby users can make use of the functionality available there.

So now let's see what has already been done in this project. The current implementation is available at [4]. As you can see in the git repository, I have created the infrastructure required for the gem. Details required to install and run the gem can be found in the readme file. I have created the interface, so when a function of the library is called from a Ruby file, the corresponding Java function located in Oj.java is called.

I have also completed the default_options [5] [6] methods, which are used to set the options of the parser and to check its current settings.

There are 4 modes in Oj for parsing and loading JSON: strict mode, compat mode, safe mode and object mode. I have done most of the work required in the strict load method. There are still a few errors in this method, but they are minor ones and I believe they can be fixed with very little effort. Since most of the methods used in strict_load are reused in the other load methods, those only require implementing the parseInfo data structure and a few more configurations.

I couldn't look at the implementation of the parse methods yet, so that area may need some more work.

In Oj the mimic_json and mimic_loaded methods were implemented in Ruby, so I reused the same code in my Java implementation as well. But I couldn't test them yet, so those parts need to be tested. The same tests implemented in 'oj' can be used to test this gem as well.

So that is the summary of the project for now. I would like to invite the JRuby community and other interested people to fork this repo and make whatever contributions you can. I am very keen to get this working quickly.

1. https://github.com/ohler55/oj
2. http://www.rubydoc.info/gems/oj
3. http://www.rubydoc.info/gems/oj/Oj
4. https://github.com/cdwijayarathna/oj4j
5. http://www.rubydoc.info/gems/oj/Oj#default_options-class_method
6. http://www.rubydoc.info/gems/oj/Oj#default_options%3D-class_method

Bhathiya Jayasekara: [WSO2 ESB] How to replace message body by a value of a property

Here is how to do it. This will replace the message body with the value of the "xmlfile" property.
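A minimal sketch using the enrich mediator, which copies the OM value of the 'xmlfile' property over the current message body (the property name comes from the description above):

<enrich>
   <source type="property" property="xmlfile" clone="true"/>
   <target type="body"/>
</enrich>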

Make sure you set type="OM" in the property. Otherwise you will get the below error.

"ERROR - EnrichMediator Invalid Object type to be inserted into message body"


Isuru Perera: Java Ergonomics and JVM Flags

Java Virtual Machine can tune itself depending on the environment and this smart tuning is referred to as Ergonomics.

When tuning Java, it's important to know which default values Java Ergonomics selected for the garbage collector, heap sizes and runtime compiler.

There are many JVM command line flags. So, how do we find out which JVM flags are used? It turns out that JVM has flags to print flags. :)

I tested the following commands with Java 8.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 15.04
Release: 15.04
Codename: vivid
$ uname -a
Linux isurup-ThinkPad-T530 3.19.0-26-generic #28-Ubuntu SMP Tue Aug 11 14:16:32 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux



Printing Command Line Flags


We can use "-XX:+PrintCommandLineFlags" to print the command line flags used by the JVM. This is a useful flag to see the values selected by Java Ergonomics.

$ java -XX:+PrintCommandLineFlags -version
-XX:InitialHeapSize=259964992 -XX:MaxHeapSize=4159439872 -XX:+PrintCommandLineFlags -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseParallelGC
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)


My laptop is detected as a server-class machine as I'm running the 64-bit version of Ubuntu. I have 16GB of memory.

$ free 
total used free shared buffers cached
Mem: 16247812 13500596 2747216 1020860 172252 7987388
-/+ buffers/cache: 5340956 10906856
Swap: 18622460 42072 18580388


So, as explained in Java Ergonomics, we can see that the initial heap size is 1/64 of physical memory and maximum heap size is 1/4 of physical memory.

$ echo $(((16247812 * 1024) / 259964992))
64
$ echo $(((16247812 * 1024) / 4159439872))
4


You can also notice that the parallel garbage collector (-XX:+UseParallelGC) is selected by the JVM.

Printing Initial JVM Flags


$ java -XX:+PrintFlagsInitial -version


When printing the JVM flags, we can see there are several columns. Those columns are type, name, assignment operator, value and the flag type.

Use this command to see the default values.
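For example, to look up the defaults of the heap-related flags only (the grep is my addition; the version info goes to stderr, so it does not interfere with the grep):

$ java -XX:+PrintFlagsInitial -version | grep HeapSize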

Printing Final JVM Flags


See blog post on -XX:+PrintFlagsFinal to learn more details on each column.

$ java -XX:+PrintFlagsFinal -version


When printing the final flags, we can see there are some flags with the assignment operator ":=", which indicates that the flag values were modified (manually or by Java Ergonomics).


$ java -XX:+PrintFlagsFinal -version | grep ':='



JVM Flag Types


All JVM flags are categorized into different types, which can be seen inside curly brackets in the -XX:+PrintFlagsInitial/-XX:+PrintFlagsFinal output.



$ java -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+UnlockCommercialFeatures -XX:+PrintFlagsFinal -version  |  awk -F ' {2,}' '{print $4}' | sort -u 
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

{ARCH diagnostic}
{ARCH experimental}
{ARCH product}
{C1 diagnostic}
{C1 pd product}
{C1 product}
{C2 diagnostic}
{C2 experimental}
{C2 pd product}
{C2 product}
{commercial}
{diagnostic}
{experimental}
{lp64_product}
{manageable}
{pd product}
{product}
{product rw}


As mentioned in the blog post on -XX:+PrintFlagsFinal , we can find some details on these flag types from the JDK source file:
http://hg.openjdk.java.net/jdk8/jdk8/hotspot/file/tip/src/share/vm/runtime/globals.hpp

Following are the meanings of the above flag types:

  • product - General Flags for JVM, which are officially supported.
  • product rw - Writable internal product flags
  • manageable - Writable external product flags.
  • diagnostic - These flags are not meant for JVM tuning or for product modes. Can be used for JVM debugging. Need to use -XX:+UnlockDiagnosticVMOptions
  • experimental - These flags are in support of features, which are not part of officially supported product. Need to use -XX:+UnlockExperimentalVMOptions
  • commercial - These flags are related to commercial features of the JVM. Need a license to use in production. Need to use -XX:+UnlockCommercialFeatures
  • C1 - Client JIT Compiler specific flags
  • C2 - Server JIT Compiler specific flags
  • pd - Platform dependent flags
  • lp64 - Flags for 64bit JVM
  • ARCH - Architecture (CPU: x86, sparc etc) dependent flags

JVM Flag Data Types



$ java -XX:+PrintFlagsFinal -version | awk '{if (NR!=1) {print $1}}' | sort -u
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
bool
ccstr
ccstrlist
double
intx
uint64_t
uintx


The data types for JVM Flags are intuitive.

  • bool - Boolean (true/false)
  • ccstr - String
  • ccstrlist - Represents String arguments, which can accumulate
  • double - Double
  • intx - Integer
  • uint64_t - Unsigned long
  • uintx - Unsigned integer


Summary


If you ever wanted to find out all possible JVM flags, the -XX:+PrintFlagsFinal flag is the solution for you. Java Ergonomics selects the best values depending on the environment, and it's important to be aware of Java Ergonomics when tuning your application.


Madhuka Udantha: Packaging and Distributing Python Projects

Requirements

Wheel: It is a built package format that can be installed without the build process
pip install wheel

Twine : It is a utility for interacting with PyPI
pip install twine
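For reference, the typical Twine usage once the distributions are built looks like this (a preview only; building the distributions is covered below, and uploading assumes a PyPI account configured for twine):

python setup.py sdist bdist_wheel
twine upload dist/*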

Configuring a Project

Here are the files that will be needed at the root level.

setup.py : It contains a global setup() function. The keyword arguments to this function are how specific details of your project are defined.

from setuptools import setup

setup(
    name='sample',

    # Versions should comply with PEP440.
    version='1.0.0',

    description='A sample Python project',
    # url, author, author_email, license

    # See https://pypi.python.org/pypi?%3Aaction=list_classifiers
    classifiers=[
        # How mature is this project? Common values are (Alpha, Beta, Production)
        'Development Status :: 3 - Alpha',

        # Indicate whether you support Python 2, Python 3 or both.
        'Programming Language :: Python :: 2.7',
    ],

    keywords='sample module',

    # List run-time dependencies here
    install_requires=['peppercorn'],

    # List additional groups of dependencies here (e.g. development dependencies).
    extras_require={
        'dev': ['check-manifest'],
        'test': ['coverage'],
    },

    # If there are data files included in your packages
    package_data={
        'sample': ['package_data.dat'],
    },

    # Place data files outside of your packages
    data_files=[('my_data', ['data/data_file'])],

    # To provide executable scripts
    entry_points={
        'console_scripts': [
            'sample=sample:main',
        ],
    },
)


setup.cfg : It is an ini file that contains option defaults for setup.py


README.rst : It is the readme for the project


MANIFEST.in : Used when you need to package additional files


Package (Folder) : The most common practice is to include all Python modules and packages under a single top-level package that has the same name as the project


 


Building


Development Mode



  • python setup.py develop

This will install any dependencies declared with “install_requires” and also any scripts declared with “console_scripts”.




Packaging Project


Source Distributions



  • python setup.py sdist



To build a Universal Wheel:



  • python setup.py bdist_wheel --universal

Pure Python Wheels



  • python setup.py bdist_wheel

Install for windows



  • python setup.py bdist_wininst

Now you can share it and install it on Windows easily with the wizard, as below.




Now check the Index of Modules at: http://localhost:7464/




You can find your new modules here.




You can now use it in Python as a module, as below, and share it with other developers.


import minifycsv

# using minifycsv
minifycsv.main()

Next, we can look at uploading the project to PyPI.

Isuru Perera: Finding how many processors

I wanted to find out the processor details in my laptop and I found out that there are several ways to check. For example, see The RedHat community discussion on Figuring out CPUs and Sockets.

In this blog post, I'm listing a few commands to find out details about CPUs.

I'm using Ubuntu on my Lenovo ThinkPad T530 laptop, and the following commands should work on any Linux system.

Display information about CPU architecture


$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 58
Model name: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
Stepping: 9
CPU MHz: 1199.988
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 5787.10
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 4096K
NUMA node0 CPU(s): 0-3


$ lscpu -e
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ
0 0 0 0 0:0:0:0 yes 3600.0000 1200.0000
1 0 0 0 0:0:0:0 yes 3600.0000 1200.0000
2 0 0 1 1:1:1:0 yes 3600.0000 1200.0000
3 0 0 1 1:1:1:0 yes 3600.0000 1200.0000

The lscpu command reads the /proc/cpuinfo file to get the CPU architecture details. So, you can also read it to get more info.

$ cat /proc/cpuinfo

The lshw (list hardware) is another command to get CPU details.

$ sudo lshw -class processor


There is a GUI tool named "hardinfo", which can show details about various hardware components.

$ sudo apt-get install hardinfo
$ hardinfo


The dmidecode is another tool to find out hardware details. Following is an example to find processor details. The value "4" is the Dmi Type for the processor.

$ sudo dmidecode -t 4


The cpuid tool can dump CPUID information for each CPU.

$ sudo apt-get install cpuid
$ cpuid


The inxi is also another tool to check hardware information on Linux.

$ sudo apt-get install inxi
# Show full CPU output, including per CPU clockspeed and CPU max speed
$ inxi -C


Finding number of processors


# Print the number of processing units available
$ nproc
4


$ cat /proc/cpuinfo | grep processor | wc -l
4


Finding number of Physical CPUs


$ cat /proc/cpuinfo | grep "physical id" | sort -u | wc -l
1


Finding number of cores per socket


$ lscpu | grep 'socket'
Core(s) per socket: 2


$ cat /proc/cpuinfo | grep "cpu cores" | uniq
cpu cores : 2

Finding number of threads per core


$ lscpu | grep -i thread
Thread(s) per core: 2


Summary


It's important to know about the CPUs when we are doing performance tests. A physical CPU is inserted into a single CPU socket. There should be multiple CPU sockets to support multiple CPUs. However modern CPUs support multiple cores and hyper-threading. See: CPU Basics: Multiple CPUs, Cores, and Hyper-Threading Explained

We can use following formula to calculate the number of logical processors we see in our system.

Number of Logical Processors = Number of Sockets x Number of Cores per CPU x Threads per Core

In my Laptop, there is one physical processor with two cores and each core has two threads. So, I have 4 logical processors. :)
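As a quick sanity check (my addition; the field labels assume the util-linux lscpu output shown above), the same formula can be computed directly from lscpu:

sockets=$(lscpu | awk -F: '/^Socket\(s\)/ {print $2}')
cores=$(lscpu | awk -F: '/^Core\(s\) per socket/ {print $2}')
threads=$(lscpu | awk -F: '/^Thread\(s\) per core/ {print $2}')
echo $((sockets * cores * threads))
4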





Ajith Vitharana: Advanced Mediation with WSO2 API Manager (1.9) + WSO2 ESB - 3

In this use case, WSO2 API Manager acts as the gateway, and WSO2 ESB is used to execute the mediation logic.
1. This service (http://www.webservicex.com/globalweather.asmx) provides two operations.
  • GetCitiesByCountry
                   Get all major cities by country name(full / part).
  • GetWeather
                 Get weather report for all major cities around the world.

2. Create an API in the ESB to invoke the above operations using the following REST calls (the ESB starts with a port offset of 1).
http://localhost:8281/weatheresb/GetCitiesByCountry?CountryName=Romania
http://localhost:8281/weatheresb/GetWeather?CountryName=Romania&CityName=Timisoara
(Save this file as WeatherAPIESB.xml in <ESB_Home>\repository\deployment\server\synapse-configs\default\api)
<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse"
     name="WeatherAPIESB"
     context="/weatheresb">
   <resource methods="GET" url-mapping="/GetCitiesByCountry">
      <inSequence>
         <payloadFactory media-type="xml">
            <format>
               <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                                 xmlns:web="http://www.webserviceX.NET">
                  <soapenv:Header/>
                  <soapenv:Body>
                     <web:GetCitiesByCountry><web:CountryName>$1</web:CountryName>
                     </web:GetCitiesByCountry>
                  </soapenv:Body>
               </soapenv:Envelope>
            </format>
            <args>
               <arg xmlns:m0="http://services.samples/xsd"
                    evaluator="xml"
                    expression="get-property('query.param.CountryName')"/>
            </args>
         </payloadFactory>
         <send>
            <endpoint>
               <address uri="http://www.webservicex.com/globalweather.asmx" format="soap12"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <property name="messageType" value="application/json" scope="axis2"/>
         <send/>
      </outSequence>
   </resource>
   <resource methods="GET" url-mapping="/GetWeather">
      <inSequence>
         <payloadFactory media-type="xml">
            <format>
               <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                                 xmlns:web="http://www.webserviceX.NET">
                  <soapenv:Header/>
                  <soapenv:Body>
                     <web:GetWeather><web:CityName>$1</web:CityName>
                        <web:CountryName>$2</web:CountryName>
                     </web:GetWeather>
                  </soapenv:Body>
               </soapenv:Envelope>
            </format>
            <args>
               <arg xmlns:m0="http://services.samples/xsd"
                    evaluator="xml"
                    expression="get-property('query.param.CityName')"/>
               <arg xmlns:m0="http://services.samples/xsd"
                    evaluator="xml"
                    expression="get-property('query.param.CountryName')"/>
            </args>
         </payloadFactory>
         <send>
            <endpoint>
               <address uri="http://www.webservicex.com/globalweather.asmx" format="soap12"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <property name="messageType" value="application/json" scope="axis2"/>
         <send/>
      </outSequence>
   </resource>
</api>
 
3. Create an API in API Manager for the REST API exposed through the ESB (http://localhost:8281/weatheresb).

4. Design the API with two GET resources.


5. Go to the implement wizard and define the HTTP endpoint (http://localhost:8281/weatheresb).

6. Go to the manage wizard, select the tier, then "save & publish".


7. Log in to the store, subscribe to an application, then generate a token.



8. You can use SoapUI to invoke this API.
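Alternatively, a curl call of roughly this shape can be used (the context, version and token are placeholders; 8243 is the default API Manager gateway HTTPS port):

curl -k -H "Authorization: Bearer <access-token>" "https://localhost:8243/<api-context>/<api-version>/GetCitiesByCountry?CountryName=Romania"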



Related posts:

i) http://www.vitharana.org/2015/01/soap-web-service-as-rest-api-using-wso2.html

ii) http://www.vitharana.org/2015/08/soap-webservice-as-rest-api-wso2-api.html

Madhuka Udantha: Zeppelin Data Validation Service

Data Validation

Data validation is the process of ensuring that data in Zeppelin is clean, correct and conforms to the data schema model. Data validation provides a well-defined rule set for fitness and consistency checking for Zeppelin charts. Here is more about data validation types.

Where is the data validator used in Zeppelin?

The data validator is used in Zeppelin before drawing charts or analyzing data.

Why is the data validator used?

When drawing charts, the dataset can be validated against the data model schema. Before visualizing the dataset in charts, the dataset needs to be validated against the data model schema for the particular chart type.
This is because different chart types have different data models, e.g. pie charts, bar charts and area charts have a label and a number, while scatter charts and bubble charts have, at minimum, two numbers for the x axis and y axis in their data models.

Why is the data validator important?

When a user requests to draw any visualization of a dataset, the data validation service runs through the dataset and checks whether the dataset is valid against the data schema. If not, it gives a message indicating which record is mismatched against the data schema. So the user gets a more accurate visualization and, finally, a correct decision. Researchers and data analysts also use it to verify that a dataset is clean and the preprocessing has been done correctly.

How is Data Validation done?

Data Validation consists of a service, factories and configs. Data Validation is exposed as an Angular service. The data validation factory, which is extendable, contains the functional implementation. Schemas are defined as constants in the config. It contains basic data type validation by default.

Developers can introduce new data validation factories for their chart types by extending the data validator factory. If a new chart uses the same data schema, existing data validators can be used.

How to use existing Data Validation services
The Zeppelin Data Validation service is exposed as a service in the Zeppelin web application. It can be called with the dataset passed as a parameter.

`dataValidatorSrv.<dataModelValidateName>(data);`

This will return a message as below

{
  'error': true / false,
  'msg': 'error msg / notification msg'
}

How to Add a New Data Validation Schema

Data Validation is implemented using the factory model. Therefore a customized Data Validation factory can be created by extending `DataValidator` (zeppelin-web/src/components/data-validator/data-validator-factory.js).

The data model schema can be configured in 'dataModelSchemas'.

'MapSchema': {
  type: ['string', 'string', 'number', 'number', 'number']
}

If validation beyond data types is needed, a function for validating the record can be introduced. If Range and constraint validation, Code and Cross-reference validation or Structured validation are needed, they can be added to the Data Validation factory.


How to Expose a New Data Validation Schema as a Service
After adding a new data validation factory, it needs to be exposed in `dataValidatorSrv` (zeppelin-web/src/components/data-validator/data-validator-service.js).

this.validateMapData = function(data) {
  var mapValidator = mapdataValidator;
  doBasicCheck(mapValidator, data);
  // any custom validation can be called in here
  return buildMsg(mapValidator);
};

Adding new Data Range Validation


Data range validation is important for some datasets. As an example, a geographic information dataset will contain geographic coordinates: latitude measurements ranging from 0° to (+/–)90° and longitude measurements ranging from 0° to (+/–)180°. All the values of latitude and longitude must be inside the particular range. Therefore you can define the range in the schema and a range validation function for the factory as below.


Adding range for `MapSchema`


'MapSchema': {
  type: ['string', 'string', 'number', 'number', 'number'],
  range: {
    latitude: {
      low: -90,
      high: 90
    },
    longitude: {
      low: -180,
      high: 180
    }
  }
}

Validating latitude in `mapdataValidator` factory


// Latitude measurements range from 0° to (+/–)90°.
function latitudeValidator(record, schema) {
  var latitude = parseFloat(record);
  if (schema.latitude.low < latitude && latitude < schema.latitude.high) {
    msg += 'latitudes are ok | ';
  } else {
    msg += 'Latitude ' + record + ' is not in range | ';
    errorStatus = true;
  }
}

A few other sample validators can be found in the zeppelin-web/src/components/data-validator/ directory.

Madhuka Udantha: Introducing New Chart Library and Types for Apache Zeppelin

Why Charts are important in zeppelin?
Zeppelin is mostly used for data analysis and visualization. Depending on the user requirements and datasets, the types of charts needed can differ. So Zeppelin lets users add different chart libraries and chart types.

 

Add New Chart Library
When a JS chart library other than D3 (nvd3), which is included in Zeppelin, is needed, a new JS library for zeppelin-web is added by adding its name to zeppelin-web/bower.json.

eg: Adding map visualization to Zeppelin using Leaflet

Add "leaflet": "~0.7.3" to the dependencies.

 

Add New Chart Type

Firstly, add a button to view the new chart. Append the following lines to paragraph.html (zeppelin-web/src/app/notebook/paragraph/paragraph.html), depending on the chart you use.

<button type="button" class="btn btn-default btn-sm"
        ng-class="{'active': isGraphMode('mapChart')}"
        ng-click="setGraphMode('mapChart', true)"><i class="fa fa-globe"></i>
</button>


After a successful addition, the Zeppelin user will be able to see a new chart button added to the button group as follows.


new_map_button


Defining the chart area


Defining the chart area of the new chart type.
To define the chart view of the new chart type add the following lines to paragraph.html


<div ng-if="getGraphMode()=='mapChart'"
     id="p{{paragraph.id}}_mapChart">
  <leaflet></leaflet>
</div>

Setup the chart data


Different charts have different attributes and features. To handle such features of the new chart type map those behaviors and features in the function `setGraphMode()` in the file paragraph.controller.js as follows.


if (!type || type === 'mapChart') {
  // setup new chart type
}

The current Dataset can be retrieved by `$scope.paragraph.result` inside the `setGraphMode()` function.


Best practices for setting up a new chart


A new function can be used to set up the new chart type. Afterwards that function can be called inside `setGraphMode()`.


Here is sample code setting map chart type


var setMapChart = function(type, data, refresh) {
  // adding markers for map
  newmarkers = {};
  for (var i = 0; i < data.rows.length; i++) {
    var row = data.rows[i];
    var rowMarker = mapChartModel(row);
    newmarkers = $.extend(newmarkers, rowMarker);
  }
  $scope.markers = newmarkers;
  // adding map bounds
  var bounds = leafletBoundsHelpers.createBoundsFromArray([
    [Math.max.apply(Math, latArr), Math.max.apply(Math, lngArr)],
    [Math.min.apply(Math, latArr), Math.min.apply(Math, lngArr)]
  ]);
  $scope.bounds = bounds;
};

Madhuka Udantha: Tutorial with Map Visualization in Apache Zeppelin

Zeppelin uses Leaflet, which is an open source, mobile friendly interactive map library.

Before starting the tutorial you will need a dataset with geographical information. The dataset should contain location coordinates representing longitude and latitude. An online CSV file will be used for the next steps. Here I am sharing a sample dataset in a gist.

import org.apache.commons.io.IOUtils
import java.net.URL
import java.nio.charset.Charset

// load map data
val myMapText = sc.parallelize(
  IOUtils.toString(
    new URL("https://gist.githubusercontent.com/Madhuka/74cb9a6577c87aa7d2fd/raw/2f758d33d28ddc01c162293ad45dc16be2806a6b/data.csv"),
    Charset.forName("utf8")).split("\n"))

Refine Data


Next, to transform the data from CSV format into an RDD of Map objects, run the following script. It removes the CSV header using the filter function.


 


case class Map(Country: String, Name: String, lat: Float, lan: Float, Altitude: Float)

val myMap = myMapText.map(s => s.split(",")).filter(s => s(0) != "Country").map(
  s => Map(s(0),
    s(1),
    s(2).toFloat,
    s(3).toFloat,
    s(4).toFloat
  )
)

// Below line works only in spark 1.3.0.
// For spark 1.1.x and spark 1.2.x,
// use myMap.registerTempTable("myMap") instead.
myMap.toDF().registerTempTable("myMap")

Data Retrieval and Data Validation


Here is how the dataset is viewed as a table


map_dataset



The dataset can be validated by calling `dataValidatorSrv`. It will validate longitude and latitude. If any record is out of range, it will point out the record ID and record value with a meaningful error message.



var msg = dataValidatorSrv.validateMapData(data);



Now the data distribution can be viewed on a geographical map as below.


%sql
select * from myMap
where Country = "${Country=United States}"

%sql
select * from myMap
where Altitude > ${Altitude=300}

maps

Madhuka Udantha: Zeppelin Docs

Install Ruby Version Manager (rvm)

curl -L https://get.rvm.io | bash -s stable --ruby

Then check which rubies are installed by using

rvm list

ruby -v

You can then switch Ruby versions using

rvm use 1.9.3 --default

If it is not installed, you can install it with

rvm install ruby-1.9.3-p551

Now that we have the correct version, start building the app

gem install bundler

bundle install

Screenshot from 2015-08-11 15_37_41

To start the server

bundle exec jekyll serve --watch

Screenshot from 2015-08-11 15_37_18

Then go to the URL below

http://localhost:4000/

Screenshot from 2015-08-11 15_39_37

Ajith Vitharana: SOAP webservice as a REST API - WSO2 API Manager (1.9) - 2

1. I'm going to use the weather service which is publicly available. (http://www.webservicex.com/globalweather.asmx)

2. This SOAP service has an operation called "GetCitiesByCountry". You can get the following results when it is accessed directly from SoapUI.

3. Now I'm going to create an API in WSO2 API Manager to expose this service operation as a REST operation.

Eg: Expose GetCitiesByCountry as a GET method as below.
https://localhost:8243/weather/v1.0.0/GetCitiesByCountry?country=Romania
4. We want to write a mediation extension to read the "country" parameter from the request and construct the SOAP payload expected by the back end service (I'm going to save this file as JSONtoSOAP.xml). This file should be uploaded to /_system/governance/apimgt/customsequences/in in the registry.
<sequence xmlns="http://ws.apache.org/ns/synapse" name="admin--weather:vv1.0.0--In">
   <property name="country" expression="$url:country"/>
   <payloadFactory media-type="xml">
      <format>
         <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                           xmlns:web="http://www.webserviceX.NET">
            <soapenv:Header/>
            <soapenv:Body>
               <web:GetCitiesByCountry>
                  <web:CountryName>$1</web:CountryName>
               </web:GetCitiesByCountry>
            </soapenv:Body>
         </soapenv:Envelope>
      </format>
      <args>
         <arg xmlns:m0="http://services.samples/xsd" evaluator="xml" expression="get-property('country')"/>
      </args>
   </payloadFactory>
</sequence>


5. We want another mediation extension file to convert the SOAP response to JSON (I'm going to save this file as SOAPtoJSON.xml). This file should be uploaded to /_system/governance/apimgt/customsequences/out in the registry.
<sequence xmlns="http://ws.apache.org/ns/synapse" name="admin--weather:vv1.0.0--Out">
   <property name="messageType" value="application/json" scope="axis2"/>
</sequence>
 
6. Create an API as below. (Make sure to add a new query parameter called country for the GetCitiesByCountry resource.)




7. Go to the implement wizard and select the endpoint type as "Address Endpoint". Then define the endpoint as http://www.webservicex.com/globalweather.asmx


8.  Click on the "Advance Options" button and select the "Format" as SOAP 1.2.


9. Go to the manage wizard and select the custom mediation from the "Message Mediation Policies".


10. Click on "save & publish" button to publish this API to store.

11. Log in to the store, subscribe that API to an application and generate a token.

12. Use the following curl command to invoke the GetCitiesByCountry operation.
curl -k -H "Authorization: Bearer 99215d98fa10ce6f2d83cd11ba9828" -H "Content-Type: application/json" https://localhost:8243/weather/v1.0.0/GetCitiesByCountry?country=Romania

13. You can also use SoapUI to invoke that API. (Make sure to define the Authorization header.)


Related posts:

i) http://www.vitharana.org/2015/01/soap-web-service-as-rest-api-using-wso2.html

ii) http://www.vitharana.org/2015/08/advance-mediation-with-wso2-api-manager.html

Yumani Ranaweera: useOriginalwsdl

I learnt a practical example of using the 'useOriginalwsdl' parameter in an ESB proxy service which has a WSDL associated with it. Thanks Charitha, as this was learnt by listening to how Charitha approached and handled an ESB issue.

Firstly, there is a proxy service with an associated WSDL, and we were to access an operation in it via SoapUI. This particular WSDL operation has a SOAP header specified. The issue being addressed is that the SOAP header was not appearing in the SOAP request generated by SoapUI.

To generate this request, we were generating the ?wsdl of the proxy and using it in SoapUI.

When analyzing this auto-generated WSDL (?wsdl), we found out that some of the parameters in the original WSDL were not appearing in it. This is because, during the auto-generation process, the ESB recreates the WSDL treating it as part of the proxy.

To avoid this and keep the WSDL syntax as in the original, we can use the parameter 'useOriginalwsdl'. After this correction, when the ?wsdl was used in the SoapUI project, we were able to see the correct request.

So when creating a proxy with an associated WSDL, if we need to make sure the original WSDL is available for message invocation, we need to set useOriginalwsdl=true.
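A minimal sketch of what such a proxy could look like (the proxy name is a placeholder and the WSDL URI is only an example; the key part is the useOriginalwsdl parameter):

<proxy xmlns="http://ws.apache.org/ns/synapse" name="WeatherProxy" transports="http,https">
   <target>
      <endpoint>
         <address uri="http://www.webservicex.com/globalweather.asmx"/>
      </endpoint>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <publishWSDL uri="http://www.webservicex.com/globalweather.asmx?wsdl"/>
   <parameter name="useOriginalwsdl">true</parameter>
</proxy>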
 

Just now I read about the enablePublishWSDLSafeMode parameter in ESB proxies from one of Prabath's blog posts. I will write about it after trying the scenario.

Bhathiya Jayasekara: How to Split a JSON array in WSO2 ESB

Say you have this JSON array.

Now you want to split the users and call some backend service for each user. Here is how to do it. (Here I'm logging each user's details instead of calling a backend service.)
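A minimal sketch with the iterate mediator (this assumes a payload of the form {"users":[{"name":"X","age":10}, ...]}; the field names are my assumption based on the log output below, and incoming JSON is represented internally under a jsonObject root):

<iterate xmlns="http://ws.apache.org/ns/synapse" expression="//jsonObject/users">
   <target>
      <sequence>
         <log level="custom">
            <property name="Name is" expression="//users/name"/>
            <property name="Age is" expression="//users/age"/>
         </log>
         <!-- a call to the backend service would go here instead of the log -->
      </sequence>
   </target>
</iterate>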

Output will look like this.

[2015-08-18 15:49:27,200]  INFO - LogMediator Name is = X, Age is = 10
[2015-08-18 15:49:27,204]  INFO - LogMediator Name is = Y, Age is = 12
[2015-08-18 15:49:27,205]  INFO - LogMediator Name is = Z, Age is = 15

Hope this will be helpful to someone.

References
https://denuwanthi.wordpress.com/2015/06/03/wso2-esbaccess-an-array-defined-in-property-mediator/

Hasini Gunasinghe: RahasNym: Protecting against Linkability in the Digital Identity Eco System

This is the poster paper published and presented on the $subject at the IEEE International Conference on Distributed Computing Systems (ICDCS 2015), which was held in Columbus, Ohio, USA from 29th June to 2nd July.

The poster paper can be found in the conference proceedings.

Following is the poster that was presented during the poster session of the main conference:



We were lucky to get the best poster award for this work.


Hasini Gunasinghe: Privacy Preserving Biometrics-Based and User Centric Authentication Protocol

This is my first research paper from grad school. It was published in the 8th International Conference on Network and System Security (NSS 2014), which was held in Xi'an, China from 15th-17th October 2014.

The full paper can be found here in Springer Lecture Notes in Computer Science.

Following are the slides I used when presenting the paper at the conference.



Interestingly, we got the best paper award for this paper at NSS 2014.



Ajith Vitharana: API Manager distributed setup with Apache proxy

In this post I'm going to explain how to set up an API Manager distributed deployment with an Apache proxy. (The configuration details are included for both Windows and Linux environments.)


  • Proxy1 is used to accept API requests and distribute them among the gateway nodes.
  • Proxy2 is used to distribute the key validation and authentication requests among the manager nodes.
  • Proxy3 is used to publish APIs to the gateway nodes.
  • SVN/rsync is used to synchronize the API artifact XMLs across the gateway nodes.
  • The registry (config and governance), user management and API Manager databases are shared across all the nodes.
Required applications:

1. Java 1.7.X
2. Apache2
3. Open SSL
4. MySQL

 1.0 Install and configure the Apache2 on Windows

i) You can download and install XAMPP (https://www.apachefriends.org/index.html), which provides a control panel to manage the Apache server on Windows.

 Open the httpd.conf file (C:\xampp\apache\conf) and enable the following modules.
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
 ii) In a Linux environment:

Install Apache2 using the following command.
$sudo apt-get install apache2
To enable the modules.
$sudo a2enmod proxy_http
$sudo a2enmod ssl
$sudo a2enmod proxy_balancer
$sudo a2enmod lbmethod_byrequests
$sudo a2enmod slotmem_shm
2.0 Create virtual host configuration for proxy-1.

i) In Windows, the virtual host file (eg: gw.wso2am.conf) should be saved in the C:\xampp\apache\conf\extra directory (in Linux, /etc/apache2/sites-available).
<VirtualHost gw.wso2am:443>

 ServerAdmin abc@wso2.com
 ServerName  gw.wso2am
 ServerAlias gw.wso2am
 ProxyRequests Off
 SSLEngine On
 SSLProxyEngine On
 #SSLProxyVerify none
 SSLProxyCheckPeerCN off
 SSLProxyCheckPeerName off
 #SSLProxyCheckPeerExpire off

 SSLCertificateFile D:\ssl\gwssl\ca.crt
 SSLCertificateKeyFile D:\ssl\gwssl\ca.key

<Proxy balancer://gw.wso2am>
        BalancerMember https://gw1.wso2am:8245 route=gwNode1 loadfactor=1
        BalancerMember https://gw2.wso2am:8246 route=gwNode2 loadfactor=1
        ProxySet lbmethod=byrequests
        ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>

ProxyPass /  balancer://gw.wso2am/ 
ProxyPassReverse / balancer://gw.wso2am/ 
  
</VirtualHost>

ii) In Windows, include this virtual host configuration in the httpd.conf file. (Find the # Virtual hosts configuration section.)
Include conf/extra/gw.wso2am.conf
iii) In Linux you can use the following command.
sudo a2ensite gw.wso2am.conf
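Then reload Apache so the newly enabled site takes effect.
sudo service apache2 reload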
3.0 Create certificate file and key for the proxy-1.

i) You can download OpenSSL for Windows (https://code.google.com/p/openssl-for-windows/). Extract the zip file and add the bin directory to the path variable.

Generate key:
openssl genrsa -out ca.key 1024 (windows)
sudo openssl genrsa -out ca.key 1024 (linux)
Generate CSR (Certificate Signing Request ).
  • Here you need to enter CSR details and make sure to enter the gw.wso2am as the Common Name (CN).
openssl req -new -key ca.key -out ca.csr (windows)
sudo openssl req -new -key ca.key -out ca.csr (linux)
Generate a self-signed certificate.
openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt (Windows)
sudo openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt (linux)
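Optionally (my addition), you can confirm the CN of the generated certificate.
openssl x509 -in ca.crt -noout -subject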


  • The ca.crt and ca.key have been configured as SSLCertificateFile and SSLCertificateKeyFile in the above virtual host configuration.
  •  If the CN (Common Name) of the back end server certificate is different from the virtual host name, then you need to disable the SSLProxyCheckPeerCN and SSLProxyCheckPeerName in the virtual host configuration.
4.0 Create new key store for gateway nodes.

i) Generate new key store.
  • Make sure to enter gw.wso2am as the first and last name; that value will be set as the common name (CN).
  • Enter the key store password as wso2carbon; then you don't need to change the default key store configurations in API Manager.
keytool -genkey -keyalg RSA -alias wso2carbon -keystore wso2carbon.jks -storepass wso2carbon -validity 360 -keysize 2048
ii) Export the public key.
keytool -export -alias wso2carbon -keystore wso2carbon.jks -storepass wso2carbon -file wso2carbon.pem



iii) Copy the new wso2carbon.jks and wso2carbon.pem files to the repository\resources\security directory of the gateway nodes.

iv) Go to the repository\resources\security directory from command line and execute the following command to remove the default wso2carbon certificate from the client-truststore.jks.
keytool -delete -alias wso2carbon -keystore client-truststore.jks
v) Import new certificate.
keytool -import -alias wso2carbon -file wso2carbon.pem -keystore client-truststore.jks -storepass wso2carbon

5.0 Configure the gateway nodes.

i) Open the carbon.xml file (repository\conf) and define the HostName and MgtHostName.
<HostName>gw1.wso2am</HostName>
<MgtHostName>mgt1.gw.wso2am</MgtHostName>
ii) Open the catalina-server.xml file (repository\conf\tomcat) and add the following two properties in http connector configuration.
proxyPort="80"
proxyName="gw.wso2am"

iii) Open the catalina-server.xml file (repository\conf\tomcat) and add the following two properties to the https connector configuration.
proxyPort="443"
proxyName="gw.wso2am"

iv) Open the master-datasources.xml and configure new data source for mount registry.
<datasource>       
    <name>WSO2_CARBON_DB_cluster_db</name>
    <description>The datasource used for mount registry </description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_cluster_db</name>
    </jndiConfig>
    <definition type="RDBMS">
    <configuration>
        <url>jdbc:mysql://localhost:3306/cluster_db</url>
        <username>root</username>
        <password></password>
        <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        <maxActive>50</maxActive>
        <maxWait>60000</maxWait>
        <testOnBorrow>true</testOnBorrow>
        <validationQuery>SELECT 1</validationQuery>
        <validationInterval>30000</validationInterval>
    </configuration>
</definition>
</datasource>

v) Open the master-datasources.xml and update the WSO2AM_DB  data source.
<datasource>
            <name>WSO2AM_DB</name>
            <description>The datasource used for API Manager database</description>
            <jndiConfig>
                <name>jdbc/WSO2AM_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://localhost:3306/cluster_db</url>
                    <username>root</username>
                    <password></password>
                    <defaultAutoCommit>false</defaultAutoCommit>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>
vi) Open the registry.xml and add the following configuration for registry mounting.
    <dbConfig name="wso2registry_mount">
        <dataSource>jdbc/WSO2CarbonDB_cluster_db</dataSource>
    </dbConfig>
    <remoteInstance url="https://localhost:9443/registry">
        <id>instanceid</id>
        <dbConfig>wso2registry_mount</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
        <cacheId>root@jdbc:mysql://localhost:3306/cluster_db</cacheId>
    </remoteInstance>

    <mount path="/_system/config" overwrite="true">
        <instanceId>instanceid</instanceId>
        <targetPath>/_system/nodes</targetPath>
    </mount>
       <mount path="/_system/governance" overwrite="true">
        <instanceId>instanceid</instanceId>
        <targetPath>/_system/governance</targetPath>
    </mount>
vii) Open the user-mgt.xml and update the dataSource property as below.
<Property name="dataSource">jdbc/WSO2CarbonDB_cluster_db</Property>
6.0 Add the virtual host configuration for proxy-2.

i) In Windows, this file (mgt.wso2am.conf) should be saved in the C:\xampp\apache\conf\extra directory (in Linux, /etc/apache2/sites-available).
<VirtualHost mgt.wso2am:443>


 ServerAdmin abc@wso2.com
 ServerName  mgt.wso2am
 ServerAlias mgt.wso2am

 ProxyRequests Off

 SSLEngine On
 SSLProxyEngine On

 #SSLProxyVerify none
 SSLProxyCheckPeerName off
 SSLProxyCheckPeerCN off
 #SSLProxyCheckPeerExpire off


 SSLCertificateFile D:\ssl\mgtssl\ca.crt
 SSLCertificateKeyFile D:\ssl\mgtssl\ca.key

<Proxy balancer://mgt.wso2am>
        BalancerMember https://mgt1.wso2am:9443 route=amNode1 loadfactor=1
        BalancerMember https://mgt2.wso2am:9443 route=amNode2 loadfactor=1
        ProxySet lbmethod=byrequests
        ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>

ProxyPass /  balancer://mgt.wso2am/ 
ProxyPassReverse / balancer://mgt.wso2am/
   
</VirtualHost>

ii) In Windows, include this virtual host configuration in the httpd.conf file. (Find the # Virtual hosts configuration section and include it.)
Include conf/extra/mgt.wso2am.conf
iii) In Linux you can use the following command.
sudo a2ensite mgt.wso2am.conf
iv) Create a new key and certificate for proxy-2 as in step 3.0.
  • Make sure to enter mgt.wso2am as the Common Name (CN).
  • The ca.crt and ca.key files are configured as SSLCertificateFile and SSLCertificateKeyFile in the virtual host configuration above.
v) Generate a new key store for the manager nodes as in step 4.0.
  • Make sure to enter mgt.wso2am as the first and last name; that value will be set as the common name (CN).
  • Enter the key store password as wso2carbon, so you don't need to change the default key store configurations.
 7.0 Configure the manager nodes.

i) Open the carbon.xml file (repository\conf) and define the HostName and MgtHostName in manager-1 and manager-2.
For manager-1:
<HostName>am1.wso2am</HostName>
<MgtHostName>mgt1.wso2am</MgtHostName>
For manager-2:
<HostName>am2.wso2am</HostName>
<MgtHostName>mgt2.wso2am</MgtHostName>
ii) Open the catalina-server.xml file (repository\conf\tomcat) and add the following two properties to the http connector configuration.
proxyPort="80"
proxyName="mgt.wso2am"
iii) Open the catalina-server.xml file (repository\conf\tomcat) and add the following two properties to the https connector configuration.
proxyPort="443"
proxyName="mgt.wso2am"
iv) Repeat the same configuration steps mentioned in 5. iv), 5. v), 5. vi) and 5. vii) for the manager nodes.

8.0 Add the virtual host configuration for gateway management requests (API publishing) (proxy-3).

 <VirtualHost mgt.gw.wso2am:443>
 ServerAdmin ajithn@wso2.com
 ServerName  mgt.gw.wso2am
 ServerAlias mgt.gw.wso2am

 ProxyRequests Off

 SSLEngine On
 SSLProxyEngine On

 #SSLProxyVerify none
 SSLProxyCheckPeerCN off
 SSLProxyCheckPeerName off
 #SSLProxyCheckPeerExpire off

 SSLCertificateFile D:\ssl\gwssl\ca.crt
 SSLCertificateKeyFile D:\ssl\gwssl\ca.key
  

<Proxy balancer://mgt.gw.wso2am>
        BalancerMember https://mgt1.gw.wso2am:9445 route=gwNode1 loadfactor=1
        BalancerMember https://mgt2.gw.wso2am:9446 route=gwNode2 loadfactor=1
        ProxySet lbmethod=byrequests
        ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>

ProxyPass /  balancer://mgt.gw.wso2am/
ProxyPassReverse / balancer://mgt.gw.wso2am/

</VirtualHost>

i) In Windows, include this virtual host configuration (mgt.gw.wso2am.conf) in the httpd.conf file. (Find the # Virtual hosts configuration section and include it.)
Include conf/extra/mgt.gw.wso2am.conf
ii) In Linux you can use the following command.
sudo a2ensite mgt.gw.wso2am.conf
9. Create the database.
  • Here I'm going to create one database for the shared registry, user manager and API Manager databases. In a production setup these should be configured as separate databases.
i) Log in to the MySQL server and execute the following command. (I'm using MySQL 5.6.26)
CREATE DATABASE cluster_db CHARACTER SET latin1;  
ii) Download the  connector jar ("mysql-connector-java-5.1.29.jar") and copy to the repository\components\lib directory of all the gateway and manager nodes.

10. Configure api-manager.xml

i) Open the api-manager.xml file of the gateway nodes and configure the <AuthManager> configuration section. (ServerURL should point to proxy-2).
<AuthManager>
        <!--
            Server URL of the Authentication service
        -->
        <ServerURL>https://mgt.wso2am/services/</ServerURL>
        <!--
            Admin username for the Authentication manager.
        -->
        <Username>${admin.username}</Username>
        <!--
            Admin password for the Authentication manager.
        -->
        <Password>${admin.password}</Password>
    </AuthManager>
ii) Open the api-manager.xml file of the gateway node and configure the <APIKeyValidator> section. (the ServerURL should point to proxy2).
        <ServerURL>https://mgt.wso2am/services/</ServerURL>

<!--
Admin username for API key manager.
-->
<Username>${admin.username}</Username>

<!--
Admin password for API key manager.
-->
<Password>${admin.password}</Password>
iii) Open the api-manager.xml file of the manager nodes and configure the <APIGateway> configuration.(ServerURL should point to proxy-3)
<APIGateway>
    <!-- The environments to which an API will be published -->
    <Environments>
        <!-- Environments can be of different types. Allowed values are 'hybrid', 'production' and 'sandbox'.
             An API deployed on a 'production' type gateway will only support production keys
             An API deployed on a 'sandbox' type gateway will only support sandbox keys
             An API deployed on a 'hybrid' type gateway will support both production and sandbox keys -->
                <Environment type="hybrid" api-console="true">
                        <Name>Production and Sandbox</Name>
                    <Description> Description of environment</Description>
            <!--
                        Server URL of the API gateway.
                -->
                        <ServerURL>https://mgt.gw.wso2am/services/</ServerURL>
            <!--
                        Admin username for the API gateway.
                -->
                        <Username>${admin.username}</Username>
            <!--
                        Admin password for the API gateway.
                -->
                        <Password>${admin.password}</Password>
            <!--
                        Endpoint URLs for the APIs hosted in this API gateway.
                -->
                        <GatewayEndpoint>http://gw.wso2am,https://gw.wso2am</GatewayEndpoint>
                </Environment>
        </Environments>
</APIGateway>

11. Install the proxy certificates into the client-truststore.jks file on the gateway and manager nodes.
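For example, assuming the ca.crt files created for the proxies in steps 3.0 and 6.0 are at hand, the import could be done as follows (the alias names are arbitrary):
keytool -import -alias gwproxy -file gwssl/ca.crt -keystore client-truststore.jks -storepass wso2carbon
keytool -import -alias mgtproxy -file mgtssl/ca.crt -keystore client-truststore.jks -storepass wso2carbon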

12. Configure gateway cluster.

i) Open the axis2.xml (repository\conf\axis2) file and enable clustering.
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
ii) Change the membershipScheme to wka.
<parameter name="membershipScheme">wka</parameter>
iii) Add a name for cluster domain. (eg: wso2.gw.domain)
<parameter name="domain">wso2.gw.domain</parameter>
iv) Add the localMemberHost as the server IP (eg: for gateway-1 *.*.*.1 and for gateway-2 *.*.*.2).
<parameter name="localMemberHost">*.*.*.1</parameter>
v) Add localMemberPort.   (eg: use 4100 for gateway-1 and 4200 for gateway-2).
<parameter name="localMemberPort">4100</parameter>
vi ) Add members.
for gateway-1:
 <members>
          <member>
             <hostName>*.*.*.2</hostName>
             <port>4200</port>
          </member>
  </members>
for gateway-2:
 <members>
        <member>
             <hostName>*.*.*.1</hostName>
             <port>4100</port>
        </member>
  </members>

13. Start servers.

i) Go to the bin directory and execute the startup script (wso2server.sh for Linux and wso2server.bat for Windows) as below. (If you have already started the server, delete the repository\database directory before starting the server - this is needed to create the mount configuration in the local registry.)
Eg: for Linux
Gateway-1/bin >  sh wso2server.sh -Dsetup -DjvmRoute=gwNode1 -Dprofile=gateway-worker
Gateway-2/bin >  sh wso2server.sh -Dsetup -DjvmRoute=gwNode2 -Dprofile=gateway-worker
Manager-1/bin >  sh wso2server.sh -Dsetup -DjvmRoute=amNode1
Manager-2/bin >  sh wso2server.sh -Dsetup -DjvmRoute=amNode2
  • The "jvmRoute" Identifier which must be used in load balancing scenarios to enable session affinity.
  • The value for the "jvmRoute" is configured in virtual host configuration.
  • The -Dsetup parameter is to populate the tables for registry, user manager and api manager in cluster_db.
  • The -Dprofile=gateway-worker is to start the gateway nodes from gateway worker profile.
14. Add the hostnames to the hosts file (in Windows, C:\Windows\System32\drivers\etc\hosts; in Linux, /etc/hosts).
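For example, if all the proxies and nodes run on the same machine for testing, the hosts file entries could look like the following (the loopback IP is an assumption for a single-machine setup):
127.0.0.1   gw.wso2am mgt.wso2am mgt.gw.wso2am
127.0.0.1   gw1.wso2am gw2.wso2am mgt1.gw.wso2am mgt2.gw.wso2am
127.0.0.1   am1.wso2am am2.wso2am mgt1.wso2am mgt2.wso2am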

15. Login to the publisher (https://mgt.wso2am/publisher) and deploy the default sample.

16. Log in to the store (https://mgt.wso2am/store), subscribe to the API, and generate a token.

17. You can create a SoapUI REST project for https://gw.wso2am/weatherapi/1.0.0?q=colombo&mode=xml

i) Add Authorization header and invoke the API.
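For example, with curl (the access token value is a placeholder for the token generated in step 16):

curl -k -H "Authorization: Bearer <access-token>" "https://gw.wso2am/weatherapi/1.0.0?q=colombo&mode=xml"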

Evanthika AmarasiriEnabling E-mail User Login for WSO2 Products

This post explains different ways e-mail login can be enabled and how users/tenants can login to WSO2 products.

Pre-requisites

Users, tenants and their e-mail addresses that will be used for this scenario are as follows.

Super Admin User Name - admin
A user of Super Admin - adminUser
Email of Super Admin user - admin@yahoo.com
Email of a user of Super Admin - adminUser@gmail.com
Tenant Domain - tenantdomain.com
Tenant Admin - admin@tenantdomain.com
Tenant User - tenantDomainUser@tenantdomain.com
Tenant Admin Email - admin@hotmail.com
Tenant User Email - tenantDomainUser@aol.com

How to create tenants

When creating tenants, you have to give the tenant admin username as something like admin@gmail.com and not as admin.

Scenario 1

Configuration that needs to be done

carbon.xml

<EnableEmailUserName>true</EnableEmailUserName>

user-mgt.xml

For JDBC User Stores

<Property name="UsernameWithEmailJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>

For LDAP based User Stores

<Property name="UserNameSearchFilter">(&(objectClass=person)(|(mail=?)(uid=?)))</Property>

and comment out the following

<!--Property name="UserDNPattern">uid={0},ou=Users,dc=wso2,dc=org</Property-->


So when you do the above configuration, you can log in as the following users:

- admin
- admin@yahoo.com
- admin@yahoo.com@carbon.super
- adminUser
- adminUser@gmail.com
- adminUser@gmail.com@carbon.super
- admin@hotmail.com@tenantdomain.com
- tenantDomainUser@aol.com@tenantdomain.com

You cannot login as

- admin@tenantdomain.com
- tenantDomainUser@tenantdomain.com


Scenario 2 - Without configuring the EnableEmailUserName property in carbon.xml

Configuration that needs to be done

carbon.xml

<EnableEmailUserName>false</EnableEmailUserName>

user-mgt.xml

Same as in Scenario 1 above

You should be able to log in with the following usernames/email addresses:

- admin
- admin@yahoo.com@carbon.super
- adminUser
- adminUser@gmail.com@carbon.super
- admin@hotmail.com@tenantdomain.com
- tenantDomainUser@aol.com@tenantdomain.com
- tenantDomainUser@tenantdomain.com

You cannot log in as

- admin@yahoo.com
- adminUser@gmail.com
- admin@tenantdomain.com

To create users with email addresses, you need to change the following properties of the LDAP user store configuration.

<Property name="UserNameAttribute">mail</Property>
<Property name="UsernameJavaRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>
<Property name="UserNameSearchFilter">(&(objectClass=person)(mail=?))</Property>

After configuring your server with the above configs, you should be able to add users with email addresses as well as with uids.
For more information, go through the detailed blog written by Asela Pathberiya.

Ajith VitharanaRsync to synchronize the artifacts - WSO2 products.

We have to configure an artifact synchronization mechanism to synchronize the artifacts across the cluster nodes. SVN-based synchronization is the recommended way to do it.

The following use case is just from my local testing; try it if you are interested. :)

In WSO2 products, artifacts are stored in the /repository/deployment/server directory. On Linux-based systems, we can use rsync (https://rsync.samba.org/) to synchronize artifacts across the cluster nodes.

syncd (https://github.com/drunomics/syncd) is a simple bash script based on the rsync and inotify-tools utilities. It provides a very easy way to run rsync in daemon mode.
apt-get install inotify-tools rsync
1. Go to the download directory and execute the following command to create a symbolic link.
sudo ln -s $PWD/syncd /usr/local/bin/syncd
2. Copy the syncd.conf file to manager-1/repository/deployment/ directory.

3. Open the syncd.conf file and configure the following properties.
WATCH_DIR=$SYNCD_CONFIG_DIR/server
SSH_USER=ajith
SSH_HOST=127.0.0.1
REMOTE_TARGET_DIR="/home/ajith/cluster/worker-1/repository/deployment/server"
5. Generate an SSH key for remote authentication so that the connection can be established without prompting for a password.

i) Execute the following command and just press Enter for all the prompts.
ssh-keygen
ii) cd /home/<username>/.ssh 

iii) Open the id_rsa.pub file and copy the key.
 vi id_rsa.pub  
iv) Execute the following command in the same directory (.ssh) and paste the key. [Ctrl+X then Y to save]
nano authorized_keys 
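Alternatively, if ssh-copy-id is available, steps ii) to iv) can be done in one command (the user and host here are the values from syncd.conf above):
ssh-copy-id ajith@127.0.0.1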
6. Go back to manager-1/repository/deployment/ and execute the following command to start the daemon task.
syncd start [stop|restart].

7. Now, when you deploy artifacts on the manager node, they will be synchronized to the worker node.

8. You can extend the syncd script to synchronize multiple target locations. :)

sanjeewa malalgodaSample data source configuration for WSO2 Servers to connect to JDBC using LDAP

Please find below a sample data source configuration to access JDBC using an LDAP connection.

       <datasource>
            <name>DATASOURCE_NAME</name>
            <description>The datasource used for BPS</description>
            <jndiConfig>
                <name>jdbc/JNDI_NAME</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:oracle:thin:@ldap://localhost:389/cn=wso2dev2,cn=OracleContext,dc=test,dc=com</url>
                    <username>DB_USER_NAME</username>
                    <password>DB_PASSWORD</password>
                    <driverClassName>oracle.jdbc.OracleDriver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1 FROM DUAL</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>




sanjeewa malalgodaHow to install Redis in Ubuntu and send events

Please follow the instructions below to install and use Redis.
Type the following commands on the command line.

wget http://download.redis.io/releases/redis-stable.tar.gz

tar xzf redis-stable.tar.gz

cd redis-stable

make

make test

sudo make install

cd utils

sudo ./install_server.sh
As the script runs, you can choose the default options by pressing enter.

The service name depends on the port you set during the installation; 6379 is the default port.

sudo service redis_6379 start
sudo service redis_6379 stop

Access redis command line tool
redis-cli

You will see the following prompt.
redis 127.0.0.1:6379>


Then publish an event to a channel as follows.
127.0.0.1:6379> publish EVENTCHANNEL sanjeewa11111199999999
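To verify that the event is delivered, you can subscribe to the same channel from a second redis-cli session before publishing:
127.0.0.1:6379> subscribe EVENTCHANNEL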



sanjeewa malalgodaHow to increase the timeout value in WSO2 API Manager

When we increase the timeout value in API Manager, we have to set three properties.

1) Global timeout defined in synapse.properties (\repository\conf\synapse.properties)

synapse.global_timeout_interval=60000000


2) Socket timeout defined in the passthru-http.properties (ESB_HOME\repository\conf\passthru-http.properties )

http.socket.timeout=60000000

3) We also need to set the timeout at the API level, for each API.
 <endpoint name="admin--Stream_APIproductionEndpoint_0">
    <address uri="http://localhost:9763/example-v4/example">
       <timeout>
          <duration>12000000</duration>
          <responseAction>fault</responseAction>
       </timeout>
    </address>
 </endpoint>

Shiva BalachandranSetting Up Java, Maven and Git on Windows 7

This is just a small reference guide on how to setup Java, Maven and Git on a Windows machine.

  1. Download the latest JDK and install it on the machine; the Java documentation clearly explains the installation in its guides.
  2. Download and unzip Maven from their site. Maven doesn't require installation; it's simply a folder with the executables, and all you have to do is set the path, which I will address later on.
  3. Download Git Bash and install it on the machine; it's a simple download-and-run executable.
    Word of advice, stick to the default settings, unless you know for sure what you are doing.
  4. Now that we have the necessary tools downloaded and installed on the local machine, it is a matter of setting the path variables.
  5. Highly recommended: before heading off to set up the path variables, open up and note down the directory paths of the Java folder (e.g. C:\Program Files\Java\jdk1.7.0_03),
    Maven Folder(e.g.- C:\mavenHome ),
    Git Location (e.g. – C:\Users\shiva\AppData\Local\Programs\Git )
  6. The first step is to add the JAVA_HOME variable and then add the JDK path to the PATH variable.
  7. Right click on My Computer, select Properties and look for the Advanced Settings on the left column (refer to the image below).
  8. Once you click on the Advanced Settings, a pop up should load as shown below; look for “Environment Variables” in the “Advanced” tab.
  9. The following screen should pop up; in there you use the New button to create variables and the Edit button to edit them.
  10. Click on the New button and add JAVA_HOME.
    Variable Name : JAVA_HOME
    Variable Value : give the path to the java directory as specified in Step 5.
    and save the variable.
  11. Click on the New button and add M2_HOME.
    Variable Name : M2_HOME
    Variable Value : give the path to the maven directory as specified in Step 5, and save the variable.
  12. Click on the New button and add MAVEN_HOME.
    Variable Name : MAVEN_HOME
    Variable Value : give the path to the maven directory as specified in Step 5.
    and save the variable.
  13. Next step would be to add the above defined variables to the “PATH” Variable. We defined the variables separately to make it easier to specify them in the PATH variable. Look for the PATH variable in the system variables list and select it and click on Edit.
  14. Once in edit mode you will see a pop up as below.
  15. In the pop up shown above, click on the value field, go to the end of it, append the text inside the quotes “;%JAVA_HOME%\bin;%M2_HOME%\bin”, and click OK.
  16. You have successfully configured JAVA and MAVEN.
  17. To configure Git, open up the PATH variable and append the Git bin and cmd directory locations as shown below (the directories I have used are mine and may differ for others): ;C:\Users\shiva\AppData\Local\Programs\Git\bin;C:\Users\shiva\AppData\Local\Programs\Git\cmd.
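To confirm that the PATH changes took effect, open a new Command Prompt window and check each tool; the exact output will vary with the versions installed.
java -version
mvn -version
git --version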

These are the steps to configure Maven, Java and Git. I have broken them down to baby steps for clearer understanding.


Ushani BalasooriyaConfiguring Salesforce outbound provisioning with WSO2 IS

Provisioning is a simple way to provision users into different domains with the new Identity Provisioning framework.
This example explains how to configure Salesforce as the Identity Provider to provision users from WSO2 Identity Server.

Configure Salesforce :

  • Following is a screenshot of a connected app created to configure WSO2 IS.
Figure 1 : Connected App


  • Once you create the connected app, you will be getting the Consumer Key and the Consumer Secret of the app.
Figure 2 : keys


  • Next you should attach your connected app to the profile you are going to assign when you add users into Salesforce.
  • This can be viewed in the Manage Profiles section of the setup page. When you click on it, it will list the existing profiles.
Figure 3 : Profiles


  • As an example, if we are going to use the profile “Chatter Free User” click on edit mode and select the connected app you created to configure with WSO2 IS as given in the following screen.
Figure 4 : Profile and select the connected app 

  • Now we have done the required configurations needed in Salesforce side.

Configure WSO2 IS :


  • This feature is introduced with WSO2 IS 5.0.0.
  • The Salesforce user login is an email address. Therefore you need to configure WSO2 IS to enable email addresses for user login. To do that, follow the steps below:
If the user store is MySQL :

  • Step1 : Open carbon.xml in IS_HOME/repository/conf and uncomment
 <EnableEmailUserName>true</EnableEmailUserName>  

  • Step2 : Open user-mgt.xml in IS_HOME/repository/conf, uncomment the JDBC user store configuration (org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager) and comment out the default LDAP user store manager configuration:
 org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager  

  • Step3 : Please add the following property under the following configuration.
 org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager   


 <Property name="UsernameWithEmailJavaScriptRegEx">^[_A-Za-z0-9-\+]+(\.[_A-Za-z0-9-]+)*@[A-Za-z0-9-]+(\.[A-Za-z0-9]+)*(\.[A-Za-z]{2,})$</Property>  

* Using the above property, you can change the allowed pattern of the email address.

  • Step4 : Restart the server

Configure Identity Provider in WSO2 IS :

  • Now you have to first register Salesforce as an Identity Provider. In order to do that, install the WSO2 IS server and start it up. Then go to the home page, click on Add Identity Provider, register the identity provider and save it (e.g., "Salesforce.com").
  • Then click on the IDP and provide the basic information as given in the following image.
Figure 5 : Create IDP in WSO2 IS

  • Then you have to fill in the basic information as given in the screenshot.
Figure 6 :  Basic Information - Claims

  • Claim mapping should be done for the following mandatory fields. Alias, Email, EmailEncodingKey, LanguageLocaleKey, LastName, LocaleSidKey, ProfileId, TimeZoneSidKey, Username
  • Advance configurations can be filled in as follows.
Figure 7 : Advance Configurations

  • Then click on the Outbound Provisioning Connectors section and configure it for Salesforce as below:
Figure 8 : Configure Salesforce Connector

  • Values :
      - API version : Salesforce API version
      - Domain Name : Your developer environment domain URL
      - Client ID : Client ID got from the Connected app which is created
      - Client Secret : Client secret got from the Connected app created
      - username : Username of your salesforce developer account 
      - password : This should be the password followed by the security token received by the email.
E.g., <password><security token> (passwordJYn8OLa9pC8CbQWrepGQpxxcu)



Configure Service Provider in WSO2 IS :


  • If you are going to use WSO2 IS user management console to add users, you can configure the resident service provider as the service provider in WSO2 IS. Following is an illustration.
Figure 9 : Configure Resident Service Provider


  • Select the IDP configured and select the salesforce as the connector from the drop down and save it.



Add users in WSO2 IS :


  • This is the normal process of adding users through the WSO2 IS administration UI. The user should provide an email address as the username.
  • Create a user via UI and check whether the user is provisioned to Salesforce as follows. You will be able to see the users added.
Figure 10 : Provisioned users in salesforce



Via SCIM and OAuth bearer token:

  • If you are going to add users via SCIM and OAuth, you will need to add a service provider in WSO2 IS and configure it for the added IDP and Salesforce connector as below.
Figure 11 : Configure Service provider for SCIM and OAuth bearer token

  • If you use SCIM you have to select the correct User Store Domain under Resident IDP -> Inbound Provisioning Configuration
  • Sample Requests :

Add User Via SCIM :


 curl -v -k --header "Content-Type:application/json" --user ushani@wso2.com:password --data '{"schemas":["urn:scim:schemas:core:1.0"],"userName":"sfuser24@wso2.com","password":"ushanisf25","name":{"familyName":"Ushanisf24"},"emails":["sfuser24@wso2.com"],"entitlements":[{"value":"00e90000001STRnAAO","display":"ChatterFreeUser"}]}' https://localhost:9463/wso2/scim/Users


Via bearer token


 curl -v -k --header "Content-Type:application/json" --header 'Authorization: Bearer c648fcae8b7b75e7b3287e31d5886e3' --data '{"schemas":["urn:scim:schemas:core:1.0"],"userName":"ushani002@scimdemo.org","password":"ushani0012","name":{"familyName":"Ushani12"},"emails":["ushani12@gmail.com"],"entitlements":[{"value":"00e90000001P171","display":"ChatterFreeUser"}]}' https://localhost:9463/wso2/scim/Users


Aruna Sujith Karunarathna[WSO2] Adding tenants using Admin Services - Sample Code

Adding tenants using admin services is straightforward. You have to use two admin services: 1. AuthenticationAdminService and 2. TenantMgtAdminService. The AuthenticationAdminService is used to authenticate the user and get the session. Below is a sample code for adding a tenant.
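As a rough sketch of that flow (the stub package names, bean setters and admin credentials below are assumptions and may differ between Carbon versions), the client could look like this:

import java.util.Calendar;

import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.authenticator.stub.AuthenticationAdminStub;
import org.wso2.carbon.tenant.mgt.stub.TenantMgtAdminServiceStub;
import org.wso2.carbon.tenant.mgt.stub.beans.xsd.TenantInfoBean;

public class TenantCreator {
    public static void main(String[] args) throws Exception {
        String backendUrl = "https://localhost:9443/services/";

        // 1. Authenticate via AuthenticationAdminService and capture the session cookie
        AuthenticationAdminStub authStub =
                new AuthenticationAdminStub(backendUrl + "AuthenticationAdmin");
        authStub._getServiceClient().getOptions().setManageSession(true);
        authStub.login("admin", "admin", "localhost"); // assumed super admin credentials
        String cookie = (String) authStub._getServiceClient().getLastOperationContext()
                .getServiceContext().getProperty(HTTPConstants.COOKIE_STRING);

        // 2. Call TenantMgtAdminService reusing the authenticated session
        TenantMgtAdminServiceStub tenantStub =
                new TenantMgtAdminServiceStub(backendUrl + "TenantMgtAdminService");
        tenantStub._getServiceClient().getOptions()
                .setProperty(HTTPConstants.COOKIE_STRING, cookie);

        // 3. Build the tenant details and add the tenant
        TenantInfoBean tenant = new TenantInfoBean();
        tenant.setTenantDomain("example.com");
        tenant.setAdmin("admin");
        tenant.setAdminPassword("admin123");
        tenant.setEmail("admin@example.com");
        tenant.setFirstname("Tenant");
        tenant.setLastname("Admin");
        tenant.setActive(true);
        tenant.setCreatedDate(Calendar.getInstance());

        tenantStub.addTenant(tenant);
    }
}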

Isuru PereraLinux Performance Observability Tools

I am learning about Linux Performance Tools and I find Brendan Gregg's talks on Linux Performance very interesting.

There are so many performance tools for Linux. Brendan recommends following a performance analysis methodology to analyze system or application performance. These methodologies can guide us to choose and use these performance tools effectively.

Linux Performance Observability Tools



There are different types of command line tools available in Linux. In this blog post, I'm going to focus on Linux Performance Observability Tools. I highly recommend watching Brendan's talk at Velocity 2015 on Linux Performance Tools; I took the details about the following tools from his presentation and his website.


http://www.brendangregg.com/Perf/linux_observability_tools.png
Linux Performance Observability Tools
Taken from Brendan Gregg's Website: http://www.brendangregg.com/Perf/linux_observability_tools.png

Here are some examples of using Linux Performance Observability Tools in Ubuntu. I tested each of these commands in an Ubuntu Trusty Vagrant Box.

Basic Observability Tools



# Print load averages
uptime


# System and per-process interval summary. 
# It's important to note that the top can miss short-lived processes.
# See 30 Linux TOP Command Examples With Screenshots
top


# htop is an interactive process viewer and you need to install it.
sudo apt-get install htop
htop


# Process status listing
ps -ef
# ASCII art forest
ps -ef f


# Virtual Memory Statistics
vmstat
# Show stats in Megabytes and update every second
vmstat -SM 1


# Block I/O (disk) stats. The iostat tool is in 'sysstat' package.
sudo apt-get install sysstat
iostat
# Display extended statistics in megabytes per seconds.
# This also shows device utilization and omits the inactive devices
# during the sample period.
iostat -xmdz 1


# Report multi-processor statistics.
mpstat
# per-CPU stats
mpstat -P ALL 1


# Main memory usage in megabytes.
free -m



Intermediate Observability Tools



# Trace system calls and signals. This is not recommended in production.
strace
# Trace system calls in a process.
# Prints the time (us) since epoch (-ttt) and syscall time (-T).
# Need to use sudo to attach to the process.
sudo strace -tttT -p 3344


# Sniff network packets for post analysis. 
# Using sudo to get permissions to capture packets on device
sudo tcpdump -i eth0 -w /tmp/out.tcpdump
# Read the dump
sudo tcpdump -nr /tmp/out.tcpdump


# Print network connections. See 10 basic examples of linux netstat command
netstat
# Network statistics by protocol
netstat -s
# Show both listening and not-listening sockets
netstat -a
# Show listening sockets
netstat -l
# Show the PID (-p) and the name of the program for TCP sockets (-t)
netstat -tp
# Kernel IP routing table
netstat -r
# Kernel Interface table
netstat -i


# Print network traffic statistics. You need to install 'nicstat' package.
sudo apt-get install nicstat
nicstat


# Process Stats
pidstat
# Process Stats by thread
pidstat -t
# Process Stats by disk I/O
pidstat -d


# Show swap usage
swapon -s
# Show swap usage in verbose mode
swapon -v


# List open file. Can be used as a debug tool
lsof
# Show active network connections
lsof -iTCP -sTCP:ESTABLISHED


# System Activity Reporter. 
# Before using sar, we need to enable data collection.
# See:
# Simple steps to install and configure sysstat/sar on Ubuntu/Debian server
# 10 Useful Sar (Sysstat) Examples for UNIX / Linux Performance Monitoring
sar -q



http://www.brendangregg.com/Perf/linux_observability_sar.png
Linux Performance Observability: sar
Taken from Brendan Gregg's Website: http://www.brendangregg.com/Perf/linux_observability_sar.png





Advanced Observability Tools



# Socket Statistics. This is similar to netstat,
# but it can display more TCP and state information than other tools.
ss
# Socket Statistics. Show timer, processes and memory
ss -mop
# Show internal TCP information
ss -i


# Interactive Colorful IP LAN Monitor
sudo apt-get install iptraf
sudo iptraf


# Monitor I/O. A top-like tool
sudo apt-get install iotop
sudo iotop


# Kernel slab allocator memory usage
sudo slabtop


# Page cache statistics. 
# This tool is available on GitHub: https://github.com/tobert/pcstat
# and we need to use Go to build it.
# You can also download a binary from GitHub. Refer to the README for more information.
./pcstat testfile


# perf_events: Linux profiling with performance counters.
# This tool needs to be installed.
# I used perf command in previous blog post about Java CPU Flame Graphs.
# See Brendan's Linux Perf Examples.
# Perf Tutorial is also good resource to learn about perf.
sudo apt-get install linux-tools-common linux-tools-generic
# List perf event
sudo perf list


# tiptop: reads hardware performance counters
# and displays statistics about running processes, such as IPC, or cache misses.
# This tool was not available in Ubuntu 14.04 package repositories.
# Therefore I tried it on Ubuntu 15.04
sudo apt-get install tiptop
tiptop


# The rdmsr command reads a Model-Specific Register
# (MSR) value from the specified address.
# This tool is available in the "msr-tools" package.
sudo apt-get install msr-tools
# Brendan has developed some Model Specific Register (MSR)
# tools for Xen guests (eg, AWS EC2).
# Reading CPU temperature:
sudo rdmsr -p1 -f 23:16 -d 0x1a2




Summary


This blog post lists some Linux Performance Observability Tools. I have also linked man pages and some examples of using the tools.

As I mentioned, Brendan's presentations have more details on these tools and I just wanted to list those in one page for my own reference.

Madhuka UdanthaData validation

Data validation is the process of ensuring that a program operates on clean, correct and useful data. It provides certain well-defined guarantees of fitness, accuracy and consistency for user/stream/data input into an application. It can be designed using various methodologies and deployed in various contexts.


Different kinds of data validation

  • Data type validation
    This is carried out on one or more simple data fields, checking that a data field is consistent with the expected data type (such as number, string, etc.).
  • Range and constraint validation
    This checks that data falls within a minimum/maximum range, or evaluates a sequence of characters against a constraint.
  • Code and Cross-reference validation
    Code and cross-reference validation includes data type validation combined with one or more operations to verify the data against supplied data or a known look-up table.
  • Structured validation
    It allows for the combination of any of various basic data type validation steps, along with more complex processing steps. It can handle complex data objects.
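As a small illustration of the first two categories, here is a hedged Java sketch of simple type, range and constraint checks (the field names and pattern are made up for the example):

public class ValidationExamples {

    // Data type validation: is the input a number at all?
    static boolean isInteger(String input) {
        try {
            Integer.parseInt(input);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Range validation: the value must fall between min and max (inclusive).
    static boolean inRange(int value, int min, int max) {
        return value >= min && value <= max;
    }

    // Constraint validation: the field must match an expected character pattern.
    static boolean matchesCode(String code) {
        return code != null && code.matches("[A-Z]{2}-\\d{4}");
    }

    public static void main(String[] args) {
        System.out.println(isInteger("42"));         // true
        System.out.println(inRange(150, 0, 100));    // false
        System.out.println(matchesCode("LK-2015"));  // true
    }
}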

Shiva BalachandranQuick Note 6 : Fixing “Filename too Long” Error when checking out from GIT in Windows

I was new to the Windows environment after spending a year in Linux, so when I first tried to check out a repo in Windows, I encountered this error. Here is how to rectify it.

You should run the command

git config --system core.longpaths true

or add it to one of your git config files manually to turn this functionality on, once you are on a supported version of git (it looks like 1.9.0 and later).

This will fix the issue and allow you to checkout files with long names.


Sajith KariyawasamAuthenticating tenants and users in a web app deployed in WSO2 Application Server

WSO2 Application Server can be used to deploy and host standard web applications. WSO2 Application Server runs on top of the Carbon platform, which provides the user and tenant management features as well. If you want the web application you deploy (in super tenant mode) to include user authentication (user login), you can leverage the APIs provided by the Carbon platform. The relevant services are available as OSGi services, and you can do an OSGi lookup to obtain the required services.

Refer the code segment (jsp) below.

In the UI I have two text boxes to provide username and password.

If a domain name is not specified in the username, it is assumed to be a login of the super tenant or a super tenant user (hence the domain is set to carbon.super). For tenant admins and users, the relevant domain needs to be specified, and the relevant tenant's UserRealm is loaded using the method AnonymousSessionUtil.getRealmByTenantDomain.

For more about PrivilegedCarbonContext, refer here
     

<%@ page import="org.wso2.carbon.context.CarbonContext" %>
<%@ page import="org.wso2.carbon.context.PrivilegedCarbonContext" %>
<%@ page import="org.wso2.carbon.user.api.UserRealm" %>
<%@ page import="org.wso2.carbon.user.core.service.RealmService" %>
<%@ page import="org.wso2.carbon.user.api.UserRealmService" %>
<%@ page import="org.wso2.carbon.user.api.UserStoreException" %>
<%@ page import="org.wso2.carbon.user.api.UserStoreManager" %>
<%@ page import="org.wso2.carbon.core.util.AnonymousSessionUtil" %>
<%@ page import="org.wso2.carbon.registry.core.service.RegistryService" %>

<%! String removeTenantDomain(String userName) {
if(userName.contains("@")) {
String[] arr = userName.split("@");
return arr[0];
}
return userName;
}
%>

<%
String username = request.getParameter("username");
String password = request.getParameter("password");
String tenantDomain = "carbon.super";
boolean status = false;
if (username != null && username.trim().length() > 0) {
try {

PrivilegedCarbonContext carbonContext = PrivilegedCarbonContext.getThreadLocalCarbonContext();
RealmService realmService = (RealmService) carbonContext.getOSGiService(RealmService.class);
RegistryService registryService = (RegistryService) carbonContext.getOSGiService(RegistryService.class);

// If domain is specified
if(username.contains("@")) {
String[] arr = username.split("@");
tenantDomain = arr[1];

}
UserRealm realm = AnonymousSessionUtil.getRealmByTenantDomain(registryService,realmService,tenantDomain);
status = realm.getUserStoreManager().authenticate(removeTenantDomain(username), password);

} catch (Exception e) {
e.printStackTrace();
}
}
if (status) {
session.setAttribute("logged-in", "true");
session.setAttribute("username", username);
response.sendRedirect("login.jsp");
} else {
session.invalidate();
response.sendRedirect("login.jsp?failed=true");
}
%>

Sajith KariyawasamHow to do a URL encode in WSO2 ESB

I recently came across a requirement to send an XML payload in URL-encoded format to the backend. I was able to get it done with the WSO2 ESB script mediator.

The request to the ESB is as follows:
<request><user>Sajith</user></request>


The backend expects the request as follows:
http://localhost:8001/myservice?RequestXML=%3CRequest%3E%3CUser%3ESajith%3C%2FUser%3E%3C%2FRequest%3E


This was achieved by the following code (using the JavaScript method encodeURI).

<script language="js" description="">
mc.setProperty("uri.var.encodedBody", encodeURI(mc.getPayloadXML()));
</script>

<send>
<endpoint>
<http method="post" uri-template="http://localhost:8001/myservice?RequestXML={uri.var.encodedBody}"/>
</endpoint>
</send>

Ajith VitharanaLimit the size of the wso2carbon.log file

We can limit the size of the wso2carbon log file as below.

1. Open the log4j.properties file under <server_home>/repository/conf directory.

2. Find this log appender "log4j.appender.CARBON_LOGFILE=org.wso2.carbon.logging.appenders.CarbonDailyRollingFileAppender"
3. Change it as below.
log4j.appender.CARBON_LOGFILE=org.apache.log4j.RollingFileAppender
4. Add the following two properties under the RollingFileAppender.
log4j.appender.CARBON_LOGFILE.MaxFileSize=10MB
log4j.appender.CARBON_LOGFILE.MaxBackupIndex=20
Finally, it should look like below:
# CARBON_LOGFILE is set to be a RollingFileAppender using a PatternLayout.
log4j.appender.CARBON_LOGFILE=org.apache.log4j.RollingFileAppender
# Log file will be overridden by the configuration setting in the DB
# This path should be relative to WSO2 Carbon Home
log4j.appender.CARBON_LOGFILE.File=${carbon.home}/repository/logs/${instance.log}/wso2carbon${instance.log}.log
log4j.appender.CARBON_LOGFILE.Append=true
log4j.appender.CARBON_LOGFILE.layout=org.wso2.carbon.utils.logging.TenantAwarePatternLayout
# ConversionPattern will be overridden by the configuration setting in the DB
log4j.appender.CARBON_LOGFILE.layout.ConversionPattern=TID: [%T] [%S] [%d] %P%5p {%c} - %x %m {%c}%n
log4j.appender.CARBON_LOGFILE.layout.TenantPattern=%U%@%D [%T] [%S]
log4j.appender.CARBON_LOGFILE.threshold=DEBUG
log4j.appender.CARBON_LOGFILE.MaxFileSize=10MB
log4j.appender.CARBON_LOGFILE.MaxBackupIndex=20

Shiva BalachandranQuick Note 5 : WSO2 : Findout / Access / Lookup / Identify the name of the current file being processed by the VFS listener

I haven't been blogging lately, so I thought I'll start up again with short notes. Here is a quick tip on how to find out / access / look up / identify the name of the current file being processed by the VFS listener.

Use this, <property name="Processing_File_Name" expression="get-property('transport','FILE_NAME')" type="STRING"/>, to identify the name of the file being processed.

Once processed, the response or the output file can be renamed using the following:

<property name="transport.vfs.ReplyFileName" value="out_response.xml" scope="transport" type="STRING"/>

Thank you.


Chintana WilamunaUsing a local CA for certificate management in a multi tenant PaaS

On a multi-tenant middleware PaaS, a shared JVM can host multiple tenants. A tenant can do pretty much anything (within the security constraints, of course) inside his tenant. While a PaaS can be used in different scenarios and use cases, this discussion concentrates on using a PaaS inside an enterprise. Although the ideal scenario is that every application is developed and deployed on top of the common platform, reality is different. There are applications that are very expensive to rewrite. Integration first and refactoring later is a much more practical approach.

A simple solution for this certificate management problem is to have a local CA. You can then sign the certificates of external systems that use self-signed certificates with your root certificate, and only keep the root certificate in the server's trust store. This is an example diagram that shows the scenario we'll be discussing.

On a PaaS there are many components that can make outgoing HTTPS calls: Application Servers, API Manager, ESB and so on. For HTTPS calls to external systems, the public certificate of the external system should be installed in the trust store of whatever JVM is running the service. If a web app is making an external HTTPS call, the public certificate of the external system should exist in the trust store of the Application Server, regardless of what tenant that application is deployed into. As of this writing, Application Server (version 5.2.1 and before) uses a common trust store for the running JVM instance. Not all systems inside an enterprise use a valid CA-signed certificate, especially if it's a system designed for internal consumption.

By having a self service PaaS you allow different groups inside the organization to create/deploy applications without much supervision. That’s the goal anyway because you need the teams to move at their pace without getting in their way to do devops tasks.

When tenants make external HTTPS calls to systems with self-signed certs, you need to import those certs into the Application Server's trust store. This creates an administrative problem if you have a self-service PaaS running: every time you update a certificate or try to integrate with a system with self-signed certs, you have to rely on an administrator to install certificates into the server's trust store.

In my case, I generated a custom keystore called MyCompany.jks by going through this article and configured it in the Application Server. I'm using this instance as my external system with a custom self-signed cert. I'm following this article to have a local CA cert and have that installed into the trust store. Here's what the keystore configuration looks like,

As in the article, I first generated a root key with a password and then created a self-signed root certificate from it,

$ openssl genrsa -des3 -out rootCA.key 2048

$ openssl req -x509 -new -nodes -key rootCA.key -days 1024 -out rootCA.pem

Now we have a PEM file. We need to import this into the keystore of the 3rd party system (we're using Application Server for testing) so that once we import the signed cert it can establish the chain; otherwise you'll get an error like keytool error: java.lang.Exception: Failed to establish chain from reply. Before importing, we need to convert it to DER format

$ openssl x509 -outform der -in rootCA.pem -out rootCA.der

Import this cert to the keystore

$ keytool -import -alias rootca -keystore MyCompany.jks -file rootCA.der

You should also import this cert to the client-truststore where you’re hosting the application that make the external service call

$ keytool -import -alias rootca -keystore client-truststore.jks -file rootCA.der

Now we need to create a certificate signing request. You can find details of how to generate a certificate signing request and then installing the signed cert to the keystore. Use the following command to generate a CSR

$ keytool -certreq -alias mycompany -keyalg RSA -keystore MyCompany.jks

This will generate output like the one below. I'm going to save it to a file named mycompany.csr

-----BEGIN NEW CERTIFICATE REQUEST-----
MIIC4jCCAcoCAQAwbTELMAkGA1UEBhMCVVMxEzA…
...
-----END NEW CERTIFICATE REQUEST-----

Let’s sign this using our root cert

$ openssl x509 -req -in mycompany.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out mycompany.crt -days 500
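Optionally, the signed certificate can be checked against the root before importing it (a quick sanity check):

$ openssl verify -CAfile rootCA.pem mycompany.crt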

We have our CA signed certificate, we need to import this to the keystore

$ keytool -importcert -alias mycompany -keystore MyCompany.jks -file mycompany.crt

After keystore updates it should look like the following diagram,

Now that mycompany.crt is signed by the root CA, we can keep a single certificate in the trust store and talk to any external system whose certificate is signed by the same root. This avoids having to import multiple keys into the trust store every time a tenant wants to integrate with an external system that has a self-signed cert.

Dhananjaya jayasingheHow to setup networking for Virtualbox VMs in Mac

By default, when we configure a VirtualBox VM on Mac, we get networking with NAT. The configuration looks like below.




But with that I could not ping from my Mac to the Windows VM or vice versa. As usual I changed the connection to Bridged mode, but with no success. Then I contacted my networking friend "Dhanushka Ranasinge" and with his guidance I could get it done.

Following is the TIP.

For the VM
1. Change the network interface to "Host-Only Adapter" as in the below image.


2. Then start the VM and disable the Windows firewall.



Now you are in a STATE that you can PING from your MAC to WINDOWS.

Now we need to set up internet for the VM.

3. Shut down the VM and add a second network adapter for NAT.



With this NAT interface, your VM will be connected to the internet. Now it is all done. Start your VM and you should be able to connect to the internet and to ping from Mac to Windows and vice versa.
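For reference, roughly the same two adapters can be configured from the command line with VBoxManage while the VM is powered off (the VM name "WindowsVM" and the host-only network name "vboxnet0" are assumptions; check yours with VBoxManage list hostonlyifs):

VBoxManage modifyvm "WindowsVM" --nic1 hostonly --hostonlyadapter1 vboxnet0
VBoxManage modifyvm "WindowsVM" --nic2 nat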

sanjeewa malalgodaHow to avoid getting incorrect access tokens due to constraint violation when we have high load on token API(CON_APP_KEY violated).

Sometimes you may see following behavior when we have very high load on token API.
1. Call https://localhost:8243/token
2. Get constraint error.
{org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask} - Error occurred while persisting access token bsdsadaa209esdsadasdae21a17d {org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask}
org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Access Token for consumer key : H5sadsdasdasddasdsa, user : sanjeewa and scope : default already exists
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:194)
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.persistAccessToken(TokenMgtDAO.java:229)
at org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask.run(TokenPersistenceTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (WSO2_APIM.CON_APP_KEY) violated

3. Attempt to use the access token when calling an API but get HTTP Status-Code=401 (Unauthorized) Invalid Credentials error

This issue happens because our token storing logic is not a blocking call (this was implemented as an improvement to the token API, since persisting can block the token generation flow).
So we may have already returned a token to the client which has not yet been persisted. This happens only if a constraint violation occurs when we try to persist the token,
but by that time the token may already have been returned to the client.

If we set the token persisting pool size to 0, this issue will not occur; the user will immediately get an error (probably an internal server error) and the token will not be returned to the client.
See the following code block.

try {
    tokenMgtDAO.storeAccessToken(accessToken, oAuth2AccessTokenReqDTO.getClientId(),
            accessTokenDO, userStoreDomain);
} catch (IdentityException e) {
    throw new IdentityOAuth2Exception(
            "Error occurred while storing new access token : " + accessToken, e);
}

You can set the pool size as follows. By default it is set to 100.
wso2am-1.9.0/repository/conf/identity.xml

<JDBCPersistenceManager>
    <SessionDataPersist>
        <PoolSize>0</PoolSize>
    </SessionDataPersist>
</JDBCPersistenceManager>
 

This will resolve the issue.
 

Chintana Wilamuna"WSO2 is nowhere near the goal I set out to do for us - take over the world (of middleware!). But 10..."

“WSO2 is nowhere near the goal I set out to do for us - take over the world (of middleware!). But 10 years later, we’re now on a solid foundation to build WSO2 into a much stronger position in the next 10 years”

- @Sanjiva

Sanjiva WeerawaranaWSO2 at 10

Today August 4th 2015 is WSO2's unofficial official birthday - we complete 10 years of existence.

I guess its been a while.

Its unofficial because not a whole lot happened on the 4th of August 2005 itself. Starting a global set up like WSO2 had many steps - registering a company in Sri Lanka (in early July 2005 IIRC), registering a company in the US, getting money to the US company, "selling" the LK company to the US company etc. etc.. We officially "launched" the company at OSCON 2005 in Portland, Oregon the first week of August.

However, I gave a talk there on the 4th on Open Source and Developing Countries. The talk abstract refers to the opportunity that open source gives to "fundamentally change the dynamics of the global software industry".

That's what we've been up to for 10 years - taking on the enterprise middleware part of the software industry with open source and Sri Lanka as the major competitive weapons. We can't claim victory yet but we're making progress. Getting into nearly 20 Gartner Magic Quadrants and Forrester Waves as a Visionary is not a bad track record from zero.

This is of course only possible because of the people we have and the way we do things (our culture) that allows people to do what they do best and do it well. To me, as the person at the helm, its been an incredible ride to work with such awesome people and to have such an awesome work environment that births and nurtures cool stuff just as effectively as how well it chews and spits out stupid stuff and BS.

We're now somewhat sizable ... just about crossing 500 full time employees globally on August 1 this year. I am still (and will be for the next 10 years) the last interview for every employee .. no matter what level and no matter what country they're in (yeah that means Skype sometimes). I don't check for ability to do the job - its all about what the person's about, what they want to achieve in their life and how well I think will fit into our culture and approach and value system. I have veto'ed many hires if my gut feeling is that the person is not the right fit for us.

Here's a graph of how the team has grown:


(The X axis is the number of months since August 1, 2005.)

A key to our ability to continue to challenge the world by taking on audacious tasks is the "so what if we fail" mindset that's integral to our culture. Another part is being young and stupid in terms of not knowing how hard some things apparently are. When I started WSO2 I was 38 .. not that young but definitely stupid in my understanding of how hard it is/was to take on the world of IBM/Oracle owned enterprise middleware market and ultimately stupid about the technical complexities of the problems we needed to solve. BUT what has worked for us so far is the "so what if we fail part" being used by young people who are regularly put in the deep end to get stuff done. I am still utterly stupid about how hard certain things are supposed to be - and I love that. Most of us in WSO2 are very stupid that way - but we're not afraid to try nor are we afraid to fail. Shit happens, life goes on (oh yeah and then we all die anyway at some point .. so why not give it a shot). I have little or no respect to the "way things are done" or the "way things work" - we've challenged and re-envisioned almost every part of our business from the way a normal software company works and I'm very proud of my team for having done that over and over and over again. And I'm of course grateful that they still talk to me for all the grief I give them daily on various little to big aspects of every side of the company - from colors to cleanliness to marketing to architecture to pricing to paying taxes.

The amazing thing is after 10 years we've managed to become slightly younger as a company over time! How is that possible? This is the average age of employees of WSO2 over time (same X axis as above):


We apparently had some old farts (like me) hired at the beginning and then again a few more around 3 years in .. but since then the average age has hovered between 30 and 32! Not bad for a 10 year old company where very few people leave ...

To me the actual physical age is not the issue - after all I'm now 48 years old but I don't hesitate to think and act like a 25 year old either mentally or physically (come and play basketball with me and lets see who hurts more at the end). Its all about how you think and act and accept "experience". I view experience and assumption as things to question and assume as false until proven true in our context. That frustrates a lot of senior people but that is exactly what has allowed WSO2 to keep growing and keep challenging the world of middleware and getting to its front. I view any assumption (e.g. "this is the way others do it") as a likely point of failure until proven otherwise. My challenge is to keep WSO2 "young" - in thinking and in age as much as possible (without age discrimination of course). I love this Jeff Bezos quote:
If your customer base is aging with you, then eventually you are going to become obsolete or irrelevant. You need to be constantly figuring out who are your new customers and what are you doing to stay forever young.
Technology will never stop - it may be SOA, ESB, REST, CEP, Mashups, Cloud, APIs, IoT, Microservices, Docker, Clojure, NodeJS, whatever ... and more will come. We need to keep on top of every new thing that comes along, be the ones to create a bunch of these and still deliver real stuff that works.

If we as a team continue to challenge every assumption, continue to treat each other with respect but not fear, continue to fight for doing the right long term thing instead of hype-chasing then we will never lose.

WSO2 is nowhere near the goal I set out to achieve for us - take over the world (of middleware!). But 10 years later, we're now on a solid foundation to build WSO2 into a much stronger position in the next 10 years. Thank you to all the wonderful people who are still in WSO2 and to those that have moved on but did their part, for helping us get there. It's been my honor and privilege to lead this incredible bunch of crazies.

Jayanga DissanayakeBinding a Process to a CPU in Ubuntu

In this post I'm going to show you how to bind a process to a particular CPU in Ubuntu. Usually the OS manages the processes and schedules the threads; there is no guarantee which CPU your process will run on, as the OS schedules it based on resource availability.

But there is a way to specify the CPU and bind your process to it:

taskset -cp <CPU ID | CPU IDs> <Process ID>

The following is a sample to demonstrate how you can do that.

1. Sample code which consumes 100% CPU (for demo purposes)

class Test {
    public static void main(String args[]) {
        int i = 0;
        while (true) {
            i++;
        }
    }
}

2. Compile and run the above simple program

javac Test.java
java Test

3. Use 'htop' to view the CPU usage

In the above screenshot you can see that my sample process is running on CPU 2. But it's not guaranteed that it will always remain on CPU 2; the OS might assign it to another CPU at some point.

4. Run the following command; it will permanently bind process 5982 to the 5th CPU (CPU numbering starts at zero, hence index 4 refers to the 5th CPU).

taskset -cp 4 5982


In the above screenshot you can see that 100% CPU usage is now indicated on CPU 5.
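If you want the demo program to print its own process ID so that you can pass it straight to taskset, a small variation of the sample class can do that. This is just a convenience sketch; it assumes a HotSpot-style JVM where the runtime MXBean name has the form "pid@hostname".

import java.lang.management.ManagementFactory;

class Test {
    public static void main(String args[]) {
        // On most JVMs the runtime name looks like "<pid>@<hostname>",
        // so the part before '@' is the process ID to pass to taskset.
        String runtimeName = ManagementFactory.getRuntimeMXBean().getName();
        System.out.println("PID: " + runtimeName.split("@")[0]);

        int i = 0;
        while (true) { // busy loop to keep one CPU at 100% for the demo
            i++;
        }
    }
}

With this you can run 'taskset -cp 4 <printed PID>' directly, without looking the process up in htop first.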

Danushka FernandoWSO2 App Factory - Developing New Application Type


This post will describe how to use extension points to develop an application type. In [1] you can find a step-by-step guide to adding a new application type and its runtime. In this post I will explain the following.
  • Adding new Application Event Handlers
  • Maven Archetype creation for App Factory Applications
  • Extending the Application Type Processor Interface
  • Extending Deployer Interface

Adding new Application Event Handlers

You can find the interface ApplicationEventsHandler in [3]. Let's say you need to do some special work for your application type (like creating a database when an application of your apptype is created); you can extend the methods of this interface to do that. For example, we create a datasource using a handler when we create a Data Service type application. You can find that code in [4].
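To make the idea concrete, here is a minimal, self-contained sketch of such a handler. The names below (AppEventsHandler, onCreation, DatabaseProvisioningHandler) are simplified stand-ins invented for illustration; the real ApplicationEventsHandler interface in [3] defines its own methods and signatures, so treat [3] and [4] as the authoritative reference.

// Simplified stand-in for org.wso2.carbon.appfactory.core.ApplicationEventsHandler [3].
interface AppEventsHandler {
    void onCreation(String applicationId, String tenantDomain) throws Exception;
    int getPriority(); // invocation order; the real priority is set in the AppFactory configuration
}

// Hypothetical handler that provisions a database when an application is created,
// similar in spirit to the datasource handler in [4].
class DatabaseProvisioningHandler implements AppEventsHandler {
    public void onCreation(String applicationId, String tenantDomain) throws Exception {
        // create the database / datasource needed by this application type
        System.out.println("Provisioning database for " + applicationId + " in " + tenantDomain);
    }

    public int getPriority() {
        return 50; // made-up value; in practice the priority comes from the configuration described below
    }
}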

After you develop your handler, package it as an OSGi bundle and deploy it to the following location on the AppFactory server.

$APPFACTORY_HOME/repository/components/dropins

You can then control the order in which your handler runs by setting the required priority in the AppFactory configuration, which can be found at the following location.

$APPFACTORY_HOME/repository/conf/appfactory/appfactory.xml

In this configuration file you can find a tag named EventHandlers. You can insert a new entry there with your handler's class name and set the desired priority value, as is done for the other handlers.

Figure 1 : Event Handlers Priority Settings

Maven Archetype creation for App Factory Applications

In WSO2 App Factory, when we create an application we generate sample code for that application. This is done using Maven archetypes. We have created a Maven archetype for every apptype. Maven archetypes can be created for non-Maven applications as well; for example, even the .NET apptype has a Maven archetype although it is built using MSBuild.

[5] describes how to create a Maven archetype, and you can also take some hints from my previous post [7] on Maven archetype creation. When we create an archetype for WSO2 AppFactory we have to add a few more resources to it, for the initial deployment. This initial deployment phase was introduced to reduce the time spent creating an application. What happens is that we include built artifacts in the Maven archetype, bundle them using the assembly plugin and deploy the created artifact [6].

Figure 2 shows the tree structure of the web application archetype. I use this to explain the concept since it is the simplest and most common structure. Here, the archetype-resources folder contains the content of the archetype. Apart from the src directory you can see there is a directory named built_artifact. This is where we keep the built artifacts used to create the initial artifact with the Maven assembly plugin. The assembly.xml and bin.xml placed at the same level are used to run the Maven assembly plugin and create the initial artifact.

Figure 2 : Tree structure of Web app archetype

You can see the assembly.xml and bin.xml here. When we run the following command on this folder it creates a .war file with the content of the built_artifact folder and copies it to a folder named ${artifactId}_deploy_artifact, which is located one level outside the application code. In the initial deployment phase this artifact is picked up and deployed.

mvn clean install -f assembly.xml

Figure 3 : bin.xml

Figure 4 : assembly.xml

Figure 5 shows the structure of the Maven archetype for the ESB apptype, which is the most complex apptype created in WSO2 AppFactory so far. Here you can see that there is no assembly.xml or bin.xml at the top level; instead there is a pom.xml inside the built_artifact folder, with a bin.xml at the same level. So we run the following command inside the built_artifact folder; how this extension is done is described later in this article.

mvn clean install

Figure 5 : Tree structure of ESB maven archetype


Extending the Application Type Processor Interface

The Application Type Processor interface can be found at [8]. AbstractApplicationTypeProcessor is an abstract implementation of the interface and is the recommended one to use [9], but there could be cases where the interface needs to be implemented from scratch. This processor is triggered in several places to do something specific to an application type.

When an application is created and its repository gets initialized [2], the generateApplicationSkeleton method is invoked. [9] calls a method named initialDeployArtifactGeneration which is not declared in the interface, but anyone extending [9] can override it. This is the behavior I mentioned in the earlier section about initial artifact generation for the ESB apptype.

Extending Deployer Interface

The Deployer interface [10] is used to deploy the artifacts to the PaaS artifact repository; the underlying PaaS is then responsible for copying them to the actual running server instance. If you need to do something special for your apptype you can use this interface, but again the recommended class to extend is AbstractDeployer [11]. The most common use of this is changing the artifacts that you choose to deploy: you can override the getArtifact method in [11] so that it selects the artifacts to deploy according to your own algorithm.
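As a rough illustration of the idea, the sketch below shows artifact selection being overridden. This is not the real AbstractDeployer API; both the base class and the getArtifact signature here are simplified stand-ins, so check [11] for the actual method to override.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for org.wso2.carbon.appfactory.deployers.AbstractDeployer [11].
abstract class SimpleDeployer {
    // Default behavior: deploy everything found in the build output.
    protected List<File> getArtifact(File buildOutputDir) {
        List<File> artifacts = new ArrayList<File>();
        File[] files = buildOutputDir.listFiles();
        if (files != null) {
            for (File f : files) {
                artifacts.add(f);
            }
        }
        return artifacts;
    }
}

// Apptype-specific deployer that only picks CAR files, as an example of
// plugging in your own artifact-selection algorithm.
class CarOnlyDeployer extends SimpleDeployer {
    @Override
    protected List<File> getArtifact(File buildOutputDir) {
        List<File> artifacts = new ArrayList<File>();
        for (File f : super.getArtifact(buildOutputDir)) {
            if (f.getName().endsWith(".car")) {
                artifacts.add(f);
            }
        }
        return artifacts;
    }
}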

The next thing is initial artifact deployment. As I explained earlier, WSO2 AppFactory deploys an initial artifact generated directly from the archetype-generated code. This is usually done by the class InitialArtifactDeployer [12]. If you want to change some behavior of this class you can extend it [12], add it to WSO2 AppFactory and reference it in the apptype definition (apptype.xml) under the parameter name "InitialDeployerClassName".

References

[1] Adding a new App Type and its Runtime
[2] WSO2 App Factory - Life Cycle of an application
[3] ApplicationEventsHandler.java (org.wso2.carbon.appfactory.core.ApplicationEventsHandler)
[4] DSApplicationListener.java
[5] Guide Creating Archetypes
[6] Maven Assembly Plugin
[7] How to include artifactid in folder name or content of a file
[8] ApplicationTypeProcessor.java (org.wso2.carbon.appfactory.core.apptype.ApplicationTypeProcessor)
[9] AbstractApplicationTypeProcessor.java
[10] Deployer.java (org.wso2.carbon.appfactory.core.Deployer)
[11] AbstractDeployer.java (org.wso2.carbon.appfactory.deployers.AbstractDeployer)
[12] InitialArtifactDeployer.java (org.wso2.carbon.appfactory.deployers.InitialArtifactDeployer)

Danushka FernandoWSO2 App Factory - Life Cycle of an application

This post tries to explain the general life cycle of an application in WSO2 AppFactory. WSO2 AppFactory is a place where multiple project teams can collaboratively create, run and manage enterprise applications [1]. With WSO2 AppFactory users can create a complex application and push it to production within a matter of a couple of hours.

Figure 1 - App Cloud Create Application Page

If you go to WSO2 AppFactory and create an application, it will perform several operations that generate the resources needed for your application, such as:
  • Create Repository
  • Generate Sample Code for the Application
  • Create Build Job
  • Create Issue Tracker Project
  • Deploy the initial artifact into the PaaS artifact repository
  • etc..
When you hit create application, it creates an instance of the application RXT installed in WSO2 AppFactory and then calls a BPEL which is hosted in the BPS. Anyone can edit this BPEL to add their own workflow into it. Currently it simply invokes the AF service to trigger the on-creation event of the Application Event Handlers. There is a set of Application Event Handlers registered in AF. You can develop a new Application Event Handler class, add it to an OSGi bundle, copy it to the following location and start AF, and the new handler will get invoked.

$APPFACTORY_HOME/repository/components/dropins 

You can configure the order of the handlers by setting each handler's priority in the AppFactory configuration placed in the following location.

$APPFACTORY_HOME/repository/conf/appfactory/appfactory.xml

I will explain this in detail in my next post [2].

These handlers are responsible for the operations mentioned above. Figure 2 shows the flow of this application creation process.


Figure 2 : Create Application Flow

When the application is created, a trunk version is created in the repository. The next step is to create a new version from the trunk version in order to promote it to the next stages.

Figure 3 : Repositories and Builds page

When you click create branch it triggers the same kind of flow and creates the new version and new build jobs.

Figure 4 : Flow of creating a new version in an Application

Once you create a new version you can promote it to the next stages (Development -> Testing -> Production). When you promote a version, its artifacts are deployed to the next stage.

[1] http://wso2.com/cloud/app-factory/
[2] http://wdfdo1986.blogspot.com/2015/07/wso2-app-factory-developing-new.html

Danushka FernandoWSO2 AppFactory - Using ESB Apptype

The next version of WSO2 AppFactory (2.2.0) is going to introduce a new apptype: the ESB apptype. With this apptype, users will be able to develop WSO2 ESB CApps (CAR files) with WSO2 AppFactory. In this article I will give you guidelines to follow when you are developing a CApp using WSO2 AppFactory.

The ESB apptype is the first multi-module apptype supported by WSO2 AppFactory. The sample project contains 4 modules, as listed below.

  1. Resources module
  2. Resources CAR module
  3. Synapse Config (Proxy Service) module
  4. Main Car module

Development

There are several rules that developers should follow when developing an ESB type application. They are as follows.

  • Developers can add any number of modules to the project, but there should always be exactly two CAR modules: the resources CAR and the main CAR, which contains all the synapse configs.
  • All synapse config names should contain the version number, like foosequence-1.0.0, and the synapse config file name should also contain the version number in the same manner.
  • All module names should start with the application ID. This rule prevents artifact conflicts between applications; if two applications contained artifacts with the same name they could conflict.
  • The main CAR module's artifact ID should be similar to the application ID.

It is recommended to use WSO2 Developer Studio to develop a WSO2 AppFactory ESB type application. Developer Studio will validate the project structure and help the developer follow the above rules. If someone edits the project by some other method, the required structure is checked before the commit is accepted, and the commit is accepted or rejected accordingly.

LifeCycle Management and Resources Management

When the ESB application is promoted it will still keep using the development endpoints/resources that are defined in the resources CAR module, so the users in the next stage (QA/DevOps) will need to update this resources CAR. For this there will be a UI to upload a resources CAR for ESB applications. QAs and DevOps need to check out the code from the source code location mentioned on the Application Home page, edit the registry resources and endpoints to match their environment, build it, and upload their resources CAR. It will then get deployed and the main CAR will switch its endpoints to the new ones.


Lali DevamanthriGPU Research Center at the University of Peradeniya

The University of Peradeniya is recognized as a GPU Research Center by NVIDIA Corporation for the GPU-based High-Performance Computing Research carried out at the Department of Computer Engineering.

The major focus of the research center is the investigation of the High-Performance Computing (HPC) aspect of GPU in various domains, including bio-computing, computer security, machine learning and data-mining, and physics.

The GPU Research Center at the University of Peradeniya is part of the Embedded Systems, and Computer Architecture Lab (ESCAL) at the Department of Computer Engineering, supervised by Dr. Roshan Ragel.


Senaka FernandoThe Polarizer Pattern

A polarizer is a filter used in optics to control beams of light by blocking some light waves while allowing others to pass through. Polarizers are found in some sunglasses, LCDs and also photographic equipment.

When it comes to managing an API there is often the need to control which bits of an API get exposed and which do not. However, this kind of control is generally done at a Gateway that supports API Management rather than at the back-end API, which may provide many more functions that never get exposed to a consumer.

A good example is a SOAP or REST service that is designed to support a web portal, which you also want to expose as an API so that 3rd parties can build their own applications based on it. Though your API may provide many functionalities in support of your web portal, you may not want all of these functionalities available to the 3rd party application developer, for one of many reasons. And, as in this example, you'd find that this pattern is most useful when you have an existing capability that is currently being used for some purpose, which also has to be exposed for another purpose but with restricted functionality.

While The Polarizer might be treated as a special kind of an Adapter, the key to differentiating the two in terms of API Management is how these two patterns would be implemented. While an adapter may expose a new interface to an existing implementation, with new capabilities or logic combining two or more existing capabilities, a polarizer will simply restrict the number of methods exposed by an existing implementation without altering any of its functionality. The outcome of The Polarizer may also be similar to a Remote Facade. But, unlike the remote facade, the purpose of a polarizer is not to expose a convenient and performance-oriented coarse-grained interface to a fine-grained implementation with an extensive number of methods; it is purely to restrict the methods from being accessible in a given context.
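As a simple illustration of that difference, a polarizer can be written as a thin wrapper that re-exposes only a subset of an existing implementation's methods without changing their behaviour. The class names below are purely hypothetical and not tied to any WSO2 API.

// Existing back-end capability with more operations than we want to expose.
interface AccountService {
    String getBalance(String accountId);
    String getStatement(String accountId);
    void closeAccount(String accountId);
}

// The "polarized" view: only a subset of operations passes through.
interface PublicAccountApi {
    String getBalance(String accountId);
}

// The polarizer simply delegates; unlike an adapter it adds no new logic,
// and unlike a remote facade it does not coarse-grain -- it only filters.
class AccountServicePolarizer implements PublicAccountApi {
    private final AccountService backend;

    AccountServicePolarizer(AccountService backend) {
        this.backend = backend;
    }

    public String getBalance(String accountId) {
        return backend.getBalance(accountId); // unchanged behaviour, restricted surface
    }
}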

The polarizer also fits alongside patterns such as Model-View-ViewModel (MVVM) and Model-View-Presenter (MVP). Unlike these patterns, which are designed to build integration layers connecting front-ends with back-ends, the focus of the polarizer is to control what is being exposed from a back-end without making any consideration in favour of a front-end. We may however find situations where an implementation of an MVVM or MVP pattern also performs the tasks of a polarizer.


The graphic below explains how The Polarizer pattern can be implemented by a typical API Management platform. In such an implementation, the Gateway component will simply polarize all incoming requests through some sort of filter, which may or may not be based on some configurable policy.


The WSO2 API Manager provides the capability to configure what API resources are being exposed to the outside world and thereby polarize the requests to the actual implementation. Polarization may not necessarily be a one-time activity for an API. You may decide on a later date to change what methods are exposed. The WSO2 API Manager allows you to do such reconfiguration via the Publisher portal.


sanjeewa malalgodaHow to generate custom error message with custom http status code for throttled out messages in WSO2 API Manager.

In this post we will discuss how we can override the HTTP status code of the throttled-out message. The APIThrottleHandler.handleThrottleOut method executes the _throttle_out_handler.xml sequence if it exists. If you need to send a custom message with a custom HTTP status code, we can execute an additional sequence which generates the new error message; there we can override the message body, HTTP status code and so on.

Create convert.xml with the following content.

<?xml version="1.0" encoding="UTF-8"?><sequence xmlns="http://ws.apache.org/ns/synapse" name="convert">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:code>$1</am:code>
                <am:type>Status report</am:type>
                <am:message>Runtime Error</am:message>
                <am:description>$2</am:description>
            </am:fault>
        </format>
        <args>
            <arg evaluator="xml" expression="$ctx:ERROR_CODE"/>
            <arg evaluator="xml" expression="$ctx:ERROR_MESSAGE"/>
        </args>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="555" scope="axis2"/>
    <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
    <property name="ContentType" scope="axis2" action="remove"/>
    <property name="Authorization" scope="transport" action="remove"/>
    <property name="Access-Control-Allow-Origin" value="*" scope="transport"/>
    <property name="Host" scope="transport" action="remove"/>
    <property name="Accept" scope="transport" action="remove"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <property name="messageType" value="application/json" scope="axis2"/>
    <send/>
</sequence>


Then copy it to the wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences directory or use the source view to add it to the synapse configuration.
If it is deployed properly you will see the following message in the system logs. Please check the logs and see whether there is any issue in the deployment process.

[2015-04-13 09:17:38,885]  INFO - SequenceDeployer Sequence named 'convert' has been deployed from file : /home/sanjeewa/work/support/wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences/convert.xml

Now that the sequence is deployed properly, we can use it in the _throttle_out_handler_ sequence. Add it as follows.

<?xml version="1.0" encoding="UTF-8"?><sequence xmlns="http://ws.apache.org/ns/synapse" name="_throttle_out_handler_">
    <sequence key="_build_"/>
    <property name="X-JWT-Assertion" scope="transport" action="remove"/>
    <sequence key="convert"/>
    <drop/>
</sequence>



Once the _throttle_out_handler_ sequence is deployed properly you will see the following message in the carbon logs. Check the carbon console and see whether there are any errors in the deployment.

[2015-04-13 09:22:40,106]  INFO - SequenceDeployer Sequence: _throttle_out_handler_ has been updated from the file: /home/sanjeewa/work/support/wso2am-1.6.0/repository/deployment/server/synapse-configs/default/sequences/_throttle_out_handler_.xml


Then invoke the API until requests get throttled out. You will see the following response.

curl -v -H "Authorization: Bearer 7f855a7d70aed820a78367c362385c86" http://127.0.0.1:8280/testam/sanjeewa/1.0.0


* About to connect() to 127.0.0.1 port 8280 (#0)
*   Trying 127.0.0.1...
* Adding handle: conn: 0x17a2db0
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x17a2db0) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 8280 (#0)
> GET /testam/sanjeewa/1.0.0 HTTP/1.1
> User-Agent: curl/7.32.0
> Host: 127.0.0.1:8280
> Accept: */*
> Authorization: Bearer 7f855a7d70aed820a78367c362385c86
>
< HTTP/1.1 555
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< Date: Mon, 13 Apr 2015 05:30:12 GMT
* Server WSO2-PassThrough-HTTP is not blacklisted
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked
<
* Connection #0 to host 127.0.0.1 left intact
{"fault":{"code":"900800","type":"Status report","message":"Runtime Error","description":"Message throttled out"}}

John MathonIIOT and IOT combined in an Airport Use Case


One of the most interesting use cases for IoT applications is around airports. WSO2 has been working with some airports worldwide and these are truly the most challenging environments for IoT and IIoT in the future. Imagine the airport of the future. What does the IIoT and IoT vision for them look like?

Airports have an enormous array of devices that are part of the infrastructure, such as sensors, valves, security systems of all types, plus a mixture of facility energy management, vehicles for transportation inside and around the airport, robot devices for baggage handling and other purposes, and authorization systems for personnel access. In addition it is conceived that airplanes themselves would be IoT devices. Like a car, you could imagine an API for accessing information about the plane for use by airport, traffic control and security personnel. In fact many of these things already exist.

IoT and IIoT are about much more than the devices themselves. They are about the network effect of having many devices connected. Uber demonstrated that a dramatic improvement was possible in an old business with a simple application of some IoT ideas. It is the combination of the network effect and the orchestration of connected devices that can make our world better.

In addition to the highly secure and industrial operational uses of IIoT, the airport of the future we would expect to be a hospitable place for consumers to access services and information that would make our travel experience vastly easier, smoother, less error prone.   Consumers would like to know information about plane arrival, baggage processing, find services in the airport or request services and have the service be able to contact the customer individually.   We would expect over time the airport and airlines would learn to interact with our devices to make this future better.


The airports are thinking of a highly IoT future in which both customers and the airport itself would have many novel devices to facilitate operations, efficiency and convenience for the customer in the future.  It is exciting to think of all the novel ways this might evolve and airports want to be ready for innovation and diversity in the future.

Here are 3 use cases I am going to go in depth about that I think you will find interesting both personally and from a technology perspective. They may provide fodder for your own ideas on how to make travel easier.

1) Judy is planning on taking a flight today and she is having a bit of bad luck.   Her friend Linda is picking her up.

2) A plane is arriving and needs some servicing before it can take off again.

3) Jonathon is having a medical emergency while at the airport.

I have purposely chosen use cases with more consumer focus although there are lots of interesting use cases around the operational aspects of the airport that are as complicated and involve many more devices.  Possibly this could be a second blog on this topic.

Use Case 1:  Judy is taking a flight – things don’t go well

Judy has an IoT watch with a silver band that has an airport app for her local airport.  Linda, her friend in Paris that is picking her up has a similar watch with an antique brass band.  The watch could just as easily be a phone or some other always connected device.  Judy is taking an international flight today from LAX to Paris.   Hours before the flight Judy is informed that weather is impacting the flights into and out of Paris.   She is given the option to reschedule her flight.


Backstory: These events can be major hassles and costs for people, airports and airlines.   The passenger should be given specific information as it is available of the possibility of cancellations, delays of their flight and recommendations to reschedule or delay if possible their flight.    In many cases airlines will waive fees in such scenarios to reschedule considering the costs of handling inconvenienced passengers.  Making the inconvenience and costs of these kinds of regular incidents less would be a very desirable goal.   This implies that the APIs of the airport and the airlines are in sync so that each is aware of situations and conditions that affect each other and are up to date always and show the same information.   Ideally this applies to all the vendors or other organizations who are involved in the airport whether for infrastructure vendors or outward customer servicing vendors.

The airport should have APIs that are accessible by applications on watches, phones, cars or other IoT devices that query airport status, known flight status, delays, potential delays for individual flights as well as general airport conditions. Data can be fed to these services automatically from the IoT devices at the airport as well as from airlines and other providers. If a security situation requires a person to leave the airport or avoid the airport, this could also be provided so that applications on Judy's or Linda's phone or watch can warn them of such situations. Today Judy receives a warning that the airport she is flying into has a potential weather situation that may affect her flight. She is on alert to see if her flight will be cancelled. The flight is not cancelled before she leaves for the airport.

As the flight time comes closer Judy is automatically asked to check in if she hasn’t done so already.  

If the flight destination requires special visas, or has other restrictions she will be informed automatically before she heads to the airport.   The airport has also told her that congestion at the airport is high and knowing her current location it is able to tell her the latest she can leave from where she is right now to get to the airport in sufficient time.


Judy determines that she will take the chance and go to the airport.   She leaves when the watch suggests giving her time to allow for the congestion at the airport.   She also approves her friend Linda to get updates on her status and location.

As she approaches the airport she is automatically informed of which parking garages are full, which are available.  If she has an electric car the system may be able to tell her what charging ports are open.   The system will tell her which terminal her flight is leaving from and the status of the plane arriving to LAX.

All of these things require personalization of the app so Judy needs to tell the app her preferences and specific things she would like.  She needs to tell if she is traveling by car or mass transit and many other things.  She may walk slowly and want the app to give her extra time to get places.   She may want an audible alarm in addition to a buzz on her wrist or phone.


When she parks her car and enters the airport her watch synchronizes with the airport and establishes a secure channel to her device and application on her device.

Knowing her airline for her first segment Judy’s IoT device informs her as she enters the terminal that she can use desks 112-120 to check in for her flight.


Her IoT device communicates with the airport to tell the airport her location and this is relayed as well to the friends or others she has designated to share this information with.  When she gets to the checkin desk the system automatically brings up her flight record for the airline agent because the system has been tracking her. The agent informs her that the problem in the destination city has gotten better but that her flight may still be delayed.


Judy is issued electronic tickets to her airport watch app and her luggage is identified with her watch identity through attached NFC devices at the airport.   Judy is authorized to go to the upper class waiting area as she may have to spend extra time at the airport.  


She can use her phone or watch’s identity fingerprint reader to go directly through the security system without having her identity checked manually.   She puts her bag through security and the airport knows she has gone through security successfully.    


So far Judy’s path has been highly streamlined and made more efficient both for her, the airline and the airport in a difficult situation. There is enhanced security in knowing where Judy is and through enhanced biometric data.

As Judy is in the airport her watch app tells her of the status of the airplane and knowing her location and boarding priority it can estimate when she needs to head to the gate to board.    

Judy has ordered several presents from duty-free to take with her to her friend.   Her watch app is informed she has duty free items to pick up.  When they are ready she is told that she needs to go to duty free to pick up her products and directed how to get there.


Similarly, she is led to her gate via whatever buses, trains or paths she needs to take.  When Judy gets to the gate agent her ticket is located in her watch passbook as well as her boarding priority so that she can simply walk through the gate at the right time and onto the walkway to her plane without needing a gate agent to handle her ticket.   If the plane requires an ID check before boarding she could simply use her biometric reader to bypass this.   

During her traversal of the airport Judy’s IoT device has determined her location and sent that information to the airport app but there are times when a more precise location is required and a NFC capability or similar functionality is required to provide short distance identification such as at various interactive devices, gates or ticket counters.


Judy is on the plane so it seems the worst part of her journey may be over.   She sees her luggage made it as well.  Unfortunately, this is not Judy’s lucky day and before departure a physical problem with the plane is detected.  Judy is informed the airline is preparing an alternate plane to take her.  She must de-board the plane and proceed to a different gate.  She is informed her seat will be slightly different because of the different configuration of the new plane.   Judy gets off the plane and proceeds to the new gate.  


She gets in her seat on the new plane similarly to the last time without having to carry a ticket; since her ticket is electronic, it knows her plane and gate have been reassigned. Judy left something on the first plane in her seat. Don't tell me you haven't done this.

Since the items have her identity on them the system knows where she is, the new plane and seat she is now located in.  The attendant simply dispatches an automated courier robot which picks up the item and proceeds to deliver it to Judy on her new plane.


Judy is very happy she didn’t end up forgetting the duty free items.  Whew!  

The airport knows that Judy and her luggage are together on the same plane.   However, when she boards the new plane her luggage cannot fit onto the smaller plane.  The baggage system transfers her luggage automatically to the next flight to that destination.

Judy is informed before takeoff that her luggage will be delayed and will be scheduled onto the next flight in 6 hours to that destination.   She is automatically registered as having missing luggage and is issued a kit of consumables to help her survive without her luggage.


Linda of course is aware of all this and texts Judy she regrets her baggage will be late and will be able to lend her some stuff too.    Judy’s plane finally takes off.   During the flight the special request she had made a week earlier from the airline web site for a Domaine Carneros La Reve Champagne was delivered to her.  This made the flight a lot more enjoyable.    She put on her virtual reality goggles and watched an engrossing version of an old movie called Avatar by James Cameron she had never seen.

To Judy’s relief the pilot was able to make up some time en-route but not enough to make it on-time.

Judy is able to see en-route through the planes IoT systems that are connected to the internet that her luggage did indeed make it onto the next flight.  She is able to tell the baggage system her location at her destination to send the luggage to.   This time as she exits the plane Judy doesn’t forget the duty free items.  :)

In the meantime Linda is aware the plane is delayed. Her watch app tells her that given congestion on the streets and at the airport, to arrive in time to pick up her friend she must leave 47 minutes prior to landing. This includes the time Judy would wait for baggage (in this case 0) and the traversal through passport control and customs at this time of day. The calculation is quite precise and Linda arrives at the airport knowing that Judy is out of customs and walking to door 7 within a few minutes of the expected time.


Linda is aware of Judy's progress at all times. The airport police, who push along drivers who don't move, ask Linda to leave. Linda shows them her watch app, which shows that her friend is en-route, has her baggage and is within a minute of exiting the building. The police allow her to loiter for her friend.


Judy pops out into the miserable weather but into the happy arms of her friend's embrace, each knowing exactly where the other is.

They drive out of the airport and the airport disengages and sends Judy a welcome message to her new city and any information that they might deem useful to her including traffic problems or other security information.


Use Case II:    Flight 909 is a long haul flight from LAX to Paris.   Flight 909 is sitting at the gate waiting for the automated baggage and other systems to finish loading the plane up.

The plane itself is a giant set of IoT devices: entertainment systems, baggage, GPS, food systems and of course the operation and controls of the plane itself. All of these systems have telemetry to report to the airline, airport and various service providers. When the plane landed, the airport personnel and vendors already knew what consumables had been used during the last flight and what special needs there were for the new flight so that it could be stocked appropriately. The plane knew for instance that one of the passengers wanted a special champagne to be available for this flight.

In the final checkup of the plane by inspectors at LAX they determine that the plane has a defect which will not allow it to fly to the destination immediately.   This is a rare occurrence because the plane normally detects these situations in flight and informs them prior to arrival at the destination.  The decision is made that a different plane can be made available.

A special complex orchestration is initiated which automates much of the work needed to transfer to the new airplane, including the robots needed to move the luggage or help in the movement of the airplanes. Appropriate paperwork has to be filed with the tower and flight agencies. Flight plans are updated, passengers' seats are reassigned, and the new plane is requisitioned. As soon as this decision is final all passengers on the flight are informed of the need to disembark and go to the new gate.

Some passengers who will miss connections due to the delay are automatically informed of alternative routes and times.  They are automatically reassigned to new routes after the passenger agrees to the new routing. Passengers that will have to spend the night in this location are automatically told which hotels to go to and given directions.  The hotel is booked for them and the room paid for.   If they have special assistance needs an autonomous vehicle is dispatched to pick them up at the gate and transport them to the place they need to go.

During the transfer to the new smaller plane in this instance some luggage does not make it.   The passengers with displaced luggage are informed and the luggage is immediately routed to the new plane by the automated baggage system robots when the appropriate time is reached.

The new plane is boarded and as the plane goes through its final manifest and route approval all of the process is automated to make the rerouting seamless and fast.

As the new plane leaves the gate all the IoT devices on the Tarmac are automatically reassigned to other duties.  Each device on the tarmac and other areas of the operational area of the airport is instrumented so that it can be automatically fetched and dispatched where it is needed.  Each device has a health status so that any abnormal behavior or service needed is handled in advance as much as possible.

Autonomous operation is a much easier job at an airport with very limited paths and destinations.

Every door and access has biometric reading to validate personnel are allowed in that area of the airport.   If any device is commandeered and moved outside of a geo-fence it becomes de-activated and immediately all data is wiped from the device as needed and the system is informed.

In flight

The plane takes off and telemetry from the plane is transmitted constantly to various authorized consumers.    The consumers may be airlines and airplane manufacturers, service vendors who need to understand the operation of the plane itself, detailed location information for air traffic control, less detailed information for other parties.  Consumables are recorded as they are used.   Most important any safety or mechanical trouble is reported instantly along with logs.  Cockpits are instrumented to provide visual confirmation of the condition of the cockpit and can be remotely operated in case the pilots become unable to fly the plane.

En-route the airplane detects an asymmetrical weighting on one wing. A camera captures the image of an ape-like man who is tearing at the wing. Such things occur regularly so the pilot authorizes electrifying the wing, causing the gremlin to jump off the plane's wing. Soon after that the airplane detects that a small condition in one engine needs to be looked at. 3 parts are involved and this information is relayed to suppliers so that the parts can be made available, if needed, the instant the plane lands at the gate in Paris.

Use Case III:  Jonathon is having a medical emergency while at the airport.

In Paris in terminal one a passenger is feeling ill. He has a condition called COPD, which is a progressive disease. He has been issued a medical monitor armband which detects conditions when Jonathon may be in danger. As Jonathon is walking to his next flight he feels light-headed.

His monitor detects that Jonathon's blood has a spectrum indicating insufficient oxygen is being consumed, along with an acceleration in his heart rate and increased perspiration, and with little physical movement it concludes something is wrong. The monitor broadcasts to nearby IoT devices that there is a person in distress. These signals are relayed to the airport's emergency medical technicians who are dispatched to the exact location of Jonathon within the terminal within 2 minutes. An autonomous medical vehicle is also routed in case it is needed.

The technicians are able to help Jonathon with some simple medications which help him breathe better and ensure that if he does have a heart attack his heart will have minimal damage. Within 20 minutes Jonathon is able to proceed on his journey after the EMTs determine he is okay to travel.

What is needed at an airport to enable this kind of service and efficiency?

There are obviously requirements for hardware to support all these functions.   This blog will not concern itself so much with specific hardware sources or choices but simply specify the requirements in terms of standards and software components, capabilities to facilitate the functionality described.

Categorization of IoT devices at the airport

An airport in this scenario will have devices that require high security and fast response times and many diverse devices for numerous other functions.  It could easily be the case there are 10s of thousands of IIoT devices in and around the airport that have to be managed.     This number of devices requires a large amount of automation to make it practical.   There has to be well defined security protocols and standards as well as policies and rules for devices so that they can be managed in this highly complex environment.

It’s important to note that because of the low cost of IoT hardware and the ubiquity of standards and other technology, whether or not the airports want all the infrastructure and security made into IoT devices, they will evolve rapidly to all become IoT devices.

An airport will need IoT devices which are designed for high security and high reliability operation.  Some of the protocols such as CoAp are evolving rapidly to support more secure applications.  In other cases hardwired connections or high security wifi connections will be used.

Let us establish some terms for different types of environments that devices need to operate within.

INFRASTRUCTURE:  Devices related to HVAC, energy management, pumps, watering systems and anything that has to do with the basic operation of the buildings and environment.

OPERATIONS:  Devices related to operating the airport including transportation vehicles, robots, baggage handling, conveyers.  Also, will include in this category things like airplanes themselves.

SECURITY: Devices related to locks, authentication systems, entitlement control systems, security monitoring devices, cameras, detectors of dangerous situations.  This might include cameras that can recognize possibly known people who are not supposed to be traveling, the security devices at security checkpoints.

SERVICE: Devices that are designed to provide services to customers.  This might include monitors that display information or dispense information to consumers, beacons for instance, devices that help consumers with baggage or travel assistance wheelchairs and carts.

CUSTOMER:  Devices that customers bring into the airport that need to access services within the airport.

It is expected that devices in the INFRASTRUCTURE and OPERATIONS category are always connected devices that have large battery power or directly connected to a power source and network connection that is physical and possibly replicated in some cases for reliability and security.

We would hope the same high level of service was possible for SECURITY devices but it is expected that this might include things like NFC or RFID devices.   Security devices might need to be mobile and thus hard wired power and networking may not be possible in all cases.

These classes of devices in the first 3 categories will also have to have the ability to be managed by a device manager that can establish geo-fences, wipe device contents and de-authorize devices if they are tampered with or accessed outside the geo-fence they are assigned to. Devices in these categories should have health APIs to report failure or imminent failure, battery loss or other concerns. They should support an authentication mechanism, and all data at rest on the devices as well as data in motion over whatever protocol the device uses must be encrypted. The certificates used by such encryption systems should be managed automatically so they can be revoked and re-issued periodically.

Security, Integration (Orchestration), IoT, BigData and other requirements of Airport Infrastructure

Due to the large number of people coming and going from an airport at any time and the sheer number of devices with telemetry, the amount of data captured by the systems and the number of transactions is truly very large. It is anticipated that the system for an average big-city airport would have to handle potentially tens of thousands of messages per second, possibly many more. This is roughly a billion messages a day and is quite reasonable for today's systems to handle.

Security

A comprehensive full featured Identity and Security manager component is required which will support all the following:

Authentication
OPENID, SAML2, Kerberos
Multi-factor authentication
credential mapping across different protocols
federation via OPENID, SAML2
account locking on failed user attempts
Account recovery with email and secret questions
bio-metric authentication
User/Group Management
LDAP, Active Directory or any database including Cassandra to support large user stores
SCIM support
Entitlement
OAUTH2
RBAC Role Based Access Control
Fine Grained Policies via XACML
Entitlement Management for APIs – REST or SOAP
Geofencing
preventing login outside defined geofences
Auditing
XDAS/JMX
logging integration with BAM and CEP for KPI’s and suspicious activity eventing

 BigData

A bigdata infrastructure is required because of the data flow requirements as well as the scale of data being collected. Each device will be polled frequently and that data will be logged, both for use in security applications and for analysis to improve efficiency, discover loopholes, discover new automations and implement improvements in functionality quickly.

Support for standards such as Cassandra,

Bigdata Collection Must Collect to a common bigdata store for analysis
collect information on all API usage
collect information on data from devices or services that require polling
collect data from devices or services that publish themselves
easy to add new streams of unstructured or semi-structured data
discover new sources for data dynamically and collect data
support 10000s of messages / second easily
collect metadata from some services or devices where the raw data repository is elsewhere
collect data on the system itself, such as metrics produced, actions taken, events generated
Support Apache Thrift, HTTP, MQTT, JMS, SOAP, Kafka and Web services
ability to add GPS, other information to stream
ability to also send data to other loggers as needed
real-time analytics
non-programmers should be able to add new metrics, KPI’s or continuously calculated quantities
time-based pattern matching (Complex event processing) for real-time eventing
machine learning capability to learn behaviors to be monitored
ability to create events based on exceeding limits, falling below limits, breaking a geofence, entering a geofence or other conditions
ability to aggregate data in an event from other events or data streams
ability to process rule based analytics in real time
visualization
easy to create dashboards of visualizations of any event or data in any stream
easy to create maps of events, devices, data on any event stream
easy to use tools like Google gadgets to create visualizations
ability to aggregate data from multiple sources including bigdata, conventional databases, file systems or other sources
batch analytics
ability to integrate with hadoop and open source batch big data analytics tools such as pentaho
manage batch analytics to perform
Management
ability to manage large clusters of cassandra or other big data storage databases automatically
ability to scale on increasing demand rapidly

Governance Registry / API/App/Web Store

A governance registry and Enterprise Store capability is needed to ensure security and configuration consistency, manage the lifecycle of APIs, and promote services to vendors, partners and the public. A governance registry provides lifecycle management as well as fault tolerance configuration and some security services, but the typical governance registry doesn't provide a friendly interface for the public, vendors or even internal developers. An Enterprise Store designed to make it easy to find services, documentation and helpful hints, and to promote a community of users of services, is required. This is an essential part of the Platform 3 message I talk about that is key to productivity and agility.

Some of the features required in these components:

gov Registry / Enterprise Store
Registry
Need to be able to support any type of asset as a governed entity including APIs for services, APIs for devices, different types of mobile devices, mobile phones, keys for APIs, credentials for devices, certificates for services, GPS coordinates for fixed devices, geofencing zones
UI
easy to add devices and services
easy to find status of all entities
easy to find and view devices and services
must support APIs, Devices, APIs for devices and Apps
easy to find documentation about any asset
ability to create new asset classes
lifecycle creation for each class
easy to manage lifecycle of each class
certificate management for services and devices

Internet of Things (IoT)

The Internet of Things components have to do mainly with device management.   In the past device management has been focused on cell phones.  New device management capabilities must include the capability to manage devices of all types.  The purpose of device management is to register devices, configure devices in a uniform way, detect anomalous behaviors, handle device upgrades, replacement, failures, theft, maintenance or even tampering.

In order to support the wide variety of devices in both the IIoT and IoT sphere it is necessary to support a Connected Device Management Framework which allows abstraction of basic functions and services associated with any device.    An important aspect of device management in such a complex environment as an Airport is to group devices in intelligent ways that allows management and analysis of data contextually.

IoT
Types
IP type devices connected over wired interfaces
IP type devices connected over wireless interfaces Wifi
IP type devices connected over CoAp/Zigbee
NFC/RFID
Protocols REST and SOAP
MQTT
Device Management
Device Profile
API registration
Owner Registration
GPS and GeoFencing
Beacon Security Profile
Supported Security
Health Status monitoring
Authentication
Entitlement / Authorization Profile
Data Logging
Device Wipe
Upgrade Status and Upgrade
Documentation
Applications
functional tagging
Groups Groups are non-heirarchical list of connected devices that depend on each other and have a set of analytics or services that operate across the group. Groups can contain other groups.
Group Profile
Group APIs
Group Geofencing
Group Health
Group part of Groups, tags
Applications
Documentation
Group Data Logging
Framework
Connected Device Management Framework
Support for OMA
Support for LWM2M
Extensible Support for Devices that don’t work under these
Extensible Support for all the device management semantics above

Integration and Orchestration

A key capability of the IoT infrastructure proposed is the intelligent working of multiple services together to produce intelligent behavior.    In the past individual devices were handled individually as their own function and not much thought was put into devices autonomously working together or information from multiple devices used to produce automation.

In order to make the Airport function efficiently with tens of thousands of devices and many services involving access to multiple devices it is necessary to be able to integrate all these devices so they can work together and to be able to establish rules, processes and simple workflows and dataflows between devices and people.   As a result there is a need for all 3 types of orchestration tools that we have used before:  Rules Engines, Integration Patterns in Enterprise Busses and Business Process Engines.   These 3 patterns provide an orthogonal and complete set of functionalities to specify behaviors that are simple integration of information, distribution of information from different devices to all the parties needed and to provide rules for complex behaviors or business process logic that may involve humans and take longer than a microsecond to fully process.

orchestration
the full suite of enterprise integration patterns supported
JMS, AMQP, HTTP(S), Files,
support for visual process management scripting orchestration
support for business rules orchestration
support for ETL from wide variety of sources
APIs and Enterprise Integration Patterns

Scalability, Performance

Since this system must support high message rates and highly variable demand it is important that the system be able to scale as needed to provide services.   In addition it is highly desirable to have automatic failure detection and service replacement.  A cloud based architecture is well suited to this as has been shown by numerous companies such as Google, Yahoo, Twitter.

In order to support a cloud infrastructure a DevOps/PaaS strategy should be employed to automate the management, deployment of services and move, scale or upgrade them.    PaaS platforms provide many features that take months and years to custom build and then are inflexible.

For example at an airport it may be possible to anticipate load at certain times of the day or based on flight arrival and departure times, events in an area that may increase or decrease base load.   To keep response to services always efficient it should be possible to allocate instances to specific devices or regions.

Overall
Fault Tolerance The entire system and every component must support active/active and active/passive FT
Disaster Recovery Data must be replicated to alternate site and applications should be runnable in alternate environment through replicated governance registry contents
Scale The system should support dynamic scaling allowing for peak period data flows and stress conditions 10 times the average flow
The system should support 1000s of messages / second at average flow
The system should be able to dynamically add instances of services if required to meet demand
Load Balancing
Hybrid Support multiple clouds in different locations
Polyglot Ability to support different development environments, different development tools and applications
Container Support Support for Docker, Zen and other containers
Orchestration Support for Kubernetes
Operations Operational Management capabilities including performance monitoring
AutoScaling The ability to scale a process based on numerous factors not just queue lengths

Summary

The modern Airport will be one of the most challenging environments for IoT and IIoT applications.   The ability to provide efficient operation and security go hand in hand in this new world.   Many of the new devices and capabilities will make customers lives dramatically better especially in stressful situations.  People can be better informed and have more of the basic things handled automatically.

The purpose of all these “things” is to make our life easier and better.   If the complexity overwhelms the system then the purpose of IoT is lost.   The “things” enable intelligent behavior from the ability to sense and act.  However, if each thing has to be managed individually we will spend our lives managing the things and not enjoying.   So, the key is “intelligence.”   The key to implementing intelligence is interoperability and orchestration.  However, to know what to do that is intelligent requires BigData.    We must be able to discover the patterns and the actions that will result in saved time and effort.  We must figure out how to respond to security situations intelligently and to handle typical failures and events like weather or plane outages or worse in as automated and smooth a way as possible or the complexity of the system will overwhelm the people intended to manage and produce good results.

The technology exists today in open source to build a modern airport like the one I have described. Nothing I suggested or described is beyond what we have today. It is simply a matter of the desire to build a better airport, and a better world, for ourselves.

I hope you enjoyed these use cases.


Evanthika AmarasiriHow to print the results summary when JMeter is running on headless mode

When you are running JMeter in GUI mode, you can easily view the results through the Summary report or through the Aggregate report.

But how do you view the summary if you are running JMeter in headless (non-GUI) mode, i.e. when the test plan is run with the -n option?

This can be configured through the jmeter.properties file, which resides in the $JMETER_HOME/bin folder.

Note that this is enabled by default in the latest versions of JMeter (e.g. version 2.13).

#---------------------------------------------------------------------------
# Summariser - Generate Summary Results - configuration (mainly applies to non-GUI mode)
#---------------------------------------------------------------------------
#

# Define the following property to automatically start a summariser with that name
# (applies to non-GUI mode only)
summariser.name=summary
#
# interval between summaries (in seconds) default 3 minutes
summariser.interval=180
#
# Write messages to log file
summariser.log=true
#
# Write messages to System.out
summariser.out=true

Ajith VitharanaInvoke file upload spring service using WSO2 API Manager(1.9).

This is a use case I tried recently, so I thought I'd write it up as a blog post :).

1. Deploy sample spring service to upload a file.

i  Download the sample service from this website https://spring.io/guides/gs/uploading-files/  (download link https://github.com/spring-guides/gs-uploading-files/archive/master.zip)

The FileUploadController.java class looks like below.

package hello;

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.multipart.MultipartFile;

@Controller
public class FileUploadController {

    @RequestMapping(value = "/upload", method = RequestMethod.GET)
    public @ResponseBody String provideUploadInfo() {
        return "You can upload a file by posting to this same URL.";
    }

    @RequestMapping(value = "/upload", method = RequestMethod.POST)
    public @ResponseBody String handleFileUpload(@RequestParam("name") String name,
            @RequestParam("file") MultipartFile file) {
        if (!file.isEmpty()) {
            try {
                byte[] bytes = file.getBytes();
                BufferedOutputStream stream =
                        new BufferedOutputStream(new FileOutputStream(new File(name)));
                stream.write(bytes);
                stream.close();
                return "You successfully uploaded " + name + "!";
            } catch (Exception e) {
                return "You failed to upload " + name + " => " + e.getMessage();
            }
        } else {
            return "You failed to upload " + name + " because the file was empty.";
        }
    }

}

ii. Unzip the file and go to the gs-uploading-files-master\complete directory from a command window.

iii. Execute the following command to build the executable jar file. (You should have Maven 3.x installed.)

mvn  clean install

(The default pom file is configured to build with Java 8. If you are using Java 6 or 7, change the following section in the pom file gs-uploading-files-master\complete\pom.xml.)
    <properties>
        <java.version>1.7</java.version>
    </properties>

iv.  Go to the target directory and execute the following command to run our file upload service.

java -jar gs-uploading-files-0.1.0.jar


v. When you point your browser to http://localhost:8080/, you should see the service UI to upload a file.

vi. You can also send a POST request to http://localhost:8080/upload to upload a file.

2. Create an API for this service using WSO2 API Manager.

i. Required fields to create an API.

API Name: FileUploadAPI
Context    : /fileupload
Version    : 1.0.0
URL pattern : /*
HTTP Method : POST
Production URL :  http://localhost:8080/upload
Tier Availability : Unlimited
 
ii. After publishing the API, log in to the Store, subscribe to the API, and generate a token.

iii) Generate a SOAP UI project using the API endpoint https://localhost:8243/fileupload/1.0.0

  • Change the HTTP method to POST.
  • Add a query parameter "name" (because the handleFileUpload operation in FileUploadController.java expects that parameter).
  • Select the Media Type as "multipart/form-data" (because we send the file to the upload service as an attachment).
  • Add the Authorization header (because we invoke an OAuth2-protected resource exposed by WSO2 API Manager).

iv) Add a file as attachment in SOAP UI.


v) Now when you send a request, you should see an error message similar to this.

{
   "timestamp": 1437841327984,
   "status": 400,
   "error": "Bad Request",
   "exception": "org.springframework.web.bind.MissingServletRequestParameterException",
   "message": "Required MultipartFile parameter 'file' is not present",
   "path": "/upload"
}


vi) To avoid this issue, you need to set the ContentID to "file" in the attachment window.


This ContentID value depends on the name given in the @RequestParam annotation [e.g. @RequestParam("file")] defined for the MultipartFile in the method signature.
public @ResponseBody String handleFileUpload(@RequestParam("name") String name, 
            @RequestParam("file") MultipartFile file){


vii) Go to the location where you executed the jar file; you should see the uploaded file there.
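
If you prefer to invoke the API from code instead of SOAP UI, the same request can be sent with a small Java client. This is a minimal sketch assuming Apache HttpClient 4.x (with the httpmime module) is on the classpath and that the gateway's HTTPS certificate is trusted by the JVM; the access token and file name are placeholders.

import java.io.File;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class FileUploadClient {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        try {
            HttpPost post = new HttpPost("https://localhost:8243/fileupload/1.0.0");
            // OAuth2 token generated in the API Store (placeholder value)
            post.setHeader("Authorization", "Bearer <access-token>");

            File file = new File("test.txt");
            post.setEntity(MultipartEntityBuilder.create()
                    // "name" maps to @RequestParam("name") in FileUploadController
                    .addTextBody("name", file.getName())
                    // the part name "file" plays the same role as the ContentID in SOAP UI
                    .addBinaryBody("file", file, ContentType.DEFAULT_BINARY, file.getName())
                    .build());

            HttpResponse response = client.execute(post);
            System.out.println(EntityUtils.toString(response.getEntity()));
        } finally {
            client.close();
        }
    }
}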


Nirmal FernandoHow to tune hyperparameters?

Hyperparameter tuning is one of the key concepts in machine learning. Grid search, random search, and gradient-based optimization are a few techniques you could use to perform hyperparameter tuning automatically [1].
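
As a rough illustration of what automated grid search does, here is a minimal Java sketch that walks a grid of learning rates and iteration counts and keeps the best AUC. The trainAndEvaluate method is a hypothetical stand-in for whatever trains and evaluates a model; it is not a WSO2 ML API.

public class GridSearchSketch {

    // Hypothetical stand-in: train a model with the given hyperparameters and return its AUC
    private static double trainAndEvaluate(double learningRate, int iterations) {
        return 0.5; // placeholder value
    }

    public static void main(String[] args) {
        double[] learningRates = {0.0001, 0.001, 0.01, 0.1};
        int[] iterationCounts = {1000, 10000, 50000};

        double bestAuc = 0;
        double bestLearningRate = 0;
        int bestIterations = 0;

        // Evaluate every combination on the grid and keep the best one
        for (double learningRate : learningRates) {
            for (int iterations : iterationCounts) {
                double auc = trainAndEvaluate(learningRate, iterations);
                if (auc > bestAuc) {
                    bestAuc = auc;
                    bestLearningRate = learningRate;
                    bestIterations = iterations;
                }
            }
        }
        System.out.printf("Best AUC %.3f at learning rate %.4f with %d iterations%n",
                bestAuc, bestLearningRate, bestIterations);
    }
}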

In this article, I am going to explain how you could do the hyperparameter tuning manually by performing a few tests. I am going to use WSO2 Machine Learner 1.0 for this purpose (refer to [2] to understand what WSO2 ML 1.0 is capable of doing). The dataset I have used for this analysis is the well-known Pima Indians Diabetes dataset [3], and the algorithm picked was logistic regression with mini-batch gradient descent. For this algorithm there are a few hyperparameters, namely:

  • Iterations - Number of times the optimizer runs before completing the optimization process
  • Learning rate - Step size of the optimization algorithm
  • Regularization type - Type of the regularization. WSO2 Machine Learner supports L2 and L1 regularization.
  • Regularization parameter - Controls the model complexity and hence helps to control model overfitting.
  • SGD Data Fraction - Fraction of the training dataset used in a single iteration of the optimization algorithm

From the above set of hyperparameters, what I wanted to find was the optimal learning rate and number of iterations, keeping the other hyperparameters at constant values.

Goals
  • Finding the optimal learning rate and the number of iterations which improves AUC (Area under curve of ROC curve [4])
  • Finding the relationship between Learning rate and AUC
  • Finding the relationship between number of iterations and AUC

Approach

Firstly, the Pima Indians Diabetes dataset was uploaded to WSO2 ML 1.0. Then, I wanted to find a fair number of iterations so that I could then search for the optimal learning rate. For that, the learning rate was kept at a fixed value (0.1) while the number of iterations was varied, and the AUC was recorded against each iteration count.

LR = 0.1

Iterations | 100   | 1000  | 5000  | 10000 | 20000 | 30000 | 50000
AUC        | 0.475 | 0.464 | 0.507 | 0.526 | 0.546 | 0.562 | 0.592


According to the plotted graph, it is quite evident that the AUC increases with the number of iterations. Hence, I picked 10000 as a fair number of iterations to find the optimal learning rate (of course I could have picked any number > 5000, where the AUC started to climb over 0.5). Increasing the number of iterations excessively would lead to an overfitted model.
Since I have picked a ‘fair’ number of iterations, the next step is to find the optimal learning rate. For that, the number of iterations was kept at a fixed value (10000) while the learning rate was varied, and the AUC was recorded against each learning rate.

Iterations = 10000

LR  | 0.0001 | 0.0005 | 0.001 | 0.005 | 0.01  | 0.1
AUC | 0.529  | 0.558  | 0.562 | 0.59  | 0.599 | 0.526

According to the above observations, we can see that the AUC has a global maximum at a learning rate of 0.01 (to be precise, it is between 0.005 and 0.01). Hence, we could conclude that the AUC is maximized when the learning rate approaches 0.01, i.e. 0.01 is the optimal learning rate for this particular dataset and algorithm.

Now, we could change the learning rate to 0.01 and re-run the first test mentioned in the article.

LR = 0.01

Iterations | 100   | 1000  | 5000  | 10000 | 20000 | 30000 | 50000 | 100000 | 150000
AUC        | 0.512 | 0.522 | 0.595 | 0.599 | 0.601 | 0.604 | 0.607 | 0.612  | 0.616


The above graph depicts that the AUC increases ever so slightly when we increase the number of iterations. So, how do we find the optimal number of iterations? Well, it depends on how much computing power you have and also what level of AUC you expect. The AUC will probably not improve drastically, even if you increase the number of iterations further.

How can I increase the AUC then? You could of course use another binary classification algorithm (e.g. Support Vector Machine), or you could do some feature engineering on the dataset to reduce the noise in the training data.

Summary
This article tries to explain the process of tuning hyperparameters for a selected dataset and algorithm. The same approach could be used with other datasets and algorithms too.
References:

Danushka FernandoHow to include artifactid in a folder or file name or content of a file in a maven archetype.

When we create Maven archetypes, especially multi-module ones, we might need to include the artifact ID in a folder or file name, or in the content of a file. This can be done very easily.


To include the artifact ID in a folder or file name, you just have to add the placeholder __rootArtifactId__.


**Note that there are two '_' characters before and after the word rootArtifactId.


So, for example, if you want a file name of the form <artifactId>-development.xml, you can simply name the file __rootArtifactId__-development.xml. When the file name is specified like this inside the archetype, running the archetype generate command will replace the placeholder with the artifact ID provided.


The next thing is how to include this inside the content of a file. This was tricky; I couldn't find a way to do it at first. So I kept trying things, and it turned out to be simple: you can just add the placeholder ${rootArtifactId}.

In the same pattern you can use other parameters like version as well by using __version__ and ${version} respectively.


Enjoy !!!!

Srinath PereraMoved to wordpress at https://iwringer.wordpress.com

Moved to wordpress, and find the new blog at https://iwringer.wordpress.com

Isuru PereraJava CPU Flame Graphs

Brendan Gregg shared exciting news in his Monitorama talk: "JDK-8068945" is fixed in Java 8 Update 60 Build 19!

Without this fix, it was not possible to see full stack in Java with Linux perf_events and standard JDK (without any patches). For more information, see Brendan's Java CPU Flame Graphs page.

The Problem with Java and Perf


First of all, let's see what the problem is with using the current latest Java and perf.

For this example, I used the same program explained in my previous blog post regarding FlameGraphs.

java org.wso2.example.JavaThreadCPUUsage.App

Then I sampled on-CPU functions for Java program using perf. (See Brendan's Perf Examples)

sudo perf record -F 99 -g -p `pgrep -f JavaThreadCPUUsage`

After a few seconds, I pressed Ctrl+C to stop recording.

When I tried to list all raw events from "perf.data" using the "sudo perf script" command, I saw the following message.

Failed to open /tmp/perf-29463.map, continuing without symbols

Perf tries to load Java symbol table from the "/tmp/perf-29463.map" file. The 29463 part in the file name is the PID of the Java program.

Missing these Java symbols is one of the main problems with Java and Perf.  In here, there are actually two specific problems and Brendan has explained those in Java CPU Flame Graphs.

As explained by Brendan, in order to solve these problems, we need to provide a Java symbol table for perf and instruct the JVM to preserve frame pointers. This is why we need the fix for "JDK-8068945".

Generating Java CPU Flame Graphs


I downloaded the latest JDK™ 8u60 Early Access Release, which is "JDK 8u60 Build b24" as of now.

I extracted the JDK to a temp directory in my home. So, my JAVA_HOME is now "~/temp/jdk1.8.0_60".

With this release, I can use the JVM argument "-XX:+PreserveFramePointer" with java command.

~/temp/jdk1.8.0_60/bin/java -XX:+PreserveFramePointer org.wso2.example.JavaThreadCPUUsage.App

Now we need to create the Java symbol file. I found two ways to do that.

  1. Use https://github.com/jrudolph/perf-map-agent
  2. Use https://github.com/coderplay/perfj 

Using perf-map-agent


We need to build "perf-java".

git clone https://github.com/jrudolph/perf-map-agent.git
cd perf-map-agent
export JAVA_HOME=~/temp/jdk1.8.0_60/
sudo apt-get install cmake
cmake .
make

Now, run "perf-java".

./perf-java `pgrep -f JavaThreadCPUUsage` 

This will create the Java symbol file in /tmp.

Please note that the perf-java command attaches to the java process. Please use the same JAVA_HOME to make sure there are no errors when attaching to the Java process.

Now we can start a perf recording. (Same command as mentioned above)

sudo perf record -F 99 -g -p `pgrep -f JavaThreadCPUUsage`

Using perfj


PerfJ is a wrapper around the Linux perf command for Java programs. Download the latest release from the PerfJ releases page.

tar -xvf perfj-1.0.tgz
cd perfj-1.0/
sudo -u isuru JAVA_HOME=/home/isuru/temp/jdk1.8.0_60 ./bin/perfj record -F 99 -g -p `pgrep -f JavaThreadCPUUsage`

Please note that I'm using the same user as the java process. This is required as perfj also attaches to the Java process. JAVA_HOME also must be same.

Generating Flame Graph


Now we have perf.data and java symbols to generate the flame graph.

sudo perf script | ~/performance/brendangregg-git/FlameGraph/stackcollapse-perf.pl > /tmp/out.perf-folded
cat /tmp/out.perf-folded | ~/performance/brendangregg-git/FlameGraph/flamegraph.pl --color=java --width 550 > /tmp/perf.svg

When you open the perf.svg in browser, you can see complete stack traces.

Summary


This is a quick blog post on Generating Java CPU Flame Graphs. To create Java symbol file, we can use perfj or perf-map-agent.

When I tested, perfj seems to provide better results.

Here is the flame graph generated with perfj java symbols!


If there are any questions, please ask in comments. 

Isuru PereraInstalling Oracle JDK 7 (Java Development Kit) on Ubuntu

There are many posts on this topic if you search on Google. This post just explains the steps I use to install JDK 7 on my laptop.

Download the JDK from Oracle. The latest version as of now is Java SE 7u51.

I'm on 64-bit machine, therefore I downloaded jdk-7u51-linux-x64.tar.gz

It's easy to get the tar.gz package as we just have to extract the JDK.

I usually extract the JDK to /usr/lib/jvm directory.


sudo mkdir -p /usr/lib/jvm
cd /usr/lib/jvm/
sudo tar -xf ~/Software/jdk-7u51-linux-x64.tar.gz
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0_51/bin/javac" 1
sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0_51/bin/java" 1
sudo update-alternatives --install "/usr/lib/mozilla/plugins/libjavaplugin.so" "mozilla-javaplugin.so" "/usr/lib/jvm/jdk1.7.0_51/jre/lib/amd64/libnpjp2.so" 1
sudo update-alternatives --install "/usr/bin/javaws" "javaws" "/usr/lib/jvm/jdk1.7.0_51/bin/javaws" 1


After installing, we should configure each alternative

sudo update-alternatives --config javac
sudo update-alternatives --config java
sudo update-alternatives --config mozilla-javaplugin.so
sudo update-alternatives --config javaws

Now we can configure JAVA_HOME. We can edit ~/.bashrc and add following.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_51/

That's it! :)

UPDATE:

Please check Oracle Java Installation script for Ubuntu. All above steps are now automated via an installation script: https://github.com/chrishantha/install-java

Evanthika AmarasiriSolving the famous "java.sql.SQLException: Total number of available connections are less than the total number of closed connections" issue

While starting up your WSO2 product after configuring a registry mount, you may come across the issue below.

Caused by: java.sql.SQLException: Total number of available connections are less than the total number of closed connections
    at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCDatabaseTransaction$ManagedRegistryConnection.close(JDBCDatabaseTransaction.java:1349)
    at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCTransactionManager.endTransaction(JDBCTransactionManager.java:178)
    ... 46 more


When you see the above exception, the first thing you have to do is verify the mount configuration of the registry.xml.

See below. In this config, if you accidentally refer to the dbConfig name of the local DB in the mount, you will get the exception mentioned in the subject. The correct dbConfig name you should refer to is wso2Mount, where wso2Mount points to an external DB.

E.g. :-

     <currentDBConfig>wso2registry</currentDBConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>

    <dbConfig name="wso2registry">
        <dataSource>jdbc/WSO2CarbonDB</dataSource>
    </dbConfig>

    <dbConfig name="wso2Mount">
        <dataSource>jdbc/WSO2RegDB</dataSource>
    </dbConfig>

    <remoteInstance url="https://localhost:9443/registry">
        <id>instanceid</id>
        <dbConfig>wso2Mount</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
    </remoteInstance>

    <mount path="/_system/config" overwrite="true">
        <instanceId>instanceid</instanceId>
        <targetPath>/_system/nodes</targetPath>
    </mount>

sanjeewa malalgodaHow to handle a distributed counter across a cluster when each node contributes to the counter - Distributed throttling

Handling throttling in a distributed environment is a bit of a tricky task. For this we need to maintain a time window and counters per instance, and those counters should be shared across the cluster as well. Recently I worked on a similar issue, and I will share my thoughts about this problem.

Let's say we have 5 nodes, and each node serves x requests per minute; across the cluster we can then serve 5x requests per minute. In some cases node1 may serve 2x while other nodes serve 1x, but we still need to enforce 5x across the cluster. To address this we need a shared counter across the cluster, so that each and every node can contribute to it while maintaining its own counters.

To implement something like that we may use the following approach.

We can maintain two Hazelcast IAtomicLong data structures (or a similar distributed counter) as follows. These are handled at the cluster level,
and the node does not have to do anything about their replication.

  • Shared Counter: maintains the global request count across the cluster
  • Shared Timestamp: used to manage the time window across the cluster for a particular throttling period

In each and every instance we should maintain the following per counter object:
  • A local global counter, which syncs up with the shared counter during the replication task (local global counter = shared counter + local counter)
  • A local counter, which holds the request count until the replication task runs (after replication, local counter = 0)

We may use a replication task that runs periodically.
During the replication task the following happens:
the shared counter is updated with the node's local counter, and then the local global counter is updated from the shared counter.
If the shared counter has been reset to zero, the local global counter is reset as well.
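
As a rough illustration of this idea, the sketch below uses Hazelcast 3.x IAtomicLong structures. It is a minimal, self-contained sketch rather than the actual throttle core code: the class and field names are illustrative, and the time-window reset logic discussed next is omitted for brevity.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;

public class DistributedCounterSketch {

    private final IAtomicLong sharedCounter;    // global request count across the cluster
    private final IAtomicLong sharedTimestamp;  // start of the current throttling window
    private final AtomicLong localCounter = new AtomicLong(); // requests since the last replication
    private volatile long localGlobalCounter;                 // shared counter + local counter

    public DistributedCounterSketch(HazelcastInstance hz, String counterId) {
        this.sharedCounter = hz.getAtomicLong(counterId + "-count");
        this.sharedTimestamp = hz.getAtomicLong(counterId + "-window-start");
        // first node to arrive records the window start time
        this.sharedTimestamp.compareAndSet(0, System.currentTimeMillis());
    }

    // Called on every request; the decision uses the locally cached global count
    public boolean canAccess(long limitPerWindow) {
        long estimatedGlobal = localGlobalCounter + localCounter.incrementAndGet();
        return estimatedGlobal <= limitPerWindow;
    }

    // Periodic replication task: push the local delta and refresh the cached global count
    public void replicate() {
        long delta = localCounter.getAndSet(0);
        localGlobalCounter = sharedCounter.addAndGet(delta);
    }

    public void scheduleReplication(ScheduledExecutorService scheduler, long periodMillis) {
        scheduler.scheduleAtFixedRate(this::replicate, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        DistributedCounterSketch counter = new DistributedCounterSketch(hz, "api1-gold-tier");
        counter.scheduleReplication(Executors.newSingleThreadScheduledExecutor(), 5000);
        System.out.println("allowed: " + counter.canAccess(1000));
    }
}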


In addition, we need to store the current time in the Hazelcast IAtomicLong. When a server gets the first request, it sets the first access time as the value in Hazelcast, keyed by the caller context ID, so all the servers share the same first access time. The throttle window is then calculated as that time from Hazelcast plus the unit time of the tier.
When the time window has elapsed, we set the previous caller context's global count back to 0.
The assumption we made is that all nodes in the cluster have the same (synchronized) clock.




See following diagrams.






If you need to use the throttle core for your application/component running in the WSO2 runtime, you can import the throttle core into your project and use the following code to check access availability.


Here I have listed code to throttle messages using a handler, so you can write your own handler and call the doThrottle method in the message flow. First you need to import org.wso2.carbon.throttle.core into your project.


    private boolean doThrottle(MessageContext messageContext) {
        boolean canAccess = true;
        boolean isResponse = messageContext.isResponse();
        org.apache.axis2.context.MessageContext axis2MC =
                ((Axis2MessageContext) messageContext).getAxis2MessageContext();
        ConfigurationContext cc = axis2MC.getConfigurationContext();
        synchronized (this) {
            if (!isResponse) {
                initThrottle(messageContext, cc);
            }
        }

        // if the access is success through concurrency throttle and if this is a request message
        // then do access rate based throttling
        if (!isResponse && throttle != null) {
            AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(messageContext);
            String tier;
            if (authContext != null) {
                AccessInformation info = null;
                try {
                    String ipBasedKey = (String) ((TreeMap) axis2MC.
                            getProperty("TRANSPORT_HEADERS")).get("X-Forwarded-For");
                    if (ipBasedKey == null) {
                        ipBasedKey = (String) axis2MC.getProperty("REMOTE_ADDR");
                    }
                    tier = authContext.getApplicationTier();
                    ThrottleContext apiThrottleContext =
                            ApplicationThrottleController.
                                    getApplicationThrottleContext(messageContext, cc, tier);
                    //    if (isClusteringEnable) {
                    //      applicationThrottleContext.setConfigurationContext(cc);
                    apiThrottleContext.setThrottleId(id);
                    info = applicationRoleBasedAccessController.canAccess(apiThrottleContext,
                                                                          ipBasedKey, tier);
                    canAccess = info.isAccessAllowed();
                } catch (ThrottleException e) {
                    handleException("Error while trying evaluate IPBased throttling policy", e);
                }
            }
        }

        if (!canAccess) {
            handleThrottleOut(messageContext);
            return false;
        }

        return canAccess;
    }

    private void initThrottle(MessageContext synCtx, ConfigurationContext cc) {
        if (policyKey == null) {
            throw new SynapseException("Throttle policy unspecified for the API");
        }

        Entry entry = synCtx.getConfiguration().getEntryDefinition(policyKey);
        if (entry == null) {
            handleException("Cannot find throttling policy using key: " + policyKey);
            return;
        }
        Object entryValue = null;
        boolean reCreate = false;

        if (entry.isDynamic()) {
            if ((!entry.isCached()) || (entry.isExpired()) || throttle == null) {
                entryValue = synCtx.getEntry(this.policyKey);
                if (this.version != entry.getVersion()) {
                    reCreate = true;
                }
            }
        } else if (this.throttle == null) {
            entryValue = synCtx.getEntry(this.policyKey);
        }

        if (reCreate || throttle == null) {
            if (entryValue == null || !(entryValue instanceof OMElement)) {
                handleException("Unable to load throttling policy using key: " + policyKey);
                return;
            }
            version = entry.getVersion();
            try {
                // Creates the throttle from the policy
                throttle = ThrottleFactory.createMediatorThrottle(
                        PolicyEngine.getPolicy((OMElement) entryValue));
            } catch (ThrottleException e) {
                handleException("Error processing the throttling policy", e);
            }
        }
    }



sanjeewa malalgodaHow to minimize Solr indexing time (registry artifact loading time) in a newly spawned instance

In an API Management platform we sometimes need to add Store and Publisher nodes to the cluster. But if you have a large number of resources in the registry, Solr indexing will take some time.
Solr indexing is used to index registry data on the local file system. In this post we will discuss how we can minimize the time taken by this loading process. Please note this applies to all Carbon kernel 4.2.0 or earlier versions. In G-Reg 5.0.0 we have handled this issue, so nothing needs to be done for this scenario.

You can minimize the time taken to list the existing APIs in the Store and Publisher by copying an already indexed solr/data directory to a fresh APIM instance.
However, note that you should NOT copy and replace solr/data directories across different APIM product versions. (For example, you can NOT copy the solr/data directory of APIM 1.9 into APIM 1.7.)

[1] First, create a backup of the Solr indexed files from the currently running API product version:
   [APIM_Home]/solr/data directory
[2] Now copy and replace the [Product_Home]/solr/data directory in the new APIM instance(s) before Puppet initializes it. The new instance will then list the existing APIs, since by the time it starts running the Solr indexed files have already been copied to it.

If you are using an automated deployment process, it is recommended to automate this step as well.
You can follow these instructions to automate it:
01. Take a backup of the solr/data directory of a running server and push it to an artifact server (you can use rsync or svn for this).
02. When a new instance is spawned, copy the updated Solr content from the remote artifact server before starting it.
03. Then start the new server.


If you need to manually re-index the data, you can follow the approach listed below.

Shut down the server if it is already started.
Rename the lastAccessTimeLocation in registry.xml,
Eg:
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime
To
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1
Back up the solr directory and delete it.

/solr

Restart the server and keep it idle for a few minutes to re-index.

Chanika GeeganageResolve - keytool error: java.lang.Exception: Failed to establish chain from reply

This blog post is related to my previous post 'Add CA signed certificate to keystore'. When you import the CA signed certificate to your keystore, you may get the following error:

keytool error: java.lang.Exception: Failed to establish chain from reply

The cause of this error
This error occurs if 
  • the correct root certificate is not imported to the keystore 
  • the correct intermediate certificate is not imported to the keystore
The root cause is that when you import the signed certificate, keytool checks whether it can build a chain from the issuer and subject parameters of the imported certificate.

The solution is to

Import the correct root and intermediate certificates, which are compatible with the CA and the certificate type. For example, if you are using VeriSign you can find all the intermediate and root certificates here.

Chanika GeeganageAdd CA signed certificate to keystore

A keystore is a file that keeps private keys, certificates and symmetric keys as key-value pairs. Each entry is uniquely identified by an identifier called an 'alias'. In this blog post I will go through a very common use case where we have to get a certificate signed by a CA and import it into the keystore.

As a prerequisite we have to make sure that Java is installed correctly and the class path is set. Then we can follow the following steps

1. Create a key store

You can create your own keystore by executing the following command.

keytool -genkey -alias democert -keyalg RSA -keystore demokeystore.jks -keysize 2048

You will be prompted to give below required information and a password for the keystore.

Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  wso2
What is the name of your organization?
  [Unknown]:  wso2
What is the name of your City or Locality?
  [Unknown]:  colombo
What is the name of your State or Province?
  [Unknown]:  WP
What is the two-letter country code for this unit?
  [Unknown]:  LK
Is CN=localhost, OU=wso2, O=wso2, L=colombo, ST=WP, C=LK correct?
  [no]:  yes

Enter key password for <democert>
(RETURN if same as keystore password):

This generates a private key and a certificate with the alias specified in the command (e.g. democert).

Once you have executed the above command, a new file named demokeystore.jks will be created at the location where you executed the command.

2. View the content in the created keystore.

You can execute the following command in order to view the content of the created keystore in step 1.

keytool -list -v -keystore demokeystore.jks -storepass password

You will receive an output similar to

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: democert
Creation date: Jul 21, 2015
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=localhost, OU=wso2, O=wso2, L=colombo, ST=WP, C=LK
Issuer: CN=localhost, OU=wso2, O=wso2, L=colombo, ST=WP, C=LK
Serial number: 2ef9b438
Valid from: Tue Jul 21 18:46:12 IST 2015 until: Mon Oct 19 18:46:12 IST 2015
Certificate fingerprints:
MD5:  2F:1B:EF:8E:95:5D:0E:0F:81:34:FE:4A:27:A9:68:A8
SHA1: FD:9D:98:A1:FB:36:DD:6B:D7:1A:F6:E8:AC:98:35:3A:5E:3C:7F:9A
SHA256: CF:02:15:41:9E:CC:67:65:85:33:4A:E4:3D:B9:C4:C5:B2:04:CD:A8:FF:B6:63:D6:DB:DC:79:85:51:79:FA:1E
Signature algorithm name: SHA256withRSA
Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 9B D4 69 A2 D9 A8 E0 22   02 D6 4F 57 71 3B 27 F4  ..i...."..OWq;'.
0010: 18 8E 7F 4F                                        ...O
]
]



*******************************************
*******************************************

3. Create CSR

The certificate in the created keystore is a self-signed certificate, but you need to get your certificate signed by a Certificate Authority (CA). For that, a Certificate Signing Request (CSR) has to be generated. You can use the following command for that.

keytool -certreq -v -alias democert -file csr_request.pem -keypass password -storepass password -keystore demokeystore.jks

Then a csr_request.pem file is created in the location that you have executed this command.

4. Get the certificate signed by CA

In this blog post I'm going to use the VeriSign free trial version to get the certificate signed. If you are using this trial version, the certificate is valid for only 30 days. Follow the wizard here. When you are asked for the CSR, open the generated csr_request.pem in a text editor, copy the content, and paste it into the text area in the wizard. After you have completed the wizard you will receive an email from VeriSign with the signed certificate.

5. Import the root and intermediate certificates to the keystore

Before importing the signed certificate you have to import the root and intermediate certificates to the keystore. The root certificate for VeriSign trial version can be found from here. Copy the text in the root certificate to a new file and save it as verisign_root.pem file

Now you can import the root certificate to the keystore by executing the following command.

keytool -import -v -noprompt -trustcacerts -alias verisigndemoroot -file verisign_root.pem -keystore demokeystore.jks -storepass password

Now the root cert is imported. You can verify that by listing the content in the keystore. (step2)

The next step is to import the intermediate cert file. For the VeriSign trial version you can get the intermediate certificate from here. Copy the text and save that in a new file (verisign_intermediate.pem)

Import the intermediate certificate:

keytool -import -v -noprompt -trustcacerts -alias verisigndemoim -file verisign_intermediate.pem -keystore demokeystore.jks -storepass password

6. Import the CA signed certificate to keystore

Now we can import the signed certificate. You can find the certificate in the email you received from VeriSign. Copy the certificate text. For example,

-----BEGIN CERTIFICATE-----
MIIEtTCCA52gAwIBAgIQG+p3SdIPIRRsG/yB/6iYKjANBgkqhkiG9w0BAQsFADCB
jTELMAkGA1UEBhMCVVMxHTAbBgNVBAoTFFN5bWFudGVjIENvcnBvcmF0aW9uMTAw
LgYDVQQLEydGb3IgVGVzdCBQdXJwb3NlcyBPbmx5LiAgTm8gYXNzdXJhbmNlcy4x
LTArBgNVBAMTJFN5bWFudGVjIFRyaWFsIFNlY3VyZSBTZXJ2ZXIgQ0EgLSBHMzAe
Fw0xNTA3MjAwMDAwMDBaFw0xNTA4MTkyMzU5NTlaMFoxCzAJBgNVBAYTAlNMMQsw
CQYDVQQIEwJXUDEMMAoGA1UEBxQDQ09MMQ0wCwYDVQQKFAR3c28yMQ0wCwYDVQQL
FAR3c28yMRIwEAYDVQQDFAlsb2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IB
DwAwggEKAoIBAQC8YacCtuAHjcUheGn6rHKrK8DgGnFiZjRKs5RlBkQBJTWgasxH
LpCd1qfeDkqGVBnoizcCvuqmMy9a6jZKjLNYT9MD/VaezCQq0c9VAMRWz9C06poi
igpvOnEAcoDJdmjSMQawSVIy5XO9aLVv5r3I/Plx6HG3VTNyS+J/Jh5ejO5ktJKU
Vjp/cQQenljSgmw+V1nERNAAl7EQMYE2gR0szzi6r/AiQaiEtETjoScTIwiU+gbH
STb47324GmlT6PF5T0e0+3d4QpHIp7+7HP79W5lESF/PrDA99HGXDIO64OCVIchM
dVHqDa3E8YNJNPyqwApOUw5yFUxokANEn/5rAgMBAAGjggFBMIIBPTAUBgNVHREE
DTALgglsb2NhbGhvc3QwCQYDVR0TBAIwADAOBgNVHQ8BAf8EBAMCBaAwKwYDVR0f
BCQwIjAgoB6gHIYaaHR0cDovL3JlLnN5bWNiLmNvbS9yZS5jcmwwZQYDVR0gBF4w
XDBaBgpghkgBhvhFAQcVMEwwIwYIKwYBBQUHAgEWF2h0dHBzOi8vZC5zeW1jYi5j
b20vY3BzMCUGCCsGAQUFBwICMBkMF2h0dHBzOi8vZC5zeW1jYi5jb20vcnBhMB0G
A1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcDAjAfBgNVHSMEGDAWgBSp8EYB+q89
iUDnFiFocUQ1XSrQvjA2BggrBgEFBQcBAQQqMCgwJgYIKwYBBQUHMAKGGmh0dHA6
Ly9yZS5zeW1jYi5jb20vcmUuY3J0MA0GCSqGSIb3DQEBCwUAA4IBAQBlZHLiUaGP
ok8HP0QJTwRh2J5dwawQKPB9dauqcGXa5xhIeVF8cc6ie25+Szd1Bs9p07tzZwyq
jhdMk51fRkaGtli6N84V5Db7bRYARncXQR91FScxSq6Opda7eFRTH2Ux4BUvhjnE
rBtVv47PXIykgXEaGzECrIT/RSBh78HY5rKLYrxnS7hOjNHeuzjvCLzYp0gQPZjP
sGp7lmGWjy01j1mYTfHzVWOKQmiheWdwtbKVd2+TR5YzuINt9ErDfL86yCt6mkNK
eSy7pIOcy4nvHh7h07UsjivGNIIfQ6hN4aY7OsnGcIBvIIB6X9JVrNoG0Fib8wwl
LZtGg2ZrFIsd
-----END CERTIFICATE-----

Create a new text file, paste this content, and save it as verisign_signed.pem.
Import it with the following command:

keytool -import -v -alias democert -file verisign_signed.pem -keystore demokeystore.jks -keypass password -storepass password

Now you have the CA signed certificate in your keystore. You can verify that by listing the certificates in the keystore as you did in step 2.
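
If you prefer to verify it programmatically, a few lines of plain JCA code will print the chain stored against the alias. This is a minimal sketch reusing the keystore name, alias, and password from the steps above.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

public class ListCertificateChain {
    public static void main(String[] args) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("demokeystore.jks")) {
            keyStore.load(in, "password".toCharArray());
        }

        // For a PrivateKeyEntry this returns the full chain: signed cert, intermediate, root
        Certificate[] chain = keyStore.getCertificateChain("democert");
        for (Certificate certificate : chain) {
            X509Certificate x509 = (X509Certificate) certificate;
            System.out.println("Subject: " + x509.getSubjectDN());
            System.out.println("Issuer : " + x509.getIssuerDN());
        }
    }
}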


Sumedha KodithuwakkuHow to pass an Authorization token to the back end sever, in WSO2 API Cloud

There can be scenarios where the back-end service expects an Authorization token which is different from the Authorization token used in API Cloud. However, when a request is sent to WSO2 API Cloud with the Authorization header, the API Gateway will use it for API authentication/authorization and it will be dropped from the outgoing message to the back-end service.

This requirement can be achieved with the use of two headers: the Authorization header containing the API Cloud token, and a different header which contains the token expected by the back-end. Using a custom mediation extension, the value of this second header can be extracted and set as the Authorization header, which will then be sent to the back-end. For example, the two headers Authorization (API Cloud token) and Authentication (the token expected by the back end) can be used.

For this scenario, a per-API extension can be used. There is a naming pattern for a per-API extension sequence, which is explained here. In API Cloud it should be similar to the following (assuming the user email is user@email.com):

user.email.com-AT-yourOrganizationKey--YourAPIName:v1.0.0--In

You can find the Organization Key from Organization Manage page in WSO2 Cloud.

Following is a sample synapse configuration of a per-API extension sequence for this scenario.

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse"
name="user.email.com-AT-yourOrganizationKey--YourAPIName:v1.0.0--In">
<property xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"
xmlns:ns3="http://org.apache.synapse/xsd"
name="Authentication"
expression="get-property('transport', 'Authentication')"/>
<property xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"
xmlns:ns3="http://org.apache.synapse/xsd"
name="Authorization"
expression="get-property('Authentication')"
scope="transport"
type="STRING"/>
<property name="Authentication" scope="transport" action="remove"/>
</sequence>

An XML file should be created with the above configuration and uploaded to the governance registry of API Cloud using the management console UI of the Gateway (https://gateway.api.cloud.wso2.com/carbon).

You can get the user name from the top right corner of the Publisher, then enter your password and log in. Once you are logged in, select Resources (on the left hand side of the management console), click Browse, and navigate to the /_system/governance/apimgt/customsequences registry location. Since this sequence needs to be invoked in the In direction (the request path), navigate to the in collection. Click Add Resource, upload the XML file of the sequence configuration, and add it. (Note: once you add the sequence, it might take up to 15 minutes until it is deployed to the Publisher.)

Now go to the Publisher, select the required API, go to the edit wizard by clicking Edit, and navigate to the Manage section. Tick the Sequences check box and select the sequence we added for the In flow. After that, Save and Publish the API.

Now you should invoke the API passing the above two headers and the back-end will receive the required Authorization header. A sample curl request would be as follows;

curl -H "Authorization: Bearer a414d15ebfe45a4542580244e53615b" -H "Authentication: Bearer custom-bearer-token-value" http://gateway.api.cloud.wso2.com:8280/t/clouddemo/authsample/1.0

What happens will be as follows;

Client (headers: Authorization, Authentication) -> 
           Gateway (drop: Authorization, convert: Authentication-Authorization) -> Backend 


References:
[1] http://sanjeewamalalgoda.blogspot.com/2014/11/how-to-use-custom-authentication-header.html

Saliya EkanayakeGet and Set Process Affinity in C

Affinity of a process can be retrieved and set within a C program using sched_getaffinity (man page) and sched_setaffinity (man page) routines available in the sched.h. The following are two examples showing these two methods in action.

Get Process Affinity

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char* argvi[])
{
pid_t pid = getpid();
cpu_set_t my_set;
int ret;

CPU_ZERO(&my_set);
ret = sched_getaffinity(0, sizeof(my_set), &my_set);
char str[80];
strcpy(str," ");
int count = 0;
int j;
for (j = 0; j < CPU_SETSIZE; ++j)
{
if (CPU_ISSET(j, &my_set))
{
++count;
char cpunum[8];
sprintf(cpunum, "%d ", j);
strcat(str, cpunum);
}
}
printf("pid %d affinity has %d CPUs ... %s\n", pid, count, str);
return 0;
}
You can test this by using taskset command in linux to set the affinity of this program and checking if the program returns the same affinity you set. For example you could do something like,
taskset -c 1,2 ./a.out
Note, you could use the non-standard CPU_COUNT(&my_set) macro routine to retrieve how many cores are assigned to this process instead of using a count variable within the loop as in the above example.

Set Process Affinity

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <limits.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char* argvi[])
{
pid_t pid = getpid();
cpu_set_t my_set;
int ret;

CPU_ZERO(&my_set);
CPU_SET(1, &my_set);
CPU_SET(2, &my_set);

ret = sched_setaffinity(0, sizeof(my_set), &my_set);
printf("pid %d \n", pid);

// A busy loop to keep the program from terminating while
// you use taskset to check if affinity is set as you wanted
long x = 0;
long i = 0;
while (i < LONG_MAX)
{
x += i;
++i;
}
printf("%ld\n",x);
return 0;
}
The program is set to bind to cores 1 and 2 (assuming you have that many cores) using the two CPU_SET macro invocations. You can check if this is set correctly using the taskset command again. The output of the program will include its process id, say pid. Use this as follows to check with taskset.
taskset -pc pid
Note, I've included a busy loop after printing the pid of the program just so that it'll keep running while you check if affinity is set correctly.

Chamara SilvaHow to disable HTTP Keep-Alive connections in WSO2 ESB / API Manager

WSO2 API Manager and ESB use HTTP keep-alive connections by default. When keep-alive is enabled, the same HTTP connection is reused for sending requests and receiving responses. If you want to disable keep-alive connections, the NO_KEEPALIVE property (set to true in the axis2 scope) needs to be added to the in-sequence of the proxy service or API.

Waruna Lakshitha JayaweeraRest Service with ODE process

Overview

Web services can be of two types: SOAP or REST. BPEL processes communicate using SOAP messages. This post describes how we can call a REST service within a BPEL process.

Why we need to access Rest Service via WSDL

Since REST is lightweight, most social networks and public services are deployed as RESTful services. There are requirements where we need to provide a WSDL file to the client, but through it we need to access a REST service, especially when writing BPEL processes.

Apache ODE – WSDL 1.1 Extensions for REST

The traditional HTTP binding in WSDL supports only GET and POST operations, but REST requires the additional PUT and DELETE operations. This can be satisfied using the WSDL 1.1 Extensions for REST with Apache ODE.

User Service Rest Process

I wrote a sample REST process called the User REST process. This is the BPEL model of the process.

Here the User Process takes a user ID and user name as input and creates the user in the service store with the PUT operation. POST will update the username, appending '@wso2.com', and the GET operation provides the user object to the response, which will be the output. Before sending the output, the DELETE operation will delete the user. The process doesn't do a meaningful task, but it was modeled to cover all four operations of a REST service. Let's look at the sample REST requests made during the process.

Get
Get operation will give me user name based on given user_id.
http://localhost:9764/UserService_1.0.0/services/user_service/userservice/users/name/1
Put
Put will add new user
http://localhost:9764/UserService_1.0.0/services/user_service/userservice/users/name/1/waruna
Post
Post will update the username for given id
location="http://localhost:9764/UserService_1.0.0/services/user_service/userservice/users/name/1/waruna@wso2.com
Delete
delete will remove the user for given ID
location="http://localhost:9764/UserService_1.0.0/services/user_service/userservice/users/name/1

All these REST operations should take user input rather than hard-coded values. We know BPEL accesses external services using partner links via WSDL files. When developing a BPEL process to call a REST service, we need to create a separate WSDL file for it, which uses the WSDL 1.1 extensions to define the REST service calls.
As an example, I will describe step by step how we can take user input and pass it into the PUT REST operation via the WSDL. The same approach can be used for the other REST operations.

1) Here is the user schema, consisting of the username and user ID.

<wsdl:types>
<xsd:schema attributeFormDefault="qualified"
elementFormDefault="unqualified" targetNamespace="http://wso2.org/bps/ignorens">
<xsd:element name="user">
<xsd:complexType>
<xsd:sequence>
<xsd:element minOccurs="0" name="uid" nillable="true"
type="xsd:string" />
<xsd:element minOccurs="0" name="uname" nillable="true"
type="xsd:string" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="userID">
<xsd:complexType>
<xsd:sequence>
<xsd:element minOccurs="0" name="uid" nillable="true"
type="xsd:string" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
</wsdl:types>

2) Put Message can be defined as follows.
<wsdl:message name="putUserNameRequest">
<wsdl:part name="user" element="ignore:user" />
</wsdl:message>
<wsdl:message name="putUserNameResponse">
<wsdl:part name="part" type="xsd:string" />
</wsdl:message>


3) Port type for PUT operation can be specified as follows.
<wsdl:portType name="UserServicePutPT">
<wsdl:operation name="putUserName">
<wsdl:input message="tns:putUserNameRequest" />
<wsdl:output message="tns:putUserNameResponse" />
</wsdl:operation>
</wsdl:portType>


4) The WSDL binding for the PUT operation can be specified as follows. Here I am using {} placeholders to take data from the request message and pass it via the REST PUT URL.

<wsdl:binding name="UserServicePutHTTP" type="tns:UserServicePutPT">
<http:binding verb="PUT" />
<wsdl:operation name="putUserName">
<http:operation
location="http://10.100.5.24:9764/UserService_1.0.0/services/user_service/userservice/users/name/{uid}/{uname}" />
<wsdl:input>
<http:urlReplacement />
</wsdl:input>
<wsdl:output>
<mime:content part="part" type="text/xml" />
</wsdl:output>
</wsdl:operation>
</wsdl:binding>
5) Service for Put operation can be written as follows.
<wsdl:service name="UserServicePut">
<wsdl:port binding="tns:UserServicePutHTTP" name="UserServicePutHTTP">
<http:address
location="http://10.100.5.24:9764/UserService_1.0.0/services/user_service/userservice/users/name/1" />
</wsdl:port>
</wsdl:service>


You can checkout Rest user process from following link.

https://svn.wso2.org/repos/wso2/carbon/platform/branches/turing/products/bps/3.2.0/modules/samples/product/src/main/resources/bpel/2.0/UserRestProcess/

Please find sample rest process from here.
https://github.com/wso2/product-bps/tree/master/modules/services/UserService

Lakmal WarusawithanaWSO2 Hackathon in the Cloud



UPDATE : The hackathon is postponed. We'll let you know the new date as soon as possible

# 100 Amazon EC2 Instances
# 2000 Docker Containers
# 70 Node Kubernetes Cluster
# 4 Billion Events  
# 4GB Data
# 24 Hours
# $10,000 in Prizes


We are hosting our first ever 24-hour hackathon in the cloud to coincide with our 10th year anniversary celebrations. Are you ready to take on our big data in the cloud challenge, and show us how you can process over 4 billion events in the cloud?  More detail WSO2 Hackathon in the Cloud


Here I'm going to share some inside information on how we are technically preparing for the WSO2 Hackathon in the Cloud.

A single WSO2 Private PaaS environment to host the entire hackathon


WSO2 Private PaaS has multi-tenant capability, which comes from the underlying Apache Stratos, allowing us to host the hackathon in a single PaaS environment while meeting all the scalability, isolation, and security requirements. Beyond the PaaS capabilities, WSO2 CEP and WSO2 DAS provide the analytics platform needed to solve the big data challenge.

Figure 01 - WSO2 Private PaaS
For the PaaS environment we are using the following cutting-edge technologies and releases, which became generally available within July itself, making it even more exciting!

Apache Stratos 4.1.0 will be set up on top of 100 EC2 instances, which together have 2580 virtual CPUs and 4425 GB of memory. It will create 10 Kubernetes clusters using 70 Kubernetes nodes, each node having 36 virtual CPUs and a 60 GB memory footprint. 2000 Docker containers will be orchestrated across the 70 Kubernetes nodes, allowing each team to scale up to 200 Docker containers in their application.

Figure 02 - Apache Stratos with Kubernetes


Each team will get a Stratos tenant with pre-configured WSO2 CEP and WSO2 DAS cartridge groups including the following cartridges.


  • WSO2 DAS Group
    • WSO2 DAS Data Receiver/Publisher
    • WSO2 DAS Analytics (Spark)
    • WSO2 DAS Dashboard
    • Hadoop (HDFS)
    • Apache Hbase
    • Zookeeper
    • MySQL
  • WSO2 CEP Group
    • WSO2 CEP Manager
    • WSO2 CEP Worker
    • Apache Storm Supervisor
    • Apache Storm UI
    • Zookeeper
    • Nimbus
    • MySQL

Team members should create a scalable application using the above two groups to process all 4 billion events and solve the given 2 queries within a very short period of time. The solution architecture of your application will definitely play a key role in winning the challenge, alongside writing the necessary CEP and DAS queries to solve the given problems. Teams can use the Apache Stratos web console to create the solution architecture of their application in a simple drag-and-drop manner.

The WSO2 DAS cartridge group comes pre-configured to work in the following architecture, which we expect each team to use to solve query 02 - Outlier.


Figure 03 - WSO2 DAS Architecture
The WSO2 CEP cartridge group is pre-configured to work with Apache Storm, enabling parallel processing to help solve query 01 - Load Prediction in a short period of time.
Figure 04 - WSO2 CEP with Storm

Metering and Monitoring


Apache Stratos will monitor each Docker instance and auto-heal it along with its dependencies if any failure happens while processing the data. Application logs of all cartridges will be published to a central WSO2 DAS which runs on separate AWS EC2 instances. Each team can view their application logs using the central DAS dashboard, in case they want to see each service's logs. Captured health statistics (CPU, memory usage, etc.) published by the cartridge agents in the Docker containers are recorded in DAS for later analysis, which can be useful to identify how each team worked while processing all 4 billion events.

Are you ready for the Challenge?


I highly recommend participating in the hackathon if you are up for a challenge; it is going to be a rare occasion to show your colours while having tons of fun with these cutting-edge technologies.

We will conduct several webinars and tutorials and publish samples on Apache Stratos and the WSO2 Analytics Platform to help you sharpen your knowledge in the pre-hackathon week, and during the hackathon IRC channels will be open for 24 hours for any help.




Srinath PereraWhy We need SQL like Query Language for Realtime Streaming Analytics?


I was at O'Reilly Strata last week, and interest in realtime analytics was certainly at its peak.

Realtime analytics comes in two flavours.
  1. Realtime Streaming Analytics (static queries given once that do not change; they process data as it comes in, without storing it). CEP, Apache Storm, Apache Samza, etc. are examples of this.
  2. Realtime Interactive/Ad-hoc Analytics (users issue ad-hoc dynamic queries and the system responds). Druid, SAP HANA, VoltDB, MemSQL, and Apache Drill are examples of this.
In this post, I am focusing on Realtime Streaming Analytics. (Ad-hoc analytics uses a SQL like query language anyway.)

Still, when thinking about realtime analytics, people think only of counting use cases. However, that is just the tip of the iceberg. Due to the time dimension inherent in realtime use cases, there is a lot more you can do. Let us look at a few common patterns.
  1. Simple counting (e.g. failure count)
  2. Counting with Windows ( e.g. failure count every hour)
  3. Preprocessing: filtering, transformations (e.g. data cleanup)
  4. Alerts , thresholds (e.g. Alarm on high temperature)
  5. Data Correlation, Detect missing events, detecting erroneous data (e.g. detecting failed sensors)
  6. Joining event streams (e.g. detect a hit on soccer ball)
  7. Merge with data in a database, collect, update data conditionally
  8. Detecting Event Sequence Patterns (e.g. small transaction followed by large transaction)
  9. Tracking - follow some related entity’s state in space, time etc. (e.g. location of airline baggage, vehicle, tracking wild life)
  10. Detect trends – Rise, turn, fall, Outliers, Complex trends like triple bottom etc., (e.g. algorithmic trading, SLA, load balancing)
  11. Learning a Model (e.g. Predictive maintenance)
  12. Predicting next value and corrective actions (e.g. automated car)

Why we need SQL like query language for Realtime Streaming  Analytics?

Each of the above has come up in real use cases, and we have implemented them using SQL-like CEP query languages. Knowing the internals of implementing CEP core concepts like sliding windows and temporal query patterns, I do not think every streaming use case developer should rewrite those. The algorithms are not trivial, and they are very hard to get right!

Instead, we need higher levels of abstraction. We should implement those once and for all, and reuse them. The best lesson comes from Hive and Hadoop, which do exactly that for batch analytics. I have explained Big Data with Hive many times, and most people get it right away. Hive has become the major programming API for most Big Data use cases.

The following is a list of reasons for a SQL-like query language:
  1. Realtime analytics are hard. Not every developer wants to hand-implement sliding windows, temporal event patterns, etc.
  2. Easy to follow and learn for people who know SQL, which is pretty much everybody.
  3. SQL-like languages are expressive, short, sweet, and fast!!
  4. SQL-like languages define core operations that cover 90% of problems.
  5. They let experts dig in when they like!
  6. Realtime analytics runtimes can better optimize execution with a SQL-like model. Most optimisations have already been studied, and there is a lot you can borrow from database optimisations.
Finally, what are such languages? There are many defined in the world of Complex Event Processing (e.g. WSO2 Siddhi, Esper, TIBCO StreamBase, IBM InfoSphere Streams, etc.). SQLstream has a fully ANSI SQL compliant version. Last week I gave a talk at Strata discussing this problem in detail and how CEP could fit the bill. You can find the slide deck below.


Srinath PereraIntroducing WSO2 Analytics Platform: Note for Architects

WSO2 has had several analytics products, WSO2 BAM and WSO2 CEP, for some time (or Big Data products if you prefer the term). We are adding WSO2 Machine Learner, a product to create, evaluate, and deploy predictive models, to that mix very soon. This post describes how all of those fit into a single story.

Following Picture summarises what you can do with the platform. 



Let's look at each stage depicted in the picture in detail.

Stage 1: Collecting Data

There are two things for you to do.

Define Streams - Just like you create tables before you put data into a database, you first define streams before sending events. Streams are a description of what your data looks like (a schema). You will use the same streams to write queries in the second stage. You do this via the CEP or BAM admin console (https://host:9443/carbon) or via the Sensor API described in the next step.


Publish Events - Now you can publish events. We provide a single Sensor API to publish events to both the batch and realtime pipelines. The Sensor API is available as Java clients (Thrift, JMS, Kafka), JavaScript clients* (WebSocket and REST), and hundreds of connectors via WSO2 ESB. See How to Publish Your own Events (Data) to WSO2 Analytics Platform (BAM, CEP) for details on how to write your own data publisher.

Stage 2: Analyse Data

Now it is time to analyse the data. There are two ways to do this: analytics and predictive analytics. 

Write Queries

For both batch and realtime processing you can write SQL-like queries. For batch queries we support Hive SQL, and for realtime queries we support the Siddhi Event Query Language.

Example 1: Realtime Query (e.g. Calculate Average Temperature over 1 minute sliding window from the Temperature Stream) 

from TemperatureStream#window.time(1 min)
select roomNo, avg(temp) as avgTemp
insert into HotRoomsStream ;

Example 2: Batch Query (e.g. Calculate Average Temperature per each hour from the Temperature Stream)

insert overwrite table TemperatureHistory
select hour, average(t) as avgT, buildingId
from TemperatureStream group by buildingId, getHour(ts);


Build Machine Learning (Predictive Analytics) Models

Predictive analytics lets us learn “logic” from examples where such logic is complex. For example, we can build “a model” to find fraudulent transactions. To that end, we can use machine learning algorithms to train the model with historical data about fraudulent and non-fraudulent transactions.


WSO2 Analytics Platform supports predictive analytics in multiple forms:
  1. Use the WSO2 Machine Learner (2015 Q2) Wizard to build machine learning models, and use them within your business logic. For example, WSO2 CEP, BAM and ESB will support running those models.
  2. R is a widely used language for statistical computing. We can build models using R, export them as PMML (an XML description of machine learning models), and use the models within WSO2 CEP. You can also directly call R scripts from CEP queries.
  3. WSO2 CEP also includes several streaming regression and anomaly detection operators.

Stage 3: Communicate the Results

OK, now we have some results, and we need to communicate them to the users or systems that care about them. That communication can be done in three forms.
  1. Alerts detect special conditions and cover the last mile to notify the users (e.g. email, SMS, push notifications to a mobile app, pager, triggering a physical alarm). This can be easily done with CEP.
  2. Visualising data via dashboards provides the “overall idea” at a glance (e.g. a car dashboard). Dashboards support customising and creating users' own views. Also, when there is a special condition, they draw the user's attention to it and let the user drill down and find details. The upcoming WSO2 BAM and CEP 2015 Q2 releases will have a Wizard to start from your data and build custom visualisations, with support for drill-downs as well.
  3. APIs expose data to users outside the organisational boundary, and are often used by mobile phones. WSO2 API Manager is one of the leading API solutions, and you can use it to expose your data as APIs. In later releases, we are planning to add support for exposing data as APIs via a Wizard.

Why choose WSO2 Analytics Platform?

Reason 1: One Platform for Realtime, Batch, and Combined Processing - with a single API for publishing events, and with support for combined use cases like the following:
  1. Run a similar query in the batch pipeline and the realtime pipeline (a.k.a. Lambda Architecture)
  2. Train a machine learning model (e.g. a fraud detection model) in the batch pipeline, and use it in the realtime pipeline (use cases: fraud detection, segmentation, predicting the next value, predicting churn)
  3. Detect conditions in the realtime pipeline, but switch to detailed analysis using the data stored in the batch pipeline (e.g. fraud, giving deals to customers on an e-commerce site)
Reason 2: Performance - WSO2 CEP can process 100K+ events per second and is one of the fastest realtime processing engines around. WSO2 CEP was a finalist in the DEBS Grand Challenge 2014, where it processed 0.8 million events per second with 4 nodes.

Reason 3: Scalable Realtime Pipeline with support for running SQL-like CEP queries on top of Storm - Users can write queries using the SQL-like Siddhi Event Query Language, which provides higher-level operators for building complex realtime queries. See SQL-like Query Language for Real-time Streaming Analytics for more details. 
For batch processing, we use Apache Spark (from the 2015 Q2 release onwards), and for realtime processing, users can run those queries in one of two modes.
  1. Run the queries using two CEP nodes, one node acting as the HA backup for the other. Since WSO2 CEP can process in excess of a hundred thousand events per second, this choice is sufficient for many use cases.
  2. Partition the queries and streams, build an Apache Storm topology running CEP nodes as Storm Spouts, and run it on top of Apache Storm. Please see the slide deck Scalable Realtime Analytics with declarative SQL like Complex Event Processing Scripts. This enables users to run complex queries as supported by Complex Event Processing, while still scaling the computation for large data streams. 
Reason 4: Support for predictive analytics - building machine learning models, comparing them and selecting the best one, and using them within real-life distributed deployments.


Almost forgot - all of this is open source under the Apache License. Most design decisions are discussed publicly at architecture@wso2.org.


Refer to the following talk at WSO2Con Europe for more details (slides).



If you find this interesting, please try it out. Please reach out to me directly or through http://wso2.com/contact/ if you want more information.




Shani Ranasinghe[WSO2 APIM ] [Gateway] How to limit traffic to WSO2 APIM Gateway using API Handlers

Recently I had to find a solution to limit the traffic that passes through the WSO2 APIM Gateway so that only requests from the testing team are allowed through. The architecture of the system is quite straightforward; please find a brief drawing of the system architecture below.
When it comes to limiting the traffic to the Gateway, there are several ways to handle this. It can be handled at the network level (e.g. routing rules) or even by simply starting the server on another port that is not visible to the outside world. In this article I am going to describe how to do it from the Gateway itself; this method is just one more option for achieving the goal.

 When an API is created in WSO2 APIM using the Publisher, the Publisher propagates the changes to the WSO2 APIM Gateway. In other words, it creates a synapse configuration for the API on the Gateway. This synapse configuration holds a set of handlers. These handlers are placed to achieve different functionality; for example, the APIAuthenticationHandler is intended to validate the token. More information on the handlers can be found at [1]. The handlers are listed in a particular order in the synapse configuration, and they are executed in the order they appear. Since the handlers are the first point of contact for the API, we can use a handler to filter out requests depending on whether they come from a testing device or not.

 To achieve this, we need an identifier to be sent with the request. If we have that, filtering is straightforward. First things first, we need to figure out what the identifier is. In my case, the request sends the device ID in a header, under the parameter "Auth". So in my handler I will read the header and check for this Auth value.
 How do I tell which device IDs are allowed to continue? For this, I will read the device IDs from a system property, so that the allowed device IDs can be passed from the command line as a system property when the server is started up.

 Okay, having given a brief description of what we are going to achieve, let's see how we can do it.

1. Create the custom handler. To create the custom handler we need to create a Maven project and create a new class. This class must extend "org.apache.synapse.rest.AbstractHandler". Please find the sample code below.

package org.wso2.CustomHandler;

import java.util.Map;

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.AbstractHandler;

public class GatewayTrafficContrlHandler extends AbstractHandler {

    public boolean handleRequest(MessageContext arg0) {
        String deviceId = null;
        String identifier = null;
        String[] identifiers;

        // Obtain the identifier from the transport headers.
        Map headers = (Map) ((Axis2MessageContext) arg0).getAxis2MessageContext()
                .getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
        identifier = (String) headers.get("Auth");

        if (identifier != null && !identifier.isEmpty()) {
            // Get the first identifier, which is the device ID.
            identifiers = identifier.split("\\|");
            if (identifiers != null && identifiers.length > 0) {
                deviceId = identifiers[0];
            }

            // Get the device ID list which is passed as a system property from the
            // command line. Only these device IDs will be allowed to pass through the handler.
            String[] supportedDeviceIdList = null;
            String supportedDeviceIds = System.getProperty("deviceIds");
            if (supportedDeviceIds != null && !supportedDeviceIds.isEmpty()) {
                supportedDeviceIdList = supportedDeviceIds.split(",");
            }

            // Check if the device ID sent in the request is in the list of
            // device IDs passed as a system property.
            if (supportedDeviceIdList != null && supportedDeviceIdList.length > 0) {
                for (int index = 0; index < supportedDeviceIdList.length; index++) {
                    if (supportedDeviceIdList[index].equals(deviceId)) {
                        return true;
                    }
                }
            }
        }
        return false;
    }

    public boolean handleResponse(MessageContext arg0) {
        // TODO Auto-generated method stub
        return false;
    }

}

2. Once we create the class that extracts and checks the identifier, we need to build the JAR.
3. Copy the created JAR to the <APIM_HOME>/repository/components/lib folder. 
4. Then start the APIM server with the system property -DdeviceIds, and log into the APIM      
     Gateway's management console, e.g.: ./wso2server.sh -DdeviceIds=123,456,789 
5. Go to Service Bus > Source View in the Main menu.
6. In the configuration select the API, and then check the handlers section. In the handlers section, add 
    the class pointing to the newly created handler.


 <handlers>
    <handler class="org.wso2.CustomHandler.GatewayTrafficContrlHandler"/>
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.APIThrottleHandler">
        <property name="id" value="A"/>
        <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
    </handler>
    <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageHandler"/>
    <handler class="org.wso2.carbon.apimgt.usage.publisher.APIMgtGoogleAnalyticsTrackingHandler"/>
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerExtensionHandler"/>
 </handlers>

7. Make sure the newly created handler is the first handler in the list.
8. Once the change is done, save it and observe the console for the API being reloaded. 
9. Test the Handler by doing a REST API call.

And that's it. 

 The above solution will only let requests with device IDs 123, 456 or 789 pass through.

  References 

[1] https://docs.wso2.com/display/AM160/Architecture#Architecture-APIHandlers
[2] https://docs.wso2.com/display/AM160/Writing+a+Custom+Authentication+Handler

sanjeewa malalgodaHow to get MD5SUM of all files available in conf directory

We can use the following command to get the MD5SUM of every file under a directory. This approach can be used to compare the configuration files of multiple servers and check whether they are the same.
find ./folderName -type f -exec md5sum {} \; > test.xml
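If you would rather compute the checksums from Java (for example, inside a small verification utility), a minimal sketch using the standard MessageDigest and NIO APIs might look like this; the folder path is just a placeholder.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.stream.Stream;

public class ConfChecksums {
    public static void main(String[] args) throws Exception {
        Path confDir = Paths.get("repository/conf"); // placeholder folder
        try (Stream<Path> files = Files.walk(confDir)) {
            files.filter(Files::isRegularFile).forEach(ConfChecksums::printMd5);
        }
    }

    private static void printMd5(Path file) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(Files.readAllBytes(file));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            // Same "checksum  path" layout that md5sum prints.
            System.out.println(hex + "  " + file);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}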

Prabath SiriwardenaUnderstanding Logjam and making WSO2 servers safe

Logjam's discovery (May 2015) came as a result of follow-up investigations into the FREAK (Factoring attack on RSA-EXPORT Keys) flaw, which was revealed in March.

Logjam is a set of vulnerabilities in the Diffie-Hellman key exchange that affect browsers and servers using TLS. Before we delve deep into the vulnerability, let's have a look at how Diffie-Hellman key exchange works.

How does Diffie-Hellman key exchange work?


Let's say Alice wants to send a secured message to Bob over an unsecured channel. Since the channel is not secured, the message itself has to be secured, to avoid it being seen by anyone other than Bob.

Diffie-Hellman key exchange provides a way to exchange keys between two parties over an unsecured channel, so that the established key can be used to encrypt messages later. First, both Alice and Bob have to agree on a prime modulus (p) and a generator (g) - publicly. These two numbers need not be protected. Then Alice selects a private random number (a) and calculates g^a mod p, which is also known as Alice's public secret - let's say it's A.

In the same manner, Bob also picks his own private random number (b) and calculates g^b mod p, which is also known as Bob's public secret - let's say it's B.

Now, both will exchange their public secrets over the unsecured channel, that is A and B - or g^a mod p and g^b mod p.

Once Bob receives A from Alice, he will calculate the common secret (s) as A^b mod p, and in the same way Alice will also calculate the common secret (s) as B^a mod p.

Bob's common secret: A^b mod p -> (g^a mod p ) ^b mod p -> g^(ab) mod p

Alice's common secret: B^a mod p -> (g^b mod p ) ^a mod p -> g^(ba) mod p

Here comes the beauty of modular arithmetic. The common secret derived at Bob's end is the same as the one derived at Alice's end. The bottom line is: to derive the common secret you need to know either p, g, a and B, or p, g, b and A. Anyone intercepting the messages transferred over the wire would only know p, g, A and B - so they are not in a position to derive the common secret.
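To make the arithmetic concrete, here is a tiny Java sketch using java.math.BigInteger with deliberately small, insecure toy numbers (p=23, g=5); both sides end up with the same shared secret.

import java.math.BigInteger;

public class DiffieHellmanToy {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(23); // public prime modulus (toy value)
        BigInteger g = BigInteger.valueOf(5);  // public generator

        BigInteger a = BigInteger.valueOf(6);  // Alice's private random number
        BigInteger b = BigInteger.valueOf(15); // Bob's private random number

        BigInteger A = g.modPow(a, p); // Alice's public secret: g^a mod p = 8
        BigInteger B = g.modPow(b, p); // Bob's public secret:   g^b mod p = 19

        BigInteger aliceShared = B.modPow(a, p); // B^a mod p
        BigInteger bobShared   = A.modPow(b, p); // A^b mod p

        // Both print 2 - the common secret g^(ab) mod p.
        System.out.println(aliceShared + " == " + bobShared);
    }
}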

How does Diffie-Hellman key exchange relate to TLS? 

Let's have a look at how TLS works.


I will not go through each and every message flow in the above diagram here, but will only focus on the Server Key Exchange and the Client Key Exchange messages.

The Server Key Exchange message will be sent immediately after the Server Certificate message (or the Server Hello message, if this is an anonymous negotiation).  The Server Key Exchange message is sent by the server only when the Server Certificate message (if sent) does not contain enough data to allow the client to exchange a premaster secret. This is true for the following key exchange methods:
  • DHE_DSS 
  • DHE_RSA 
  • DH_anon
It is not legal to send the Server Key Exchange message for the following key exchange methods:
  • RSA 
  • DH_DSS 
  • DH_RSA 
Diffie-Hellman is used in TLS to exchange keys based on the crypto suite agreed upon during the Client Hello and Server Hello messages. If it is agreed to use DH as the key exchange protocol, then in the Server Key Exchange message server will send over the values of p, g and its public secret (Ys) - and will keep the private secret (Xs) for itself. In the same manner using the p and g shared by the server - client will generate its own public secret (Yc) and the private secret (Xc) - and will share Yc via the Client Key Exchange message with the server. In this way, both the parties can derive their own common secret.

How would someone exploit this, and what in fact is the Logjam vulnerability?

On 20th May, 2015, a group from INRIA, Microsoft Research, Johns Hopkins, the University of Michigan, and the University of Pennsylvania published a deep analysis of the Diffie-Hellman algorithm as used in TLS and other protocols. This analysis included a novel downgrade attack against the TLS protocol itself called Logjam, which exploits EXPORT cryptography.

In the DH key exchange, the cryptographic strength relies on the prime number (p) you pick, not in fact on the random numbers picked by the server side or by the client side. It is recommended that the prime number be 2048 bits long. The following table shows how hard it is to break a DH key exchange based on the length of the prime number.


Fortunately, no one is using 512-bit prime numbers - except in EXPORT cryptography. During the crypto wars of the 90's it was decided to make ciphers weaker when they were used to communicate outside the USA, and these weaker ciphers are known as EXPORT ciphers. This law was overturned later, but unfortunately TLS was designed before that and still has support for EXPORT ciphers. Under the EXPORT ciphers, DH prime numbers cannot be longer than 512 bits. If the client wants to use DH EXPORT ciphers with a 512-bit prime number, then during the Client Hello message of the TLS handshake it has to send a DHE_EXPORT cipher suite.

No legitimate client wants to use a weak prime number, so none will ask the server to use DHE_EXPORT - but most servers still support the DHE_EXPORT cipher suites. That means, if someone in the middle manages to intercept the Client Hello initiated by the client and change the requested cipher suite to DHE_EXPORT, the server will still accept it and the key exchange will happen using a weaker prime number. These types of attacks are known as TLS downgrade attacks, since the cipher suite originally requested by the client is downgraded by changing the Client Hello message.

But wouldn't this change ultimately be detected by the TLS protocol itself? TLS has a provision to detect whether any of the handshake messages were modified in the middle, by validating a hash of all the messages sent and received by both parties - at both ends. This happens at the end of the handshake. The client derives the hash of the messages it sent and received and sends that hash to the server, and the server validates it against the hash of all the messages it sent and received. Then the server in turn derives the hash of the messages it sent and received and sends it to the client, and the client validates it against its own hash. Since by this time the common secret is established, the hash is encrypted with the derived secret key - which, at this point, is known to the attacker. So the attacker can create a hash that is accepted by both parties, encrypt it, and send it over to both the client and the server.

To protect from this attack, the server should not respond to any of the weaker ciphers, in this case DHE_EXPORT.

How to remove the support for weaker ciphers from WSO2 Carbon 4.0.0+ based products ?

The cipher set which is used in a Carbon server is defined by the embedded Tomcat server (assuming JDK 1.7.*)
  • Open CARBON_HOME/repository/conf/tomcat/catalina-server.xml file. 
  • Find the Connector configuration corresponding to TLS. Usually there are only two connector configurations, and the connector corresponding to TLS has the connector property SSLEnabled=”true”. 
  • Add a new property “ciphers” inside the TLS connector configuration with the value as follows.
    • If you are using tomcat version 7.0.34 :
      •  ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA"
    • If you are using tomcat version 7.0.59:
      •  ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA" 
  • Restart the server. 
Now, to verify that the configurations are all set correctly, you can run TestSSLServer.jar, which can be downloaded from here.

$ java -jar TestSSLServer.jar localhost 9443 

In the output you get by running the above command, there is a section called “Supported cipher suites”. If all configurations are done correctly, it should not contain any export ciphers. 

From Firefox v39.0 onwards, the browser does not allow access to web sites which support DHE with keys shorter than 1023 bits (not just DHE_EXPORT). Key lengths of 768 bits and 1024 bits are assumed to be attackable, depending on the computing resources the attacker has. Java 7 uses 768-bit keys even for non-export DHE ciphers. This will probably not be fixed until Java 8, so we cannot use these ciphers. It is recommended to remove not just the DHE_EXPORT cipher suites but also all the DHE cipher suites. In that case use the following for the 'ciphers' configuration.
  • If you are using tomcat version 7.0.34 :
    • ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA"
  • If you are using tomcat version 7.0.59:
    • ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA" 
The above is also applicable for Chrome v45.0 onwards.
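As a quick sanity check of which cipher suites your JVM itself can offer (independently of the Tomcat configuration), you can list them with the standard JSSE API; this is only a small helper for picking the 'ciphers' value, not a WSO2-specific tool.

import javax.net.ssl.SSLServerSocketFactory;

public class ListCipherSuites {
    public static void main(String[] args) {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();

        System.out.println("Enabled by default:");
        for (String suite : factory.getDefaultCipherSuites()) {
            System.out.println("  " + suite);
        }

        System.out.println("Supported by this JVM:");
        for (String suite : factory.getSupportedCipherSuites()) {
            // Anything containing EXPORT, and the DHE suites discussed above, are candidates for removal.
            System.out.println("  " + suite);
        }
    }
}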

      How to remove the support for weaker ciphers from WSO2 pre-Carbon 4.0.0 based products ?

      • Open CARBON_HOME/repository/conf/mgt-transports.xml
      • Find the transport configuration corresponding to TLS - usually this has port 9443 and the name https.
      • Add the following new element.
        • <parameter name="ciphers">SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA</parameter>

      Jayanga DissanayakeWSO2 BAM : How to change the scheduled time of a script in a toolbox

      WSO2 Business Activity Monitor (WSO2BAM) [1] is a fully open source, complete solution for monitoring and storing a large volume of business-related activities and understanding business activity within SOA and Cloud deployments.

      WSO2 BAM comes with a predefined set of toolboxes.

      A toolbox consists of several components:
      1. Stream definitions
      2. Analytics scripts
      3. Visualizations components

      None of the above three components is mandatory.
      You can have a toolbox which has only stream definitions and analytics scripts but no visualization components.

      In WSO2 BAM, the toolbox always takes precedence. This means that if you manually change anything related to a component published via a toolbox, it will be overridden once the server is restarted.

      If you update,
      1. Schedule time
      This will update the schedule time, but the newly updated value will only be effective until the next restart. It will not be persisted. Once the server is restarted, the schedule time will revert to the original value from the toolbox.

      2. Stream definition
      If you change anything related to a stream definition, it might cause consistency issues. When the server is restarted, it will find that a stream definition already exists with the given name but with different configurations, so an error will be logged.

      So it is highly discouraged to manually modify the components deployed via a toolbox.

      The recommended way to change anything associated with a toolbox is to:
      1. Unzip the toolbox.
      2. Make the necessary changes.
      3. Zip the files again.
      4. Rename the file to <toolbox_name>.tbox
      5. Redeploy the toolbox.

      So, if you need to change the scheduled time of the Service_Statistics_Monitoring toolbox,
      get a copy of the Service_Statistics_Monitoring.tbox file that resides in the [BAM_HOME]/repository/deployment/server/bam-toolbox directory.

      Unzip the file. Open the file Service_Statistics_Monitoring/analytics/analyzers.properties

      Set the following configuration according to your requirement (the example below runs the script every 20 minutes):
      analyzers.scripts.script1.cron=0 0/20 * * * ?

      Create a zip file and change the name of the file to Service_Statistics_Monitoring.tbox

      And redeploy the toolbox.

      Now your changes are embedded in the toolbox, and each time the toolbox is deployed it will have the modified value.

      [1] http://wso2.com/products/business-activity-monitor/

      Jayanga DissanayakePublishing WSO2 APIM Statistics to WSO2 BAM

      WSO2 API Manager (WSO2APIM) [1] is a fully open source, complete solution for creating, publishing and managing all aspects of an API and its lifecycle.

      WSO2 Business Activity Monitor (WSO2BAM) [2] is a fully open source, complete solution for monitoring and storing a large volume of business-related activities and understanding business activity within SOA and Cloud deployments.

      Users can use these two products together, which collectively gives total control over management and monitoring of APIs.

      In this post I'm going to explain how APIM stat publishing and monitoring happens in WSO2APIM and WSO2BAM.

      Configuring WSO2 APIM to publish statistics

      You can find more information on setting up statistics publishing in [3]. Once you do your configurations, it should look like the below.

      <APIM_HOME>/repository/conf/api-manager.xml

      <APIUsageTracking>
      <!-- Enable/Disable the API usage tracker. -->
      <Enabled>true</Enabled>
      <PublisherClass>org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher</PublisherClass>
      <ThriftPort>7614</ThriftPort>
      <BAMServerURL>tcp://<BAM host IP>:7614/</BAMServerURL>
      <BAMUsername>admin</BAMUsername>
      <BAMPassword>admin</BAMPassword>
      <!-- JNDI name of the data source to be used for getting BAM statistics. This data source should
      be defined in the master-datasources.xml file in conf/datasources directory. -->
      <DataSourceName>jdbc/WSO2AM_STATS_DB</DataSourceName>
      </APIUsageTracking>

      <APIM_HOME>/repository/conf/datasources/master-datasources.xml


      <datasource>
      <name>WSO2AM_STATS_DB</name>
      <description>The datasource used for getting statistics to API Manager</description>
      <jndiConfig>
      <name>jdbc/WSO2AM_STATS_DB</name>
      </jndiConfig>
      <definition type="RDBMS">
      <configuration>
      <url>jdbc:mysql://localhost:3306/stats_db?autoReconnect=true&amp;</url>
      <username>db_username</username>
      <password>db_password</password>
      <driverClassName>com.mysql.jdbc.Driver</driverClassName>
      <maxActive>50</maxActive>
      <maxWait>60000</maxWait>
      <testOnBorrow>true</testOnBorrow>
      <validationQuery>SELECT 1</validationQuery>
      <validationInterval>30000</validationInterval>
      </configuration>
      </definition>
      </datasource>


      Configuring WSO2 BAM


      You can find more information on setting up statistics publishing in [3].

      Note that you only need to copy API_Manager_Analytics.tbox into super tenant space. (No need to do any configuration in tenant space)




      The above diagram illustrates how the stat data is published and eventually viewed through the APIM statistics view.

      1. Statistics information about the APIs from all tenants is published to WSO2 BAM via a single data publisher.

      2. API_Manager_Analytics.tbox has the stream definitions and Hive scripts needed to summarize statistics. These Hive scripts get executed periodically, and the summarized data is pushed into an RDBMS.

      3. When you visit the statistics pages in WSO2 APIM, it will retrieve the summarized statistics from the RDBMS and show them to you.

      Note: If you need to view the statistics of an API which is deployed in a particular tenant, log into WSO2 APIM as that tenant and view the statistics page.
      (You don't need to do any additional configuration to support tenant-specific statistics.)


      [1] http://wso2.com/api-management/
      [2] http://wso2.com/products/business-activity-monitor/
      [3] https://docs.wso2.com/display/AM180/Publishing+API+Runtime+Statistics

      Niranjan KarunanandhamCreating a new Profile for WSO2 Products


      The logical grouping of a set of features is known as a Server Profile in WSO2 Carbon servers. All WSO2 products have at least one profile, the default profile, which consists of all the features of the particular product. Apart from the default profile, WSO2 Carbon servers can be run in multiple other profiles.

      There can be a requirement where you want to run two profiles in the same instance. This can be done by creating a new profile using a Maven pom.xml and leveraging the carbon-p2-plugin. The prerequisites are Apache Maven 3.0.x and Apache Ant.

      Steps to create the pom.xml:
      1. Add the WSO2 Carbon Core zip of the Product you are creating the new profile for.
      2. Add the feature artifacts to generate the p2-repo
      3. Add the feature that you want to be grouped into the new profile.
      4. Create the carbon.product which is used to manage all aspects of the product.
      5. Now execute the "mvn clean install" command.
      6. This will create the "p2-repo" and "wso2carbon-core-<carbon kernel version>" folders in the target folder.
      7. Copy the newly created profile folder at target/wso2carbon-core-<carbon kernel version>/repository/components/<new profile name> and target/wso2carbon-core-<carbon kernel version>/repository/components/p2/org.eclipse.equinox/profileRegistry/<new profile name>.profile to the corresponding location of the product for which you want to create the new profile.
      8. Now execute "./bin/wso2server.sh -Dprofile=<new profile name>" to start the product in the new profile.

      A sample new profile is available for APIM 1.8.0.

      Darshana GunawardanaFREAK Vulnerability and Disabling weak export cipher suites in WSO2 Carbon 4.2.0 Based Products

      A group of researchers from Microsoft Research, INRIA and IMDEA has discovered a serious vulnerability in some SSL\TLS servers and clients that allows a man-in-the-middle attacker to downgrade the security of the SSL\TLS connection and gain access to all the encrypted data transferred between client and server.

      Web servers which support export ciphers are vulnerable to the FREAK attack. The attack is carried out in such a way that the attacker can downgrade the connection to the web server from strong RSA to (weak) export-grade RSA ciphers, and get a message signed with the weak RSA key. Quoting smacktls.com [2],

      Thus, if a server is willing to negotiate an export ciphersuite, a man-in-the-middle may trick a browser (which normally doesn’t allow it) to use a weak export key. By design, export RSA moduli must be less than 512 bits long; hence, they can be factored in less than 12 hours for $100 on Amazon EC2.

      Now you can understand the severity of this vulnerability. If you want to learn more about the FREAK attack, there are good references listed at the bottom of this post.

      Now let's have a look at the most important part: how to avoid FREAK in WSO2 Carbon products.

      In order to avoid FREAK vulnerability, the web server should avoid supporting weak export-grade RSA ciphers.

      How to disable weak export cipher suites in WSO2 Carbon 4.2.0 Based Products.

      The cipher set used in a carbon server is defined by the embedded tomcat server. So this configuration should be done in “catalina-server.xml”.

      1. Open the <CARBON_HOME>/repository/conf/tomcat/catalina-server.xml file.
      2. Find the Connector configuration corresponding to TLS. Usually there are only two connector configurations, and the connector corresponding to TLS has the connector property SSLEnabled=”true”.
      3. Add a new property “ciphers” inside the TLS connector configuration with the value as follows,
        • If you are using tomcat version 7.0.34,
          ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA"
          
        • If you are using tomcat version 7.0.59,
          ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA"
          
      4. Restart the server.

      Now to verify whether all configurations are set correctly, you can run TestSSLServer.jar which can be downloaded from here.

      $ java -jar TestSSLServer.jar localhost 9443

      In the output you get by running the above command, there is a section called “Supported cipher suites”.

      Supported cipher suites (ORDER IS NOT SIGNIFICANT):
        SSLv3
           RSA_WITH_RC4_128_MD5
           RSA_WITH_RC4_128_SHA
           DHE_RSA_WITH_DES_CBC_SHA
           DHE_RSA_WITH_3DES_EDE_CBC_SHA
           RSA_WITH_AES_128_CBC_SHA
           DHE_RSA_WITH_AES_128_CBC_SHA
        (TLSv1.0: idem)
      

      If all configurations are set correctly, it should not contain any export ciphers (like “RSA_EXPORT_WITH_RC4_40_MD5”, “RSA_EXPORT_WITH_DES40_CBC_SHA” or “DHE_RSA_EXPORT_WITH_DES40_CBC_SHA”). Output should be similar to above.

      The cipher suites supported by Carbon products are determined by the Java version the server is running on, hence the output you see when you run the command might differ from the above. The bottom line is that the list should not contain any ciphers which have “_EXPORT_” in their names.

      References :

      [1] Washingtonpost article on FREAK : http://www.washingtonpost.com/blogs/the-switch/wp/2015/03/03/freak-flaw-undermines-security-for-apple-and-google-users-researchers-discover/

      [2] “SMACK: State Machine AttaCKs” : https://www.smacktls.com/

      [3] “Attack of the week: FREAK (or ‘factoring the NSA for fun and profit’)” : http://blog.cryptographyengineering.com/2015/03/attack-of-week-freak-or-factoring-nsa.html

      [4] “Akamai Addresses CVE 2015-0204 Vulnerability” : https://blogs.akamai.com/2015/03/cve-2015-0204-getting-out-of-the-export-business.html


      Nadeeshaan GunasingheIntegrating WSO2 ESB Connectors in real world integration Scenarios

      For a moment, consider some of the third-party services we use daily - Twitter, for example. When we think about it, we can identify a lot of such services we use in our day-to-day lives, and as developers we usually connect to them through their APIs. WSO2 ESB Connectors allow you to interact with such third-party APIs from the ESB message flow. With these connectors you can access an API and interact with it through the functionality it exposes.

      E.g. with the Twitter Connector you can access the Twitter API and retrieve a user's tweets.

      At the moment you can find more than a hundred connectors in the Connector Repo, and you can build each connector individually with the Maven build tool.

      Working with a single connector is fairly simple, but it is also the less common case. Consider a scenario in which we communicate with two such third-party APIs; such scenarios are complex to handle and messy to configure. With WSO2 ESB Connectors you can easily achieve such integrations for real-world business scenarios.

      In this blog post, let's have a look at such a real-world scenario which addresses an important project management task.


      Integration Scenario with RedMine, Google Spreadsheet and Gmail.

      In this integration scenario my objective is to track the expired Redmine assignments of a particular user (or all users), retrieve that information, send an email notification, and log the details into a Google spreadsheet.

      In order to accomplish this task we need the Redmine Connector, the Google Spreadsheet Connector and the Gmail Connector. After building the connectors, and before you write the configuration, you need to upload them to the ESB. Make sure to enable each connector after uploading it.

      Enable Connectors

      Configuring connectors

      Gmail Connector
      <gmail.init>
      <username>your_user_name</username> <oauthAccessToken>your_access_token</oauthAccessToken>
      </gmail.init>

      Redmine Connector
      <redmine.init>
      <apiUrl>https://redmine.wso2.com</apiUrl>
      <username>User_Name</username>
                 <password>Password</password>
                 <responseType>xml</responseType>
      </redmine.init>

      Google Spreadsheet Connector
      <googlespreadsheet.oAuth2init>
                 <oauthConsumerKey>Consumer_Key</oauthConsumerKey>
                 <oauthConsumerSecret>Consumer_Secret</oauthConsumerSecret>
                 <oauthAccessToken>Access_Token</oauthAccessToken>
      <oauthRefreshToken>Refresh_Token</oauthRefreshToken>
      </googlespreadsheet.oAuth2init>

      Note: For development purposes you can use the Google OAuth 2.0 Playground to get the access tokens for the Gmail and Google Spreadsheet connectors.


      <?xml version="1.0" encoding="UTF-8"?> <definitions xmlns="http://ws.apache.org/ns/synapse"> <registry provider="org.wso2.carbon.mediation.registry.WSO2Registry"> <parameter name="cachableDuration">15000</parameter> </registry> <taskManager provider="org.wso2.carbon.mediation.ntask.NTaskTaskManager"/> <import name="gmail" package="org.wso2.carbon.connector" status="enabled"/> <import name="redmine" package="org.wso2.carbon.connector" status="enabled"/> <import name="evernote" package="org.wso2.carbon.connector" status="enabled"/> <import name="googlespreadsheet" package="org.wso2.carbon.connectors" status="enabled"/> <proxy name="evernote_gtask_proxy" transports="https http" startOnLoad="true" trace="disable"> <description/> <target> <inSequence onError="faultHandlerSeq">

      <redmine.init>
      <apiUrl>https://redmine.wso2.com</apiUrl>
      <username>User_Name</username>
                  <password>Password</password>
                  <responseType>xml</responseType>
      </redmine.init>
      <redmine.listIssues> <statusId>*</statusId> <limit>2</limit> <assignedToId>me</assignedToId> </redmine.listIssues>
      <property name="cur_date" expression="get-property('SYSTEM_DATE', 'yyyy-MM-dd')" scope="default"/> <iterate continueParent="true" preservePayload="true" attachPath="//issues" expression="//issues/issue"> <target> <sequence> <log level="custom"> <property name="ITERATOR" value="Iterating Over the redmine feature list"/> </log> <property name="issue-id" expression="//issues/issue/id"/> <property name="project-name" expression="//issues/issue/id/@name"/> <property name="description" expression="//issues/issue/description"/> <property name="due-date" expression="//issues/issue/due_date"/> <script language="js">var current_date = mc.getProperty("cur_date").split("-"); var due_date = mc.getProperty("due_date"); if (due_date === null){ mc.setProperty("is_due","false"); } else{ var due_date_arr = due_date.split("-"); var due_date_obj = new Date(due_date_arr[0],due_date_arr[1],due_date_arr[2]); var cur_date_obj = new Date(current_date[0],current_date[1],current_date[2]); if((cur_date_obj&gt;due_date_obj)&gt;0){ mc.setProperty("is_due","true"); } else{ mc.setProperty("is_due","true"); } } </script> <gmail.init> <username>username</username> <oauthAccessToken>access_token</oauthAccessToken> </gmail.init> <filter source="get-property('is_due')" regex="true"> <then> <gmail.sendMail> <subject>Subject</subject> <toRecipients>recipients</toRecipients> <textContent>Email_Body</textContent> </gmail.sendMail> </then> </filter> </sequence> </target> </iterate> <script language="js">var current_date = mc.getProperty("cur_date").split("-"); var issues = mc.getPayloadXML().issue; var returnCsv = "Issue_ID,Project_Name,Due_Date\n"; for(i=0;i&lt;issues.length();i++){ var id = issues[i].id; var name = issues[i].project.@name; var due_date = issues[i].due_date; if (due_date != null){ var due_date_arr = due_date.split("-"); var due_date_obj = new Date(due_date_arr[0],due_date_arr[1],due_date_arr[2]); var cur_date_obj = new Date(current_date[0],current_date[1],current_date[2]); if((cur_date_obj&gt;due_date_obj)&gt;0){ mc.setProperty("task_due","true"); returnCsv=returnCsv+id+","+name+","+due_date+"\n"; } else{ mc.setProperty("task_due","false"); } } }                mc.setPayloadXML(                    &lt;text&gt;{returnCsv}&lt;/text&gt; );</script> <log level="full"/> <googlespreadsheet.oAuth2init> <oauthConsumerKey>consumer_key</oauthConsumerKey> <oauthConsumerSecret>consumer_secret</oauthConsumerSecret> <oauthAccessToken>access_token</oauthAccessToken> <oauthRefreshToken>refresh_token</oauthRefreshToken> </googlespreadsheet.oAuth2init> <filter source="get-property('task_due')" regex="true"> <then> <googlespreadsheet.importCSV> <spreadsheetName>spread_sheet_name</spreadsheetName> <worksheetName>work_sheet_name</worksheetName> <batchEnable>true</batchEnable> <batchSize>10</batchSize> </googlespreadsheet.importCSV> </then> </filter> <respond/> </inSequence> <outSequence> <log level="full"/> <send/> </outSequence> </target> </proxy> <localEntry key="csv_transform"> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output method="text" indent="no" encoding="UTF-8"/> <xsl:template match="/"> Issue_ID,Project_Name,Due_Date <xsl:value-of select="//issues/issue/id"/> <xsl:text>,</xsl:text> <xsl:value-of select="//issues/issue/project/@name"/> <xsl:text>,</xsl:text> <xsl:value-of select="//issues/issue/due_date"/> </xsl:template> </xsl:stylesheet> <description/> </localEntry> <sequence name="fault"> <log level="full"> <property name="MESSAGE" value="Executing default 'fault' 
sequence"/> <property name="ERROR_CODE" expression="get-property('ERROR_CODE')"/> <property name="ERROR_MESSAGE" expression="get-property('ERROR_MESSAGE')"/> </log> <drop/> </sequence> <sequence name="main"> <in> <log level="full"/> <filter source="get-property('To')" regex="http://localhost:9000.*"> <send/> </filter> </in> <out> <send/> </out> <description>The main sequence for the message mediation</description> </sequence> </definitions>


      According to the above configuration, we first get the relevant Redmine issues via the Redmine connector. Then we use the iterate mediator to go through the result list and, for each overdue issue, send an email notification via the Gmail connector. Next, the script mediator extracts the required data fields and appends them to the payload, and finally the Google Spreadsheet connector writes them to the spreadsheet.

      Get the Connectors Here https://www.dropbox.com/s/pq4rtzrrurtw0rm/Connectors.zip?dl=0

      Chathurika Erandi De SilvaBits of being a QA

      The job of a QA is never easy. You have to be the middle man between the developer and the client; it's a responsible job, much like a blade with two sharp edges.

      In order to make the life of a QA easier, I have come up with the following points, which are entirely based on my experience.

      1. Know what you are testing

      It's very important for a QA to know what's being tested. As the QA has to put him or her self in the client's shoes, it's crucial for the QA to know the functionality of the system or product entirely. It's also important to think outside the box and not limit yourself. As an example, if you are testing a product with a lot of configurations, it's very important to know why the configurations are there and how they work.

      2. Plan what you test

      Don't try to be smart by simply starting to test as soon as the QA cycle begins. Plan the test. After getting the functionality into your head, build diagrams, etc. to help plan the test. Design the flow in which you will be testing, and set priorities for the tests so your flow will be continuous and smooth.

      3. Derive test cases

      Maybe your company doesn't hand you test cases and expects you to derive them yourself; test cases are simply what the client does and expects from the system. So derive them yourself, make sure you think as the end user, and cover the functionality.

      4. Adjust the test cases

      Adjust your test cases so that they adhere to the flow you created above. That way testing will be quick and easy, and you will be able to find defects early.

      5. Execute the tests by being in customer's shoes.

      Be the customer for the product, before it reaches the real users.

      Happy Testing !!!!! :)

      Kalpa WelivitigodaHow to move a XML element to be the first child of the parent element

      I was using the enrich mediator in WSO2 ESB to add a child to a parent element in the payload. The new element got added as the last child where I wanted it as the first. I tried action="sibling", but as per the Synapse code it also adds the element after the target element [1]. So I decided to make use of XSLT, and following is a sample stylesheet along with the input and expected messages.

      Input XML
      <root>
      <data>
      <B>value 1</B>
      <C>value 1</C>
      <D>value 1</D>
      <A>value 1</A>
      </data>
      </root>
      Expected XML
      <root>
      <data>
      <A>value 1</A>
      <B>value 1</B>
      <C>value 1</C>
      <D>value 1</D>
      </data>
      </root>
      Stylesheet 
      <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
      <xsl:output indent="yes" />
      <xsl:strip-space elements="*" />
      <xsl:template match="node() | @*">
      <xsl:copy>
      <xsl:apply-templates select="node() | @*" />
      </xsl:copy>
      </xsl:template>
      <xsl:template match="//A" />
      <xsl:template match="//data">
      <xsl:copy>
      <xsl:copy-of select="//A" />
      <xsl:apply-templates />
      </xsl:copy>
      </xsl:template>
      </xsl:stylesheet>
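If you were doing the same re-ordering in Java instead (say, inside a class mediator), the standard DOM API already gives the behaviour we want: insertBefore() detaches a node from its current position before re-inserting it, so moving <A> to the front is a one-liner. A minimal standalone sketch, using the same sample payload:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class MoveElementFirst {
    public static void main(String[] args) throws Exception {
        String xml = "<root><data><B>value 1</B><C>value 1</C><D>value 1</D><A>value 1</A></data></root>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        Element data = (Element) doc.getElementsByTagName("data").item(0);
        Element a = (Element) doc.getElementsByTagName("A").item(0);

        // insertBefore() removes <A> from its old position and re-inserts it as the first child of <data>.
        data.insertBefore(a, data.getFirstChild());

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(System.out));
    }
}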

      Kalpa WelivitigodaDate time format conversion with XSLT mediator in WSO2 ESB


      I recently came across a requirement where an xsd:dateTime in the payload needed to be converted to a different date-time format, as follows,

      Original format : 2015-01-07T09:30:10+02:00
      Required date: 2015/01/07 09:30:10

      In WSO2 ESB, I found that this transformation can be achieved through an XSLT mediator, a class mediator or a script mediator. In brief, the XSLT mediator uses an XSL stylesheet to transform the XML payload passed to the mediator, whereas in the class mediator and script mediator we use Java code and JavaScript code respectively to manipulate the message context. In this blog post I am going to present how this transformation can be achieved by means of the XSLT mediator.

      XSL Stylesheet
      <?xml version="1.0" encoding="UTF-8"?>
      <localEntry xmlns="http://ws.apache.org/ns/synapse" key="dateTime.xsl">
      <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" version="2.0">
      <xsl:output method="xml" omit-xml-declaration="yes" indent="yes" />
      <xsl:param name="date_time" />
      <xsl:template match="/">
      <dateTime>
      <required>
      <xsl:value-of select="format-dateTime(xs:dateTime($date_time), '[Y0001]/[M01]/[D01] [H01]:[m01]:[s01] [z]')" />
      </required>
      </dateTime>
      </xsl:template>
      </xsl:stylesheet>
      <description />
      </localEntry>


      Proxy configuration
      <?xml version="1.0" encoding="UTF-8"?>
      <proxy xmlns="http://ws.apache.org/ns/synapse" xmlns:xs="http://www.w3.org/2001/XMLSchema" name="DateTimeTransformation" transports="https http" startOnLoad="true" trace="disable">
      <target>
      <inSequence>
      <property name="originalFormat" expression="$body/dateTime/original" />
      <xslt key="dateTime.xsl">
      <property name="date_time" expression="get-property('originalFormat')" />
      </xslt>
      <log level="full" />
      </inSequence>
      </target>
      </proxy>

      The dateTime.xsl XSL stylesheet is stored as an inline XML local entry in the ESB.

      In the proxy, the original date is passed as a parameter ("date_time") to the XSL stylesheet. I have used the format-dateTime function, an XSLT 2.0 function, to do the transformation.

      Sample request
      <?xml version="1.0" encoding="UTF-8"?>
      <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Header />
      <soap:Body>
      <dateTime>
      <original>2015-01-07T09:30:10+02:00</original>
      </dateTime>
      </soap:Body>
      </soap:Envelope>

      Console output
      <?xml version="1.0" encoding="UTF-8"?>
      <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
      <soap:Body>
      <dateTime xmlns="http://ws.apache.org/ns/synapse" xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <required>2015/01/07 09:30:10 GMT+2</required>
      </dateTime>
      </soap:Body>
      </soap:Envelope>
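For comparison, the same conversion done in plain Java (for instance, if you went the class mediator route mentioned above) needs only the java.time API; the pattern below produces the required "2015/01/07 09:30:10" form, dropping the offset.

import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class DateTimeConversion {
    public static void main(String[] args) {
        // Original xsd:dateTime value from the payload.
        OffsetDateTime original = OffsetDateTime.parse("2015-01-07T09:30:10+02:00");

        // Target format: 2015/01/07 09:30:10 (offset dropped).
        DateTimeFormatter target = DateTimeFormatter.ofPattern("yyyy/MM/dd HH:mm:ss");

        System.out.println(original.format(target)); // prints 2015/01/07 09:30:10
    }
}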

      Kalpa WelivitigodaWorkaround for absolute path issue in SFTP in VFS transport

      In WSO2 ESB, the VFS transport can be used to access an SFTP file system. The issue is that we cannot use absolute paths with SFTP, and this affects WSO2 ESB 4.8.1 and prior versions. The reason is that SFTP uses SSH to log in, and it will by default log into the user's home directory, so the path specified is considered relative to the user's home directory.

      For example, consider the VFS URL below,
      vfs:sftp://kalpa:*****@localhost/myPath/file.xml
      The requirement is to refer to /myPath/file.xml, but it will refer to /home/kalpa/myPath/file.xml (/home/kalpa being the user's home directory).

      To overcome this issue we can create a mount for the desired directory inside the home directory of the user on the FTP file system. Considering the example above, we can create the mount as follows,
      mount --bind /myPath /home/kalpa/myPath
      With this, the VFS URL above will actually refer to /myPath/file.xml.

      Chandana NapagodaConfigure External Solr server with Governance Registry

      In WSO2 Governance Registry 5.0.0, we have upgraded the Apache Solr version to the 5.0.0 release. With that, you can connect WSO2 Governance Registry to an external Solr server or Solr cluster. External Solr integration provides a comprehensive administration interface, high scalability and fault tolerance, easy monitoring, and many more Solr capabilities.

      Let me explain how you can connect WSO2 Governance Registry server with an external Apache Solr server.

      1). First, you have to download Apache Solr 5.x.x from the below location.
      http://lucene.apache.org/solr/mirrors-solr-latest-redir.html
      Please note that we have verified this with the Solr 5.0.0, 5.2.0 and 5.2.1 versions only.

      2). Then unzip the Solr zip file. Once unzipped, its content will look like the below.



      The bin folder contains the scripts to start and stop the server. Before starting the Solr server, you have to make sure the JAVA_HOME variable is set properly. Apache Solr ships with an inbuilt Jetty server.

      3). You can start the Solr server by issuing the "solr start" command from the bin directory. Once the Solr server has started properly, the following message will be displayed in the console: "Started Solr server on port 8983 (pid=5061). Happy searching!"

      By default, the server starts on port "8983" and you can access the Solr admin console by navigating to "http://localhost:8983/solr/".

      4). Next, you have to create a new Solr Core (in SolrCloud mode this is called a Collection) which will be used by WSO2 Governance Registry. To create a new Core, execute the "solr create -c registry-indexing -d basic_configs" command from the Solr bin directory. This will create a new Solr Core named "registry-indexing".



      5). After creating "registry-indexing" Solr core, you can see it from the Solr admin console as below.



      6). To integrate the newly created Solr core with WSO2 Governance Registry, you have to modify the registry.xml file located in the <greg_home>/repository/conf directory. There you have to add "solrServerUrl" under the indexingConfiguration element as follows.

          <!-- This defines the index configuration which is used in the metadata search feature of the registry -->

      <indexingConfiguration>
      <solrServerUrl>http://localhost:8983/solr/registry-indexing</solrServerUrl>
      <startingDelayInSeconds>35</startingDelayInSeconds>
      <indexingFrequencyInSeconds>3</indexingFrequencyInSeconds>
      <!--number of resources submit for given indexing thread -->
      <batchSize>50</batchSize>
      <!--number of worker threads for indexing -->
      <indexerPoolSize>50</indexerPoolSize>
      .................................
      </indexingConfiguration>


      7). After completing the external Solr configuration as above, start the WSO2 Governance Registry server. If you have configured the external Solr integration properly, you will notice the below log message in the Governance Registry server startup logs (wso2carbon log).

      [2015-07-11 12:50:22,306] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Http Sorl server initiated at: http://localhost:8983/solr/registry-indexing

      Further, you can view the indexed data by querying via the Solr admin console, or programmatically as in the sketch below.
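If you prefer to query the core from code, a small SolrJ sketch along the following lines should work against the same URL; the class names are from the SolrJ 5.x client library, so treat this as a sketch and verify it against the SolrJ version you use.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class RegistryIndexQuery {
    public static void main(String[] args) throws Exception {
        // Same core URL configured in registry.xml.
        HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/registry-indexing");
        try {
            // Match-all query just to see what has been indexed.
            QueryResponse response = client.query(new SolrQuery("*:*"));
            System.out.println("Indexed documents: " + response.getResults().getNumFound());
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc);
            }
        } finally {
            client.close();
        }
    }
}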

      Happy Indexing and Searching...!!!

      Note: You can download G-Reg 5.0.0 Alpha pack from : Product-Greg GitHub repo

      Lali DevamanthriACES Hackathon 2015

      ACES Hackathon is a three-day coding event held annually at the Faculty of Engineering, University of Peradeniya. Yesterday (10th July) the 2015 Hackathon was launched, and I participated as a mentor.

      ACES Hackathon has proven to be a well-established and perfectly organized event aiming to bring together a large number of undergraduates and engage them in the development of creative and innovative IT solutions.

      “ACES Hackathon 2015″ is the 4th event of its kind to be organized by ACES, following the success of ACES Hackathon 2014. This year ACES is planning to take the hackathon to the next level by calling for contributions from mentors - industry experts who help the teams out during the event. The main purpose behind this event is to bring fresh talent to the IT industry and provide motivation and experience to a large number of participants.

      Yesterday was the first day of the hackathon, dedicated to students pitching their ideas. The students came up with creative ideas in a short time. Following are some of the ideas that will be implemented over the weekend.

      1. CardioLab – [Software] – by Wijethunga W.M.D.A
      2. PIN POTHA- [Software] – by Wickramarachchi A.O
      3. Multiplayer “omi” card game – [Software] by Rajapakse K. G. De A.
      4. Smart DriveMate – The Driver’s best friend.- [software] by Geesara Prathap Kulathunga
      5. Unified web platform for consumer market [software] by Jayawardena J.L.M.M
      6. Android app: Auto message – [software] by Balakayan k
      7. Local Responsive Cloud Storage and a Service Platform – [Networks and Systems] by Nanayakkara NBUS
      8. Analytical Tool for Cluster Resources – [Networks and Systems] by Nanayakkara NBUS
      9. Algorithms runner – [Networks and Systems] by Nanayakkara NBUS
      10. Trust management API – [Networks and Systems] by Nanayakkara NBUS
      11. Location based trigger reminders – [Software] by Shantha E.L.W.
      12. Wifi data transfer – [Software] by Ukwattage U.A.I
      13. Fly healthy – [Software] by Dias E.D.L.
      14. Attendance marking with image processing- [Software] by Titus Nanda Kumara
      15. Centralize the photocopy center with internet – Network by Titus Nanda Kumara
      16. URL shortening service with lk domain [network] by Titus Nanda Kumara
      17. Video tutorials and past paper sharing web space by Titus Nanda Kumara
      18. Online Judging System by Mudushan Weerawardhana
      19. Smart reload by Prasanna Rodrigo

      ACES alumni members will visit the engineering faculty and mentor the students in implementing those ideas. Leading IT organizations are also sponsoring the event.

      Happy coding guys!


      Evanthika AmarasiriDealing with "HTTP 502 Bad Gateway"

      Recently, while configuring WSO2 products with Nginx, we were struggling to connect to the management console of the product since it was returning an "HTTP 502 Bad Gateway" error. After doing some research, we found out that it was a problem with SELinux.

      There are two SELinux boolean settings available by default. One of them is httpd_can_network_connect which allows httpd to connect to anything on the network.

      So to enable this, I executed the below command and everything worked well.

      sudo setsebool -P httpd_can_network_connect 1
However, there are other similar booleans that can be enabled besides this one, and they are explained in detail in [1].

      [1] - https://www.axivo.com/resources/selinux-booleans-explained.22/

Dhananjaya jayasingheHow to deal with "java.nio.charset.MalformedInputException: Input length = 1" in WSO2 ESB

Sometimes we receive unusual characters in responses from various back ends. WSO2 ESB then has trouble interpreting those characters and tends to throw the following exception.

      [2015-07-09 12:42:49,651] ERROR - TargetHandler I/O error: Input length = 1
      java.nio.charset.MalformedInputException: Input length = 1
      at java.nio.charset.CoderResult.throwException(CoderResult.java:277)
      at org.apache.http.impl.nio.reactor.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:193)
      at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(AbstractMessageParser.java:171)
      at org.apache.synapse.transport.http.conn.LoggingNHttpClientConnection$LoggingNHttpMessageParser.parse(LoggingNHttpClientConnection.java:210)
      at org.apache.synapse.transport.http.conn.LoggingNHttpClientConnection$LoggingNHttpMessageParser.parse(LoggingNHttpClientConnection.java:192)
      at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:171)
      at org.apache.synapse.transport.http.conn.LoggingNHttpClientConnection.consumeInput(LoggingNHttpClientConnection.java:106)
      at org.apache.synapse.transport.passthru.ClientIODispatch.onInputReady(ClientIODispatch.java:83)
      at org.apache.synapse.transport.passthru.ClientIODispatch.onInputReady(ClientIODispatch.java:41)
      at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:119)
      at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:160)
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:342)
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:320)
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280)
      at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
      at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:604)
      at java.lang.Thread.run(Thread.java:745)


In most cases, adding a charset entry to the "passthru-http.properties" file solves the problem; the exact entry is shown at the end of this post.

E.g., once I was getting a response like the following:

      [2015-07-09 12:47:55,236] DEBUG - wire >> "HTTP/1.1 100 Continue[\r][\n]"
      [2015-07-09 12:47:55,237] DEBUG - wire >> "[\r][\n]"
      [2015-07-09 12:47:55,274] DEBUG - wire >> "HTTP/1.1 201 Cr[0xe9]e[\r][\n]"
      [2015-07-09 12:47:55,274] DEBUG - wire >> "Set-Cookie: JSESSIONID=87101C43FCABBB97D049C0F0C8DD216D; Path=/api/; Secure; HttpOnly[\r][\n]"
      [2015-07-09 12:47:55,274] DEBUG - wire >> "Location: https://abc.foo.com/api/office/files/2038[\r][\n]"
      [2015-07-09 12:47:55,274] DEBUG - wire >> "Content-Type: application/json[\r][\n]"
      [2015-07-09 12:47:55,274] DEBUG - wire >> "Transfer-Encoding: chunked[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "Vary: Accept-Encoding[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "Date: Thu, 09 Jul 2015 16:47:55 GMT[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "Server: qa-byp[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "1f[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "{"person":john}[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "0[\r][\n]"
      [2015-07-09 12:47:55,275] DEBUG - wire >> "[\r][\n]"


Then it threw the above exception.

By adding the following entry to passthru-http.properties, I could get rid of it:

      http.protocol.element-charset=iso-8859-1
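
If you prefer to add it from the command line, a minimal one-liner, assuming the default <ESB_HOME>/repository/conf location of passthru-http.properties (restart the ESB afterwards):

echo "http.protocol.element-charset=iso-8859-1" >> $ESB_HOME/repository/conf/passthru-http.properties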

      Dhananjaya jayasingheසිතුවිල්ල





Back then, I waited impatiently at the bus stand
until she arrived and stepped off the bus in her white frock...
Today..
to see my little son's sweet smile,
I watch the clock hands turn until it is time to go home..
Time... what a wonder you are..

      Danushka FernandoWSO2 Products - How User Stores work

Today we keep our users and profiles in several forms. Sometimes they are in an LDAP, sometimes in Active Directory (AD), sometimes in databases, and so on. WSO2 products are written so that any of these formats can be supported: if someone has their own way of storing users, it can easily be plugged into WSO2 products by writing a custom user store. In this post I will explain how these user stores work and the other components connected to them.

When we discuss user management in the WSO2 world, there are several key components:
1. User Store Manager
2. Authorization Manager
3. Tenant Manager
In simple user management we need to authorize a user for some action / permission. Normally we group these actions / permissions and assign the resulting groups / roles to users. So there are two kinds of mappings to consider:
1. User to Role Mapping
2. Role to Permission Mapping
The User to Role Mapping is managed by the user store implementation, and the Role to Permission Mapping is managed by the authorization manager implementation. These are configured in the configuration file under [1].

The Tenant Manager comes into play when multi tenancy is considered. This is configured under [2]; let's discuss it later.

      User Management


By default, WSO2 products (except WSO2 Identity Server) store everything in a database, using [3] as the user store manager implementation. WSO2 Identity Server ships with an internal LDAP and stores its users there, so it uses [4] as the User Store Manager implementation. By default, all WSO2 servers use the database to store the Role to Permission Mapping, with [5] as the authorization manager implementation.

I will explain the WSO2 Identity Server case since it contains most of the elements. Because it has an LDAP user store, all users, all roles and all user-to-role mappings in the system are saved in the LDAP. The user store manager implementation is based on the interface [6]. WSO2 products also ship an abstract user store implementation [7], which contains most of the common behaviour and provides extension points for plugging in external implementations; it is recommended to always use [7] as the base when writing a user store manager. All users, roles and user-to-role mappings are managed through [4] and live in the LDAP, while all role-to-permission mappings are persisted in the database and handled via [5].

      Figure 1 : User Stores and Permission Stores

Figure 1 shows the relationships among users, roles and permissions, where they are stored, and which component handles them.

      Multi tenancy

Let's get into multi tenancy now. Some of you may already know what multi tenancy is; for the others, in multi tenancy we create a space (a tenant) which is isolated from everything else, and nobody outside the tenant even knows of its existence.

WSO2 products support completely isolated tenants out of the box. Each tenant has its own artifact space, registry space and user store (we call this the user realm). Figure 2 illustrates this.

      Figure 2 : Each Tenant having their own user store

Since there could be many tenants in the system, WSO2 products do not load all tenants into memory. A tenant is loaded into memory when it becomes active, and when it has been idle for some time it gets unloaded. When a tenant is loaded, its registry, user realm and artifacts are loaded with it.

The user realm contains all the users, roles, permissions and their mappings. The realm gets loaded from the implementation of [8], which we mention in [2]. So by providing your own implementation you can plug your tenant structure into WSO2 products.

In the next post I will explain how to plug a custom LDAP structure into WSO2 products.

      [1] $CARBON_HOME/repository/conf/user-mgt.xml
      [2] $CARBON_HOME/repository/conf/tenant-mgt.xml
      [3] org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager
      [4] org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager
      [5] org.wso2.carbon.user.core.authorization.JDBCAuthorizationManager
      [6] UserStoreManager.java
      [7] AbstractUserStoreManager.java
      [8] org.wso2.carbon.user.core.config.multitenancy.MultiTenantRealmConfigBuilder
      [9] MultiTenantRealmConfigBuilder.java


      Ushani BalasooriyaEnable email login in WSO2 carbon products

To enable logging in with an email address, the steps below can be followed in any Carbon product.

1. Enable EnableEmailUserName in carbon.xml:

      <EnableEmailUserName>true</EnableEmailUserName>

2. Then provide a regex that allows email addresses in the JDBC user store configuration in user-mgt.xml.
      E.g.,

          <Property name="UsernameJavaRegEx">[a-
      zA-Z0-9@._-|//]{3,30}$</Property>
3. Create the admin user with an email address in user-mgt.xml.

   <AdminUser>
       <UserName>admin@wso2.com</UserName>
       <Password>admin</Password>
   </AdminUser>
With the above configuration, email address login is enabled.

If you want to support both email addresses and usernames, you can include the property below in the user store configuration.

4. <Property name="UsernameWithEmailJavaScriptRegEx">[a-zA-Z0-9@._-|//]{3,30}$</Property>
            
To learn how to do this for an LDAP user store, refer to the well-explained blog post [1] written for Identity Server, which is applicable to other Carbon products as well. The documentation also explains these properties [2].

      Dhananjaya jayasingheWSO2 ESB API with JMS Queues - HTTP GET

We are going to discuss how to handle HTTP GET methods with JMS proxy services in WSO2 ESB. In this post we will walk through the following message flow.


The flow is as follows:


• The client invokes an API defined in the ESB
• The API sends the message to a JMS queue, specifying a "ReplyTo" queue
• A JMS proxy consumes the message from that queue
• After consuming the message, it invokes the backend, and the response is sent back to the JMS "ReplyTo" queue
• The REST API consumes the message from the "ReplyTo" queue and sends the response back to the client.


In order to get this flow working, you need to set up ActiveMQ with WSO2 ESB. You can find the steps on this documentation page; a hedged sketch of the broker-side setup is shown below.
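
A minimal sketch of the ActiveMQ side, assuming a default apache-activemq-5.10.0 extraction; the jar names and versions are assumptions, so use the jars shipped with your ActiveMQ distribution and follow the linked documentation for the axis2.xml JMS transport changes:

cd apache-activemq-5.10.0
./bin/activemq start

# Copy the ActiveMQ client libraries so the ESB's JMS transport can load them.
cp lib/activemq-broker-5.10.0.jar lib/activemq-client-5.10.0.jar \
   lib/geronimo-jms_1.1_spec-1.1.1.jar lib/geronimo-j2ee-management_1.1_spec-1.0.1.jar \
   lib/hawtbuf-1.9.jar $ESB_HOME/repository/components/lib/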

Let's look at the API:


      <api xmlns="http://ws.apache.org/ns/synapse" name="WeatherAPI" context="/getQuote">
      <resource methods="POST GET" uri-template="/details?*">
      <inSequence>
      <property name="transport.jms.ContentTypeProperty" value="Content-Type" scope="axis2"></property>
      <log>
      <property name="httpMethod" expression="get-property('axis2','HTTP_METHOD')"></property>
      </log>
      <property name="HTTP_METHOD" expression="get-property('axis2','HTTP_METHOD')" scope="transport" type="STRING"></property>
      <send>
      <endpoint>
      <address uri="jms:/SMSStore?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory&java.naming.provider.url=tcp://localhost:61616&transport.jms.DestinationType=queue&transport.jms.ContentTypeProperty=Content-Type&transport.jms.ReplyDestination=SMSReceiveNotificationStore"></address>
      </endpoint>
      </send>
      </inSequence>
      <outSequence>
      <property name="TRANSPORT_HEADERS" scope="axis2" action="remove"></property>
      <send></send>
      </outSequence>
      </resource>
      </api>

In the above API, there are a few things to focus on.
• We set the HTTP_METHOD property from the incoming message as shown below. By default, API invocations over HTTP POST work fine without setting this property. But for an HTTP GET, the consumer side of the queue needs to know the HTTP method so it can invoke the backend with a GET, so we set it here.

      <property name="HTTP_METHOD" expression="get-property('axis2','HTTP_METHOD')" scope="transport" type="STRING"></property>


• In the endpoint of the send mediator we set the following, which you should have:
  • transport.jms.ContentTypeProperty=Content-Type - we are passing the content type over the JMS transport
  • transport.jms.ReplyDestination=SMSReceiveNotificationStore - the ESB expects the response for this request on this queue

The JMS proxy that consumes the message looks as follows:


      <proxy xmlns="http://ws.apache.org/ns/synapse"
      name="SMSForwardProxy"
      transports="jms"
      statistics="disable"
      trace="disable"
      startOnLoad="true">
      <target faultSequence="fault">
      <inSequence>
      <property name="HTTP_METHOD" expression="$trp:HTTP_METHOD" scope="axis2"/>
      <send>
      <endpoint>
      <address uri="http://localhost:8080/foo">
      <suspendOnFailure>
      <errorCodes>-1</errorCodes>
      <progressionFactor>1.0</progressionFactor>
      </suspendOnFailure>
      <markForSuspension>
      <errorCodes>-1</errorCodes>
      </markForSuspension>
      </address>
      </endpoint>
      </send>
      </inSequence>
      <outSequence>
      <send/>
      </outSequence>
      </target>
      <parameter name="transport.jms.ContentType">
      <rules>
      <jmsProperty>contentType</jmsProperty>
      <default>text/xml</default>
      </rules>
      </parameter>
      <parameter name="transport.jms.ConnectionFactory">myQueueConnectionFactory</parameter>
      <parameter name="transport.jms.DestinationType">queue</parameter>
      <parameter name="transport.jms.Destination">SMSStore</parameter>
      <description/>
      </proxy>


In the above proxy configuration, you can see the following line:


       <property name="HTTP_METHOD" expression="$trp:HTTP_METHOD" scope="axis2"/>

Here we read the HTTP_METHOD that the API set (as a transport-level property) before sending the message to the JMS queue, and we set it as the "HTTP_METHOD" property in the "axis2" scope.

With that, the ESB can figure out the HTTP method to use when invoking the actual endpoint.


After getting a response, this proxy sends the reply to the "SMSReceiveNotificationStore" queue. The API then consumes it from that queue and sends the response back to the client.

This works perfectly for HTTP POST requests. For an HTTP GET request, you need some additional changes on the client side and the server side.

      Client Side Changes


• We need to pass the content type from the client side. You can set it as a header in SoapUI (a hedged curl equivalent is shown below):
       contentType:  application/x-www-form-urlencoded
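
For reference, a minimal curl equivalent of the same call; the host, port and query parameter are assumptions for a default ESB setup, not taken from the original post:

curl -v -X GET -H "contentType: application/x-www-form-urlencoded" \
  "http://localhost:8280/getQuote/details?id=1"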




      Server Side Changes
• You need to change the message builder and formatter for the above content type in the axis2.xml file located in the WSO2ESB/repository/conf/axis2 directory, as follows.

      .....

      <messageFormatter contentType="application/x-www-form-urlencoded"
      class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>

      .......

      <messageBuilder contentType="application/x-www-form-urlencoded"
      class="org.wso2.carbon.relay.BinaryRelayBuilder"/>

      ......

If you do not set the above, there will be message building and formatting errors.


This way you will be able to get this working for HTTP GET methods.

This was tested with WSO2 ESB 4.8.1 (with its latest patches) and Apache ActiveMQ 5.10.0.


      Manoj KumaraInstall Citrix Receiver on Ubuntu 14.04

Last week I needed to install Citrix Receiver on my Ubuntu machine to access a remote server until I got the network permissions. I found [1] useful when setting up Citrix on Ubuntu.

Once it was installed, when opening a session I got the following error: '"Thawte Premium Server CA", the issuer of the server's security certificate (SSL error 61)'. Searching around, I found the steps to resolve it in [2].

Hope this will be useful to me or anyone else in the future.

      [1] https://help.ubuntu.com/community/CitrixICAClientHowTo
      [2] https://www.geekpete.com/blog/ssl-error-61-using-citrix-ica-client-on-linux/

      Ushani BalasooriyaA reason for getting com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

Sometimes you will end up getting an exception like the one below when starting a server pointed at a MySQL database.

      ERROR - DatabaseUtil Database Error - Communications link failure

      The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
      com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

      The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server



One possible reason is that you have bound bind-address to a specific IP in /etc/mysql/my.cnf.

The default is 0.0.0.0, which means all interfaces. This setting does not restrict which IPs can access the server, unless you specify 127.0.0.1 (localhost only) or some other specific IP.

      bind-address            = 0.0.0.0

Once this is done, restart the MySQL server:

       sudo service mysql restart
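
To confirm which address mysqld is now listening on, you can check the listener; a quick check assuming the net-tools package is installed (ss -tlnp works as well):

sudo netstat -tlnp | grep mysqld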

      John MathonThe Internet of Things (IoT) Integration Disaster – The state of affairs today

      The Internet of Things (IoT) Integration Explosion

      Since I started my project to integrate my home I have discovered a lot of cool tools.   There are a lot of choices now to bridge the integration of the different technologies, but a lot of these tools are immature.   Let’s look at the types of tools available.   These tools can be used in combination to provide a variety of ways to solve the IoT usage and integration problem.


      APIs (Spotty)

      Six of the devices/services above have supported and well documented APIs.   Thank you.   Another 3 have documented APIs although no support.  10 devices don’t have an API and I may be able to do only partial integration or wait.    The Eagle has a simple http REST well defined API.   The Davis doesn’t, but fortunately there is a tool called Weathersnoop that provides a nice API to access data locally or remotely.  Followmee has a good API service well documented.  The ATT M2X is similar.    The Tesla has an “unsupported” but reasonably well documented API.  The WEMO has a similar unsupported SOAP / http interface.  The Bodymedia has a good API definition.  The Carrier does not although there are hints that a new device and service is coming shortly that will improve the situation significantly.   The IFTTT service does not have an API service, the same with Thalmic.  The Myq has an undocumented interface unsupported again.   The Lynx does not have any support for external interaction yet.

I like the Followmee service and it works well. They recently made me pay $10/year for access to the API, which was free when I first signed up several months ago. At least it is reasonably priced.

Each of these APIs is significantly different. A considerable amount of research is needed to find each API, write the interface to the device, and test it. This is especially true in the undocumented, unsupported cases. Most of these APIs have poor security. Tesla upgraded its API to use OAuth2, which is good, but its implementation is non-standard.

I believe this is a common situation. Getting all of these services written and working has been more work than I wanted, but I am also happy with the functionality I am getting so far.

The Tesla API lets me see data about my car that I can't get any other way, which could be very helpful. I have automated some functions, such as an "Announce" function: during the daytime it honks the horn, waits 10 seconds, honks again and opens the trunk, which makes the car very visible. At night the same function blinks the lights, waits 10 seconds and does it again. It also unlocks the car, turns on the heating or air conditioning depending on the internal temperature, and enables the car for driving (starting it). :)

Overall this has been much worse than I expected. Wemo has some integration with IFTTT, but none of the services I have explored has any knowledge of these devices to simplify integration. So far, the integration requires writing my own code for everything I want to do. All these services claim to be IoT services, yet they have not opened or documented their APIs, and they have not worked with other vendors to create interoperability. I am very dissatisfied with the state of the industry.

      The APIs are very inconsistent.  Some use get for everything, some use get, post and put for different things but not in any logical way you would know intuitively.  Some require some things on the URI and some require headers or fields in the body.   Some use a security token that you have to remember and use in subsequent calls.   Some work through a proxy service available in the cloud and some can be accessed locally.

This is a pathetic situation. I am publishing the code for these devices as open source, so at least someone might be able to leverage the experience I am gaining from this experiment. I should have that up in a week or less and will let you know the location.

      DATA STORAGE

      I am very happy so far with the ATT M2X service for recording all my data.  I have tested a number of these data storage solutions.   Some had no consumer pricing or limited trials that were not useful.  Some of the services wanted 500 dollars+/year  for data storage for a consumer which makes no sense.   Others limited me to a total of 8 sensors per device and a small number of devices.   The ATT service is well documented, works brilliantly, is free for consumer uses that are reasonable.   I have nearly 50 sensors in it and it is working well.   Some of the services I tried turned out to not be functional at this time.  They were more promise than fact.   The ecosystem is barely functional.   A lot of vendors are focusing on the Enterprise market.  They claim the ability to take many thousands of devices data and claim significant data analytics capability.

      I am still waiting for some of these analytics things to be useful at the consumer level.  I get real time data from my energy meter but the company that promises to be able to give me powerful analysis won’t be ready for some time to offer it.

      VISUALIZATION

      I am using google spreadsheets for my visualization at this time.   The scripting and functionality provided in google sheets is comprehensive and flexible.  The spreadsheet metaphor makes it easy for me to test out some of my automations first before implementing them and also providing a simple easily modified user interface for viewing the data and initiating actions anywhere.

      The spreadsheet solution is great for testing during this phase of adding devices and playing with the rules.  Once I have decided on what I want to do about visualization and control I will look at more of the options.  There are options to have physical special purpose devices, build a phone app or web service.

      ORCHESTRATION

      Most of the vendors still seem to believe that IoT means I buy their device and then I download a phone app and connect to the cloud.  I am then supposed to bring up their app to control their device and only their device.   This is NOT making it easier.  I don’t know if these vendors really think this is the way IoT will work.   I think many of the consumer companies are suffering from not understanding the paradigm shift in the works.   I don’t want 20 different apps on my phone and switching between them.  I don’t want to control things myself anyway.   The whole point of IoT is to make things smarter so I don’t need to interact with the device as much.   I don’t want their app except as a last resort if my automation fails.

I am still debating where to put most of my automation. There are dozens of options in this category, and I have downloaded some and played with them a little. I believe it would be ideal to keep my automation local and get a hub like Smartthings, Vera or Ninja Smartblocks, but I am also waiting to see what Apple does, as well as new versions of some of the products out there. To their credit, Smartthings and Vera do seem to be making big efforts to create more off-the-shelf integration. Discussion boards at these companies talk about integrating almost all of the devices I have listed above. However, at this time none of them has more than a couple of checkboxes yet, although their progress looks better than the others'.

An issue is that even if I try to keep decision making local with cloud backup, a number of my devices don't allow that: they require the device to connect directly to the cloud, and I have to access the functionality from the cloud. That really pisses me off. An example of this is MYQ from Chamberlain / Liftmaster. In retrospect I really wish I didn't have their garage door opener. Like other companies, they have apparently decided I am going to build my entire IoT strategy on their garage door system and cloud service. NOT! They advertise a great internet gateway and claim a flexible, expandable service, but the API is unsupported, undocumented and ridiculously verbose. It took more time to get it working because of the poor documentation, and I had to tear the wall controller apart and solder some wires to get the functionality I wanted.

      The Future

I am still hopeful that we are early in this revolution. I believe Apple and Google will drive many of these companies toward much more interoperability. I believe most of these companies will realize their "do it alone" strategy is doomed. Nobody is going to base the IoT for their home or business on any one vendor; none of them provides a wide enough range of devices to make a compelling argument.

      Other articles you may find interesting:

      Why would you want to Integrate IOT (Internet of Things) devices? Examples from a use case for the home.

      Integrating IoT Devices. The IOT Landscape.

      Iot (Internet of Things), $7 Trillion, $14 Trillion or $19 Trillion? A personal look

      Siri, SMS, IFTTT, and Todoist.

      securing-the-internet-of-things

      A Reference Architecture for the Internet of Things

      OPEN PLATFORM FOR INTERNET OF THINGS

      5-open-source-home-automation-projects-we-love

       Alternatives to IFTTT

      Home Automation Startups


      Evanthika AmarasiriSolving "org.apache.ws.security.WSSecurityException: An unsupported signature or encryption algorithm was used"

While trying out the scenario I explained in my previous post, Accessing a non secured backend from a secured client with the help of WSO2 ESB, with security scenario 3 onward, you might come across an issue like the one below on the client side.

      org.apache.axis2.AxisFault: Error in encryption
          at org.apache.rampart.handler.RampartSender.invoke(RampartSender.java:76)
          at org.apache.axis2.engine.Phase.invokeHandler(Phase.java:340)
          at org.apache.axis2.engine.Phase.invoke(Phase.java:313)
          at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
          at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:426)
          at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:430)
          at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
          at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
          at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:554)
          at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:530)
          at SecurityClient.runSecurityClient(SecurityClient.java:99)
          at SecurityClient.main(SecurityClient.java:34)
      Caused by: org.apache.rampart.RampartException: Error in encryption
          at org.apache.rampart.builder.AsymmetricBindingBuilder.doSignBeforeEncrypt(AsymmetricBindingBuilder.java:612)
          at org.apache.rampart.builder.AsymmetricBindingBuilder.build(AsymmetricBindingBuilder.java:97)
          at org.apache.rampart.MessageBuilder.build(MessageBuilder.java:147)
          at org.apache.rampart.handler.RampartSender.invoke(RampartSender.java:65)
          ... 11 more
      Caused by: org.apache.ws.security.WSSecurityException: An unsupported signature or encryption algorithm was used (unsupported key transport encryption algorithm: No such algorithm: http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p); nested exception is:
          java.security.NoSuchAlgorithmException: Cannot find any provider supporting RSA/ECB/OAEPPadding
          at org.apache.ws.security.util.WSSecurityUtil.getCipherInstance(WSSecurityUtil.java:785)
          at org.apache.ws.security.message.WSSecEncryptedKey.prepareInternal(WSSecEncryptedKey.java:205)
          at org.apache.ws.security.message.WSSecEncrypt.prepare(WSSecEncrypt.java:259)
          at org.apache.rampart.builder.AsymmetricBindingBuilder.doSignBeforeEncrypt(AsymmetricBindingBuilder.java:578)
          ... 14 more
      Caused by: java.security.NoSuchAlgorithmException: Cannot find any provider supporting RSA/ECB/OAEPPadding
          at javax.crypto.Cipher.getInstance(Cipher.java:524)
          at org.apache.ws.security.util.WSSecurityUtil.getCipherInstance(WSSecurityUtil.java:777)
          ... 17 more
      Exception in thread "main" java.lang.NullPointerException
          at SecurityClient.main(SecurityClient.java:38)

The reason for this is that the Bouncy Castle jar required to run this scenario is not found in the classpath on the client side.

To overcome this issue, you need to place the relevant Bouncy Castle jar, downloaded from www.bouncycastle.org, in the client's classpath.

E.g., if you are running your client on JDK 1.7, the jar you need to download is bcprov-jdk15on-150.jar.
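
As a quick check that the client actually sees the jar, you can put it on the classpath explicitly when running the client; the jar path below is a placeholder, and SecurityClient is the client class from the stack trace above:

java -cp .:/path/to/bcprov-jdk15on-150.jar SecurityClient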
A point to note: I tried this scenario with the Eclipse project pointing to the $ESB_HOME/repository/plugins folder while referencing a Bouncy Castle jar at a different location. For some reason, the jar was not loaded until I dropped it inside the $ESB_HOME/repository/plugins folder.

NOTE: Sometimes you will have to clear the Eclipse/IntelliJ IDEA cache in order for the classes to pick up the jars properly.

      Nirmal FernandoSneak Peek into WSO2 Machine Learner 1.0


This article is about one of WSO2's newest products, WSO2 Machine Learner (WSO2 ML). These days we are working on the very first general availability release of WSO2 ML, which will be released in mid-July 2015. For those wondering when I moved from the Stratos team to the ML team: it happened in January this year (2015), at my request (yes, WSO2 was kind enough to accommodate it :-)). We are a 7-member team now (effectively 3 in R&D), led by Dr. Srinath Perera, VP of Research. We also get assistance from a member of the UX team and a member of the documentation team.


      What is Machine Learning?
       


      “Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.”


A simpler definition from Professor Andrew Ng of Stanford University:


      “Machine learning is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress towards human-level AI.” (source: https://www.coursera.org/course/ml)


      In simple terms, with machine learning we are trying to make the computer learn patterns from a vast amount of historical data and then use the learnt patterns to make predictions.


      What is WSO2 Machine Learner?

WSO2 Machine Learner is a product that helps you manage and explore your data, build machine learning models by analyzing the data with machine learning algorithms, compare and manage the generated models, and make predictions using the built models. The following image depicts the high-level architecture of WSO2 ML.


WSO2 ML exposes all its operations via a REST API. We use the well-known Apache Spark to perform various operations on datasets in a scalable and efficient manner. Currently, we support a number of machine learning algorithms, covering regression and classification from supervised learning and clustering from unsupervised learning. We use Apache Spark's MLlib to provide support for all currently implemented algorithms.

In this post, my main focus is to go through the feature list of the WSO2 ML 1.0.0 release, so that you can see whether it can improve the way you do machine learning.

      Manage Your Datasets

We help you manage your data through our dataset versioning support. In a typical use case, you would have X amount of data now and would collect another Y amount of data within a month. With WSO2 ML you can create a dataset version 1.0.0 which points to the X data, and a month later create version 1.1.0 which points to the (X+Y) data. Then you can pick these different dataset versions, run a machine learning analysis on top of them and generate models.


WSO2 ML accepts the CSV and TSV data formats, and the dataset files can reside in the file system or in HDFS. In addition to these storages, we support pulling data from a data table generated by WSO2 Data Analytics Server [doc].

      Explore Your Data

Once you have uploaded datasets into WSO2 ML, you can explore a few key details about your dataset, such as the feature set, scatter plots to understand the relationship between two selected features, histograms of each feature, parallel sets to explore categorical features, trellis charts and cluster diagrams [doc].




      Manage Your ML Projects

WSO2 ML has a concept called 'Project', which is basically a logical grouping of the machine learning analyses you perform on a selected dataset. Note that when I say a dataset, this includes all dataset versions belonging to that dataset. WSO2 ML allows you to manage your machine learning projects based on datasets and also based on users.



      Build and Manage Analyses

WSO2 ML has a concept called 'Analysis', which holds a pre-processed feature set, a selected machine learning algorithm and its calibrated set of hyper-parameters. Each analysis belongs to a project, and a project can have multiple analyses. Once you create an analysis you cannot edit it, but you can view and delete it. Analyses are created using the wizard provided by WSO2 ML.





      Run Analyses and Manage Models

Once you have followed the wizard and generated an analysis, the final step is to pick a dataset version from the available versions of the project's dataset and run the analysis. The outcome of this process is a machine learning model. The same analysis can be run on different dataset versions to generate multiple models.



Once a model is generated, you can perform various operations on it, such as viewing the model summary, downloading the model object as a file, publishing the model into the WSO2 registry, and predicting.





      Compare Models

Your ultimate goal is to build an accurate model that can later be used for prediction. To help you here, i.e. to let you easily compare all the models created using different analyses, we provide a model comparison view.



For classification problems, models are sorted by their accuracy values; for numerical prediction problems, they are sorted by mean squared error.

      ML REST API

All the underlying WSO2 ML operations are exposed via the REST API, and in fact our UI client is built on top of the ML REST API [doc]. If you wish, you can write a client in any language on top of our REST API. It currently supports basic auth and session-based authentication.

      ML UI

Our Jaggery-based UI follows the latest UX designs, as you have probably noticed from the screenshots shown so far in this post.

      ML-WSO2 ESB Integration

We have written an ML-ESB mediator that can be used to run predictions on data collected from an incoming request against an ML model generated using WSO2 ML [doc].

      ML-WSO2 CEP Integration

In addition to the ESB mediator, we have written an ML-CEP extension, which can be used to make real-time predictions against a generated model [doc].

      External Spark Cluster Support

WSO2 ML ships with an embedded Spark runtime by default, so you can simply unzip the pack and start playing with it. However, it can also be configured to connect to an external Spark cluster [doc].

      The Future

* Deep Learning algorithm support using H2O - currently underway as a GSoC project.
* Data pre-processing using DataWrangler - current GSoC project.
* Recommendation algorithm support - current GSoC project.
... and a whole lot of other new features and improvements.


      This is basically a summary of what WSO2 ML 1.0 is all about. Please follow our GitHub repository for more information. You are most welcome to try it out and report any issues in our Jira.


      Ajith VitharanaAdd "getRecentlyAddedAPIs" operation for Store API - WSO2 API Manager.

      The WSO2 API Manager doesn't expose the "getRecentlyAddedAPIs" operation through the Store API (https://docs.wso2.com/display/AM190/Store+APIs). But you can do the following workaround to expose that operation.

1. Open the file <am_home>/repository/deployment/server/jaggeryapps/store/site/blocks/api/recently-added/ajax/list.jag and change it as below.

      <%
      include("/jagg/jagg.jag");
      (function () {
          response.contentType = "application/json; charset=UTF-8";
          var mod, obj, tenant, result, limit,

                  msg = require("/site/conf/ui-messages.jag"),
                  action = request.getParameter("action");
          if (action == "getRecentlyAddedAPIs") {
              tenant = request.getParameter("tenant");
              limit = request.getParameter("limit");
              mod = jagg.module("api");
              result = mod.getRecentlyAddedAPIs(limit,tenant);
              if (result.error) {
                  obj = {
                      error:result.error,
                      message:msg.error.authError(action)
                  };
              } else {
                  obj = {
                      error:false,
                      apis:result.apis
                  }
              }
              print(obj);
          } else {
              print({
                  error:true,
                  message:msg.error.invalidAction(action)
              });
          }
      }());
      %>
2. Now you can invoke the getRecentlyAddedAPIs operation as below.

      curl -b cookies 'http://localhost:9763/store/site/blocks/api/recently-added/ajax/list.jag?action=getRecentlyAddedAPIs&limit=10&tenant=carbon.super'

limit - the number of recently added APIs to return
tenant - the tenant domain
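
The -b cookies flag in the call above reuses a session cookie from an earlier Store login. A hedged example of obtaining it first, assuming the default admin credentials and a local Store:

curl -X POST -c cookies http://localhost:9763/store/site/blocks/user/login/ajax/login.jag \
  -d 'action=login&username=admin&password=admin'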

       

      Sumedha KodithuwakkuHow to delete a random element from the XML payload with the use of Script mediator in WSO2 ESB

In WSO2 ESB, we can use the Script Mediator to manipulate an XML payload. Here I have used JavaScript/E4X for accessing and manipulating the elements.

Example XML payload:

      <breakfast_menu>
      <food>
      <name>Belgian Waffles</name>
      <price>$5.95</price>
      <calories>650</calories>
      </food>
      <food>
      <name>Strawberry Belgian Waffles</name>
      <price>$7.95</price>
      <calories>900</calories>
      </food>
      <food>
      <name>Berry-Berry Belgian Waffles</name>
      <price>$8.95</price>
      <calories>900</calories>
      </food>
      </breakfast_menu> 

Let's assume we want to remove the last food element (Berry-Berry Belgian Waffles). In this scenario, breakfast_menu is the root element and the child elements are the food elements.

The number of child elements (food) can be obtained as follows:

      var payload = mc.getPayloadXML();
      var length = payload.food.length();

Then delete the last element as follows; here the index of the last element is length-1.

delete payload.food[length-1];

The complete Script Mediator configuration would be as follows:

      <script language="js">
      var payload = mc.getPayloadXML();
      var length = payload.food.length();
      delete payload.food[length-1];
      mc.setPayloadXML(payload);
      </script>

The output of the Script Mediator would be as follows:

      <breakfast_menu>
      <food>
      <name>Belgian Waffles</name>
      <price>$5.95</price>
      <calories>650</calories>
      </food>
      <food>
      <name>Strawberry Belgian Waffles</name>
      <price>$7.95</price>
      <calories>900</calories>
      </food>
      </breakfast_menu>

Likewise, we can delete any required elements from the payload.

sanjeewa malalgodaHow to use SAML2 grant type to generate access tokens in web applications (Generate access tokens programmatically using SAML2 grant type) - WSO2 API Manager

      Exchanging SAML2 bearer tokens with OAuth2 (SAML extension grant type)

SAML 2.0 is an XML-based protocol. It uses security tokens containing assertions to pass information about an end user between a SAML authority and a SAML consumer.
      A SAML authority is an identity provider (IDP) and a SAML consumer is a service provider (SP).
      A lot of enterprise applications use SAML2 to engage a third-party identity provider to grant access to systems that are only authenticated against the enterprise application.
      These enterprise applications might need to consume OAuth-protected resources through APIs, after validating them against an OAuth2.0 authentication server.
      However, an enterprise application that already has a working SAML2.0 based SSO infrastructure between itself and the IDP prefers to use the existing trust relationship, even if the OAuth authorization server is entirely different from the IDP. The SAML2 Bearer Assertion Profile for OAuth2.0 helps leverage this existing trust relationship by presenting the SAML2.0 token to the authorization server and exchanging it to an OAuth2.0 access token.

      You can use SAML grant type for web applications to generate tokens.
      https://docs.wso2.com/display/AM160/Token+API#TokenAPI-ExchangingSAML2bearertokenswithOAuth2(SAMLextensiongranttype)


Sample curl command (the assertion parameter, left empty here, carries the base64url-encoded SAML2 assertion):
      curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=&scope=PRODUCTION" -H "Authorization: Basic SVpzSWk2SERiQjVlOFZLZFpBblVpX2ZaM2Y4YTpHbTBiSjZvV1Y4ZkM1T1FMTGxDNmpzbEFDVzhh, Content-Type: application/x-www-form-urlencoded" https://serverurl/token

How to invoke the Token API from a web app and get a token programmatically.

To generate a user access token using a SAML assertion, you can add the following code to your web application.
When you log in to your app using SSO, you will receive a SAML response. You can store it in the application session and use it to get a token whenever required.



Please refer to the following code for the access token issuer.

      package com.test.org.oauth2;
      import org.apache.amber.oauth2.client.OAuthClient;
      import org.apache.amber.oauth2.client.URLConnectionClient;
      import org.apache.amber.oauth2.client.request.OAuthClientRequest;
      import org.apache.amber.oauth2.common.token.OAuthToken;
      import org.apache.catalina.Session;
      import org.apache.commons.logging.Log;
      import org.apache.commons.logging.LogFactory;

      public class AccessTokenIssuer {
          private static Log log = LogFactory.getLog(AccessTokenIssuer.class);
          private Session session;
          private static OAuthClient oAuthClient;

          public static void init() {
              if (oAuthClient == null) {
                  oAuthClient = new OAuthClient(new URLConnectionClient());
              }
          }

          public AccessTokenIssuer(Session session) {
              init();
              this.session = session;
          }

          public String getAccessToken(String consumerKey, String consumerSecret, GrantType grantType)
                  throws Exception {
              OAuthToken oAuthToken = null;

              if (session == null) {
                  throw new Exception("Session object is null");
              }
// You need to implement logic for this operation according to your system design. Some URL:
        String oAuthTokenEndPoint = "token end point url";

              if (oAuthTokenEndPoint == null) {
                  throw new Exception("OAuthTokenEndPoint is not set properly in digital_airline.xml");
              }


              String assertion = "";
              if (grantType == GrantType.SAML20_BEARER_ASSERTION) {
          // You need to implement logic for this operation according to your system design
                  String samlResponse = "get SAML response from session";
          // You need to implement logic for this operation according to your system design
                  assertion = "get assertion from SAML response";
              }
              OAuthClientRequest accessRequest = OAuthClientRequest.
                      tokenLocation(oAuthTokenEndPoint)
                      .setGrantType(getAmberGrantType(grantType))
                      .setClientId(consumerKey)
                      .setClientSecret(consumerSecret)
                      .setAssertion(assertion)
                      .buildBodyMessage();
              oAuthToken = oAuthClient.accessToken(accessRequest).getOAuthToken();

              session.getSession().setAttribute("OAUTH_TOKEN" , oAuthToken);
              session.getSession().setAttribute("LAST_ACCESSED_TIME" , System.currentTimeMillis());

              return oAuthToken.getAccessToken();
          }

          private static org.apache.amber.oauth2.common.message.types.GrantType getAmberGrantType(
                  GrantType grantType) {
              if (grantType == GrantType.SAML20_BEARER_ASSERTION) {
                  return org.apache.amber.oauth2.common.message.types.GrantType.SAML20_BEARER_ASSERTION;
              } else if (grantType == GrantType.CLIENT_CREDENTIALS) {
                  return org.apache.amber.oauth2.common.message.types.GrantType.CLIENT_CREDENTIALS;
              } else if (grantType == GrantType.REFRESH_TOKEN) {
                  return org.apache.amber.oauth2.common.message.types.GrantType.REFRESH_TOKEN;
              } else {
                  return org.apache.amber.oauth2.common.message.types.GrantType.PASSWORD;
              }
          }
      }


After you log in to the system, get the session object and initialize the access token issuer as follows.
      AccessTokenIssuer accessTokenIssuer = new AccessTokenIssuer(session);

Keep a reference to that object for the duration of the session.
Then, when you need an access token, request it as follows, passing the consumer key and consumer secret.

      tokenResponse = accessTokenIssuer.getAccessToken(key,secret, GrantType.SAML20_BEARER_ASSERTION);

You will then get an access token, which you can use as required.

sanjeewa malalgodaHow to change endpoint configurations and timeouts of a large number of already created APIs - WSO2 API Manager

Sometimes in deployments we need to change endpoint configurations and other parameters of APIs after they have been created.
For this we can go to the management console or the Publisher and change them one by one, but if you have a large number of APIs that can be extremely hard. In this post let's see how we can do it for a batch of APIs.

Please note: test this end to end before you push the change to a production deployment. Also note that some properties are stored in the registry, the database and the synapse configurations, so we may need to change all three places. In this example we consider endpoint configurations only (which are available in the registry and synapse configurations).

Changing the velocity template will work for new APIs. But when it comes to already published APIs, you have to follow the process below if you are not modifying them manually.

Write a simple application to change the synapse configuration and add the new properties (as an example we can consider the timeout value).
Use a checkin/checkout client to edit the registry files with the new timeout value.
You can follow the steps below to use the checkin/checkout client:
Download the Governance Registry binary from http://wso2.com/products/governance-registry/ and extract the zip file.
Copy the content of the Governance Registry into the APIM home.
Go into the bin directory of the Governance Registry directory.
Run the following command to check out the registry files to your local repository.
         ./checkin-client.sh co https://localhost:9443/registry/path -u admin -p admin  (Linux)
           checkin-client.bat co https://localhost:9443/registry/path -u admin -p admin (Windows)
              
Here, the path is where your registry files are located. Normally the API metadata is listed under each provider at '_system/governance/apimgt/applicationdata/provider'.

Once you run this command, the registry files will be downloaded to your Governance Registry bin directory. You will find directories named after the users who created the APIs.
Inside those directories there are files, each named 'api', located at '{api name directory}/{api version directory}/_system/governance/apimgt/applicationdata/provider/{user name directory}/{api name directory}/{api version directory}', and you can edit the timeout value
using a batch operation (a shell script or any other way; a hedged sed sketch is shown below).
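
As an illustration, a minimal sketch of such a batch edit over the checked-out files. The property name "actionDuration" and the old/new values are assumptions, not taken from the original post; inspect one of your 'api' files first and adjust the pattern accordingly.

# Run from the directory where the registry files were checked out.
find . -type f -name api -print0 | \
  xargs -0 sed -i 's/"actionDuration":"30000"/"actionDuration":"60000"/g'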

Then you have to check in what you have changed by using the following command.
           ./checkin-client.sh ci https://localhost:9443/registry/path -u admin -p admin  (linux)
            checkin-client.bat ci https://localhost:9443/registry/path -u admin -p admin (windows)
         

Open the APIM management console and click Browse under Resources. Provide the location as '/_system/governance/apimgt/applicationdata/provider'. Inside each {user name} directory
there are directories with your API names. Open the 'api' files inside those directories and make sure the value has been updated.

It is recommended to change both the registry and the synapse configuration. This approach is not applicable to all properties available in API Manager;
it is specifically designed for endpoint configurations such as timeouts.

sanjeewa malalgodaHow to add the secondary user store domain name to the SAML response from the Shibboleth side - WSO2 Identity Server SSO with a secondary user store

When we configure Shibboleth as the identity provider for WSO2 Identity Server as described in this article (http://xacmlinfo.org/2014/12/04/federatation-shibboleth/), the deployment looks something like the diagram below.

      http://i0.wp.com/xacmlinfo.org/wp-content/uploads/2014/12/sidp0.png



In this case Shibboleth acts as the identity provider for WSO2 IS and provides the SAML assertion to it. The actual permission check, however, happens on the IS side, and we may need the complete user name for that. If the user store is configured as a secondary user store, the user store domain should be part of the name, but Shibboleth does not know about the secondary user store. So on the IS side you will see userName instead of DomainName/UserName, which becomes an issue if we try to validate permissions per user.

To overcome this, we can configure Shibboleth to send a domain-aware user name from its end. Let's say the domain name is LDAP-Domain; we can then set it on the Shibboleth side with the following configuration, and it will send the user name as LDAP-Domain/userName.

       (attribute-resolver.xml)

          <!-- This is the NameID value we send to the WS02 Identity Server. -->
          <resolver:AttributeDefinition xsi:type="ad:Script" id="eduPersonPrincipalNameWSO2">
              <resolver:Dependency ref="eduPersonPrincipalName" />

              <resolver:AttributeEncoder xsi:type="enc:SAML2StringNameID" nameFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" />

              <ad:Script>
                  <![CDATA[
                      importPackage(Packages.edu.internet2.middleware.shibboleth.common.attribute.provider);

                      eduPersonPrincipalNameWSO2 = new BasicAttribute("eduPersonPrincipalNameWSO2");
                      eduPersonPrincipalNameWSO2.getValues().add("LDAP-Domain/" + eduPersonPrincipalName.getValues().get(0));
                ]]>

              </ad:Script>
          </resolver:AttributeDefinition>

      sanjeewa malalgodaHow to write custom throttle handler to throttle requests based on IP address - WSO2 API Manager

Below is sample source code for a custom throttle handler that throttles requests based on the client IP address. You can change the logic to suit your requirements.

package org.wso2.carbon.apimgt.gateway.handlers.throttling;

import org.apache.axiom.om.OMAbstractFactory;
      import org.apache.axiom.om.OMElement;
      import org.apache.axiom.om.OMFactory;
      import org.apache.axiom.om.OMNamespace;
      import org.apache.axis2.context.ConfigurationContext;
      import org.apache.commons.logging.Log;
      import org.apache.commons.logging.LogFactory;
      import org.apache.http.HttpStatus;
      import org.apache.neethi.PolicyEngine;
      import org.apache.synapse.Mediator;
      import org.apache.synapse.MessageContext;
      import org.apache.synapse.SynapseConstants;
      import org.apache.synapse.SynapseException;
      import org.apache.synapse.config.Entry;
      import org.apache.synapse.core.axis2.Axis2MessageContext;
      import org.apache.synapse.rest.AbstractHandler;
      import org.wso2.carbon.apimgt.gateway.handlers.Utils;
      import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityUtils;
      import org.wso2.carbon.apimgt.gateway.handlers.security.AuthenticationContext;
      import org.wso2.carbon.apimgt.impl.APIConstants;
      import org.wso2.carbon.throttle.core.AccessInformation;
      import org.wso2.carbon.throttle.core.RoleBasedAccessRateController;
      import org.wso2.carbon.throttle.core.Throttle;
      import org.wso2.carbon.throttle.core.ThrottleContext;
      import org.wso2.carbon.throttle.core.ThrottleException;
      import org.wso2.carbon.throttle.core.ThrottleFactory;

      import java.util.Map;
      import java.util.TreeMap;


      public class IPBasedThrottleHandler extends AbstractHandler {

          private static final Log log = LogFactory.getLog(IPBasedThrottleHandler.class);

          /** The Throttle object - holds all runtime and configuration data */
          private volatile Throttle throttle;

          private RoleBasedAccessRateController applicationRoleBasedAccessController;

          /** The key for getting the throttling policy - key refers to a/an [registry] entry    */
          private String policyKey = null;
          /** The concurrent access control group id */
          private String id;
          /** Version number of the throttle policy */
          private long version;

          public IPBasedThrottleHandler() {
              this.applicationRoleBasedAccessController = new RoleBasedAccessRateController();
          }

          public boolean handleRequest(MessageContext messageContext) {
              return doThrottle(messageContext);
          }

          public boolean handleResponse(MessageContext messageContext) {
              return doThrottle(messageContext);
          }

          private boolean doThrottle(MessageContext messageContext) {
              boolean canAccess = true;
              boolean isResponse = messageContext.isResponse();
              org.apache.axis2.context.MessageContext axis2MC = ((Axis2MessageContext) messageContext).
                      getAxis2MessageContext();
              ConfigurationContext cc = axis2MC.getConfigurationContext();
              synchronized (this) {

                  if (!isResponse) {
                      initThrottle(messageContext, cc);
                  }
              }

              // If the access succeeds through the concurrency throttle and this is a request message,
              // then do access rate based throttling
              if (!isResponse && throttle != null) {
                  AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(messageContext);
                  String tier;

                  if (authContext != null) {
                      AccessInformation info = null;
                      try {

                          String ipBasedKey = (String) ((TreeMap) axis2MC.
                                  getProperty("TRANSPORT_HEADERS")).get("X-Forwarded-For");
                          if (ipBasedKey == null) {
                              ipBasedKey = (String) axis2MC.getProperty("REMOTE_ADDR");
                          }
                          tier = authContext.getApplicationTier();
                          ThrottleContext apiThrottleContext =
                                  ApplicationThrottleController.
                                          getApplicationThrottleContext(messageContext, cc, tier);
                          //    if (isClusteringEnable) {
                          //      applicationThrottleContext.setConfigurationContext(cc);
                          apiThrottleContext.setThrottleId(id);
                          info = applicationRoleBasedAccessController.canAccess(apiThrottleContext,
                                                                                ipBasedKey, tier);
                          canAccess = info.isAccessAllowed();
                      } catch (ThrottleException e) {
                          handleException("Error while trying evaluate IPBased throttling policy", e);
                      }
                  }
              }

              if (!canAccess) {
                  handleThrottleOut(messageContext);
                  return false;
              }

              return canAccess;
          }

          private void initThrottle(MessageContext synCtx, ConfigurationContext cc) {
              if (policyKey == null) {
                  throw new SynapseException("Throttle policy unspecified for the API");
              }

              Entry entry = synCtx.getConfiguration().getEntryDefinition(policyKey);
              if (entry == null) {
                  handleException("Cannot find throttling policy using key: " + policyKey);
                  return;
              }
              Object entryValue = null;
              boolean reCreate = false;

              if (entry.isDynamic()) {
                  if ((!entry.isCached()) || (entry.isExpired()) || throttle == null) {
                      entryValue = synCtx.getEntry(this.policyKey);
                      if (this.version != entry.getVersion()) {
                          reCreate = true;
                      }
                  }
              } else if (this.throttle == null) {
                  entryValue = synCtx.getEntry(this.policyKey);
              }

              if (reCreate || throttle == null) {
                  if (entryValue == null || !(entryValue instanceof OMElement)) {
                      handleException("Unable to load throttling policy using key: " + policyKey);
                      return;
                  }
                  version = entry.getVersion();

                  try {
                      // Creates the throttle from the policy
                      throttle = ThrottleFactory.createMediatorThrottle(
                              PolicyEngine.getPolicy((OMElement) entryValue));

                  } catch (ThrottleException e) {
                      handleException("Error processing the throttling policy", e);
                  }
              }
          }

          public void setId(String id) {
              this.id = id;
          }

          public String getId() {
              return id;
          }

          public void setPolicyKey(String policyKey) {
              this.policyKey = policyKey;
          }

          public String getPolicyKey() {
              return policyKey;
          }

          private void handleException(String msg, Exception e) {
              log.error(msg, e);
              throw new SynapseException(msg, e);
          }

          private void handleException(String msg) {
              log.error(msg);
              throw new SynapseException(msg);
          }

          private OMElement getFaultPayload() {
              OMFactory fac = OMAbstractFactory.getOMFactory();
              OMNamespace ns = fac.createOMNamespace(APIThrottleConstants.API_THROTTLE_NS,
                                                     APIThrottleConstants.API_THROTTLE_NS_PREFIX);
              OMElement payload = fac.createOMElement("fault", ns);

              OMElement errorCode = fac.createOMElement("code", ns);
              errorCode.setText(String.valueOf(APIThrottleConstants.THROTTLE_OUT_ERROR_CODE));
              OMElement errorMessage = fac.createOMElement("message", ns);
              errorMessage.setText("Message Throttled Out");
              OMElement errorDetail = fac.createOMElement("description", ns);
              errorDetail.setText("You have exceeded your quota");

              payload.addChild(errorCode);
              payload.addChild(errorMessage);
              payload.addChild(errorDetail);
              return payload;
          }

          private void handleThrottleOut(MessageContext messageContext) {
              messageContext.setProperty(SynapseConstants.ERROR_CODE, 900800);
              messageContext.setProperty(SynapseConstants.ERROR_MESSAGE, "Message throttled out");

              Mediator sequence = messageContext.getSequence(APIThrottleConstants.API_THROTTLE_OUT_HANDLER);
              // Invoke the custom error handler specified by the user
              if (sequence != null && !sequence.mediate(messageContext)) {
                  // If needed user should be able to prevent the rest of the fault handling
                  // logic from getting executed
                  return;
              }

              // By default we send a 503 response back
              if (messageContext.isDoingPOX() || messageContext.isDoingGET()) {
                  Utils.setFaultPayload(messageContext, getFaultPayload());
              } else {
                  Utils.setSOAPFault(messageContext, "Server", "Message Throttled Out",
                                     "You have exceeded your quota");
              }
              org.apache.axis2.context.MessageContext axis2MC = ((Axis2MessageContext) messageContext).
                      getAxis2MessageContext();

              if (Utils.isCORSEnabled()) {
                  /* For CORS support adding required headers to the fault response */
                  Map headers = (Map) axis2MC.getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
                  headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_ORIGIN, Utils.getAllowedOrigin((String)headers.get("Origin")));
                  headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_METHODS, Utils.getAllowedMethods());
                  headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_HEADERS, Utils.getAllowedHeaders());
                  axis2MC.setProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS, headers);
              }
              Utils.sendFault(messageContext, HttpStatus.SC_SERVICE_UNAVAILABLE);
          }
      }

As shown above, your custom handler class is "org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler", so the handler definition for your API will be the following.


      <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler">
          <property name="id" value="A"/>
          <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
      </handler>
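
For context, here is a minimal sketch of where this handler could sit inside the <handlers> section of the API's synapse configuration. The surrounding handler list is only illustrative (your generated API file will contain additional handlers and parameters); the important point is that the custom handler should come after the API authentication handler, because it reads the AuthenticationContext populated during authentication.

      <handlers>
          <!-- Authentication runs first so the AuthenticationContext used by
               IPBasedThrottleHandler is already populated when it executes -->
          <handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
          <!-- Custom IP based throttling handler -->
          <handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler">
              <property name="id" value="A"/>
              <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
          </handler>
      </handlers>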

Then try to invoke the API and see how the throttling works.
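
If throttling does not kick in as expected, check the throttle policy that policyKey points to (gov:/apimgt/applicationdata/tiers.xml in the registry). The sketch below is only illustrative; the tier name and limits are assumptions, so compare it with the actual tiers.xml in your API Manager setup. Roughly, a tier definition in that policy has this shape, and its ID needs to match the application tier name the handler passes to canAccess():

      <wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                  xmlns:throttle="http://www.wso2.org/products/wso2commons/throttle">
          <throttle:MediatorThrottleAssertion>
              <wsp:Policy>
                  <!-- Tier name; should match the value returned by authContext.getApplicationTier() -->
                  <throttle:ID throttle:type="ROLE">Gold</throttle:ID>
                  <wsp:Policy>
                      <throttle:Control>
                          <wsp:Policy>
                              <!-- Illustrative limit: 20 requests per 60000 ms (one minute) per IP key -->
                              <throttle:MaximumCount>20</throttle:MaximumCount>
                              <throttle:UnitTime>60000</throttle:UnitTime>
                          </wsp:Policy>
                      </throttle:Control>
                  </wsp:Policy>
              </wsp:Policy>
          </throttle:MediatorThrottleAssertion>
      </wsp:Policy>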

      John MathonA case of disruption gone wrong? Uber


      June 2015 – France Taxi Drivers revolt, judge arrests 2 uber officials for illegal operation

      March 2015 – Netherlands – preliminary judgment that Uber must stop its UberPop service

       June 2015 – San Francisco – The California Labor Commission ruled an Uber driver should be considered a company employee, not an independent contractor

      May 2015 – Mexico – Hundreds of taxi drivers protested

      April 2015 –  Chicago – an Uber driver shot a 22-year-old man who had opened fire on a group of pedestrians in Chicago

       April 2015 – Brazil – Sao Paulo court backs taxi drivers, bans Uber

      April 2015 –  San Francisco – an Uber driver accused of running down a bicyclist 

      March 2015 –  U.N. women’s agency backed out of a partnership with Uber after a protest by trade unions and civil society groups.

      January 2015 – China –  government bans drivers of private cars from offering their services through taxi-hailing apps.

      January 2015 – India – New Delhi police say service is re-instated after death of woman. 

      December 2014 – Spain – Uber’s temporary suspension


      “Disruption” gone awry

This could be a terrific example of “disruption” gone wrong, or perhaps not.

In the traditional disruption model, a company produces a product at lower cost or with better features and eats away at the lower end of the dominant players' market. This model leads to little awareness of the disruption in play. The bigger companies usually happily give away the low-margin business initially eaten by the new entrants.

In the case of Uber we have a different story. Uber is displacing regular taxi drivers around the world. In other industries that have experienced the pain of disruption, such as the car workers in Detroit, there has rarely been this kind of outcry, especially against the disrupter, which in many cases may be the next employer these workers end up working for. I have met many former taxi cab drivers who are happily Uber drivers now.

So, what is the reason for the more vociferous response to Uber's entrance? There could be many reasons; I go through the possibilities one by one below.

Let's review the Uber model and approach as I understand it. I don't claim any special knowledge of Uber's business practices beyond what I've learned in talking to drivers and seeing the same news stories everybody else has.

      Uber is quite forceful

Uber has moved into 200+ cities and 50 countries, setting up shop and “using” locals, within a few short years. The rapidity with which Uber has been transforming this staid and seemingly permanently unchangeable business has definitely been a shock to many people.

Uber has been quite heavy handed in its approach to penetrating foreign and US markets. They have been aggressive in their hiring tactics and competitive strategies. Whether legitimate or not, these have raised considerable controversy for being unique. Lyft, a comparable service, doesn't garner quite the same antagonism, so this could be related to Uber's tactics and public relations.

They suffered a public humiliation recently when a VP held an “off the record” meeting in which he explained how Uber was tracking people who were critical of it and was considering revealing personal details of riders who criticized Uber as retribution. The VP named a specific individual whose travel records he had looked into and whom he could harm by revealing her personal information. He suggested a multi-million dollar program like this could help Uber clean up its reputation with the media and the public. I'm not joking. You can look through my tweets at @john_mathon to see how I called on the president of Uber to fire this individual and to institute new policies.

The main problem Uber seems to have is that they run afoul of local regulations, ignore the system that exists, and try to establish that they are different and can do it their way.

      Uber seeks to exist outside the regulations

They claim they are unregulated because they connect drivers and passengers via the internet and cell phone apps, which are not explicitly specified in the regulations of any country. This is merely an oversight, however. Most countries and cities that regulate things like this will rapidly add clauses covering the types of services Uber delivers. How to regulate them is not clear, which leads many places to want to ban the service until they work out the laws or until more of a consensus emerges on how to deal with such a service.

Uber's model is inherently a lower-cost way of providing workers, which means they consistently offer a lower-cost service than local companies can. This obviously disrupts the local taxi drivers and creates demand. They purposely avoid compliance with local regulations, seeking to keep the model they started with unchanged. They avoid the training many countries demand of taxi drivers, and they eschew employing locals or dealing with medallions and other local regulations, always seeking, where possible, to stay outside the regulated definition of “taxi” services through their simple hands-off approach.

      Uber vs Conventional Taxis

      The traditional taxi ride

I am going to start by saying I exclude London taxis. I have had the greatest experiences in London taxis. I have found the drivers engaging, always interesting to talk to, always knowledgeable, and the service and the vehicles impeccable. I estimate that across all my trips I have taken 500 taxi rides in London. Also, I have rarely ever had a real problem getting a taxi in London. They really are an exception in my opinion.

      The rest of the world:

In my history I have been ripped off by cab drivers more times than I care to admit. I have been taken far afield of where I wanted to go, either on purpose or accidentally, on too many occasions. I constantly find taxi drivers I have to give directions to because they have no idea how to get to my destination. I have had taxi drivers who stink or smell of drugs, and taxi cars that felt unsafe, smelled, or were unhygienic. I am sure many of the cabs I've been in were in violation of several motor vehicle laws. I've had trouble communicating with drivers, drivers I've fought with, drivers who seemed incompetent or dangerous to drive with, and drivers who were rude to me and to other drivers or people. I've sometimes found it impossible to find a taxi because of the load or strikes, even after looking for an HOUR. I remember several drives where I feared for my life. I've been in taxis that have had accidents while I was in them. I have also felt ripped off even by normal taxi fares, sometimes paying over $100 for a simple drive from SFO to home less than 20 miles away, even when that is the legitimate fare.

      Overall, the situation has improved over the years but it still leads me to trepidation when getting into a Taxi.  I always make sure they are a real taxi.  I have been hassled by too many hucksters seeking to rip me off.  I now track my ride all the time with a mapping app to make sure I am going to the right place or the best route.  I make sure to always insist on a meter taxi.  Even with precautions the number of bad experiences is still too many.  This is one reason I think many people want an alternative.

      In Sydney recently I was shocked when locals told me they hated their local taxi drivers.   Apparently this is a common perception because I went to a comedy show in London soon after and the comedian (from Sydney) was making a lot of fun of the Sydney taxi drivers.

      My Uber experience

I have taken Uber in countries all over the world, in Asia and Europe as well as in half a dozen cities in America. My experience is uniformly much better than cabs, except in London. Uber drivers are rated after each ride. They can be booted from the system if their rating falls even a fraction of 1 point. Several disgruntled passengers early in an Uber driver's career will doom them and their income. As a result the system works incredibly well. The Uber drivers have always been incredibly pleasant, talkative and helpful when needed. They have gone out of their way to help me.

In a few of the rides the cars were maybe 5 or more years old. Still, compared to the 10 or 20 or 30 year old taxis I've been in, they seem positively new. I've noticed that Uber drivers almost always soon get late model cars, usually 2 years old or less. They have ranged from BMWs to fancy Japanese brands. They usually have a range of comfort features, including excellent air conditioning and heating, as well as being universally clean and hygienic. I really am not being paid in any way by Uber. This is my actual experience.

When I read of people who have had bad experiences in Uber taxis I am not entirely surprised. The law of averages means that some crazy incidents will occur: with millions of rides and tens of thousands of drivers you are going to run into every situation possible.

      I have a couple complaints.

1) I frequently have found the process of finding your Uber driver problematic. The Uber driver does not get the address you are located at even if you type it in; apparently this is considered a security risk. So frequently I end up texting the driver to tell him where I am, and it costs us a couple of minutes before I finally get in the car.

2) I believe the surge pricing system needs to be modified. I understand all that goes into the current system, but I find it very irritating. I have a friend who uses Uber a lot more than me. He says that surge zones can be quite small and a driver can move into a surge zone to “up” their fees. He claims that on more than one occasion a driver cancelled his ride only for surge pricing to go into effect immediately, so that when he got the next Uber he was paying 2 or 3 times what the fare would have been just 2 minutes earlier. He claims Uber doesn't care if drivers abuse the system this way. I don't know how much this is done, but I avoid surge pricing.

      The Uber model as I understand it

Uber recruits drivers aggressively. This has been the subject of some concern to competitors, who claim Uber actually employs people to ride in competitors' taxis and recruit the drivers they like, going only a block if they feel there is no chance of recruiting the driver.

A driver for Uber usually receives a phone from Uber. They also have many rules and standards they ask drivers to adhere to. Uber does not help pay for the vehicle, the health insurance, car insurance or anything else for the driver, although I understand they do help arrange group plans and reduced rates for some policies. Uber takes 20% of the driver's fare. This is far less than other types of service take, so taxi drivers feel like they make more from Uber. Many taxi drivers have told me they get more rides on Uber, and in spite of the lower fares they make more money. Many of the drivers drive late model fancy vehicles that would seem to be outside their price range. I believe they are able to deduct the vehicle on their taxes, in the US at least, which would greatly reduce its cost.

The main way Uber seems to have of enforcing its standards is effectively the same rating system originally created and used by eBay. The rating system has been incredibly effective for eBay, which has grown to do billions of transactions a day efficiently and with few problems. I know how many transactions eBay does because they use WSO2 software to mediate all their messages to and from mobile applications, web services and all their services. On peak days the number of transactions routinely exceeds 10 billion. This is a well oiled machine. People are remarkably concerned about their reputation in such rating systems. You can imagine that for Uber, where your very livelihood is in jeopardy, drivers are going to want you to give them a 5 star rating every time. That explains pretty clearly why the service I've experienced is so good.

Another important selling point of the Uber system is its “first mover advantage.” I believe this is very significant. One of the big advantages of Uber is that I am a known quantity on their system wherever I go. Also, they are a known quantity to me. I can go to Paris, Sydney, New York or one of the more than 200 cities in the world where they have drivers and be assured I'm not going to be ripped off, and that I'll get generally the same quality of service. I don't have to worry about local currency and the other issues I've mentioned. So, I may have 2 or 3 taxi app services on my phone, but I won't be subscribing to every country's local app-based taxi service. I will naturally want to use the ones that work in most or all of the places I go. There is tremendous pressure for Uber to expand to maintain its first mover advantage in as many markets as possible.

      Summary Comparison

This is simple. I get rides predictably from Uber, where with traditional taxis I may find I wait for an hour or more in some cases. This is especially a problem if you, like me, have to make meetings and need to be sure of getting a ride. I can take an Uber anywhere in the world and not worry about being ripped off. I don't hassle with local currency, tipping rules or the whole money exchange process, which typically adds a tedious and problematic end to the taxi ride. I walk out of the car as soon as I get to the destination, which is so liberating. I have never had an Uber driver take me to the wrong location or take me on a circuitous path. The drivers are friendly, the cars clean, in good working order and frequently as nice as any car you could be in. This applies whether I have been in Paris, many countries in Asia including China, London and other places in Europe. It applies whether in Florida or Nevada, Boston or New York. The Uber fare is always surprisingly lower than the comparable local fare. The only exception to this would be during surge pricing, or taking a TukTuk in Asian countries; there doesn't seem to be an “Uber TukTuk” service.

      The Riots and Objections

I've spoken to many people and read many articles which seem to assume that because Uber drivers are not regulated by some government, they must be criminals, loaded up on drugs, dangerous, unsafe. The refrain is that you don't know who you're getting. However, I have no idea why people reach this conclusion. It makes no sense, as you have even less idea of who is driving a local taxi. The Uber system, like eBay's, seems to put a far greater onus on drivers to behave well than the oversight people seem to ascribe to local regulatory authorities. They also seem to attract a more intelligent driver, in my experience. However, in spite of this, many people have an innate hostility to Uber and its service.

Let's take each of these possible reasons and consider their validity as objectively as I can.

1) Uber is “disrupting” in foreign countries which are not used to disruption

      This seems clearly to have some truth to it.  Many countries haven’t seen a Toyota come in and displace millions of workers because in most cases they didn’t have an indigenous car manufacturing industry.   Many other disruptions have happened against high tech or large industrial companies which have high paid workers who usually aren’t protected as lower paid workers are.   So most people in the world and countries are not used to disruption like this.   It has come as a surprise to many people that Uber could offer a service and succeed in their markets.  Change itself is disturbing to people not used to it.

2) Many countries may be much less “docile” about labor rights than the US

Uber's model means that they don't employ the drivers. A driver may receive a bad rating and lose their contract tomorrow. Uber takes 20%, leaving the driver to pay for their car, health and car insurance, maintenance etc. For most drivers this seems to result in a lot more money for them, at cheaper fares for passengers, with Uber still hauling in billions in income, but the drivers have no guaranteed income. Nonetheless, this is a win-win-win if I've ever seen one; the downside is that drivers have none of the “protection” that many countries consider important.

California recently ruled that a driver was really an employee. California is a particular stickler about contractors, always trying to find a way to get more tax revenues. I seriously doubt California is concerned with the drivers' health care or unemployment insurance or whatever. However, the point is valid. If Uber employed its drivers instead of using them as contractors they would have to change the formula drastically and possibly raise rates.

      In most cases, becoming an employee would mean Uber would pay your taxes and insurance costs, possibly buy your vehicle, maintain it, similar to how many taxi companies work.  Another even more significant point is that Uber’s ability to fire an employee for a couple low ratings might disappear.   It might unravel the Uber model but I don’t think so.   I think they could still find a way to make the system work.   It would take changing their system, taking on additional liabilities and costs.  A lot of regulation to deal with and more hassle but they could do this and still maintain the basics of their service.   I think some countries or states or cities will require things like this and Uber will eventually have to deal with variations in its model.

      3) Politicians and others see an opportunity to gain traction with voters by siding with existing taxi drivers or nationalist sentiment

      I won’t venture to accuse any individual politician but this kind of thing must be happening.

4) Graft, i.e. people paid off to present obstacles to Uber

Again, I have no way of knowing whether such techniques are in play in some places, but common sense suggests it must be happening. The opposite could be happening as well. Unfortunately, in my career I have known of situations where we lost deals because we didn't make appropriate contributions. Fortunately I have worked for companies that refused to engage in such behavior, and I know we lost deals as a result. The fact is such behavior is more common than many people assume.

      5) Genuine concerns that Uber is trampling on people

As I mentioned earlier, many people may believe that Uber does in fact trample on people. This is basically a political point and arguing it would be a waste of time for me. The problem for Uber is that it is unlikely they can change the political situation of all the countries they want to operate in. So, they are going to have to make concessions in their business model eventually. They will presumably fight this as long as they can, but at the cost of being portrayed as the villain.

      6) The pace of change Uber is forcing on people is too fast

Obviously Uber wants to grow as fast as it can and establish a foothold wherever it can. They are moving at a blistering pace in acquiring new markets. In May they raised $1 billion just to expand in China. Uber is reportedly already the largest call taxi service in many Chinese cities.

People in general can be resistant to change. For a business that has seen little impact from all the technology change of the last 50 years, the resistance is natural, but usually people don't start blowing things up because they fear change.

7) Uber's model is flawed and may need to adjust, especially in some countries, to fit in with local laws

Due to historical facts like medallions and local regulation of traditional taxi drivers, it is eminently possible that Uber has an unfair advantage. Frequently, local taxi cab drivers are employees and have costs and taxes that Uber drivers don't have. This is typical for disruptive companies. It is possible Uber will have to face special taxes or other restrictions which level the playing field.

A lot of people think Uber's advantage is its cost structure and lower fares. I don't find that to be the important part. To me the compelling advantages of Uber are in its service, as I have described. If their fares were the same as or even higher than traditional cabs, I would still pay for the convenience. So, I think they have considerable room for increased costs before it would really impact the business model. Others believe that if Uber has to change its model to employ people, or make other changes, it will kill their advantage. I think not.

      8) Uber has become a flashpoint that isn’t the real issue but a convenient scapegoat

Frequently one issue is used to deflect from the real purpose of something. It is very possible that some are using the fear of Uber to drive other political change for their own purposes, not because of a real concern about Uber's purported damage or risk. I find the claims of people who say they are concerned about rape by Uber drivers, or a lack of safety or regulation, disingenuous. There is no reason to believe that regular taxi cab drivers wouldn't be just as likely, or more likely, to be rapists. An incident in San Francisco claimed an Uber driver hit a biker on purpose. Maybe the driver did, but how many incidents have there been of local cab drivers doing the same thing?

Wired magazine wrote a review that said the Uber driver knows where you live, so of course you'll give 5 stars. Isn't it true that if I take a taxi to my home the driver will know where I live as well? What if I don't tip him or her well, or he or she is a nefarious person? People who would do something like that would be in serious trouble. It seems more likely that one of the taxi drivers I've ridden with would rob my home than an Uber driver.

The Boston.com article below is typical, pointing out that Uber drivers have to pay for their own expenses. It fails to mention that Uber drivers give up only 20% of the fare, so compared to regular taxi drivers they have ample income to pay for these costs.

I believe that people who make these claims are either very poorly informed or have ulterior motives. The writer of the Boston.com article never mentions experiences with regular cabs. Have those always been perfect?

      Other Articles on this topic of interest:

      A look at challenges Uber has faced around the world

      Uber problems keep piling on