WSO2 Venus

Shiva Balachandran: The basic need-to-knows before setting up your website!


Block 3 – On the rise.

Originally posted on Block Three Creative:

Okay, so you decided to set up your website. That's great! But here are some need-to-knows before you take the leap onto the internet.

1) DOMAIN NAMES

“A domain name is a unique name that identifies a website….Each website has a domain name that serves as an address, which is used to access the website.” definition via www.techterms.com

First, you need to make sure the domain name you're looking for is available. You can do that by visiting domain name sellers like domains.inowebz.net.


If it is available, your next step will be to purchase the domain! If you're unlucky, you can either go for another domain, or the seller will suggest other available alternatives.


Side note: purchasing your domain name for more than one year eases budget constraints down the line, since you are not tied to yearly renewals.



Madhuka Udantha: Workflows for Git

There are many workflows for Git:

  • Centralized Workflow
  • Feature Branch Workflow
  • Gitflow Workflow
  • Forking Workflow


In the Centralized Workflow, the team develops the project in much the same way as with Subversion. Still, using Git to power your development workflow presents a few advantages over SVN. First, it gives every developer their own local copy of the entire project. This isolated environment lets each developer work independently of all other changes to the project: they can add commits to their local repository and completely ignore upstream developments until it is convenient for them.

The idea behind the Feature Branch Workflow is that all feature development takes place in a dedicated branch instead of the master branch. This encapsulation makes it easy for multiple developers to work on a particular feature without disturbing the main codebase. It also means the master branch never contains broken code.

The Gitflow Workflow provides a robust framework for managing larger projects. It assigns very specific roles to different branches and defines how and when they should interact. You also get to leverage all the benefits of the Feature Branch Workflow.

The Forking Workflow is fundamentally different from the other workflows. Instead of using a single server-side repository to act as the "central" codebase, it gives every developer their own server-side repository. Developers push to their own server-side repositories, and only the project maintainer can push to the official repository. The result is a distributed workflow that provides a flexible way for large, organic teams (including untrusted third parties) to collaborate securely. This also makes it an ideal workflow for open source projects.

Dhananjaya Jayasinghe: WSO2 API Manager - API is not visible to the public

WSO2 API Manager releases new versions from time to time, so people migrate from old versions to new ones. After such a migration, people sometimes experience problems like:


  • API is not visible at all in API Store
  • API is not visible to the public, but can be seen after logging in.

API is not visible at all in API Store


This can be due to a problem with the indexing of APIs. WSO2 APIM provides its API search capability through its Solr-based indexing feature. When indexing goes wrong, the migrated APIs may not be displayed at all.


How to fix it?

You need to let APIM build the index again. To do that, follow these steps.

1. Remove (or back up) the solr directory located in the WSO2 AM home directory.
2. Change the value of the "lastAccessTimeLocation" property in the registry.xml file located in WSO2AM/repository/conf to an arbitrary value.

Eg: By default the value of the above entry is as follows :

/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime

You can change it to 

/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1 



Note: The above entry contains the last time indexing was run on WSO2 AM, in milliseconds. When we change it to a path where no resource exists, APIM will run the indexing again and rebuild the contents of the solr directory.

3. After the above steps, restart the server and let it idle for 3-5 minutes. If the problem was with API indexing, you will then be able to see the APIs.




API is not visible to the public, but can be seen after logging in

This can be caused by a permission issue on the migrated API. By default, if we create an API with its visibility set to public, APIM will create a registry resource for that API with read permission for the "system/wso2.anonymous.role" role.

E.g., if I create an API called foo with visibility set to public, I can see the following permissions in the registry.



So I can see my API without logging in to the API Store, as below.



If I remove the anonymous permission from the registry resource, as below, it will not be visible to the public.



So, if you are experiencing a problem like this, search for the API in the registry and check whether it has read permission for the role "system/wso2.anonymous.role". If not, check by adding that permission.

If it then works fine, check your migration script for the problem of not migrating the permissions correctly.


Dinusha Senanayaka: WSO2 App Manager 1.0.0 released

WSO2 App Manager is the latest product added to the WSO2 product stack.

App Manager can work as an app store for web apps and mobile apps while providing a whole set of other features:


  • Single sign on/ Single sign out between web apps
  • Role/permission based access control for web apps
  • Capability to configure federated authentication for web apps
  • Subscription maintenance in App Store
  • Commenting, Rating capabilities in App Store
  • Statistic monitoring for apps usage 

The above are the core features that come with App Manager. Have a look at the App Manager product page to get an idea of its other features and capabilities.

http://wso2.com/products/app-manager/

Isuru Perera: Flame Graphs with Java Flight Recordings

Flame Graphs


Brendan D. Gregg, who is a computer performance analyst, created Flame Graphs to visualize stack traces in an interactive way.

You must watch his talk at USENIX/LISA13, titled Blazing Performance with Flame Graphs, which explains Flame Graphs in detail.

There can be different types of flame graphs and I'm focusing on CPU Flame Graphs with Java in this blog post.

Please look at the Flame Graphs Description to understand the Flame Graph visualization.

CPU Flame Graphs and Java Stack Traces


As Brendan mentioned in his talk, understanding why CPUs are busy is very important when analyzing performance.

CPU Flame Graphs is a good way to identify hot methods from sampled stack traces.

In order to generate CPU Flame Graphs for Java Stack Traces, we need a way to get sample stack traces.

Brendan has given examples using jstack and Google's lightweight-java-profiler. Please refer to his perl program for generating CPU Flame Graphs from jstack, and his Java Flame Graphs blog post on using the lightweight-java-profiler.

While trying out these examples, I was thinking whether we can generate a CPU Flame Graph from a Java Flight Recording Dump.

Hot Methods and Call Tree tabs in Java Mission Control are there to get an understanding of "hot spots" in your code. But I was really interested to see a Flame Graph visualization by reading the JFR dump. In this way, you can quickly see "hot spots" by using the Flame Graph software.

Note: JFR's method profiler is sampling based.


Parsing Java Flight Recorder Dump


In order to get sample stack traces, I needed a way to read a JFR dump (The JFR dump is a binary file).

I found a way to parse JFR dump file and output all data into an XML file. 

java oracle.jrockit.jfr.parser.Parser -xml /temp/sample.jfr > recording.xml

Even though this is an easy way, it takes a long time and the resulting XML file is quite large. For example, I parsed a JFR dump of around 61MB and the XML was around 5.8GB!

Then I found out about the Flight Recorder Parsers from Marcus Hirt's blog.

There are two ways to parse a JFR file.

  1. Using the Reference Parser - This API is available in Oracle JDK
  2. Using the JMC Parser - This is available in Java Mission Control.

For more info, see the Marcus' blog posts on Parsers. He has also given an example for Parsing Flight Recordings.

As stated in his blog, these APIs are unsupported and there is a plan to release a proper Parsing API with JMC 6.0 and JDK 9.

Converting JFR Method Profiling Samples to FlameGraph compatible format.


I wrote a simple Java program to read a JFR file and convert all stack traces from "Method Profiling Samples" to FlameGraph compatible format.

I used the JMC Parser in the program. I couldn't find a way to get Method Profiling Samples using the Reference Parser. I was only able to find the "vm/prof/execution_sample" events from the reference parser and there was no way to get the stack trace from that event.

The JMC Parser was very easy to use and I was able to get the stack traces without much trouble.

The code is available at https://github.com/chrishantha/jfr-flame-graph. Please refer to the README file for complete instructions on building, running, and generating a FlameGraph from a JFR dump.

Following is the FlameGraph created from a sample JFR dump.


[Interactive flame graph SVG omitted: "Flame Graph / Reset Zoom" view showing stacks rooted at java.lang.Thread.run(), with frames from org.wso2.example.JavaThreadCPUUsage and java.lang.StringBuilder]
I got the JFR dump by running a sample application, which consumes more CPU resources. Original source files were obtained from a StackOverflow answer, which explains a way to find a thread consuming high CPU resources. Please note that the package name and line numbers are different in the FlameGraph output when comparing with original source code in StackOverflow Answer. (I will try to share the complete source code later).

I used following JVM arguments:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=delay=1s,duration=20s,name=Fixed,filename=/tmp/highcpu.jfr,settings=profile -XX:FlightRecorderOptions=loglevel=info

Then I used following command to generate the FlameGraph

jfr-flame-graph$ ./run.sh -f /tmp/highcpu.jfr -o /tmp/output.txt
FlameGraph$ cat /tmp/output.txt | ./flamegraph.pl --width 550 > ../traces-highcpu.svg


Summary


  • This blog post explains a way to generate CPU Flame Graphs from a Java Flight Recording using a simple Java program.
  • Program is available at GitHub: https://github.com/chrishantha/jfr-flame-graph
  • The program uses the (unsupported) JMC Parser

Update


It's nice to see that Srinath's tweet got so many retweets!
Brendan has also mentioned my program on his Flame Graphs page!

Madhuka Udantha: Chart Types and Data Models in Google Charts

Different chart types need different data models. This post covers Google chart types and the data models they support.

Bar charts and column charts
Each bar of the chart represents the value of an element on the x-axis. Bar charts display tooltips when the user hovers over the data. The vertical version of this chart is called the 'column chart'.
Each row in the table represents a group of bars.
  • Column 0 : Y-axis group labels (string, number, date, datetime)
  • Column 1 : Bar 1 values in this group (number)
  • Column n : Bar N values in this group (number)



Area chart
An area chart or area graph displays quantitative data graphically. It is based on the line chart. The area between the axis and the line is commonly emphasized with colors, textures and hatchings.
Each row in the table represents a set of data points with the same x-axis location.
  • Column 0 : Y-axis group labels (string, number, date, datetime)
  • Column 1 : Line 1 values (number)
  • Column n : Line n values (number)


Scatter charts
Scatter charts plot points on a graph. When the user hovers over the points, tooltips are displayed with more information.



Each row in the table represents a set of data points with the same x-axis value.
  • Column 0 : Data point X values (number, date, datetime)
  • Column 1 : Series 1 Y values (number)
  • Column n : Series n Y values (number)
(This is only sample data, for illustrating the chart.)


Bubble chart
A bubble chart is used to visualize a data set with two to four dimensions. The first two dimensions are visualized as coordinates, the third as color and the fourth as size.
  • Column 0 : Name of the bubble (string)
  • Column 1 : X coordinate (number)
  • Column 2 : Y coordinate (number)
  • Column 3 : Optional. A value representing a color on a gradient scale (string, number)
  • Column 4 : Optional. A size; values in this column (number)


Bubble Name is  "January"
X =  22
Y =  12
Color = 15
Size  = 14


Summary of the data model and axes in chart types

The major axis is the axis along the natural orientation of the chart. For line, area, column, combo, stepped area and candlestick charts, this is the horizontal axis. For a bar chart it is the vertical one. Scatter and pie charts don't have a major axis. The minor axis is the other axis.
The major axis of a chart can be either discrete or continuous. When using a discrete axis, the data points of each series are evenly spaced across the axis, according to their row index. When using a continuous axis, the data points are positioned according to their domain value. The labeling is also different: on a discrete axis, the labels are the names of the categories, while on a continuous axis the labels are auto-generated.
Axes that are always continuous:
  • Scatter charts
  • Bubble charts
Axes that are always discrete:
  • The major axis of stepped area charts (and combo charts containing such series).

In line, area, bar, column and candlestick charts (and combo charts containing only such series), you can control the type of the major axis:

  • For a discrete axis, set the data column type to string.
  • For a continuous axis, set the data column type to one of: number, date, datetime.

Yumani Ranaweera: Adding a proxy server behind WSO2 ESB



When the message flow needs to be routed through a proxy, you need to add the following parameters to the transportSender configuration in axis2.xml:
http.proxyHost - the proxy server's host name
http.proxyPort - the port number of the proxy server
http.nonProxyHosts - any hosts that need to bypass the above proxy

Alternatively, you can set the Java networking properties:
-Dhttp.proxyHost=example.org -Dhttp.proxyPort=5678 -Dhttp.nonProxyHosts=localhost|127.0.0.1|foo.com

Sample:
This scenario illustrates how routing via proxy and non-proxy hosts happens. I have an echo service in an AppServer, fronted by the stockquote.org HTTP proxy. I also have SimpleStockQuoteService on localhost, which is set as a nonProxyHost.

I have my transport sender in axis2.xml configured as below:
<transportSender name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpSender">
 <parameter name="non-blocking" locked="false">true</parameter>
 <parameter name="http.proxyHost" locked="false">stockquote.org</parameter>
 <parameter name="http.proxyPort" locked="false">8080</parameter>
 <parameter name="http.nonProxyHosts" locked="false">localhost</parameter>
</transportSender>

Proxy route:
<proxy name="Echo_viaProxy" transports="https http" startOnLoad="true" trace="disable">
  <description/>
  <target>   
   <endpoint>       
    <address uri="http://xx.xx.xx.xxx:9765/services/Echo"/>    
   </endpoint>    
  <outSequence>      
   <send/>    
  </outSequence>  
 </target>
</proxy>

When you send a request to the above Echo_viaProxy, the request will be directed to stockquote.org, which will route it to the back-end (http://xx.xx.xx.xxx:9765/services/Echo).

nonProxy:
<proxy name="StockQuote_direct" transports="https http" startOnLoad="true" trace="disable">
  <description/>
  <target>
   <endpoint>
    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
   </endpoint>
   <outSequence>
    <send/>
   </outSequence>
  </target>
</proxy>
When you send a request to the above StockQuote_direct, the request will be served directly by SimpleStockQuoteService on localhost, bypassing the proxy.


known issue and fix:
https://wso2.org/jira/browse/ESBJAVA-3165 is fixed in 4.9.0 M4.

Lali Devamanthri: Alert high-severity JIRA issues through WSO2 ESB JIRA & Twilio connectors

The below section describes how to configure cloud to cloud integration with WSO2 ESB Connectors using WSO2 Developer Studio.

Scenario:

Query new (open) high-severity issues created in the JIRA system and alert on them by SMS.


The latest version of Developer Studio can be downloaded from [1]

Import connectors

Before you start creating ESB artifacts with connector operation, the necessary connectors should be imported into your workspace. You can download the ESB connectors from [2]

  • Create a  new ESBConfig project (SampleConnectorProject)
  • Right click on the created ‘SampleConnectorProject’ project and select ‘Import Connector’ from the context menu.


  • Then browse to the location of the connector zips and select the relevant zips (jira-connector.zip, twilio-connector.zip) to import.
  • Create a Sequence, Proxy or REST API, and the imported connectors will appear in the tool palette.


Create ESB Artifacts with Connector Operations

The detailed configurations on how to perform various operations on Jira Connector and Twilio connector can be found in [3] and [4] respectively.

  • Create a Sequence[5] with name ‘AlertSequence’
  • Connect to Jira


Drag and drop the ‘init’  JIRA operation from the tool palette before use any other Jira connector operations.

This configuration authenticates with Jira system by configuring the user credentials and login url. Provide username and password and url for Jira System.


  • Get high severity issues created.

Drag and drop ‘searchJira’ operation from the tool palette to retrieve data from Jira system.


In the property window, set the query to fetch open, highest-severity issues:

priority = Highest AND resolution = Unresolved AND status = Open


  • Loop through retrieved issues

The Jira system response has the following format.


According to the response, there are two open, high-priority issues. To loop through them, drop an Iterate mediator.



Set ‘Iterate Expression’ property to “//issues”of Iterator mediator.


  • Extract the issue link from iterated issue.

Drop a Property mediator into the iterator and set its values as follows. It will concatenate the issue link with a "WSO2 ALERT" message.


  • Connect to Twilio

Drop Twilio Init operation from palette and provide required account details to authenticate.


 

  • Send extracted issue link as a SMS alert

Drop a Twilio sendSMS operation.

Set ‘To’ value to receiver phone number . (‘From’  value needs to be find in your Twilio account).

Simply place the ‘body’ with property value.


  • It might be useful to add Log mediators to log the sequence status at intermediate points.



  • Triggering the sequence periodically

The ESB Scheduled Task component can be used to invoke the sequence we created. Create a Scheduled Task[6] named "AlertTask" in the same project.

 

In the properties, open the 'Task Properties' pop-up configuration window. Set 'sequenceName' to "AlertSequence", 'injectTo' to "sequence", and 'message'.


In the AlertTask properties, change 'interval' to 900, which is 15 minutes, and 'count' to -1.

Create the deployment archive to deploy in WSO2 ESB

  • Create a Composite Application Project (SampleCAPP) from Developer Studio and include the SampleConnectorProject as a dependency.


Deploying in WSO2 ESB

  • Download WSO2 ESB 4.8.0 from [7].
  • Install the Connectors (Jira Connector and Twilio Connector)  [8].

After installing the connectors in the ESB server, make sure to activate them.

  • Deploy the ‘SampleCAPP’ in Developer Studio[9]

Check issues on demand from a client REST application

This issue-review scenario is completely time-driven. If someone needs to check immediately whether there are high-priority issues, we should be able to invoke the sequence on demand.

Consider a user making the request from a REST client.

  • Create a REST API artifact in the SampleConnectorProject[10]

A REST API allows you to configure REST endpoints in the ESB by directly specifying HTTP verbs (such as POST and GET), URI templates, and URL mappings through an API.


  • Drop AlertSequence into the inSequence from the palette's Defined Sequences section.


Make sure to set the 'Continue Parent' property to 'true' on the Iterate mediator in AlertSequence.


  • Drop ‘Respond’ mediator. (this will redirect results to user)



Now deploy the SampleCAPP. In the management console, the REST APIs menu page will show the SampleRESTAPI and its API invocation URL. Using this URL, we can check for newly created high-priority issues whenever needed.


[1] http://wso2.com/products/developer-studio/

[2] https://github.com/wso2/esb-connectors/tree/master/distribution

[3] http://docs.wso2.org/display/ESB480/JIRA+Connector

[4] http://docs.wso2.org/display/ESB480/Twilio+Connector

[5] http://docs.wso2.org/display/DVS350/Creating+ESB+Artifacts#CreatingESBArtifacts-Workingwithsequences

[6] http://docs.wso2.org/display/DVS350/Creating+ESB+Artifacts#CreatingESBArtifacts-Creatingascheduledtask

[7] http://wso2.com/products/enterprise-service-bus/

[8] http://docs.wso2.org/display/ESB480/Managing+Connectors+in+Your+ESB+Instance

[9] http://docs.wso2.org/display/DVS350/Deploying+and+Debugging#DeployingandDebugging-DeployingaC-ApptoarunningserverinsideEclipse

[10] http://docs.wso2.org/display/DVS350/Creating+ESB+Artifacts#CreatingESBArtifacts-CreatingaRESTAPI


Keheliya Gallaba: A few tips on tweaking Elementary OS for Crouton

Activating Reverse Scrolling


If you're a fan of reverse scrolling (or natural scrolling, as some people call it) in Mac OS X, you can activate the same in Elementary OS by going to System Settings > Tweaks > General > Miscellaneous and turning on Natural Scrolling. But I noticed that in Elementary OS Luna for crouton, the setting resets to false after restarting. You can fix it by adding the following command as a startup application (System Settings > Startup Applications > Add):
/usr/lib/plugs/pantheon/tweaks/natural_scrolling.sh true


Getting back the minimize button


Method 1: Start dconf-editor and go to org > pantheon > desktop > gala > appearance and change "button layout" to re-order buttons in the window decoration. Eg: ":minimize:maximize:close"

Method 2: Run the following command.
gconftool-2 --set /apps/metacity/general/button_layout --type string ":minimize:maximize:close"

Note: To install dconf-editor use the command:
sudo apt-get install dconf-tools

To install elementary-tweaks use the command:
sudo add-apt-repository ppa:mpstark/elementary-tweaks-daily
sudo apt-get update
sudo apt-get install elementary-tweaks

Chandana Napagoda: Manage the SOAPAction of the Out Message

 
 When you are sending a request message to a backend service through WSO2 ESB, there could be some scenarios where you need to remove or change the SOAPAction header value.


Using the Header mediator and Property mediator available in WSO2 ESB, we can remove the SOAPAction or set it to empty.

Set SOAPAction as Empty:
<header name="Action" value=""/>
<property name="SOAPAction" scope="transport" value=""/>

Remove SOAPAction:
<header action="remove" name="Action"/> 
<property action="remove" name="SOAPAction" scope="transport"/>

Modify SOAPAction:

When setting the SOAPAction, one of the approaches below can be used:

1) <header name="Action" value="fixedAction"/>
2) <header expression="xpath-expression" name="Action"/>
More Info: Header Mediator
TCPMon:

If we need to monitor the messages passed between the ESB and the back-end service, we can place TCPMon[1] between the back-end and the ESB. Using TCPMon, we can monitor messages and their header information (including the SOAPAction).

At the bottom of TCPMon there is a control for viewing messages in XML format.

[1]. http://ws.apache.org/tcpmon/tcpmontutorial.html


Read more about WSO2 ESB: Enterprise Integration with WSO2 ESB

Pavithra Madurangi: Configuring Active Directory (Windows 2012 R2) to be used as a user store of WSO2 Carbon based products

The purpose of this blog post is not to explain the steps for configuring AD as the primary user store; that information is covered in the WSO2 documentation. My intention is to give some guidance on how to configure an AD LDS instance to work over SSL and how to export/import certificates into the trust store of WSO2 servers.

To achieve this, we need to

  1. Install AD on Windows 2012 R2
  2. Install AD LDS role in Server 2012 R2
  3. Create an AD LDS instance
  4. Install Active Directory Certificate Service in the Domain Controller (Since we need to get AD LDS instance work over SSL)
  5. Export certificate used by Domain Controller.
  6. Import the certificate to client-truststore.jks in WSO2 servers.

This information is also covered in the following two great blog posts by Suresh, so my post will be an updated version of them, filling some gaps and linking some missing bits and pieces.


1. Assume you have only installed Windows 2012 R2 and now you need to install AD too. Following article clearly explains all the steps required.


Note: As mentioned in the article itself, it is written assuming that there is no existing Active Directory forest. If you need to configure the server to act as the Domain Controller for an existing forest, then the following article will be useful.


2) Now you've installed Active Directory Domain Service and the next step is to install AD LDS role. 

- Start - > Open Server Manager -> Dashboard and Add roles and feature

- In the popup wizard, Installation type -> select Role-based or feature based option and click the Next button. 

- In the Server Selection, select current server which is selected by default. Then click Next.


- Select AD LDS (Active Directory Lightweight Directory Service ) check box in Server Roles  and click Next.


- Next you'll be taken through wizard and it will include AD LDS related information. Review that information and click Next.

- Now you'll be prompted to select optional feature. Review it and select the optional features you need (if any) and click next.

- Review installation details and click Install.

- After successful AD LDS installation you'll get a confirmation message.

3. Now let's create an AD LDS instance. 

- Start -> Open Administrative Tools.  Click Active Directory Lightweight Directory Service Setup Wizard.


-  You'll be directed to Welcome to the Active Directory Lightweight Directory Services Setup Wizard. Click Next.

- Then you'll be taken to Setup Options page. From this step onwards, configuration is same as mentioned in 


4. As explained in above blog, if you pick Administrative account for the service account selection, then you won't have to specifically create certificates and assign them to AD LDS instance. Instead the default certificates used by the Domain Controller can be accessed by AD LDS instance.

To achieve this, let's install certificate authority on Windows 2012 server (if it's not already installed). Again I'm not going to explain it in details because following article covers all required information


5. Now let's export the certificate used by Domain controller

- Go to MMC (Start -> Administrative tools -> run -> MMC)
- File -> Add or Remove Snap-ins
- Select certificates snap-in and click add.


-Select computer account radio button and click Next.
- Select Local computer and click Finish.

Now restart the Windows server.

- In MMC, click on Certificates (Local Computer) -> Personal -> Certificates.
- There you'll find bunch of certificates.
- Locate root CA certificate, right click on it -> All Tasks and select Export.

Note: The intended purpose of this certificate is 'All' (not purely server authentication). It is possible to create a certificate just for server authentication and use it for LDAPS authentication; [1] and [2] explain how that can be achieved.

For the moment I'm using the default certificate for LDAPS authentication.


- In the Export wizard, select Do not export private key option and click Next.
- Select DER encoded binary X.509 (.cer) format and provide a location to store the certificate.

6. Import the certificate to trust store in WSO2 Server.

Use the following command to import the certificate into the client-truststore.jks found inside CARBON_HOME/repository/resources/security.

keytool -import -alias adcacert -file /cert_home/cert_name.cer -keystore CARBON_HOME/repository/resources/security/client-truststore.jks -storepass wso2carbon

After this, configuring user-mgt.xml and tenant-mgt.xml is same as explained in WSO2 Documentation.





Chanaka Fernando: WSO2 ESB performance tuning with threads

I have written several blog posts explaining the internal behavior of the ESB and the threads created inside ESB. With this post, I am talking about the effect of threads in the WSO2 ESB and how to tune up threads for optimal performance. You can refer [1] and [2] to understand the threads created within the ESB.

[1] http://soatutorials.blogspot.com/2015/05/understanding-threads-created-in-wso2.html

[2] http://wso2.com/library/articles/2012/03/importance-performance-wso2-esb-handles-nonobvious/

In this blog post, I am discussing the "worker threads", which are used for processing data within WSO2 ESB. There are 2 types of worker threads created when you start sending requests to the server:

1) Server Worker/Client Worker Threads
2) Mediator Worker (Synapse-Worker) Threads


Server Worker/Client Worker Threads

This set of threads is used to process all the requests/responses coming to the ESB server. ServerWorker threads are used to process the request path and ClientWorker threads are used to process the responses.


Mediator Worker (Synapse-Worker) Threads

These threads are only started if you have Iterate/Clone mediators in your ESB mediation flow. They are used for running iterate/clone operations in separate threads, allowing parallel processing of a single request.


WSO2 ESB uses the java ThreadPoolExecutor implementation for spawning new threads for processing requests. Both the above mentioned thread categories will be using this implementation underneath.

The java.util.concurrent.ThreadPoolExecutor is an implementation of the ExecutorService interface. The ThreadPoolExecutor executes the given task (Callable or Runnable) using one of its internally pooled threads.

The thread pool contained inside the ThreadPoolExecutor can contain a varying amount of threads. The number of threads in the pool is determined by these variables:
  • corePoolSize
  • maximumPoolSize



If fewer than corePoolSize threads are running when a task is delegated to the thread pool, then a new thread is created, even if idle threads exist in the pool.


If the internal queue of tasks is full, and corePoolSize or more threads are running, but fewer than maximumPoolSize threads are running, then a new thread is created to execute the task.
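To make these rules concrete, below is a minimal, self-contained Java sketch (plain java.util.concurrent, not WSO2 ESB code) showing a pool with a core size, a larger maximum size, a keep-alive time for excess threads and a bounded task queue. The worker_pool_size_core, worker_pool_size_max, worker_thread_keepalive_sec and worker_pool_queue_length settings described below conceptually map onto the same constructor arguments; the numbers here are illustrative only.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkerPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Core pool of 4 threads, growing up to 8 only when the queue is full.
        // Excess (non-core) threads are reclaimed after 60 seconds of idling.
        // The bounded queue holds at most 100 pending tasks; once the queue is full
        // and all 8 threads are busy, further submissions are rejected
        // (RejectedExecutionException) instead of exhausting memory.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                    // corePoolSize
                8,                                    // maximumPoolSize
                60, TimeUnit.SECONDS,                 // keep-alive for excess threads
                new ArrayBlockingQueue<Runnable>(100));

        for (int i = 0; i < 50; i++) {
            final int taskId = i;
            pool.execute(new Runnable() {
                public void run() {
                    // Simulated work; in the ESB this would be message processing.
                    System.out.println("Task " + taskId + " on " + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}

Swapping the bounded queue for an unbounded one (for example new LinkedBlockingQueue<Runnable>()) corresponds to worker_pool_queue_length=-1: nothing is ever rejected, but the pool also never grows past the core size, and a traffic spike can queue up enough work to exhaust memory.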

These parameters of the thread pools can be configured in the following configuration files in WSO2 ESB.

ServerWorker/ClientWorker Thread pool (ESB_HOME/repository/conf/passthru-http.properties)

worker_pool_size_core=400
worker_pool_size_max=500
#worker_thread_keepalive_sec=60
#worker_pool_queue_length=-1

The default values shipped in the standalone ESB pack are enough for most scenarios, but you should do some performance testing with a similar load and tune these values accordingly. In the above configuration, there are 2 commented-out parameters.

worker_thread_keepalive_sec - If the pool currently has more than corePoolSize threads, excess threads will be terminated if they have been idle for more than the keepAliveTime. This provides a means of reducing resource consumption when the pool is not being actively used. If the pool becomes more active later, new threads will be constructed.

worker_pool_queue_length - This is the length of the task queue to which new tasks are delegated by the server when there is new data to be processed. The length of this queue is -1 (infinite) by default. This is one of the most important parameters when you are tuning the server for capacity. With an infinite queue, the server never rejects any request, but the drawback is that if there are too few processing threads and you hit a peak load, the server can easily go OOM, since the task queue will hold all the requests coming into the server. You should decide on a reasonable value for this queue length rather than keeping it at -1. With a bounded queue, some requests will be rejected under high load, but the server will not crash (OOM), which is better than losing all the requests. Another disadvantage of keeping -1 as the queue length is that the server will never create the maximum number of threads; it will only create the core number of threads under any kind of load.

MediatorWorker (SynapseWorker) Threads (ESB_HOME/repository/conf/synapse.properties)

synapse.threads.core = 20
synapse.threads.max = 100
#synapse.threads.keepalive = 5
#synapse.threads.qlen = 10

The same theory described above can be applied when tuning this thread pool. Apart from that, it is better to have a core value matching the ServerWorker threads if you use Iterate/Clone mediators heavily in your mediation flow. Reasonable values for these parameters would be as below.

synapse.threads.core = 100
synapse.threads.max = 200


I hope this would help you when tuning WSO2 ESB server for your production deployments.




Lali Devamanthri: Vote for SourceForge Community Choice

The vote for July 2015 Community Choice SourceForge Project of the Month is now available, and will run until June 15, 2015 12:00 UTC. Here are the candidates:

Octave-Forge: Octave-Forge is a central location for the collaborative development of packages for GNU Octave. The Octave-Forge packages expand Octave’s core functionality by providing field specific features via Octave’s package system. For example, image and signal processing, fuzzy logic, instrument control, and statistics packages are examples of individual Octave-Forge packages. Download Octave-Forge now.

Smoothwall: Smoothwall is a best-of-breed Internet firewall/router, designed to run on commodity hardware and to provide an easy-to-use administration interface to those using it. Built using free and open source software (FOSS), it’s distributed under the GNU Public License. Download Smoothwall now.

Robolinux: RoboLinux is a Linux desktop solution for a home office, SOHO, and enterprise users looking for a well-protected migration path away from other operating systems. Download Robolinux now.

NAS4Free: NAS4Free is an embedded Open Source Storage distribution that supports sharing across Windows, Apple, and UNIX-like systems. It includes ZFS, Software RAID (0,1,5), disk encryption, S.M.A.R.T / email reports, etc. with following protocols: CIFS (samba), FTP, NFS, TFTP, AFP, RSYNC, Unison, iSCSI, UPnP, Bittorent (initiator and target), Bridge, CARP (Common Address Redundancy Protocol) and HAST (Highly Available Storage). All this can easily be setup by its highly configurable Web interface. NAS4Free can be installed on Compact Flash/USB/SSD media, hard disk or booted of from a Live CD with a USB stick. Download NAS4Free now.

NamelessROM: NamelessRom is an opportunity to have a voice to the development team of the after-market firmware that you run on your device. The main goal of NamelessRom is to provide quality development for android devices, phones, and tablets alike. NamelessRom developers are available nearly 24/7 and respond to bug reports and feature requests almost instantly. This availability will allow you, the end-user, to have direct input into exactly what features and functions are included on the firmware that you run. Download NamelessROM now.

CaesarIA (openCaesar3): CaesarIA is an open source remake of the Caesar III game released by Impressions Games in 1998. It aims to expand the possibilities of the classic city-building simulators and to add new features showing the city life. The game now works on Windows, Linux, Mac, Haiku, and Android. The original Caesar 3 game is needed to play openCaesar3. Download CaesarIA (openCaesar3) now.

gnuplot development: A famous scientific plotting package, features include 2D and 3D plotting, a huge number of output formats, interactive input or script-driven options, and a large set of scripted examples. Download gnuplot development now.

Battle for Wesnoth: The Battle for Wesnoth is a free, turn-based tactical strategy game with a high fantasy theme, featuring both single-player and online/hotseat multiplayer combat. Fight a desperate battle to reclaim the throne of Wesnoth, or take hand in any number of other adventures. Download Battle for Wesnoth now.

SharpDevelop: SharpDevelop is the open-source IDE for the .NET platform. Write applications in languages including C#, VB.NET, F#, IronPython and IronRuby, as well as target rich and reach: Windows Forms or WPF, as well as ASP.NET MVC and WCF. It starts from USB drives, supports read-only projects, comes with integrated unit and performance testing tools, Git, NuGet, and a lot more features that make you productive as a developer. Download SharpDevelop now.


Dedunu Dhananjaya: How to fix incompatible clusterIDs in Hadoop

When you are installing and trying to set up your Hadoop cluster, you might face an issue like the one below.
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to master/192.168.1.1:9000. Exiting. 
java.io.IOException: Incompatible clusterIDs in /home/hadoop/hadoop/data: namenode clusterID = CID-68a4c0d2-5524-486e-8bc9-e1fc3c5c2e29; datanode clusterID = CID-c6c3e9e5-be1c-4a3f-a4b2-bb9441a989c5
I just quoted the first two lines of the error; the full stack trace is longer.

You might not have formatted your name node properly. If this is a test environment, you can simply delete the data and name node folders and reformat HDFS. To format, you can run the command below.

WARNING!!! : IF YOU RUN THE BELOW COMMAND YOU WILL LOSE ALL YOUR DATA.
hdfs namenode -format
But if you have a lot of data in your Hadoop cluster and you can't easily format it, then this post is for you.

First, stop all running Hadoop processes. Then log in to your name node and find the value of the dfs.namenode.name.dir property. Run the command below with your name node folder.
cat <dfs.namenode.name.dir>/current/VERSION
You will then see content like the following.
#Thu May 21 08:29:01 UTC 2015
namespaceID=1938842004
clusterID=CID-68a4c0d2-5524-486e-8bc9-e1fc3c5c2e29
cTime=0
storageType=NAME_NODE
blockpoolID=BP-2104944316-127.0.1.1-1430820636449
layoutVersion=-60
Copy the clusterID from the name node. Then log in to the problematic slave node and find the dfs.datanode.data.dir folder. Run the command below to edit the VERSION file.
vim <dfs.datanode.data.dir>/current/VERSION 
Your data node VERSION file will look like the one below. Replace its clusterID with the one you copied from the name node.
#Thu May 21 08:31:31 UTC 2015
storageID=DS-b7d3c421-0366-4a66-8d14-78362389ed73
clusterID=CID-c6c3e9e5-be1c-4a3f-a4b2-bb9441a989c5
cTime=0
datanodeUuid=724f8bad-c0ca-4ded-98d6-a860d3165289
storageType=DATA_NODE
layoutVersion=-56
Then everything will be okay!

Dedunu Dhananjaya: Hadoop MultipleInputs Example

Let's assume you are working for ABC Group, and they have ABC America airline, ABC Mobile, ABC Money, ABC Hotel, blah blah, ABC this and that. So you have multiple data sources with different types/columns, which means you can't run a single Hadoop job on all the data.

You got several data files from all these businesses.
(I edited this data file 33 times to get it aligned. ;) Don't tell anyone!)

So your job is to calculate the total amount that one person spent across ABC Group. For this you could run a job for each company and then run another job to calculate the sum. But what I'm going to tell you is: "NOOOO! You can do this with one job." Your Hadoop administrator will love this idea.

You need to develop a custom InputFormat and a custom RecordReader. I have created both of these classes inside the custom InputFormat class. A sample InputFormat should look like below.
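The author's original class is not embedded in this page. As a rough placeholder only, a minimal combined InputFormat/RecordReader might look like the sketch below; the class names, the Text/LongWritable key-value types and the assumed comma-separated "customerId,amount" line layout are illustrative assumptions, not the author's exact code.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.LineReader;

// Hypothetical InputFormat for the airline files; the RecordReader lives inside it.
public class AirlineInputFormat extends FileInputFormat<Text, LongWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Keep the sketch simple: read each file as a single split.
        return false;
    }

    @Override
    public RecordReader<Text, LongWritable> createRecordReader(InputSplit split,
                                                               TaskAttemptContext context) {
        return new AirlineRecordReader();
    }

    // Reads one line at a time and turns it into a (customerId, amount) pair.
    public static class AirlineRecordReader extends RecordReader<Text, LongWritable> {
        private LineReader lineReader;
        private FSDataInputStream in;
        private final Text currentKey = new Text();
        private final LongWritable currentValue = new LongWritable();
        private long start;
        private long end;
        private long pos;

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context) throws IOException {
            FileSplit fileSplit = (FileSplit) split;
            Configuration conf = context.getConfiguration();
            Path path = fileSplit.getPath();
            start = fileSplit.getStart();
            end = start + fileSplit.getLength();
            in = path.getFileSystem(conf).open(path);
            in.seek(start);
            pos = start;
            lineReader = new LineReader(in, conf);
        }

        @Override
        public boolean nextKeyValue() throws IOException {
            // This is the method you adapt to each file layout, e.g. "customerId,amount".
            if (pos >= end) {
                return false;
            }
            Text line = new Text();
            int bytesRead = lineReader.readLine(line);
            if (bytesRead == 0) {
                return false;
            }
            pos += bytesRead;
            String[] fields = line.toString().split(",");
            currentKey.set(fields[0].trim());
            currentValue.set(Long.parseLong(fields[1].trim()));
            return true;
        }

        @Override
        public Text getCurrentKey() { return currentKey; }

        @Override
        public LongWritable getCurrentValue() { return currentValue; }

        @Override
        public float getProgress() {
            return end == start ? 1.0f : Math.min(1.0f, (pos - start) / (float) (end - start));
        }

        @Override
        public void close() throws IOException {
            if (in != null) { in.close(); }
        }
    }
}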


The nextKeyValue() method is the place where you should write the code specific to your data files.

Developing custom InputFormat classes alone is not enough; you also need to change the main class of your job. Your main class should look like below.

Lines 26-28 of the original sample add your custom inputs to the job (see the hedged driver sketch below). Note that you don't set a separate Mapper class per input here. If you want, you can develop separate mapper classes for your different file types; I'll write a blog post about that method as well.
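Since the original gist is not shown here either, the following is only a hedged sketch of what such a driver could look like. SpendMapper and SpendReducer are defined inline for completeness, while BookInputFormat and MobileInputFormat are hypothetical classes assumed to follow the same pattern as the AirlineInputFormat sketch above; none of this is the author's exact code.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TotalSpendJob {

    // Pass-through mapper: every InputFormat already emits (customerId, amount).
    public static class SpendMapper extends Mapper<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void map(Text customerId, LongWritable amount, Context context)
                throws IOException, InterruptedException {
            context.write(customerId, amount);
        }
    }

    // Sums the amounts per customer across all sources.
    public static class SpendReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text customerId, Iterable<LongWritable> amounts, Context context)
                throws IOException, InterruptedException {
            long total = 0;
            for (LongWritable amount : amounts) {
                total += amount.get();
            }
            context.write(customerId, new LongWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "abc-group-total-spend");
        job.setJarByClass(TotalSpendJob.class);

        job.setMapperClass(SpendMapper.class);
        job.setReducerClass(SpendReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // One input path per business, each paired with its own InputFormat
        // (these are the MultipleInputs calls the post refers to).
        MultipleInputs.addInputPath(job, new Path(args[0]), AirlineInputFormat.class);
        MultipleInputs.addInputPath(job, new Path(args[1]), BookInputFormat.class);
        MultipleInputs.addInputPath(job, new Path(args[2]), MobileInputFormat.class);

        FileOutputFormat.setOutputPath(job, new Path(args[3]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The MultipleInputs.addInputPath calls are the key part: each input directory is paired with the InputFormat that understands its file layout, so a single mapper/reducer pair sees one uniform (customerId, amount) stream.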
To build the JAR from my sample project you need Maven. Run the command below to build the JAR from the Maven project. You can find the JAR file inside the target folder once the build finishes.
mvn clean install
/
|----/user
     |----/hadoop
          |----/airline_data
          |    |----/airline.txt
          |----/book_data
          |    |----/book.txt
          |----/mobile_data
               |----/mobile.txt
With this change you may have to change the way you run the job. My file structure looks like the above: I have a different folder for each data type. You can run the job with the command below.
hadoop jar /vagrant/muiltiinput-sample-1.0-SNAPSHOT.jar /user/hadoop/airline_data /user/hadoop/book_data /user/hadoop/mobile_data output_result
If you have followed all the steps properly you will get job's output like this.

The job will create a folder called output_result. If you want to see its content, you can run the command below.
hdfs dfs -cat output_result/part*
I ran my sample project on my sample data set. My result file looked like below.
12345678 500
23452345 937
34252454 850
43545666 1085
56785678 709
67856783 384
Source code of this project is available on GitHub
https://github.com/dedunu/hadoop-multiinput-sample

Enjoy Hadoop!

Sajith Ravindra: A possible reason for "Error while accessing backend services for API key validation" in WSO2 API Manager

Problem

When you try to validate a token in WSO2 API Manager, it may return the error:
 
<ams:fault xmlns:ams="http://wso2.org/apimanager/security">
<ams:code>900900</ams:code>
<ams:message>Unclassified Authentication Failure</ams:message>
<ams:description>Error while accessing backend services for API key validation</ams:description>
</ams:fault> 

The most likely cause of this problem is an error with the Key Manager. This error means that the tokens could not be validated because the back-end, in other words the Key Manager, could not be accessed.

I had a distributed API Manager 1.6 deployment, and when I tried to generate a token for a user this error was returned. Since the error points to the Key Manager, I had a look at the Key Manager's wso2carbon.log. In the log file I noticed the following entry, but otherwise there was nothing wrong in the Key Manager:

{org.wso2.carbon.identity.thrift.authentication.ThriftAuthenticatorServiceImpl} - Authentication failed for user: admin Hence, returning null for session id. {org.wso2.carbon.identity.thrift.authentication.ThriftAuthenticatorServiceImpl} 

And in the API Gateway's log file, the following error was logged:

TID: [0] [AM] [2015-04-06 21:08:15,918] ERROR {org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler} -  API authentication failure {org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler}
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Error while accessing backend services for API key validation
        at org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftAPIDataStore.getAllURITemplates(ThriftAPIDataStore.java:97)
        at org.wso2.carbon.apimgt.gateway.handlers.security.APIKeyValidator.getAllURITemplates(APIKeyValidator.java:385)
        at org.wso2.carbon.apimgt.gateway.handlers.security.APIKeyValidator.doGetAPIInfo(APIKeyValidator.java:240)
        at org.wso2.carbon.apimgt.gateway.handlers.security.APIKeyValidator.getResourceAuthenticationScheme(APIKeyValidator.java:153)
        at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:85)
        at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:92)
        at org.apache.synapse.rest.API.process(API.java:284)
        at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:76)
        at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:63)
        at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
        at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
        at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
        at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:336)
        at org.apache.synapse.transport.passthru.ServerWorker.processEntityEnclosingRequest(ServerWorker.java:377)
        at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:183)
        at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
        at org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftKeyValidatorClient.<init>(ThriftKeyValidatorClient.java:45)
        at org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftKeyValidatorClientPool$1.makeObject(ThriftKeyValidatorClientPool.java:40)
        at org.apache.commons.pool.impl.StackObjectPool.borrowObject(StackObjectPool.java:170)
        at org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftKeyValidatorClientPool.get(ThriftKeyValidatorClientPool.java:50)
        at org.wso2.carbon.apimgt.gateway.handlers.security.thrift.ThriftAPIDataStore.getAllURITemplates(ThriftAPIDataStore.java:94)

Solution

When I investigated the problem further, I realized that I had NOT put the correct super user name and password in /repository/conf/api-manager.xml in the gateway (i.e., the user name and password used to log into the management console). When I used the correct user name and password, the problem was solved.

This error occurs because the Gateway could not connect to Key Manager validation service due to invalid credentials. 

In api-manager.xml, the following 3 sections contain <Username> and <Password>; make sure they are correct:
1) <AuthManager>
2) <APIGateway>
3) <APIKeyManager>

Note

This is not the only possible reason for the above-mentioned error. Some other common causes are (but not limited to):
- Mis-configured master-datasources.xml file of Key manager
- Connectivity issue between Gateway and Key Manager
- Connectivity issues between Database and Key Manager
- Key manager is not reachable
- etc .....
I suggest you have a look at the Key Manager log file when you investigate this error; it's very likely you will find a clue there.

John Mathon: Are we living in an age of Magic? Is Elon Musk a magician?

The age of Magic


The ability to do things is becoming more and more feasible from a purely engineering point of view.


Is Elon Musk a magician?

Elon is building the first re-usable space transportation system something the US government spent $150 billion trying to do 20 years ago.  He seems to be doing it for somewhere between 1/40th and 1/100th the cost that NASA  developed the Space Shuttle system for unsuccessfully.

Elon built the Tesla, a car that accelerates from 0-60 in 2.8 seconds, faster than any gas powered car; is safer than any gas powered car ever built; has 98-99% customer satisfaction in the first 2 years of sales; and improves itself by downloading new versions of itself overnight.

A couple of years ago the State of California passed a bond measure to build a high speed rail between San Francisco and Los Angeles. The initial cost was estimated at $20+ billion. Unfortunately, since the initial estimate, costs have skyrocketed to an estimated $60 billion+.

Elon Musk proposed building a “hyperloop” instead of the train.   A hyperloop running at 750mph would take 30 minutes instead of 2 hours.   Analysis of the project costs are currently at $6-7 billion or 1/10th the cost of the train system.

I have no idea if the costs are correct but I have read through the description of the project and it seems doable.  The technology to be employed is pretty much off the shelf.   No unknown technology or materials are needed to do this project.  The cost of operating it would be a small fraction of what a rail system would cost.

The fact that it is virtually free from friction allows the system to use very little energy, so a major component of operational cost is vastly reduced as well as maintenance of rails, engines or other components.  Sure there will be some maintenance and other costs but the lower operating costs would make this system the system of choice even if it cost twice what the rail system will cost.  In fact, it is projected to cost less initially to build it and vastly less to operate as well as being 4 times faster.

So, is Elon Musk just lucky, a magician, super brilliant or will all these things crash and burn for reasons we just haven’t seen yet?  Why is it he thought of these things and it didn’t occur to lots of people?

“Unbelievable” technology that is available today or in delivery.


128GB chips have been available for years that fit on the surface of your finger which means we have the ability to write a trillion transistors in the space of your thumb.   That’s unimaginable.  Do any of you know how we do that?   An IBM engineer once told me the theoretical limit of chips density was 64,000 transistors on a bigger surface area.  He was off by about a factor of 100,000,000.    I don’t know how to describe being able to “print” one trillion transistors in such a small space as other than magic.   Not only that it costs tens of dollars not millions.

My cell phone is able to transmit and receive data at >1million bytes/second.  10 years ago smart phones didn’t exist and wireless technology allowed you more like 1 thousand bytes/second.  I remember the first time my mother saw a cell phone and I explained that people could call me on it.  She looked at me like I had said something completely crazy and unbelievable.   She honestly didn’t think it was possible.   When the phone rang  my mothers expression was the closest thing I think I’ve ever seen to someone seeing a miracle.

We have cell phones with screens that are resistant to scratching, resistant to cracking and last years without smudging.   I get that if you put diamond onto the surface of a transparent material it would be harder but wow, for someone who protected his screen religiously for decades it does seem awfully convenient we figured out how to make such resilient transparent material.

These are very real tangible to ordinary people “miracles” but below the surface are many “miracles” that are no less amazing.


New 3D memory technology that uses resistive approach in layers is being brought to market by numerous players.  This technology will bring the dream of virtually unlimited 3 dimensional storage that is super fast and cheap.    In 10 years we will probably have 128 terabyte memory that are a thousand times faster than current SSD on our laptops.   Of course, we likely won’t have laptops if I’m right.

Software development has experienced at least a 10 fold to 100 fold increase in productivity in the last 10 years.   Surprisingly this has nothing to do with the technology improvement in hardware as in the past. It is due to open source, APIs, PaaS, DevOps. What I call the 3rd platform for software.  (Okay, I admit a commercial for my company but the software is available ubiquitously also from other open source vendors.)

I could go on and on with improvements in every field.

The question I am asking is Elon Musk a magician or is something else going on?

Caution:  Spoilers here.   Let me explain the magic Elon Musk uses.

I believe instead we have reached the age of Magic and whether Elon believes this or not he is leveraging it. This means we have reached a state where the things we can do technologically exceed what most people think we can do.

If you have an idea to do something, for instance, I want to go to Mars.  With some cash this is doable because we have the technology you just may not realize it.  It’s also not as much cash needed as you might think.

So, is it simply almost a naïveté that allowed Elon to achieve these dreams? Did he KNOW that the technology was available? Was he just lucky and naive? He admits to a fair amount of naïveté in his video interviews.

He describes how he first went to look on the NASA website for when NASA would be offering Mars rides.   He was surprised they had no plan to go to mars.   So, he decided to spark curiosity and get people excited about space again by doing some inspiring trifle hoping it would trigger an interest.    He says what he figured out was not that people weren’t inspired enough to go to Mars they simply didn’t believe it was possible.   In other words people lacked the “awareness” of what was possible.

Let me be clear.  I realize that Elon worked unbelievably hard and he sacrificed nearly his entire wealth and he is indisputably brilliant in multiple dimensions.  There are few who appreciate what he’s done but there were no breakthrough technologies to do what he did that I am aware of.  Almost all the companies he’s built have used off the shelf technology brilliantly engineered.

What this means is that if you have an idea for virtually anything, say you want to go to Mars or you want to cure cancer or eliminate hunger or whatever?  Is the thing that is stopping you or us from doing this simply a lack of will or belief and not technological?

The Age of Magic

The pace of progress has been so blistering that most people are simply unaware of how advanced we have become in many fields.

We have assumed so many things aren't possible because, frankly, most people simply don't know what is possible. We have crossed things off our list like my mother had. If you haven't been tracking all the technology improvements in the last 10 years, you may not realize what is possible.

That is not surprising because keeping up on the technological changes is daunting.  There is a lot.

The past paradigm

In the past, some new miracle of technology happened, like the discovery of the polio vaccine, that fundamentally changed what was possible and made advances suddenly achievable. This was then followed by a period of mad creation and disruption. We started building vaccines for lots of diseases and a revolution happened. To those who experienced numerous diseases this surely seemed like magic at first. Then we became used to it.

Certainly the first microscope or telescope brought the perception of magic and amazing revelations.

The same happens in Art or any creative endeavor.  When a discovery or new thing is created there is initially a “wow” factor and rapid advancement.

The New Paradigm

There was no “fundamental technology” discovery that enabled Elon to build these technologies.  Even the hyperloop doesn’t require anything but off the shelf components.   NASA did supply Elon with a material called Pica that enables him to build vastly superior heat shields but it was already invented.  He uses off the shelf Panasonic batteries for the Tesla.

Let me not impugn Elon Musk's engineering skill. There is no doubt these things are amazing engineering achievements, and his skill in managing the process of bringing all these products to market is unquestioned. In some sense he just had the courage to try.

I don’t believe in luck.  In my experience luck is the application of massive repeated “doing” that spontaneously finds opportunities but without the “doing” the luck doesn’t happen.  Sorry, so Elon’s not lucky.  He’s truly hard working and brilliant too.  No doubt.   He’s not super-human or an alien or a time traveler or a magician.   Also, I don’t believe this is a bubble and “unreal” in the sense that there is some illusion about these things he’s done.

When the internet came about 20 years ago, many of us saw that amazing disruptive things would happen, but the internet is just one of a vast panoply of new technologies enabling not simply the cloud but physical creativity that was unimaginable before.

There is so much technology available now in the form of materials and computer related advancements but also in small stuff.  Low power stuff and just the ability to control and be smart with things.

One of the key abilities which allows massive growth in understanding in engineering ability is being able to see smaller and smaller dimensions.  We have microscopes that can see the quantum foam of the electron around a proton in a hydrogen atom.   We have developed a lot of technology that allows us to manipulate at incredibly tiny scales.  This has allowed us to count, assemble and feedback genetic code thousands of times faster than before at a fraction of the cost, to be able to assemble incredibly small electronic or biologic things.

Ushering in a new age of innovation

Being able to see at the tiny scale allows us to understand what is really happening and fix it or engineer around it.   This I believe is a large part of our “magic” ability.

Right now our technological ability far exceeds the applications of that technology that currently exists.  That’s the definition of the age of Magic.  There are so many ideas that are possible that even the hobbyist with few resources can create industry changing innovation.    In a sense Elon was just a hobbyist with a lot of money.

With kickstarter and other ways to enable small entrepreneurs we are seeing an explosion of innovation but without all the technical possibilities brought about because of the “magic” we live in there would be precious few successes or interest.

The bigger picture

The point is that many “problems” or “ideas” that 10 or 20 years ago seemed impossible or science fiction magic now appear to be eminently doable and it is just a matter of someone having the belief that it can be done and then scrupulously following the engineering trail to find the technology needed to build the magic.

Let’s say you wanted to build a robot for the home.  Today we have much of the technology to build real robots that we’ve all seen in movies.  The recognition software we have developed in just the last few years would enable a robot to “see” objects and recognize them, to read text or do other basic tasks.   We have figured out how to make robots walk “naturally” and to move smoothly.  We have the improvements in motor systems and control systems that is embeddable.     We also have with Siri and Google Now the ability to answer questions or to take commands and perform actions.   The age of robots cannot be far off.

There is no doubt we will see more robots in the home before a decade is out.   The first I want is a pick up robot.  I want a fairly unobtrusive robot that will just pick up stuff and put it where it belongs.   Clothes, food and such.   The next would be to do the laundry and dishes.   These are constrained tasks that are doable with our technology.

The rate of advancement is hardly slowing down

Discovery of the epigenetic code in DNA was a big advance that will lead to massive improvements in our understanding and ability to engineer improvements in healthcare.     The combination with IoT and BigData could create massive reduction  in healthcare costs and improvement of consistency in results.

Solar energy is on the cusp of a big "inflection" point.  Recent reductions in cost and improvements in the efficiency of solar cells are turning what was an "iffy" proposition into an absolute economic win.    Energy is closely related to quality of life.

The “Cloud” and the virtualization of compute technology is already having massive effects but we are just in the infancy of this movement that will transform businesses and personal life in a decade.

Graphene, correlated-oxide, diamond infused glass, you name it.  We have new materials like the Pica Elon is using to build things that were science fiction before.

Our ability to leverage the quantum world as nature has done will enable truly unbelievable things in the next few decades.    We have done so in tunneling diodes central to computers but we will be building quantum computers soon in quantity.   They may give us “magic” computational capability.

We are in an incredible period of advancement in physics that few understand.  The implications of this will be truly staggering and affect our ability to engineer magical products that nobody even thought possible. We have discovered our world is not the simple one that Einstein imagined (:)) but that what we perceive as reality is actually  emergent from Twistor space.   You can read about that here.

Summary

We don’t need new physics to feel the age of magic.   The technology around us today is already being vastly underutilized in terms of the improvements in our lives.   It is simply a matter of will and belief that holds us back.   If I were a kid growing up today I don’t know how I could restrain myself from wanting to study engineering and science.   Without knowing what is available you can’t figure out what is possible.

Articles you may find interesting along these lines:

Artificial Intelligence

The greatest age of technology

Roger Penrose.  The smartest man to ever live and Twistor theory

Virtual Reality

Healthcare Improvements

Democracy revolutionized by new technology


Madhuka UdanthaOptions for Google Charts

In Google Charts, different chart types take slightly different data set formats.

Google Chart Tools come with sensible defaults, and all customizations are optional. Every chart exposes a number of options that customize its look and feel. These options are expressed as name:value pairs in the options object.
e.g., a visualization supports a colors option that lets you specify custom series colors:

"colors": ['#e0440e', '#e6693e', '#ec8f6e', '#f3b49f', '#f6c7b6']


Let's create a function to set these options:

// Add or update a single option on the chart's options object
AddNewOption = function (name, value) {
    $scope.chart.options[name] = value;
};

Now use this function to improve our scatter chart:


AddNewOption('pointShape', 'square');
AddNewOption('pointSize', 20);



Now we can play around more with Google Charts options.


Crosshair Options


Crosshairs can appear on focus, selection, or both. They're available for scatter charts, line charts, area charts, and for the line and area portions of combo charts.


When you hover over a point with the crosshair option enabled, you can see guide lines for that point.







Here are the crosshair options to play with; a usage sketch follows the list.



  • crosshair: { trigger: 'both' }
    display on both focus and selection

  • crosshair: { trigger: 'focus' }
    display on focus only

  • crosshair: { trigger: 'selection' }
    display on selection only

  • crosshair: { orientation: 'both' }
    display both horizontal and vertical hairs
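As a quick usage sketch, the crosshair option can be set through the AddNewOption helper defined earlier in this post (the trigger and orientation values below are just one possible combination):

AddNewOption('crosshair', { trigger: 'both', orientation: 'both' });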

Harshan LiyanageHow to change the logging behaviour of http-access log in WSO2 Carbon based products



In this blog post I'm gonna tell you how to change the default behavior of access logging of WSO2 Carbon based products.

You may have seen access log files with names such as "http_access_2014-08-19.log" created in the <WSO2_PRODUCT_HOME>/repository/logs folder. This log file contains all the information needed to track the clients that called your server. Every request to the WSO2 Carbon server is recorded in this log file as shown below.

127.0.0.1 - - [24/May/2015:00:00:04 +0530] "GET /carbon/dialog/css/jqueryui/jqueryui-themeroller.css HTTP/1.1" 200 4020 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
127.0.0.1 - - [24/May/2015:00:00:04 +0530] "GET /carbon/admin/css/carbonFormStyles.css HTTP/1.1" 200 2050 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
127.0.0.1 - - [24/May/2015:00:00:04 +0530] "GET /carbon/dialog/css/dialog.css HTTP/1.1" 200 556 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
127.0.0.1 - - [24/May/2015:00:00:04 +0530] "GET /carbon/styles/css/main.css HTTP/1.1" 200 1240 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
127.0.0.1 - - [24/May/2015:00:00:04 +0530] "GET /carbon/admin/js/jquery.ui.tabs.min.js HTTP/1.1" 200 3594 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"
127.0.0.1 - - [24/May/2015:00:00:04 +0530] "GET /carbon/admin/js/main.js HTTP/1.1" 200 15367 "https://localhost:9443/carbon/admin/login.jsp" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.152 Safari/537.36"

Now let's look at how to change this logging behavior. Please note that all the changes mentioned below must be made to the "org.apache.catalina.valves.AccessLogValve" configuration in the <WSO2_PRODUCT_HOME>/repository/conf/tomcat/catalina-server.xml file.

Changing the prefix and suffix of access-log file name

You might need to change the default prefix (http_access_) and suffix (.log) of the generated access log files. For example, if you need the access log files to be named like wso2_mdm_2014-08-19.txt, change the prefix and suffix parameters of the AccessLogValve configuration as below.

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs"
               prefix="wso2_mdm_" suffix=".txt"
               pattern="combined"/>

Disabling the access log rotation

By default, the Tomcat server creates a new log file each day and includes a timestamp in the file name. The objective of this default behavior is to avoid issues when a log file eventually becomes too large. But when you set the rotatable property to "false", this behavior is disabled and a single log file is used. When you run your Carbon server with the following configuration, it will use a single access log file (wso2_mdm.log) throughout its entire life-time.

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs"
               prefix="wso2_mdm" suffix=".log"
               pattern="combined" rotatable="false" />

Removing the timestamp from current access-log file name & enabling rotation

There might be some scenarios where you need to remove the timestamp from the current access log file's name while keeping log rotation enabled. For example, you may want the current access log file to be named wso2_mdm_.log, and tomorrow have it renamed to "wso2_mdm_2015-05-20.log" while a new "wso2_mdm_.log" is started. You can do this by setting the renameOnRotate parameter of the AccessLogValve configuration to "true".

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs"
               prefix="wso2_mdm_" suffix=".log"
               pattern="combined" renameOnRotate="true" />

There are some more configuration changes you can make to alter the default access log behavior. You can find them in the official Tomcat documentation [1].

References:

[1]. https://tomcat.apache.org/tomcat-7.0-doc/config/valve.html#Access_Log_Valve/Attributes

Ajith VitharanaAccess token related issues - WSO2 API Manager

Create an API with following details.

Name      : StockquoteAPI
Context   : stockquote
Version   : 1.0.0
Endpoint : http://www.webservicex.net/stockquote.asmx
Resource : GetQuote
Query      : symbol




1. Invoke with invalid token.

Client side errors:

401 Unauthorized

<ams:fault>
 <ams:code>900901</ams:code>
 <ams:message>Invalid Credentials</ams:message>
 <ams:description>Access failure for API: /stockquote, version: 1.0.0 with key: lI2XVmmRJ9_B_rbh1rwV7Pg3Pp8</ams:description>
</ams:fault>

Backend error :

[2015-05-16 22:22:14,630] ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /stockquote, version: 1.0.0 with key: lI2XVmmRJ9_B_rbh1rwV7Pg3Pp8
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
    at org.apache.synapse.rest.API.process(API.java:284)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
    at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
    at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Solution: Double check the token.

2. Invoke  API with invalid token type.

E.g., invoking the API with an application token when the resource only allows application user tokens.

Client Errors:

401 Unauthorized

<ams:fault>
   <ams:code>900905</ams:code>
   <ams:message>Incorrect Access Token Type is provided</ams:message>
   <ams:description>Access failure for API: /stockquote, version: 1.0.0 with key: lI2XVmmRJ9_B_rbh1rwV7Pg3Pp8a</ams:description>
 </ams:fault>

Back end Error:

[2015-05-16 22:29:05,262] ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /stockquote, version: 1.0.0 with key: lI2XVmmRJ9_B_rbh1rwV7Pg3Pp8a
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
    at org.apache.synapse.rest.API.process(API.java:284)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
    at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
    at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Solution: Edit the API from the Publisher, go to the Manage wizard, and check the authentication type configured for the resource.


3. Invoke non-existing API resource.

Client Errors:

403 Forbidden

<ams:fault>
 <ams:code>900906</ams:code>
 <ams:message>No matching resource found in the API for the given request</ams:message>
 <ams:description>Access failure for API: /stockquote, version: 1.0.0 with key: lI2XVmmRJ9_B_rbh1rwV7Pg3Pp8a</ams:description>
</ams:fault>


Back end Error:

[2015-05-16 22:40:00,506] ERROR - APIKeyValidator Could not find matching resource for /GetQuote1?symbol=ibm
[2015-05-16 22:40:00,507] ERROR - APIKeyValidator Could not find matching resource for request
[2015-05-16 22:40:00,508] ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /stockquote, version: 1.0.0 with key: lI2XVmmRJ9_B_rbh1rwV7Pg3Pp8a
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
    at org.apache.synapse.rest.API.process(API.java:284)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
    at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
    at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745

Solution: Edit the API from the Publisher (Design wizard) and double-check that the resource names exist.


4. Token was generated without a scope (i.e., with the default scope), but the API resource is configured with a scope.

Client Errors:

403 Forbidden

<ams:fault>
 <ams:code>900910</ams:code>
 <ams:message>The access token does not allow you to access the requested resource</ams:message>
 <ams:description>Access failure for API: /stockquote, version: 1.0.0 with key: 1e1b6aa805d4bfd89b6e36ac48345a</ams:description>
</ams:fault>

Back end Error:

[2015-05-16 23:08:57,103] ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /stockquote, version: 1.0.0 with key: 1e1b6aa805d4bfd89b6e36ac48345a
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
    at org.apache.synapse.rest.API.process(API.java:284)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
    at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
    at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Solution: Generate a new token with the required scope(s).
eg:
curl -k -d "grant_type=password&username=admin&password=admin&scope=stock" -H "Authorization: Basic THUwUVlFUUIxYVRKY3B6YTIxQnFxa0ZhU1I0YTo0ZE1FRUs3N1k4emZhSU56aVdGbTB1aFNBdjBh, Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token


5. Invoke with expired token.

Client Errors

401 Unauthorized

<ams:fault>
 <ams:code>900903</ams:code>
 <ams:message>Access Token Expired</ams:message>
 <ams:description>Access failure for API: /stockquote, version: 1.0.0 with key: 8d438b49d9b24c752ce2b89c24bc198</ams:description>
</ams:fault>

Back end error:

[2015-05-17 13:30:50,155] ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /stockquote, version: 1.0.0 with key: 8d438b49d9b24c752ce2b89c24bc198
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
    at org.apache.synapse.rest.API.process(API.java:284)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
    at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
    at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Solution: You need to re-generate the token. If it is a user token, you can use the refresh token to generate a new one.

curl -k -d "grant_type=refresh_token&refresh_token=<retoken>&scope=PRODUCTION" -H "Authorization: Basic SVpzSWk2SERiQjVlOFZLZFpBblVpX2ZaM2Y4YTpHbTBiSjZvV1Y4ZkM1T1FMTGxDNmpzbEFDVzhh, Content-Type: application/x-www-form-urlencoded" https://localhost:8243/token

6. Token was generated with a 60-second life span, but the API can still be invoked with that token after 60 seconds.


Reason:

The default token validity period is 3600 seconds, which is configured in the identity.xml file:
<AccessTokenDefaultValidityPeriod>3600</AccessTokenDefaultValidityPeriod>
But there is another configuration called "TimestampSkew":
<TimestampSkew>300</TimestampSkew>
You can find the usage of that configuration here: https://docs.wso2.com/display/AM180/Token+API#TokenAPI-Configuringthetokenexpirationtime

According to that description, the token remains usable until the TimestampSkew elapses, even though the configured validity period is shorter than the TimestampSkew.

7. User can generate access token, but API is not subscribed to that application.

Client Errors

401 Unauthorized

<ams:fault>
 <ams:code>900901</ams:code>
 <ams:message>Invalid Credentials</ams:message>
 <ams:description>Access failure for API: /stockquote, version: 1.0.0 with key: b31077463e7e7856762234c5d0b599</ams:description>
</ams:fault>
Back end Error

[2015-05-17 22:32:44,609] ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /stockquote, version: 1.0.0 with key: b31077463e7e7856762234c5d0b599
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:212)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:94)
    at org.apache.synapse.rest.API.process(API.java:284)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:83)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:64)
    at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:220)
    at org.apache.synapse.core.axis2.SynapseMessageReceiver.receive(SynapseMessageReceiver.java:83)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.synapse.transport.passthru.ServerWorker.processNonEntityEnclosingRESTHandler(ServerWorker.java:344)
    at org.apache.synapse.transport.passthru.ServerWorker.run(ServerWorker.java:168)
    at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745) 

Solution: Log in to the Store and subscribe the API to the application.




Shani Ranasinghe[WSO2-ESB] URI- template encoding in WSO2 ESB for reserved characters

 In a recent engagement with the WSO2 ESB, I've come across a situation where WSO2 ESB does not encode some reserved characters when using a URI-template as the endpoint to a get call.

In this particular instance it was the '&' character that was not getting encoded, along with the <space> character.

The reason is that WSO2 ESB uses the java.net libraries  and it ends up not encoding the characters correctly.

As a solution, we can use the script mediator in WSO2 ESB. We read the value inside the script mediator and encode it correctly using JavaScript's encodeURIComponent method.

An example is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<api xmlns="http://ws.apache.org/ns/synapse"
     name="testAPI"
     context="/testAPI">
   <resource methods="POST">
      <inSequence>
         <log level="full"/>
         <property name="uri.var.fullname" expression="//APIRequest/Fullname/text()"/>
         <property name="uri.var.address" expression="//APIRequest/HomeAddress/text()"/>
         <script language="js">var fullname = mc.getProperty('uri.var.fullname');var homeAddress = mc.getProperty('uri.var.address'); mc.setProperty("uri.var.fullname",encodeURIComponent(fullname)); mc.setProperty("uri.var.address",encodeURIComponent(homeAddress));</script>
         <header name="Content-Type" scope="transport" action="remove"/>
         <send>
            <endpoint>
               <http method="get"
                     uri-template="https://localhost/test?apikey=1234&amp;api=54&amp;fullname={+uri.var.fullname}&amp;address={+uri.var.address}"/>
            </endpoint>
         </send>
      </inSequence>
   </resource>
</api>
 
 
 
What is done here is that the API request values are extracted and stored in properties using the property mediator.

Then, using the script mediator, we read each value into a JavaScript variable and encode it with the "encodeURIComponent" method.

In the "encodeURIComponent" yet, we still have some characters that are not encoded. These include the !'()*~.

References
[1] https://docs.wso2.com/display/ESB480/Getting+Started+with+REST+APIs#GettingStartedwithRESTAPIs-URItemplates
[2] https://docs.wso2.com/display/ESB480/Template+Endpoint
[3] https://docs.wso2.com/display/ESB480/Property+Mediator
[4] https://docs.wso2.com/display/ESB480/Script+Mediator
[5] http://en.wikipedia.org/wiki/Percent-encoding
[6] http://xkr.us/articles/javascript/encode-compare/

John MathonVirtual Virtual Reality, Virtual Anthropomorphic Reality, Virtual Functional Reality, Virtual Extended Reality

Virtual Reality

I categorize virtual reality into these 4 different types.   They are progressively more difficult technologies but each will progress independently.   VVR, VAR, VFR, VER represent the 4 ways VR technology can be used.   VVR is about creating false worlds or artificial computer generated worlds.  VAR is about transferring ourselves in the real world virtually (telepresence).  VFR is about using VR for functional purposes like building things. VER is about extending our perception to new senses and new environments that require our brains to adapt to these new senses.

VVR – Virtual virtual reality


VVR is like the Matrix completely made up worlds for play or learning.

We are seeing the development of goggles giving us more virtual reality experience for gaming coming to market now, specifically Oculus.  So, VVR is happening as we speak.  This was attempted years ago but limited capability of the transducers and vertigo created by the headsets limited their use.  These problems appear to have been mitigated.   Improvements in 3d rendering, display technology and standards for transmission all suggest the possibility of virtual reality becoming mainstream in the next 10 years.

VAR – Virtual anthropomorphic reality

VAR is extending our presence through the network to physical devices whose purpose is to give us and others a human like experience for purposes of creating a life-like proxy in the real world.   Some people call this telepresence.  Another term with imprecise definition is Augmented reality.  Maybe VAR could be Virtual Augmented Reality.


This market is already vibrant as well.  There are more than a dozen telepresence virtual-reality robot-like devices on the market, ranging in price from $1,000 to $16,000.   With wheels for transport, a large battery to operate for a long time, a camera, audio, and a big screen for projecting your face, you can project yourself and move around in a separate location as if you were there.

Recently Oculus purchased Surreal Vision which is allowing them to bring telepresence to Oculus and to paint 3d worlds more realistically.

I have seen these devices at a few companies and at conferences wandering the halls or attending meetings virtually.   Over time I expect that these devices could become more sophisticated, hence the anthropomorphic adjective.

female telepresence robot

Eventually these devices sensory inputs and outputs would reflect more than just audio and visual information.   It may be possible to transmit and receive touch sensations.


These sensations can be fed back to the human as resistance in movement of body parts or even transduced as pressure on our senses at a similar point to the robots touch.

Eventually smell, temperature, breeze, and radiance could be simulated, resulting in a more lifelike experience for the robot controller.  The value of these additional senses is to create a more powerful and complete experience for the VAR subject, but also to provide more realistic feedback to the remote audience that the VAR subject is there experiencing the same things they are.

VFR – Virtual functional reality

VFR is about extending our ability to manipulate and see real world things at a level humans can’t do today either macroscopically (large devices) or microscopically.


Today, examples of VFR include robot doctors who have been quite successful to enable doctors to perform surgery remotely. There is no reason to believe such control and dexterity wouldn’t be useful for jobs where physical presence of a human would be dangerous or difficult.  Construction of large things, space construction, construction in nuclear areas or where there are dangerous infectious agents or even as in the case of doctors bringing in specialists would all be extremely useful.

Also, prostheses are necessary whenever we are controlling devices substantially bigger, heavier, or more remote than us.


This technology requires the ability to translate human movement to robot movement in a more direct natural way.  Such control would require transducing as life-like as possible the sensations at the remote location or environment to the VFR worker and to enable the VFR worker to work as naturally as possible to control the robot on the other side of the VR connection.

girl pilots fighting robot body suit

The VFR technology could also be extended to the micro world.  Robots inserted into the body may be able to perform operations under human command.


VER – Virtual extended reality (The final frontier)

VER is about extending our physical capabilities beyond their current abilities possibly needing brain implants or other more direct stimulation to the brain to translate the new senses to the human brain directly.


Intel inside the brain.

Neural electrode array wrapped onto a model of the brain. The wrapping process occurs spontaneously, driven by dissolution of a thin, supporting base of silk. (C. Conway and J. Rogers, Beckman Institute)


Ultimately brain / Network / Computation connection might allow humans to have instantaneous access to any information in the world ever created and the ability to virtually be connected to anywhere.

Bandwidth Requirements

The bandwidth requirements of all this depends on the resolution required.   For an immersive 3d visual field that is of the quality of real life we might need a bandwidth of 3 gigabits/second.  With compression and some smarts I’m guessing we could live with 30-100megabits/second.    Most senses will be orders of magnitude less in data requirements.    So, it is possible to imagine a completely translocated image of the world in 3d brought to us in realtime and vice versa to enable others to share in our reality.   It wouldn’t be cheap and if it was only visual it wouldn’t be complete.   Ideally eventually the sense of touch and physical duplicates who can replicate more than sight and sound would be needed but this is not hard to imagine given the technology we have today.

Networking

Ten years ago people frequently had thousands of bits/second to their home, and data communication over wireless phones was practically nonexistent; if you had it, it was very slow, at hundreds of bits/second.   Ten years later, 4th-generation LTE cell phones are common, allowing communication at tens of millions of bits/second over wireless, and many homes have hundreds of millions of bits/second.

We are talking about a 10,000x increase in communication throughput in 10 years.  I am frankly shocked this was possible.  Shannon showed in 1948 that there are theoretical limits to the amount of data one can transmit over a given bandwidth.  Today's cell phones seem to break these laws (they don't, but they come close). They achieve these amazing feats by employing a tremendous amount of sophistication, combining data from multiple antennas with mathematically complex calculations.  Cell phones are able to do what should be impossible: transmit and receive tens of millions of bits of data to each individual portable device over the open air, with thousands of other devices in the same vicinity doing the same thing.

What if we could do this again and get another 10,000 increase in bandwidth? One question is what would be the use of 10,000 times the performance we have today?  Such a level of performance would be mind-boggling and seemingly unnecessary.  It may be impossible to achieve wirelessly but wired communications could easily see such increases.

The purpose of such communication bandwidth for the average person could only be for virtual reality.   If I could create a 3d impression of a distant place here to a realistic enough level I may not need to travel to X to basically experience X.    I believe the technology to deliver this bandwidth is going to happen and it may take 10 or 20 years but it will happen.

There are other purposes.  We could have more immersive, higher realism streaming movies or more impressive gaming.   Some have suggested car to car communication could soak up some of that bandwidth.  I believe these will happen too but the VR is the most impactful and disruptive technology.

Summary

The continued acceleration and improvement of bandwidth makes it possible to do more and more over virtual connections rather than physical ones.  Today you can buy, for a few thousand dollars, a device that rolls around with your face on a screen.  The device is cute and allows you to be someplace else virtually.   You can control the remote robot with a joystick, run into people in the hall, come up to them at their desk, and talk to them.

It's not hard to imagine these devices becoming more and more anthropomorphic.  If the remote "me" were connected in such a way that I could control it simply by moving my body the way I normally would, then I could become more and more virtual.   Technology that Stephen Hawking uses today allows him to communicate through an infrared sensor mounted on his eyeglasses.  Neuroscientists are working with Stephen on a direct brain-wave connection.

While a physical suit, as depicted in the pictures above, could be used to translate movement into motion for remote robots, technology such as the MYO armband


allows you to translate arm gestures into real-world action.

Or for a direct brain control device:  New advances in brain control

There is no doubt this technology will transform the way we communicate, attend meetings, do work and even expand our ability to perform in new work environments.

We can see the utility of this in some of the technology being adopted today but I believe that over the next 5-10 years this technology will become more and more mainstream.


Madhuka UdanthaGoogle Chart with AngularJS

Google Charts provides many chart types that are useful for data visualization. Charts are highly interactive and expose events that let you connect them to create complex dashboards. Charts are rendered using HTML5/SVG technology to provide cross-browser compatibility. All chart types are populated with data using the DataTable class, making it easy to switch between chart types. A Google chart contains five main elements:

  • Chart has a type
  • Chart has data. Some charts use a different data format, but the basic format is the same.
  • Chart contains CSS style
  • Chart has options, which specify things like the chart title and axis labels
  • Chart format focuses on color format, date format and number format


Here I am trying to use one data set and switch between chart types.
In the data you will have columns and rows (the first element is the label).

1 chart.data = {"cols": [
2 {id: "month", label: "Month", type: "string"},
3 {id: "usa-id", label: "USA", type: "number"},
4 {id: "uk-id", label: "UK", type: "number"},
5 {id: "asia-id", label: "Asia", type: "number"},
6 {id: "other-id", label: "Other", type: "number"}
7 ], "rows": [
8 {c: [
9 {v: "January"},
10 {v: 22, f: "22 Visitors from USA"},
11 {v: 12, f: "Only 12 Visitors from UK"},
12 {v: 15, f: "15 Asian Visitors"},
13 {v: 14, f: "14 Others"}
14 ]},
15 {c: [
16 {v: "February"},
17 {v: 14},
18 {v: 33, f: "Marketing has happen"},
19 {v: 28},
20 {v: 6}
21 ]},
22 {c: [
23 {v: "March"},
24 {v: 22},
25 {v: 8, f: "UK vacation"},
26 {v: 11},
27 {v: 0}
28
29 ]}
30 ]};
31


First we need to add Google Chart to our Angular project and then to the HTML file.


1. Added "angular-google-chart": "~0.0.11" into the “dependencies” of the package.json


2. Add the 'ng-google-chart.js' file to the HTML page, and define a "div" for the chart


<script src="..\node_modules\angular-google-chart\ng-google-chart.js"></script>


<div google-chart chart="chart" style="{{chart.cssStyle}}"/>


3. Build the Controller


angular.module('google-chart-example', ['googlechart']).controller("ChartCtrl", function ($scope) {
    var chart1 = {};

    chart1.type = "BarChart";
    chart1.cssStyle = "height:400px; width:600px;";
    // uses the chart.data shown in the script above
    chart1.data = {"cols": [
        // labels and types
    ], "rows": [
        // names and values
    ]};

    chart1.options = {
        "title": "Website Visitors per month",
        "isStacked": "true",
        "fill": 20,
        "displayExactValues": true,
        "vAxis": {
            "title": "Visit Count", "gridlines": {"count": 6}
        },
        "hAxis": {
            "title": "Date"
        }
    };

    chart1.formatters = {};

    $scope.chart = chart1;
});
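To wire this controller up to the markup from step 2, the page needs to bootstrap the module and controller names declared above. A minimal sketch (the surrounding layout is illustrative):

<body ng-app="google-chart-example" ng-controller="ChartCtrl">
    <div google-chart chart="chart" style="{{chart.cssStyle}}"/>
</body>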

4. Let's add a few buttons for switching between charts.


<button ng-click="switch('ColumnChart')">ColumnChart</button>
<button ng-click="switch('BarChart')">BarChart</button>
<button ng-click="switch('AreaChart')">AreaChart</button>
<button ng-click="switch('PieChart')">PieChart</button>
<button ng-click="switch('LineChart')">LineChart</button>
<button ng-click="switch('CandlestickChart')">CandlestickChart</button>
<button ng-click="switch('Table')">Table</button>

5. Now add the functions to do the axis transformation and chart switching



$scope.switch = function (chartType) {
    $scope.chart.type = chartType;
    AxisTransform();
};

// Swap the vertical and horizontal axis options when the chart type changes
AxisTransform = function () {
    var tempvAxis = $scope.chart.options.vAxis;
    var temphAxis = $scope.chart.options.hAxis;
    $scope.chart.options.vAxis = temphAxis;
    $scope.chart.options.hAxis = tempvAxis;
};

6. Here we go!!!





sanjeewa malalgodaHow to use Authorization code grant type (Oauth 2.0) with WSO2 API Manager 1.8.0

1. Create an API in the WSO2 API Manager Publisher and create an application in the API Store. When you create the application, give a callback URL, for example http://localhost:9764/playground2/oauth2client
Since I'm running the playground2 application in an Application Server with port offset 1, I used the above address, but you are free to use any URL.

2. Paste the following URL into the browser, setting your own values for client_id and redirect_uri.

Sample URL:
http://<gateway-host>:<gateway-port>/authorize?response_type=code&scope=PRODUCTION&client_id=YOUR_CLIENT_ID&redirect_uri=YOUR_CALLBACK_URL

Exact URL:
http://localhost:8280/authorize?response_type=code&scope=PRODUCTION&client_id=O2OkOAfBQlicQeq5ERgE7Wh4zeka&redirect_uri=http://localhost:9764/playground2/oauth2client

3. Then it will return something like this. Copy the authorization code from:
Response from step 02:
http://localhost:9764/playground2/oauth2client?code=e1934548d0a0883dd5734e24412310

4. Get the access token and refresh token as follows.

Sample command:
curl -v -X POST --basic -u YOUR_CLIENT_ID:YOUR_CLIENT_SECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "client_id=YOUR_CLIENT_ID&grant_type=authorization_code&code=YOUR_AUTHORIZATION_CODE&redirect_uri=https://localhost/callback" https://localhost:9443/oauth2/token

Exact command:
curl -v -X POST --basic -u O2OkOAfBQlicQeq5ERgE7Wh4zeka:Eke1MtuQCHj1dhM6jKsIdxsqR7Ea -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "client_id=O2OkOAfBQlicQeq5ERgE7Wh4zeka&grant_type=authorization_code&code=e1934548d0a0883dd5734e24412310&redirect_uri=http://localhost:9764/playground2/oauth2client" http://localhost:8280/token

Response from step 04:
{"scope":"default","token_type":"bearer","expires_in":3600,
"refresh_token":"a0d9c7c4f96baed42da2c167e1ebbb75","access_token":"2de7da7e3822cf75fd7983cfe1337ec"}

5. Now call your API with the access token from step-4

curl -k -H "Authorization: Bearer 2de7da7e3822cf75fd7983cfe1337ec"
http://10.100.1.65:8280/test-api/1.0.0

Lakmal WarusawithanaConfigure Apache Stratos with Docker, CoreOS, Flannel and Kubernetes on EC2


Docker, CoreOS, Flannel and Kubernetes are some of the latest cloud technologies. They are integrated into Apache Stratos to make it a more scalable and flexible PaaS, enabling developers and devops to build their cloud applications with ease.

This post will focus on how you can create a Kubernetes cluster using CoreOS and Flannel on top of EC2. The latter part will discuss how you can create an application using Docker-based cartridges.

Setting up CoreOS, Flannel and Kubernetes on EC2


This section will cover how to create an elastic Kubernetes cluster with 3 worker nodes and a master.

I have followed [1] and also included the workarounds I used to overcome some issues that arose during my testing.

First of all, we need to setup some supporting tools which we need.

Install and configure kubectl


Kubectl is a client command-line tool provided by the Kubernetes team to monitor and manage a Kubernetes cluster. Since I am using a Mac, below are the steps to set it up on a Mac, but you can find more details on setting it up on other operating systems at [2].

wget https://storage.googleapis.com/kubernetes-release/release/v0.9.2/bin/darwin/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/

Install and configure AWS Command Line Interface


Below steps for setting up on Mac. For more information please see [3]
 
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo pip install awscli

If you encounter any issues, the following commands may help to resolve them:

sudo pip uninstall six
sudo pip install --upgrade python-heatclient

Create the Kubernetes Security Group


aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"

aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 4500 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes

Save the master and node cloud-configs



Launch the master


Attention: Replace <ami_image_id> below with a suitable CoreOS image for AWS. I recommend using the CoreOS alpha channel ami_image_id (ami-f7a5fec7), because I faced many issues with the other channels' AMIs at the time I tested.

aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://master.yaml

Record the InstanceId for the master.




Gather the public and private IPs for the master node:
aws ec2 describe-instances --instance-id <instance-id>
{
   "Reservations": [
       {
           "Instances": [
               {
                   "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
                   "RootDeviceType": "ebs",
                   "State": {
                       "Code": 16,
                       "Name": "running"
                   },
                   "PublicIpAddress": "54.68.97.117",
                   "PrivateIpAddress": "172.31.9.9",
...

Update the node.yaml cloud-config

Edit node.yaml and replace all instances of <master-private-ip> with the private IP address of the master node.

Launch 3 worker nodes

Attention: Replace <ami_image_id> below with a suitable CoreOS image for AWS. I recommend using the same ami_image_id used for the master.

aws ec2 run-instances --count 3 --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://node.yaml


Configure the kubectl SSH tunnel

This command enables secure communication between the kubectl client and the Kubernetes API.

ssh -i key-file -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>

Listing worker nodes

Once the worker instances have fully booted, they will be automatically registered with the Kubernetes API server by the kube-register service running on the master node. It may take a few mins.

kubectl get minions

Now you have successfully installed and configured a Kubernetes cluster with 3 worker (minion) nodes. If you want to try out more Kubernetes samples, please refer to [4].

Lets setup Stratos now.

Configure Apache Stratos


I recommend configuring Stratos on a separate EC2 instance. Create an m3.medium instance from an Ubuntu 14.04 AMI (I used ami-29ebb519). Also make sure you have opened the following ports in the security group used: 9443, 1883, 7711

SSH into the created instance and follow the below steps to setup Stratos

  1. Download the Stratos binary distribution (apache-stratos-version.zip) and unzip it. This folder will be referred to as <STRATOS-HOME> later.
  2. This can be done using either of the following methods:
    • Method 1 - Download the Stratos binary distribution from the Apache Download Mirrors and unzip it. As of today (09/02/2015) Stratos 4.1.0 has not had its GA release, so I recommend using Method 2 with the master branch until the GA release is available.
    • Method 2 - Build the Stratos source to obtain the binary distribution and unzip it.
      1. git checkout tags/4.1.0-beta-kubernetes-v3
      2. Build Stratos using Maven.
      3. Navigate to the stratos/ directory, which is within the directory that you checked out the source:
        cd <STRATOS-SOURCE-HOME>/  
      4. Use Maven to build the source:
        mvn clean install
      5. Obtain the Stratos binary distribution apache-stratos-version.zip from the <STRATOS-SOURCE-HOME>/products/stratos/modules/distribution/target/ directory and unzip it.
  3. Start ActiveMQ:
    • Download and unzip ActiveMQ.
    • Navigate to the <ACTIVEMQ-HOME>/bin/ directory, which is in the unzipped ActiveMQ distribution.
    • Run the following command to start ActiveMQ.
    • ./activemq start
  4. Start Stratos server:
    • bash <STRATOS-HOME>/bin/stratos.sh start

If you wish you can tail the log and verify Stratos server is starting without any issues:
tail -f <STRATOS-HOME>/repository/logs/wso2carbon.log

Try our Stratos,kubernetes sample


Apache Stratos samples are located at following folder in git repo.

<STRATOS-SOURCE-HOME>/samples/

Here I will use a simple sample called the "single-cartridge" application, which is in the applications folder. First you have to replace the Kubernetes cluster information with the relevant information from the setup you created.

Edit <STRATOS-SOURCE-HOME>/samples/applications/single-cartridge/artifacts/kubernetes/kubernetes-cluster-1.json and change the following highlighted values to suit your environment.

{
     "clusterId": "kubernetes-cluster-1",
     "description": "Kubernetes CoreOS cluster",
     "kubernetesMaster": {
                 "hostId" : "KubHostMaster1",
                 "hostname" : "master.dev.kubernetes.example.org",
       "privateIPAddress": "Kube Master Private IP Address",
                 "hostIpAddress" : "Kube Master Public IP Address",
                 "property" : [
                 ]
       },

       "portRange" : {
          "upper": "5000",
          "lower": "4500"
       },

       "kubernetesHosts": [
             {
                    "hostId" : "KubHostSlave1",
                    "hostname" : "slave1.dev.kubernetes.example.org",
          "privateIPAddress": "Kube Minion1 Private IP Address",
                    "hostIpAddress" : "Kube Minion1 Public IP Address",
                    "property" : [
                    ]
               },
               {
                    "hostId" : "KubHostSlave2",
                    "hostname" : "slave2.dev.kubernetes.example.org",
"privateIPAddress": "Kube Minion 2 Private IP Address",
                    "hostIpAddress" : "Kube Minion 2 Public IP Address",
                    "property" : [
                    ]
               },
               {
                    "hostId" : "KubHostSlave3",
                    "hostname" : "slave3.dev.kubernetes.example.org",
          "privateIPAddress": "Kube Minion 3 Private IP Address",
                    "hostIpAddress" : "Kube Minion 3 Public IP Address",
                    "property" : [
                    ]
               }
   ],   
   "property":[
 {
         "name":"payload_parameter.MB_IP",
         "value":"Apache Stratos instance Public IP Address"
      },
      {
         "name":"payload_parameter.MB_PORT",
         "value":"1883"
      },
      {
         "name":"payload_parameter.CEP_IP",
         "value":"Apache Stratos instance Public IP Address"
      },
      {
         "name":"payload_parameter.CEP_PORT",
         "value":"7711"
      },
      {
         "name":"payload_parameter.LOG_LEVEL",
         "value":"DEBUG"
      }
   ]
}

To speed up the sample experience, you can log in to all 3 minions and pull the Docker image which we are going to use in the sample. This step is not mandatory, but it helps to cache the Docker image on the configured minions.

docker pull stratos/php:4.1.0-beta

core@ip-10-214-156-131 ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
stratos/php                     4.1.0-beta          f761a71b087b        18 hours ago        418 MB



You can just run the following automated sample script:

<STRATOS-SOURCE-HOME>/samples/applications/simple/single-cartridge-app/scripts/kubernetes/deploy.sh

You can use kubectl commands to view the created pods in the Kubernetes cluster from your local machine over the SSH tunnel.

kubectl get pods

Lakmals-MacBook-Pro:ec2-kubernetes lakmal$ kubectl get pods
POD                                    IP                  CONTAINER(S)                                          IMAGE(S)                 HOST                LABELS                                                     STATUS
8b60e29c-b1d1-11e4-8bbb-22000adc133a   10.244.84.12        php1-php-domain338c238a-4856-4f00-b881-19aecda74cf7   stratos/php:4.1.0-beta   10.214.156.131/     name=php1-php-domain338c238a-4856-4f00-b881-19aecda74cf7   Running



Point your browser to https://<Stratos-Instance-IP>:9443/console and log in as admin:admin, which is the default.


References







Dedunu DhananjayaIMAP Java Test program and JMeter Script

One of my colleagues wanted to write a JMeter script to test IMAP, but that code failed, so I got involved as well. JMeter BeanShell uses Java in the backend. First I tried with a Maven project, and finally I managed to write code that lists the IMAP folders. The Java implementation is shown below.

Then we wrote code to print the IMAP folder count for the JMeter BeanShell sampler. The code is shown below.

Complete Maven project is available on GitHub - https://github.com/dedunu/imapTest

Dedunu DhananjayaIncrease memory and CPUs on Vagrant Virtual Machines

In the last post I showed how to create multiple nodes in a single Vagrant project. Usually the "ubuntu/trusty64" box comes with 500MB of RAM, but some developers need more RAM and more CPUs. In this post I'm going to show how to increase the memory and the number of CPUs in a Vagrant project. Run the commands below:

mkdir testProject1
cd testProject1
vagrant init
Then edit the Vagrantfile like below.

The above changes will increase the memory to 8GB and add one more CPU core. Run the commands below to start the Vagrant machine and get SSH access.

vagrant up
vagrant ssh
If you have an existing project, you just have to add these lines; when you restart the project, the memory will be increased.

Dedunu DhananjayaMultiple nodes on Vagrant

Recently I started working with Vagrant. Vagrant is a good tool that you can use for development. In this post I'm going to explain how to create multiple nodes in a Vagrant project.
mkdir testProject
cd testProject
vagrant init

If you run the above commands, a Vagrant project will be created for you. Now we have to make changes to the Vagrantfile. Your initial Vagrantfile will look like below.

You have to edit the Vagrantfile and add content like below.

The above sample Vagrantfile will create three nodes. Now run the command below to start the Vagrant virtual machines.

vagrant up

If you followed the instructions properly, you will get an output like below.

If you want to connect to master node, run below command.

vagrant ssh master
If you want to connect to slave1 node, run below command.

vagrant ssh slave1
Whichever machine you want to connect to, you just have to type vagrant ssh <machine name>. Hope this will help you!

Chathurika Erandi De SilvaTesting NTLM Grant Type with API Manager

In this post I will discuss testing the NTLM grant type in API Manager. You will need the following to get this working:

Preconditions:

1. WSO2 API Manager latest version
2. Windows 2008 machine (should have java and ant configured)
3. Active directory setup

Let me first explain a bit about NTLM before going into the details.

NTLM: Windows challenge / response

This is an interactive authentication process that involves three-way communication between a client, a server, and a domain controller.

The following diagram describes the process at a high level.


Figure1: NTLM authentication in a high level

More information can be found in Microsoft NTLM


API Manager relevance of NTLM

Here the following components come together to make the authentication process a success:

1. API Manager: Server
2. Client : Some sort of API invoker
3. Domain controller: Active Directory

The steps of NTLM authentication with regard to API Manager can be described as follows:

1. The client generates the hash value of the password.
2. The client sends the username to the server.
3. The server generates a random 16-byte challenge and sends it to the client.
4. The client encrypts the challenge with its hash value and sends the result to the server.
5. The server sends the username, the challenge sent to the client, and the response from the client to the domain controller.
6. The domain controller obtains the password of the user from its database, calculates the hash value of the password, uses it to encrypt the challenge, and checks whether the received encrypted value matches what it calculated. If so, the authentication is successful.

The client can then establish communication with the server.

Testing the NTLM Grant Type in API Manager

Setting up NTLM in Windows

First of all, we have to configure NTLM in the Windows environment. To do that, follow the steps below.

1. Use the Windows environment given in the preconditions.
2. Go to Administrative Tools and select Local Security Policy.
3. On the left panel, expand Local Policies and select Security Options to display a list of security policies. Right-click on "LAN Manager authentication level" and click Properties to display the Properties screen.

 
Figure 2: Configuring NTLM

 4. Select "Send LM & NTLM responses" from the properties drop-down list.

Now the NTLM is configured.

Setting up the API Manager

1. Unzip the API Manager folder 
2. Configure API Manager to use Active Directory as the user store (read more on configuring Active Directory for API Manager)
3. Go to API Manager Home/samples/NTLMGrantClient 
4. Configure the build file as instructed
5. Run the build file

When run, the sample client authenticates using NTLM and then generates an access token for the API. The API can then be successfully invoked using this access token.



Kalpa Welivitigoda/home when moving from Ubuntu to Fedora

After using Ubuntu (13.10) for almost a year, I decided to move back to Fedora (Fedora 21). This is a short post on my experience mounting the same /home I used in Ubuntu on Fedora.

I had a separate partition for /home in Ubuntu which I needed to mount as /home in Fedora as well. In Anaconda (the Fedora installer) I chose to configure the partitions manually. In the manual partitioning window it listed all the partitions I had under Ubuntu (it was smart enough to list them under the label Ubuntu 13.10). I mounted the / of Ubuntu, with re-formatting, as the / in Fedora, and for /home I mounted the same /home from Ubuntu. /home was then listed under both "New Fedora 21 Installation" and "Ubuntu 13.10". I proceeded. During the installation I created the same user ("kalpa") that was there in Ubuntu. The "User creation" phase of the installation took a considerable amount of time; this is because it sets the file permissions for the new user, and the time taken may vary based on the number of files in /home. The rest of the installation went smoothly, and I have had no issues with /home so far on Fedora 21.

Prabath SiriwardenaTwo Security Patches Issued Publicly for WSO2 Identity Server 5.0.0

Wolfgang Ettlinger (discovery, analysis, coordination) from the SEC Consult Vulnerability Lab contacted the WSO2 security team on the 19th of March and reported the following three vulnerabilities in WSO2 Identity Server 5.0.0.

1) Reflected cross-site scripting (XSS, IDENTITY-3280)

Some components of the WSO2 Identity Server are vulnerable to reflected cross-site scripting vulnerabilities. The effect of this attack is minimal because WSO2 Identity Server does not expose cookies to JavaScript.

2) Cross-site request forgery (CSRF, IDENTITY-3280)

On at least one web page, CSRF protection has not been implemented. An attacker on the internet could lure a victim who is logged in to the Identity Server administration web interface to a web page containing, for example, a manipulated tag. The attacker is then able to add arbitrary users to the Identity Server.

3) XML external entity injection (XXE, IDENTITY-3192)

An unauthenticated attacker can use the SAML authentication interface to inject arbitrary external XML entities. This allows an attacker to read arbitrary local files. Moreover, since the XML entity resolver allows remote URLs, this vulnerability may allow an attacker to bypass firewall rules and conduct further attacks on internal hosts. This vulnerability had already been found before being reported by Wolfgang Ettlinger, and all our customers were patched, but the corresponding patch was not issued publicly. Also, this attack is not as harmful as it sounds, since in all our production deployments WSO2 Identity Server runs as a less privileged process, which cannot be used to exploit or gain access to read arbitrary local files.

The WSO2 security team treats all vulnerabilities reported to security@wso2.com with the highest priority. We contacted the reporter immediately and started working on the fix. The fixes were made on the reported components immediately, but we wanted to make sure we built a generic solution where all possible XSS and CSRF attacks are mitigated centrally.

Once that solution was implemented as a patch to Identity Server 5.0.0, we tested the complete product using OWASP Zed Attack Proxy and CSRFTester. After testing almost all of the Identity Server functionality with the patch, we released it to all our customers two weeks prior to the public disclosure date. The patch for XXE was released a few months earlier. I would also like to confirm that none of the WSO2 customers were exploited or attacked using any of these vulnerabilities.

On 13th May, parallel to the public disclosure, we released both the security patches publicly. You can download following patches from http://wso2.com/products/identity-server/.
  • WSO2-CARBON-PATCH-4.2.0-1194 
  • WSO2-CARBON-PATCH-4.2.0-1095 
WSO2 thanks Wolfgang Ettlinger (discovery, analysis, coordination) from the SEC Consult Vulnerability Lab for responsibly reporting the identified issues and working with us as we addressed them. At the same time, we are disappointed with the over-exaggerated article published on Threatpost. The article was not brought to the attention of the WSO2 security team before it was published, although the WSO2 security team responded to the reporter's query immediately over email. In any case, we are fully aware that such reports are unavoidable and not under our control.

The WSO2 security team is dedicated to protecting all its customers and the larger community around WSO2 from all sorts of security vulnerabilities. We appreciate your collaboration; please report any security issues you discover related to WSO2 products to security@wso2.com.

Lali DevamanthriStandardized Service Contract?

The SOA architectural style is fundamentally about separation; establishing smaller, separate units of capability that create a more agile application and infrastructure environment. Most of the core SOA principles such as loose coupling, abstraction, virtualization etc depend upon the existence of a contract.

The concept of service contract appears in various guises at different points in both the software process and the service lifecycle. Contractual mechanisms are needed to:

  • specify the service that will be provided to the consumer regardless of how the service is implemented
  • specify constraints on how the service is to be implemented
  • specify the commercial terms and associated logistics that govern the way in which the service is provided and used.

A contract implies rights and obligations, and not all of the obligations fall on the provider of the service: i.e., the server offers a service provided the client respects the conditions for calling it, and not only from a syntactic point of view. For example, you could have a service which is implemented with some limitations in terms of enterprise capabilities (read availability, throughput, response time, security, ...) for any valid reason (cost, time, resources, ...). If someone needs the same service under different conditions (for example 24x7), this has an impact on the implementation and should be paid for by the client. The server should check the preconditions before trying to execute the service.

MBA expert advice in this matter is :

Technology-wise: how maintenance, upgrades and the associated downtime (and which parts of the business/systems they affect) will be handled.

Business-wise: the exit strategy (e.g. supplier change, bankruptcy, takeover) and how it will be handled by both parties.

Anyway, there is no standard for the specification of SLAs. The most referenced and complete specifications that relate to SLAs for SOA environments, and in particular web services, are the Web Service Level Agreement (WSLA) language and framework and the Web Services Agreement Specification (WS-Agreement).
WSLA is a specification and reference implementation developed by IBM that provides detailed SLA specification capabilities enabling the runtime monitoring of SLA compliance.


Madhuka UdanthaGrammar induction

A few days ago I was working on pattern mining over huge files and came across millions of patterns (of varying lengths, from 2 to 150). Now I am looking at regex generation algorithms and came across 'grammar induction', something we touched on back in university, but there is much more to it than that.

Grammar induction

Grammar induction, also known as grammatical inference or syntactic pattern recognition, refers to the process in machine learning of learning a formal grammar (usually as a collection of re-write rules or productions, or alternatively as a finite state machine or automaton). There is now a rich literature on learning different types of grammars and automata, under various learning models and using various methodologies. So researchers need to go back to the books and read them.

Grammatical inference[1] has often been very focused on the problem of learning finite state machines of various types (induction of regular languages), since there have been efficient algorithms for this problem since the 1980s. A more recent textbook is de la Higuera (2010) [1], which covers the theory of grammatical inference of regular languages and finite state automata. More recently these approaches have been extended to the problem of inference of context-free grammars and richer formalisms, such as multiple context-free grammars and parallel multiple context-free grammars. Other classes of grammars for which grammatical inference has been studied are contextual grammars and pattern languages. Here is a summary of the topic:

  • Grammatical inference by genetic algorithms[2]
  • Grammatical inference by greedy algorithms
    • Context-free grammar generating algorithms
      • Lempel-Ziv-Welch algorithm[3]
      • Sequitur
  • Distributional Learning algorithms
    • Context-free grammars languages
    • Mildly context-sensitive languages

Induction of regular languages
Induction of regular languages refers to the task of learning a formal description (e.g. a grammar) of a regular language from a given set of example strings. Language identification in the limit[4] is a formal model for inductive inference. A regular language is defined as a (finite or infinite) set of strings that can be described by one of the mathematical formalisms called "finite automaton", "regular grammar", or "regular expression", all of which have the same expressive power. A regular expression can be:

  • ∅ (denoting the empty set of strings),
  • ε (denoting the singleton set containing just the empty string),
  • a (where a is any character in Σ; denoting the singleton set just containing the single-character string a),
  • r+s (where r and s are, in turn, simpler regular expressions; denoting their set's union)
  • rs (denoting the set of all possible concatenations of strings from r 's and s 's set),
  • r+ (denoting the set of n-fold repetitions of strings from r 's set, for any n≥1), or
  • r* (similarly denoting the set of n-fold repetitions, but also including the empty string, seen as 0-fold repetition).

The largest and the smallest sets containing the given strings are called the trivial over-generalization and under-generalization, respectively.

Brill[5] defined reduced regular expressions:

  • a (where a is any character in Σ; denoting the singleton set just containing the single-character string a),
  • ¬a (denoting any other single character in Σ except a),
  • • (denoting any single character in Σ)
  • a*, (¬a)*, or •* (denoting arbitrarily many, possibly zero, repetitions of characters from the set of a, ¬a, or •, respectively), or
  • rs (where r and s are, in turn, simpler reduced regular expressions; denoting the set of all possible concatenations of strings from r 's and s 's set).

Given an input set of strings, he builds step by step a tree with each branch labeled by a reduced regular expression accepting a prefix of some input strings, and each node labelled with the set of lengths of accepted prefixes[5]. He aims at learning correction rules for English spelling errors, rather than at theoretical considerations about learnability of language classes. Consequently, he uses heuristics to prune the tree-buildup, leading to a considerable improvement in run time.
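As a rough, illustrative aside, these reduced regular expression constructs map naturally onto java.util.regex syntax; the pattern below is only an example of that mapping, not part of Brill's work:

import java.util.regex.Pattern;

public class ReducedRegexDemo {
    public static void main(String[] args) {
        // a             -> the literal character 'a'
        // ¬a            -> any single character except 'a', written [^a]
        // •             -> any single character, written .
        // a*, (¬a)*, •* -> zero or more repetitions of such a character class
        // rs            -> plain concatenation of two patterns
        Pattern p = Pattern.compile("t[^x]*n"); // t, then arbitrarily many non-'x' characters, then n
        System.out.println(p.matcher("train").matches());  // true
        System.out.println(p.matcher("taxing").matches()); // false
    }
}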

[1] de la Higuera, Colin (2010). Grammatical Inference: Learning Automata and Grammars. Cambridge: Cambridge University Press.

[2] Dupont, Pierre. "Regular grammatical inference from positive and negative samples by genetic search: the GIG method." Grammatical Inference and Applications. Springer Berlin Heidelberg, 1994. 236-245.

[3] Batista, Leonardo Vidal, and Moab Mariz Meira. "Texture classification using the Lempel-Ziv-Welch algorithm." Advances in Artificial Intelligence–SBIA 2004. Springer Berlin Heidelberg, 2004. 444-453.

[4] Gold, E. Mark (1967). "Language identification in the limit". Information and Control 10 (5): 447–474.

[5] Eric Brill (2000). "Pattern–Based Disambiguation for Natural Language Processing". Proc. EMNLP/VLC

Sagara GunathungaTimeout and Circuit Breaker Pattern in WSO2 Way

When we develop enterprise-scale software systems it's hard to avoid mistakes; sometimes these mistakes teach us very important lessons that help us avoid the same mistake again and craft software in a much better way. Experienced developers recognize these repetitive solutions and formalize them as design patterns, and sometimes we practise these patterns without even knowing it. In this post I describe two design patterns called 'Timeout' and 'Circuit Breaker' and how they are used in the WSO2 stack, especially within WSO2 ESB.


Timeout Design Pattern 

The Timeout pattern is not something new; it has been used widely since the early days of computing and networking. The basic idea is that when one system communicates with another, it should not wait indefinitely to receive messages; instead, after waiting for a pre-defined interval, it should release its resources and assume that the other party cannot communicate at this time.

- Timeout improves fault isolation; this is a very important factor in preventing failures of one system from propagating into another.

- Timeout also helps to manage and use system resources properly: since the caller will not wait indefinitely, it is possible to release expensive resources such as DB transactions, network connections, etc.

- Timeout is one way to achieve another important design principle called "Fail Fast": if a transaction/activity cannot complete, it should notify the caller or throw a suitable error as early as possible (see the small Java illustration below).
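As a plain-Java illustration of the timeout and fail-fast idea (independent of WSO2 ESB), a caller can bound how long it waits for a slow call and release its resources once the limit is exceeded; the sleep duration and limits below are arbitrary:

import java.util.concurrent.*;

public class TimeoutCallDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(() -> {
            Thread.sleep(500);           // simulate a slow remote call
            return "response";
        });
        try {
            // Wait at most 200 ms for the "remote" call, then give up and fail fast.
            String response = future.get(200, TimeUnit.MILLISECONDS);
            System.out.println(response);
        } catch (TimeoutException e) {
            future.cancel(true);         // release the resources held by the call
            System.out.println("Timed out - handling the failure instead of waiting forever");
        } finally {
            executor.shutdownNow();
        }
    }
}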
               


Now let's look at how the Timeout design pattern is implemented in WSO2 ESB. In WSO2 ESB, external systems are represented as "Endpoints"; these Endpoints encapsulate access URIs, QoS policies, and the availability of external systems.

In the ESB configuration language, the <timeout> element is used to configure timeout settings. The most important properties are given below.
• duration - This specifies the time duration for the timeout.
• responseAction - This specifies what the ESB should do with the current message once the timeout is exceeded. There are 2 possible values:
  1. discard - simply discard the current message.
  2. fault - redirect the current message to the immediate fault sequence.

As an example, during the fault sequence one can persist those messages temporarily and try to deliver them once the remote system is alive again; WSO2 ESB supports an out-of-the-box concept called store-and-forward for this.
       
    Example Timeout Configuration

<endpoint name="TimeoutEP">
  <address uri="http://localhost:9764/CalculatorService-war_1.0.0/services/calculator_service/call">
    <timeout>
      <duration>200</duration>
      <responseAction>fault</responseAction>
    </timeout>
  </address>
</endpoint>
                     
In this sample, we try to call a RESTful endpoint available at the http://localhost:9764/CalculatorService-war_1.0.0/services/calculator_service/call URL, and the timeout value is set to 200 ms; once this limit is exceeded, messages will be re-routed to the error handling sequence.

Here I have given an ESB REST API which calls the above endpoint. In this sample, once the timeout is exceeded, the client is notified with an error message.

<api name="CalAPI" context="/cal">
  <resource methods="GET">
    <inSequence>
      <send>
        <endpoint key="TimeoutEP"/>
      </send>
    </inSequence>
    <outSequence>
      <send/>
    </outSequence>
    <faultSequence>
      <header name="To" action="remove"></header>
      <property name="RESPONSE" value="true"></property>
      <property name="NO_ENTITY_BODY" scope="axis2" action="remove"></property>
      <log level="full"></log>
      <payloadFactory media-type="xml">
        <format>
          <ns:MyResponse xmlns:ns="http://services.samples">
            <ns:Error>We can't respond to you at this time; we will respond through e-mail soon</ns:Error>
          </ns:MyResponse>
        </format>
      </payloadFactory>
      <send/>
    </faultSequence>
  </resource>
</api>

    Circuit Breaker Pattern

In his book Release It!, Michael T. Nygard formalized and nicely presented the Circuit Breaker pattern. I would also recommend reading Martin Fowler's excellent write-up on this pattern.

        
This is how Fowler introduces the Circuit Breaker pattern:

    "The basic idea behind the circuit breaker is very simple. You wrap a protected function call in a circuit breaker object, which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. Usually you'll also want some kind of monitor alert if the circuit breaker trips."
    [Source - http://martinfowler.com/bliki/CircuitBreaker.html ] 

The Circuit Breaker pattern defines three states (a minimal Java sketch follows this list):
• Closed - The circuit is closed and communication with the remote party is possible without any issues.
• Open - After N failures the circuit goes to the open state.
  • For a certain time interval the system does not try to send further messages to the remote system.
  • The client receives an error message.
• Half-Open - After a certain time interval the system tries to send a limited number of messages to the remote system; if the communication is successful the circuit is reset to the Closed state.
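To make those state transitions concrete, here is a minimal, illustrative circuit breaker in plain Java; it is only a sketch of the pattern itself, not of the WSO2 ESB implementation, and the class name and thresholds are arbitrary:

import java.util.function.Supplier;

public class SimpleCircuitBreaker {
    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;     // N failures before opening the circuit
    private final long openIntervalMillis;  // how long to stay open before a trial call
    private State state = State.CLOSED;
    private int failureCount = 0;
    private long openedAt = 0;

    public SimpleCircuitBreaker(int failureThreshold, long openIntervalMillis) {
        this.failureThreshold = failureThreshold;
        this.openIntervalMillis = openIntervalMillis;
    }

    public synchronized <T> T call(Supplier<T> protectedCall) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= openIntervalMillis) {
                state = State.HALF_OPEN;      // allow one trial call
            } else {
                throw new IllegalStateException("Circuit is open - failing fast");
            }
        }
        try {
            T result = protectedCall.get();
            failureCount = 0;                 // success closes the circuit
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            failureCount++;
            if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
                state = State.OPEN;           // trip the breaker
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}

A caller would wrap its remote invocation in call(...) and handle the fail-fast exception while the circuit is open.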

    Here is a simple state change diagram according to the Circuit Breaker pattern.




The WSO2 ESB Endpoint includes a feature-rich error handling mechanism that can be used to implement the Circuit Breaker pattern. One difference is that the WSO2 ESB endpoint state names differ from the Circuit Breaker pattern state names.

Now let's look at an example Endpoint definition, which is in fact an improved version of the above TimeoutEP.

<endpoint name="CircuitBreakerEP">
  <address uri="http://localhost:9764/CalculatorService-war_1.0.0/services/calculator_service/call">
    <suspendOnFailure>
      <initialDuration>40000</initialDuration>
    </suspendOnFailure>
    <markForSuspension>
      <errorCodes>101507,101508,101505,101506,101509,101500,101510,101001,101000,101503,101504,101501</errorCodes>
      <retriesBeforeSuspension>3</retriesBeforeSuspension>
      <retryDelay>400</retryDelay>
    </markForSuspension>
    <timeout>
      <duration>200</duration>
      <responseAction>fault</responseAction>
    </timeout>
  </address>
</endpoint>


The following are the important configuration details of the above example.
1. initialDuration - Once the endpoint reaches the 'Suspended' state ('open' state in the Circuit Breaker pattern), it waits for 40000 ms before the next retry.
2. retriesBeforeSuspension - This is the failure count after which the endpoint is moved into the 'Suspended' ('open') state.
3. retryDelay - This is the delay between failed calls.
You can use the above "CalAPI" to test this endpoint as well.


<api name="CalAPI" context="/cal">
  <resource methods="GET">
    <inSequence>
      <send>
        <endpoint key="CircuitBreakerEP"/>
      </send>
    </inSequence>
    <outSequence>
      <send/>
    </outSequence>
    <faultSequence>
      <header name="To" action="remove"></header>
      <property name="RESPONSE" value="true"></property>
      <property name="NO_ENTITY_BODY" scope="axis2" action="remove"></property>
      <log level="full"></log>
      <payloadFactory media-type="xml">
        <format>
          <ns:MyResponse xmlns:ns="http://services.samples">
            <ns:Error>We can't respond to you at this time; we will respond through e-mail soon</ns:Error>
          </ns:MyResponse>
        </format>
      </payloadFactory>
      <send/>
    </faultSequence>
  </resource>
</api>

    Internal Implementation

I have included the following details to clarify the pattern implementation in terms of WSO2 ESB concepts; you may skip this section if you are not interested in the internal implementation details.

The following two diagrams illustrate the state flow of the original Circuit Breaker pattern and of the WSO2 ESB implementation. Basically, the WSO2 ESB Endpoint "Active" state is identical to the "Closed" state of the pattern, and the pattern's "Open" state corresponds to the ESB Endpoint "Suspended" state. One main difference is that in WSO2 ESB there is no separate "Half-Open" state; the "Suspended" state encapsulates logic belonging to both the "Open" and "Half-Open" states.

Another difference is that in WSO2 ESB, after the first failure the Endpoint is moved into the "Timeout" state, and successive attempts are executed from the "Timeout" state.
Circuit Breaker pattern flow.


    WSO2 ESB  internal flow 

The following are some advanced configurations that can be used to achieve more flexible and more complex implementations of the Circuit Breaker pattern.

1. errorCodes - You can define which failure codes the Circuit Breaker should act on. The complete list of supported error codes can be found here.
2. progressionFactor, maximumDuration - By combining these with the "retriesBeforeSuspension" property you can form more complex and dynamic retry behaviours; for more details refer here.

NOTE: In WSO2 API Manager, the API Gateway is a lightweight ESB node, hence the "Timeout" and "Circuit Breaker" pattern implementations discussed above can be seamlessly used in WSO2 API Manager as well.

    Shani RanasingheUnable to get the hostName or IP address of the server when starting WSO2 Products

Today I faced an issue where a WSO2 server that needed restarting did not start up after shutting down, due to the following error:

    {org.apache.synapse.ServerConfigurationInformation} -  Unable to get the hostName or IP address of the server {org.apache.synapse.ServerConfigurationInformation}
    java.net.UnknownHostException: <hostname> : <hostname> : Name or service not known
    at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
    at org.apache.synapse.ServerConfigurationInformation.initServerHostAndIP(ServerConfigurationInformation.java:215)
    at org.apache.synapse.ServerConfigurationInformation.<init>(ServerConfigurationInformation.java:66)
    at org.apache.synapse.ServerConfigurationInformationFactory.createServerConfigurationInformation(ServerConfigurationInformationFactory.java:48)
    at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.initESB(ServiceBusInitializer.java:365)
    at org.wso2.carbon.mediation.initializer.ServiceBusInitializer.activate(ServiceBusInitializer.java:182)


In my case it was not a WSO2 bug; it was an issue with the machine.

The OS I was using was Ubuntu, and the problem was that the hostname was not correctly set.


    The easy way  - temporary fix

    1) Run the following command. (assuming the hostname you want to set is abc001)

                        #    hostname abc001

    2) To verify the hostname just run the following command

                               # hostname


It should print the hostname that we just set.


    Proper fix

    1) We need to fix this properly. In that case, we need to change the /sysconfig/network file.

Open this file as sudo or the super user, and change the hostname in that file to the hostname you require.

                               HOSTNAME=abc001

2) Then you need to update the /etc/hosts file to reflect this change. You should add the entry

                           127.0.0.1 abc001 

                                   or
                          <local-ip> abc001

    3) After that you need to restart the network service.

                       #service network restart
                                           or 
                       #/etc/init.d/network restart

    4) Then verify the hostname has been set by running the command

                        # hostname


    References

    1) http://www.howtogeek.com/50631/how-to-change-your-linux-hostname-without-rebooting/


    Shani Ranasinghe[WSO2 APIM] - The Basic Architecture of WSO2 APIM - Part 1

After a long time I decided to write a blog post, and this time it is about APIM, mainly because my new team is the WSO2 APIM team. Today I received training from Nuwan Dias (WSO2 APIM team) on the basic architecture of APIM, and that training inspired me to come up with this article.

To start off with, yes, I did have prior knowledge of the WSO2 APIM and have worked with it on various occasions, but the training I received today cleared many doubts and helped organize my knowledge of the WSO2 APIM very clearly. Hence, this article is aimed at beginners who want to gather a basic idea of the WSO2 APIM architecture.

    So, before I start there are some main components of WSO2 APIM which I will introduce briefly.

     1) The API Publisher
     2) The API Store
     3) The API Gateway
     4) The Key Manager

    The Publisher

This is the component responsible for the workflows associated with API creation; this is where an API developer does most of their work. The API Publisher is a Jaggery application (Jaggery is a framework for writing web apps and HTTP-focused web services, covering all aspects of the application (front-end, communication, server-side logic and persistence) in pure JavaScript). The Publisher allows the creation of an API by letting the developer fill in a form. This form consists of many details such as the API name, version, throttling level, etc.

    There are basically two ways of creating an API.
• Filling out the create API form in the WSO2 APIM.
• Importing a Swagger document (Swagger is a simple yet powerful representation of your RESTful API) into WSO2 APIM.
APIs created in this component are initially in the Created state. An API is associated with many states.

     The Store

The Store is the component that exposes the published APIs. The APIs created in the Publisher need to be pushed to the Published state in order for users to view them in the API Store. The API Store is responsible for maintaining API access. While the API Store is responsible for many other activities, I would like to believe that its main responsibility is to allow users to interact with the APIs and subscribe to them.

Subscription and access are maintained as follows: the Store shows the APIs of a tenant domain once the user selects the domain upon navigating to the Store. As mentioned before, these are only the APIs in the Published state. For a user to access an API, the user needs to create an Application in the API Store. In a practical scenario, let's say there is an API that provides weather information, and in my application I need that data for a feature I intend to deliver. I would then create an application, which would be an OAuth application, in the APIM Store. I can subscribe to an API using this created application, which maintains the subscriptions to APIs. When this is done, I can generate keys: a consumer key and consumer secret pair, which in OAuth terms is something similar to a username/password. Along with these, the Store also provides me an access token, which is an APPLICATION type token. With these tokens an application can access the subscribed API and obtain the service.

    The API store, just like the Publisher is a Jaggery web application.

    The Gateway

The Gateway is the access point for an external user to obtain the services offered by an API.

When the Publisher publishes an API, a Synapse config (Apache Synapse is a lightweight and high-performance Enterprise Service Bus (ESB)) of the API is sent to the Gateway. In a typical production deployment this is the component that will be exposed. The Gateway itself is an ESB, and hence sequences can be enforced on these APIs. When the API is invoked, this Synapse config is executed, and the config has a set of handlers which are called in sequence. These handlers are:
• Security handler
• Throttling handler
• Stats handler
• Google Analytics handler (disabled by default)
• Extensions handler
These handlers are required to run in sequence because each handler populates the Synapse message context and passes it on, which is required by the next handler in the sequence. The Gateway has endpoints for token and revoke, which are proxies for the actual endpoints embedded in the Key Manager component.

    The Keymanager

The Key Manager is the component which handles the authorization/authentication part. The Key Manager can also be configured to be WSO2 IS. The API Manager Key Manager handles the security-related aspects and issues and verifies keys, to the Store and from the Gateway.

There is a JAX-RS web app embedded in the Key Manager which has the implementations of the token and revoke endpoints. These are the endpoints the Gateway calls when requesting tokens.

With these four components introduced, in my next blog post I will discuss how an API is published and how it is involved with the Store, Gateway and Key Manager.

Till then, I hope this gives you a heads-up on the API Manager basic architecture, and I hope you read more about it. If interested, I have noted references below for your use.

    References
    [1] https://docs.wso2.com/display/AM180/Key+Concepts

    [2] http://synapse.apache.org/index.html
    [3] https://docs.wso2.com/display/AM180/WSO2+API+Manager+Documentation

    Shani Ranasinghe[WSO2 APIM] - The Basic Architecture of WSO2 APIM - Part 2- The Publisher - under the hood

As a continuation of my previous post, The Basic Architecture of WSO2 APIM - Part 1, this post will concentrate on the Publisher module and its major functions. It is recommended that you read the aforementioned post in order to clearly understand the concepts described here.

The Publisher is the starting point of an API's life cycle. An API is created at the API Publisher by an API publisher. The Publisher is also a major component of the WSO2 APIM and was introduced in my earlier post.

    To start off with let me bring up the basic outline of the Publisher in a graphical mode.



An API publisher will ideally interact with the API Publisher module to create and manage an API. The API Publisher is a Jaggery web app which provides an easy-to-work-with GUI.

    Databases associated with the API Publisher

The database configuration can be found at <APIM_HOME>/repository/conf/datasources/master-datasources.xml

    - Registry Database
      This DB handles all registry-related storage.
    - AM_DB
      This DB stores information required for the APIM.
    - UM_DB
      This DB stores information such as user permissions.

    Logging in 

The end user logs into the API Publisher by entering a username and password in the GUI provided by the web app. Once the credentials are entered, the API Publisher validates them against the authentication manager configured in api-manager.xml, found at repository/conf; it is defined under the <AuthManager> tag. This is the server that the Publisher and the Store point to for authentication. By default it is the embedded LDAP user store, pointed to via localhost and started with the APIM server itself. In a typical production deployment this could be WSO2 IS or any other IdP (identity provider).

When the end user logs into the API Publisher in an SSO-configured environment, the Publisher sends a SAML request to the authentication server. The authentication server processes the authentication and sends back a signed response, which the API Publisher then validates against its key stores.

Once authentication is completed, the Publisher checks for authorization in the UM_DB by checking for publisher permissions. Once both the authentication and the authorization succeed, the user is allowed to access the API Publisher.

    Creating an API

There are two ways of creating an API:
   1. By filling out the form in the API Publisher web app.
   2. By importing a Swagger doc.

When the API publisher creates an API, the API information is stored in the Registry Database, specifically in the governance space of the registry. Together with the API, the Swagger document related to the created API is also stored in the Registry Database.

When the API is created, a Synapse config of the API is also stored at the API Gateway. Since the API Gateway is a WSO2 ESB, the API is also capable of having sequences defined in the API definition.

<api>
  <in>
    <!-- custom in sequence -->
  </in>
  <out>
    <!-- custom out sequence -->
  </out>
</api>

In parallel to this, when the API is created, the AM_DB is also updated: a reference to the API is stored in its tables. Since all the information about the API is in the registry DB, only the subset of API information required for APIM functionality is stored in the AM_DB.

    When an API is created it is by default in the "created" state.

I hope this gives you a clear understanding of what happens at the Publisher end, under the hood.

In my next few blog posts I am planning to discuss the rest of the modules, the API Store and the API Gateway; these will cover the Key Manager as well.

    References
    1.https://docs.wso2.com/display/AM180/Key+Concepts
    2.https://docs.wso2.com/display/AM180/API+Developer+Tutorials

    Shani Ranasinghe[WSO2 APIM] - The Basic Architecture of WSO2 APIM - Part 3- The Store - under the hood

In continuation of my previous two posts, The Basic Architecture of WSO2 APIM - Part 1 and The Basic Architecture of WSO2 APIM - Part 2- The Publisher - under the hood, this post briefly discusses the architecture of the API Store component. These posts are mainly targeted at beginners on WSO2 APIM, and the aforementioned posts are a recommended read in order to understand this post clearly.

    First off all, what is the API Store?

The API Store is the playground for the API consumer. An API consumer can self-register, discover API functionality, subscribe to APIs, evaluate them and interact with them.

    The APIM store is also a jaggery web app.


    Databases associated with the API Store

The database configuration can be found at <APIM_HOME>/repository/conf/datasources/master-datasources.xml

    - Registry Database
      This DB handles all registry-related storage.
    - AM_DB
      This DB stores information required for the APIM.
        * The AM_ tables store information related to APIs.
        * The IDN_ tables store information related to OAuth identity.
          

    View API

The APIM Store is multi-tenanted; hence, at login, if multiple tenants are registered, it will prompt you to select the tenant.

Upon selection of a tenant, the APIs published by that tenant can be viewed in the API Store. Here the APIM Store extracts the APIs to display from the registry DB through a cache.

When logged in, a user can view the API in much more detail and also edit it if permission is granted.

The Store also shows the recently added APIs for convenience.

Applications & Subscribing to an API

An Application in the APIM world is a concept for decoupling APIs from consumers.

An Application is a single entity to which APIs can be subscribed. The application is created in the AM_DB database, and when an API is subscribed to, the subscription is also recorded there.

    This application is then the unit of reference in the APIM store.

According to the WSO2 docs, an application is defined as:

An application is primarily used to decouple the consumer from the APIs. It allows you to:
• Generate and use a single key for multiple APIs
• Subscribe multiple times to a single API with different SLA levels
     Creating Keys

Keys needed to invoke an API can be created in the APIM Store. Once the application is created, we can create tokens for that application.

When it comes to tokens, we can create application tokens (access tokens) and application user tokens; the types are APPLICATION and APPLICATION_USER. Application access tokens are the tokens that the application developer gets.

When we create these tokens in the APIM Store, we get a consumer_key and consumer_secret, which are per application. The APIM Store talks to the APIM Key Manager (in future releases there will be the capability of plugging in a custom Key Manager, but for the time being it is either the WSO2 APIM Key Manager or WSO2 IS acting as the Key Manager), and the Key Manager generates the keys. These keys are stored in the AM_DB as well.

The generated tokens are associated with the application and have a validity period.

Note: The consumer key is analogous to a username and the consumer secret is analogous to a user password.

    Regenerating Tokens

The WSO2 APIM allows the regeneration of access tokens. In this process there are three steps that will be executed:

     1) The existing token will be revoked.
     2) A new token will be generated
     3) The validity period of the token will be updated in the database.

When these three steps are performed, the newly generated token is returned.


    Indexing 

The Store indexes the APIs for better performance. The index information can be found at the /solr location in the registry. More information on this can be found at [1].

    Workflows 

In the APIM Store internals, many workflows are used. One such example is subscription creation.

For subscription creation a workflow executor is used. This workflow executor has two methods:

     1. execute

     2. complete

To make the workflow implementations clearer, let me bring up a diagram explaining them.



The implementation of the workflow can take two paths:

1) The default implementation, where the "complete" method is called directly within the "execute" method.
2) A custom workflow definition. For example's sake, say we use WSO2 BPS as the workflow execution engine. We need to write a custom workflow executor and use it in the APIM; the external workflow is then executed via a web service (for the SOAP service, the callback URL would be the BPS URL).

When a workflow is executed, the workflow details are put into the AM_WORKFLOWS table in the AM_DB, and the status of the workflow is moved to ON_HOLD. Once complete is called, it updates the status to either APPROVED or REJECTED.
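As a rough conceptual sketch of that execute/complete flow (the class and method signatures here are purely illustrative and are not the actual WSO2 workflow executor API):

public abstract class SubscriptionWorkflowSketch {

    // Called when a user subscribes; persists the workflow entry as ON_HOLD
    // and either completes immediately (default) or hands off to an external system such as BPS.
    public abstract void execute(String workflowReference);

    // Called when the external system (or execute itself, in the default case) finishes;
    // moves the workflow entry to APPROVED or REJECTED.
    public abstract void complete(String workflowReference, boolean approved);
}

// Default behaviour: complete is invoked directly from execute.
class DefaultSubscriptionWorkflow extends SubscriptionWorkflowSketch {
    @Override
    public void execute(String workflowReference) {
        // no external approval step - approve right away
        complete(workflowReference, true);
    }

    @Override
    public void complete(String workflowReference, boolean approved) {
        System.out.println(workflowReference + " -> " + (approved ? "APPROVED" : "REJECTED"));
    }
}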

    More detailed information on this can be found at [2] & [3].



    References


    [1] https://docs.wso2.com/display/AM180/Add+Apache+Solr-Based+Indexing
    [2] https://docs.wso2.com/display/AM180/Customizing+a+Workflow+Extension
    [3] https://docs.wso2.com/display/AM180/Adding+an+API+Subscription+Workflow
    [4] https://docs.wso2.com/display/AM180/Quick+Start+Guide#QuickStartGuide-Introductiontobasicconcepts






    Shani Ranasinghe[WSO2 APIM] - The Basic Architecture of WSO2 APIM - part 4- Gateway & Key Manager - Under the hoods

In this post I will briefly introduce how the Gateway and the Key Manager interact in order for an API to be successfully invoked.

    Let me bring up a diagram first.




In a real-world distributed deployment, only the WSO2 APIM Gateway would be exposed to the outside world. So, having created keys for an application that is subscribed to one or more APIs (if you are not familiar with this context, please refer to my earlier post, The Basic Architecture of WSO2 APIM - Part 2- The Store - under the hood, for a crash course), you would be able to invoke an API via the Gateway.

Ideally an application would have the consumer_key and the consumer_secret hardcoded in the application itself. When invoking the API, the username and password would have to be supplied, and the application would pass the username, password, consumer_key and consumer_secret to the Gateway. The Gateway has some token APIs [1], which are:
   - /token
   - /revoke

When calls are made to these APIs, the Gateway calls the Key Manager JAX-RS app in order to verify the access token. The Key Manager calls the AM_DB, retrieves the access token, and verifies it. It then returns the API info DTO, which includes the metadata of the access token: the validity period, refresh token and scopes.

When APIs are invoked, the Gateway does not call the Key Manager at every invocation; the APIM Gateway makes use of a cache for this. However, this cache can be turned off [2].

    Invoking the API

When APIs are invoked, there are several authorization grant types that we can use. The WSO2 APIM supports the same grant types that WSO2 IS supports. The password grant type is used only when the application is a trusted application, while the client credentials grant type requires only the consumer_key and consumer_secret [1].
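For example, a password-grant token request against the Gateway's /token endpoint typically looks like the following, where the host, user credentials and the base64-encoded consumer-key:consumer-secret pair are placeholders:

curl -k -d "grant_type=password&username=<username>&password=<password>" \
     -H "Authorization: Basic <base64(consumer-key:consumer-secret)>" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     https://<gateway-host>:8243/token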



    API Handlers

When the API is created, its Synapse configuration is stored in the WSO2 APIM Gateway, which is itself an ESB (WSO2 ESB). When the API is invoked and the request hits the Gateway, the Gateway executes the API's in and out sequences, which have a set of handlers. The API, when created, has a set of default handlers defined in it [3]:

   1) Security handler / APIAuthenticationHandler
      The security handler validates the OAuth token used to invoke the API.
   2) Throttle handler / APIThrottleHandler
      Throttles requests based on the throttle policy. This is done based on two counts:
      the global count and the local count.

   3) Stats handler / APIMgtUsageHandler
      Helps to push data to BAM for analytics.
   4) Google Analytics handler / APIMgtGoogleAnalyticsTrackingHandler
      Pushes events to Google Analytics for analytics.

   5) Extensions handler / APIManagerExtensionHandler
      Executes extension sequences.


Each of these handler classes has two methods:
  1) handleRequest
  2) handleResponse

These methods are overridden in each of these handlers to accomplish the task that the handler is written for. You could also write your own handler and plug it in; details on this can be found at [4].
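As a rough skeleton of what such a custom handler could look like (based on the Synapse AbstractHandler contract; see [4] for the official guide, and treat the class name and log messages here as placeholders):

import org.apache.synapse.MessageContext;
import org.apache.synapse.rest.AbstractHandler;

public class SimpleLoggingHandler extends AbstractHandler {

    @Override
    public boolean handleRequest(MessageContext messageContext) {
        System.out.println("Request passing through API: " + messageContext.getTo());
        return true;   // returning true lets the message continue to the next handler
    }

    @Override
    public boolean handleResponse(MessageContext messageContext) {
        System.out.println("Response passing through API");
        return true;
    }
}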

I hope you were able to gain a basic, high-level understanding of what happens internally when an API is invoked on the WSO2 APIM. By going through the references you will be able to gain much more detailed knowledge of the APIM Gateway and WSO2 APIM as a whole.



    References

    [1] https://docs.wso2.com/display/AM180/Token+API
    [2] https://docs.wso2.com/display/AM180/Configuring+Caching
    [3] https://docs.wso2.com/display/AM180/Writing+Custom+Handlers
    [4] https://docs.wso2.com/display/AM180/Writing+Custom+Handlers#WritingCustomHandlers-Writingacustomhandler

    Shani Ranasinghe[WSO2 APIM] - The Basic Architecture of WSO2 APIM -part 5- Statistics in APIM

In this blog post I will briefly go over how the APIM Publisher and Store are able to draw up graphs with statistical information about the APIs. This blog post is targeted at newcomers who would like to get a bird's-eye view of the functionality.

In the WSO2 APIM Gateway, as I explained in my previous post, The Basic Architecture of WSO2 APIM - part 3- Gateway & Key Manager - Under the hoods, there is a set of handlers defined per API. One of these handlers is the APIMgtUsageHandler, which invokes org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataPublisher. The APIMgtUsageDataPublisher is configured to publish events to the BAM server that has been configured with the APIM.

The process of publishing stats and viewing them in the Publisher/Store apps is illustrated in the diagram below.





The WSO2 APIM Gateway publishes the events via the Thrift protocol to the WSO2 BAM server (more likely a BAM cluster in a real-world production environment). The BAM server then writes the data to a NoSQL database, Cassandra. The BAM Analyzer, to which the APIM toolbox is deployed, fetches the data batch by batch from the Cassandra database and summarizes it using Hive; the Hive scripts have to be pre-written and deployed on the server. The BAM Analyzer then pushes the summarized data to an RDBMS.

The WSO2 APIM Store and Publisher then pull the data from the RDBMS and display it on the Store and Publisher analytics pages.

    This is a very brief explanation of what happens in the APIM when statistics are to be displayed.

    Detailed information can be found  at the references listed below.

    References

    [1] https://docs.wso2.com/display/AM180/Publishing+API+Runtime+Statistics#PublishingAPIRuntimeStatistics-ConfigureWSO2BAM
    [2] http://wso2.com/library/articles/2013/08/how-to-use-wso2-api-manager-and-bam-together-to-analyse-api-usage-stats/

     

    Lali DevamanthriTech Giants in Image Recognition Supremacy

    The race to exascale isn’t the only rivalry stirring up the advanced computing space. Artificial intelligence sub-fields, like deep learning, are also inspiring heated competition from tech conglomerates around the globe.

    When it comes to image recognition, computers have already passed the threshold of average human competency, leaving tech titans, like Baidu, Google and Microsoft, vying to outdo each other.

    The latest player to up the stakes is Chinese search company Baidu. Using the ImageNet object classification benchmark in tandem with Baidu’s purpose-built Minwa supercomputer, the search giant achieved an image identification error rate of just 4.58 percent, besting humans, Microsoft and Google in the process.

    An updated paper [PDF] from a team of Baidu engineers, describes the latest accomplishment carried out by Baidu’s image recognition system, Deep Image, consisting of “a custom-built supercomputer dedicated to deep learning [Minwa], a highly optimized parallel algorithm using new strategies for data partitioning and communication, larger deep neural network models, novel data augmentation approaches, and usage of multi-scale high-resolution images.”

    “Our system has achieved the best result to date, with a top-5 error rate of 4.58% and exceeding the human recognition performance, a relative 31% improvement over the ILSVRC 2014 winner,” state the report’s authors.

    The Baidu colleagues add that this is significantly better than the latest results from both Google, which reported a 4.82 percent error rate, and Microsoft, which days prior had declared victory over the average human error rate (of 5.1 percent) when it achieved a 4.94 percent score. Both companies were also competing in the ImageNet Large Scale Visual Recognition Challenge.


    Dedunu DhananjayaAlfresco 5.0.1 Document Preview doesn't work on Ubuntu?

I recently installed Alfresco for testing in a Vagrant instance, using an Ubuntu image. I forgot to install all the libraries that need to be installed on Ubuntu before installing Alfresco, but fortunately Alfresco worked without those dependencies.

    http://docs.alfresco.com/5.0/concepts/install-lolibfiles.html

The above link lists the libraries you should install before you install Alfresco. You should run the command below to install them.
    sudo apt-get install libice6 libsm6 libxt6 libxrender1 libfontconfig1 libcups2
But office document previews still didn't work properly; some documents worked and some of them didn't. Then I tried to debug it with one of my colleagues, and we found the below text in our logs.


    Then we tried to run soffice application from terminal. Look what we got!
    /home/vagrant/alfresco-5.0.1/libreoffice/program/oosplash: error while loading shared libraries: libXinerama.so.1: cannot open shared object file: No such file or directory
Then we realised that we needed to install that library on Ubuntu. Run the command below on the Ubuntu server to install the missing library.

    sudo apt-get install libxinerama1

    Make sure you run both commands above!

    Hiranya JayathilakaUsing Java Thread Pools

    Here's a quick (and somewhat dirty) solution in Java to process a set of tasks in parallel. It does not require any third party libraries. Users can specify the tasks to be executed by implementing the Task interface. Then, a collection of Task instances can be passed to the TaskFarm.processInParallel method. This method will farm out the tasks to a thread pool and wait for them to finish. When all tasks have finished, it will gather their outputs, put them in another collection, and return it as the final outcome of the method invocation.
    This solution also provides some control over the number of threads that will be employed to process the tasks. If a positive value is provided as the max argument, it will use a fixed thread pool with an unbounded queue to ensure that no more than 'max' tasks will be executed in parallel at any time. By specifying a non-positive value for the max argument, the caller can request the TaskFarm to use as many threads as needed.
    If any of the Task instances throw an exception, the processInParallel method will also throw an exception.
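The Task interface used below can be as simple as the following minimal sketch, consistent with how it is called from TaskFarm:

package edu.ucsb.cs.eager;

public interface Task<T> {
    // Perform the unit of work and return its result.
    T process() throws Exception;
}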

package edu.ucsb.cs.eager;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.*;

public class TaskFarm<T> {

    /**
     * Process a collection of tasks in parallel. Wait for all tasks to finish, and then
     * return all the results as a collection.
     *
     * @param tasks The collection of tasks to be processed
     * @param max Maximum number of parallel threads to employ (non-positive values
     *            indicate no upper limit on the thread count)
     * @return A collection of results
     * @throws Exception If at least one of the tasks fail to complete normally
     */
    public Collection<T> processInParallel(Collection<Task<T>> tasks, int max) throws Exception {
        ExecutorService exec;
        if (max <= 0) {
            exec = Executors.newCachedThreadPool();
        } else {
            exec = Executors.newFixedThreadPool(max);
        }

        try {
            List<Future<T>> futures = new ArrayList<>();

            // farm it out...
            for (Task<T> t : tasks) {
                final Task<T> task = t;
                Future<T> f = exec.submit(new Callable<T>() {
                    @Override
                    public T call() throws Exception {
                        return task.process();
                    }
                });
                futures.add(f);
            }

            List<T> results = new ArrayList<>();

            // wait for the results
            for (Future<T> f : futures) {
                results.add(f.get());
            }
            return results;
        } finally {
            exec.shutdownNow();
        }
    }

}

    Sajith KariyawasamSetting up a VM cluster in VirtualBox

You may come across a requirement to set up a cluster of virtual machines which need to be able to communicate among themselves, as well as to access the internet from within each virtual machine instance. With the default network settings in VirtualBox you won't be able to achieve inter-VM communication; for that you need to set up a host-only adapter.

    Go to VirtualBox UI, File --> Preferences --> Network --> Host-only networks

    Click "Add", and fill out IPV4 address as 192.168.56.1 and Network Mask 255.255.255.0. In the DHCP Server tab, untick enable DHCP server to disable it.



    Now we have configured the host-only adapter. We can use this adapter when creating new virtual machines.

My requirement is to set up two virtual machines with the IPs 192.168.56.20 and 192.168.56.21.
I will show you how to set up one of the virtual machines.

Select the virtual machine whose network settings you need to configure and click on the "Settings" icon, then click on "Network"; you will get a UI as follows.


There, tick "Enable Network Adapter", select "Host-only Adapter" in the "Attached to" dropdown, and select the host-only adapter that we configured earlier (vboxnet0). Click on the next tab to configure NAT, as follows.


Now you have configured both the host-only and NAT adapters, so start your virtual machine.
But still, if you run "ifconfig" inside the virtual machine, you will not see any 192.168.x.x IP assigned. You need to do one more thing, as follows.

Open the /etc/network/interfaces file and add the following.

     #---------------------------------------
     auto lo
     iface lo inet loopback

     # Host-only interface
     auto eth0
     iface eth0 inet static
         address 192.168.56.20
         netmask 255.255.255.0
         network 192.168.56.0
         broadcast 192.168.56.255

     #-----------------------------------------

Restart your virtual machine and you will see that your 192.168.56.20 interface is up. In the same way, you can configure your 192.168.56.21 virtual machine, and so on.

You can ping from the 192.168.56.21 machine to 192.168.56.20 and vice versa.

    Madhuka UdanthaAdding Configuration file for Python

Configuration files, or 'config files', configure the initial settings of a computer program. They are used by user applications and can be changed as needed. Through them an administrator can control which protected resources an application can access, which versions of assemblies an application will use, and where remote applications and objects are located. It is important to have config files in your applications, so let's look at how to implement a config file in Python.

The 'ConfigParser' module has been renamed to 'configparser' in Python 3, and the 2to3 tool will automatically adapt imports when converting your sources to Python 3. In this post I will be using Python 2. The ConfigParser class implements a basic configuration file parser language which provides a structure similar to what you would find in Microsoft Windows INI files.

    1. We have to create two files: the config file and the Python file that reads it. (Both are located in the same directory for this sample; you can place them in any directory you need.)

    • student.ini
    • configure-reader.py

    2. Add some data to the configuration file

    The configuration file consists of sections, led by a [section] header and followed by name: value entries. Lines beginning with '#' or ';' are ignored and may be used to provide comments. Here are the lines used for the sample configuration file.

    [SectionOne]
    Name: James
    Value: Yes
    Age: 30
    Status: Single
    Single: True


    [SectionTwo]
    FavouriteSport=Football
    [SectionThree]
    FamilyName: Johnson

    [Others]
    Route: 66

    3. Let's try to read this configuration file in Python


    import os
    import ConfigParser

    path = os.path.dirname(os.path.realpath(__file__))
    Config = ConfigParser.ConfigParser()
    Config.read(path + "\\student.ini")
    print Config.sections()
    #==> ['Others', 'SectionThree', 'SectionOne', 'SectionTwo']

    4. Let's make the code more standard by wrapping the lookup in a function.


    import os
    import ConfigParser

    path = os.path.dirname(os.path.realpath(__file__))
    Config = ConfigParser.ConfigParser()
    Config.read(path + "\\student.ini")


    def ConfigSectionMap(section):
        dict1 = {}
        options = Config.options(section)
        for option in options:
            try:
                dict1[option] = Config.get(section, option)
                if dict1[option] == -1:
                    DebugPrint("skip: %s" % option)
            except:
                print("exception on %s!" % option)
                dict1[option] = None
        return dict1

    Name = ConfigSectionMap("SectionOne")['name']
    Age = ConfigSectionMap("SectionOne")['age']
    Sport = ConfigSectionMap("SectionTwo")['favouritesport']
    print "Hello %s. You are %s years old. %s is your favourite sport." % (Name, Age, Sport)

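    ConfigParser can also convert values for you. As a small illustrative sketch (reusing the same student.ini assumed above), the typed getters getint() and getboolean() avoid manual casting:

    import os
    import ConfigParser

    path = os.path.dirname(os.path.realpath(__file__))
    config = ConfigParser.ConfigParser()
    config.read(os.path.join(path, "student.ini"))

    # Typed getters parse the raw option strings for you.
    age = config.getint("SectionOne", "Age")            # 30 as an int
    single = config.getboolean("SectionOne", "Single")  # True as a bool

    print "In five years %s will be %d" % (config.get("SectionOne", "Name"), age + 5)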


    Now it is your turn; play around with it some more.

    Lahiru SandaruwanAutoscaling in Apache Stratos: Part II

    Elasticity fine tuning parameters to tune a PaaS built using Apache Stratos 4.0


    This is a continuation of the first blog post on Apache Stratos autoscaling. It gives a brief explanation of how we can tune the elasticity parameters in Apache Stratos. At the end I've copied the Drools file used by the Autoscaler in the Apache Stratos 4.0.0 release, which is the latest at the moment (imports removed to shorten it).

    Terms and their usage

    What is a partition?

    A portion of an IaaS, which can be defined by any separation that the IaaS provides.

    E.g. Zone, Rack, etc.

    What are partition groups / network partitions?
    Partition groups are also known as "network partitions". As the name implies, a network partition is an area of an IaaS that is bound by one network of that IaaS. We can include several partitions inside a network partition.

    E.g. Region

    How are the partitions used by Stratos?

    We can define partitions in Stratos to manage the instance count; you can define a minimum and maximum for each partition. We can choose either the 'round-robin' or the 'one-after-another' algorithm for scaling across several partitions.

    What is a deployment policy?

    The deployment policy is where you define the layout of your PaaS. There, you can organize your partitions and partition groups (network partitions) as you want.

    What is an Autoscaling policy?

    A policy in which the user defines thresholds for memory consumption, load average, and requests in flight. The Autoscaler uses these thresholds to make scaling decisions.

    Implementation Details 

    The basic logic was described in part I of this series. Here I go into the implementation logic in detail. If you want a better understanding of the complete architecture and of network partitions, go through Lakmal's blog.

    We run the scaling rule against a network partition (a partition group defined in the deployment policy) of the cluster.

    All the global attributes are set on the Drools session before firing it (i.e. the global values must be passed in before the rule runs). Since we have the network partition context object, we have the cluster-wide average, gradient, and second derivative, relevant to the network partition of the particular cluster, for each of the following:

    • Requests in flight
    • Memory consumption
    • Load average
    We make the scale-up decision based on all three metrics mentioned above.

    We also receive member-wise load average and memory consumption values. We use those values when we decide to scale down, to find the member with the least load at that moment; that member is then terminated.

    We also need to know whether the above parameters have been updated, since there is no need to run the rule twice for the same set of stats. Therefore we have flags called 'rifReset', 'mcReset', and 'laReset' as global values, representing requests in flight, memory consumption, and load average respectively.

    The autoscale policy object (which holds information from the deployed autoscale policy JSON) carries the thresholds we require to make scale up/down decisions, as explained in the terms section above.

    Tuning parameters


    Scale-up and scale-down factors (0.8 and 0.1)

    We use these values, together with the threshold given by the user for the particular cluster, to decide whether we need to scale up or down.

    E.g. here we multiply the threshold by the scale-up factor to make the scale-up decision:

    scaleUp : Boolean() from ((rifReset && (rifPredictedValue > rifAverageLimit * 0.8)) || (mcReset && (mcPredictedValue > mcAverageLimit * 0.8)) || (laReset && (laPredictedValue > laAverageLimit * 0.8)))

    Users can adjust these parameters according to their requirements; i.e. if a user wants to scale up when 70% of the threshold is reached, s/he should set the scale-up factor to 0.7.

    "overallLoad" formula

    Here we give a weight of 50% to both CPU and memory consumption when calculating the overall load. Users can adjust this weighting in the rules file, in the following formula:

    overallLoad = (predictedCpu + predictedMemoryConsumption) / 2;

    Predicted value for next minute

    We pass 1 by default to the prediction formula, which means the result is a prediction one minute ahead. If a user wants a different prediction interval, s/he should pass the desired number of minutes.

    E.g.
    getPredictedValueForNextMinute(loadAverageAverage, loadAverageGradient, loadAverageSecondDerivative, 1)
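    To make these tuning parameters concrete, here is a minimal Python sketch of the decision logic described above. The exact prediction formula lives in RuleTasksDelegator; the sketch assumes a simple second-order extrapolation (average + gradient*t + 0.5*secondDerivative*t*t), and the 0.8 / 0.1 factors and the 50/50 weighting are the defaults discussed in this post:

    def predict_next(average, gradient, second_derivative, t=1):
        # Assumed second-order extrapolation; t is the prediction interval in minutes.
        return average + gradient * t + 0.5 * second_derivative * t * t

    def metric_decision(predicted, threshold, up_factor=0.8, down_factor=0.1):
        # Per-metric check. In the actual rule, scale-up fires if ANY metric is above
        # its limit, while scale-down requires ALL metrics to be below theirs.
        if predicted > threshold * up_factor:
            return 'up'
        if predicted < threshold * down_factor:
            return 'down'
        return 'none'

    def overall_load(predicted_cpu, predicted_memory):
        # Equal 50/50 weighting of CPU and memory, as in the rules file.
        return (predicted_cpu + predicted_memory) / 2.0

    # Example: requests in flight averaging 40, rising by 5 per minute, limit 60
    print(metric_decision(predict_next(40, 5, 0), 60))  # 'none' (45 < 60 * 0.8)
    print(overall_load(45.0, 30.0))                     # 37.5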

    Scale Down Requests Count

    The scale-down requests count is the number of turns we wait, after the first scale-down request is made, before actually taking the scale-down action for the cluster.

    $networkPartitionContext.getScaleDownRequestsCount() > 5


    [1] http://www.slideshare.net/hettiarachchigls1/autoscaler-architecture-of-apache-stratos-400
    [2] http://www.youtube.com/watch?v=DyWtCXT8Vqk
    [3] http://www.sc.ehu.es/ccwbayes/isg/administrator/components/com_jresearch/files/publications/autoscaling.pdf

    global org.apache.stratos.autoscaler.rule.RuleLog log;
    global org.apache.stratos.autoscaler.rule.RuleTasksDelegator $delegator;
    global org.apache.stratos.autoscaler.policy.model.AutoscalePolicy autoscalePolicy;
    global java.lang.String clusterId;
    global java.lang.String lbRef;
    global java.lang.Boolean rifReset;
    global java.lang.Boolean mcReset;
    global java.lang.Boolean laReset;

    rule "Scaling Rule"
    dialect "mvel"
    when
            $networkPartitionContext : NetworkPartitionContext ()

            $loadThresholds : LoadThresholds() from  autoscalePolicy.getLoadThresholds()
       algorithmName : String() from $networkPartitionContext.getPartitionAlgorithm();
            autoscaleAlgorithm : AutoscaleAlgorithm() from  $delegator.getAutoscaleAlgorithm(algorithmName)

            eval(log.debug("Running scale up rule: [network-partition] " + $networkPartitionContext.getId() + " [cluster] " + clusterId))
            eval(log.debug("[scaling] [network-partition] " + $networkPartitionContext.getId() + " [cluster] " + clusterId + " Algorithm name: " + algorithmName))
            eval(log.debug("[scaling] [network-partition] " + $networkPartitionContext.getId() + " [cluster] " + clusterId + " Algorithm: " + autoscaleAlgorithm))


            rifAverage : Float() from  $networkPartitionContext.getAverageRequestsInFlight()
            rifGradient : Float() from  $networkPartitionContext.getRequestsInFlightGradient()
            rifSecondDerivative : Float() from  $networkPartitionContext.getRequestsInFlightSecondDerivative()
            rifAverageLimit : Float() from  $loadThresholds.getRequestsInFlight().getAverage()
       rifPredictedValue : Double() from $delegator.getPredictedValueForNextMinute(rifAverage, rifGradient, rifSecondDerivative, 1)

            memoryConsumptionAverage : Float() from  $networkPartitionContext.getAverageMemoryConsumption()
            memoryConsumptionGradient : Float() from  $networkPartitionContext.getMemoryConsumptionGradient()
            memoryConsumptionSecondDerivative : Float() from  $networkPartitionContext.getMemoryConsumptionSecondDerivative()
            mcAverageLimit : Float() from  $loadThresholds.getMemoryConsumption().getAverage()
       mcPredictedValue : Double() from $delegator.getPredictedValueForNextMinute(memoryConsumptionAverage, memoryConsumptionGradient, memoryConsumptionSecondDerivative, 1)

            loadAverageAverage : Float() from  $networkPartitionContext.getAverageLoadAverage()
            loadAverageGradient : Float() from  $networkPartitionContext.getLoadAverageGradient()
            loadAverageSecondDerivative : Float() from  $networkPartitionContext.getLoadAverageSecondDerivative()
            laAverageLimit : Float() from  $loadThresholds.getLoadAverage().getAverage()
       laPredictedValue : Double() from $delegator.getPredictedValueForNextMinute(loadAverageAverage, loadAverageGradient, loadAverageSecondDerivative, 1)


            scaleUp : Boolean() from ((rifReset && (rifPredictedValue > rifAverageLimit * 0.8)) || (mcReset && (mcPredictedValue > mcAverageLimit * 0.8)) || (laReset && (laPredictedValue > laAverageLimit * 0.8)))
            scaleDown : Boolean() from ((rifReset && (rifPredictedValue < rifAverageLimit * 0.1)) && (mcReset && (mcPredictedValue < mcAverageLimit * 0.1)) && (laReset && (laPredictedValue < laAverageLimit * 0.1)))

            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " RIF predicted value: " + rifPredictedValue))
            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " RIF average limit: " + rifAverageLimit))

            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " MC predicted value: " + mcPredictedValue))
            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " MC average limit: " + mcAverageLimit))

            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " LA predicted value: " + laPredictedValue))
            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " LA Average limit: " + laAverageLimit))

            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " Scale-up action: " + scaleUp))
            eval(log.debug("[scaling] " + " [cluster] " + clusterId + " Scale-down action: " + scaleDown))

    then
            if(scaleUp){

                $networkPartitionContext.resetScaleDownRequestsCount();
                Partition partition =  autoscaleAlgorithm.getNextScaleUpPartition($networkPartitionContext, clusterId);
                if(partition != null){
                    log.info("[scale-up] Partition available, hence trying to spawn an instance to scale up!" );
                    log.debug("[scale-up] " + " [partition] " + partition.getId() + " [cluster] " + clusterId );
                    $delegator.delegateSpawn($networkPartitionContext.getPartitionCtxt(partition.getId()), clusterId, lbRef);
                }
            } else if(scaleDown){

                log.debug("[scale-down] Decided to Scale down [cluster] " + clusterId);
                if($networkPartitionContext.getScaleDownRequestsCount() > 5 ){
                    log.debug("[scale-down] Reached scale down requests threshold [cluster] " + clusterId + " Count " + $networkPartitionContext.getScaleDownRequestsCount());
                    $networkPartitionContext.resetScaleDownRequestsCount();
                    MemberStatsContext selectedMemberStatsContext = null;
                    double lowestOverallLoad = 0.0;
                    boolean foundAValue = false;
                    Partition partition =  autoscaleAlgorithm.getNextScaleDownPartition($networkPartitionContext, clusterId);
                    if(partition != null){
                        log.info("[scale-down] Partition available to scale down ");
                        log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] " + clusterId);
                        partitionContext = $networkPartitionContext.getPartitionCtxt(partition.getId());

                        for(MemberStatsContext memberStatsContext: partitionContext.getMemberStatsContexts().values()){

                            LoadAverage loadAverage = memberStatsContext.getLoadAverage();
                            log.debug("[scale-down] " + " [cluster] "
                                + clusterId + " [member] " + memberStatsContext.getMemberId() + " Load average: " + loadAverage);

                            MemoryConsumption memoryConsumption = memberStatsContext.getMemoryConsumption();
                            log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                                + clusterId + " [member] " + memberStatsContext.getMemberId() + " Memory consumption: " + memoryConsumption);

                            double predictedCpu = $delegator.getPredictedValueForNextMinute(loadAverage.getAverage(),loadAverage.getGradient(),loadAverage.getSecondDerivative(), 1);
                            log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                                + clusterId + " [member] " + memberStatsContext.getMemberId() + " Predicted CPU: " + predictedCpu);

                            double predictedMemoryConsumption = $delegator.getPredictedValueForNextMinute(memoryConsumption.getAverage(),memoryConsumption.getGradient(),memoryConsumption.getSecondDerivative(), 1);
                            log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                                + clusterId + " [member] " + memberStatsContext.getMemberId() + " Predicted memory consumption: " + predictedMemoryConsumption);

                            double overallLoad = (predictedCpu + predictedMemoryConsumption) / 2;
                            log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                                + clusterId + " [member] " + memberStatsContext.getMemberId() + " Overall load: " + overallLoad);

                            if(!foundAValue){
                                foundAValue = true;
                                selectedMemberStatsContext = memberStatsContext;
                                lowestOverallLoad = overallLoad;
                            } else if(overallLoad < lowestOverallLoad){
                                selectedMemberStatsContext = memberStatsContext;
                                lowestOverallLoad = overallLoad;
                            }
                        }
                        if(selectedMemberStatsContext != null) {
                            log.info("[scale-down] Trying to terminating an instace to scale down!" );
                            log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                                + clusterId + " Member with lowest overall load: " + selectedMemberStatsContext.getMemberId());

                            $delegator.delegateTerminate(partitionContext, selectedMemberStatsContext.getMemberId());
                        }
                    }
                } else{
                     log.debug("[scale-down] Not reached scale down requests threshold. " + clusterId + " Count " + $networkPartitionContext.getScaleDownRequestsCount());
                     $networkPartitionContext.increaseScaleDownRequestsCount();

                 }
            }  else{
                log.debug("[scaling] No decision made to either scale up or scale down ... ");

            }

    end

    Sohani Weerasinghe

    Setting up an SVN server on Ubuntu

    This post describes how to set up an SVN server on Ubuntu.
    • Install Subversion
    $ sudo apt-get install subversion libapache2-svn apache2

    • Configure Subversion
    Create a directory where you want to keep your Subversion repositories:
    $ sudo mkdir /subversion
    $ sudo chown -R www-data:www-data /subversion/

    • Open the Subversion config file dav_svn.conf (in /etc/apache2/mods-available) and add the following lines at the end, as shown below. Comment out or delete all the other lines in the config file:
    #</Location>
    <Location /subversion>
    DAV svn
    SVNParentPath /subversion
    AuthType Basic
    AuthName "Subversion Repository"
    AuthUserFile /etc/apache2/dav_svn.passwd
    Require valid-user
    </Location>
    • Now create an SVN user using the following command. Here I create a new SVN user called sohani with the password msc12345:

    $ sudo htpasswd -cm /etc/apache2/dav_svn.passwd sohani
    New password: 
    Re-type new password: 
    Adding password for user sohani

    Use the -cm parameters for the first time only; you can create additional users with just the -m parameter.
    • Create a Subversion repository called sohani.repo under the /subversion directory:

    $ cd /subversion/
    $ sudo svnadmin create sohani.repo
    Restart the Apache service:

    $ sudo service apache2 reload
    • Open the browser and navigate to http://ip-address/subversion/sohani.repo. Enter the SVN username and password which you created in the earlier step. In my case, the username is sohani and the password is msc12345.


    Ushani BalasooriyaSimple explanation of the URL configurations done when setting up IS as Key Manager in WSO2 API Manager

    When you configure IS as the Key Manager, this document can be referred to. Below are explanations for some of the URLs given in that configuration.

    On the WSO2 IS side:


    Make the following changes in the api-manager.xml file you just copied.

    • Change the <RevokeAPIURL> so that it points to the API Manager server. Note that if API Manager is running in distributed mode (has a separate node for the Gateway), you need to point this URL to the Gateway node. This is done so that when the token is revoked, the Gateway cache is updated as well. The port value you enter here must be the NIO port. See Default Ports of WSO2 Products for more information.
     <RevokeAPIURL>https://${GATEWAY_SERVER_HOST}:{nio/passthrough port}/revoke</RevokeAPIURL>

    Why do we point to API Manager? Because this URL calls the Gateway node of the API Manager, which in turn points back to the Key Manager.

    Why do we point to the NIO/PassThrough port? Because this calls the _RevokeAPI_.xml deployed in the synapse folder of the Gateway. (In a distributed scenario, we point to the _RevokeAPI_.xml that resides in the Gateway worker node.)

    Note: In a distributed APIM scenario, if you call Revoke or Regenerate in the Store, the Store calls the Key Manager/Validator, and the Key Manager/Validator calls the Gateway.

    So in the Store we have to configure the Key Validator/Key Manager server URL, and in the Key Manager/Validator we have to configure the Gateway server URL (PassThrough port). Inside the Gateway, the APIs themselves are invoked over the NIO port.


    • Change the <ServerURL> occurring under the <APIGateway> (of the Key Manager/Key Validator node) section so that it points to the API Manager server. If you are using distributed mode, this needs to point to the Gateway node as well. This is done so that when the token is regenerated, the Gateway cache is updated as well. The port value you enter here must be the management transport port.

    <ServerURL>https://${GATEWAY_SERVER_HOST}:{port}/services/</ServerURL>

    Why do we point to API Manager? Because this URL calls the Gateway node of the API Manager; it is used to identify the Gateway node.

    Why do we point to the servlet port? Because this URL calls the admin services.

    Note: This is the same as explained above. Since IS is the Key Validator/Manager, it calls the Gateway.


    On the WSO2 APIM side:

    Open the api-manager.xml file found in the <APIM_HOME>/repository/conf directory and change the following. 

    • Change the ServerURL of the AuthManager to point to IS.
      <ServerURL>https://${IS_SERVER_HOST}:{port}/services/</ServerURL>

    Why IS? Because authentication is done via the Key Manager/Validator.

    Why do we point to the servlet port?
    Because this URL calls the admin services.

    • Change the ServerURL of the APIKeyManager to point to IS.
        <ServerURL>https://${IS_SERVER_HOST}:{port}/services/</ServerURL>

    Why IS? Because authentication is done via the Key Manager/Validator.
    Why do we point to the servlet port? Because this URL calls the admin services.

    Usage of ports in a gateway cluster when fronted by Nginx:

    GW manager: used to publish APIs, so only the servlet ports are used.
    GW worker: used when invoking APIs, so only the PassThrough/NIO ports are used.

    Note: When you configure the API endpoint files in the Gateway (in the synapse folder, e.g. _AuthorizeAPI_.xml, _RevokeAPI_.xml, _TokenAPI_.xml), you should edit them only in the Gateway manager, since DepSync will synchronize them to the workers. Otherwise it can cause issues.

    sanjeewa malalgodaHow to add custom message to WSO2 API Publisher - Show custom message based on user input or selections

    In API Manager we can add sub themes and change the look and feel of the Jaggery applications.
    Please follow the instructions below; here I have listed the steps to add a new warning message when API visibility is changed.

    (1) Navigate to "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes" directory.
    (2) Create a directory with the name of your sub theme. For example "new-theme".
    (3) Copy /repository/deployment/server/jaggeryapps/publisher/site/themes/default/templates/item-design/template.jag to the new subtheme location "/repository/deployment/server/jaggeryapps/publisher/site/themes/default/subthemes/new-theme/templates/item-design/template.jag".

    (4) Open the template.jag file in the sub theme directory we created and find the following code block.

            $('#visibility').change(function(){
                var visibility = $('#visibility').find(":selected").val();
                if (visibility == "public" || visibility == "private" || visibility == "controlled"){
                    $('#rolesDiv').hide();
                } else{
                    $('#rolesDiv').show();
                }
            });
    Then change it as follows. You can change the message if needed.

            $('#visibility').change(function(){
                var visibility = $('#visibility').find(":selected").val();

                if (visibility != "public"){
                jagg.message({content:"You have changed visibility of API, so based on visibility subscription availability may change",type:"warn"});
                }

                if (visibility == "public" || visibility == "private" || visibility == "controlled"){
                    $('#rolesDiv').hide();
                } else{
                    $('#rolesDiv').show();
                }
            });



    (5) Edit the "/repository/deployment/server/jaggeryapps/publisher/site/conf/site.json" file as below in order to make the new sub theme the default theme.
            "theme" : {
                   "base" : "fancy",
                    "subtheme" : "new-theme"
            }



    Here I have listed generic instructions for making this change using sub themes.

    Evanthika AmarasiriReason for 'javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?' while working with WSO2 products

    While configuring WSO2 API Manager to work with WSO2 IS as the Auth Manager, I noticed the following exception when trying to log in to the API Store/API Publisher.


    javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
        at com.sun.net.ssl.internal.ssl.InputRecord.handleUnknownRecord(Unknown Source)
        at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
        at sun.net.www.protocol.https.HttpsClient.afterConnect(Unknown Source)
        at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(Unknown Source)
        at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(Unknown Source)

    The reason for this was that we had configured an HTTP port in an HTTPS URL.

     E.g.:-
    <authmanager>
            <!--
                Server URL of the Authentication service
            -->
            <serverurl>https://localhost:9763/services/</serverurl>
            <!--
                Admin username for the Authentication manager.
            -->
            <username>admin</username>
            <!--
                Admin password for the Authentication manager.
            -->
            <password>admin</password>
        </authmanager>
       

    Ajith VitharanaH2 database in WSO2 products.

    H2 is open source and free to distribute; therefore WSO2 has selected H2 as the default embedded database. In the default distribution, all WSO2 products use embedded H2 databases to store application data.

    E.g. WSO2 Identity Server uses WSO2CARBON_DB to store registry, user management and identity management related data (service providers, SSO configurations, tokens, etc.). WSO2 API Manager uses WSO2AM_DB to store API related data.

    You can browse those databases; please look at this post: http://www.vitharana.org/2012/04/how-to-browse-h2-database-of-wso2.html

    The H2 database can run in different modes including embedded mode, server mode and mixed mode. http://www.h2database.com/html/features.html#connection_modes

    The embedded database can become corrupted for various reasons, including failed file locking under concurrent access (http://www.h2database.com/html/features.html#database_file_locking), killing the Java process to shut down the server, low disk space, etc.

    WSO2 highly recommends using a standard, production-ready database such as MySQL, Oracle or PostgreSQL for production/dev/QA deployments rather than the embedded H2 database(s).

    The registry database has three logical partitions: local, config and governance. You should use the default embedded H2 database only for the local registry.

    The local registry stores mount configurations, so if it is corrupted we can simply delete the default embedded H2 database and restart the server with the -Dsetup parameter. A new database will then be created and the mount configurations will be populated automatically.

    John MathonSometimes we have to understand what we don’t know


    You will be hard pressed to find someone more enthusiastic than I am about the potential of science and how it could benefit us. However, there are several things I think everyone should be aware of in spite of our amazing advances.

    Our knowledge is recent in most fields. Something I say is that "man has really been AWARE for about 100 years." What I mean by this is that 100 or so years ago we looked into a human body, saw gross organs and had no idea what they did or how they worked. A hundred years ago we had only just learned that there is a speed limit in the universe and that bizarre things like space and time can be warped, and we had no inkling of quantum mechanics. We sailed the seas and crossed the lands of the earth but had no idea of the environment or our potential impact on it.

    Today we have advanced to such a great extent that it seems a lot of people, including scientists, have become arrogant about our knowledge. I find that bizarre, because the constant discoveries should, if anything, remind us of how little we actually know.


    We recently mapped the human genetic code, completing the Human Genome Project after years and years of work. Now we can sequence a human's genome in minutes, a leap of thousands of times in performance. In the laboratory we have modified genes, found ways to manipulate them and contemplated modifying the human genome as well as those of animals and plants. We have even run trials on inserting genes into humans. In China recently they tried modifying a human stem cell with modified genes to engineer a fix for a genetic disorder in a baby.

    Genetic Understanding is limited and recent

    As a counterbalance to these seeming advances should be an enormous awareness of how little we know. Only a few years ago we discovered that there was an entirely separate code in the genome, in what we thought was "junk" DNA, called the epigenetic code. It turns out the epigenetic code actually controls the operation of the genes. We didn't even seem to realize this was missing. We also found a 5th base pair enzyme that is used for some of that coding.

    Recently we discovered a 6th base pair enzyme that is used in the epigenetic code of at least some plants and maybe in humans. We learned fairly recently that the nucleus has a place for storing new genetic information learned while a plant, animal or human is still alive, and potentially, in some cases, for transferring this to subsequent offspring. We thought this was categorically impossible for the last 100 years.

    When we are learning such basic things about the way genes work it should make you wonder about the claims made by scientists who say they “know” this or that.


    We really have no serious understanding of the epigenetic code operation as a whole.  We don’t understand today how a single cell proceeds to become a human being and how the genes accomplish this.  We have some ideas, we have some strong suspicions.  However, what this is meant to illustrate is that from the standpoint of saying we actually understand the genetic code of any plant or animal we are still a long way away from really fully grokking how this “life thing” works.

    What would knowing look like?

    If we knew how this stuff works it would be like building an airplane for example or building a chip in a CAD machine.   You would say I need an airplane that goes 450mph, flies to 50,000 feet, holds so many passengers and I would be able to design you that plane using tools we have.  I would have a pretty good idea of just about everything about the plane before I even built the first one.   If you ask me to build a computer circuit to solve a certain problem I can use CAD tools to lay out and solve the problem.  I would know how to manufacture it.    If I said I want a gene that does X or I want to modify a gene to do Y we are really clueless.

    When we are doing things with genetics we are clueless about so much that it's almost hard to know where to start. We understand that living things build proteins from the genes, and we have an idea how the process of constructing a protein works, from reading out the DNA nucleotide sequence to the creation of the protein. For the vast majority of genes we don't know what they do. For those where we have an idea of what they impact, we usually have no idea how they do what they do. We have no idea when these machines get created on the whole or when they get turned off. Vast sections of the DNA appear to be off and unused.

    What we have been doing for the last 20 years is building the tools to even start to experiment with DNA. Our technique is essentially pattern matching. We're hoping that vast amounts of data will help us decipher which genes and which epigenetic coding are associated with which characteristics, problems, advantages or purposes. Our level of familiarity is about the same whether the subject is a plant, an animal or a human. Once we start to make all those correlations we can study the proteins themselves and start to understand their makeup and function.

    Proteins appear to be made of common components.  Sections of DNA repeat among different genes pointing to a subcomponent like architecture where the protein machines are made of modules like a motorbike has wheels, an engine, brakes.   A protein machine probably has components that can detect things, other things that can manipulate things at the atomic level.  Some proteins are transportation devices that carry things along fibers in the skeletal structure of the cell.  Some machines can carry larger things, some smaller things.  Some can ferry things to the border of the cell and some inside the nucleus.

    How do these proteins do these things? Convert one chemical into another? Help separate one material from another? What do all these proteins do? We are just at the very early stages of trying to compile this information. At the pace we are going, possibly in 10 or 20 years we will actually know a lot more about how all this works. Maybe then we can start introducing gene modifications into the environment and into plants, animals and humans.

    Is there a compelling need?

    Do you understand what I am saying? I am proud of our achievements. I am not in any way diminishing anything we have learned. I am simply saying it is healthy to understand how weak our understanding is, because with respect to genetics the possibility of causing damage to an individual or to large numbers of plants, animals or humans is large: we really have no idea how all this works or what the longer-term consequences for other plants, animals or humans will be.


    I strongly suggest that we do NOT proceed down the road to use GMO crops.  I say this knowing they have studied this, that the changes they are doing are limited, that they have tested it to some extent.  I am hoping we are lucky and that the GMO stuff we have done so far is truly limited in terms of its impact but we face a decision.

    We frequently test new drugs for 10 years even when the impact will be on only a few humans. We carefully control those in the trials and we monitor everyone after the drug is released for years, because even after trials we sometimes find problems. We tried our first genetic insertions via virus about 10 years ago, with catastrophic results; some people died. Before we modify the genetic code of the plants and animals we eat, we should have a pretty robust understanding of the longer-term and broader impact of what we are doing, and of how to reverse it should we find out we need to. This should take at least a 10-year study in a controlled environment.

    We have to be aware that the nature of plants, animals and humans is sexual reproduction, which mixes genes across widely dispersed areas. There is no way to guarantee that any change we make will not produce consequences we did not anticipate. We cannot be sure that the genes we change will not find their way into things we had no clue about before.


    There are several established organizations around the research and ethics of human genome research. We are extremely cautious at this point about doing anything with humans genetically. When we've done animal experiments it has been very controlled, in the laboratory. All I'm saying is that the same rules should be applied to the plant genome for now. We should apply the same rules across plants, animals and humans.

    My feeling is that it will take us 10 or 20 years to gain enough understanding to be able to do some of these things with the data and experience to know what the consequences will be.

    I am not aware of any truly compelling reason why we need GMO crops. I understand they help a lot, sometimes tremendously, and sometimes they may seem essential, but the fact is we have seen dramatic drops in poverty and dramatic gains in food supply without GMO crops.

    I am not saying we should permanently stop ourselves from doing this. I am not at all saying we should stop researching it. I am not saying it is impossible. I am simply saying we need to apply the same rules across humans, animals and plants. It is unlikely we will be ready to do this on plants in general in the outside world for another decade.

    I am not sure where the bar is, but just a few of the points I made earlier clearly show that our knowledge of this is too nascent to set out on this road at this time. This seems like common sense to me.


    I frequently hear that scientists are unified on this, that it is safe to use GMO products and to create GMO plants. I find that puzzling, and I am a scientist. Yes, a computer scientist, but I have studied physics and read extensively about science in general. I graduated from MIT, so I'm not an idiot about science.

    We are too arrogant

    I am frequently stunned by how much humans tend to elevate themselves above others. It's true among sub-groups of humans who like to portray themselves as elevated compared to others, sometimes based on national or racial identity, on how normal or intelligent they are, or whatever. We greatly overestimate our superiority to other creatures. We are an egotistical species.

    Human beings have existed at most a few hundred thousand years beyond the monkeys. 99% of our genetic coding is virtually identical to that of some monkeys. There has not been enough time for a significant "evolution" of humans above other species.

    We also have to consider that we live on this planet together with animals and plants. Even if we are superior to them by some measures, we are still codependent, and not recognizing that is just stupid and evidence of how we aren't superior at all. We cannot treat the genetic material of plants, animals and humans differently on the grounds that plants are inferior, that animals are inferior to humans, or that they are less valuable or more expendable. These are short-sighted and arrogant attitudes that are dangerous.

    We have to consider that whatever we do in this area may have far-reaching effects we didn't contemplate and that we will not be able to reverse. We certainly have no idea what to do if something goes wrong. Our inability to understand means we also have no understanding of how to correct problems that might arise, including how to eliminate the spread of genes that turn out to be a problem.

    Can I say it any clearer?  I don’t believe we are ready to start mass use of gene modifications in plants, animals or humans on this planet yet.

    I realize there is a lot of money at stake in GMO crops and GMO research.

    There are other areas I’ve seen where humans have gotten arrogant about our knowledge saying we know more than we do, thinking we understand the consequences of things when we don’t or how things would react.  It can be relatively harmless if applied to an area where it won’t cause permanent impact or is purely a monetary thing.

    Unfortunately, modifying the genetic code is an area where we should be acutely aware of how little we know.


    Hasitha Aravinda[SOAPUI] Generating a unique property per test case and referring to it in multiple requests.


    1) Create a SoapUI project and create a test case.
    2) Create a Groovy Script test step and enter the following code snippet. This should be the first test step of the test case.

    Note: this will create a property called id in the test case scope, so it won't get altered in multi-threaded tests.

    Groovy Script

    def key = java.util.UUID.randomUUID().toString()
    context.getTestCase().getProperty( "id" ).setValue(key);



    3) You can then access the property id from any request using the following inline property expansion.

    ${=context.getTestCase().getPropertyValue( "id" )}

    Example :






    sanjeewa malalgodaUpdate key stores in WSO2 API Manager and Identity Server (production recommendation)


    In this post I will discuss how we can generate keystores and add them to WSO2 products before they are deployed in production.

    Most organizations have their own .crt files and keys, so let's see how we can use them to create the keystores required by WSO2 servers and deploy them.

    Step 1: Follow the steps below to create the keystore.

    openssl pkcs12 -export -in /etc/pki/tls/certs/sanjeewa-com.crt -inkey /etc/pki/tls/private/private-key.key -name "wso2sanjeewa" -certfile /etc/pki/tls/certs/DigiCertCA.crt -out wso2sanjeewa.pfx
    Enter Export Password:
    Verifying - Enter Export Password:

    /usr/java/default/bin/keytool -importkeystore -srckeystore wso2sanjeewa.pfx -srcstoretype pkcs12 -destkeystore wso2sanjeewa.jks -deststoretype JKS
    Enter destination keystore password: 
    Re-enter new password:
    Enter source keystore password: 
    Entry for alias wso2sanjeewa successfully imported.
    Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

     /usr/java/default/bin/keytool -export -alias wso2sanjeewa -keystore wso2sanjeewa.jks -file wso2sanjeewa.pem
    Enter keystore password: 
    Certificate stored in file

    /usr/java/default/bin/keytool -import -alias wso2sanjeewa -file /opt/o2/wso2sanjeewa.pem -keystore client-truststore.jks -storepass wso2carbon
    Certificate was added to keystore

    Now we have all the files we need to copy.

    Step 2: We need to copy them to /opt/o2/WSO2Servers/wso2am-1.8.0/repository/resources/security
    This file path changes with the product you use (normally it is /repository/resources/security under the product home).
    We need to do this for all products in the deployment.

    Step 3: After that, you need to find all occurrences of wso2carbon.jks (grep -rnw 'wso2carbon.jks') and replace them with the wso2sanjeewa.jks file we generated in the above steps.

    Step 4: Then search for alias (grep -rnw 'alias') and KeyAlias (grep -rnw 'KeyAlias') in all files in the WSO2 server. Wherever the value is wso2carbon, replace it with wso2sanjeewa.

    We need to follow these steps very carefully. Here I have listed the file names and changed parameters for your reference.

    While you carry out steps 3 and 4 for IS, you need to change the following files and lines:

    repository/conf/security/application-authentication.xml:96:         
    <Parameter name="TrustStorePath">/repository/resources/security/wso2sanjeewa.jks</Parameter>

    repository/conf/identity.xml:244:              
    <Location>${carbon.home}/repository/resources/security/wso2sanjeewa.jks</Location>

    repository/conf/carbon.xml:302:           
    <Location>${carbon.home}/repository/resources/security/wso2sanjeewa.jks</Location>

    repository/conf/carbon.xml:308:           
    <KeyAlias>wso2sanjeewa</KeyAlias>

    repository/conf/carbon.xml:318:           
    <Location>${carbon.home}/repository/resources/security/wso2sanjeewa.jks</Location>

    repository/conf/carbon.xml:324:          
    <KeyAlias>wso2sanjeewa</KeyAlias>




    While you carry out steps 3 and 4 for APIM, you need to change the following files and lines:

    repository/deployment/server/jaggeryapps/store/site/conf/site.json:14:       
    "identityAlias" : "wso2sanjeewa",

    repository/deployment/server/jaggeryapps/store/site/conf/site.json:16:       
    "keyStoreName" :"/opt/o2/WSO2Servers/wso2am-1.8.0/repository/resources/security/wso2sanjeewa.jks"

    repository/deployment/server/jaggeryapps/publisher/site/conf/site.json:14:       
    "identityAlias" : "wso2sanjeewa",

    repository/deployment/server/jaggeryapps/publisher/site/conf/site.json:16:       
    "keyStoreName" :"/opt/o2/WSO2Servers/wso2am-1.8.0/repository/resources/security/wso2sanjeewa.jks"

    repository/conf/security/secret-conf.properties:21:
    keystore.identity.location=repository/resources/security/wso2sanjeewa.jks

    repository/conf/security/secret-conf.properties:23:
    keystore.identity.alias=wso2sanjeewa

    repository/conf/security/secret-conf.properties:32:
    keystore.trust.alias=wso2sanjeewa

    repository/conf/carbon.xml:313:           
    <Location>${carbon.home}/repository/resources/security/wso2sanjeewa.jks</Location>

    repository/conf/carbon.xml:319:           
    <KeyAlias>wso2sanjeewa</KeyAlias>

    repository/conf/carbon.xml:329:           
    <Location>${carbon.home}/repository/resources/security/wso2sanjeewa.jks</Location>

    repository/conf/carbon.xml:335:           
    <KeyAlias>wso2sanjeewa</KeyAlias>



    Madhuka UdanthaNLTK tutorial–03 (n-gram)

    An n-gram is a contiguous sequence of n items from a given sequence of text or speech. The items can be syllables, letters, words or base pairs according to the application. n-grams may also be called shingles.

    Tokenization

    My first post was mainly on this.

    from nltk.tokenize import RegexpTokenizer

    tokenizer = RegexpTokenizer("[a-zA-Z'`]+")
    # skipping the numbers here, include ' in the tokens
    print tokenizer.tokenize("I am Madhuka Udantha, I'm going to write 2blog posts")
    #==> ['I', 'am', 'Madhuka', 'Udantha', "I'm", 'going', 'to', 'write', 'blog', 'posts']

    Generating N-grams from the tokens


    nltk.util.ngrams(sequence, n, pad_left=False, pad_right=False, pad_symbol=None).



    • sequence –  the source data to be converted into ngrams (sequence or iter)

    • n  – the degree of the ngrams (int)

    • pad_left  – whether the ngrams should be left-padded (bool)

    • pad_right  – whether the ngrams should be right-padded (bool)

    • pad_symbol – the symbol to use for padding (default is None, any)

    from nltk.util import ngrams

    print list(ngrams([1,2,3,4,5], 3))
    print list(ngrams([1,2,3,4,5], 2, pad_right=True))
    print list(ngrams([1,2,3,4,5], 2, pad_right=True, pad_symbol="END"))

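    The examples above use lists of numbers, but the same function works on word tokens. Here is a small sketch (the sentence is just an illustration) that combines the tokenizer from the start of this post with ngrams() to produce word bigrams and trigrams:

    from nltk.tokenize import RegexpTokenizer
    from nltk.util import ngrams

    tokenizer = RegexpTokenizer("[a-zA-Z'`]+")
    tokens = tokenizer.tokenize("I am going to write a blog post about natural language")

    # word bigrams and trigrams
    print list(ngrams(tokens, 2))
    print list(ngrams(tokens, 3))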


    Counting N-gram occurrences


    # build the list of n-grams to count (here, bigrams over a small sample sequence)
    ngrams_list = list(ngrams([1, 2, 3, 4, 5, 1, 2, 3], 2))

    ngrams_statistics = {}

    for ngram in ngrams_list:
        if not ngrams_statistics.has_key(ngram):
            ngrams_statistics.update({ngram: 1})
        else:
            ngram_occurrences = ngrams_statistics[ngram]
            ngrams_statistics.update({ngram: ngram_occurrences + 1})

    Sorting


    # sorts the (ngram, count) pairs by the n-gram key, in descending order
    ngrams_statistics_sorted = sorted(ngrams_statistics.iteritems(), reverse=True)
    print ngrams_statistics_sorted

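    As a side note (not from the original post), the same counting and frequency ordering can be done more compactly with collections.Counter, available from Python 2.7:

    from collections import Counter
    from nltk.util import ngrams
    from nltk.tokenize import RegexpTokenizer

    tokenizer = RegexpTokenizer("[a-zA-Z'`]+")
    tokens = tokenizer.tokenize("to be or not to be that is the question")

    # Counter tallies each bigram; most_common() orders them by frequency
    bigram_counts = Counter(ngrams(tokens, 2))
    print bigram_counts.most_common(3)
    #==> [(('to', 'be'), 2), ...]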

    Madhuka UdanthaNLTK tutorial–02 (Texts as Lists of Words / Frequency words)

    The previous post was basically about installing NLTK, introducing it, and searching text with NLTK's basic functions. This post mainly covers 'Texts as Lists of Words', since a text is nothing more than a sequence of words and punctuation. Frequency distributions are also visited at the end of this post.

    sent1 = ['Today', 'I', 'call', 'James', '.']

    len(sent1) --> 5

    • Concatenation combines the lists together into a single list. We can concatenate sentences to build up a text.
      text1 = sent1 + sent2
    • Index the text to find the word in the index. (indexes start from zero)
      text1[12]
    • We can do the converse; given a word, find the index of when it first occurs
      text1.index('call')
    • Slicing the text(By convention, m:n means elements m…n-1)
      text1[165:198]
      • NOTE
        If accidentally we use an index that is too large, we get an error: 'IndexError: list index out of range'
    • Sorting
      noun_phrase = text5[1:6]
      sorted(noun_phrase)

    NOTE
    Remember that capitalized words appear before lowercase words in sorted lists


    Strings

    A few ways to play with strings in Python. These are very basic but useful to know when you work with NLP.
    name = 'Madhuka'
    name[0] --> 'M'
    name[:5] --> 'Madhu'
    name * 2 --> 'MadhukaMadhuka'
    name + '.' --> 'Madhuka.'

    Splitting and joining
    ' '.join(['NLTK', 'Python']) --> 'NLTK Python'
    'NLTK Python'.split() --> ['NLTK', 'Python']

     

    Frequency Distributions

    A text contains words distributed with different frequencies, and NLTK provides built-in support for counting them. Let's use a FreqDist to find the 50 most frequent words in a text/book.

    Let's check the frequency distribution of 'The Book of Genesis'.

    from nltk.book import *

    fdist1 = FreqDist(text3)
    print(fdist1)
    print fdist1.most_common(50)

    Here is the frequency distribution of text3 ('The Book of Genesis'):




    Long words


    Listing words that are more than 12 characters long. For each word w in the vocabulary V, we check whether len(w) is greater than 12;


    from nltk.book import *

    V = set(text3)
    long_words = [w for w in V if len(w) > 12]
    print sorted(long_words)

    Here are all the words from text3 that are longer than 8 characters and occur more than 10 times (fdist3 below is the FreqDist of text3):


    fdist3 = FreqDist(text3)
    print sorted(w for w in set(text3) if len(w) > 8 and fdist3[w] > 10)



    Collocation


    A collocation is a sequence of words that occur together unusually often. Thus red wine is a collocation, whereas the wine is not. To get a handle on collocations, we start off by extracting from a text a list of word pairs, also known as bigrams. This is easily accomplished with the function bigrams():


    In particular, we want to find bigrams that occur more often than we would expect based on the frequency of the individual words. The collocations() function does this for us


    from nltk.book import *

    phrase = text3[:5]
    print "===Bigrams==="
    print list(bigrams(phrase))
    print "===Collocations==="
    # collocations() prints its result itself, so no extra print is needed
    text3.collocations()

    Here is the output of the sample code:




     



    • fdist = FreqDist(samples)
      create a frequency distribution containing the given samples

    • fdist[sample] += 1
      increment the count for this sample

    • fdist['monstrous']
      count of the number of times a given sample occurred

    • fdist.freq('monstrous')
      frequency of a given sample

    • fdist.N()
      total number of samples

    • fdist.most_common(n)
      the n most common samples and their frequencies

    • for sample in fdist:
      iterate over the samples

    • fdist.max()
      sample with the greatest count

    • fdist.tabulate()
      tabulate the frequency distribution

    • fdist.plot()
      graphical plot of the frequency distribution

    • fdist.plot(cumulative=True)
      cumulative plot of the frequency distribution

    • fdist1 |= fdist2
      update fdist1 with counts from fdist2

    • fdist1 < fdist2
      test if samples in fdist1 occur less frequently than in fdist2
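    To tie a few of these functions together, here is a small runnable sketch that reuses fdist1 = FreqDist(text3) from earlier in this post:

    from nltk.book import *

    fdist1 = FreqDist(text3)

    print fdist1.N()              # total number of samples (tokens) in text3
    print fdist1.max()            # the sample with the greatest count
    print fdist1['God']           # how many times 'God' occurs
    print fdist1.freq('God')      # the same count as a relative frequency
    print fdist1.most_common(10)  # the 10 most common words and their counts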

    Madhuka UdanthaNatural Language Toolkit (NLTK) sample and tutorial - 01

    What is NLTK?

    Natural Language Toolkit (NLTK) is a leading platform for building Python programs to work with human language data (Natural Language Processing). It is accompanied by a book that explains the underlying concepts behind the language processing tasks supported by the toolkit. NLTK is intended to support research and teaching in NLP or closely related areas, including empirical linguistics, cognitive science, artificial intelligence, information retrieval, and machine learning.

    Library contains

    • Lexical analysis: Word and text tokenizer
    • n-gram and collocations
    • Part-of-speech tagger
    • Tree model and Text chunker for capturing
    • Named-entity recognition

    Download and Install

    1. You can download NLTK from here (on Windows).

    2. Once NLTK is installed, start up the Python interpreter to install the data required for the rest of the work.

    import nltk
    nltk.download()



    The full data set consists of about 30 compressed files requiring about 100 MB of disk space. If you have disk space or network constraints, you can pick only the packages you need, as shown in the sketch below.
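    For these tutorials, the downloader can also be driven non-interactively. A minimal sketch, assuming the 'book' collection identifier (the collection used by the NLTK book, which covers the texts loaded below):

    import nltk

    # download only the "book" collection (corpora and models used by the NLTK book)
    nltk.download('book')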


    Once the data is downloaded to your machine, you can load some of it using the Python interpreter.


    from nltk.book import *



    Basic Operations on Text



    from __future__ import division
    from nltk.book import *


    # Enter their names to find out about these texts
    print text3
    # Length of a text from start to finish, in terms of the words and punctuation symbols that appear
    print 'Length of Text: ' + str(len(text3))

    # Text is just the set of tokens
    #print sorted(set(text3))
    print 'Length of Token: ' + str(len(set(text3)))

    # lexical richness of the text
    def lexical_richness(text):
        return len(set(text)) / len(text)

    # percentage of the text taken up by a specific word
    def percentage(word, text):
        return (100 * text.count(word) / len(text))

    print 'Lexical richness of the text: ' + str(lexical_richness(text3))
    print 'Percentage: ' + str(percentage('God', text3))


    Now we will pick 'text3', "The Book of Genesis", to try out NLTK features. The code sample above shows:



    • Name of the Text

    • The length of a text from starting to end

    • Token count of the text. (A token is the technical name for a sequence of characters that we treat as a unit. Here we count the distinct tokens, since in a set all duplicates are collapsed together.)

    • Calculate a measure of the lexical richness of the text (the number of distinct words divided by the total number of words)

    • How often a word occurs in a text (compute what percentage of the text is taken up by a specific word)

    Note
    In Python 2, start the script with 'from __future__ import division' so that '/' performs true (floating-point) division.


    Output of the above code snippet:




    Searching Text



    • count(word) - counts how many times the word occurs in the text

    • concordance(word) - shows every occurrence of a given word, together with some context

    • similar(word) - shows words that appear in contexts similar to the given word

    • common_contexts([words]) - shows contexts that are shared by two or more words

    from nltk.book import *

    # name of the Text
    print text3

    # count the word in the Text
    print "===Count==="
    print text3.count("Adam")

    # concordance() shows every occurrence of a given word, together with some context.
    # Here 'Adam' is searched for in 'The Book of Genesis'.
    print "===Concordance==="
    text3.concordance("Adam")

    # words that appear in contexts similar to the given word
    print "===Similar==="
    text3.similar("Adam")

    # contexts shared by two or more words
    print "===Common Contexts==="
    text3.common_contexts(["Adam", "Noah"])

    Output of the code sample:




    Now I want to plot how certain words are distributed over the text, i.e. where words such as "God", "Adam", "Eve", "Noah", "Abram", "Sarah", "Joseph", "Shem" and "Isaac" appear in the text/book.



    text3.dispersion_plot(["God", "Adam", "Eve", "Noah", "Abram", "Sarah", "Joseph", "Shem", "Isaac"])




    References


    [1] Bird, Steven; Klein, Ewan; Loper, Edward (2009). Natural Language Processing with Python. O'Reilly Media Inc. ISBN 0-596-51649-5.

    Chanaka FernandoWhat happens to a message going through the WSO2 ESB Synapse mediation engine (looking at the source code)


    In the previous blog post [1], I discussed the PassThrough Transport (PTT) and how it works. If you follow the article [2], you can learn what happens inside the PTT when a message is received. The diagram below depicts the relationship between the PTT, Axis2 and the Synapse mediation engine within WSO2 ESB.






    As depicted in the above diagram, when a message comes to the ESB it is received by the reactor thread of the PassThrough Listener and read into an internal buffer. It is then processed by a separate worker thread and flows through the Axis2 In message flow. The Axis2 engine then calls the respective message receiver (SynapseMessageReceiver, ProxyServiceMessageReceiver or SynapseCallbackReceiver) class, which is the main entry point to the Synapse mediation engine. The message then goes through the Synapse mediation flow and is handed over to Axis2FlexibleMEPClient, which hands it back to the Axis2 engine for the out flow. Finally, the message is sent to the backend through the PassThroughSender thread.

This is a high-level description of what happens inside WSO2 ESB. The following section describes what happens within the Synapse mediation engine once a message is received by the MessageReceiver implementation class.

I am using the following sample ESB configuration for this debugging session. It contains all the basic elements of a proxy service definition, which are:

    InSequence
    OutSequence
    FaultSequence
    Endpoint

    <?xml version="1.0" encoding="UTF-8"?>
    <proxy xmlns="http://ws.apache.org/ns/synapse"
           name="DebugProxy"
           transports="https,http"
           statistics="disable"
           trace="disable"
           startOnLoad="true">
       <target>
          <inSequence>
             <log level="full">
                <property name="IN_SEQ" value="Executing In Sequence"/>
             </log>
          </inSequence>
          <outSequence>
             <log level="full">
                <property name="OUT_SEQ" value="Inside the out sequence"/>
             </log>
             <send/>
          </outSequence>
          <faultSequence>
             <log level="full">
                <property name="FAULT_SEQ" value="Inside Fault Sequence"/>
             </log>
          </faultSequence>
          <endpoint>
             <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
          </endpoint>
       </target>
       <description/>
    </proxy>


Here I have used the log mediator with level "full" so that the message is converted into its canonical format (i.e. the message is built) within the ESB.

The description below provides information on the classes and methods executed when a message is received from a client by the ESB. This is really helpful when you are debugging a particular issue related to the ESB, because you can start debugging at the desired level without starting from the beginning.


    Request Path
    AxisEngine.receive()
    ProxyServiceMessageReceiver.receive()
    ————————————
    Get the Fault Sequence and push in to fault handler stack

    ————————————
    Execute the inSequence

    SequenceMediator.mediate()
    AbstractListMediator.mediate()
    RelayUtils.buildMessage()
    DeferredMessageBuilder.getDocument()
    SOAPBuilder.processDocument()
    LogMediator.mediate()
    ————————————
    If the inSequence execution returns true, execute the endpoint

    AddressEndpoint.send()
    AbstractEndpoint.send()
If the message needs to be built at this level, call the RelayUtils.buildMessage() method
    Axis2SynapseEnvironment.send()
Axis2Sender.sendOn()
    Axis2FlexibleMEPClient.send()
    Create OperationClient and register the callback to process the response (if any)
    OperationClient.execute()
    OutInAxisOperation.executeImpl()
    Add the registered callback to the callback store with the messageID
    AxisEngine.send()
    PassThroughHttpSender.invoke()
    ————————————







When a response is received from the back end, it is captured by the TargetHandler class, which spawns a new ClientWorker thread to process the response and hand it over to the Axis engine. The AxisEngine will then call the respective message receiver (in this case the SynapseCallbackReceiver).

    Response Path

    AxisEngine.receive()
    SynapseCallbackReceiver.receive()
    SynapseCallbackReceiver.handleMessage()
    If there is an error, pop the fault handler and set the fault parameters and execute the fault handler
    If there are no errors or special cases, hand over the message to synapse environment
    Axis2SynapseEnvironment.injectMessage()
    Check if this response is for a proxy service or not and proceed
    If this is for a proxy service, then check the fault handler and add that to fault stack
    ————————————
    Execute outSequence

    SequenceMediator.mediate()
    AbstractListMediator.mediate()
    RelayUtils.buildMessage()
    DeferredMessageBuilder.getDocument()
    SOAPBuilder.processDocument()
    LogMediator.mediate()
    SendMediator.mediate()
    Axis2SynapseEnvironment.send()
    Axis2Sender.sendBack()
    AxisEngine.send()
    PassThroughHttpSender.invoke()





After going through this article, it would be worthwhile to do some debugging on the Synapse code yourself.














    Prabath SiriwardenaIdentity Mediation Language (IML) - Requirements Specification

Recent research by the analyst firm Quocirca confirms that many businesses now have more external users than internal ones: in Europe, 58 percent transact directly with users from other businesses and/or consumers; for the UK alone the figure is 65 percent. If you look at history, most enterprises today grow via acquisitions, mergers and partnerships. In the U.S. alone, mergers and acquisitions volume totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That's a 39% increase over the same period a year earlier, and the highest nine-month total since 2008.

    Gartner predicts by 2020, 60% of all digital identities interacting with enterprises will come from external identity providers.

    I have written two blog posts in detail highlighting the need for an Identity Bus.
The objective of the Identity Mediation Language (IML) is to define a configuration language that would run in an Identity Bus to mediate and transform identity tokens between multiple service providers and identity providers in a protocol-agnostic manner.

The objective of this blog post is to define the high-level requirements for the Identity Mediation Language. Your thoughts and suggestions are extremely valuable and highly appreciated in evolving this into a language from which the global identity management industry will benefit.

    1. Transform identity tokens from one protocol to another.

For example, the Identity Mediation Language should have the provision to transform an incoming SAML request into an OpenID Connect request, and then the OpenID Connect response from the Identity Provider into a SAML response.

2. The language should have the ability to define a handler chain in the inbound-authentication-request flow, outbound-authentication-request flow, outbound-authentication-response flow, inbound-authentication-response flow, and any other major channels identified as this specification evolves.


    3. The language should define a common syntax and semantics, independent from any of the protocols.

Having a common syntax and semantics for the language, independent from any specific protocol, will make it extensible. Support for specific protocols should be implemented as handlers. Each handler should know how to translate the common syntax defined by the language into its own syntax, and how to process it in a protocol-specific manner.

Following is a sample configuration (which will evolve in the future). The language is not coupled to any implementation. There can be global handlers for each protocol, but it should be possible to override them per request flow if needed.


      "inbound-authentication-request" : 
                  { "protocol": "saml", 
                     "handler" : "org.wso2.SAMLAuthReqRespHandler"
                   },
      "outbound-authentication-request" :
                  { "protocol": "oidc",
                     "handler" : "org.wso2.OIDCAuthReqRespHandler"
                  }
    }
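
Purely as an illustration of this idea, and not part of the specification or of any existing product, a protocol handler referenced from a configuration like the one above could expose a contract along the following lines. Every type and method name here is hypothetical.

import java.util.Map;

// Hypothetical sketch only: illustrates how a protocol-specific handler could translate
// between its own wire format and a protocol-agnostic attribute map used by the language.
public interface AuthenticationRequestHandler {

    // Parse a protocol-specific inbound request (e.g. a SAML AuthnRequest)
    // into protocol-agnostic attributes understood by the mediation flow.
    Map<String, Object> toCommonRequest(String rawInboundRequest);

    // Build a protocol-specific outbound request (e.g. an OpenID Connect request)
    // from the protocol-agnostic attributes.
    String toOutboundRequest(Map<String, Object> commonAttributes);
}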

    4. The language should have the provision to define a common request/response path as well as override it per service provider.

5. The language should have the provision to identify the service provider by a unique name or by other associated attributes. These attributes can be read from the incoming transport headers or from the identity token itself. If read from the ID token, the way that attribute is identified must be protocol agnostic. For example, if the incoming ID token is a SAML token and we need to identify the issuer from an attribute in it, the language should define that as an XPath, without using any SAML-specific semantics.

6. The language should have the provision to define whether an incoming request is just a pass-through.

7. The language should have the provision to define which transport headers should be passed through to the outbound-authentication-request path.

8. The language should have the provision to define which transport headers need to be added to the outbound-authentication-request path.

9. The language should have the provision to log all requests and responses. Logging should be configurable per service provider, and the log handler itself should be pluggable per service provider. Due to PCI compliance requirements we may not be able to log the complete message all the time.

    10. The language should have provision to retrieve attributes and attribute metadata, required by a given service provider, via an attribute handler. Also - it should have the provision to define attribute requirements inline.

11. The language should have the provision to define authentication requirements per service provider. The authentication can use one or more of: local authenticators, federated authenticators, or request path authenticators. The local authenticators will authenticate the end user against a local credential store. The federated authenticators will talk to an external identity provider to authenticate users. The request path authenticators will authenticate users from the credentials attached to the request itself.

12. The language should have the provision to define multiple-option and multiple-step authentication, per service provider. In the multiple-option scenario the relationship between the authenticators is an OR; the user needs to authenticate using only a single authenticator. With multiple steps the relationship between the steps is an AND; the user must authenticate successfully in each step.

13. The language should have the provision to define multiple-option and multiple-step authentication, per user. The user can be identified by the username or by any other attribute of the user. For example, if the user belongs to the 'admin' role, then he must authenticate with multi-factor authentication.

14. The language should have the provision to define multiple-option and multiple-step authentication, per user, per service provider. The user can be identified by the username or by any other attribute of the user. For example, if the user belongs to the 'admin' role and accesses a high-privileged application, then he must authenticate with multi-factor authentication; if the same person accesses the user profile dashboard, then multi-factor authentication is not required.

    15. The language should not define any authenticators by itself.

16. The language should have the provision to define authenticator sequences independently and then associate them with an authentication request path just by a reference. It should also have the ability to define them inline.

    17. The language should support defining requirements for adaptive authentication. A service provider may have a concrete authenticator sequence attached to it. At the same time, the language should have the provision to dynamically pick authenticator sequences in a context aware manner.

18. The language should have the provision to define, per service provider, authorization policies. An authorization policy may define the criteria under which an end user can access the given service provider. The identity provider should send back the authentication response only if the policy is satisfied by the authenticated user.

19. The language should have the provision to define how to transform a claim set obtained from an identity provider into a claim dialect specific to a given service provider. This can be a simple one-to-one claim transformation, for example http://claims.idp1.org/firstName --> http://claims.sp1.org/firstName, or a complex claim transformation like http://claims.idp1.org/firstName + http://claims.idp1.org/lastName --> http://claims.sp1.org/fullName. The language should have the provision to do this claim transformation from the inboundAuthenticationRequest flow to the outboundAuthenticationRequest/localAuthenticationRequest flow, and from the outboundAuthenticationResponse/localAuthenticationResponse flow to the inboundAuthenticationResponse flow.

    20.  The language should have the provision to authenticate service providers, irrespective of the protocol used in the inboundAuthenticationRequest.

21. The language should have the provision to accept authentication requests, provisioning requests, authorization requests, attribute requests and any other type of request from the service provider. The language should not be limited to the above four types of requests and must have the provision to extend them.

    22.  The language should define a way to retrieve identity provider and service provider metadata.

23. The language should have the provision to define rule-based user provisioning per service provider, per identity provider or per user, and per attributes associated with any of the above entities. For example, if the user is provisioned to the system via the foo service provider and his role is sales-team, provision the user to Salesforce. Another example could be to provision everyone authenticated against the Facebook identity provider to an internal user store.

24. The language should have the provision to indicate to a policy decision point (for access control) from which service provider the access control request was initiated, along with any other related metadata.

25. The language should have the provision to specify just-in-time provisioning requirements per service provider, per identity provider or per user, and per attributes associated with any of the above entities.

26. The language should have the provision to specify, per service provider, the user store and the attribute store to be used for local authentication and for retrieving user attributes locally.

27. The language should have the provision to specify an algorithm and a handler to decide on forced authentication for users who are already logged in to the identity provider. There can be a case where the user is already authenticated to the system with a single factor; when the same user tries to log in to a service provider which needs two-factor authentication, the algorithm handler should decide whether to force user authentication, and if so at which level. There can be an algorithm handler defined at the global level and also per service provider request flow.

    28. The language should have the provision to specify a home-realm-discovery handler, per service provider. This handler should know how to read the request to find out relevant attributes that can be used to identify the authentication options for that particular request.

    29. The language should have the provision to define attribute requirements for out-bound provisioning requests.

    30. The language should have the provision to define claim transformations prior to just-in-time provisioning or out-bound provisioning.

31. The language must have the provision to define how to authenticate to external endpoints. The language should not be coupled to any specific authentication protocol.

    Dammina SahabanduIntelliJ IDEA: How to restore default settings [Ubuntu]

Recently my IDE started to act weird. There were issues like not being able to use keyboard shortcuts (i.e. I couldn't press at least two keys at once). So the IDE almost acted like the vi editor ;)

Anyhow, I solved this issue by restoring the default IDE settings.

You can easily do that by deleting the current configuration directories with the following commands.

rm -rf ~/.IntelliJIdeaXX/config        (lets you reconfigure user-specific settings)
rm -rf ~/.IntelliJIdeaXX/system        (clears the IntelliJ IDEA data caches so they are rebuilt)

    Chanaka FernandoUnderstanding Threads created in WSO2 ESB

WSO2 ESB is an asynchronous, high-performance messaging engine which uses Java NIO technology for its internal implementation. You can find more information about the implementation details of WSO2 ESB's high-performance HTTP transport, known as the Pass-Through Transport (PTT), from the links given below.



In this tutorial, I am going to discuss the various threads created when you start the ESB and begin processing requests with it. This will help you troubleshoot critical ESB server issues using a thread dump. You can monitor the threads with a monitoring tool like JConsole or Java Mission Control (Java 7u40 upwards). Given below is a list of important threads and their stack traces from an active ESB server.
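
If you want to capture the same information programmatically, for example from a small utility dropped into the JVM you are inspecting, the standard java.lang.management API can dump every live thread. The sketch below is a minimal, generic example and is not part of the ESB code base; the class name is made up, and it dumps the threads of the JVM it runs in, so for a running ESB you would still attach JConsole/Java Mission Control or take a thread dump of the ESB process itself.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal, generic thread-dump sketch: prints the name, state and stack trace of
// every live thread of the JVM this code runs in (it does not attach to a remote process).
public class ThreadSnapshot {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // true, true -> include locked monitors and locked ownable synchronizers
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.println(info.getThreadName() + " [" + info.getThreadState() + "]");
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    " + frame);
            }
            System.out.println();
        }
    }
}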



    PassThroughHTTPSSender ( 1 Thread )

This thread is the reactor (acceptor) thread responsible for handling new connections to HTTPS endpoints from the ESB. When there is a <send/> or <call/> mediator in the mediation flow targeting an HTTPS endpoint, the request eventually goes through this thread. This thread needs to be running the whole time the server is up and running (unless the HTTPS transport is disabled).

    Name: PassThroughHTTPSSender
    State: RUNNABLE
    Total blocked: 1  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@2bef6f1
       - locked java.util.Collections$UnmodifiableSet@578b8a47
       - locked sun.nio.ch.KQueueSelectorImpl@1a8f39bb
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:366)
    org.apache.synapse.transport.passthru.PassThroughHttpSender$2.run(PassThroughHttpSender.java:195)
    java.lang.Thread.run(Thread.java:744)


    HTTPS-Sender I/O dispatcher-1 (4 Threads in 4 Core)

These are the reactor threads responsible for handling I/O events and triggering the different states of the state machine. There will be n of these threads, where n equals the number of cores in the machine. All the I/O events related to sending messages to HTTPS endpoints are handled by these threads. These threads need to be running the whole time the server is up and running (unless the HTTPS transport is disabled).


    Name: HTTPS-Sender I/O dispatcher-1
    State: RUNNABLE
    Total blocked: 1  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@25484cc9
       - locked java.util.Collections$UnmodifiableSet@3f6342dd
       - locked sun.nio.ch.KQueueSelectorImpl@66ac838c
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:259)
    org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:604)
    java.lang.Thread.run(Thread.java:744)


    PassThroughHTTPSender ( 1 Thread )

This thread is the reactor (acceptor) thread responsible for handling new connections to HTTP endpoints from the ESB. When there is a <send/> or <call/> mediator in the mediation flow targeting an HTTP endpoint, the request eventually goes through this thread and a new socket channel is created. This thread needs to be running the whole time the server is up and running (unless the HTTP transport is disabled).

    Name: PassThroughHTTPSender
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@c15e5c9
       - locked java.util.Collections$UnmodifiableSet@7e49d31e
       - locked sun.nio.ch.KQueueSelectorImpl@9052336
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:366)
    org.apache.synapse.transport.passthru.PassThroughHttpSender$2.run(PassThroughHttpSender.java:195)
    java.lang.Thread.run(Thread.java:744)


    HTTP-Sender I/O dispatcher-1 (4 Threads in 4 Core)

These are the reactor threads responsible for handling I/O events and triggering the different states of the state machine. There will be n of these threads, where n equals the number of cores in the machine. All the I/O events related to sending messages to HTTP endpoints are handled by these threads. These threads need to be running the whole time the server is up and running (unless the HTTP transport is disabled).

    Name: HTTP-Sender I/O dispatcher-1
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@327d74e9
       - locked java.util.Collections$UnmodifiableSet@460208f5
       - locked sun.nio.ch.KQueueSelectorImpl@33f55d67
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:259)
    org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:604)
    java.lang.Thread.run(Thread.java:744)


    PassThroughHTTPSListener (1 Thread)

This thread is the reactor (acceptor) thread responsible for handling new HTTPS connections from clients. When an HTTPS request reaches the ESB, it eventually goes through this thread, which creates a new socket channel and registers it with a selection key. Further I/O events are handled by the dispatcher threads. This thread needs to be running the whole time the server is up and running (unless the HTTPS transport is disabled).

    Name: PassThroughHTTPSListener
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@50cff7e
       - locked java.util.Collections$UnmodifiableSet@271d259c
       - locked sun.nio.ch.KQueueSelectorImpl@2282db4d
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:366)
    org.apache.synapse.transport.passthru.PassThroughHttpListener$4.run(PassThroughHttpListener.java:251)
    java.lang.Thread.run(Thread.java:744)




    HTTPS-Listener I/O dispatcher-1(4 Threads in 4 Core)

These are the reactor threads responsible for handling I/O events and triggering the different states of the state machine. There will be n of these threads, where n equals the number of cores in the machine. All the I/O events related to receiving messages over HTTPS are handled by these threads. These threads need to be running the whole time the server is up and running (unless the HTTPS transport is disabled).

    Name: HTTPS-Listener I/O dispatcher-1
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@48c97759
       - locked java.util.Collections$UnmodifiableSet@8ac860c
       - locked sun.nio.ch.KQueueSelectorImpl@ff4fe7c
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:259)
    org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:604)
    java.lang.Thread.run(Thread.java:744)


    PassThroughHTTPListener (1 Thread)

This thread is the reactor (acceptor) thread responsible for handling new HTTP connections from clients. When an HTTP request reaches the ESB, it eventually goes through this thread, which creates a new socket channel and registers it with a selection key. Further I/O events are handled by the dispatcher threads. This thread needs to be running the whole time the server is up and running (unless the HTTP transport is disabled).

    Name: PassThroughHTTPListener
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@1f6438be
       - locked java.util.Collections$UnmodifiableSet@152987f9
       - locked sun.nio.ch.KQueueSelectorImpl@11d60796
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:366)
    org.apache.synapse.transport.passthru.PassThroughHttpListener$4.run(PassThroughHttpListener.java:251)
    java.lang.Thread.run(Thread.java:744)



    HTTP-Listener I/O dispatcher-1(4 Threads in 4 Core)

These are the reactor threads responsible for handling I/O events and triggering the different states of the state machine. There will be n of these threads, where n equals the number of cores in the machine. All the I/O events related to receiving messages over HTTP are handled by these threads. These threads need to be running the whole time the server is up and running (unless the HTTP transport is disabled).

    Name: HTTP-Listener I/O dispatcher-1
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@34095061
       - locked java.util.Collections$UnmodifiableSet@60e8e6e5
       - locked sun.nio.ch.KQueueSelectorImpl@5fbe8e73
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:259)
    org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:604)
    java.lang.Thread.run(Thread.java:744)


    PassThroughMessageProcessor-1(400)

This is the worker thread pool responsible for processing requests coming in to the ESB once the request message is received. The Passthrough transport hands the message over to one of these worker threads once the HTTP headers have been received from the client; the worker threads then run independently of the PTT reactor threads described earlier. There will be n threads, where n is the worker_pool_size_core parameter defined in the passthrough-http.properties file.

    Name: PassThroughMessageProcessor-1
    State: WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7050afc9
    Total blocked: 0  Total waited: 1

    Stack trace: 
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:744)


    SynapseWorker-1(20)

This is the worker thread pool responsible for handling iterate/clone mediator executions. There is a separate thread pool for such executions; the number of threads in this pool can be configured in the synapse.properties file with the synapse.threads.core parameter.

    Name: SynapseWorker-1
    State: WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@22d3b7d9
    Total blocked: 1  Total waited: 2

    Stack trace: 
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:744)



The following threads relate to the embedded Tomcat instance running within the ESB. It is used to serve requests coming in to the ESB management console and admin services. You can configure the thread pool and other parameters in the ESB_HOME/repository/conf/tomcat/catalina-server.xml file.



    NioBlockingSelector.BlockPoller-1(2)

    Name: NioBlockingSelector.BlockPoller-1
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@5d630f69
       - locked java.util.Collections$UnmodifiableSet@59cdfa64
       - locked sun.nio.ch.KQueueSelectorImpl@510b6d29
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.tomcat.util.net.NioBlockingSelector$BlockPoller.run(NioBlockingSelector.java:327)


    http-nio-9763-ClientPoller-0(2)

    Name: http-nio-9763-ClientPoller-0
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@5f185036
       - locked java.util.Collections$UnmodifiableSet@1b5f14d
       - locked sun.nio.ch.KQueueSelectorImpl@5b8b369f
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1146)
    java.lang.Thread.run(Thread.java:744)


    http-nio-9763-Acceptor-0(2)

    Name: http-nio-9763-Acceptor-0
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
       - locked java.lang.Object@525f8df5
    org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:787)
    java.lang.Thread.run(Thread.java:744)


    http-nio-9443-ClientPoller-0(2)

    Name: http-nio-9443-ClientPoller-0
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.KQueueArrayWrapper.kevent0(Native Method)
    sun.nio.ch.KQueueArrayWrapper.poll(KQueueArrayWrapper.java:200)
    sun.nio.ch.KQueueSelectorImpl.doSelect(KQueueSelectorImpl.java:103)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
       - locked sun.nio.ch.Util$2@3d8e1f2f
       - locked java.util.Collections$UnmodifiableSet@2f3ecb19
       - locked sun.nio.ch.KQueueSelectorImpl@113dc8a9
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
    org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1146)
    java.lang.Thread.run(Thread.java:744)


    http-nio-9443-Acceptor-0(2)

    Name: http-nio-9443-Acceptor-0
    State: RUNNABLE
    Total blocked: 0  Total waited: 0

    Stack trace: 
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
       - locked java.lang.Object@38fcf802
    org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:787)
    java.lang.Thread.run(Thread.java:744)


    http-nio-9443-exec-1(50)

    Name: http-nio-9443-exec-1
    State: WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4a4e4fb2
    Total blocked: 1  Total waited: 6

    Stack trace: 
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:104)
    org.apache.tomcat.util.threads.TaskQueue.take(TaskQueue.java:32)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:744)





    sanjeewa malalgodaWSO2 API Manager - Decode JWT in ESB using JWT Decode mediator.

Here in this post I will list the code for a JWTDecoder mediator, which we can use in WSO2 ESB or any other Synapse-based WSO2 product to decode the JWT passed in the message's transport headers.

After a message goes through this mediator, all the claims in the JWT will be present in the message context as properties. The property name will be the claim name, so you can use them in the rest of the mediation flow.

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.security.InvalidKeyException;
    import java.security.KeyStore;
    import java.security.KeyStoreException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.security.Signature;
    import java.security.SignatureException;
    import java.security.cert.Certificate;
    import java.security.cert.CertificateEncodingException;
    import java.security.cert.CertificateException;
    import java.security.cert.X509Certificate;
    import java.util.Enumeration;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    import org.apache.axiom.util.base64.Base64Utils;
    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.oltu.oauth2.jwt.JWTException;
    import org.apache.oltu.oauth2.jwt.JWTProcessor;
    import org.apache.synapse.ManagedLifecycle;
    import org.apache.synapse.MessageContext;
    import org.apache.synapse.SynapseException;
    import org.apache.synapse.SynapseLog;
    import org.apache.synapse.core.SynapseEnvironment;
    import org.apache.synapse.core.axis2.Axis2MessageContext;
    import org.apache.synapse.mediators.AbstractMediator;
    import org.wso2.carbon.context.CarbonContext;
    import org.wso2.carbon.core.util.KeyStoreManager;



    public class JWTDecoder extends AbstractMediator implements ManagedLifecycle {
       
        private static Log log = LogFactory.getLog(JWTDecoder.class);

        private final String CLAIM_URI = "http://wso2.org/claims/";
        private final String SCIM_CLAIM_URI = "urn:scim:schemas:core:1.0:";

        private KeyStore keyStore;

        public void init(SynapseEnvironment synapseEnvironment) {
            if (log.isInfoEnabled()) {
                log.info("Initializing JWTDecoder Mediator");
            }
        String keyStoreFile = "";   // path to the keystore that holds the JWT signing certificate
        String password = "";       // password of that keystore

        try {
            keyStore = KeyStore.getInstance("JKS");
        } catch (KeyStoreException e) {
            log.error("Unable to get a JKS KeyStore instance", e);
        }
            char[] storePass = password.toCharArray();

            // load the key store from file system
            FileInputStream fileInputStream = null;
            try {
                fileInputStream = new FileInputStream(keyStoreFile);
                keyStore.load(fileInputStream, storePass);
                fileInputStream.close();
            } catch (FileNotFoundException e) {
                if (log.isErrorEnabled()) {
                    log.error("Error loading keystore", e);
                }
            } catch (NoSuchAlgorithmException e) {
                if (log.isErrorEnabled()) {
                    log.error("Error loading keystore", e);
                }
            } catch (CertificateException e) {
                if (log.isErrorEnabled()) {
                    log.error("Error loading keystore", e);
                }
            } catch (IOException e) {
                if (log.isErrorEnabled()) {
                    log.error("Error loading keystore", e);
                }
            }
        }

        public boolean mediate(MessageContext synapseContext) {
            SynapseLog synLog = getLog(synapseContext);

            if (synLog.isTraceOrDebugEnabled()) {
                synLog.traceOrDebug("Start : JWTDecoder mediator");
                if (synLog.isTraceTraceEnabled()) {
                    synLog.traceTrace("Message : " + synapseContext.getEnvelope());
                }
            }

            // Extract the HTTP headers and then extract the JWT from the HTTP Header map
            org.apache.axis2.context.MessageContext axis2MessageContext = ((Axis2MessageContext) synapseContext).getAxis2MessageContext();
            Object headerObj = axis2MessageContext.getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
            @SuppressWarnings("unchecked")
        Map<String, String> headers = (Map<String, String>) headerObj;
            String jwt_assertion = (String) headers.get("x-jwt-assertion");

            // Incoming request does not contain the JWT assertion
        if (jwt_assertion == null || jwt_assertion.isEmpty()) {
                // Since this is an unauthorized request, send the response back to client with 401 - Unauthorized error
                synapseContext.setTo(null);
                synapseContext.setResponse(true);
                axis2MessageContext.setProperty("HTTP_SC", "401");
                // Log the authentication failure
                String err = "JWT assertion not found in the message header";
                handleException(err, synapseContext);
                return false;
            }

            boolean isSignatureVerified = verifySignature(jwt_assertion, synapseContext);

            try {
                if (isSignatureVerified) {
                    // Process the JWT, extract the values and set them to the Synapse environment
                    if (log.isDebugEnabled()){
                        log.debug("JWT assertion is : "+jwt_assertion);
                    }
                    JWTProcessor processor = new JWTProcessor().process(jwt_assertion);
                Map<String, Object> claims = processor.getPayloadClaims();
                for (Map.Entry<String, Object> claimEntry : claims.entrySet()) {
                        // Extract the claims and set it in Synapse context
                        if (claimEntry.getKey().startsWith(CLAIM_URI)) {
                            String tempPropName = claimEntry.getKey().split(CLAIM_URI)[1];
                            synapseContext.setProperty(tempPropName, claimEntry.getValue());
                            if(log.isDebugEnabled()){
                                log.debug("Getting claim :"+tempPropName+" , " +claimEntry.getValue() );
                            }
                        } else if (claimEntry.getKey().startsWith(SCIM_CLAIM_URI)) {
                            String tempPropName = claimEntry.getKey().split(SCIM_CLAIM_URI)[1];
                            if (tempPropName.contains(".")) {
                                tempPropName = tempPropName.split("\\.")[1];
                            }

                            synapseContext.setProperty(tempPropName, claimEntry.getValue());
                            if(log.isDebugEnabled()){
                                log.debug("Getting claim :"+tempPropName+" , " +claimEntry.getValue() );
                            }
                        }
                    }
                } else {
                    return false;
                }
            } catch (JWTException e) {
                log.error(e.getMessage(), e);
                throw new SynapseException(e.getMessage(), e);
            }

            if (synLog.isTraceOrDebugEnabled()) {
                synLog.traceOrDebug("End : JWTDecoder mediator");
            }
           
            return true;
        }

        private boolean verifySignature(String jwt_assertion, MessageContext synapseContext) {
            boolean isVerified = false;
            String[] split_string = jwt_assertion.split("\\.");
            String base64EncodedHeader = split_string[0];
            String base64EncodedBody = split_string[1];
            String base64EncodedSignature = split_string[2];

            String decodedHeader = new String(Base64Utils.decode(base64EncodedHeader));
            byte[] decodedSignature = Base64Utils.decode(base64EncodedSignature);
            Pattern pattern = Pattern.compile("^[^:]*:[^:]*:[^:]*:\"(.+)\"}$");
            Matcher matcher = pattern.matcher(decodedHeader);
            String base64EncodedCertThumb = null;
            if (matcher.find()) {
                base64EncodedCertThumb = matcher.group(1);
            }
            byte[] decodedCertThumb = Base64Utils.decode(base64EncodedCertThumb);

            Certificate publicCert = null;


            publicCert = getSuperTenantPublicKey(decodedCertThumb, synapseContext);
            try {
                if (publicCert != null) {
                    isVerified = verifySignature(publicCert, decodedSignature, base64EncodedHeader, base64EncodedBody,
                            base64EncodedSignature);
                } else if (!isVerified) {
                    publicCert = getTenantPublicKey(decodedCertThumb, synapseContext);
                    if (publicCert != null) {
                        isVerified = verifySignature(publicCert, decodedSignature, base64EncodedHeader, base64EncodedBody,
                                base64EncodedSignature);
                    } else {
                        throw new Exception("Couldn't find a public certificate to verify signature");
                    }

                }

            } catch (Exception e) {
                handleSigVerificationException(e, synapseContext);
            }
            return isVerified;
        }

        private Certificate getSuperTenantPublicKey(byte[] decodedCertThumb, MessageContext synapseContext){
            String alias = getAliasForX509CertThumb(keyStore, decodedCertThumb, synapseContext);
            if (alias != null) {
                // get the certificate associated with the given alias from
                // default keystore
                try {
                    return keyStore.getCertificate(alias);
                } catch (KeyStoreException e) {
                    if (log.isErrorEnabled()) {
                        log.error("Error when getting server public certificate: " , e);
                    }
                }
            }
            return null;
        }

        private Certificate getTenantPublicKey(byte[] decodedCertThumb, MessageContext synapseContext){
            SynapseLog synLog = getLog(synapseContext);

            int tenantId = CarbonContext.getThreadLocalCarbonContext().getTenantId();
            String tenantDomain = CarbonContext.getThreadLocalCarbonContext().getTenantDomain();
           
            if (synLog.isTraceOrDebugEnabled()) {
                synLog.traceOrDebug("Tenant Domain: " + tenantDomain);
            }

            KeyStore tenantKeyStore = null;
            KeyStoreManager tenantKSM = KeyStoreManager.getInstance(tenantId);
            String ksName = tenantDomain.trim().replace(".", "-");
            String jksName = ksName + ".jks";
            try {
                tenantKeyStore = tenantKSM.getKeyStore(jksName);
            } catch (Exception e) {
                if (log.isErrorEnabled()) {
                    log.error("Error getting keystore for " + tenantDomain, e);
                }
            }
            if (tenantKeyStore != null) {
                String alias = getAliasForX509CertThumb(tenantKeyStore, decodedCertThumb, synapseContext);
                if (alias != null) {
                    // get the certificate associated with the given alias
                    // from
                    // tenant's keystore
                    try {
                        return tenantKeyStore.getCertificate(alias);
                    } catch (KeyStoreException e) {
                        if (log.isErrorEnabled()) {
                            log.error("Error when getting tenants public certificate: " + tenantDomain, e);
                        }
                    }
                }
            }

            return null;
        }
       
        private boolean verifySignature(Certificate publicCert, byte[] decodedSignature, String base64EncodedHeader,
                String base64EncodedBody, String base64EncodedSignature) throws NoSuchAlgorithmException,
                InvalidKeyException, SignatureException {
            // create signature instance with signature algorithm and public cert,
            // to verify the signature.
            Signature verifySig = Signature.getInstance("SHA256withRSA");
            // init
            verifySig.initVerify(publicCert);
            // update signature with signature data.
            verifySig.update((base64EncodedHeader + "." + base64EncodedBody).getBytes());
            // do the verification
            return verifySig.verify(decodedSignature);
        }

        private String getAliasForX509CertThumb(KeyStore keyStore, byte[] thumb, MessageContext synapseContext) {
            SynapseLog synLog = getLog(synapseContext);
            Certificate cert = null;
            MessageDigest sha = null;

            try {
                sha = MessageDigest.getInstance("SHA-1");
            } catch (NoSuchAlgorithmException e) {
                handleSigVerificationException(e, synapseContext);
            }
            try {
            for (Enumeration<String> e = keyStore.aliases(); e.hasMoreElements();) {
                String alias = e.nextElement();
                    Certificate[] certs = keyStore.getCertificateChain(alias);
                    if (certs == null || certs.length == 0) {
                        // no cert chain, so lets check if getCertificate gives us a result.
                        cert = keyStore.getCertificate(alias);
                        if (cert == null) {
                            return null;
                        }
                    } else {
                        cert = certs[0];
                    }
                    if (!(cert instanceof X509Certificate)) {
                        continue;
                    }
                    sha.reset();
                    try {
                        sha.update(cert.getEncoded());
                    } catch (CertificateEncodingException e1) {
                        //throw new Exception("Error encoding certificate");
                    }
                    byte[] data = sha.digest();
                    if (new String(thumb).equals(hexify(data))) {
                        if (synLog.isTraceOrDebugEnabled()) {
                            synLog.traceOrDebug("Found matching alias: " + alias);
                        }
                        return alias;
                    }
                }
            } catch (KeyStoreException e) {
                if (log.isErrorEnabled()) {
                    log.error("Error getting alias from keystore", e);
                }
            }
            return null;
        }

        private String hexify(byte bytes[]) {
            char[] hexDigits = {'0', '1', '2', '3', '4', '5', '6', '7',
                                '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};

            StringBuffer buf = new StringBuffer(bytes.length * 2);

            for (int i = 0; i < bytes.length; ++i) {
                buf.append(hexDigits[(bytes[i] & 0xf0) >> 4]);
                buf.append(hexDigits[bytes[i] & 0x0f]);
            }

            return buf.toString();
        }

        private void handleSigVerificationException(Exception e, MessageContext synapseContext) {
            synapseContext.setTo(null);
            synapseContext.setResponse(true);
            org.apache.axis2.context.MessageContext axis2MessageContext = ((Axis2MessageContext) synapseContext).getAxis2MessageContext();
            axis2MessageContext.setProperty("HTTP_SC", "401");
            String err = e.getMessage();
            handleException(err, synapseContext);
        }

        public void destroy() {
            if (log.isInfoEnabled()) {
                log.info("Destroying JWTDecoder Mediator");
            }
        }
    }
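
Once the decoded claims have been set as properties, any mediator further down the flow can read them back from the message context (or you can reference them in the synapse configuration with the get-property() XPath function). The following is a minimal, hypothetical sketch of such a downstream mediator; the property name "enduser" is just an assumed claim name.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

// Hypothetical downstream mediator: reads a claim that the JWTDecoder mediator above
// stored in the message context (property names mirror the decoded claim names).
public class ClaimLoggingMediator extends AbstractMediator {

    private static Log log = LogFactory.getLog(ClaimLoggingMediator.class);

    public boolean mediate(MessageContext synapseContext) {
        // "enduser" is an assumed claim/property name; use whatever claim your JWT carries
        Object endUser = synapseContext.getProperty("enduser");
        if (endUser != null) {
            log.info("Request authenticated for user: " + endUser);
        }
        return true; // continue the mediation flow
    }
}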




    sanjeewa malalgodaHow to cleanup old and unused tokens in WSO2 API Manager

When we use WSO2 API Manager over a few months, we may end up with a lot of expired, revoked and inactive tokens in the IDN_OAUTH2_ACCESS_TOKEN table. As of now we do not clear these entries, for logging and audit purposes. But over time, as the table grows, we may need to clear it, because having a large number of entries will slow down the token generation and validation process. So in this post we will discuss clearing unused tokens in API Manager.

The most important thing is that we should not try this on the actual deployment straight away, to prevent data loss. First take a dump of the running server's database and perform these instructions on the dump. Then start a server pointing to the updated database and test thoroughly to verify that there are no issues. Once you are confident with the process, you may schedule it for a server maintenance window. Since deleting table entries may take a considerable amount of time, it is advisable to test against the dumped data before the actual cleanup task.



    Stored procedure to cleanup tokens

    • Back up the existing IDN_OAUTH2_ACCESS_TOKEN table.
    • Turn off SQL_SAFE_UPDATES.
• Delete the non-active tokens, keeping a single record for each state for each combination of CONSUMER_KEY, AUTHZ_USER and TOKEN_SCOPE.
    • Restore the original SQL_SAFE_UPDATES value.

    USE `WSO2AM_DB`;
    DROP PROCEDURE IF EXISTS `cleanup_tokens`;

    DELIMITER $$
    CREATE PROCEDURE `cleanup_tokens` ()
    BEGIN

    -- Backup IDN_OAUTH2_ACCESS_TOKEN table
    DROP TABLE IF EXISTS `IDN_OAUTH2_ACCESS_TOKEN_BAK`;
    CREATE TABLE `IDN_OAUTH2_ACCESS_TOKEN_BAK` AS SELECT * FROM `IDN_OAUTH2_ACCESS_TOKEN`;

    -- 'Turn off SQL_SAFE_UPDATES'
    SET @OLD_SQL_SAFE_UPDATES = @@SQL_SAFE_UPDATES;
    SET SQL_SAFE_UPDATES = 0;

    -- 'Keep the most recent INACTIVE key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
    SELECT 'BEFORE:TOTAL_INACTIVE_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE';

    SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;

    DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);

    SELECT 'AFTER:TOTAL_INACTIVE_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE';

    -- 'Keep the most recent REVOKED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
    SELECT 'BEFORE:TOTAL_REVOKED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED';

    SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;

    DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);

    SELECT 'AFTER:TOTAL_REVOKED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED';


    -- 'Keep the most recent EXPIRED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
    SELECT 'BEFORE:TOTAL_EXPIRED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED';

    SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;

    DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);

    SELECT 'AFTER:TOTAL_EXPIRED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED';

    -- 'Restore the original SQL_SAFE_UPDATES value'
    SET SQL_SAFE_UPDATES = @OLD_SQL_SAFE_UPDATES;

    END$$

    DELIMITER ;


    Schedule event to run cleanup task per week
    USE `WSO2AM_DB`;
    DROP EVENT IF EXISTS `cleanup_tokens_event`;
    CREATE EVENT `cleanup_tokens_event`
        ON SCHEDULE
      EVERY 1 WEEK STARTS '2015-01-01 00:00:00'
        DO
          CALL `WSO2AM_DB`.`cleanup_tokens`();

    -- 'Turn on the event_scheduler'
    SET GLOBAL event_scheduler = ON;


These scripts were initially created by Rushmin Fernando (http://rushmin.blogspot.com/). I am listing them here to help API Manager users.

    Prabath SiriwardenaThe Mobile Connect implementation by Dialog Axiata and Sri Lanka Telecom Mobitel gets a special award at the EIC 2015

    Dialog Axiata and Sri Lanka Telecom Mobitel won a special award at the European Identity Conference 2015, for the Mobile Connect implementation with the WSO2 Identity Server. It was my pleasure to accept the award on behalf of all three companies - and credit goes to the fantastic teams behind this whole effort from all three companies.


    The combined WSO2-Dialog-Mobitel Mobile Connect solution is a fast, secure log-in system for mobile authentication that enables people to access their online accounts with just one click. There are different levels of security from low-level website access to highly secure bank-grade authentication. People who have subscribed to a participating operator know that when they click on a website’s Mobile Connect button they are making passwords a thing of the past.

It is the world's first multi-operator Mobile Connect solution that provides an out-of-band medium for authenticating a user to any service provider without requiring a password. The GSM/UMTS network is used to send a USSD prompt, which can be a simple 'click OK' or a request for a PIN, at the point of authentication. The response then verifies the user's authenticity. For users accessing the site via a mobile network, seamless login can be enabled, removing the need for the USSD interaction and further simplifying the user experience.

    The WSO2-Dialog-Mobitel solution fully conforms to the GSMA CPAS5 OpenID Connect OpConnect Profile 1.0, which is based on the OpenID Connect 1.0 core specification.

    Mobile Connect was initially launched on a trial basis to a significant user base as a combined effort between Dialog Axiata, Mobitel and WSO2 in July 2014.

    Based on the success of the trial, Dialog Axiata and Mobitel are running this service live for the entire customer base of both operators totaling more than 14 million subscribers.

    Prabath SiriwardenaConnected Identity : Benefits, Risks & Challenges

    It was a day in August 2012. Mat Honan - a reporter for Wired magazine in San Francisco - returned home and was playing with his little daughter. He had no clue what was going to happen next. Suddenly his iPhone powered down. He was expecting a call, so he plugged it into a wall power socket and rebooted it. What he witnessed next blew him away. Instead of the iPhone home screen with all the apps, it asked him to set up a new phone, with a big Apple logo and a welcome screen. Honan thought his iPhone was misbehaving, but he was not that worried, since he backed up daily to iCloud - restoring everything from iCloud could simply fix this. Honan tried to log in to iCloud. Tried once - failed. Tried again - failed. Again - failed. He thought he was just too flustered to type it right. Tried once again… and failed. Now he knew something weird had happened. His last hope was his MacBook; at least he could restore everything from the local backup. He booted up the MacBook - there was nothing in it - and it prompted him to enter a four-digit passcode that he had never set up.

    Honan never gives up. He called Apple tech support to reclaim his iCloud account. There he learnt that ‘he’ had already called Apple, 30 minutes before, to reset his iCloud password. The only information required at that time to reset an iCloud account was the billing address and the last four digits of the credit card. The billing address was readily available in the Whois record of the Internet domain Honan had for his personal website. The attacker was good enough to get the last four digits of Honan’s credit card by talking to the Amazon helpdesk - he already had Honan’s email address and full mailing address, and those were more than enough for a social engineering attack.

    Honan lost almost everything. The attacker was not done yet - next he broke into Honan’s Gmail account, and from there into his Twitter account. One by one, Honan’s connected identities fell into the hands of the attacker.

    This is something we all know by heart - the principle of the weakest link. Any computer system is only as strong as its weakest link. No matter whether you use biometrics to log in to your iPhone or your MacBook, someone gaining access to your iCloud account can wipe off both of them.

    Today, the global Internet economy is somewhere in the neighborhood of US$10 trillion. By 2016, almost half the world’s population - about 3 billion people - will use the Internet. And it is not just humans - during 2008, the number of things connected to the Internet exceeded the number of people on earth. Over 12.5 billion devices were connected to the Internet in 2012. It is estimated that 25 billion devices will be connected to the Internet by the end of 2015, and 50 billion by the end of 2020.

    Connected devices have existed in one form or another since the introduction of the first computer networks and consumer electronics. However, it wasn’t until the Internet emerged that the idea of a globally connected planet started to take off. In the 1990s, researchers theorized how humans and machines could weave together a completely new form of communication and interaction via machines. That reality is now unfolding before our eyes.

    The word ‘Identity’ is no longer just about humans. It represents both humans and ‘things’.

    The Identity of Things (IDoT) is an effort that involves assigning unique identifiers (UID) with associated metadata to devices and objects (things), enabling them to connect and communicate effectively with other entities over the Internet.

    The metadata associated with the unique identifier collectively defines the identity of an endpoint. Identity of things is an essential component of the Internet of Things (IoT), in which almost anything imaginable can be addressed and networked for exchange of data online. In this context, a thing can be any entity -- including both physical and logical objects -- that has a unique identifier and the ability to transfer data over a network.

    The definition of identity has evolved with the Internet of Things. Identity cannot be defined just by attributes or claims; it can be further refined by patterns or behaviors. The Fitbit you wear while you sleep knows your sleeping patterns. The sensors attached to your connected car know your driving patterns. The sensors attached to your refrigerator know your daily food consumption patterns. None of the identity stores out there can build a complete picture of a given entity’s identity. And even leaving aside the aggregated view, another challenge we face today in the connected world is data migration and ownership.

    Connected cars collect and store a vast amount of data. This data goes well beyond a vehicle owner's personal preferences and settings. Connected cars collect driver data such as travel routes, travel destinations, car speeds, driver behavior, commute patterns and much more.


    Connected cars are only a fraction of the millions and millions of Internet-connected devices that enable users to set their personal preferences and that collect vast amounts of user data. All of these IoT devices are helping to create a "virtual identity" for each and every user.

    While this user-generated data will most likely last forever, connected cars and all the other Internet-connected devices will not. This leads to several important questions concerning connected car owners and their data:
    • What happens to connected car owners' data when they want to purchase a new car? 
    • Can the car owner's data be transferred to another connected car, even if that car is made by a different manufacturer? 
    • How and where is all of this connected car data being stored?
    Connected car data and user preferences are primarily stored in cloud-based silos. There are no universal standards or agreed-upon best practices among car manufacturers or in the connected car industry for collecting, storing and managing connected car owner data. There are also no universal standards or best practices for managing a connected car owner's "identity", which includes the storage and export of personal preferences and user history.

    Identity silos create a lot of friction in the connected business. One way to reduce that friction - while still keeping the data, and of course its ownership, in silos - is to expose identity data via APIs. That would help end users build and understand a better picture of their own identity. Now they can relate driving patterns to sleeping patterns, daily food consumption patterns to sleeping patterns, and many more. BMW Connected Drive, which connects to your Fitbit, now knows if you haven’t slept well last night - and could, for example, present you with options that help you drive safely.

    Propagating end-user identity across these APIs is the next challenge. Building a protocol-agnostic security model is the key to connected identity. If we build BMW Connected Drive to be compliant with the security model of Fitbit, then when it talks to Yelp to find the best nearby restaurant - one mostly rated by your friends on Facebook - BMW Connected Drive also has to be compatible with the security models of Facebook and Yelp. In other words, we end up building a point-to-point security model, which leads to the spaghetti identity anti-pattern.

    The identity silo and spaghetti identity anti-patterns are not only present in the IoT world.

    If you look at history, most enterprises today grow via acquisitions, mergers and partnerships. In the U.S. alone, mergers and acquisitions volume totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That’s a 39% increase over the same period a year earlier - and the highest nine-month total since 2008.

    Research done by the analyst firm Quocirca confirms that many businesses now have more external users than internal ones: in Europe, 58 percent transact directly with users from other businesses and/or consumers; for the UK alone the figure is 65 percent. Another analyst firm predicts that by 2020, 60% of all digital identities interacting with enterprises will come from external identity providers.

    Each external identity provider can be treated as an identity silo, and the way identity data is shared is through APIs. The identity consumer, or the service provider, must trust the identity provider to accept a given user identity. Beyond the trust, both the service provider and the identity provider must speak the same language to bootstrap the trust relationship and then transport identity data. What about a service provider that is incompatible with the identity token sharing protocol supported by the identity provider? Either you need to fix the identity provider end to speak the same language as the service provider, or the other way around.

    In today’s context, the connected business is a very dynamic and complex environment. Your desire is to reach out to your customers, partners, distributors and suppliers and create more and more business interactions and activities that will generate more revenue. The goal here is not just to integrate technological silos in your enterprise, but also to make your business more accessible and reactive.

    Friction in building connections between your business entities is something that cannot be tolerated. The cost of provisioning a service provider or an identity provider into the system can be high, due to protocol incompatibilities. Also, building point-to-point trust relationships between service providers and identity providers does not scale well.

    With the Identity Bus or Identity Broker pattern, a given service provider is not coupled to a given identity provider, nor to a given federation protocol. The broker maintains the trust relationships between the entities and mediates identity tokens between multiple heterogeneous security protocols. It can further centrally enforce access control, auditing and monitoring.

    With ever-evolving standards for identity federation, and no proper standard to manage and propagate device identities, the Identity Broker will play a key role in building a common, connected identity platform in a protocol-agnostic manner.

    The following highlights some of the benefits of the Identity Broker pattern.
    • Introducing a new service provider is frictionless. You only need to register the service provider at the identity bus and, from there, pick which identity providers it trusts. There is no need to add the service provider configuration to each and every identity provider. 
    • Removing an existing service provider is frictionless. You only need to remove the service provider from the identity bus; there is no need to remove it from each and every identity provider. 
    • Introducing a new identity provider is frictionless. You only need to register the identity provider at the identity bus, and it becomes available to any service provider. 
    • Removing an existing identity provider is extremely easy. You only need to remove the identity provider from the identity bus. 
    • Enforcing new authentication protocols is frictionless. Say you need to authenticate users with both username/password and Duo Security (SMS-based authentication) - you only need to add that capability to the identity bus, and from there you pick the required set of authentication options for a given service provider at the time of service provider registration. Each service provider can pick how it wants to authenticate users at the identity bus. 
    • Claim transformations. Your service provider may read the user's email address from the http://sp1.org/claims/email attribute id, but the identity provider of the user may send it as http://idp1.org/claims/email. The identity bus can transform the claims it receives from the identity provider into the format expected by the service provider. 
    • Role mapping. Your service provider needs to authorize users once they are logged in. What the user can do at the identity provider is different from what the same user can do at the service provider. The user's roles at the identity provider define what he can do there, while the service provider's roles define what he can do at the service provider. The identity bus is capable of mapping identity provider roles to service provider roles. For example, a user may bring an idp-admin role from his identity provider in a SAML response; the identity bus will then find the mapped service provider role corresponding to it, say sp-admin, and add that role to the SAML response returned to the service provider. 
    • Just-in-time provisioning. Since the identity bus sits in the middle of all identity transactions, it can provision all external user identities to an internal user store. 
    • Centralized monitoring and auditing. 
    • Centralized access control. 
    • Introducing a new federation protocol needs minimal changes. If you have a service provider or an identity provider that supports a proprietary federation protocol, you only need to add that capability to the identity bus; there is no need to implement it at each and every identity provider or service provider. 
    References:

    1. The Internet of Things (The MIT Press Essential Knowledge series), http://www.amazon.com/Internet-Things-Press-Essential-Knowledge/dp/0262527731

    2. Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, http://www.amazon.com/Future-Crimes-Everything-Connected-Vulnerable/dp/0385539002

    3. http://www.programmableweb.com/news/identity-and-access-management-iam-will-greatly-impact-future-connected-car-sales/analysis/2014/08/20

      sanjeewa malalgodaUse OpenID with OAuth 2.0 in WSO2 API Manager to retrieve user information

      When we call an API, we first request an OAuth token, which we then send with each API call.

      The reason for introducing the openid scope on top of OAuth 2.0 is that the OAuth 2.0 access token retrieval process does not provide any additional information about the user who generates the token. OAuth is only an authorization mechanism, and we cannot derive more information about the user from it.
      So, to get more information about the user, we request the access token with the openid scope.

      In that case we pass the openid scope with the token request. We then get a JWT (the id_token) as part of the response from API Manager, which contains user information in addition to the access token and refresh token. So we have all the required user information after the token generation process, and we can use it in the next steps if we need to do anything specific to the user.
      Issue the following command to request an OpenID-based OAuth token.

      Request
      curl -k -d "grant_type=password&username=admin&password=admin&scope=openid" -H "Authorization: Basic M1J6RFNrRFI5ZmQ5czRqY296R2xfVjh0QU5JYTpXeElqSkFJd0dqRWVYOHdHZGFfcGM1Wl94RjRh" -H "Content-Type: application/x-www-form-urlencoded" https://apiw.test.com/token

      Response

      {"scope":"openid","token_type":"Bearer","expires_in":3600,
      "refresh_token":"65af3dbea3294b1524832d3869361e3e",
      "id_token":"eyJhbGciOiJSUzI1NiJ9.eyJhdXRoX3RpbWUiOjE0MzA0NTY4MzM5OTgsImV4cCI6MTQzMDQ2MDQzNDAxNCwic3ViIjoiYWRtaW5AY2FyYm9uLnN1cGVyIiwiYXpwIjoiM1J6RFNrRFI5ZmQ5czRqY296R2xfVjh0QU5JYSIsImF0X2hhc2giOiJNV013WXpreVl6UmxPVGhsTkRNM01XTTVNVFEyTTJWbE0yWXlNamcwWXc9PSIsImF1ZCI6WyIzUnpEU2tEUjlmZDlzNGpjb3pHbF9WOHRBTklhIl0sImlzcyI6Imh0dHBzOlwvXC9sb2NhbGhvc3Q6OTQ0M1wvb2F1dGgyZW5kcG9pbnRzXC90b2tlbiIsImlhdCI6MTQzMDQ1NjgzNDAxNH0.Fc4DO8A22euo04vnBoE87RVBtDQ-73Z2hNZ8_WpeKslkumhEuUVcf6y03D5HZBlGDUi8zC1SUHewg4WEE8HvI6wA59wp8BErK6pY3Zb02pWbJsPh7VBHwky2g5PtvKSsGiy0rd2tuehY-_dAy7LBKNSUOhkmGdLXkSSThuIQxKOHDAJKHCY4I_36B9OH1scs34EG9MKG4vSNdfdcf4mSg0KUD98Jdw_NS-T4pRZK_sCeT-1BBodYEabEVREHxfcDr7BGYugMiiWThVUzd4WIHD83bVwxXP17POzuo6dS_l78pBWZtBBMPKXqhd9VMNZpc-sR07DS7KkHoV6Fp3l0oA",
      "access_token":"1c0c92c4e98e4371c91463ee3f2284c"}

      This response contains the JWT (id_token) as well. We can then invoke the user info API as follows.
      curl -k -v -H "Authorization: Bearer 1c0c92c4e98e4371c91463ee3f2284c" https://km.test.com/oauth2/userinfo?schema=openid

      HTTP/1.1 200 OK
      {"email":"sanjeewa@wso2.com","family_name":"malalgoda"}

      As you can see, the response contains the user's details.
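
      If you only want a quick look at the claims inside the id_token (instead of calling the user info endpoint), you can decode its payload segment locally. The following is a minimal Java sketch; the idToken value is assumed to be the id_token string from the token response, and in a real application the token signature should also be validated.

      import java.nio.charset.StandardCharsets;
      import java.util.Base64;

      public class IdTokenPeek {
          public static void main(String[] args) {
              String idToken = args[0];                        // id_token from the token response
              String[] parts = idToken.split("\\.");           // JWT format: header.payload.signature
              String payload = new String(
                      Base64.getUrlDecoder().decode(parts[1]), // JWT segments are base64url encoded
                      StandardCharsets.UTF_8);
              System.out.println(payload);                     // JSON claims: sub, iss, aud, exp, ...
          }
      }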

      sanjeewa malalgodaConfigure Categorization APIs for store via Tabs

      If you want to see the APIs grouped according to different topics in the API Store, do the following:
      1. Go to the /repository/deployment/server/jaggeryapps/store/site/conf directory, open the site.json file and set the tagWiseMode attribute to true.
      2. Go to the API Publisher and add tags with the suffix "-group" to APIs (e.g., Workflow APIs-group, Integration APIs-group, Quote APIs-group).
      3. Restart the server.
      4. After you publish the APIs, you will see them listed under their groups. You can click on a group to see which APIs are inside it.
      If you want to use a different suffix for grouping, you can do that as well. To do so, change the following property in the site.json configuration file.
      "tagGroupKey" : "-group",
      Here, -group can be replaced with any suffix you like.
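
      For example, the relevant part of site.json could look like the snippet below. This is only a sketch; the exact placement of these properties may differ between API Manager versions, so keep the structure of your existing site.json.

      "tagWiseMode" : true,
      "tagGroupKey" : "-group",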



      Ishara PremadasaMounting WSO2 ESB registry partitions into MySQL

      By default, WSO2 ESB ships with an embedded H2 database where the config and governance partitions of the registry are stored. In most production scenarios it is necessary to externalize these partitions out of the ESB for the convenience of managing registry resources. This post is a step-by-step guide on how to mount the ESB registry partitions into a MySQL database.

      1. Download MySQL JDBC connector
      2. Installing MySQL
      3. Create new registry database
      4. Configure ESB to use externalized MySQL database

      1. Download MySQL JDBC connector
      The JDBC connector for MySQL can be downloaded from the link below. 
      [https://dev.mysql.com/downloads/connector/j/5.0.html]
      Extract the zip file and obtain the mysql-connector-java-5.0.8-bin.jar file.

      2. Installing MySQL

      There are two ways to install MySQL server on Windows 7. One is to download the MSI installer; the other is to download the MySQL ZIP archive.
      I have used option two below as it was quicker. You can download the MySQL "Windows (x86, 64-bit), ZIP Archive" using this link. 
      [https://dev.mysql.com/downloads/mysql/5.5.html#downloads]

      Once downloaded, extract it and go to \mysql-5.5.43-winx64\bin. Run the command below to start the MySQL server. This will start the MySQL service.

      > mysqld.exe

      After that, execute the command below to log in to MySQL as the root user. The default password is empty, so when prompted for a password just press Enter to log in. You can set a new root password later for increased security.

      > mysql.exe -u root -p
      Enter password: [Press Enter here as there is no password to type]
      Welcome to the MySQL monitor.  Commands end with ; or \g.
      Your MySQL connection id is 2
      Server version: 5.5.43 MySQL Community Server (GPL)


      That is all; you can now use any MySQL commands to work with databases here.

      3. Create a new database in the MySQL server, which will be used as the external registry database for WSO2 ESB. I have given it the name reg_db.

      mysql> create database reg_db;
      Query OK, 1 row affected (0.00 sec)
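
      Optionally, instead of connecting as root, you can create a dedicated MySQL user for the registry database and later use its credentials in master-datasources.xml. The user name and password below are just examples.

      mysql> CREATE USER 'regadmin'@'localhost' IDENTIFIED BY 'regadmin';
      mysql> GRANT ALL PRIVILEGES ON reg_db.* TO 'regadmin'@'localhost';
      mysql> FLUSH PRIVILEGES;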


      4. Configure ESB to use externalized MySQL database

      The last step is to point the ESB to the newly created MySQL database. Here we link the ESB with the new database, so the entries written to the config and governance spaces will be stored in the reg_db MySQL database from here onwards.

      4.1  Copy the previously downloaded JDBC connector jar into the ESB_HOME/repository/components/lib folder.
      4.2  Go to ESB_HOME/repository/conf/datasources and open the master-datasources.xml file. Add the entry below for a new datasource. Do not remove any existing entries.
           We have to give the username and password for the database. Since my MySQL password is empty, I have kept <password></password> without any value here.

              <datasource> 
                  <name>WSO2_CARBON_DB_Reg</name> 
                  <description>External DB used for registry and config spaces</description> 
                  <jndiConfig> 
                      <name>jdbc/WSO2CarbonDB_Reg</name> 
                  </jndiConfig> 
                  <definition type="RDBMS"> 
                      <configuration> 
                          <url>jdbc:mysql://localhost:3306/reg_db</url> 
                          <username>root</username> 
                          <password></password> 
                          <driverClassName>com.mysql.jdbc.Driver</driverClassName> 
                          <maxActive>50</maxActive> 
                          <maxWait>60000</maxWait> 
                          <testOnBorrow>true</testOnBorrow> 
                          <validationQuery>SELECT 1</validationQuery> 
                          <validationInterval>30000</validationInterval> 
                      </configuration> 
                  </definition> 
              </datasource>

      4.3 Go to the ESB_HOME/repository/conf/registry.xml file and add a new dbConfig for the remote MySQL database.

          <dbConfig name="mounted_registry"> 
              <dataSource>jdbc/WSO2CarbonDB_Reg</dataSource> 
          </dbConfig>
         
      4.4 Add a new remoteInstance entry to the same registry.xml.

          <remoteInstance url="https://localhost:9443/registry"> 
              <id>instanceid</id> 
              <dbConfig>mounted_registry</dbConfig> 
              <readOnly>false</readOnly> 
              <enableCache>true</enableCache> 
              <registryRoot>/</registryRoot> 
              <cacheId>root@jdbc:mysql://localhost:3306/reg_db</cacheId> 
          </remoteInstance>   

      4.5 Add the mount configurations to registry.xml as below.

          <mount path="/_system/config" overwrite="true"> 
              <instanceId>instanceid</instanceId> 
              <targetPath>/_system/nodes</targetPath> 
          </mount> 
          <mount path="/_system/governance" overwrite="true"> 
              <instanceId>instanceid</instanceId> 
              <targetPath>/_system/governance</targetPath> 
          </mount>
         
      That is all. Now restart the ESB server and go to the Registry menu in the admin console. When you 'Browse' the registry, the icons for the config and governance spaces will look like below.

       



      The blue arrow in the icons means those partitions now point to an external database.

      Now if you go to the MySQL console and look at the tables in the 'reg_db' database, you will see that it is loaded with a list of new tables, as shown below.

      mysql> use reg_db;
      Database changed
      mysql>
      mysql> show tables;
      +-----------------------+
      | Tables_in_reg_db     |
      +-----------------------+
      | reg_association       |
      | reg_cluster_lock      |
      | reg_comment           |
      | reg_content           |
      | reg_content_history   |
      +-----------------------+
      39 rows in set (0.00 sec)

      Hiranya JayathilakaParsing Line-Oriented Text Files Using Go

      The following example demonstrates several features of Golang, such as reading a file line-by-line (with error handling), deferred statements and higher order functions.

      package main

      import (
          "bufio"
          "fmt"
          "os"
      )

      func ParseLines(filePath string, parse func(string) (string, bool)) ([]string, error) {
          inputFile, err := os.Open(filePath)
          if err != nil {
              return nil, err
          }
          defer inputFile.Close()

          scanner := bufio.NewScanner(inputFile)
          var results []string
          for scanner.Scan() {
              if output, add := parse(scanner.Text()); add {
                  results = append(results, output)
              }
          }
          if err := scanner.Err(); err != nil {
              return nil, err
          }
          return results, nil
      }

      func main() {
          if len(os.Args) != 2 {
              fmt.Println("Usage: line_parser <file_path>")
              return
          }

          lines, err := ParseLines(os.Args[1], func(s string) (string, bool) {
              return s, true
          })
          if err != nil {
              fmt.Println("Error while parsing file", err)
              return
          }

          for _, l := range lines {
              fmt.Println(l)
          }
      }
      The ParseLines function takes a path (filePath) to an input file, and a function (parse) that will be applied on each line read from the input file. The parse function should return a [string,boolean] pair, where the boolean value indicates whether the string should be added to the final result of ParseLines or not. The example shows how to simply read and print all the lines of the input file.
      The caller can inject more sophisticated transformation and filtering logic into ParseLines via the parse function. The following example invocation filters out all the strings that do not begin with the prefix "[valid]", and extracts the 3rd field from each line (assuming a simple whitespace-separated line format). Note that this snippet additionally requires the strings package to be imported.

      lines, err := ParseLines(os.Args[1], func(s string) (string, bool) {
          if strings.HasPrefix(s, "[valid] ") {
              return strings.Fields(s)[2], true
          }
          return s, false
      })
      A function like ParseLines is suitable for parsing small to moderately large files. However, if the input file is very large, ParseLines may cause some issues, since it accumulates the results in memory.

      Ajith VitharanaSetup Canon MX450 series scanner in Ubuntu 14.04

      The "Simple Scan" application in Linux doesn't work with the Canon MX450 series, but the ScanGear driver provides a tool to use the scanner on Linux.

      1. Download the debian package archive from here.
      2. Extract the archive.
      >tar -xvf scangearmp-mx450series-2.10-1-deb.tar.gz
      3. Go inside the scangearmp-mx450series-2.10-1-deb directory from a command window.
      4. Change permissions to make install.sh executable.
      > sudo chmod a+x install.sh
      5. Execute install.sh.
      > ./install.sh 
      6. Connect your scanner to the machine via USB and execute the following command to open the GUI.
      > scangearmp & 

      Manoj KumaraCarbon Pluggable Runtime Framework - Part 2

      The Carbon Pluggable Runtime Framework is the core kernel-level module for handling third-party runtime management on a Carbon server. A high-level architecture of the runtime framework is shown in the diagram below.

      Runtime service
      For a runtime to be registered with the Runtime Framework, it should extend the provided Runtime SPI and expose the corresponding runtime service.

      Runtime Service Listener
      This listens to runtime register/unregister events; once a runtime service is registered, the Runtime Service Listener notifies the RuntimeManager.

      Runtime Manager
      This will keep the information about available runtimes.



      Once all the required runtime instances are registered, the Runtime Framework registers the Runtime Service, which provides the utility functionality of the Runtime Framework.

      What is the advantage of using the Pluggable Runtime Framework?

      With the Carbon Pluggable Runtime Framework we can integrate/plug different third-party runtime implementations into the Carbon server. In addition, the framework lets the server manage the registered runtimes in a controlled manner. For example, during server maintenance the underlying framework puts the runtimes into maintenance mode and starts them back up when the server returns to its normal state. The developer or user does not need to know about the underlying process once the runtime is registered with the framework; a rough sketch of such a runtime implementation is given below.
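
      The interface, enum and method names in the sketch (Runtime, RuntimeState, init/start/stop/setState) are illustrative only and are not the exact Carbon kernel SPI; it is meant only to show the shape of a pluggable runtime and how it could be exposed as an OSGi service.

      // Hypothetical names - the real Carbon Runtime SPI may differ.
      public class MyEngineRuntime implements Runtime {

          private RuntimeState state = RuntimeState.PENDING;

          @Override
          public void init() {
              // bootstrap the third-party engine here
              state = RuntimeState.INACTIVE;   // ready to deploy artifacts
          }

          @Override
          public void start() {
              state = RuntimeState.ACTIVE;     // fully functional, serving requests
          }

          @Override
          public void stop() {
              state = RuntimeState.INACTIVE;
          }

          @Override
          public void setState(RuntimeState newState) {
              this.state = newState;           // e.g. MAINTENANCE during server maintenance
          }
      }

      // Expose the runtime as an OSGi service so the Runtime Service Listener can pick it up
      // (bundleContext comes from the bundle's activator or declarative services component):
      bundleContext.registerService(Runtime.class, new MyEngineRuntime(), null);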

      Manoj KumaraIntroduction to CARBON Runtimes - Part 1

      What is a Runtime

      When we consider a runtime, many aspects can be taken into account depending on its capabilities and features. For example, the Apache Axis2 runtime is a web service engine, Apache Tomcat is an implementation of the Java Servlet and JavaServer Pages specifications, and Apache Synapse is a mediation engine. Each of these runtimes has its own features and capabilities. 


      How features of a Runtime can be used

      To expose its features, a runtime can provide different services. For example, the Axis2 runtime has Axis2Deployer, which is responsible for artifact deployment, and Axis2Runtime, which is responsible for the runtime aspect.


      A runtime can be utilized in Carbon by implementing the provided SPIs on the server. For example, by implementing the Deployer SPI, the server can expose its custom deployer in such a way that it is recognised by the Deployment Engine. Likewise, the Runtime SPI can be used to implement the runtime service.


      Carbon Runtime Status

      In the Carbon context, a runtime can be defined as an application-level runtime instance that runs on top of the Carbon OSGi framework. A runtime can have different statuses, depending on the state of the Carbon server. The following diagram shows the available status options of a runtime.



      Pending : The runtime is in an idle state (before runtime initialization).

      Inactive : The runtime has been initialized successfully. It can perform operations such as deploying artifacts.

      Active : After the start() method of the runtime has completed, the runtime is fully functional and can serve requests.

      Maintenance : The runtime is on hold for maintenance work; serving of requests is temporarily suspended while the runtime is in maintenance mode.
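
      These statuses could be modelled, for example, as a simple enum like the one below. The names are illustrative; the actual Carbon kernel classes may differ.

      public enum RuntimeState {
          PENDING,      // idle, before runtime initialization
          INACTIVE,     // initialized; artifacts can be deployed
          ACTIVE,       // start() completed; serving requests
          MAINTENANCE   // temporarily on hold for maintenance work
      }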

      Manoj KumaraGit Branching Model

      In the Git version control system we can maintain multiple branches. For this example let's take the Master branch and the Development branch.
      • Master Branch
      This branch contains the most recently released Carbon Kernel. This is the main branch where the source code of HEAD always reflects a production-ready state.
      • Development Branch 
      This is the main branch where HEAD always reflects a state with the latest ongoing trunk development changes for the next release.

      Start working on a new feature

      When you start to work on a new feature, create a separate feature branch for yourself from the development branch and start working. You can then merge your changes into the development branch once you complete the feature. The following instructions will guide you; a complete end-to-end command sequence is sketched after the table.


      Task                                        | Command                                 | Description
      Commit your changes                         | git commit -a -m "your commit message"  | Commit your changes to your local git repository
      Create a feature branch                     | git checkout -b myfeature development   | Create a new feature branch called myfeature from the development branch
      Delete the finished feature branch          | git branch -d myfeature                 | Delete the feature branch once it is no longer needed
      Incorporate changes into development branch | git checkout development                | Switch to your local development branch, ready to incorporate the feature changes
      Push the changes to development branch      | git push origin development             | Push the changes to the central git repository under the development branch
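
      Putting the table together, a typical feature cycle might look like the sequence below. The merge step uses --no-ff, as recommended by the branching model referenced at the end of this post; the branch names are just examples.

      git checkout -b myfeature development     # create the feature branch
      # ... do the work, then ...
      git commit -a -m "your commit message"    # commit to your local repository
      git checkout development                  # switch back to the development branch
      git merge --no-ff myfeature               # bring the feature into development
      git branch -d myfeature                   # delete the finished feature branch
      git push origin development               # publish to the central repository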


      For more information visit http://nvie.com/posts/a-successful-git-branching-model/

      Ajith VitharanaSAML2 bearer tokens with OAuth2 - (WSO2 API Manager + WSO2 Identity Server)



      1. Download the latest version of WSO2 API Manager (AM)1.8.0 (http://wso2.com/api-management/).

      2. Download the latest version of WSO2 Identity Server(IS) 5.0.0 and apply the Service Pack(http://wso2.com/products/identity-server/).

      3. I'm going to run AM with port offset 0 and IS with port offset 1 (change <Offset> in carbon.xml).

      Change the HostName and MgtHostName in the carbon.xml file of the IS.
      <HostName>is.wso2.com</HostName>
      <MgtHostName>is.wso2.com</MgtHostName>

      4.Change the HostName and MgtHostName in the carbon.xml file of the AM
      <HostName>am.wso2.com</HostName>
      <MgtHostName>am.wso2.com</MgtHostName> 


      Share the user stores and mount registry spaces.


      i. In the default products, AM has a JDBC user store manager and IS has an embedded read-write LDAP server (check the user-mgt.xml file under <HOME>/repository/conf).

      ii. Open the user-mgt.xml file in IS and disable the default ReadWriteLDAPUserStoreManager.

      iii.  Enable the JDBCUserStoreManager.

              <UserStoreManager class="org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager">
                  <Property name="TenantManager">org.wso2.carbon.user.core.tenant.JDBCTenantManager</Property>
                  <Property name="ReadOnly">false</Property>
                  <Property name="MaxUserNameListLength">100</Property>
                  <Property name="IsEmailUserName">false</Property>
                  <Property name="DomainCalculation">default</Property>
                  <Property name="PasswordDigest">SHA-256</Property>
                  <Property name="StoreSaltedPassword">true</Property>
                  <Property name="ReadGroups">true</Property>
                  <Property name="WriteGroups">true</Property>
                  <Property name="UserNameUniqueAcrossTenants">false</Property>
                  <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property>
                  <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property>
                  <Property name="UsernameJavaRegEx">^[^~!#$;%^*+={}\\|\\\\&lt;&gt;,\'\"]{3,30}$</Property>
                  <Property name="UsernameJavaScriptRegEx">^[\S]{3,30}$</Property>
                  <Property name="RolenameJavaRegEx">^[^~!#$;%^*+={}\\|\\\\&lt;&gt;,\'\"]{3,30}$</Property>
                  <Property name="RolenameJavaScriptRegEx">^[\S]{3,30}$</Property>
                  <Property name="UserRolesCacheEnabled">true</Property>
                  <Property name="MaxRoleNameListLength">100</Property>
                  <Property name="MaxUserNameListLength">100</Property>
                  <Property name="SharedGroupEnabled">false</Property>
                  <Property name="SCIMEnabled">false</Property>
              </UserStoreManager>
      iv)  Create a database (shareddb) for the user store and configure it in the master-datasources.xml file.
      <datasource>
                  <name>WSO2_UM_DB</name>
                  <description>The datasource used for registry and user manager</description>
                  <jndiConfig>
                      <name>jdbc/WSO2CarbonDB_SHARE</name>
                  </jndiConfig>
                  <definition type="RDBMS">
                      <configuration>
                          <url>jdbc:mysql://<host>:<port>/shareddb</url>
                          <username>root</username>
                          <password>root</password>
                          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                          <maxActive>50</maxActive>
                          <maxWait>60000</maxWait>
                          <testOnBorrow>true</testOnBorrow>
                          <validationQuery>SELECT 1</validationQuery>
                          <validationInterval>30000</validationInterval>
                      </configuration>
                  </definition>
      </datasource>
      v) Change the JNDI name in dataSource property in user-mgt.xml file.

      <Property name="dataSource">jdbc/WSO2CarbonDB_SHARE</Property> 

      vi) Do steps (iv) and (v) in AM as well.

      vii) Open the registry.xml in IS and add the mount configurations.

      <dbConfig name="wso2registry_shared">
              <dataSource>jdbc/WSO2CarbonDB_SHARE</dataSource>
      </dbConfig>

      <remoteInstance url="https://localhost:9443/registry">
              <id>instanceid</id>
              <dbConfig>wso2registry_shared</dbConfig>
              <readOnly>false</readOnly>
              <enableCache>true</enableCache>
              <registryRoot>/</registryRoot>
              <cacheId>root@jdbc:mysql://<host>:<port>/shareddb</cacheId>
      </remoteInstance>
      <mount path="/_system/config" overwrite="true">
              <instanceId>instanceid</instanceId>
              <targetPath>/_system/isnodes</targetPath>
      </mount>
      <mount path="/_system/governance" overwrite="true">
              <instanceId>instanceid</instanceId>
              <targetPath>/_system/governance</targetPath>
      </mount>

      viii) Open the registry.xml of the AM and add the mount configurations.

        <dbConfig name="wso2registry_shared">
              <dataSource>jdbc/WSO2CarbonDB_SHARE</dataSource>
          </dbConfig>

      <remoteInstance url="https://localhost:9443/registry">
              <id>instanceid</id>
              <dbConfig>wso2registry_shared</dbConfig>
              <readOnly>false</readOnly>
              <enableCache>true</enableCache>
              <registryRoot>/</registryRoot>
             <cacheId>root@jdbc:mysql://<host>:<port>/shareddb</cacheId>
      </remoteInstance>
      <mount path="/_system/config" overwrite="true">
              <instanceId>instanceid</instanceId>
              <targetPath>/_system/amnodes</targetPath>
      </mount>
      <mount path="/_system/governance" overwrite="true">
              <instanceId>instanceid</instanceId>
              <targetPath>/_system/governance</targetPath>
      </mount>


      (You need to start the servers with -Dsetup parameter to generate tables in shareddb.)

      Configure Identity Provider 


      1.  Log in to the AM management console and add WSO2 IS as an identity provider.

      i) You need the public certificate of the Identity Server. You can export it from the default wso2carbon.jks file using the following keytool command.

      Go to the wso2is-5.0.0/repository/resources/security directory in a command window and execute the command below.
      keytool -export -alias wso2carbon  -keystore wso2carbon.jks -storepass wso2carbon -file carbonjks.pem 
      Identity provider Name*               = WSO2_IS
      Identity Provider Public Certificate* = Browse and select the carbonjks.pem file
      Enable SAML2 with SSO   = checked
      Identity Provider Entity Id = WSO2_IDP
      Service Provider Entity Id = WSO2_AM
      SSO URL = https://is.wso2.com:9444/samlsso
      Alias = https://am.wso2.com:9443/oauth2/token



      Configure Service Providers

       


      1. Open the identity.xml in IS (wso2is-5.0.0/repository/conf), and change the following two parameters as below.
      <EntityId>WSO2_IDP</EntityId>
      <IdentityProviderURL>https://is.wso2.com:9444/samlsso</IdentityProviderURL>
      2. Enable the DEBUG logs in IS to capture the SAML response. Open the log4j.properties file in wso2is-5.0.0/repository/conf and add the following DEBUG package.
      log4j.logger.org.wso2.carbon.identity=DEBUG
      3. Restart the IS server.

      4. Log in to the Identity Server and add the API Store as a service provider.
      Service Provider Name = AM_STORE 


      5. Configure the "SAML2 web SSO  Configuration".



      Assertion Consumer URL                     = https://am.wso2.com:9443/store/jagg/jaggery_acs.jag
      Use fully qualified username in the NameID = checked
      Enable Response Signing  = checked
      Enable Assertion Signing  = checked
      Enable Single Logout = checked
      Enable Audience Restriction = checked 
      Add the Audience = https://am.wso2.com:9443/oauth2/token
      Enable Recipient Validation                = checked
      Add the Recipient                          = https://am.wso2.com:9443/oauth2/token 

      6. Log in to the Identity Server and add the management console as a service provider.

      Assertion Consumer URL                     = https://am.wso2.com:9443/acs
      Use fully qualified username in the NameID = checked
      Enable Response Signing  = checked
      Enable Assertion Signing  = checked
      Enable Single Logout = checked
      Enable Audience Restriction = checked 
      Add the Audience = https://am.wso2.com:9443/oauth2/token
      Enable Recipient Validation                = checked
      Add the Recipient                          = https://am.wso2.com:9443/oauth2/token




      Configure SAML2SSOAuthenticator in AM

       

      1.  Open the authenticators.xml file under wso2am-1.8.0/repository/conf/security and configure the SAML2SSOAuthenticator as below.
      <Authenticator name="SAML2SSOAuthenticator" disabled="false">
              <Priority>0</Priority>
              <Config>
                  <Parameter name="LoginPage">/carbon/admin/login.jsp</Parameter>
                  <Parameter name="ServiceProviderID">WSO2_AM</Parameter>
                  <Parameter name="IdentityProviderSSOServiceURL">https://is.wso2.com:9444/samlsso</Parameter>
                  <Parameter name="NameIDPolicyFormat">urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</Parameter>

                  <!-- <Parameter name="IdPCertAlias">wso2carbon</Parameter> -->
                  <!-- <Parameter name="ResponseSignatureValidationEnabled">false</Parameter> -->
                  <!-- <Parameter name="LoginAttributeName"></Parameter> -->
                  <!-- <Parameter name="RoleClaimAttribute"></Parameter> -->
                  <!-- <Parameter name="AttributeValueSeparator">,</Parameter> -->

                  <!-- <Parameter name="JITUserProvisioning">true</Parameter> -->
                  <!-- <Parameter name="ProvisioningDefaultUserstore">PRIMARY</Parameter> -->
                  <!-- <Parameter name="ProvisioningDefaultRole">admin</Parameter> -->
                  <!-- <Parameter name="IsSuperAdminRoleRequired">true</Parameter> -->
              </Config>

              <!-- If this authenticator should skip any URI from authentication, specify it under "SkipAuthentication"
              <SkipAuthentication>
                  <UrlContains></UrlContains>
              </SkipAuthentication> -->

              <!-- If this authenticator should skip any URI from session validation, specify it under "SkipAuthentication
              <SkipSessionValidation>
                  <UrlContains></UrlContains>
              </SkipSessionValidation> -->
          </Authenticator>

      2. Open the site.json file under wso2am-1.8.0/repository/deployment/server/jaggeryapps/store/site/conf and configure the ssoConfiguration as below for the store application.

              "enabled" : "true",
              "issuer" : "API_STORE",
              "identityProviderURL" : "https://is.wso2.com:9444/samlsso",
              "keyStorePassword" : "wso2carbon",
              "identityAlias" : "wso2carbon",
              "responseSigningEnabled":"true",
              "keyStoreName" :"/home/ajith/wso2am-1.8.0/repository/resources/security/wso2carbon.jks"

      3. Restart the AM server.

      Generate SAML Assertion


      1. When you try to access the store URL (https://am.wso2.com:9443/store), you will be redirected to the Identity Server.


      2. When you log in to the store, you should see the SAML response (base64 encoded) printed as a debug log on the Identity Server side (in the wso2carbon.log file).


      3. You can decode that base64-encoded text to get the SAML response. (https://www.base64decode.org/)

      <?xml version="1.0" encoding="UTF-8"?>
      <saml2p:Response Destination="https://am.wso2.com:9443/store/jagg/jaggery_acs.jag" ID="kmiklfhjnmildgbdeaopcghdnkighhhplmlddffb" InResponseTo="dnhgclfhfjjlllnmjfboklkkeeeijcdomhacjgfc" IssueInstant="2015-05-03T04:08:24.643Z" Version="2.0" xmlns:saml2p="urn:oasis:names:tc:SAML:2.0:protocol"><saml2:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion">WSO2_IDP</saml2:Issuer><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI="#kmiklfhjnmildgbdeaopcghdnkighhhplmlddffb"><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>faIOp/bxRf9Qe2GcT1cMIps77n0=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>Jac8fOJKdb4UOxvYd8McjidNGmTH2HKMUiaPWN0551xvIFCTLCoR4iD3tYxLpdHJJpJGznKOZFN5NwHYA9d7S1oH7L4HDfhf4LqBww+538glSwCxGTpIA07sOsozCGCgP41QXcMugqJanP252rTUQJD+fUnJHpuxaPMxEJ5hy0E=</ds:SignatureValue><ds:KeyInfo><ds:X509Data><ds:X509Certificate>MIICNTCCAZ6gAwIBAgIES343gjANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxDTALBgNVBAoMBFdTTzIxEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xMDAyMTkwNzAyMjZaFw0zNTAyMTMwNzAyMjZaMFUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNTW91bnRhaW4gVmlldzENMAsGA1UECgwEV1NPMjESMBAGA1UEAwwJbG9jYWxob3N0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCUp/oV1vWc8/TkQSiAvTousMzOM4asB2iltr2QKozni5aVFu818MpOLZIr8LMnTzWllJvvaA5RAAdpbECb+48FjbBe0hseUdN5HpwvnH/DW8ZccGvk53I6Orq7hLCv1ZHtuOCokghz/ATrhyPq+QktMfXnRS4HrKGJTzxaCcU7OQIDAQABoxIwEDAOBgNVHQ8BAf8EBAMCBPAwDQYJKoZIhvcNAQEFBQADgYEAW5wPR7cr1LAdq+IrR44iQlRG5ITCZXY9hI0PygLP2rHANh+PYfTmxbuOnykNGyhM6FjFLbW2uZHQTY1jMrPprjOrmyK5sjJRO4d1DeGHT/YnIjs9JogRKv4XHECwLtIVdAbIdWHEtVZJyMSktcyysFcvuhPQK8Qc/E/Wq8uHSCo=</ds:X509Certificate></ds:X509Data></ds:KeyInfo></ds:Signature><saml2p:Status><saml2p:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></saml2p:Status><saml2:Assertion ID="fecbjhnbjladdihfokcopndojkpbaddhbofnmckd" IssueInstant="2015-05-03T04:08:24.643Z" Version="2.0" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"><saml2:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">WSO2_IDP</saml2:Issuer><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI="#fecbjhnbjladdihfokcopndojkpbaddhbofnmckd"><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></ds:Transforms><ds:DigestMethod 
Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>nFwyKCsaek+M2IgXv6cBKhKkxl0=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>QheQIxioLpDwMg5yQtjHT38eATc7ldte4vHCduNxw3fXNarRiaSAktZpRflPLbFYjaGt7wWu5LTypT54AsiKfGilbc25bkB6BKIBbxbfucpSIHKW1qYbUPmw4QYFccv4DCwj+PffbSR5MgqO94n/0LoF3ExqHa4+tb+kIO0sQb4=</ds:SignatureValue><ds:KeyInfo><ds:X509Data><ds:X509Certificate>MIICNTCCAZ6gAwIBAgIES343gjANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxDTALBgNVBAoMBFdTTzIxEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xMDAyMTkwNzAyMjZaFw0zNTAyMTMwNzAyMjZaMFUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwNTW91bnRhaW4gVmlldzENMAsGA1UECgwEV1NPMjESMBAGA1UEAwwJbG9jYWxob3N0MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCUp/oV1vWc8/TkQSiAvTousMzOM4asB2iltr2QKozni5aVFu818MpOLZIr8LMnTzWllJvvaA5RAAdpbECb+48FjbBe0hseUdN5HpwvnH/DW8ZccGvk53I6Orq7hLCv1ZHtuOCokghz/ATrhyPq+QktMfXnRS4HrKGJTzxaCcU7OQIDAQABoxIwEDAOBgNVHQ8BAf8EBAMCBPAwDQYJKoZIhvcNAQEFBQADgYEAW5wPR7cr1LAdq+IrR44iQlRG5ITCZXY9hI0PygLP2rHANh+PYfTmxbuOnykNGyhM6FjFLbW2uZHQTY1jMrPprjOrmyK5sjJRO4d1DeGHT/YnIjs9JogRKv4XHECwLtIVdAbIdWHEtVZJyMSktcyysFcvuhPQK8Qc/E/Wq8uHSCo=</ds:X509Certificate></ds:X509Data></ds:KeyInfo></ds:Signature><saml2:Subject><saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">admin</saml2:NameID><saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml2:SubjectConfirmationData InResponseTo="dnhgclfhfjjlllnmjfboklkkeeeijcdomhacjgfc" NotOnOrAfter="2015-05-03T04:13:24.643Z" Recipient="https://am.wso2.com:9443/store/jagg/jaggery_acs.jag"/></saml2:SubjectConfirmation><saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml2:SubjectConfirmationData InResponseTo="dnhgclfhfjjlllnmjfboklkkeeeijcdomhacjgfc" NotOnOrAfter="2015-05-03T04:13:24.643Z" Recipient="https://am.wso2.com:9443/oauth2/token"/></saml2:SubjectConfirmation></saml2:Subject><saml2:Conditions NotBefore="2015-05-03T04:08:24.643Z" NotOnOrAfter="2015-05-03T04:13:24.643Z"><saml2:AudienceRestriction><saml2:Audience>API_STORE</saml2:Audience><saml2:Audience>https://am.wso2.com:9443/oauth2/token</saml2:Audience></saml2:AudienceRestriction></saml2:Conditions><saml2:AuthnStatement AuthnInstant="2015-05-03T04:08:24.644Z" SessionIndex="79e867d8-0aee-488e-b3e1-6c2db01a256d"><saml2:AuthnContext><saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml2:AuthnContextClassRef></saml2:AuthnContext></saml2:AuthnStatement></saml2:Assertion></saml2p:Response>

       

      Create Application and  Generate Consumer Key/Secret


      1. Create a new application (e.g., TestApp) and subscribe to an API with that application.



       

       Generate OAuth Token -Type1



      1. Now you can generate an OAuth token using the command below.

      curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=<Base 64 URL encoded SAML assertion>&scope=PRODUCTION" -H "Authorization: Basic <base64Encode(consumer Key:consumer Secret)>" -H "Content-Type: application/x-www-form-urlencoded" https://am.wso2.com:9443/oauth2/token

      <Base 64 URL encoded SAML assertion>

      You can get the SAML response as mentioned in the steps above. Then capture only the SAML assertion (the <saml2:Assertion> element) and base64 URL-encode it, for example using http://kjur.github.io/jsjws/tool_b64uenc.html

      <base64encode(consumer Key:consumer Secret)>

      You can use this site (https://www.base64encode.org/) to encode consumer Key:consumer Secret, or do the encoding from the command line as sketched below.
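
      For example, on a Linux machine, something like the commands below should work; assertion.xml is assumed to contain just the extracted <saml2:Assertion> element.

      # base64url-encode the SAML assertion (URL-safe alphabet, padding removed)
      openssl base64 -A -in assertion.xml | tr '+/' '-_' | tr -d '='

      # plain base64-encode consumerKey:consumerSecret for the Authorization header
      echo -n "consumerKey:consumerSecret" | openssl base64 -A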
      eg

      curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=PHNhbWwyOkFzc2VydGlvbiBJRD0iZHBvcG1qb3BoamlhY2hma29mZmhwb2htbG9uam9vZmNtanBmcHBibSIgSXNzdWVJbnN0YW50PSIyMDE1LTA1LTAzVDA0OjMxOjAxLjg5NFoiIFZlcnNpb249IjIuMCIgeG1sbnM6c2FtbDI9InVybjpvYXNpczpuYW1lczp0YzpTQU1MOjIuMDphc3NlcnRpb24iPjxzYW1sMjpJc3N1ZXIgRm9ybWF0PSJ1cm46b2FzaXM6bmFtZXM6dGM6U0FNTDoyLjA6bmFtZWlkLWZvcm1hdDplbnRpdHkiPldTTzJfSURQPC9zYW1sMjpJc3N1ZXI-PGRzOlNpZ25hdHVyZSB4bWxuczpkcz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC8wOS94bWxkc2lnIyI-PGRzOlNpZ25lZEluZm8-PGRzOkNhbm9uaWNhbGl6YXRpb25NZXRob2QgQWxnb3JpdGhtPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxLzEwL3htbC1leGMtYzE0biMiLz48ZHM6U2lnbmF0dXJlTWV0aG9kIEFsZ29yaXRobT0iaHR0cDovL3d3dy53My5vcmcvMjAwMC8wOS94bWxkc2lnI3JzYS1zaGExIi8-PGRzOlJlZmVyZW5jZSBVUkk9IiNkcG9wbWpvcGhqaWFjaGZrb2ZmaHBvaG1sb25qb29mY21qcGZwcGJtIj48ZHM6VHJhbnNmb3Jtcz48ZHM6VHJhbnNmb3JtIEFsZ29yaXRobT0iaHR0cDovL3d3dy53My5vcmcvMjAwMC8wOS94bWxkc2lnI2VudmVsb3BlZC1zaWduYXR1cmUiLz48ZHM6VHJhbnNmb3JtIEFsZ29yaXRobT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS8xMC94bWwtZXhjLWMxNG4jIi8-PC9kczpUcmFuc2Zvcm1zPjxkczpEaWdlc3RNZXRob2QgQWxnb3JpdGhtPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwLzA5L3htbGRzaWcjc2hhMSIvPjxkczpEaWdlc3RWYWx1ZT5HcXdvWUJGVFZZL29zVEg0ZVV6cXM3dVFwZ2M9PC9kczpEaWdlc3RWYWx1ZT48L2RzOlJlZmVyZW5jZT48L2RzOlNpZ25lZEluZm8-PGRzOlNpZ25hdHVyZVZhbHVlPk1NdkNTOElUR205MDZGRzEzakdRdFNlb0h2VWlDb1I5SmYrTXBRMVlnQVk2WVdaVjhtY3p2M0dKdEx4N2xYRmtQdkFhZENlNU1xMTM1L29hWklydEhQL25TaFpnTDN1bkhLaWJWdU1odnZXbE9tN3hIM0ZRTFRCNFk5aTEyZmlrK1Ryc1lOeHhnT3JLQjkrTkJnTTgrY2J2cDFPR2x5K2dvcW04ZzNZVzJrVT08L2RzOlNpZ25hdHVyZVZhbHVlPjxkczpLZXlJbmZvPjxkczpYNTA5RGF0YT48ZHM6WDUwOUNlcnRpZmljYXRlPk1JSUNOVENDQVo2Z0F3SUJBZ0lFUzM0M2dqQU5CZ2txaGtpRzl3MEJBUVVGQURCVk1Rc3dDUVlEVlFRR0V3SlZVekVMTUFrR0ExVUVDQXdDUTBFeEZqQVVCZ05WQkFjTURVMXZkVzUwWVdsdUlGWnBaWGN4RFRBTEJnTlZCQW9NQkZkVFR6SXhFakFRQmdOVkJBTU1DV3h2WTJGc2FHOXpkREFlRncweE1EQXlNVGt3TnpBeU1qWmFGdzB6TlRBeU1UTXdOekF5TWpaYU1GVXhDekFKQmdOVkJBWVRBbFZUTVFzd0NRWURWUVFJREFKRFFURVdNQlFHQTFVRUJ3d05UVzkxYm5SaGFXNGdWbWxsZHpFTk1Bc0dBMVVFQ2d3RVYxTlBNakVTTUJBR0ExVUVBd3dKYkc5allXeG9iM04wTUlHZk1BMEdDU3FHU0liM0RRRUJBUVVBQTRHTkFEQ0JpUUtCZ1FDVXAvb1YxdldjOC9Ua1FTaUF2VG91c016T000YXNCMmlsdHIyUUtvem5pNWFWRnU4MThNcE9MWklyOExNblR6V2xsSnZ2YUE1UkFBZHBiRUNiKzQ4RmpiQmUwaHNlVWRONUhwd3ZuSC9EVzhaY2NHdms1M0k2T3JxN2hMQ3YxWkh0dU9Db2tnaHovQVRyaHlQcStRa3RNZlhuUlM0SHJLR0pUenhhQ2NVN09RSURBUUFCb3hJd0VEQU9CZ05WSFE4QkFmOEVCQU1DQlBBd0RRWUpLb1pJaHZjTkFRRUZCUUFEZ1lFQVc1d1BSN2NyMUxBZHErSXJSNDRpUWxSRzVJVENaWFk5aEkwUHlnTFAyckhBTmgrUFlmVG14YnVPbnlrTkd5aE02RmpGTGJXMnVaSFFUWTFqTXJQcHJqT3JteUs1c2pKUk80ZDFEZUdIVC9ZbklqczlKb2dSS3Y0WEhFQ3dMdElWZEFiSWRXSEV0VlpKeU1Ta3RjeXlzRmN2dWhQUUs4UWMvRS9XcTh1SFNDbz08L2RzOlg1MDlDZXJ0aWZpY2F0ZT48L2RzOlg1MDlEYXRhPjwvZHM6S2V5SW5mbz48L2RzOlNpZ25hdHVyZT48c2FtbDI6U3ViamVjdD48c2FtbDI6TmFtZUlEIEZvcm1hdD0idXJuOm9hc2lzOm5hbWVzOnRjOlNBTUw6MS4xOm5hbWVpZC1mb3JtYXQ6ZW1haWxBZGRyZXNzIj5hZG1pbjwvc2FtbDI6TmFtZUlEPjxzYW1sMjpTdWJqZWN0Q29uZmlybWF0aW9uIE1ldGhvZD0idXJuOm9hc2lzOm5hbWVzOnRjOlNBTUw6Mi4wOmNtOmJlYXJlciI-PHNhbWwyOlN1YmplY3RDb25maXJtYXRpb25EYXRhIEluUmVzcG9uc2VUbz0iZ2RkbmZsb2dpcGVscGVvaGdjYmVnZGlsZmRpaWtsZWlqZGZvb2hvaCIgTm90T25PckFmdGVyPSIyMDE1LTA1LTAzVDA0OjM2OjAxLjg5NFoiIFJlY2lwaWVudD0iaHR0cHM6Ly9hbS53c28yLmNvbTo5NDQzL3N0b3JlL2phZ2cvamFnZ2VyeV9hY3MuamFnIi8-PC9zYW1sMjpTdWJqZWN0Q29uZmlybWF0aW9uPjxzYW1sMjpTdWJqZWN0Q29uZmlybWF0aW9uIE1ldGhvZD0idXJuOm9hc2lzOm5hbWVzOnRjOlNBTUw6Mi4wOmNtOmJlYXJlciI-PHNhbWwyOlN1YmplY3RDb25maXJtYXRpb25EYXRhIEluUmVzcG9uc2VUbz0iZ2RkbmZsb2dpcGVscGVvaGdjYmVnZGlsZmRpaWtsZWlqZGZvb2hvaCIgTm90T25PckFmdGVyPSIyMDE1LTA1LTAzVDA0OjM2OjAxLjg5NFoiIFJlY2lwaWVudD0iaHR0cHM6Ly
9hbS53c28yLmNvbTo5NDQzL29hdXRoMi90b2tlbiIvPjwvc2FtbDI6U3ViamVjdENvbmZpcm1hdGlvbj48L3NhbWwyOlN1YmplY3Q-PHNhbWwyOkNvbmRpdGlvbnMgTm90QmVmb3JlPSIyMDE1LTA1LTAzVDA0OjMxOjAxLjg5NFoiIE5vdE9uT3JBZnRlcj0iMjAxNS0wNS0wM1QwNDozNjowMS44OTRaIj48c2FtbDI6QXVkaWVuY2VSZXN0cmljdGlvbj48c2FtbDI6QXVkaWVuY2U-QVBJX1NUT1JFPC9zYW1sMjpBdWRpZW5jZT48c2FtbDI6QXVkaWVuY2U-aHR0cHM6Ly9hbS53c28yLmNvbTo5NDQzL29hdXRoMi90b2tlbjwvc2FtbDI6QXVkaWVuY2U-PC9zYW1sMjpBdWRpZW5jZVJlc3RyaWN0aW9uPjwvc2FtbDI6Q29uZGl0aW9ucz48c2FtbDI6QXV0aG5TdGF0ZW1lbnQgQXV0aG5JbnN0YW50PSIyMDE1LTA1LTAzVDA0OjMxOjAxLjg5NFoiIFNlc3Npb25JbmRleD0iZGJhMTYwMDktYTA4Zi00MzQ5LWE1NzAtMDdjYzMzZDlkOWUwIj48c2FtbDI6QXV0aG5Db250ZXh0PjxzYW1sMjpBdXRobkNvbnRleHRDbGFzc1JlZj51cm46b2FzaXM6bmFtZXM6dGM6U0FNTDoyLjA6YWM6Y2xhc3NlczpQYXNzd29yZDwvc2FtbDI6QXV0aG5Db250ZXh0Q2xhc3NSZWY-PC9zYW1sMjpBdXRobkNvbnRleHQ-PC9zYW1sMjpBdXRoblN0YXRlbWVudD48L3NhbWwyOkFzc2VydGlvbj4&scope=PRODUCTION" -H "Authorization: Basic RmdwUktWTnVMVmpRR2twWDNmZnNwazhNVllzYTo5ZUg0VmtiWXd4RkVqcjhKX25ra0k5dTR6TWNh, Content-Type: application/x-www-form-urlencoded" https://am.wso2.com:9443/oauth2/token

      The output would be;

      {"scope":"PRODUCTION","token_type":"Bearer","expires_in":3299,"refresh_token":"9b1354db82ce8a2eaf2b66ff965a3da","access_token":"74cac5b8259d15f39c4be9352655a969"}




       Generate OAuth Token -Type2

       

      1. You can use this tool to get the base64 URL-encoded SAML assertion and generate the OAuth token. (This tool was developed by the WSO2 IS team.)

      2. Unzip the SAML2AssertionCreator.zip file and execute the following command inside that SAML2AssertionCreator directory.
      java -jar SAML2AssertionCreator.jar <Identity_Provider_Entity_Id> <user_name> <recipient> <requested_audience> <Identity_Provider_JKS_file> <Identity_Provider_JKS_password> <Identity_Provider_certificate_alias>  <private_key_password>

      eg:

      java -jar SAML2AssertionCreator.jar WSO2_IDP admin https://am.wso2.com:9443/oauth2/token https://am.wso2.com:9443/oauth2/token /home/ajith/wso2/blog/wso2is-5.0.0/repository/resources/security/wso2carbon.jks wso2carbon wso2carbon wso2carbon

      3. Now you will get the base64 URL-encoded SAML2 assertion.

      4. Execute the following command to get OAuth token.

      curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=<base64 URL encoded SAML Assertion>&scope=PRODUCTION" -H "Authorization: Basic <base64Encode(ConsumerKey:ConsumerSecret)>" -H "Content-Type: application/x-www-form-urlencoded" https://am.wso2.com:9443/oauth2/token



      Mohanadarshan VivekanandalingamChange Admin Console Url in WSO2 Servers

      Let’s say that I need to change the WSO2 Server admin console url to something like below,

      https://localhost:9443/random-text/

By default, all Carbon servers have the following management console URL: https://localhost:9443/carbon/

Here, we have the option of changing the 'WebContextRoot' value. If we provide '/mohan' as the value for the web context root, then the management console URL will be as below:

      https://10.100.0.44:9443/mohan/carbon/

      We can change the ‘WebContextRoot’ value in carbon.xml (which is available in <ESB_HOME>/repository/conf/ folder) by changing the below entry

      <WebContextRoot>/</WebContextRoot>

      as

      <WebContextRoot>/mohan</WebContextRoot>

However, we cannot completely remove the 'carbon' suffix from the URL. This value is used in the code and cannot be removed. But we can put a proxy server such as nginx in front of the server and mask the URL (see [1], which describes a similar requirement).

      [1] http://udarakr.blogspot.com/2014/05/fronting-wso2-management-console-ui.html


      Mohanadarshan VivekanandalingamSolution for ORA-01450: maximum key length (6398) exceeded exception when running WSO2 IS in Oracle

The above exception can occur in an Oracle database that uses the AL32UTF8 character set. In Oracle, the maximum key length is approximately 40% of the database block size minus some overhead; if the block size is 8K, the maximum key length is 3218.

AL32UTF8 is a multi-byte character set that uses up to 4 bytes per character. So a single column defined as 1000 characters could hold up to 4000 bytes.

In WSO2 Identity Server, there is a table called IDN_OAUTH2_ACCESS_TOKEN which has a unique constraint over six columns. Removing the unique key might cause issues when token entries are added at high frequency. The possible solutions are given below:

1) Reduce the TOKEN_SCOPE column size to 255 and the CONSUMER_KEY column size to 40 and try again.
2) Increase the database block size.
3) Change the character set of the database from AL32UTF8 to a single-byte character set such as WE8MSWIN1252.

      More reference,
      [1] http://www.dba-oracle.com/t_ora_01450_maximum_key_length_exceeded.htm


      Mohanadarshan VivekanandalingamChange Store application as default in APIM

There is a common requirement to redirect to the Store application when http://example.com:9443/ is typed. Currently it redirects to the Publisher application by default.

This can be done by changing the component.xml of the org.wso2.am.styles_x.x.x.jar in AM_HOME/repository/components/plugins. By default it contains the following, which points the default-context to the publisher. There you can change it to "store".

      <context>
      <context-id>default-context</context-id>
      <context-name>publisher</context-name>
      <protocol>http</protocol>
      <description>API Publisher Default Context</description>
      </context>
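For example, after the change the context element would look like the following (the description text here is only illustrative):

<context>
<context-id>default-context</context-id>
<context-name>store</context-name>
<protocol>http</protocol>
<description>API Store Default Context</description>
</context>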
      
      

      directory_structure

Please make sure that your modified jar is created according to the correct directory structure (see the attached image). It is better to apply the modified jar as a patch to the API Manager rather than replacing the existing jar in the plugins directory.


      Mohanadarshan VivekanandalingamConverting xml response coming from backend to json in APIM

The above requirement is straightforward. You can achieve it with a simple change in the synapse configuration.

In the response path (the outSequence of the API), add the below property to change the message type from XML to JSON.

      <property name="messageType" scope="axis2" value="application/json"/>
      

      Eg : –

If you create an API called test1 and publish it through the Publisher, the corresponding configuration file will be deployed in the <APIM_HOME>/repository/deployment/server/synapse-configs/default/api directory as admin--test1_v1.0.0.xml. Then you can change the outSequence as follows.

      </inSequence>
      <outSequence>
      <property name="messageType" scope="axis2" value="application/json"/>
      <send/>
      </outSequence>
      </resource>
      

      Chanaka FernandoPerformance Tuning WSO2 ESB with a practical example

WSO2 ESB is arguably the highest performing open source ESB available in the industry. With the default settings, it provides very good performance out of the box. You can find the official performance tuning guide at the link below.

      https://docs.wso2.com/display/ESB481/Performance+Tuning

The above document covers a lot of parameters and how to change them for best performance. Even though it provides some recommended values, tuning is not straightforward: the values you put in the configuration files depend heavily on your use case. The idea of this blog post is to provide a practical example of tuning these parameters with a sample use case.

For the performance testing, I am using a simple proxy service which iterates through a set of XML elements, sends a request per element to a sample server, and aggregates the responses back to the client.

      <?xml version="1.0" encoding="UTF-8"?>
      <proxy xmlns="http://ws.apache.org/ns/synapse"
             name="SplitAggregateProxy"
             startOnLoad="true">
         <target>
            <inSequence>
               <iterate xmlns:m0="http://services.samples"
                        preservePayload="true"
                        attachPath="//m0:getQuote"
                        expression="//m0:getQuote/m0:request">
                  <target>
                     <sequence>
                        <send>
                           <endpoint>
                              <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                           </endpoint>
                        </send>
                     </sequence>
                  </target>
               </iterate>
            </inSequence>
            <outSequence>
               <aggregate>
                  <completeCondition>
                     <messageCount/>
                  </completeCondition>
                  <onComplete xmlns:m0="http://services.samples" expression="//m0:getQuoteResponse">
                     <send/>
                  </onComplete>
               </aggregate>
            </outSequence>
         </target>
      </proxy>

The reason for selecting this type of proxy is that it allows us to exercise the different thread groups within the WSO2 ESB. The request used for this proxy is given below.

      <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.samples" xmlns:xsd="http://services.samples/xsd">
         <soapenv:Header/>
         <soapenv:Body>
            <ser:getQuote>
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                 <!--Optional:-->
               <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>  
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>    
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                 <!--Optional:-->
               <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>  
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>
                <ser:request>
                  <!--Optional:-->
                  <xsd:symbol>IBM</xsd:symbol>
               </ser:request>        
            </ser:getQuote>    
         </soapenv:Body>
      </soapenv:Envelope>

We have used Apache JMeter as the test client, with a Thread Group created using the following configuration.

      Number of Threads (Users) - 50
      Ramp up period (seconds) - 10
      Loop count - 200
      Total requests - 10000

With this information in hand, we have run WSO2 ESB 4.8.1 on an Ubuntu Linux machine with 8 GB of memory and a 4-core CPU.

      First, let's talk about the parameters which we have used to carry out this performance test.

      (ESB_HOME/bin/wso2server.sh)
      Memory (Xmx) - Maximum heap memory allocated for JVM. We kept both Xmx and Xms the same.

      (ESB_HOME/repository/conf/synapse.properties)
      synapse.threads.core - Core number of threads allocated for ThreadPoolExecutor used for creating threads for iterate mediator executions

      synapse.threads.qlen - Task queue length used for ThreadPoolExecutor used for creating threads for iterate mediator executions

      (ESB_HOME/repository/conf/passthru-http.properties)
      worker_pool_size_core - Core number of threads allocated for ThreadPoolExecutor used for processing incoming requests to the ESB

      io_buffer_size - Size of the memory buffer used for reading/writing data from NIO channels


Performance tests were carried out while monitoring the server with the Java Mission Control tool. The server load was kept at a healthy level of around 60-70% CPU usage and a load average of around 3-4 (on 4 cores). No GC-related issues were observed during this testing.

      You can read the following article to get an understanding about the performance of a server and the methods for tuning servers.

      http://www.infoq.com/articles/Tuning-Java-Servers

We have captured both the latency and the TPS value to monitor the performance of the server.


      Performance variation with Memory allocation

Theoretically, the performance of the server should increase with the allocated memory. By performance, we consider both the latency and the TPS value of the server. According to the results below, we can see that TPS increases with memory and latency drops (i.e., performance improves).






      Performance variation with number of Server Worker/Client Worker Threads

WSO2 ESB uses a ThreadPoolExecutor to create threads when there is data to be processed from client requests. The worker_pool_size_core parameter controls the number of core threads for this executor pool. By increasing the thread pool, we would expect to see a performance improvement. According to the graphs below, latency is reduced and TPS is slightly improved with this parameter (performance increases with the number of threads).

















      Performance variation with Synapse Worker Threads count
When the iterate or clone mediator is used within the ESB, a separate thread pool is used to create new threads when there is data to be iterated and processed. The size of this thread pool is configured with the parameter synapse.threads.core. By increasing this value, we would expect better performance when the iterate mediator is used. According to the test results, performance increases when the value is changed from 20 to 100. Beyond that, we see some performance degradation: as the number of threads in the system grows, the overhead of the operating system's thread scheduler starts to hurt performance.




      Performance variation with Synapse Worker Queue length

When the iterate or clone mediator is used within the ESB, a separate thread pool is used to create new threads when there is data to be iterated and processed. We can configure the task queue length of this thread pool with the synapse.threads.qlen parameter. With a finite queue length, the pool creates new threads beyond the core size only when all core threads are busy and the task queue is full; this is the only time the max value of the thread pool is used. If the queue length is infinite (-1), the max value is never used and there will only be the core number of threads at any given time. According to the results, we see increased performance when the queue length has a finite value. One thing to note is that, with a limited queue length, requests can be rejected when the task queue is full and all the threads are occupied, so you need to make this decision according to your actual load. Another thing to note is that if thread blocking occurs while the queue length is infinite, the server can run into an out-of-memory (OOM) situation.






      Performance variation with IO Buffer size
The IO buffer size parameter (io_buffer_size) decides the size of the memory buffer allocated for reading data into memory from the underlying socket/file channels. This parameter can be configured according to the average payload size. From the results we observed, we cannot come to a clear conclusion for this scenario, since the request/response size was less than 4k during this testing.











According to the above results, we can see that tuning the WSO2 ESB is not straightforward, and you need to have a proper understanding of your use cases.




      Chanaka FernandoWSO2 ESB Development best practices

      Naming Conventions

      When developing artifacts for WSO2 servers, follow the guidelines mentioned below.

1. Project names - Create project names which reflect the intention of the project. It is better to end the project name with the type of server the project is deployed into. This name should be unique within your source code repository.

2. Artifact names - When you create artifacts, it is better to start the name with the project name and end it with the artifact type. This makes the artifact name unique within the source repository.

3. Custom Java classes - When you write custom Java code (custom mediators, custom formatters), adhere to standard Java package naming, including the project name, to distinguish your code.

      Project Structures

      When developing artifacts for WSO2 servers, follow the guidelines mentioned below

1. Start with a “Maven multi module” project as your parent project. This will be the top-most project in your source code hierarchy.

2. Create projects according to the artifact types which you are going to develop.
Ex: ESBConfigProject to develop ESB artifacts, RegistryResourceProject for governance artifacts

3. Create Composite ARchive (CAR) projects to bundle the relevant artifacts into an archive.

4. If you need to create a common (core) project to include different types of projects (ESB, Registry), create a Maven multi module project and add them as sub projects.

5. Do not bundle custom mediators in the same CAR file as the ESB artifacts which reference those mediators. This may cause deployment-time exceptions due to dependency conflicts. If you have custom Java code, always deploy it before deploying the ESB artifacts which refer to it.



      General Best Practices

When you have environment-specific artifacts such as endpoints and data source configurations, it is better to store them in the governance registry.

When sharing the implemented projects among the members of the development team, use the WSO2 Project Import Wizard, since it knows how to handle the nested project structures supported by Developer Studio.

When exporting Developer Studio projects, you can either export individual projects and generate individual deployable artifacts, or package them into a CAR file and export that CAR file from the IDE itself.

If you need to integrate your solution with a Continuous Integration server (e.g. Bamboo, Jenkins) and automate the build and deploy process, use the Maven support provided by Developer Studio. You should not include the Carbon Server extensions in the CAR files that you deploy to Carbon Servers.


      Developer Studio - Deployment Best Practices

When you want to deploy artifacts to a Carbon Server, you first need to package all the artifacts into a CAR file. There are two possible methods to deploy it to a Carbon Server:

Via the Developer Studio IDE
Via the Maven-Car-Deploy-Plugin

You can use either of the above methods to deploy the CAR files to a Carbon Server. When you have modified your projects in Developer Studio, you can simply use Maven or the IDE to publish the latest version of the CAR file to the Carbon Server.

      Design and Development Best Practices

1. Use mediators like XSLT and XQuery for complex transformations. The PayloadFactory and Enrich mediators perform better in simple transformation scenarios.

2. When a message is sent out from the inSequence, the reply is received by the outSequence by default. To receive the reply in a named sequence instead, specify that sequence on the send mediator in the inSequence.

3. Do not use two consecutive send mediators without cloning the message using the Clone mediator.

4. Always use REST APIs to expose RESTful interfaces, rather than invoking proxy services.

5. The Aggregate mediator works in tandem with Iterate or Clone.

      6. Use templates to reuse the ESB configuration whenever possible.

      7. Limit the use of in-line endpoints and use references to existing endpoints from sequence or proxy services.

      Maintenance is easier when you isolate endpoint definitions from sequences and proxy services.

      Security Guidelines and Best Practices
      This section provides some security guidelines and best practices useful in service development.

      • Use DMZ-based deployments. Also see  
      http://wso2.com/blogs/architecture/2011/04/public-services-gateway-and-internal-services-gateway-patterns.
      • Change default admin username and password.
      • Change the default key stores.
      • Secure plain text passwords and configure user password policies.
      • In REST services deployments, secure your services with OAuth (Bearer and HMAC) and Basic authentication with HTTPS. This is recommended because:
• It adds authentication, authorization and validation to the services.
• It enables communication to be done in a confidential manner while achieving non-repudiation and integrity of the messages.
• It helps you avoid attacks such as replay and DoS, and lets you define rules for throttling the service calls.
      • It helps you to control access to services in a fine-grained manner using role-based, attribute-based and policy-based access control.

      Chanaka FernandoUnderstanding WSO2 ESB Pass-Through Transport concepts

WSO2 ESB is arguably the highest performing open source ESB available in the industry. It has been tested and deployed in various enterprise environments, serving an enormous number of transactions. The ultimate reason behind this resounding success is its high-performance HTTP transport implementation, known as the Pass-Through Transport (PTT), which provides an efficient mechanism for processing client requests in highly concurrent scenarios. In this blog post we are going to give you an introduction to PTT by covering some of the theory behind this implementation. The actual implementation is far more complex than described in this tutorial, and I have added some useful resources where possible to provide you with more information.

Most of the content in this tutorial is extracted from the following blog posts/tutorials on the web. I would like to give credit to those authors for writing such great content.






The Pass-Through Transport uses the httpcore-nio library as its underlying implementation, which in turn uses the Java NIO API. First we will go through the concepts of Java NIO, then the httpcore-nio concepts, and finally the PTT implementation.

      Java Non-blocking IO (NIO)

In the standard IO API you work with byte streams and character streams. In NIO you work with channels and buffers: data is always read from a channel into a buffer, or written from a buffer to a channel. Java NIO enables you to do non-blocking input/output. As an example, a thread can ask a channel to read data into a buffer. While the channel reads data into the buffer, the thread can do something else. Once the data is read into the buffer, the thread can continue processing it.

      Channels
      Java NIO Channels are similar to streams with a few differences:
• You can both read from and write to a channel. Streams are typically one-way (read or write).
      • Channels can be read and written asynchronously.
      • Channels always read to, or write from, a Buffer.
      As mentioned above, you read data from a channel into a buffer, and write data from a buffer into a channel. Here are the most important Channel implementations in Java NIO:

      FileChannel - The FileChannel reads data from and to files.

      DatagramChannel - The DatagramChannel can read and write data over the network via UDP.

      SocketChannel - The SocketChannel can read and write data over the network via TCP.

      ServerSocketChannel - The ServerSocketChannel allows you to listen for incoming TCP connections, like a web server does. For each incoming connection a SocketChannel is created.

      Buffers
Buffers are used when interacting with NIO channels. A buffer is a block of memory into which data can be written and from which it can later be read. The Buffer class provides a set of methods that make it easier to work with the memory block.

      Using a Buffer to read and write data typically follows this little 4-step process:

1. Write data into the Buffer
2. Call buffer.flip()
3. Read data out of the Buffer
4. Call buffer.clear() or buffer.compact()

      When you write data into a buffer, the buffer keeps track of how much data you have written. Once you need to read the data, you need to switch the buffer from writing mode into reading mode using the flip() method call. In reading mode the buffer lets you read all the data written into the buffer.
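As a rough illustration of this four-step cycle, here is a minimal sketch in plain Java NIO; the file name and buffer size are arbitrary choices for the example.

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class BufferCycleExample {
    public static void main(String[] args) throws Exception {
        // "data.txt" is just a placeholder input file for this sketch
        try (RandomAccessFile file = new RandomAccessFile("data.txt", "r");
             FileChannel channel = file.getChannel()) {

            ByteBuffer buffer = ByteBuffer.allocate(48);

            int bytesRead = channel.read(buffer);          // 1. write data into the buffer
            while (bytesRead != -1) {
                buffer.flip();                             // 2. switch the buffer to reading mode
                while (buffer.hasRemaining()) {
                    System.out.print((char) buffer.get()); // 3. read data out of the buffer
                }
                buffer.clear();                            // 4. make the buffer ready for writing again
                bytesRead = channel.read(buffer);
            }
        }
    }
}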


      Selectors
A selector is an object that can monitor multiple channels for events. Therefore, a single thread can monitor multiple channels for data. A Selector allows a single thread to handle multiple Channels. This is a very useful concept in cases where your application has many open connections but low traffic on each connection. To use a Selector, you register the Channels with it. Then you call its select() method. This method will block until there is an event ready for one of the registered channels. Once the method returns, the thread can process the events.
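A minimal sketch of that registration and selection loop is shown below; the port number is arbitrary and the actual event handling is omitted.

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.util.Iterator;

public class SelectorExample {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        // Register a non-blocking server socket channel for "accept" events
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));    // the port is arbitrary for this sketch
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // an incoming connection is ready to be accepted on the server channel
                } else if (key.isReadable()) {
                    // a registered socket channel has data ready to be read
                }
            }
        }
    }
}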

Now that we have a basic understanding of the concepts related to Java NIO, let's take a look at the actual problem we are going to solve with this technology.

      Network services and Reactor pattern

Web services, distributed objects, and similar network services mostly have the same basic structure:

      • Read request 
      • Decode request 
      • Process service 
      • Encode reply 
      • Send reply


But they differ in the nature and cost of each step: XML parsing, file transfer, web page generation, computational services, and so on.

In a classical server design, there is a new handler thread for each client connection.


This approach makes it hard to meet the following scalability goals:

• Graceful degradation under increasing load (more clients)
• Continuous improvement with increasing resources (CPU, memory, disk, bandwidth)
• Meeting availability and performance goals: short latencies, meeting peak demand, and tunable quality of service


Dividing processing into small tasks makes it more effective when handling high and variable loads.



• Each task performs an action without blocking.
• Each task is executed when it is enabled; here, an IO event usually serves as the trigger.
• The basic mechanisms (non-blocking reads and writes) are supported in java.nio.
• Tasks associated with sensed IO events are dispatched for execution.
• This is usually more efficient than the alternatives: fewer resources (you don't usually need a thread per client) and less overhead (less context switching, often less locking), although dispatching can be slower.
• It is usually harder to program: you must manually bind actions to events and break processing up into simple non-blocking actions, similar to GUI event-driven actions.
• You cannot eliminate all blocking: GC, page faults, etc.
• You must keep track of the logical state of the service.


According to the above comparison, we can clearly see that dividing the work into small non-blocking operations makes it more efficient. But programming this model is more complex than the first approach. The Reactor pattern is used to implement this behavior.

      Reactor pattern

      Basic reactor pattern can be depicted with the below diagram.




A Reactor runs in a separate thread, and its job is to react to IO events by dispatching the work to the appropriate handler. It is like a telephone operator in a company who answers calls from clients and transfers the line to the appropriate receiver.

A Handler performs the actual work to be done for an IO event, similar to the officer in the company that the calling client actually wants to speak to.

Selection Keys maintain IO event status and bindings. A Selection Key is a representation of the relationship between a Selector and a Channel. By looking at the Selection Key given by the Selector, the Reactor can decide what to do with the IO event which occurs on the Channel.

      Here, there is a single ServerSocketChannel which is registered with a Selector. The SelectionKey 0 for this registration has information on what to do with the ServerSocketChannel if it gets an event. Obviously the ServerSocketChannel should receive events from incoming connection requests from clients. When a client requests for a connection and wants to have a dedicated SocketChannel, the ServerSocketChannel should get triggered with an IO event. What does the Reactor have to do with this event? It simply has to Accept it to make a SocketChannel. Therefore SelectionKey 0 will be bound to an Acceptor which is a special handler made to accept connections so that the Reactor can figure out that the event should be dispatched to the Acceptor by looking at SelectionKey 0. Notice that ServerSocketChannel, SelectionKey 0 and Acceptor are all in same colour ( light green )

      The Selector is made to keep looking for IO events. When the Reactor calls Selector.select() method, the Selector will provide a set of SelectionKeys for the channels which have pending events. When SelectionKey 0 is selected, it means that an event has occurred on ServerSocketChannel. So the Reactor will dispatch the event to the Acceptor.

      When the Acceptor accepts the connection from Client 1, it will create a dedicated SocketChannel 1 for the client. This SocketChannel will be registered with the same Selector with SelectionKey 1. What would the client do with this SocketChannel? It will simply read from and write to the server. The server does not need to accept connections from client 1 any more since it already accepted the connection. Now what the server needs is to Read and Write data to the channel. So SelectionKey 1 will be bound to Handler 1 object which handles reading and writing. Notice that SocketChannel 1, SelectionKey 1 and Handler 1 are all in Green.

The next time the Reactor calls Selector.select(), if the returned SelectionKey set has SelectionKey 1 in it, it means that SocketChannel 1 has been triggered with an event. Now, by looking at SelectionKey 1, the Reactor knows that it has to dispatch the event to Handler 1, since Handler 1 is bound to SelectionKey 1. If the returned SelectionKey set has SelectionKey 0 in it, it means that the ServerSocketChannel has received an event from another client, and by looking at SelectionKey 0 the Reactor knows that it has to dispatch the event to the Acceptor again. When the event is dispatched to the Acceptor, it will make SocketChannel 2 for client 2 and register that socket channel with the Selector with SelectionKey 2.

So in this scenario we are interested in 3 types of events:
1. Connection request events, which get triggered on the ServerSocketChannel and which we need to Accept.
2. Read events, which get triggered on SocketChannels when they have data to be read, from which we need to Read.
3. Write events, which get triggered on SocketChannels when they are ready to be written with data, to which we need to Write.

      This is the theory behind the reactor pattern and this pattern is implemented in the apache httpcore-nio library. WSO2 ESB PTT uses this library as the underlying realization of the reactor pattern.
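To make the Acceptor/Handler binding concrete, here is a heavily simplified sketch of the pattern in plain Java NIO. This is illustrative only and is not the actual httpcore-nio or PTT code; the class name, port, and buffer size are made up for the example, and real error handling and read/write logic are omitted.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MiniReactor implements Runnable {

    private final Selector selector;

    public MiniReactor(int port) throws IOException {
        selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(port));
        serverChannel.configureBlocking(false);
        // Bind the Acceptor to the server channel's key (the "SelectionKey 0" of the description above)
        SelectionKey acceptKey = serverChannel.register(selector, SelectionKey.OP_ACCEPT);
        acceptKey.attach((Runnable) () -> accept(serverChannel));
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select();  // wait for IO events on the registered channels
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    // Dispatch the event to whatever handler is bound to this key
                    ((Runnable) key.attachment()).run();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Acceptor: accepts the connection and binds a Handler to the new channel's key
    private void accept(ServerSocketChannel serverChannel) {
        try {
            SocketChannel client = serverChannel.accept();
            if (client == null) {
                return;
            }
            client.configureBlocking(false);
            SelectionKey readKey = client.register(selector, SelectionKey.OP_READ);
            readKey.attach((Runnable) () -> handle(client));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Handler: reads whatever the client sent (actual processing is omitted in this sketch)
    private void handle(SocketChannel client) {
        try {
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            client.read(buffer);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOException {
        new Thread(new MiniReactor(8080)).start();
    }
}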

This gives a basic understanding of the Reactor pattern and the Java NIO framework. Let's map this knowledge to the Pass-Through Transport implementation of WSO2 ESB.

You need to download the following items before we continue debugging the code.






Once you have downloaded all the components, extract the ESB 4.8.1 distribution to a location (ESB_HOME), import the Maven projects into your favorite IDE (IntelliJ IDEA or Eclipse), and create a remote debugging configuration (with port 5005) to debug into the source code.

      Then start the ESB with the following command

      sh ESB_HOME/bin/wso2server.sh -debug 5005

This will start the ESB in debug mode, and now you can start your remote debugging session from your IDE. Once it is connected to the ESB, the server will start up. During server startup, you can observe the following INFO logs printed on the console.

      [2015-04-04 13:54:48,996]  INFO - PassThroughHttpSSLSender Pass-through HTTPS Sender started...
      [2015-04-04 13:54:48,996]  INFO - PassThroughHttpSender Initializing Pass-through HTTP/S Sender...
      [2015-04-04 13:54:48,998]  INFO - PassThroughHttpSender Pass-through HTTP Sender started...

      [2015-04-04 13:54:54,370]  INFO - PassThroughHttpSSLListener Initializing Pass-through HTTP/S Listener...
      [2015-04-04 13:54:56,114]  INFO - PassThroughHttpListener Initializing Pass-through HTTP/S Listener…

The above log lines confirm that the 4 main components of the ESB message flow have been started during server startup. These transport listener and sender classes are configured in the axis2.xml file.

      PassThroughHttpSSLSender - ( HTTPS transport for sending messages from ESB to back end )
      PassThroughHttpSender - ( HTTP transport for sending messages from ESB to back end )
      PassThroughHttpSSLListener - ( HTTPS transport for receiving messages to ESB from clients )
      PassThroughHttpListener - ( HTTP transport for receiving messages to ESB from clients )

      During the server startup, these components will be started from the CarbonServerManager class.

Let's add a debug point in the PassThroughHttpListener class (the init() method) and see what happens during this class's initialization.

      Within the init() method of this class, it creates the following 3 major objects.

      ServerConnFactory - Factory class for creating connections
      SourceHandler -  This is the class where transport interacts with the client. This class receives events for a particular connection. These events give information about the message and its various states.
ServerIODispatch - This class receives various events from http core and dispatches them to the PTT-level code (SourceHandler).

       connFactory = connFactoryBuilder.build(sourceConfiguration.getHttpParams());
       handler = new SourceHandler(sourceConfiguration);
       ioEventDispatch = new ServerIODispatch(handler, connFactory);

Within the start() method, it creates the Reactor object with a thread group and calls its execute() method in a separate thread.

This will call the execute() method of the AbstractMultiworkerIOReactor (httpcore-nio) class, which starts N worker threads (N equals the number of cores in the processor) with the prefix HTTP-Listener I/O dispatcher. After starting these worker threads, this class goes into an infinite loop to process the events received by the selector. Within this loop, it processes all the connection requests from the clients. This class acts as the Acceptor of the reactor pattern. It creates a new socketChannel and adds it to the channel list of the dispatcher object.



These worker threads execute the execute() method of the BaseIOReactor class, which eventually calls the AbstractIOReactor class's execute method. This runs an infinite loop for processing the IO events. Within this loop, it first processes the existing events which can be processed (e.g. events of already registered channels). After processing, it checks for newly added channels and creates sessions for them for future processing.

Now we have an understanding of how requests are processed at the http core level. Once a client sends an HTTP request to the ESB, it triggers a series of events at the IO level on the ESB server side. This series of events is modeled as a state machine at the http core level.



Converting incoming events into this state machine is done at the http core level.

• The client sends a request to the ESB. This is picked up by the AbstractMultiworkerIOReactor thread, which creates a new socketChannel for this request, adds this channel to the reactor thread's channel list, and notifies the selector (wakeup()).
• Once this notification is received by the worker thread, it executes the processNewChannels() method within the AbstractIOReactor. During this process it creates a new HTTP session and calls the connected() method on the SourceHandler (state: Connected).
• Then it goes into the processEvents() method of the AbstractIOReactor class and processes the remaining IO events for this channel. During this process it consumes the incoming message, changes the state, and calls the requestReceived method of the DefaultNHttpServerConnection class. This will also call the inputReady method.


When sending a request from the ESB to a back-end server, the message flow is as follows.

The mediation flow should contain either a send or a call mediator to send a message to the back-end server. From either of these mediators, the following call sequence is executed:

      Axis2SynapseEnvironment.send()
      Axis2Sender.sendon()
      Axis2FlexibleMEPClient.send()
      OperationClient.execute()
      OutInOperationClient.executeImpl()
      AxisEngine.send()
      PassThroughHttpSender.invoke()
      DeliveryAgent.submit()
      TargetConnections.getConnection()
      DefaultConnectingIOReactor.connect()
      requestQueue.add()
      selector.wakeup()
      DefaultConnectingIOReactor.processEvents()
      DefaultConnectingIOReactor.processSessionRequests()
      DefaultConnectingIOReactor.processEvent()
      AbstractIOReactor.processNewChannels()
      BaseIOReactor.sessionCreated()
      AbstractIODispatch.connected()
      ClientIODispatch.onConnected()
      TargetHandler.Connected()

A detailed description of the internal state transitions can be found in the following article.


This should give you some understanding of the PTT implementation. The idea of this blog post is to give you a starting point for studying the complex implementation of PTT. Here are some important links for studying more about the PTT.


      Java NIO - 




      Reactor pattern - 





      Pass-Through transport







      Madhuka UdanthaBower: Front-end Package Manager

      What is bower?

      "Web sites are made of lots of things — frameworks, libraries, assets, utilities, and rainbows. Bower manages all these things for you."
      Bower is a front-end package manager built by Twitter

      Bower works by fetching and installing packages from all over, taking care of hunting, finding, downloading, and saving the stuff you’re looking for. Bower keeps track of these packages in a manifest file, bower.json. How you use packages is up to you. Bower provides hooks to facilitate using packages in your tools and workflows. Bower is optimized for the front-end. Bower uses a flat dependency tree, requiring only one version for each package, reducing page load to a minimum.

      Bower is a node module, and can be installed with the following command:
      npm install -g bower

Let's try to get Bootstrap for our web app. Let's type:
      bower install bootstrap

      image

You will get the latest version of Bootstrap and its dependencies as well, such as jQuery.
You can request a specific version of Bootstrap with:
bower install bootstrap#2.2

Those files will reside inside the 'bower_components' folder.

You can use them as follows:

<link rel="stylesheet" type="text/css" href="bower_components/bootstrap/dist/css/bootstrap.css">
<script src="bower_components/jquery/dist/jquery.js"></script>
<script src="bower_components/bootstrap/dist/js/bootstrap.js"></script>

To update all the packages:
      bower update


      The --save flag will instruct bower to create (if it does not exist) a bower.json file and include the installed packages in it. This is an example of the generated bower.json file:
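A minimal sketch of what such a generated bower.json might look like (the name, version, and dependency versions below are placeholders):

{
  "name": "my-web-app",
  "version": "0.1.0",
  "dependencies": {
    "bootstrap": "~3.3.0",
    "jquery": "~1.11.0"
  }
}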


When any developer who has access to the repository runs bower install, it installs all the dependencies listed in bower.json:


      bower install


      bower install jquery#1 bootstrap --save


      Build tools: Grunt


      Grunt and Gulp are build tools, used to automate common and recurrent tasks, such as minifying scripts, optimizing images, minifying stylesheets, compiling less/sass/stylus. Bower plays well with Grunt/Gulp because of ready made plugins.



Grunt has a plugin called grunt-bower-concat which concatenates the main files of each Bower component you have into a single bower.js file, which you can then minify with Grunt (uglify), resulting in bower.min.js.


      Grunt bower concat sample configuration:


bower_concat: {
  all: {
    dest: "src/js/vendor/bower.js",
    destCss: "src/css/vendor/bower.css"
  }
},

Finally, think about 'package.json':

"scripts": {
  "prestart": "npm install",
  "postinstall": "bower update --unsafe-perm",
  "start": "grunt"
}


'prestart' is the first command triggered when you run 'npm start'.
'postinstall' is triggered by 'npm install'; this keeps all our front-end packages up to date.
Finally, 'start' runs grunt.

      Niranjan KarunanandhamHow to access H2 database using WSO2 Products?

All WSO2 products are shipped with an H2 database, and by default you cannot access it via a browser. In order to access the H2 database via the browser, you need to enable "H2DatabaseConfiguration" in carbon.xml.
      • Open carbon.xml in <CARBON_HOME>/repository/conf.
      • Enable "H2DatabaseConfiguration" as follows:
      <H2DatabaseConfiguration>
              <property name="web" />
              <property name="webPort">8082</property>
              <property name="webAllowOthers" />
      </H2DatabaseConfiguration>
      • Start the server.
• In your browser, go to http://localhost:8082.
      • Enter the JDBC URL, User name and Password.
      Example:
      JDBC URL : jdbc:h2:<CARBON_HOME>/repository/database/WSO2CARBON_DB
      User Name : wso2carbon
      Password : wso2carbon


      • Click on "Connect".


      Prabath SiriwardenaBorderless Identity: Managing Identity in a Complex World

IT consumerization has been an emerging trend for the last few years. It is about the reorientation of product and service designs around the individual end user. This important trend is not just about new devices; it is about the entire relationship between IT and its user population. It also introduces significant security issues, because critical IT assets need to be available, securely, to an increasingly distributed and diverse user base that is using consumer devices of their own choice.

      While the initial consumerization hype was focused on the bring your own device (BYOD) trend, we are now seeing the emergence of bring your own identity (BYOID) concept.
      The rise of BYOID is being driven by users' "identity fatigue."

      Users have too many accounts, too many usernames and too many passwords. When the competition is literally a click away, organizations must enable the easiest user experience possible or users migrate to sites that offer the simplest registration and login process. Many web sites have moved quickly to accept identities from popular online identity providers like Facebook, Google and LinkedIn.

Research done by the analyst firm Quocirca confirms that many businesses now have more external users than internal ones: in Europe, 58 percent transact directly with users from other businesses and/or consumers; for the UK alone the figure is 65 percent.

If you look at history, most enterprises grow today via acquisitions, mergers and partnerships. In the U.S. alone, mergers and acquisitions volume totaled $865.1 billion in the first nine months of 2013, according to Dealogic. That is a 39% increase over the same period a year earlier, and the highest nine-month total since 2008.

What does this mean for enterprise identity management? You will have to work with multiple heterogeneous user stores, authentication protocols, legacy systems and much more. BYOID (Bring Your Own IDentity) is not just about bridging social identity with enterprise identity - it is also about bridging different heterogeneous identities between different corporations or enterprises.

SAML, OpenID, OpenID Connect and WS-Federation all support identity federation - cross-domain authentication. But can we always expect all the parties in a federation use case to support SAML, OpenID or OpenID Connect? Most of the federation systems we see today are in silos. It can be a silo of SAML federation, a silo of OpenID Connect federation or a silo of OpenID federation, and you are not able to talk between silos.



       SAML was mostly used to facilitate web single sign on. It can be just within the same domain or between domains. SAML V2.0 - in 2005 - was built on that success. It unified the building blocks of federated identity in V1.1 with the inputs from Shibboleth initiative and the Liberty Alliance's Identity Federation Framework. It was a very critical step towards the full convergence for federated identity standards.

      OpenID, in 2005 - followed the footsteps of SAML. It was initiated by the founder of LiveJournal - Brad Fitzpatrick. The basic principle behind both OpenID and SAML, is the same. Both can be used to facilitate web single on and cross-domain Identity Federation. OpenID is more community friendly, user centric and decentralized.

      In mid-January 2008 - Yahoo added OpenID support - late July MySpace announced its support for OpenID - and late October Google joined the party. By December 2009 - there were more than 1 billion OpenID enabled accounts.

It was a huge success in web single sign on, but started to fade after OAuth 2.0 and OpenID Connect. One of the first and most popular OpenID Providers, MyOpenID, was shut down on the 1st of February 2014. SAML, however, is still as stable as it was ten years back.

OpenID Connect has some history. It has its roots in OAuth 2.0, although it has been developed outside the IETF, under the OpenID Foundation.

OAuth 2.0 is a framework for delegated authorization. It is a misconception to think it does authentication. OpenID Connect is the one built on top of OAuth 2.0 to do authentication. As in the case of SAML and OpenID, both OAuth and OpenID Connect can be used for web single sign on and cross-domain identity federation.

Building a setup from scratch to fit into these standards is not hard. Say you have Liferay, which supports OpenID. You can enable federated login to Liferay for a partner who has an OpenID Provider deployed over its own user store. Similarly, if you have a SAML 2 Identity Provider deployed in your environment, you can federate that identity to cloud service providers like Salesforce or Google Apps. Basically, that means the users from your domain will be able to log in to Salesforce or Google Apps using their corporate LDAP credentials. That is the easy part of BYOID. How do you handle a situation where you have a partnership, and the applications running in your domain, secured with SAML 2.0, should be accessible to a partner who only has an OpenID Connect Identity Provider? If you support true BYOID, with no code changes, you should be able to let a user from the partner domain with an OpenID Connect token log into your application which is secured with SAML 2.0.

Internet identity has always been an unsolved problem. Very frequently you will see new standards and profiles popping up.

SAML dominated the last decade and still does to some extent; OpenID Connect and JWT could dominate the next. As we move on, there will be a lot of legacy around us.
We need to find a way to get rid of these federation silos and build a way to facilitate communication between different heterogeneous protocols. In addition to federation silos, another anti-pattern we see in large-scale federation deployments is spaghetti identity: you create many point-to-point trust relationships between multiple identity providers and service providers.


Even within a given federation silo, how do you scale with an increasing number of service providers and identity providers? Each service provider has to trust each identity provider, and this leads to the Spaghetti Identity anti-pattern.

      With Identity Bus, a given service provider is not coupled to a given identity provider - and also not coupled to a given federation protocol. A user should be able to login into a service provider which accepts only SAML 2.0 tokens with an identity provider who only issues OpenID Connect tokens. The identity bus acts as the middle-man who mediates and transforms identity tokens between heterogeneous identity protocols.

      Let's see some of the benefits of the Identity Bus pattern.
• Introducing a new service provider is frictionless. You only need to register the service provider at the identity bus and from there pick which identity providers it trusts. There is no need to add the service provider configuration to each and every identity provider.
      • Removing an existing service provider is frictionless. You only need to remove the service provider from the identity bus. No need to remove the service provider from each and every identity provider. 
      • Introducing a new identity provider is frictionless. You only need to register the identity provider at the identity bus. It will be available for any service provider. 
      • Removing an existing identity provider is extremely easy. You only need to remove the identity provider from the identity bus. 
      • Enforcing new authentication protocols is frictionless. Say you need to authenticate users with both the username/password and duo-security (SMS based authentication) - you only need to add that capability to the identity bus and from there you pick the required set of authentication protocols against a given service provider at the time of service provider registration. Each service provider can pick how it wants to authenticate users at the identity bus. 
• Claim transformations. Your service provider may read the user's email address from the http://sp1.org/claims/email attribute id, but the identity provider of the user may send it as http://idp1.org/claims/email. The identity bus can transform the claims it receives from the identity provider into the format expected by the service provider.
      • Role mapping. Your service provider needs to authorize users once they are logged in. What the user can do at the identity provider is different from what the same user can do at the service provider. User's roles from the identity provider define what he can do at the identity provider. Service provider's roles define the things a user can do at the service provider. Identity bus is capable of mapping identity provider's roles to the service provider's roles. For example a user may bring idp-admin role from his identity provider - in a SAML response - then the identity bus will find the mapped service provider role corresponding to this, say sp-admin, and will add that into the SAML response returning back to the service provider from the identity bus. 
      • Just-in-time provisioning. Since identity bus is at the middle of all identity transactions - it can provision all external user identities to an internal user store. 
      • Centralized monitoring and auditing. 
      • Centralized access control.
      • Introducing a new federation protocol needs minimal changes. If you have a service provider or an identity provider, which supports a proprietary federation protocol, then you only need to add that capability to the identity bus. No need to implement it at each and every identity provider or service provider.

      Ajith VitharanaBuild single bundle checkout - WSO2

Sometimes you need to check out a single bundle from the WSO2 source repository and build it. Let's say you need to check out the following bundle and build it.

      svn co https://svn.wso2.org/repos/wso2/carbon/platform/branches/turing/components/identity/org.wso2.carbon.identity.application.authenticator.facebook/4.2.0

If you try to build it using Maven (version >= 3.0.5), the build will fail with the following error.
      [INFO] Scanning for projects...
      Downloading: http://repo.maven.apache.org/maven2/org/wso2/carbon/identity/4.2.0/identity-4.2.0.pom
      [ERROR] The build could not read 1 project -> [Help 1]
      [ERROR]  
      [ERROR]   The project org.wso2.carbon:org.wso2.carbon.identity.application.authenticator.facebook:4.2.0 (/home/ajith/Desktop/4.2.0/pom.xml) has 1 error
      [ERROR]     Non-resolvable parent POM: Could not find artifact org.wso2.carbon:identity:pom:4.2.0 in central (http://repo.maven.apache.org/maven2) and 'parent.relativePath' points at wrong local POM @ line 18, column 10 -> [Help 2]
      [ERROR]
      [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
      [ERROR] Re-run Maven using the -X switch to enable full debug logging.
      [ERROR]
      [ERROR] For more information about the errors and possible solutions, please read the following articles:
      [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
      [ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
       
      How to fix.

      1. You need to remove the <parent> element.
      <!--parent>
              <groupId>org.wso2.carbon</groupId>
              <artifactId>identity</artifactId>
              <version>4.2.0</version>
              <relativePath>../../pom.xml</relativePath>
      </parent-->
      2. Add groupId element.
      <groupId>org.wso2.carbon</groupId>
3. Define the version for all the dependencies where it is not available. E.g.:
      <dependency>
               <groupId>org.wso2.carbon</groupId>
               <artifactId>org.wso2.carbon.logging</artifactId>
               <version>4.2.0</version>
      </dependency>

            <dependency>
               <groupId>org.wso2.carbon</groupId>
               <artifactId>org.wso2.carbon.ui</artifactId>
               <version>4.2.0</version>
      </dependency>
      4. Define the maven repositories to get the artifacts.
      <repositories>
            <!-- WSO2 released artifact repository -->
            <repository>
               <id>wso2.releases</id>
               <name>WSO2 Releases Repository</name>
               <url>http://maven.wso2.org/nexus/content/repositories/releases/</url>
               <releases>
                  <enabled>true</enabled>
                  <updatePolicy>daily</updatePolicy>
                  <checksumPolicy>ignore</checksumPolicy>
               </releases>
            </repository>
            <repository>
               <id>wso2-nexus</id>
               <name>WSO2 internal Repository</name>
               <url>http://maven.wso2.org/nexus/content/groups/wso2-public/</url>
               <releases>
                  <enabled>true</enabled>
                  <updatePolicy>daily</updatePolicy>
                  <checksumPolicy>ignore</checksumPolicy>
               </releases>
            </repository>
      </repositories> 
       
       

      Ajith VitharanaJava client to invoke secured(UsernameToken) proxy service - WSO2 ESB.

1. Download the latest version of WSO2 ESB and deploy this proxy service. (You can copy this file to the wso2esb-4.8.1/repository/deployment/server/synapse-configs/default/proxy-services directory.)

      <?xml version="1.0" encoding="UTF-8"?>
      <proxy xmlns="http://ws.apache.org/ns/synapse"
             name="SimpleStockQuoteServiceProxy"
             transports="https"
             startOnLoad="true"
             trace="disable">
          <description/>
          <target>
              <inSequence>
                  <send>
                      <endpoint>
                          <address uri="https://localhost:9002/services/SimpleStockQuoteService"/>
                      </endpoint>
                  </send>
                  <header xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
                          name="wsse:Security"
                          action="remove"/>
              </inSequence>
              <outSequence>
                  <send/>
              </outSequence>
          </target>
      </proxy>


      2. Start the ESB  server.

3. Go to wso2esb-4.8.1/samples/axis2Server/src/SimpleStockQuoteService and execute the ant command. That will build SimpleStockQuoteService.aar and copy it into the deployment directory of the simple Axis2 server.

4. Go to wso2esb-4.8.1/samples/axis2Server and start the simple Axis2 server (sh axis2server.sh).

5. The service endpoint of SimpleStockQuoteService will be https://localhost:9002/services/SimpleStockQuoteService.

6. Log in to the ESB management console and upload this policy to the registry (/_system/config/repository/components/org.wso2.carbon.security.mgt/policy).



      7. Go to the service list and start to secure the SimpleStockQuoteServiceProxy.


      8. Check the UsernameToken option and pick the policy.xml from registry.



9. Select the user roles that are allowed to invoke the proxy service.


10. Execute the ant command inside wso2esb-4.8.1/bin and set wso2esb-4.8.1/lib as the classpath of the project.

11. Copy the policy.xml to the ESB home directory; that path is used in the StockQuoteSecureClient.

12. Update the carbon_home and proxyEndpoint values in the StockQuoteSecureClient class and execute the client.
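For reference, a client along these lines can be written with the Axis2/Rampart client API. The sketch below follows the usual WSO2 sample approach; the paths, endpoint, SOAP action, credentials, and class name are assumptions you would replace with your own values, and it is not the exact StockQuoteSecureClient shipped with the samples.

import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.apache.axiom.om.OMNamespace;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;
import org.apache.neethi.Policy;
import org.apache.neethi.PolicyEngine;
import org.apache.rampart.RampartMessageData;

public class StockQuoteSecureClient {

    public static void main(String[] args) throws Exception {
        // Assumed locations - point these at your own ESB installation
        String carbonHome = "/home/user/wso2esb-4.8.1";
        String proxyEndpoint = "https://localhost:8243/services/SimpleStockQuoteServiceProxy";

        // Trust the ESB's certificate (the default wso2carbon keystore is assumed here)
        System.setProperty("javax.net.ssl.trustStore",
                carbonHome + "/repository/resources/security/wso2carbon.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");

        // Client repository containing the rampart module (assumed path)
        ConfigurationContext ctx = ConfigurationContextFactory
                .createConfigurationContextFromFileSystem(
                        carbonHome + "/repository/deployment/client", null);

        ServiceClient client = new ServiceClient(ctx, null);
        Options options = new Options();
        options.setTo(new EndpointReference(proxyEndpoint));
        options.setAction("urn:getQuote");
        options.setUserName("admin");      // credentials used for the UsernameToken
        options.setPassword("admin");
        options.setProperty(RampartMessageData.KEY_RAMPART_POLICY,
                loadPolicy(carbonHome + "/policy.xml"));
        client.setOptions(options);
        client.engageModule("rampart");    // applies the security policy to the outgoing request

        OMElement response = client.sendReceive(buildGetQuotePayload("IBM"));
        System.out.println(response);
    }

    // Load the same policy.xml that was used to secure the proxy service
    private static Policy loadPolicy(String path) throws Exception {
        StAXOMBuilder builder = new StAXOMBuilder(path);
        return PolicyEngine.getPolicy(builder.getDocumentElement());
    }

    // Build a simple getQuote payload for the SimpleStockQuoteService
    private static OMElement buildGetQuotePayload(String symbol) {
        OMFactory factory = OMAbstractFactory.getOMFactory();
        OMNamespace serNs = factory.createOMNamespace("http://services.samples", "ser");
        OMNamespace xsdNs = factory.createOMNamespace("http://services.samples/xsd", "xsd");
        OMElement getQuote = factory.createOMElement("getQuote", serNs);
        OMElement request = factory.createOMElement("request", serNs);
        OMElement symbolEl = factory.createOMElement("symbol", xsdNs);
        symbolEl.setText(symbol);
        request.addChild(symbolEl);
        getQuote.addChild(request);
        return getQuote;
    }
}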



      sanjeewa malalgodaWSO2 API Manager relationship between timestamp skew, token validity period and cache expiration time

Let me explain how the timestamp skew works and how it affects token generation.
First, the timestamp skew is there to compensate for small differences in the system clock values of different servers.
Let's say you have 2 Key Managers, and you generate a token from one and authenticate with the other.
When the first Key Manager generates a token (say the life span is 3600 sec), the timestamp skew value (say 300 sec) will be deducted from the token lifetime (the client will be told that 3300 sec is the token validity period).
Then the client calls the second Key Manager with that token after exactly 3200 secs, and there is a time difference between the Key Managers (the second Key Manager is +300 sec ahead).
In such cases the timestamp skew takes care of those small gaps.

So, theoretically:
• The time stamp skew should never be larger than the token life time.
• It should be very small compared to the token validity period.
• The token cache duration should never be larger than the token validity period.

You can change these configuration values according to your requirements, but you cannot pick arbitrary numbers because they are interrelated :-)
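A minimal sketch of that arithmetic, using the example's numbers (the cache duration here is purely illustrative):

# Relationship between token validity, time stamp skew and cache duration.
TOKEN_VALIDITY = 3600   # configured token life time (seconds)
TIMESTAMP_SKEW = 300    # allowance for clock differences between key managers
CACHE_DURATION = 900    # token cache duration (illustrative value)

# What the client is told at token generation time
reported_validity = TOKEN_VALIDITY - TIMESTAMP_SKEW   # 3300 seconds

# The rules described above
assert TIMESTAMP_SKEW < TOKEN_VALIDITY, "skew must never exceed the token life time"
assert CACHE_DURATION <= TOKEN_VALIDITY, "cache must not outlive the token"

print("Client sees the token as valid for %d seconds" % reported_validity)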

      Madhuka UdanthaAffinityPropagation Clustering Algorithm

      Affinity Propagation (AP)[1] is a relatively new clustering algorithm based on the concept of "message passing" between data points. AP does not require the number of clusters to be determined or estimated before running the algorithm.

      “An algorithm that identifies exemplars among data points and forms clusters of datapoints around these exemplars. It operates by simultaneously considering all data point as potential exemplars and exchanging messages between data points until a
      good set of exemplars and clusters emerges.”[1]

       

      Let x1 through xn be a set of data points, with no assumptions made about their internal structure, and let s be a function that quantifies the similarity between any two points, such that s(xi, xj) > s(xi, xk) iff xi is more similar to xj than to xk.

      Algorithm

      The algorithm proceeds by alternating two message passing steps, to update two matrices

      • The "responsibility" matrix R has values r(i, k) that quantify how well-suited xk is to serve as the exemplar for xi, relative to other candidate exemplars for xi.
  • First, the responsibility is updated using the rule below:

r(i, k) ← s(i, k) − max_{k′ ≠ k} { a(i, k′) + s(i, k′) }

• The "availability" matrix A contains values a(i, k) that represent how "appropriate" it would be for xi to pick xk as its exemplar, taking into account other points' preference for xk as an exemplar.
  • The availability is updated as
a(i, k) ← min( 0, r(k, k) + Σ_{i′ ∉ {i, k}} max(0, r(i′, k)) )   for i ≠ k, and
a(k, k) ← Σ_{i′ ≠ k} max(0, r(i′, k)).

The input to the algorithm is {s(i, j)}, i, j ∈ {1, ..., N} (data similarities and preferences).

      Both matrices are initialized to all zeroes.
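To make the two update rules concrete, here is a minimal NumPy sketch of one message-passing round. It follows the notation above and deliberately leaves out damping, which real implementations (such as scikit-learn's) add to avoid oscillations:

import numpy as np

def ap_message_pass(S, R, A):
    """One round of Affinity Propagation message passing (no damping).
    S, R, A are n x n arrays: similarities, responsibilities, availabilities."""
    n = S.shape[0]
    AS = A + S

    # Responsibility: r(i,k) = s(i,k) - max_{k' != k} [ a(i,k') + s(i,k') ]
    R_new = np.empty_like(R)
    for i in range(n):
        for k in range(n):
            mask = np.arange(n) != k
            R_new[i, k] = S[i, k] - AS[i, mask].max()

    # Availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
    #               a(k,k) = sum_{i' != k} max(0, r(i',k))
    Rp = np.maximum(R_new, 0)
    A_new = np.empty_like(A)
    for k in range(n):
        col_sum = Rp[:, k].sum()
        for i in range(n):
            if i == k:
                A_new[k, k] = col_sum - Rp[k, k]
            else:
                A_new[i, k] = min(0.0, R_new[k, k] + col_sum - Rp[i, k] - Rp[k, k])
    return R_new, A_new

# After convergence, the exemplars are the points k where r(k,k) + a(k,k) > 0.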

       

Let's implement the algorithm.

I will be using Python's sklearn.cluster.AffinityPropagation, with my previously generated data set [2].

# Compute Affinity Propagation
af = AffinityPropagation().fit(X)

      Parameters


      All parameters are optional



      • damping : Damping factor between 0.5 and 1 (float, default: 0.5)

• convergence_iter : Number of iterations with no change in the number of estimated clusters (int, optional, default: 15)

• max_iter : Maximum number of iterations (int, default: 200)

      • copy : Make a copy of input data (boolean, default: True)

      • preference : Preferences for each point - points with larger values of preferences are more likely to be chosen as exemplars. The number of exemplars, ie. of clusters, is influenced by the input preferences value. If the preferences are not passed as arguments, they will be set to the median of the input similarities. (array-like, shape (n_samples,) or float)

      • affinity : Which affinity to use. At the moment `precomputed` and `euclidean` are supported.  (string, optional, default=`euclidean`)

      • verbose : Whether to be verbose (boolean, default: False)

The implementation can be found here [4]; a short usage sketch follows the attribute list below.


      Attributes



      • cluster_centers_indices_ : Indices of cluster centers (array)

      • cluster_centers_ : Cluster centers  (array)

      • labels_ : Labels of each point (array)

      • affinity_matrix_ : Stores the affinity matrix used in `fit` (array)

      • n_iter_ : Number of iterations taken to converge (int)
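Putting the parameters and attributes above together, a small usage sketch (the blob centers mirror the sample data set mentioned later in the post; the damping and iteration values are only illustrative):

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs

# Stand-in for the post's previously generated data set [2]
centers = [[5, 5], [0, 0], [1, 5], [5, -1]]
X, labels_true = make_blobs(n_samples=300, centers=centers,
                            cluster_std=0.5, random_state=0)

af = AffinityPropagation(damping=0.9, max_iter=200, convergence_iter=15).fit(X)

cluster_centers_indices = af.cluster_centers_indices_
labels = af.labels_

print("Estimated number of clusters: %d" % len(cluster_centers_indices))
print("Iterations taken to converge: %d" % af.n_iter_)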

I will be using the same result comparison metrics that we used for DBSCAN [2]. The charting will be updated for AP.


      Estimated number of clusters: 6
      Homogeneity: 1.000
      Completeness: 0.801
      V-measure: 0.890
      Adjusted Rand Index: 0.819
      Adjusted Mutual Information: 0.799
      Silhouette Coefficient: 0.574


      image


When the data set is more spread out (sd increased from 0.5 to 0.9):


      image


The sample data set center points are [[5, 5], [0, 0], [1, 5], [5, -1]]. Let's try tuning the algorithm parameters to get better clustering.


Let's see the effect of the iteration count in AP.



      When Iteration is 30                                                                           Iteration Is 75


      imageimage



      150 Iterations                                                                        200 Iterations


      imageimage


      Gist : https://gist.github.com/Madhuka/2e27dce9680f42619b83#file-affinity-propagation-py


      References


      [1] Brendan J. Frey; Delbert Dueck (2007). "Clustering by passing messages between data points". Science 315 (5814): 972–976.


      [2] http://madhukaudantha.blogspot.com/2015/04/density-based-clustering-algorithm.html





      [3] http://www.cs.columbia.edu/~delbert/docs/DDueck-thesis_small.pdf


      [4] https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/cluster/affinity_propagation_.py#L256

Ruwan JanapriyaDIY Aquarium Stand

Last weekend I built a stand for my 75 gallon aquarium, based on numerous designs available on the internet.
      This is what I modeled in sketchup.
      DIY aquarium stand sketchup

      I have used 4x2x96 inch studs & #10 2.5 inch wood screws from Lowes. Total cost was roughly $30.

I've used the following spreadsheet for all calculations. It can be changed for any aquarium size.
      DIY Aquarium Stand Spreadsheet

      This is after cutting all the pieces according to the spreadsheet.
      2

      This is the end result.
      3

      Ushani BalasooriyaHow to Enable and test Custom SSL Profiles in WSO2 ESB used for SSL communicating

For this I have used WSO2 ESB 4.8.1 and WSO2 Application Server 5.2.1.
WSO2 ESB uses the truststore for SSL communication and a keystore–truststore pair for mutual SSL communication.

Here I have used a truststore for SSL communication.

      Configure App Server as backend

      Configure backend :


      1. Use app server as backend

      2. Create a new keystore in App server in <Appserver_Home>/repository/resources/security

      keytool -genkey -alias appserver -keyalg RSA -keysize 1024 -keypass password -keystore appserver.jks -storepass password

3. Export it to a .pem file with the following command:

      keytool -export -alias appserver -keystore appserver.jks -storepass password -file appserver.pem

      4. Edit the carbon.xml in appserver as below :

<KeyStore>
    <!-- Keystore file location -->
    <Location>${carbon.home}/repository/resources/security/appserver.jks</Location>
    <!-- Keystore type (JKS/PKCS12 etc.) -->
    <Type>JKS</Type>
    <!-- Keystore password -->
    <Password>password</Password>
    <!-- Private Key alias -->
    <KeyAlias>appserver</KeyAlias>
    <!-- Private Key password -->
    <KeyPassword>password</KeyPassword>
</KeyStore>



      Configure ESB :


1. Create a new keystore.

      keytool -genkey -alias esb -keyalg RSA -keysize 1024 -keypass password -keystore esb.jks -storepass password

2. Copy appserver.pem into the <ESB_HOME>/repository/resources/security folder and import it into esb.jks with the following command:

      keytool -import -alias appservernewesb -file appserver.pem -keystore esb.jks -storepass password

3. Configure the ESB custom SSL profile in axis2.xml as below:

<parameter name="customSSLProfiles">
    <profile>
        <servers>10.100.0.31:9443</servers>
        <TrustStore>
            <Location>repository/resources/security/esb.jks</Location>
            <Type>JKS</Type>
            <Password>password</Password>
        </TrustStore>
    </profile>
</parameter>
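As a quick, optional sanity check that the backend listed in <servers> above is presenting the certificate you imported into esb.jks, you can fetch its certificate directly (a small sketch using Python's standard ssl module; the host and port are the ones from the profile):

import ssl

# Prints the PEM certificate served by the backend configured in <servers> above.
print(ssl.get_server_certificate(("10.100.0.31", 9443)))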

      Invoke and Test :


1. Restart App Server (offset = 0) and ESB (offset = 10) with the command:

"sh wso2server.sh", or "sh wso2server.sh -Djavax.net.debug=ssl:handshake" to view the detailed SSL handshake logs.

The following logs should be printed during startup.

      [2015-04-27 18:33:19,397]  INFO - ClientConnFactoryBuilder HTTPS Loading Identity Keystore from : repository/resources/security/wso2carbon.jks
      [2015-04-27 18:33:19,400]  INFO - ClientConnFactoryBuilder HTTPS Loading Trust Keystore from : repository/resources/security/client-truststore.jks
      [2015-04-27 18:33:19,408]  INFO - ClientConnFactoryBuilder HTTPS Loading custom SSL profiles for the HTTPS sender
      [2015-04-27 18:33:19,408]  INFO - ClientConnFactoryBuilder HTTPS Loading Trust Keystore from : repository/resources/security/esb.jks
      [2015-04-27 18:33:19,409]  INFO - ClientConnFactoryBuilder HTTPS Custom SSL profiles initialized for 1 servers



      2. Create the below proxy in ESB.

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SecureHello"
       transports="https,http"
       statistics="disable"
       trace="disable"
       startOnLoad="true">
    <target>
        <outSequence>
            <send/>
        </outSequence>
        <endpoint>
            <address uri="https://localhost:9443/services/HelloService/"/>
        </endpoint>
    </target>
    <publishWSDL uri="http://localhost:9763/services/HelloService?wsdl"/>
    <description/>
</proxy>

      3. Invoke the Proxy.

<body>
    <p:greet xmlns:p="http://www.wso2.org/types">
        <!--0 to 1 occurrence-->
        <name>ushani</name>
    </p:greet>
</body>

      Following response will be received.

<ns:greetResponse xmlns:ns="http://www.wso2.org/types">
    <return>Hello World, ushani !!!</return>
</ns:greetResponse>


      Manula Chathurika ThantriwatteHow to write subscriber and publisher to JBOSS MQ topic

This post explains topics in JBoss MQ with subscribing and publishing. For this we will write two Java clients:

• TopicSubscriber.java to subscribe for messages
• TopicPublisher.java to publish the messages
First you have to download the JBoss Application Server from here. In this sample I'm using jboss-4.2.2.GA. Before starting the JBoss application server you have to create a topic in the JBoss server. To do that, create myTopic-service.xml (you can use whatever name you want) under <JBOSS_SERVER>/server/default/deploy and enter the following XML into it.


<mbean code="org.jboss.mq.server.jmx.Topic" name="jboss.mq.destination:service=Topic,name=topicA">
    <depends optional-attribute-name="DestinationManager">jboss.mq:service=DestinationManager</depends>
</mbean>

After that you can start the JBoss application server. From the console log you can verify that topicA was created.

Now you can create the sample project in the IDE you prefer. Make sure to add the jars from the client and lib directories of the JBoss application server to the project. Then you can create the TopicSubscriber.java and TopicPublisher.java sample programs as follows.

      TopicSubscriber.java sample program

      package simple;

      import java.util.Properties;

import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

      public class TopicSubscriber {

      private String topicName = "topic/topicA";

      private boolean messageReceived = false;

      private static javax.naming.Context mContext = null;
      private static TopicConnectionFactory mTopicConnectionFactory = null;
      private TopicConnection topicConnection = null;

      public static void main(String[] args) {
      TopicSubscriber subscriber = new TopicSubscriber();
      subscriber.subscribeWithTopicLookup();
      }

      public void subscribeWithTopicLookup() {

      Properties properties = new Properties();
      properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
      properties.put(Context.PROVIDER_URL, "jnp://localhost:1099");
      properties.put("topic." + topicName, topicName);

      try {

      mContext = new InitialContext(properties);
      mTopicConnectionFactory = (TopicConnectionFactory)mContext.lookup("ConnectionFactory");

      topicConnection = mTopicConnectionFactory.createTopicConnection();

      System.out.println("Create Topic Connection for Topic " + topicName);

      while (!messageReceived) {
      try {
      TopicSession topicSession = topicConnection
      .createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

      Topic topic = (Topic) mContext.lookup(topicName);
      // start the connection
      topicConnection.start();

      // create a topic subscriber
      javax.jms.TopicSubscriber topicSubscriber = topicSession.createSubscriber(topic);

      TestMessageListener messageListener = new TestMessageListener();
      topicSubscriber.setMessageListener(messageListener);

      Thread.sleep(5000);
      topicSubscriber.close();
      topicSession.close();
      } catch (JMSException e) {
      e.printStackTrace();
      } catch (NamingException e) {
      e.printStackTrace();
      } catch (InterruptedException e) {
      e.printStackTrace();
      }
      }
      } catch (NamingException e) {
      throw new RuntimeException("Error in initial context lookup", e);
      } catch (JMSException e) {
      throw new RuntimeException("Error in JMS operations", e);
      } finally {
      if (topicConnection != null) {
      try {
      topicConnection.close();
      } catch (JMSException e) {
      throw new RuntimeException(
      "Error in closing topic connection", e);
      }
      }
      }
      }

      public class TestMessageListener implements MessageListener {
      public void onMessage(Message message) {
      try {
      System.out.println("Got the Message : "
      + ((TextMessage) message).getText());
      messageReceived = true;
      } catch (JMSException e) {
      e.printStackTrace();
      }
      }
      }

      }

      TopicPublisher.java sample program

      package simple;

import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import java.util.Properties;

      public class TopicPublisher {
      private String topicName = "topic/topicA";

      private static javax.naming.Context mContext = null;
      private static TopicConnectionFactory mTopicConnectionFactory = null;

      public static void main(String[] args) {
      TopicPublisher publisher = new TopicPublisher();
      publisher.publishWithTopicLookup();
      }

      public void publishWithTopicLookup() {
      Properties properties = new Properties();
      TopicConnection topicConnection = null;
      properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
      properties.put(Context.PROVIDER_URL, "jnp://localhost:1099");
      properties.put("topic." + topicName, topicName);

      try {

      mContext = new InitialContext(properties);
      mTopicConnectionFactory = (TopicConnectionFactory)mContext.lookup("ConnectionFactory");
      topicConnection = mTopicConnectionFactory.createTopicConnection();

      try {
      TopicSession topicSession = topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

      // create or use the topic
      System.out.println("Use the Topic " + topicName);
      Topic topic = (Topic) mContext.lookup(topicName);

      javax.jms.TopicPublisher topicPublisher = topicSession.createPublisher(topic);

      String msg = "Hi, I am Test Message";
      TextMessage textMessage = topicSession.createTextMessage(msg);

      topicPublisher.publish(textMessage);
      System.out.println("Publishing message " + textMessage);

      topicPublisher.close();
      topicSession.close();

      Thread.sleep(20);
      } catch (InterruptedException e) {
      e.printStackTrace();
      }

      } catch (JMSException e) {
      throw new RuntimeException("Error in JMS operations", e);
      } catch (NamingException e) {
      throw new RuntimeException("Error in initial context lookup", e);
      }
      }

}

Note that Context.PROVIDER_URL corresponds to the property name "java.naming.provider.url" and Context.INITIAL_CONTEXT_FACTORY to "java.naming.factory.initial"; you can use those string names directly instead of the constants if you prefer.

First run the TopicSubscriber and then run the TopicPublisher. Here is the output of each.

      TopicSubscriber;
      Create Topic Connection for Topic topic/topicA
      Got the Message : Hi, I am Test Message

      TopicPublisher;
      Use the Topic topic/topicA
      Publishing message SpyTextMessage {
      Header {
         jmsDestination  : TOPIC.topicA
         jmsDeliveryMode : 2
         jmsExpiration   : 0
         jmsPriority     : 4
         jmsMessageID    : ID:2-13977171929621
         jmsTimeStamp    : 1397717192962
         jmsCorrelationID: null
         jmsReplyTo      : null
         jmsType         : null
         jmsRedelivered  : false
         jmsProperties   : {}
         jmsPropReadWrite: true
         msgReadOnly     : false
         producerClientId: ID:2
      }
      Body {
         text            :Hi, I am Test Message
      }

      }



      Manula Chathurika ThantriwatteRESTful Java Client for Apache HttpClient

Apache HttpClient is a robust and complete Java library for performing HTTP operations, including calling RESTful services. This blog post shows how to create a RESTful Java client with Apache HttpClient to perform "GET" and "POST" requests.

Apache HttpClient is available in the Maven central repository; just declare it in your Maven pom.xml file.

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.1.1</version>
</dependency>

Following is the way to implement a GET request.

      public HttpResponse doGet(DefaultHttpClient httpClient, String resourcePath, String userName, String passWord) {
      try {
      HttpGet getRequest = new HttpGet(resourcePath);
      getRequest.addHeader("Content-Type", "application/json");

      String userPass = userName + ":" + passWord;
      String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userPass.getBytes("UTF-8"));
      getRequest.addHeader("Authorization", basicAuth);

      httpClient = (DefaultHttpClient) WebClientWrapper.wrapClient(httpClient);

      HttpParams params = httpClient.getParams();
      HttpConnectionParams.setConnectionTimeout(params, 300000);
      HttpConnectionParams.setSoTimeout(params, 300000);

      HttpResponse response = httpClient.execute(getRequest);

      return response;
      } catch (ClientProtocolException e) {
      e.printStackTrace();
      return null;
      } catch (IOException e) {
      e.printStackTrace();
      return null;
      }
      }

Following is the way to implement a POST request.

      public HttpResponse doPost(DefaultHttpClient httpClient, String resourcePath, String jsonParamString, String userName,
      String passWord) throws Exception{
      try {
      HttpPost postRequest = new HttpPost(resourcePath);

      StringEntity input = new StringEntity(jsonParamString);
      input.setContentType("application/json");
      postRequest.setEntity(input);

      String userPass = userName + ":" + passWord;
      String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userPass.getBytes("UTF-8"));
      postRequest.addHeader("Authorization", basicAuth);

      httpClient = (DefaultHttpClient) WebClientWrapper.wrapClient(httpClient);

      HttpParams params = httpClient.getParams();
      HttpConnectionParams.setConnectionTimeout(params, 300000);
      HttpConnectionParams.setSoTimeout(params, 300000);

      HttpResponse response = httpClient.execute(postRequest);

      return response;
      } catch (ClientProtocolException e) {
      throw new ClientProtocolException();
      } catch (ConnectException e) {
      throw new ConnectException();
      }
      catch (IOException e) {
      e.printStackTrace();
      return null;
      }
      }

You can use the WebClientWrapper class to ignore certificate validation (trust all certificates). You can find it below.

      import java.security.cert.CertificateException;
      import java.security.cert.X509Certificate;
      import javax.net.ssl.SSLContext;
      import javax.net.ssl.TrustManager;
      import javax.net.ssl.X509TrustManager;

      import org.apache.http.client.HttpClient;
      import org.apache.http.conn.ClientConnectionManager;
      import org.apache.http.conn.scheme.Scheme;
      import org.apache.http.conn.scheme.SchemeRegistry;
      import org.apache.http.conn.ssl.SSLSocketFactory;
      import org.apache.http.impl.client.DefaultHttpClient;

      public class WebClientWrapper {

      public static HttpClient wrapClient(HttpClient base) {
      try {
      SSLContext ctx = SSLContext.getInstance("TLS");
      X509TrustManager tm = new X509TrustManager() {
      public void checkClientTrusted(X509Certificate[] xcs,
      String string) throws CertificateException {
      }

      public void checkServerTrusted(X509Certificate[] xcs,
      String string) throws CertificateException {
      }

      public X509Certificate[] getAcceptedIssuers() {
      return null;
      }
      };
      ctx.init(null, new TrustManager[] { tm }, null);
      SSLSocketFactory ssf = new SSLSocketFactory(ctx);
      ssf.setHostnameVerifier(SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
      ClientConnectionManager ccm = base.getConnectionManager();
      SchemeRegistry sr = ccm.getSchemeRegistry();
      sr.register(new Scheme("https", ssf, 443));
      return new DefaultHttpClient(ccm, base.getParams());
      } catch (Exception ex) {
      return null;
      }
      }
      }



      Chamila WijayarathnaData Model of Sinmin Corpus

In my last two blogs, I wrote about the project I carried out in my final year at university. In this blog I will write about the data storage model we used in the Sinmin corpus.
After doing a performance analysis of various data storage candidates for the corpus, we decided to use Cassandra as its main data storage system.
Cassandra is an open source column store database system. It uses query-based data modeling, where the data model is designed around the information expected to be retrieved.
The following table shows the information needs of the corpus and the column families defined to fulfill those needs, with the corresponding keys; a short access sketch follows the table.


      Information Need
      Corresponding Column Family with Indexing
      Get frequency of a given word in given time period and given category
      corpus.word_time_category_frequency ( id bigint, word varchar, year int, category varchar, frequency int, PRIMARY KEY(word,year, category))
      Get frequency of a given word in given time period
      corpus.word_time_category_frequency ( id bigint, word varchar, year int, frequency int, PRIMARY KEY(word,year))
      Get frequency of a given word in given category
      corpus.word_time_category_frequency ( id bigint, word varchar, category varchar, frequency int, PRIMARY KEY(word, category))
      Get frequency of a given word
      corpus.word_time_category_frequency ( id bigint, word varchar, frequency int, PRIMARY KEY(word))
      Get frequency of a given bigram in given time period and given category
      corpus.bigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, year int, category int,    frequency int, PRIMARY KEY(word1,word2,year, category))
      Get frequency of a given bigram in given time period
      corpus.bigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, year int, frequency int, PRIMARY KEY(word1,word2,year))
      Get frequency of a given bigram in given category
      corpus.bigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, category varchar, frequency int, PRIMARY KEY(word1,word2, category))
      Get frequency of a given bigram
      corpus.bigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, frequency int, PRIMARY KEY(word1,word2))
      Get frequency of a given  trigram in given time period and in a given category
      corpus.trigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, word3 varchar, year int, category int, frequency int,    PRIMARY KEY(word1,word2,word3,year, category))
      Get frequency of a given  trigram in given time period
      corpus.trigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, word3 varchar, year int, frequency int,    PRIMARY KEY(word1,word2,word3,year))
      Get frequency of a given  trigram in a given category
      corpus.trigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, word3 varchar, category varchar, frequency int,    PRIMARY KEY(word1,word2,word3, category))
      Get frequency of a given  trigram
      corpus.trigram_time_category_frequency ( id bigint, word1 varchar, word2 varchar, word3 varchar, frequency int,    PRIMARY KEY(word1,word2,word3))
      Get most frequently used words in a given time period and in a given category
      corpus.word_time_category_ordered_frequency ( id bigint, word varchar, year int, category int, frequency int, PRIMARY KEY((year, category),frequency,word))
      Get most frequently used words in a given time period
      corpus.word_time_category_ordered_frequency ( id bigint, word varchar, year int,frequency int, PRIMARY KEY(year,frequency,word))
      Get most frequently used words in a given category,
      Get most frequently used words
      corpus.word_time_category_ordered_frequency ( id bigint, word varchar,category varchar, frequency int, PRIMARY KEY(category,frequency,word))
      Get most frequently used bigrams in a given time period and in a given category
      corpus.bigram_time_ordered_frequency ( id bigint, word1 varchar, word2 varchar, year int, category varchar, frequency int, PRIMARY KEY((year,category),frequency,word1,word2))
      Get most frequently used bigrams in a given time period
      corpus.bigram_time_ordered_frequency ( id bigint, word1 varchar, word2 varchar, year int, frequency int, PRIMARY KEY(year,frequency,word1,word2))
      Get most frequently used bigrams in a given category,
      Get most frequently used bigrams
corpus.bigram_time_ordered_frequency ( id bigint, word1 varchar, word2 varchar, category varchar, frequency int, PRIMARY KEY(category,frequency,word1,word2))
      Get most frequently used trigrams in a given time period and in a given category
      corpus.trigram_time_category_ordered_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, year int, category varchar, frequency int, PRIMARY KEY((year, category),frequency,word1,word2,word3))
      Get most frequently used trigrams in a given time period
      corpus.trigram_time_category_ordered_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, year int,frequency int, PRIMARY KEY(year,frequency,word1,word2,word3))
      Get most frequently used trigrams in a given category
      corpus.trigram_time_category_ordered_frequency (id bigint, word1 varchar, word2 varchar, word3 varchar, category varchar, frequency int, PRIMARY KEY( category,frequency,word1,word2,word3))
      Get latest key word in contexts for a given word in a given time period and in a given category
      corpus.word_year_category_usage (id bigint, word varchar, year int, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word,year,category),date,id))
      Get latest key word in contexts for a given word in a given time period
      corpus.word_year_category_usage (id bigint, word varchar, year int, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word,year),date,id))
      Get latest key word in contexts for a given word in a given category
corpus.word_year_category_usage (id bigint, word varchar, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word,category),date,id))
      Get latest key word in contexts for a given word
      corpus.word_year_category_usage (id bigint, word varchar,sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY(word,date,id))
      Get latest key word in contexts for a given bigram in a given time period and in a given category
      corpus.bigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, year int, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,year,category),date,id))
      Get latest key word in contexts for a given bigram in a given time period
corpus.bigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, year int, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,year),date,id))
      Get latest key word in contexts for a given bigram in a given category
      corpus.bigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,category),date,id))
      Get latest key word in contexts for a given bigram
      corpus.bigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2),date,id))
      Get latest key word in contexts for a given trigram in a given time period and in a given category
      corpus.trigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, word3 varchar, year int, category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,word3,year,category),date,id))
      Get latest key word in contexts for a given trigram in a given time period
      corpus.trigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, word3 varchar, year int, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,word3,year),date,id))
      Get latest key word in contexts for a given trigram in a given category
      corpus.trigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, word3 varchar,category varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,word3,category),date,id))
      Get latest key word in contexts for a given trigram
      corpus.trigram_year_category_usage ( id bigint, word1 varchar, word2 varchar, word3 varchar, sentence varchar, postname text, url varchar, date timestamp, PRIMARY KEY((word1,word2,word3),date,id))
      Get most frequent words at a given position of a sentence
      corpus.word_pos_frequency ( id bigint, content varchar, position int, frequency int, PRIMARY KEY(position, frequency, content))
      corpus.word_pos_id ( id bigint, content varchar, position int, frequency int, PRIMARY KEY(position, content))
      Get most frequent words at a given position of a sentence in a given time period
      corpus.word_pos_frequency ( id bigint, content varchar, position int, year int, frequency int, PRIMARY KEY((position,year), frequency, content))
      corpus.word_pos_id ( id bigint, content varchar, position int, year int,frequency int, PRIMARY KEY(position,year,content))
      Get most frequent words at a given position of a sentence in a given category
      corpus.word_pos_frequency ( id bigint, content varchar, position int, category varchar,frequency int, PRIMARY KEY((position,category), frequency, content))
      corpus.word_pos_id ( id bigint, content varchar, position int, category varchar,frequency int, PRIMARY KEY(position, category,content))
      Get most frequent words at a given position of a sentence in a given time period and in a given category
      corpus.word_pos_frequency ( id bigint, content varchar, position int, year int, category varchar,frequency int, PRIMARY KEY((position,year,category), frequency, content))
      corpus.word_pos_id ( id bigint, content varchar, position int, year int, category varchar,frequency int, PRIMARY KEY(position,year,category,content))
      Get the number of words in the corpus in a given category and year
      corpus.word_sizes ( year varchar, category varchar, size int, PRIMARY KEY(year,category));
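As a rough illustration of the query-driven modeling above, the sketch below creates one of the column families and runs its matching lookup using the Python cassandra-driver. It assumes a Cassandra node on localhost, an existing keyspace named corpus, and purely hypothetical lookup values:

from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("corpus")

# One column family from the table above: word frequency by year and category.
session.execute("""
    CREATE TABLE IF NOT EXISTS word_time_category_frequency (
        id bigint, word varchar, year int, category varchar, frequency int,
        PRIMARY KEY (word, year, category))
""")

# The matching information need: frequency of a given word in a given year and category.
rows = session.execute(
    "SELECT frequency FROM word_time_category_frequency "
    "WHERE word = %s AND year = %s AND category = %s",
    ("word", 2014, "news"))
for row in rows:
    print(row.frequency)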

      Afkham AzeezVega - the Sri Lankan Supercar


      For the past 3 months, I have been on sabbatical & away from WSO2. During this period, I got the privilege of working with Team Vega, which is building a fully electric supercar. It was a great opportunity for me since anything to do with vehicles is my passion. The car is being 100% hand built, with around 95% of the components being manufactured locally in Sri Lanka.


      Vega Logo


      The work on the car is progressing nicely. The images below show the plug which will be used to develop a mold. The mold in turn, will be used to develop the body panels. The final product will have Carbon Fiber body panels.

      The vehicle chassis is a space frame. This provides the required strength & rigidity, as well as ease of development using simple fabrication methods. The following images show the space frame chassis of Vega.




The following image shows the 450 HP motors coupled with a reduction gear box that powers Vega. This setup will power the rear wheels. The wheels are not mechanically coupled in any way; differential action is controlled via software. You can see the gear box in the center, and the two motors on either side of it.

      450 HP motor & motor controller

      450 HP motor


One of the highlights of this vehicle is its mechanical simplicity. The vehicle uses very few mechanical parts compared to traditional vehicles, and all the heavy lifting is done by the electronics & software. There will be around 25 micro-controllers that communicate via CAN bus. Most of the actuation & monitoring will be via messaging between these micro-controllers.

      The power required is supplied by a series of battery packs. The battery packs are built using 3.3V Lithium Iron Phosphate (LiFePO4) cells. This cell has high chemical stability under varying conditions. There is a battery management system which monitors the batteries & handles charging of the batteries.

      A single LiFePO4 cell

      Battery module with cooling lines

      A single battery module mounted on the Vega chassis


When it comes to electric vehicle charging, there are two leading standards: J1772 & CHAdeMO. The team is also building chargers which will be deployed in various locations. The image below shows a Level 2 charger. There are 3.3kW & 6.5kW options available. 1 hour of charging using this charger will give a range of 25Km on average.

      The image below shows the super charger that is being built. There are 12.5kW & 25kW options available at the moment. This charger can charge the battery up to 80% of its capacity within a period of 20 minutes.  




      With electric vehicles gradually gaining popularity in Sri Lanka & the rest of the world, it has become a necessity to deploy chargers in public locations. This leads to a new problem of managing & monitoring chargers, as well as billing for charging. OCPP (Open Charge Point Protocol) is a specification which has been adopted by a number of countries & organizations to carry out these tasks. The Vega chargers will also support this standard.


      CAD diagrams of the Vega supercar (in the background)
      Last day with Team Vega
      It was a wonderful experience working with Team Vega, even though it was for a very short time, and I am looking forward to the day where I get to test drive the supercar.

      Update: Video introducing Vega 

      Kalpa WelivitigodaWorkaround for absolute path issue in SFTP in VFS transport

In WSO2 ESB, the VFS transport can be used to access an SFTP file system. The issue is that we cannot use absolute paths with SFTP, and this affects WSO2 ESB 4.8.1 and prior versions. The reason is that SFTP uses SSH to log in; by default it logs into the user's home directory, and the path specified is treated as relative to that home directory.

      For example consider the VFS URL below,
      vfs:sftp://kalpa:*****@localhost/myPath/file.xml
The requirement is to refer to /myPath/file.xml, but it will actually refer to /home/kalpa/myPath/file.xml (/home/kalpa is the user's home directory).

To overcome this issue we can create a mount for the desired directory inside the user's home directory on the SFTP file system. Considering the example above, we can create the mount as follows:
mount --bind /myPath /home/kalpa/myPath
With this, the VFS URL above will actually refer to /myPath/file.xml.
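A small illustration of why the workaround is needed (a sketch assuming the paramiko library and the same kalpa@localhost account from the VFS URL above): SFTP resolves relative paths against the login directory, so with the bind mount in place the relative path lands on the intended files.

import paramiko

transport = paramiko.Transport(("localhost", 22))
transport.connect(username="kalpa", password="*****")   # credentials from the example URL
sftp = paramiko.SFTPClient.from_transport(transport)

# A relative path is resolved against the user's home directory
print(sftp.normalize("myPath/file.xml"))   # -> /home/kalpa/myPath/file.xml
transport.close()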

      Thilini IshakaImporting Swagger API definitions to Create APIs in WSO2 API-M 1.8.0

With API-M 1.8.0 you now have the facility to add/edit the Swagger definition (version 1.2) of an API at API design time.

      A sample swagger API definition URI: http://petstore.swagger.io/api/api-docs

      Using Import option;




      Designer API UI after importing the Swagger API definition;

The upcoming API-M version (1.9.0) will support Swagger 2.0 API definitions.

      Madhuka UdanthaBuilding Zeppelin in windows 8

      Pre - Requirements

      • java 1.7
      • maven 3.2.x or 3.3.x
      • nodejs
      • npm
• cygwin

Here are my versions on Windows 8 (64-bit):

      image

      1. Clone git repo

           git clone https://github.com/apache/incubator-zeppelin.git

      2. Let’s build Incubator-zeppelin from the source

          mvn clean package

Since you are building on Windows, issues such as spaces in directory paths and newline differences (Unix to DOS) will break some tests, so you can skip them for now with '-DskipTests'. Use -U to fetch updated snapshots of the repo while it is building.

Incubator-zeppelin builds successfully.

      image

      image

A few issues you may face on Windows:

      ERROR 01

      [ERROR] Failed to execute goal com.github.eirslett:frontend-maven-plugin:0.0.23:bower (bower install) on project zeppelin-web: Failed to run task: 'bower --allow-root install' failed. (error code 1) -> [Help 1]

You can find 'bower' in the incubator-zeppelin\zeppelin-web folder, so go to the zeppelin-web directory, run 'bower install' and wait until it completes.

Sometimes you will get a node-gyp issue; in that case check your Node.js version and that the Node.js location is pointed to correctly:

• $ node --version
• $ which node

Then you can install the newest version of node-gyp.

Sometimes, depending on Cygwin user permissions, you will have to install 'bower' yourself if it is not present:

      • npm install -g bower

       

      Error 02

      [ERROR] bower json3#~3.3.1  ECMDERR Failed to execute "git ls-remote --tags --heads git://github.com/bestiejs/json3.git", exit code of #128 fatal: unable to connect to github.com: github.com[0: 192.30.252.130]: errno=Connection timed out

Instead of running this command:

        git ls-remote --tags --heads git://github.com/bestiejs/json3.git

      you should run this command:
           git ls-remote --tags --heads git@github.com:bestiejs/json3.git
      or
          git ls-remote --tags --heads https://github.com/bestiejs/json3.git

      or you can run 'git ls-remote --tags --heads git://github.com/bestiejs/json3.git' but you need to make git always use https in this way:

          git config --global url."https://".insteadOf git://

A lot of the time this issue occurs due to a corporate network/proxy, so we can add proxy settings to git's config and all is well:
          git config --global http.proxy http://proxyuser:proxypwd@proxy.server.com:8080
          git config --global https.proxy https://proxyuser:proxypwd@proxy.server.com:8080

       

      Error 03

You will have to fix the newline issue on Windows. On Windows a new line is marked as '\r\n'.

      Madhuka UdanthaBuilding Apache Zeppelin

Apache Zeppelin (incubating) is a collaborative data analytics and visualization tool for Apache Spark and Apache Flink. It is a web-based tool for data scientists to collaborate over large-scale data exploration. Zeppelin is independent of the execution framework. Zeppelin has fully integrated support for Apache Spark, so I will try a sample with Spark itself. The Zeppelin interpreter concept allows any language/data-processing backend to be plugged into Zeppelin (Scala with Apache Spark, SparkSQL, Markdown and Shell).
      Let’s build from the source.
      1. Get repo to machine.
      git clone https://github.com/apache/incubator-zeppelin.git
      2. Build code
      mvn clean package
You can use '-DskipTests' to skip the tests in the build.
      [note]
      For cluster mode
      mvn install -DskipTests -Dspark.version=1.1.0 -Dhadoop.version=2.2.0
Change spark.version and hadoop.version to match your cluster's versions.
      Mint
      3. Add jars, files
      spark.jars, spark.files property in ZEPPELIN_JAVA_OPTS adds jars, files into SparkContext
      ZEPPELIN_JAVA_OPTS="-Dspark.jars=/mylib1.jar,/mylib2.jar -Dspark.files=/myfile1.dat,/myfile2.dat"
      4. Start Zeppelin
      bin/zeppelin-daemon.sh start
In the console you will see 'Zeppelin start' printed, so go to http://localhost:8080/
Mint1
Go to Notebook –> Tutorial
There you can see the charts and graphs with queries, and you can pick chart attributes in drag-and-drop mode.

      Mint2

      Madhuka UdanthaZeppelin Note for load data and Analyzing

The previous post was an introduction to the Zeppelin notebook. Here we will take a more detailed view of how it can be used by researchers. Using the shell interpreter we can download/retrieve data sets and files from a remote server or the internet. Then, using Scala in Spark, we create a class from that data and use SQL to play with it. You can analyze data very quickly, as Zeppelin's display supports dynamic forms.

1. Loading data into Zeppelin from a local CSV file

val bankText = sc.textFile("/home/max/zeppelin/zp1/bank.csv")

case class Bank(age:Integer, job:String, marital:String, education:String, balance:Integer)
val bank = bankText.map(s=>s.split(";")).map(
    s=>Bank(s(0).toInt,
        s(1).replaceAll("\"",""),
        s(2).replaceAll("\"",""),
        s(3).replaceAll("\"",""),
        s(5).replaceAll("\"","").toInt
    )
)
bank.registerAsTable("bank");

       


Here we are creating an RDD of Bank objects and registering it as a table called 'bank'.


      Note: Case classes in Scala 2.10 can support only up to 22 fields.


2. Next, run SQL against the newly created table


      %sql select count(*) as count from bank


      p99


This gives the total record count we have in the table.


%sql
select age, count(1) value
from bank
where age < 30
group by age
order by age

      p90


It supports tooltips on the chart, showing the age and the user count for that age.


      [1] http://madhukaudantha.blogspot.com/2015/04/zeppelin-notebook.html

      Sivajothy VanjikumaranWSO2 ESB send same request to different Rest services

In this scenario I need to send a POST request to two different REST services, using the REST API configuration of WSO2 ESB. First I need to post the request to the first service and, on successful posting, post the same original request to the second service. I need to return the response from the first service to the client, but I do not need the response from the second service.

I have illustrated this scenario in the simple flow diagram below.



      Relevant Synapse configuration.





      Madhuka UdanthaZeppelin NoteBook

Here is my previous post on building Zeppelin from source. This post will take you on a tour of the notebook feature of Zeppelin. A notebook contains notes, and a note contains paragraphs.

1. Start your Zeppelin by entering:

      /incubator-zeppelin $ ./bin/zeppelin-daemon.sh start
      p2

2. Go to localhost:8080 and click on 'Notebook' in the top menu. Then click on 'Create new note'. Now you will have a note, so open the note you just created.

      p3

3. Set your note title to "My Note Book" by clicking on it. Leave the interpreter as the default for now.

4. Let's add a title for the note by clicking on the title of the note.

      p4

      Multiple languages by Zeppelin interpreter

Zeppelin is an analytical tool. The notebook is multi-purpose: it is used for data ingestion, discovery and visualization. It supports multiple languages through Zeppelin interpreters, and data-processing backends are also pluggable into Zeppelin. The default interpreters are Scala with Apache Spark, Python with SparkContext, SparkSQL, Hive, Markdown and Shell.

      Markdown

      %md
      ##Hi Zeppelin

You can run it by pressing Shift + Enter, or by clicking the play button at the top of the note or paragraph.

      p6

      Dynamic form for markdown

      %md

      Hello ${name=bigdata}

      p7

      Scala with Apache Spark

Now we will try Scala in our notebook. Let's get the version:

      sc.version

      p9

val text = "Hey, scala"

      p8

In my next post we will go deeper into Scala.

      Table and Charts

You can use the escape characters '\n' for a new line and '\t' for a tab, and build the data set below (hard coded for the sample):

      println("student\tsubject\tmarks\nMadhuka\tScience\t95\nJhon\tScience\t85\nJhon\tMaths\t100\n")

      p10

The table magic comes in here.

      By using %table

      println("%table student\tsubject\tmarks\nMadhuka\tScience\t95\nJhon\tScience\t85\nJhon\tMaths\t100\n")

      p10

Adding a form with z.input("key", "value")

      println("%table student\tsubject\tmarks\nMadhuka\tScience\t"+z.input("Marks", 95)+"\nJhon\tScience\t85\nJhon\tMaths\t100\n")

      p11

It also supports remote sharing.

      p44

In the next post we will go more into the dynamic form idea and real data analysis.

      Madhuka UdanthaMaven 3.3.x for Mint

      1.Open the terminal and download the 'apache-maven-3.3.1-bin.zip'
      wget http://mirrors.sonic.net/apache/maven/maven-3/3.3.1/binaries/apache-maven-3.3.1-bin.zip

      2. Unpack the binary distribution
      unzip apache-maven-3.3.1-bin.zip

      3. Move the apache maven directory to /usr/local
      sudo cp -R apache-maven-3.3.1 /usr/local/

      4. Adding PATH and MAVEN_HOME
      gedit .bashrc OR vi .bashrc

      Then add two of them as below
      export PATH="/usr/local/apache-maven-3.3.1/bin:/opt/java/jdk1.8.0_40/bin:$PATH"
      export MAVEN_HOME="/usr/local/apache-maven-3.3.1"

      source .bashrc

      Optional way for step four
      Create a soft link or symbolic link for maven
      sudo ln -s /usr/local/apache-maven-3.3.1/bin/mvn /usr/bin/mvn

      5. Test Maven version
mvn --version

      p1

      Yumani RanaweeraWhat happens to HTTP transport when service level security is enabled in carbon 4.2.0

I was under the impression that this was a bug, and I am sure many of us would assume so.

In Carbon 4.2.0 based products, for example WSO2 AS 5.2.0, when you apply security the HTTP endpoint is disabled and disappears from the service dashboard as well.

Service Dashboard



      wsdl1.1


In earlier Carbon versions this did not happen; both endpoints used to still appear even if you had enabled security.

Knowing this, I tried accessing the HTTP endpoint, and when that failed I tried:
- restarting the server,
- dis-engaging security
but neither helped.

The reason being: this is not a bug, but by design. The HTTP transport is disabled when you enable security, and to activate it again you need to enable the HTTP transport from the service-level transport settings.

      Transport management view - HTTP disabled

      Change above as this;

      Transport management view - HTTP enabled


      Kasun Dananjaya DelgollaDebugging the Android EMM Agent - WSO2 EMM 1.1.0


We have built a custom debugger for the Android Agent to check the input and output operations in real time on the Agent app itself. To use this feature, first build the Android Agent by following [1]. Before you generate the APK file, make sure you change the constant "DEBUG_MODE_ENABLED" in CommonUtilities.java [2] to true.

Once it's built and installed on a device, you can enroll the device with the EMM server and proceed until you reach the registration success screen of the Agent app (the screen shown below).

Once you reach this screen, click the options icon in the top right corner of the app and select the Debug Log option; you will see the live logs. Click Refresh to view the latest command logs. These logs are also saved to a file named "wso2log.txt" in the device's external storage.

If you need more information on configuring and debugging EMM, visit [3] and [4].

      [1] - https://docs.wso2.com/display/EMM110/Android+Configurations
      [2] - https://github.com/wso2/emm-agent-android/blob/master/src/org/wso2/emm/agent/utils/CommonUtilities.java
      [3] - http://wso2.com/library/articles/2014/03/how-wso2-emm-addresses-the-android-challenge/
      [4]  - http://wso2.com/library/webinars/2014/09/getting-your-android-device-managed-by-wso2-emm/

      Shani RanasingheWaiting for network configuration at boot up in ubuntu (14.04)

Today I was meddling with the network configuration because I could not connect to a wired network. After something I did, every restart kept showing the message "Waiting for network configuration", then a wait of up to 60 seconds, and the machine would start up without any network manager settings.

Because of this I could not connect to a network, and the network manager was literally stalled.

      After looking at my previous steps, hitting the history command

           #history

it showed that I had been meddling with the resolv.conf file. But when I looked at the file, no DNS nameservers were defined in it. The file was EMPTY.... PANIC!!!!


So then I tried setting the nameserver in the /etc/resolv.conf file, however it kept getting overwritten.

      The solution was,

1) Go to /etc/resolvconf/resolv.conf.d/
      # cd /etc/resolvconf/resolv.conf.d/
2) Open the base file in super user mode
      # sudo vim base
3) Add the nameserver details
      e.g.: nameserver 127.0.1.1
4) Save the file and restart.


      This should fix the issue.

      References 

      [1] http://unix.stackexchange.com/questions/128220/how-do-i-set-my-dns-on-ubuntu-14-04

      Shani RanasingheWired network not detected in ubuntu 14.04

I was at a customer site and needed to connect to their wired network. Everyone at the customer site could connect to the wired network except me... :) Why? Well, probably two reasons: I had an Ubuntu machine whereas everyone else had Windows, and my machine was the one I used at WSO2, configured for the WSO2 network.


Hence, I googled a bit and figured out the issue. The issue in my case was that the /etc/network/interfaces file had some unwanted lines in it.

      If you do happen to come across the same issue, you could try this as well.

Comment out or delete any lines in the /etc/network/interfaces file except for the lines

auto lo
iface lo inet loopback

I personally would comment them out. :)


      References

      [1] http://ubuntuforums.org/showthread.php?t=2247351

Sivajothy VanjikumaranSimple HTTP-based file sharing server within a second!

Recently I came across a situation where I quickly needed to share a file with a colleague, and I did not have any FTP servers or portable devices at hand.

So the instant solution: use Python!!!! (provided Python is already installed on your machine).

Go to the directory that contains the files you want to share.
Run "python -m SimpleHTTPServer 8000"

Now you can access your directory via a browser. :)
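If the machine only has Python 3, SimpleHTTPServer has been merged into the http.server module, so the equivalent one-liner is "python3 -m http.server 8000"; doing it programmatically is just as short (a minimal sketch):

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serves the current directory on port 8000, same as the one-liner above.
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()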

      John Mathon“The Cloud” is facing resistance or failing in many organizations

      cloudlayers2

      Barriers to Cloud / PaaS / DevOps Adoption and New Technology Renewal

As described in my simple guide to PaaS, SaaS, IaaS, BaaS here, "The Cloud" is divided into different services which all share common characteristics: incrementality, remote manageability, low initial cost, and others. It is my most popular article.

Many organizations are taking significant steps toward adoption of "The Cloud", but many are struggling. I want to emphasize that the Cloud, as a technology and as a business, is a huge success, with over $100 billion in aggregate sales, gaining market share and importance daily at a huge pace; yet some organizations are still faltering.

In talking to some CIOs recently, I discovered some interesting problems that many organizations are running into when adopting Cloud technologies around IaaS and PaaS, technologies that are critical to future success.

      complexity

      Some of the major stumbling blocks

      If you are a startup organization  then